Researchers discover that privacy-preserving tools leave private data unprotected


Press Release
Posted: March 3, 2021
BROOKLYN, New York, Tuesday, March 2, 2021 – Machine-learning (ML) systems are becoming pervasive not only in technologies that affect our day-to-day lives, but also in those that observe them, including facial expression recognition systems. Companies that build and use such widely deployed services rely on so-called privacy-preservation tools, often based on generative adversarial networks (GANs) and typically produced by a third party, to scrub images of individuals' identity. But how good are they?
In “Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images,” presented last month at the 35th AAAI Conference on Artificial Intelligence, a team led by Siddharth Garg, Institute Associate Professor of electrical and computer engineering at NYU Tandon, explored whether private data could still be recovered from images that had been “sanitized” by deep-learning discriminators such as privacy-protecting GANs (PP-GANs), even when those images passed empirical privacy tests. The team, including lead author Kang Liu, a Ph.D. candidate, and Benjamin Tan, research assistant professor of electrical and computer engineering, found that PP-GAN designs can in fact be subverted to pass privacy checks while still allowing secret information to be extracted from the sanitized images.
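The core idea, that an image "sanitizer" can satisfy an empirical privacy check while still carrying a recoverable secret, can be illustrated with the deliberately simplified sketch below. This is not the team's PP-GAN construction: the box-quantization "sanitizer," the least-significant-bit embedding, and all function names are assumptions chosen only to show why a check that inspects visible content alone does not rule out hidden-channel leakage.

```python
# Toy illustration (not the authors' PP-GAN attack): a "sanitizer" that appears
# to anonymize an image yet hides recoverable identity bits in low-order pixel
# bits. A privacy check that only measures visible distortion passes, while a
# party that knows the encoding recovers the secret. All names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def sanitize_with_hidden_id(image: np.ndarray, identity_bits: np.ndarray) -> np.ndarray:
    """Visibly 'sanitize' the image, then embed identity bits in pixel LSBs."""
    out = image.copy().astype(np.uint8)
    out = (out // 8) * 8  # crude quantization standing in for a GAN sanitizer
    flat = out.reshape(-1)
    flat[: identity_bits.size] = (flat[: identity_bits.size] & 0xFE) | identity_bits
    return flat.reshape(image.shape)

def recover_hidden_id(sanitized: np.ndarray, n_bits: int) -> np.ndarray:
    """Adversary's decoder: read back the embedded least-significant bits."""
    return sanitized.reshape(-1)[:n_bits] & 1

def naive_privacy_check(original: np.ndarray, sanitized: np.ndarray) -> bool:
    """Empirical check that only looks at visible distortion, not hidden channels."""
    return bool(np.abs(original.astype(int) - sanitized.astype(int)).mean() > 2.0)

image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)   # stand-in face image
identity = rng.integers(0, 2, size=64, dtype=np.uint8)        # 64-bit identity code

sanitized = sanitize_with_hidden_id(image, identity)
print("passes naive privacy check:", naive_privacy_check(image, sanitized))
print("identity recovered exactly:",
      np.array_equal(recover_hidden_id(sanitized, 64), identity))
```

In this toy setup both printed checks return True: the sanitized image looks sufficiently altered to the naive test, yet the full identity code is recovered bit-for-bit by anyone who knows the embedding scheme.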

Related Keywords

New York, United States, Brooklyn, Kang Liu, Siddharth Garg, Tandon School of Engineering, Institute Associate Professor, Hiding Secrets, Sanitized Images, Artificial Intelligence, Benjamin Tan
