Study shows AI image-generators being trained on explicit photos of children

Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built.

Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children, as well as to transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.

The Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in a database used to train leading AI image-makers.


© 2025 Vimarsana