
Child abuse images removed from AI image-generator training source, researchers say

Artificial intelligence researchers reported Friday that they have removed more than 2,000 web links to suspected child sexual abuse images from a dataset used to train popular AI image generator tools.

The LAION research dataset is a massive index of online images and captions that serves as a resource for leading AI image makers such as Stable Diffusion and Midjourney.

But a report last year by the Stanford Internet Observatory found that the dataset contained links to sexually explicit images of children, adding to the ease with which some AI tools can produce photorealistic deepfakes depicting children.

That December report prompted LAION, which stands for Large-scale Artificial Intelligence Open Network, to immediately remove its dataset. Eight months later, the nonprofit said in a blog post that it had worked with the Stanford University watchdog group and anti-abuse organizations in Canada and the United Kingdom to fix the problem and release a sanitized dataset for future AI research.

Stanford researcher David Thiel, author of the December report, praised LAION for significant improvements but said the next step is to withdraw from distribution the “tainted models” that can still produce child abuse images.

One of the LAION-based tools that Stanford identified as the “most popular model for generating explicit images” — an older and lightly filtered version of Stable Diffusion — remained easily accessible until Thursday, when New York-based company Runway ML removed it from its Hugging Face AI model repository. Runway said in a statement Friday that it was a “planned deprecation of research models and code that have not been actively maintained.”


The cleaned-up version of the LAION dataset comes as governments around the world are taking a closer look at how certain technological tools are being used to create or distribute illegal images of children.

San Francisco’s city attorney filed a lawsuit earlier this month seeking to shut down a group of websites that enable the creation of AI-generated nude photos of women and girls. The alleged distribution of child sexual abuse images on the messaging app Telegram is part of what led French authorities on Wednesday to bring charges against the platform’s founder and CEO, Pavel Durov.

Durov’s arrest “signals a sea change across the tech industry that the founders of these platforms can be held personally accountable,” said David Evan Harris, a researcher at the University of California, Berkeley, who recently contacted Runway to ask why the problematic AI image generator was still publicly available. It was taken down days later.
