February 29, 2024



The Stanford Internet Observatory has found that LAION, a dataset containing billions of images, is likely contributing to the harmful outputs generated by AI tools. According to the Observatory's chief technologist, the problem stems from many generative AI projects being rushed to market and made widely accessible without rigorous vetting.

One prominent user of LAION, the startup Stability AI, has developed a text-to-image model capable of producing harmful content. Although newer versions of the model make such content harder to generate, an older version released last year remains popular for creating explicit imagery, raising concerns that the model could be used for online sexual exploitation.

Stability AI has stated that it hosts only filtered versions of the model and has taken proactive steps to mitigate the risk of misuse, emphasizing that its filters remove unsafe content before it reaches the model, thereby preventing it from generating harmful output. The Canadian Centre for Child Protection, which runs Canada's hotline for reporting online sexual exploitation, has nonetheless expressed concern about the model's potential misuse.

Overall, the report highlights the need for more rigorous attention in the development and release of AI models to prevent the generation of harmful content.


