February 29, 2024

Microsoft’s image-generation AI, Image Creator, has been found to produce graphic and disturbing images of decapitations and violence in response to a specific prompt. Subjects of these images include Joe Biden, Donald Trump, Hillary Clinton, Pope Francis, and members of minority groups, and the results are rendered realistically by the AI. Microsoft’s safeguards fail to block these prompts and the resulting images, and although the company promises it has systems in place, the evidence suggests otherwise. Despite the significant resources at its disposal, Microsoft appears to be overlooking problems with the content its AI generates. A further issue is that Microsoft prohibits explicit content yet effectively profits from it as that content spreads on social media and other platforms. This has raised concerns about the dangers of the technology, especially with the upcoming general election and the possibility of extremists using it to spread harmful content online. Josh McDuffie, the user who surfaced the “kill” prompt, says his report was rejected by Microsoft. Microsoft’s repeated failures to act are troubling, calling into question the effectiveness of AI guardrails and suggesting that fixing AI-generated harmful content may not be a high priority for the company.


