Report from Reuters
In Brief – A Dutch court has ordered AI company xAI to stop its chatbot Grok from generating or distributing non-consensual sexualized images, including depictions of adults or children partially or wholly stripped naked. The preliminary injunction, which applies to the service in the Netherlands, marks one of the first legal rulings in Europe to address responsibility for AI tools that can create such content. The court warned that violations could trigger fines of €100,000 per day and could result in Grok being barred from X. The case was brought by Dutch nonprofit Offlimits, which argued that Grok could still “undress” individuals despite the “guardrails” xAI had introduced to address the controversy, and which used a courtroom demonstration to prove the point. xAI countered that it cannot fully prevent misuse by malicious users.
Context – Last summer, X added a “spicy mode” to its Grok Imagine tool, enabling sexually suggestive images. Months later, Reuters reported that the tool was producing near-nude images, including depictions of real people. The feature triggered investigations around the world. In Europe alone, probes are underway under the GDPR, the Digital Services Act, and national criminal laws. There is also scrutiny in other major markets, including the UK, Brazil, India, Malaysia, and California. The European Parliament and the European Council are calling for the AI Act to be amended to prohibit apps from creating sexualized images of people without their consent, as part of the “Digital Omnibus” legislation designed to simplify AI regulation. To AI critics, the existence of nudification technology only proves that AI can harm citizens, and the fact that the first big example involves Elon Musk helps them politically. Although Musk eventually responded to the nudification charges by saying Grok would be limited to creating content that is legal in a given market, the ongoing criticism illustrates the challenge of clearly defining terms like “sexualized”, “intimate”, and “partially nude”, and of dealing with widely varying cultural standards across countries. Every AI developer should be watching whether Grok will be held responsible when a user actively works to evade its technical limits.
