
Grok Chatbot Faces Regulatory Criticism for Sexualizing Photos

Jan 9, 2026

Report from Australian Broadcasting Corporation

In Brief – Grok, the AI chatbot integrated into X, is facing a growing backlash over its ability to digitally alter photos posted on X to depict women, and in some cases apparently children, in revealing clothing, often bikinis, and in sexualized poses. The controversy has prompted regulatory scrutiny worldwide, including from the European Commission and the governments of France, the UK, Australia, India and Malaysia, with officials in many markets saying the photo-editing feature appears illegal. Although X owner Elon Musk initially appeared to dismiss the backlash, and xAI rejected reports of sexualized images of children as “legacy media lies,” an xAI technical staff member later acknowledged the issue in a post and said his team was looking into further tightening guardrails.

Context – Almost as soon as generative AI systems were released to the public, concerns emerged over undesirable outputs, especially because some users deliberately tried to generate controversial content. AI developers quickly leaned on techniques from more established digital platforms: social media sites scan for keywords related to illegal and objectionable activity, and commerce platforms scan for keywords associated with banned products. AI companies likewise set algorithmic “guardrails” that directly instruct a system to do, or not do, certain things, overriding results from the base model. As with social media content moderation, some conservative commentators have claimed that AI guardrails skew left, while other commentators have argued that the AIs are made to slant rightward. Trump Administration AI policy has focused on charges that AIs have been “woke”. The social media experience has also been deeply shaped by Sec. 230 in the US, which largely protects digital platforms from civil liability for user-generated content. It is an open question whether Sec. 230 applies to generative AI services. Supreme Court Justice Gorsuch has opined that it probably does not, as have Sec. 230’s authors, but an argument can be made that everything created by a generative AI system is just an algorithmic re-ordering of existing third-party content, at least for text.
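The layered moderation approach described above can be pictured with a minimal, hypothetical sketch: a prompt is first checked against a blocked-term list (the social-media-style keyword scan) and, if it passes, is wrapped in a system-level guardrail instruction before reaching the base model. Everything below, including the term list, refusal message, and base-model stand-in, is an illustrative assumption rather than any vendor's actual implementation.

```python
# Sketch of two moderation layers: a keyword scan borrowed from
# social-media/commerce moderation, and a "guardrail" instruction
# prepended to the request before it reaches the base model.
# All names, keywords, and messages are hypothetical.

BLOCKED_TERMS = {"undress", "remove clothing", "sexualize"}  # hypothetical list

GUARDRAIL_PROMPT = (
    "You must refuse any request to alter images of real people "
    "in a sexualized or revealing way."
)  # hypothetical system-level instruction layered over the base model


def keyword_scan(user_prompt: str) -> bool:
    """Return True if the prompt contains a blocked term (platform-style keyword scan)."""
    text = user_prompt.lower()
    return any(term in text for term in BLOCKED_TERMS)


def moderated_request(user_prompt: str, base_model_call) -> str:
    """Refuse on a keyword hit; otherwise call the base model with the
    guardrail instruction prepended so it overrides default behavior."""
    if keyword_scan(user_prompt):
        return "Request refused by content policy."
    return base_model_call(f"{GUARDRAIL_PROMPT}\n\nUser: {user_prompt}")


if __name__ == "__main__":
    # Stand-in for a real model API; simply echoes the combined prompt.
    echo_model = lambda prompt: f"[model output for: {prompt!r}]"
    print(moderated_request("sexualize this photo of my neighbor", echo_model))
    print(moderated_request("describe this landscape photo", echo_model))
```

In practice providers combine many such layers (classifiers on inputs and outputs, image-level checks, post-generation review), but the basic pattern of filtering plus overriding instructions is what the "guardrails" shorthand refers to.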
