Factbox-Elon Musk’s Grok faces global scrutiny for sexualised AI photos


Jan 9 (Reuters) – Governments and regulators from Europe to Asia have condemned sexually explicit content generated by Elon Musk’s xAI chatbot Grok on X, and some have opened inquiries, putting pressure on the platform to show what it is doing to prevent and remove illegal content.

Grok said late on Thursday it was restricting image generation and editing to paying subscribers. On January 2 it had said it was fixing safeguard lapses after isolated cases in which it produced sexualised outputs, including depictions of minors in minimal clothing.

Musk said earlier on X that anyone using Grok to make illegal content would face the same consequences as if they had uploaded the illegal content themselves.

Here are some reactions from governments and regulators around the world.

EUROPE 

The European Commission on Thursday extended a retention order sent to X last year, requiring the platform to preserve all internal documents and data related to Grok until the end of 2026, amid concern over sexualised “undressed” images generated by Grok.

Britain’s communications regulator Ofcom said on Monday it had made “urgent contact” with X and xAI and would make a swift assessment of whether the service was meeting its legal duties to protect users under the UK’s Online Safety Act framework.

In France, government ministers said on January 2 they had referred sexually explicit Grok-generated content circulating on X to prosecutors and also alerted French media regulator Arcom to check the platform’s compliance with the European Union’s Digital Services Act. 

Germany’s media minister Wolfram Weimer called on the European Commission on Tuesday to take legal steps, saying EU rules provided tools to tackle illegal content and alleging the problem risked turning into the “industrialisation of sexual harassment”.

Italy’s data protection authority warned on Thursday that using AI tools to create “undressed” deepfake imagery of real people without consent could amount to serious privacy violations and, in some cases, criminal offences. 

Swedish political leaders on Thursday condemned Grok-generated sexualised “undressing” content after reports that imagery involving Sweden’s deputy prime minister had been produced from a user prompt.

ASIA

India’s IT Ministry on January 2 sent X a formal notice over the alleged Grok-enabled creation or sharing of obscene sexualised images, directing it to take the content down and to report within 72 hours on the actions being taken.

Malaysia’s communications regulator MCMC said on January 3 it would summon X and open an investigation into alleged misuse of Grok to generate obscene or sexualised “undressing” content, warning it may involve offences under Section 233 of Malaysia’s Communications and Multimedia Act 1998.

OCEANIA 

Australia’s online-safety regulator eSafety said on Wednesday it was investigating Grok-generated “digitally undressed” sexualised deepfake images, assessing adult material under its image-based abuse scheme and noting that the child-related examples it had reviewed so far did not meet the legal threshold for child sexual abuse material under Australian law.

(Reporting by Hugo Lhomedet in Gdansk, editing by Milla Nissi-Prussak)

Disclaimer: This report is auto generated from the Reuters news service. ThePrint holds no responsibility for its content.
