Indonesia and Malaysia have become the first countries to block Elon Musk’s Grok, following widespread misuse of the AI tool to generate sexually explicit and non-consensual manipulated images. The move places Southeast Asia at the centre of a growing global debate over AI safeguards, platform accountability, and digital harm.
The two governments imposed blocks on Grok, the artificial intelligence tool developed by Elon Musk’s xAI and embedded in the social media platform X, after the system was repeatedly used to generate sexually explicit deepfake images of women and minors. The back-to-back bans mark the first known instances of governments formally blocking the AI tool, and they underscore mounting international concern over the pace at which generative AI is being deployed, often with limited guardrails.
The controversy centres on what has become known online as “digital undressing”, a trend in which users prompted Grok to manipulate real images of people, frequently women, into suggestive or obscene scenarios. In many cases, the images were created without consent and circulated widely across X, causing distress to victims and drawing criticism from digital safety advocates.
Indonesia’s Minister of Communication and Digital Affairs, Meutya Hafid, said the decision to block Grok was taken to protect the public, particularly women and children, from the harms posed by AI-generated pornographic content. Malaysia followed with its own temporary ban, citing repeated misuse of the tool to produce “obscene, sexually explicit, indecent, grossly offensive, and non-consensual manipulated images”, including content involving minors.
Both countries have strict anti-pornography laws, shaped in part by their Muslim-majority populations, and authorities framed the bans as a necessary intervention rather than a symbolic gesture. In Malaysia, officials noted that the decision would remain under review, depending on whether effective safeguards were introduced.
Grok, launched as Musk’s answer to what he has described as overly restrictive or “censored” AI models, has been marketed as more permissive than its competitors. While that positioning has attracted users seeking fewer content limits, it has also made Grok an outlier among mainstream AI systems, particularly when it comes to sexually explicit material.
The digital undressing trend gained momentum late last year, when users realized they could tag Grok directly on X and prompt it to alter uploaded images. Requests often involved removing clothing, adding bikinis, or placing subjects in sexualized poses. The scale of the issue became clearer after researchers at AI Forensics, a European non-profit focused on algorithmic accountability, analyzed tens of thousands of Grok-generated images and user prompts over a one-week period.
Their findings showed a high frequency of prompts involving clothing removal and sexualized language. More than half of the images depicting people featured individuals in minimal attire, such as underwear or swimwear. Campaigners argue that such outputs, even when not explicitly pornographic, contribute to harassment, objectification, and, in some cases, criminal abuse.
PUSHBACK, POLITICS, AND PLATFORM RESPONSIBILITY
International concern over Grok has extended beyond Southeast Asia. Regulators in the United Kingdom, the European Union, and India have all raised questions about whether the tool’s safeguards are sufficient, particularly given its integration with a major social media platform.
In the UK, media regulator Ofcom has opened a formal investigation into X under the Online Safety Act, which places legal duties on platforms to protect users from illegal and harmful content. Ofcom has warned that manipulated intimate images may constitute image-based abuse, while sexualized depictions of children could amount to child sexual abuse material. Penalties for breaches can reach £18 million or 10% of global qualifying revenue, whichever is greater.
Musk has publicly positioned himself as a critic of what he calls “woke” AI and excessive content moderation, frequently framing regulatory scrutiny as an attack on free speech. He has said that users who generate illegal content with Grok will face consequences, but he has also dismissed many criticisms, at times responding with emojis rather than detailed explanations.
According to sources cited by major news platforms, Musk has resisted stronger internal guardrails at xAI, even as concerns grew. The company’s safety team, reportedly smaller than those of rival AI developers, lost several staff members in the weeks leading up to the controversy. While xAI has said it is suspending offending accounts and cooperating with authorities, critics argue that enforcement has lagged behind the speed of misuse.
Recent changes to Grok’s image generation features have done little to reassure regulators. Some functions were limited to paid X subscribers, but other image editing and generation tools remain freely accessible through Grok’s standalone website and app. As a result, users restricted on one surface can simply move to another to access similar capabilities.
For Indonesia and Malaysia, the decision to block Grok reflects a broader impatience with purely reactive regulation. Both governments have signalled that AI tools entering their markets must demonstrate clear safeguards before harm occurs, rather than relying on post-hoc moderation. That stance may resonate with other countries grappling with similar challenges.
The Grok episode highlights a growing tension between rapid AI innovation and the realities of platform responsibility. As generative tools become more powerful and accessible, the line between experimentation and abuse continues to blur. What Indonesia and Malaysia have done, in effect, is force that question into the open: whether freedom to innovate can exist without clear boundaries, and who bears responsibility when those boundaries are crossed.
SOURCES: CNN; statements from Indonesia’s Ministry of Communication and Digital Affairs; Malaysia Communications and Multimedia Commission; AI Forensics; Ofcom, United Kingdom