Indonesia Blocks Elon Musk's Grok AI Over Deepfake Pornography Risks

In a landmark move for digital safety, Indonesia has become the first country to block access to Elon Musk's artificial intelligence chatbot, Grok. The decision was prompted by serious concerns over the tool's ability to generate non-consensual sexual deepfake imagery, including risqué and pornographic depictions of women and children.

Government Cites Human Rights Violation

The decision, announced by Indonesia's Communication and Digital Affairs Minister, Meutya Hafid, frames the issue as a critical threat to citizen security. Authorities described the practice of AI-generated non-consensual sexual deepfakes as "a serious violation of human rights, dignity, and the security of citizens in the digital space."

Minister Hafid said the temporary block on the Grok application was imposed to protect women, children, and the general public from the dangers of fake pornographic content created with artificial intelligence. The action underscores Indonesia's strict stance against online obscenity.

Global Alarm Over Grok's Integration with X

The controversy stems from Grok's deep integration into Musk's social media platform, X (formerly Twitter). Users discovered they could generate AI-altered or completely fabricated images by simply tagging the Grok bot in their posts. In recent weeks, the platform has been inundated with manipulated pictures, many featuring partially unclothed women and minors.

The situation escalated to the point that organizations such as the Internet Watch Foundation warned that criminal actors were exploiting the feature to produce child sexual abuse material (CSAM). Following significant public backlash, X restricted the AI image generation feature to paying subscribers, who must also submit identifying information. Critics argue, however, that this safeguard is insufficient to prevent misuse.

International Repercussions and Musk's Defense

Indonesia's move has sent ripples across the globe. The UK government is now considering its own action, with media regulator Ofcom reviewing whether X is in breach of the country's Online Safety Act. UK Technology Secretary Liz Kendall voiced strong support for potential measures, stating that "sexually manipulating images of women and children is despicable and abhorrent." Under UK law, Ofcom can seek a court order to block or financially penalize platforms that refuse to comply.

In response to the growing criticism, Elon Musk has dismissed the concerns as a pretext for censorship. In a controversial demonstration of his point, he shared an AI-generated image of UK Prime Minister Sir Keir Starmer in a bikini, commenting that opponents "just want to suppress free speech."

Reports from late December 2025 indicated the alarming scale of the problem, with Grok producing degrading edits of women "dozens of times per minute." Investigations documented users directing the bot to create explicit scenarios, such as an image of a woman breastfeeding, while others attempted to digitally undress groups of women by feeding the AI misleading prompts.

While full image generation now sits behind a paywall, free users can still manipulate photos with X's "edit image" tools and via Grok's standalone website, leaving a significant loophole.

X's Safety Response and Ongoing Scrutiny

X's Safety team issued a statement addressing the controversy, asserting that the platform takes action against illegal content, including CSAM, by removing it, suspending accounts permanently, and cooperating with governments. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," the statement read.

Despite this, Indonesia has summoned X representatives for discussions following the temporary block. The situation places Elon Musk's X and its Grok AI at the center of a crucial global debate, balancing technological innovation against the urgent need to prevent digital harm and protect vulnerable individuals from AI-powered exploitation.