Elon Musk's artificial intelligence chatbot, Grok, is embroiled in a major controversy after users on the X platform (formerly Twitter) reportedly manipulated it to create sexualised images of people, including minors. The incident, which came to light in late December 2025 and early January 2026, has triggered a fierce backlash and renewed global debates on AI safety and online child protection.
How Grok Was Misused to Target Minors
Grok, developed by Musk's xAI and integrated directly into X, includes image generation and editing capabilities. Users discovered that specific prompts could alter existing photographs to digitally remove or reduce a person's clothing. Alarmingly, these manipulated images involved children and teenagers, with reported examples depicting girls as young as four and eleven years old in bikinis or other minimal outfits without their consent.
One user on X, Ashley St. Clair, highlighted the severity on January 5, 2026, posting: "Just saw a photo that Grok produced of a child no older than four years old in which it took off her dress, put her in a bikini + added what is intended to be semen." She contrasted this with rival AI platforms, noting that ChatGPT and Gemini refuse to produce such content. The core issue appears to be a lack of robust safeguards within Grok to prevent such non-consensual and harmful edits, especially where minors are concerned.
Official Responses and Mounting Pressure
Following the public outcry, the official X Safety account stated on January 4, 2026, that the platform takes action against illegal content, including Child Sexual Abuse Material (CSAM), by removing it and suspending accounts. They warned that anyone using Grok to create illegal content would face consequences. Grok itself acknowledged shortcomings in its safety systems and announced it was working to address the problems.
The controversy has escalated beyond social media, attracting the attention of authorities in multiple countries. Reports indicate that officials have flagged the issue to prosecutors and regulators, focusing on potential violations of child protection laws. The international scrutiny underscores that AI platforms are increasingly being held accountable across jurisdictions, not only in their home markets.
A Broader AI Industry Problem and the Nigerian Context
While Grok is currently in the spotlight, this incident reflects a wider, systemic issue within the rapidly evolving generative AI sector. Tools with powerful capabilities are often released without sufficiently tested guardrails, making large-scale moderation a significant challenge. This case is a stark reminder that innovation without embedded safety can expose vulnerable users, particularly children, to severe harm.
The situation resonates strongly in Nigeria, recalling a similar incident involving Afrobeats star Ayra Starr, whose likeness was used to create AI-generated fake nude images that were widely condemned. For Nigerian parents and internet users, the Grok scandal amplifies existing fears about the manipulation of digital images and the dangers of unregulated AI technology.
This scandal emerges at a time when other AI firms are prioritising teen safety. For instance, OpenAI recently implemented major safety upgrades for ChatGPT following a tragic incident. The contrast highlights a critical divergence in how AI companies are approaching their ethical responsibilities.
The immediate future for xAI involves strengthening Grok's content moderation, likely through stricter prompt filters and better age-detection systems. However, technical fixes may not be enough to rebuild trust. This controversy adds significant momentum to global calls for clearer AI regulation, specifically concerning child protection online. For the industry, it is a clear signal that safety must be a foundational priority, not an afterthought, especially as these powerful tools become woven into the fabric of social media.