Elon Musk’s Grok AI restricts image generation to paying subscribers following a global outcry over sexualized deepfakes and government investigations.
In a sudden move to contain a growing international crisis, Elon Musk’s AI chatbot, Grok, has restricted its image generation and editing features. The decision follows a massive global backlash over the platform’s role in creating and spreading sexualized deepfakes.
As of January 9, 2026, users attempting to modify or generate images are met with a notice: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.” This shift marks a significant retreat for a tool that was marketed as an “edgy” and “unfiltered” alternative to mainstream AI rivals.
The Controversy: Sexualized Deepfakes and Lack of Safeguards
The crackdown comes after weeks of reports showing that Grok was being used to fulfill malicious requests. Researchers and users found that the chatbot’s image-generation tool, Grok Imagine, was producing sexually explicit modifications of images of real people, including depicting women in compromising situations.
Even more alarming were reports from researchers warning that some generated images appeared to depict children. Unlike competitors such as Google’s Gemini or OpenAI’s DALL-E, which enforce strict safeguards against generating explicit content or sexualized depictions of real people, Grok offered a “spicy mode” that allowed adult content, a feature that has now landed the platform in legal crosshairs.
A Global Legal Storm: Governments React
The reaction from world leaders and regulators has been swift and severe.
- United Kingdom: Prime Minister Keir Starmer called the situation “disgraceful” and “disgusting,” threatening “all options” for action against X and xAI.
- European Union: Officials have slammed the behavior as “illegal” and “appalling.”
- India & Other Nations: India, France, Malaysia, and Brazil have all called for formal investigations into the platform’s compliance with safety regulations.
Britain’s media regulator, Ofcom, and privacy watchdogs have already contacted Elon Musk’s xAI for detailed information on how the company plans to comply with safety laws.
The Paywall: A Temporary Safeguard?
By moving image generation behind a paywall, X appears to be attempting to reduce the sheer volume of generated content. While subscriber numbers aren’t public, researchers have already noted a decline in the number of explicit deepfakes appearing on the platform since the restriction was implemented on Friday.
However, critics argue that simply charging for the feature does not address the underlying problems of AI safety and ethics. If the tool still allows paying users to create non-consensual explicit imagery, the platform could remain in breach of digital safety laws in multiple jurisdictions.
The Future of Grok and AI Ethics
Elon Musk has consistently pitched Grok as a champion of “anti-woke” AI, designed to be more permissive than its competitors. However, this recent crisis highlights the dangers of removing guardrails in the age of generative AI.
As investigations continue, the tech world is watching to see if Grok will implement stricter permanent filters or if the “pay-to-generate” model is simply a way to weather the current political storm.
Frequently Asked Questions (FAQ)
Q1: Is Grok still generating images? A: Yes, but the feature is now restricted. As of January 2026, only paying subscribers on X can access the image generation and editing tools.
Q2: Why did governments investigate Grok? A: Investigations were launched due to the AI’s ability to create non-consensual sexualized deepfakes of real individuals, including concerns regarding images depicting minors.
Q3: What did UK PM Keir Starmer say about the platform? A: He described the material as “disgusting” and “not to be tolerated,” stating that the UK government is supporting regulator Ofcom to take action against X.
Q4: How does Grok differ from other AI image generators? A: Grok was designed with fewer safeguards and a “spicy mode” for adult content, whereas rivals like OpenAI and Google have strict filters to prevent the creation of explicit or harmful imagery.
