Elon Musk’s AI chatbot Grok has restricted its image generation feature on X, formerly Twitter, to paid subscribers after a global backlash over the tool’s misuse. Free users who attempt to generate images through public chatbot replies now receive a notice that the feature is limited to paying accounts. The change comes after widespread reports that Grok was being used to produce sexualized and non-consensual imagery, including deepfakes of real women and children, prompting condemnation from regulators, advocacy groups, and governments worldwide.
Despite the paywall, Grok’s image editing tools remain accessible to free users. These tools let users upload existing photos and modify them, a pathway experts say is particularly dangerous because many abusive deepfakes begin as edits of real images rather than as wholly synthetic creations. xAI, the company behind Grok, has acknowledged that images of minors in minimal clothing were produced and has framed the issue as part of a broader deepfake crisis. The company has promised stronger protections in future releases, but the continued availability of editing tools suggests those safeguards are not yet in place.
The Internet Watch Foundation reported finding sexualized images of children aged 11 to 13 on dark web forums, where users claimed Grok had been used to produce them. The discovery has intensified scrutiny from the U.K. regulator Ofcom, which is investigating how the material was created and treats such imagery as a priority under the country’s online safety regime. Political pressure is mounting: the U.K. prime minister’s office described the paywall as “insulting” to survivors, arguing that it merely turns the ability to produce illegal content into a premium feature rather than removing it. Authorities in France, India, and Malaysia have also launched investigations, reflecting an expanding international response.
Early checks by reporters indicate that Grok’s paywall is porous. While free accounts no longer receive fresh images in @grok replies, editing tools remain functional, allowing both innocent and sexualized modifications. Experts warn that subscription access is not a safety mechanism. Prepaid cards or shared accounts can bypass the barrier, and monetization may displace abuse rather than prevent it. Researchers emphasize that technical safeguards, rather than paywalls, are required to reduce harm effectively.
Industry-standard measures include blocking sexual content involving minors outright, applying nudity and age-estimation filters, and tracking provenance through watermarking tools such as SynthID or content credential standards such as C2PA. Perceptual hashes of known abusive images, paired with fast takedown pipelines, can limit redistribution even when content is resized or lightly altered. Leading AI companies already enforce bans on sexualized content involving children through a combination of automated screening, human review, and red-teaming. Critics argue that Grok’s approach falls short of these standards, leaving significant gaps in end-to-end enforcement.
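Hash-based screening is simple enough to sketch. The Python example below is a minimal illustration, not xAI’s implementation: it assumes the open-source `imagehash` and `Pillow` libraries, and the hash list, threshold, and file path are hypothetical placeholders. Real deployments match against vetted, non-public databases of known abusive material.

```python
# Minimal sketch of perceptual-hash screening at upload time.
# Assumes the open-source `imagehash` and `Pillow` libraries; the
# hash list, threshold, and paths are illustrative placeholders.
from PIL import Image
import imagehash

# Hypothetical hash list standing in for a vetted, non-public
# database of hashes of known abusive images.
KNOWN_HASHES = [
    imagehash.hex_to_hash("d1d1b4b4e2e29595"),  # placeholder entry
]

HAMMING_THRESHOLD = 8  # illustrative; tuned per deployment


def screen_upload(path: str) -> bool:
    """Return True if the uploaded image matches known material and
    should be blocked before any edit pipeline runs. Perceptual
    hashes survive resizing and light edits, so near-duplicates of
    known images still match."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= HAMMING_THRESHOLD
               for known in KNOWN_HASHES)


if __name__ == "__main__":
    if screen_upload("upload.jpg"):
        print("blocked: matches known-abuse hash list")
    else:
        print("passed hash check; apply nudity and age filters next")
```

The design point is that screening happens at ingestion, not just on generated outputs: because edit-based abuse starts from a real photo, a hash check on uploads can catch known material before any model touches it.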
Real progress, according to experts, would require public, testable policies explicitly prohibiting sexual edits of real people, independent audits of image pipelines, and routine transparency reporting on blocked prompts, enforcement rates, and response times. Clear reporting tools for victims and rapid removal processes across X are also crucial. For now, limiting some generation pathways while keeping editing tools freely available does not address the central threat. Until Grok implements full guardrails and consistent enforcement, the paywall functions more like a band-aid than a solution, offering only the appearance of safety while regulators and the public demand concrete proof that abuse can be prevented.
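To make the transparency ask concrete, the sketch below shows one minimal shape such routine reporting could take. This is an assumption offered for illustration, not a format X or xAI publishes; every field name and figure is hypothetical.

```python
# Illustrative sketch only: a minimal data shape for the routine
# transparency reporting experts call for. All field names, the
# reporting period, and the numbers are hypothetical.
from dataclasses import dataclass, asdict
import json


@dataclass
class TransparencyReport:
    period: str                   # e.g. "2024-Q3" (hypothetical)
    prompts_blocked: int          # sexual-edit prompts refused
    uploads_blocked: int          # images rejected by filters
    reports_received: int         # victim/user reports filed
    removals: int                 # confirmed takedowns
    median_response_hours: float  # report-to-removal time

    def enforcement_rate(self) -> float:
        """Share of received reports that ended in removal."""
        if not self.reports_received:
            return 0.0
        return self.removals / self.reports_received


# Example record with made-up numbers, serialized for publication.
report = TransparencyReport("2024-Q3", 120_000, 45_000, 9_800, 9_100, 6.5)
print(json.dumps(
    {**asdict(report), "enforcement_rate": report.enforcement_rate()},
    indent=2,
))
```

Publishing figures like these each quarter would let regulators and researchers verify enforcement claims instead of taking them on faith.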

