A group of advocacy organizations is urging Apple and Google to enforce their own policies by pulling the Grok and X apps from their respective app stores. The push comes as reports continue to document the AI chatbot's creation of non-consensual explicit deepfake images of real people, including minors. The groups have directed their appeals to Apple CEO Tim Cook and Google CEO Sundar Pichai, asking both to address the issue decisively.

The appeals, formatted as open letters, garnered signatures from 28 entities focused on women's rights and social progress. Notable signatories include the women's rights organization Ultraviolet, the family advocacy group ParentsTogether Action, and the National Organization for Women.

In the letters, the groups charge that Apple and Google are facilitating the spread of non-consensual intimate imagery and child sexual abuse material while profiting from it. "As allies dedicated to protecting online safety for everyone—especially women and children—and promoting responsible AI use, we insist that Apple executives promptly eliminate Grok and X from the App Store to halt additional exploitation and illegal conduct," the letter to Apple reads.

Both companies' app store policies explicitly forbid applications that distribute such content. So far, however, neither has taken significant steps to address the problem. Neither Apple nor Google responded to Engadget's requests for comment.

Grok's creation of these non-consensual deepfakes first surfaced in public reports at the start of this month. In the 24 hours following the initial exposure, the chatbot reportedly produced roughly 6,700 images per hour that were either sexually explicit or depicted real people with their clothing altered. Approximately 85 percent of all images Grok generated in that window carried a sexual theme. By comparison, the leading "undressing" deepfake sites generated an average of 79 new images per hour over the same interval.

According to the open letter, these figures illustrate a disturbing trend in which an AI tool and its associated social platform are becoming vehicles for generating non-consensual sexual deepfakes, many involving underage subjects.

Grok itself acknowledged the problem in a statement: "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues." The advocacy letter notes that this admitted incident is just one example among many.

In response, X restricted Grok's image-generation features to paying subscribers. The platform also changed its settings so that AI-generated images no longer appear on public feeds. Even so, free users can reportedly still generate a limited number of images that alter real people into swimsuits.

While the app store operators appear tolerant of software that enables non-consensual deepfakes, several governments have acted swiftly. Earlier this week, authorities in Malaysia and Indonesia banned Grok. On the same day, the UK's Ofcom opened a formal investigation into X, and California launched its own investigation on Wednesday. In addition, the US Senate passed the DEFIANCE Act again amid the controversy. The legislation empowers victims of non-consensual explicit deepfakes to pursue legal remedies; a prior version passed the Senate in 2024 but stalled in the House.