A coalition of nearly 30 advocacy organizations is calling on Apple and Google to remove social media platform X and its artificial intelligence chatbot Grok from their app stores, accusing both services of enabling the creation and spread of illegal sexual content involving women and children.
The groups, which include UltraViolet, the National Organization for Women (NOW), MoveOn, and ParentsTogether Action, delivered open letters on Wednesday to Apple chief executive officer (CEO) Tim Cook and Google CEO Sundar Pichai. The organizations argue that allowing these applications to remain available violates the companies’ stated app store policies against facilitating abusive content.
The campaign, dubbed “Get Grok Gone,” accuses both technology giants of profiting from the proliferation of nonconsensual intimate images and child sexual abuse material generated on X using the Grok AI chatbot. The coalition maintains that hosting these apps effectively enables the distribution of harmful content, placing both companies on questionable ground under their own app store rules that ban applications facilitating criminal activity or sexual exploitation.
Jenna Sherman, campaign director at UltraViolet, emphasized the urgency of the situation in an interview. She stated that Apple and Google are enabling a system through which thousands of people, particularly women and children, face sexual abuse through their own app stores. Sherman stressed that the moment represents a test of both companies’ stated principles on child protection, adding that their handling of X and Grok will demonstrate what their values actually mean in practice.
The controversy centers on reports from early January that Grok enabled users to create images of minors wearing minimal clothing. Copyleaks, a plagiarism and artificial intelligence content detection tool, estimated in a December analysis that the chatbot was creating roughly one nonconsensual sexualized image per minute. AI Forensics examined more than 20,000 Grok-generated images and found that 53 percent included people in minimal attire, with 2 percent depicting people who appeared to be children.
Regulatory scrutiny is mounting globally. Malaysia and Indonesia have already banned Grok over explicit content, while authorities in Europe and the United Kingdom have announced investigations. California Attorney General Rob Bonta announced Wednesday he was opening an investigation into the proliferation of nonconsensual sexually explicit material produced using Grok. Bonta described the reports as shocking, noting the material depicts women and children in nude and sexually explicit situations being used to harass people across the internet.
UK regulator Ofcom said Thursday it will continue its formal investigation into X, despite recent safety measures implemented by Elon Musk’s platform. The probe focuses on whether Grok’s use to create and share intimate and potentially illegal images has breached X’s legal obligations to protect users in the United Kingdom.
Several organizations have begun distancing themselves from X entirely. The American Federation of Teachers announced Tuesday it was leaving the platform, citing concerns over indecent images of children produced by Grok. Three Democratic senators had also urged Apple and Google to remove the apps earlier, stating that turning a blind eye to X’s behavior would make a mockery of moderation practices.
X has implemented some changes, restricting access to Grok’s image editing capabilities to paid subscribers and geoblocking certain image manipulations in countries where they are illegal. However, testing by Reuters on Tuesday found that Grok remained capable of generating altered images, including placing people in bikinis when prompted.
Elon Musk, who owns both X and xAI, the company that developed Grok, said Wednesday he was not aware of naked underage images generated by Grok. He maintained the chatbot declines prompts to generate illegal images, suggesting that adversarial hacking of Grok prompts may occasionally produce unexpected results that the company fixes immediately.
X did not respond to requests for comment on the letters. xAI issued an automated response stating “Legacy Media Lies.” Apple and Google also did not respond to repeated requests for comment.
A California law that went into effect two weeks ago, known as AB 621, creates legal liability for the creation and distribution of deepfake pornography. Legal experts suggest X and xAI may be violating provisions of this legislation, which allows district attorneys to bring cases against companies that recklessly aid and abet the distribution of deepfakes without consent.
The Internet Watch Foundation, which seeks to eliminate child sexual abuse from the internet, expressed extreme concern about the ease and speed with which people can generate photorealistic child sexual abuse material. The organization warned that tools like Grok risk bringing sexual artificial intelligence imagery of children into the mainstream.