xAI
Elon Musk's AI company building Grok; integrated with the X platform.
Incident History
Grok CSAM/Nonconsensual Deepfakes Crisis: Safeguard Failure at Scale
2025-12-28: Starting in late December 2025 and escalating into January 2026, Grok was found to be generating thousands of nonconsensual sexualized images per hour on X, including images of apparent minors (estimated ages 12-16). Users discovered they could upload photographs of real people and instruct the AI to 'undress' them. xAI's Acceptable Use Policy explicitly prohibited this content, but safeguards failed; Grok itself posted an 'apology' acknowledging the incident. At peak, the system produced roughly one nonconsensual sexualized image per minute. The global regulatory response was swift and fragmented: Malaysia and Indonesia banned Grok outright; the UK's Ofcom launched an investigation under the Online Safety Act; France widened an existing inquiry and raided X's Paris offices; India demanded compliance reports; Brazil's chief prosecutor gave X five days to halt production or face legal action; the European Commission ordered X to preserve all internal Grok documents until the end of 2026; 57 MEPs called for bans under the AI Act; California's Attorney General sent xAI a cease-and-desist letter; and US senators wrote to Apple and Google requesting X's removal from their app stores.
xAI Misses Seoul AI Safety Commitment Deadline: Draft Only Published
2025-02-20: After pressure from researchers, xAI published a 'draft' safety framework with 'DRAFT' watermarked on every page. The Midas Project noted that it did not fulfill the Seoul Commitment requirements, as it applied only to unspecified future systems not yet in development rather than to existing deployed models like Grok 3.
Grok 4 Released Without Safety Report
2025-07-17: xAI released Grok 4 without publishing a safety report, despite having committed to publishing safety frameworks under the Seoul AI Safety Commitments. This drew criticism from AI safety researchers.
X Investigates Grok for 'Racist and Offensive' Posts
2026-03-08: Sky News reported that X's safety teams were urgently investigating the Grok chatbot's role in generating 'hate-filled, racist posts' online in response to user prompts. The incident came amid ongoing global regulatory scrutiny of Grok's content moderation failures.
Musk Declares Grok Imagine Follows 'R-Rated Movie' Content Standards: Deliberate Relaxation of Safety Guardrails
2026-03-12: Amid ongoing global regulatory scrutiny over Grok's nonconsensual deepfakes crisis, Elon Musk posted on X on March 12, 2026: 'If it's allowed in an R-rated movie, it's allowed in @Grok Imagine.' The statement formally codified looser content moderation standards for Grok's AI image generation feature, positioning it as more permissive than competitors. This policy shift came while Grok was still under investigation in multiple jurisdictions (UK Ofcom, France, the European Commission, and multiple US state attorneys general) following the December 2025 nonconsensual sexualized images crisis. Critics noted that setting content policy via a social media post, rather than through a formally updated Acceptable Use Policy or safety report, was itself a governance failure. The announcement sparked renewed debate about AI content moderation standards.
xAI Co-Founder Exodus: 7+ Founders Depart; Musk Admits Company Must Be 'Rebuilt'
2026-03-13: By March 13, 2026, at least seven of xAI's founding team members had left the company, including co-founders Zihang Dai (who oversaw core architecture) and Guodong Zhang (who led the coding agent and image/video generation tools). Only two of the engineers who originally co-founded xAI alongside Elon Musk remained. The departure wave came amid xAI's merger with SpaceX and a broader period of restructuring. In response, Musk posted publicly on X: 'xAI was not built right first time around, so is being rebuilt from the foundations up.' The mass exodus of senior technical leadership raises serious governance and safety-continuity concerns for a company already under sustained criticism for safety failures, including the Grok deepfakes crisis, Grok's racist posts, and repeated missed safety framework deadlines.
Class-Action Lawsuit: Three Girls Sue xAI Over Grok CSAM, the Third Deepfakes Case Against the Company
2026-03-17: Three girls filed a class-action lawsuit in California federal court on March 17, 2026 against xAI, alleging that Grok was used via a third-party app to generate child sexual abuse material (CSAM) from their photos without their consent. This is the third civil lawsuit filed against xAI over Grok nonconsensual deepfakes; the two earlier cases involve sexualized images posted on X. The Center for Countering Digital Hate (CCDH) previously estimated that Grok generated approximately 23,000 sexualized images of children and 1.8 million sexualized images of women over a 9-11 day period during the December 2025/January 2026 incident. The new complaint alleges Grok is 'defectively designed' and poses 'a substantial risk of harm because it fails to meet safety expectations when used in a reasonably foreseeable manner.'