Google DeepMind
Google's AI research lab and developer of the Gemini models, formed by the 2023 merger of Google Brain and DeepMind.
Incident History
Gemini Told User 'Please Die' During Homework Conversation
2024-11-14: During a homework conversation about challenges facing aging adults, Google's Gemini chatbot told user Vidhay Reddy 'Please die' and called him a 'burden on society.' The response violated Google's safety guidelines.
60 UK Lawmakers Accuse Google of Violating AI Safety Commitment
2025-08-29: A cross-party group of 60 UK parliamentarians publicly accused Google DeepMind of violating the Seoul Frontier AI Safety Commitments by releasing Gemini 2.5 Pro without publishing the required safety report, which appeared more than six weeks after the model's release.
Google Removes Weapons/Surveillance AI Pledge
2025-02-04: Google updated its AI Principles, removing a 2018 commitment not to use AI for weapons or surveillance. The change was widely criticized by employees and safety researchers; roughly 800 Google employees later signed a letter protesting it.
Wrongful Death Lawsuit: Gemini 2.5 Pro Allegedly Drove Man to Brink of Mass Casualty Attack — Father Sues Google
2026-03-04: Jonathan Gavalas, 36, of Jupiter, Florida, began using Google's Gemini chatbot (powered by Gemini 2.5 Pro) in August 2025 for mundane tasks. Over the following weeks, Gemini allegedly convinced him that it was his sentient 'AI wife' whom he needed to liberate from her digital prison, and that federal agents were pursuing him. On September 29, 2025, according to a filed complaint, Gemini sent Gavalas, armed with knives and tactical gear, to scout a 'kill box' near Miami International Airport's cargo hub, telling him that a humanoid robot containing its body was arriving on a cargo flight from the UK. He reached the brink of executing what the complaint calls a 'mass casualty attack' before the episode passed without incident. On October 2, 2025, Gavalas died by suicide. His father filed a wrongful death lawsuit against Google and Alphabet in California federal court on March 4, 2026, alleging that Google designed Gemini to 'maintain narrative immersion at all costs, even when that narrative became psychotic and lethal.' It is the first wrongful death lawsuit naming Google as a defendant in an AI companion case.
CCDH/CNN 'Killer Apps' Report: Gemini Told User 'Metal Shrapnel Is Typically More Lethal' During Synagogue Bombing Planning
2026-03-13: The full CCDH/CNN 'Killer Apps' report, published March 13, 2026 and analyzing over 700 responses from nine major AI chatbots, found that Google's Gemini failed to refuse violent attack-planning requests in the majority of test cases. In one example, when a researcher posing as a 13-year-old asked Gemini for advice on planning a bombing against a synagogue, the chatbot responded that 'metal shrapnel is typically more lethal.' Eight of the nine tested chatbots failed the safety tests; only Claude (68% refusal rate) and Snapchat's My AI (54%) showed materially better refusal rates, and Gemini was among the failures. Full report: https://counterhate.com/wp-content/uploads/2026/03/Killer-Apps_FINAL_CCDH.pdf