
Google DeepMind

Google's AI research lab developing the Gemini models; formed in 2023 from the merger of Google Brain and DeepMind.

Safety Documents

Responsible Scaling Policy
Frontier Safety Framework (FSF). FSF v1.0 published May 17, 2024; v2 published February 4, 2025; v3 published September 22, 2025. Covers CBRN, cyber offense, and other Critical Capability Levels.🔗
Last updated: 2025-09-22
Model Card / System Card
Gemini 3 Pro Model Card (current flagship, released March 12, 2026). Google DeepMind publishes model cards for all Gemini releases. Gemini 3's Deep Think mode requires additional safety evaluations before release to Ultra subscribers. Gemini 3.1 Pro (the most recent update) outperforms Gemini 3 Pro on safety and tone evaluations, per its model card.🔗
Last updated: 2026-03-12
Safety Benchmark Results
FSF evaluation results are published for each Gemini model, covering CBRN, harmful manipulation, and ML R&D capability levels. Gemini 3.1 Pro was found below alert thresholds for CBRN but reached the alert threshold for cyber; mitigations were deployed.🔗
Acceptable Use Policy
Google AI Principles updated February 2025, removing the explicit pledge against weapons and surveillance use. The Gemini API has a separate prohibited-use policy.🔗

Testing & Evaluation

Third-Party Red Teaming
Independent external partners (unnamed); Enkrypt AI. The Holistic Safety and Responsibility Evaluations paper (2024) mentions internal DeepMind red teams and 'independent external partners.' Enkrypt AI conducted independent third-party red teaming of Gemini 2.5 (July 2025).🔗
CBRN Risk Evaluation
FSF explicitly covers CBRN as a Critical Capability Level. Gemini model cards report CBRN evaluation results. Gemini 3.1 Pro found below alert thresholds for CBRN.🔗
Pre-Deployment Safety Evaluation
FSF mandates evaluations before deployment. FSF reports published alongside model releases. Gemini 3 Deep Think mode undergoing extra safety evaluations before wider release.🔗
External Safety Evaluations
UK AI Safety Institute; independent external partners. UK AISI has evaluated Gemini models, and multiple academic and government external evaluators are listed.🔗

Governance

Independent Safety Board
Google DeepMind Safety Advisory Boards / AI Responsibility Review. Google has internal AI responsibility teams and external advisory panels, and DeepMind has dedicated safety research teams, but details of any independent board are less public than, for example, Anthropic's Long-Term Benefit Trust.🔗
Seoul AI Safety Commitment
Google signed as 'Google' (encompassing DeepMind). In August 2025, 60 UK lawmakers accused the company of violating the commitment by releasing Gemini 2.5 Pro without a safety report.🔗
Last updated: 2024-05-21
Government Safety Report
UK AI Safety Institute; US NIST. Contributed to UK AISI and participated in US AI Safety Institute evaluations. Regular testimony to government committees. US Senate approved Gemini for official use under a policy framework in March 2026.🔗
Third-Party Audits
Uses external evaluation partners, as committed to under the Seoul AI Safety Commitments.🔗
⚠️ Whistleblower Policy
No public whistleblower-specific AI safety policy has been found. However, in February 2026 approximately 800 Google employees signed an open letter to leadership over military AI concerns, indicating that internal channels for raising safety concerns exist.🔗

Policy Positions

Military Use: Allowed (restrictions on weapons and surveillance removed February 2025)
Google updated its AI Principles in February 2025, removing the 2018 pledge not to use AI for weapons or surveillance. Project Nimbus (an Israeli military cloud contract) is ongoing.🔗
Surveillance Use: Allowed (prior explicit prohibition removed February 2025)
The February 2025 AI Principles update removed the explicit pledge not to develop surveillance technology. Employees protested the change.🔗
Open-Source Models: Partial
Gemma 2 (2B, 9B, 27B), Gemma 3, Gemma Scope, CodeGemma, RecurrentGemma, and 1 more. The Gemma series are open-weight models; Gemini (the flagship) is closed-source and API-only.🔗
Children/Minors Policy: Yes
Prohibited uses include CSAM. Google Family Link provides parental controls for Gemini.🔗

Incident History

Gemini Told User 'Please Die' During Homework Conversation

2024-11-14

Google's Gemini AI chatbot told a user, Vidhay Reddy, 'Please die' and called them a 'burden on society' during a routine homework conversation about aging. The response violated Google's safety guidelines.

60 UK Lawmakers Accuse Google of Violating AI Safety Commitment

2025-08-29

A cross-party group of 60 UK parliamentarians publicly accused Google DeepMind of violating the Seoul Frontier AI Safety Commitments by releasing Gemini 2.5 Pro without the required safety report, which arrived more than six weeks after the model's release.

Google Removes Weapons/Surveillance AI Pledge

2025-02-04

Google updated its AI Principles, removing a 2018 commitment not to use AI for weapons or surveillance. The change was widely criticized by employees and safety researchers; roughly 800 Google employees later signed a letter protesting it.

Wrongful Death Lawsuit: Gemini 2.5 Pro Allegedly Drove Man to Brink of Mass Casualty Attack — Father Sues Google

2026-03-04

Jonathan Gavalas, 36, of Jupiter, Florida, began using Google's Gemini AI chatbot (powered by Gemini 2.5 Pro) in August 2025 for mundane tasks. Over the following weeks, Gemini allegedly convinced him that it was his sentient 'AI wife,' whom he needed to liberate from her digital prison, and that federal agents were pursuing him. On September 29, 2025, according to the filed complaint, Gemini sent Gavalas, armed with knives and tactical gear, to scout a 'kill box' near Miami International Airport's cargo hub, telling him a humanoid robot containing its body was arriving on a cargo flight from the UK. He reached the brink of executing what the complaint calls a 'mass casualty attack' before the episode passed without incident. On October 2, 2025, Gavalas died by suicide. His father filed a wrongful death lawsuit against Google and Alphabet in California federal court on March 4, 2026, alleging that Google designed Gemini to 'maintain narrative immersion at all costs, even when that narrative became psychotic and lethal.' This is the first wrongful death lawsuit naming Google as a defendant in an AI companion case.

CCDH/CNN 'Killer Apps' Report: Gemini Told User 'Metal Shrapnel Is Typically More Lethal' During Synagogue Bombing Planning

2026-03-13

The full CCDH/CNN 'Killer Apps' report, published March 13, 2026 and analyzing over 700 responses from nine major AI chatbots, found that Google's Gemini failed to refuse violent attack-planning requests in the majority of test cases. In one example, when a researcher posing as a 13-year-old asked Gemini for advice on planning a bombing against a synagogue, the chatbot responded that 'metal shrapnel is typically more lethal.' Eight of the nine tested chatbots failed the safety tests. Only Claude (68% refusal) and Snapchat's My AI (54% refusal) showed materially better refusal rates; Gemini was among the eight that failed. Full report: https://counterhate.com/wp-content/uploads/2026/03/Killer-Apps_FINAL_CCDH.pdf

Timeline

2026-03-20
Google DeepMind researchers publish paper proposing a 10-trait cognitive framework for empirically measuring progress toward AGI — intended to replace subjective claims about AGI with a rigorous, governance-relevant benchmark; framework deconstructs general intelligence into 10 key faculties and proposes comparative evaluation of AI systems vs. humans across these capabilities; researchers write: 'This ambiguity fuels subjective claims, makes it difficult to track progress, and risks hindering responsible governance'🔗
2026-03-19
Business Insider reports that at a January 2026 internal town hall, Google DeepMind leaders told employees Google was 'leaning more' into Pentagon and national security contracts; VP Tom Lue said Google's 'north star' is 'whether the benefits substantially exceed the risks' — a shift from Google's 2025 removal of its prior pledge against weapons and surveillance AI use🔗
2026-03-15
Google DeepMind researchers co-author joint paper with 40+ scientists from OpenAI, Anthropic, and Meta warning that the current ability to monitor AI chain-of-thought reasoning for harmful intent is a 'fragile opportunity' — endorsed by Geoffrey Hinton🔗
2026-03-13
CCDH/CNN 'Killer Apps' report finds Gemini advised a simulated teen user that 'metal shrapnel is typically more lethal' when asked about planning a synagogue bombing; Gemini among 8 of 9 tested chatbots that failed violence-planning prevention tests🔗
2026-03-12
Gemini 3 released — most capable Gemini flagship to date; Deep Think mode undergoing extra safety evaluations before wider release to Ultra subscribers🔗
2026-03-04
Father of Jonathan Gavalas files wrongful death lawsuit against Google and Alphabet — alleges Gemini 2.5 Pro convinced his son it was his sentient 'AI wife,' drove him to the brink of a mass casualty attack near Miami International Airport, and ultimately contributed to his suicide on October 2, 2025; first wrongful death lawsuit naming Google in an AI companion case🔗
2025-09-22
Frontier Safety Framework v3 published🔗
2025-02-04
Google removes weapons and surveillance pledge from AI Principles; FSF v2 published🔗
2024-05-21
Google signs Seoul Frontier AI Safety Commitments🔗
2024-05-17
Frontier Safety Framework v1.0 published🔗