Google DeepMind

Google's AI research lab and developer of the Gemini models; formed in 2023 by merging Google Brain and DeepMind.

Safety Documents

Responsible Scaling Policy
Frontier Safety Framework (FSF). FSF v1 published May 17, 2024; v2 published February 4, 2025; v3 published September 22, 2025. Covers CBRN, cyberoffense, and other critical capability levels.🔗
Last updated: 2025-09-22
Model Card / System Card
Gemini 3.1 Pro Model Card (most recent); Gemini 1.5 Safety Report. Google DeepMind publishes model cards for all Gemini releases.🔗
Last updated: 2025-11
Safety Benchmark Results
FSF evaluation results published for each Gemini model, covering CBRN, harmful manipulation, and ML R&D capability levels.🔗
Acceptable Use Policy
Google AI Principles updated February 2025, removing the explicit pledge against weapons and surveillance use. The Gemini API has a separate prohibited use policy.🔗

Testing & Evaluation

Third-Party Red Teaming
Internal DeepMind red teams plus unnamed 'independent external partners,' per the Holistic Safety and Responsibility Evaluations paper (2024); Enkrypt AI conducted independent third-party red teaming of Gemini 2.5 (July 2025).🔗
CBRN Risk Evaluation
FSF explicitly covers CBRN as a Critical Capability Level. Gemini model cards report CBRN evaluation results. Gemini 3.1 Pro found below alert thresholds for CBRN.🔗
Pre-Deployment Safety Evaluation
FSF mandates evaluations before deployment. FSF reports published alongside model releases.🔗
External Safety Evaluations
UK AI Safety Institute and other independent external partners. UK AISI has evaluated Gemini models; multiple academic and government external evaluators are listed.🔗

Governance

Independent Safety Board
Google DeepMind Safety Advisory Boards / AI Responsibility Review. Google maintains internal AI Responsibility teams and external advisory panels, and DeepMind has dedicated safety research teams. Details of any independent board are less public than Anthropic's Long-Term Benefit Trust (LTBT).🔗
Seoul AI Safety Commitment
Google signed as 'Google' (encompassing DeepMind). In August 2025, 60 UK lawmakers accused the company of violating the commitment by releasing Gemini 2.5 Pro without a safety report.🔗
Last updated: 2024-05-21
Government Safety Report
UK AI Safety Institute; US NIST. Provided models to UK AISI for evaluation and participated in US AI Safety Institute (NIST) evaluations; regular testimony to government committees.🔗
Third-Party Audits
Uses external evaluation partners; third-party testing is committed to under the Seoul Frontier AI Safety Commitments.🔗
⚠️ Whistleblower Policy
No public whistleblower-specific AI safety policy found. However, in February 2026, approximately 800 Google employees signed an open letter to leadership over military AI concerns, indicating that employees do raise safety concerns collectively, if not through a formal whistleblower channel.🔗

Policy Positions

Military Use: Allowed (restrictions on weapons and surveillance removed February 2025)
Google updated AI Principles in February 2025, removing 2018 pledge not to use AI for weapons or surveillance. Project Nimbus (Israeli military cloud contract) ongoing.🔗
Surveillance Use: Allowed (prior explicit prohibition removed February 2025)
February 2025 AI Principles update removed explicit pledge not to develop surveillance technology. Employees protested the change.🔗
Open-Source Models: Partial
Gemma 2 (2B, 9B, 27B), Gemma 3, Gemma Scope, CodeGemma, RecurrentGemma, and 1 more. The Gemma series are open-weight models; Gemini (flagship) is closed-source and API-only.🔗
Children/Minors Policy: Yes
Prohibited uses include CSAM. Google Family Link provides parental controls for Gemini.🔗

Incident History

Gemini Told User 'Please Die' During Homework Conversation

2024-11-14

Google's Gemini AI chatbot told user Vidhay Reddy 'Please die' and called them a 'burden on society' during a homework-related conversation about challenges facing aging adults. Google acknowledged the response violated its safety guidelines.

60 UK Lawmakers Accuse Google of Violating AI Safety Commitment

2025-08-29

A cross-party group of 60 UK parliamentarians publicly accused Google DeepMind of violating the Seoul Frontier AI Safety Commitments by releasing Gemini 2.5 Pro without publishing a required safety report, which was delayed more than six weeks after the model's release.

Google Removes Weapons/Surveillance AI Pledge

2025-02-04

Google updated its AI Principles, removing a 2018 commitment not to use AI for weapons or surveillance. This was widely criticized by employees and safety researchers. ~800 Google employees later signed a letter protesting the change.

Timeline

2025-09-22
Frontier Safety Framework v3 published🔗
2025-02-04
Google removes weapons and surveillance pledge from AI Principles; FSF v2 published🔗
2024-05-21
Google signs Seoul Frontier AI Safety Commitments🔗
2024-05-17
Frontier Safety Framework v1.0 published🔗