
OpenAI

Creator of ChatGPT and GPT-4; leading frontier AI lab focused on artificial general intelligence.

Safety Documents

Responsible Scaling Policy
Preparedness Framework. Beta published December 18, 2023; v2 published April 15, 2025. Covers CBRN, cybersecurity, persuasion, and model-autonomy risk levels.
Last updated: 2025-04-15
Model Card / System Card
GPT-4o System Card and o1 System Card. OpenAI publishes a system card for each major model release; the most recent o1 system card was published in December 2024.
Last updated: 2024-05-13
Safety Benchmark Results
Published in each system card: disallowed-content evaluations, jailbreak robustness, bias evaluations, and CBRN risk scores.
Acceptable Use Policy
Updated January 2024 to remove the blanket ban on military use.
Last updated: 2024-01-10

Testing & Evaluation

Third-Party Red Teaming
External red teamers from multiple organizations; METR for agentic evaluations. The GPT-4o system card describes external red teaming, and the o1 system card notes external red teaming conducted on a near-final checkpoint of o1.
CBRN Risk Evaluation
OpenAI explicitly tests for chemical, biological, radiological, and nuclear (CBRN) risks, as described in the Preparedness Framework and model system cards.
Pre-Deployment Safety Evaluation
The Preparedness Framework requires scorecard evaluations before deployment; system cards document pre-deployment safety work.
External Safety Evaluations
METR (agentic evaluations) and the UK AI Safety Institute. OpenAI partnered with METR for autonomous-capabilities evaluations on o1; UK AISI evaluated GPT-4o.

Governance

Independent Safety Board
Safety and Security Committee (board-level). Formed May 2024; became an independent board oversight committee in September 2024, with Sam Altman stepping down from the committee. Source: https://www.cnbc.com/2024/09/16/openai-announces-new-independent-board-oversight-committee-for-safety.html
Seoul AI Safety Commitment
Signatory to the Frontier AI Safety Commitments announced at the AI Seoul Summit.
Last updated: 2024-05-21
Government Safety Report
US AI Safety Institute (NIST) and UK AI Safety Institute. OpenAI provided models to both institutes for evaluation and has submitted comments to the US government on AI policy.
Third-Party Audits
The UK AI Safety Institute conducted independent evaluations, and OpenAI committed to independent audits under the Seoul commitments.
⚠️ Whistleblower Policy
No publicly documented internal whistleblower process found. Former employees have raised concerns publicly, suggesting informal channels exist. In 2024, whistleblowers filed a complaint with the SEC alleging that restrictive NDAs prevented safety disclosures.

Policy Positions

Military Use: Restricted (partial allowance for approved defense/national-security uses since January 2024)
Blanket ban removed January 10, 2024; weapons development and attacks on critical infrastructure remain prohibited. Partnership with Anduril on defense applications announced December 2024.
Surveillance Use: Restricted
Prohibited uses include surveillance of individuals without consent; domestic mass surveillance is banned, but government use is allowed with oversight.
Open-Source Models: Partial
Whisper (speech recognition), CLIP (vision model), Triton (GPU programming language), and GPT-2 (legacy text model) are open-sourced; GPT-3.5, GPT-4, and all flagship ChatGPT models remain closed.
Children/Minors Policy: Yes
Usage policy prohibits CSAM and any content sexualizing minors. Minimum age of 13, with parental consent required for users under 18.

Incident History

ChatGPT Data Breach – User Chat Histories and Payment Data Exposed

2023-03-20

A bug in the Redis client library caused chat-history titles, and payment information (name, email, last four digits of credit card, billing address) for roughly 1.2% of ChatGPT Plus subscribers, to be visible to other users during a nine-hour window.

Superalignment Team Disbanded – Ilya Sutskever and Jan Leike Depart

2024-05-14

Chief Scientist Ilya Sutskever and Superalignment co-lead Jan Leike both resigned. Leike publicly criticized OpenAI for prioritizing product development over safety. The Superalignment team, tasked with safety research for superintelligent AI, was effectively dissolved.

ChatGPT and Teenager Mental Health – Self-Harm Conversations

2025-08-26

Lawsuits were filed alleging ChatGPT engaged in harmful conversations with minors about suicide and self-harm. California teenager Adam Raine had extensive conversations with ChatGPT about suicide before his death in April 2025; his family filed a wrongful-death lawsuit in August 2025. In legal filings, OpenAI argued the usage violated its terms of use.

Timeline

2025-04-15
Preparedness Framework v2 published
2024-12-05
OpenAI o1 (full version) released with an updated system card
2024-09-16
Safety and Security Committee becomes an independent board oversight committee
2024-09-12
OpenAI o1-preview (reasoning model) released with a system card including CBRN evaluations
2024-05-28
OpenAI board forms the Safety and Security Committee
2024-05-14
Ilya Sutskever and Jan Leike resign; Superalignment team effectively dissolved
2024-01-10
OpenAI updates its usage policy to remove the blanket ban on military use
2023-12-18
Preparedness Framework (Beta) published
2023-11-17
Sam Altman fired by the board, then reinstated within five days; the board members who voted to remove him departed