About

Mission

AI Safety Facts is a source-linked transparency dashboard for AI company safety practices. We provide verifiable information without opinions, grades, or rankings. Every fact on this site links directly to its primary source so you can verify it yourself.

Methodology

We track publicly verifiable safety practices across major AI companies. Our data points fall into five categories:

  • Safety Documents — Published policies, model cards, and safety frameworks
  • Testing & Evaluation — Third-party red-teaming, CBRN evaluations, and safety benchmarks
  • Governance — Independent safety boards, government commitments, and audit practices
  • Policy Positions — Factual statements about military use, open-source stance, and content filtering
  • Incident History — Reported safety incidents and company responses

Each data point is marked as verified (with a source link), not found (no public evidence located), or unknown (insufficient public information to make a determination).
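
For illustration, here is a minimal sketch of how such a data point could be represented. The category names and status values mirror the lists above; the `DataPoint` type, its field names, and the example entry are hypothetical and not the dashboard's actual schema.

```typescript
// Hypothetical sketch of a dashboard data point; type and field names
// are illustrative, not the site's actual data model.

type Category =
  | "Safety Documents"
  | "Testing & Evaluation"
  | "Governance"
  | "Policy Positions"
  | "Incident History";

// The three verification states described above.
type Status = "verified" | "not_found" | "unknown";

interface DataPoint {
  company: string;     // the AI company being tracked
  category: Category;
  claim: string;       // the factual statement being recorded
  status: Status;
  sourceUrl?: string;  // present when status is "verified"
  lastChecked: string; // ISO 8601 date of the last verification pass
}

// Example entry (hypothetical data, not a real record from the site).
const example: DataPoint = {
  company: "Example AI Co",
  category: "Safety Documents",
  claim: "Has published a frontier safety framework",
  status: "verified",
  sourceUrl: "https://example.com/safety-framework",
  lastChecked: "2024-06-01",
};
```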

Data Sources

  • Company websites, blogs, and official documentation
  • Published research papers and technical reports (e.g., arXiv)
  • AI Incident Database (incidentdatabase.ai)
  • Government publications and international commitments
  • Third-party evaluation organizations (METR, ARC, AISI)

Contributing

If you find inaccurate information or have sourced updates, we welcome corrections. All submissions must include a verifiable primary source link. Contact us or submit a pull request to our GitHub repository.