Striving to ensure that AI technologies are safe.

Our Mission

We work across research and policy to make AI safer.

SaferAI is a French non-profit advancing risk management in policy and industry. We work to promote responsible AI development by modelling complex risks, improving company practices, and contributing to standards and policies.

Our Focus Areas
Research

We develop risk management frameworks that draw on tried-and-tested practices from other industries, such as aviation and finance.

We aim to address a major gap in AI risk management by developing quantitative models that translate AI capabilities into real-world risk estimates.

We've completed nine cyber risk models and are expanding to cover loss of control, CBRN threats, and other advanced AI risks.

Learn more about our research

Ratings & Accountability

We independently evaluate and rate how well leading AI companies manage risks, creating transparency that drives safer industry practices. Beyond public ratings, we advise companies on developing robust safety frameworks and on integrating AI risk assessment into decision-making.

Our ratings also create market incentives for responsible AI development across the industry.

See our company ratings

Standards & Governance

Our team leads AI risk management standards development in the EU and serves as editor for international red teaming standards. We actively shaped the EU's Code of Practice for general-purpose AI models and advise AI Safety Institutes around the world.

Our work ensures regulatory frameworks are grounded in rigorous risk management and practical implementation experience.

Explore our policy work

Our Ratings

We rate frontier AI companies on their risk management practices.

Our framework evaluates AI companies across four key areas: how they identify risks, assess their severity, implement safeguards, and govern their systems.

We're hiring people who want to shape how AI is developed. If rigorous research with direct policy impact excites you, join our team.

Featured Research

03.11.2025 · Memo
Risk Tiers: Towards a Gold Standard for Advanced AI
Siméon Campos, James Gealy, Daniel Kossack, Malcolm Murray, Henry Papadatos
Support Our Work

Your donation directly funds independent research that shapes how AI is governed. Help us continue our mission to make AI systems safer and more accountable.