Advancing AI risk management to safeguard our future

SaferAI is a non-profit organisation based in France that aims to incentivise the development and deployment of safer AI systems through better risk management.

More about us

Our focus areas

Standards & Governance

With a focus on large language models and general-purpose AI systems, we want to make sure the EU AI Act covers all the important risks arising from those systems. We are drafting AI risk management standards at JTC 21, the body in charge of the technical standards implementing the EU AI Act.

We are also doing comparable work at the US NIST AI Safety Institute Consortium (AISIC) and in the OECD's G7 Hiroshima Process task force.


Ratings

We aim to establish a clear rating framework for evaluating frontier AI companies' risk management practices.

Our objective is to strengthen the accountability of the private actors developing and deploying frontier AI systems.


Research

We are conducting research on AI risk management, applying established knowledge from other domains to AI. Our current focus is making quantitative risk assessment (QRA), i.e. quantifying the likelihood of damaging events, possible for AI systems.

We are therefore developing this methodology and applying it to harms induced by cyber-offensive LLM capabilities.
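To illustrate the core idea behind QRA: given an estimated probability that a damaging event occurs and an estimated loss if it does, one can compute an expected loss, for instance with a simple Monte Carlo simulation. The sketch below is purely illustrative; the probabilities, loss figures, and function names are hypothetical and do not come from SaferAI's methodology.

```python
import random

def estimate_expected_loss(p_event, loss_given_event, n_trials=100_000, seed=0):
    """Monte Carlo estimate of expected annual loss.

    Each trial simulates one year: the damaging event occurs with
    probability p_event and, if it does, incurs loss_given_event.
    Returns the average loss across trials.
    """
    rng = random.Random(seed)
    total = sum(
        loss_given_event
        for _ in range(n_trials)
        if rng.random() < p_event
    )
    return total / n_trials

# Illustrative numbers only: a 1% annual event probability
# with a 1,000,000 loss gives an expected loss near 10,000.
risk = estimate_expected_loss(p_event=0.01, loss_given_event=1_000_000)
```

In practice, QRA for AI harms is far harder than this toy model suggests: the main research challenge is eliciting defensible probability and severity estimates in the first place, not the arithmetic that combines them.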
