SaferAI joins the US AI Safety Institute Consortium (NIST)!
About us

SaferAI is a non-profit organization dedicated to developing risk management infrastructure for large, general-purpose AI systems. The organization focuses on developing methodologies and standards for risk assessment of these AI systems.

We're extremely grateful to Founders Pledge and to Jaan Tallinn, through Lightspeed Grants, who have been the main financial supporters of our work on this incredibly pressing challenge.

Our Team

Siméon Campos
Risk Management
James Gealy
Standardization
Myriame Honnay
Externalized COO (Akoneo)
Gabor Szorad
Evaluation
Henry Papadatos
Technical Governance Researcher
Francesca Sheeka
Consensus-Building
Cornelia Kutterer
Senior Advisor
Fabien Roger
Technical Advisor

Our Activities

Research
We research AI governance and how to design adequate AI risk management standards. As part of this work, we study the safety standards of other industries.
Standardization
We're involved in AI standardization committees (JTC21 and SC42), working on the parts concerning large language models and AI risk management.
Outreach & Training
We see communication, training, and advising as core parts of our mission, ensuring that AI risks are adequately addressed by all stakeholders.