About us

SaferAI is developing risk management infrastructure for general-purpose AI systems (GPAIS), particularly large language models. To do so, SaferAI pursues both technical and governance work.

We're currently working on developing standards for risk assessment of GPAIS.

We're extremely grateful to Founders Pledge and to Jaan Tallinn (through Lightspeed Grants), the main financial supporters of our work on this incredibly pressing challenge.

Our Team

Siméon Campos
Risk Management
James Gealy
Standardization
Myriame Honnay
Externalized COO (Akoneo)
Gabor Szorad
Evaluation
Francesca Sheeka
Consensus-Building
William Gunn
Strategic Communication
Manuel Bimich
Strategic Advisor
Cornelia Kutterer
Senior Advisor
Quentin Feuillade
Contractor Risk Analyst & Prompt Engineer, from P.H.I

Our Activities

Research
We research AI governance and how to design adequate AI risk management standards. In that context, we study the safety standards of other industries.
Standardization
We're involved in AI standardization committees (CEN-CENELEC JTC21 and ISO/IEC SC42) on the parts concerning large language models and AI risk management.
Outreach & Training
We see communication, training, and advising as a core part of our mission to ensure AI risks are tackled adequately by all stakeholders.