About us

SaferAI is developing the auditing infrastructure for general-purpose AI systems (GPAIS), particularly large language models. To do so, SaferAI is pursuing both technical and governance work.

We're currently focused on demonstrating the existence of high-consequence risks in current models, and on developing standards for the risk assessment of GPAIS.

We're extremely grateful to Founders Pledge, which has been the main financial supporter of our work to tackle this incredibly pressing challenge.

Our Team

Siméon Campos
Founder, CEO
James Gealy
COO & US AI Governance Director
Manuel Bimich
Strategic Advisor
Quentin Feuillade
Risk Analyst & Prompt Engineer (Contractor, from P.H.I)

Our Services

Research
We're researching AI governance and how to design adequate AI risk management standards. In that context, we're studying other industries' safety standards.
Standardization
We're involved in AI standardization committees (JTC21 and SC42), working on the parts concerning large language models and AI risk management.
Outreach & Training
We see communication, training, and advising as a core part of our mission to ensure that AI risks are tackled adequately by all stakeholders.
Learn More About AI Risks
Start Now