About us

SaferAI is a France-based nonprofit that creates risk measurement tools for governments and companies managing AI systems.

We combine technical research with policy work: developing quantitative risk models, evaluating company practices, and leading standards development that shapes AI regulation worldwide.

We contribute to AI safety frameworks at the OECD, NIST, ISO, and within the EU AI Act. As an independent nonprofit (registered as a French association loi 1901), we maintain scientific rigor while advising governments, rating companies, and building the technical infrastructure for AI accountability.

If you want to learn more about our activities, you can explore our publications, follow us on LinkedIn, or get in touch.

Our focus areas
Research

We develop quantitative risk models that translate AI capabilities into measurable estimates of harm. Our methodology breaks complex risks into quantifiable components and maps benchmark scores to probability estimates of real-world harm.

We developed the first comprehensive AI risk management framework, adapting tried-and-tested practices from high-risk industries to AI systems. We have also partnered with Google DeepMind researchers on goal-directedness evaluations and pioneered a quantitative AI risk estimation methodology built on expert elicitation protocols.

Current focus: We've built nine risk models for AI-enabled cyberattacks spanning different threat actors, targets, and attack types. We're now expanding to include loss-of-control risks, CBRN threats, and harmful manipulation scenarios.

Impact: Our models help policymakers set evidence-based capability thresholds and enable the field to move from qualitative to quantitative assessments. National AI Safety Institutes are actively integrating our approaches into their evaluation frameworks.


Ratings & Accountability

We independently evaluate and rate how well leading AI companies manage risks. Our framework assesses companies across four dimensions: how they identify risks, assess their severity, implement safeguards, and govern their systems.

Current focus: Our public ratings create accountability through multiple channels: media coverage (featured twice in TIME and in Euractiv), market pressure from model deployers choosing safer systems, and investor decisions (Norway's $1.9 trillion sovereign wealth fund explicitly cited our framework as one of the external standards it was considering).

Beyond ratings, we advise companies on developing Frontier Safety Frameworks. This advisory work is increasingly critical as California's SB 53 requires companies to comply with their own frameworks, and the EU Code of Practice requires general-purpose AI companies to adopt a safety and security framework. We've contributed to frameworks for G42 and a major AI company.

Impact: We continue our work to increase accountability for AI companies, provide specific guidance on how their risk management can be improved, and create market incentives for better practices. We're scaling this work and exploring additional mechanisms, such as ESG frameworks, to reward responsible AI development.


Standards & Governance

We shape the technical foundations of AI regulation through leadership positions in key standards bodies and direct collaboration with governments. Our contributions to date include:

◆ James Gealy, our Standardization Lead, leads the high-risk AI systems risk management standard at CEN-CENELEC JTC 21 (the body writing technical specifications for EU AI Act compliance)

◆ James also serves as editor of the generative AI red teaming standard at ISO

◆ We participated in all four working groups developing the EU's Code of Practice for general-purpose AI models

◆ We were the only AI safety organization invited to the OECD taskforce working on the reporting framework of the Hiroshima AI Process

Current focus: We're contributing to the development of harmonized standards that will define how companies demonstrate AI Act compliance. We're also advising national AI Safety Institutes on implementing quantitative risk modeling.

Impact: Our standards work establishes the technical infrastructure that makes AI regulation enforceable. By codifying risk management requirements and evaluation methodologies, we're translating policy intent into concrete technical specifications that companies follow.


Employees
Henry Papadatos
Executive Director
Henry is Executive Director, working on key aspects of SaferAI ranging from ratings, AI and benchmark research, and risk management to management and operations. With experience conducting research on AI sycophancy at UC Berkeley, Henry brings LLM research know-how to the team, which enables t...
James Gealy
Standardization Lead
James is Standardization Lead, contributing to the OECD G7 Hiroshima AI Process reporting framework and co-editing the AI risk management standard at CEN-CENELEC JTC21. He has fifteen years of experience as an electrical engineer in spacecraft testing and operations at Northrop Grumman and Airbus, w...
Chloé Touzet
Policy Lead
Chloé is Policy Lead, leading our engagement with external stakeholders (spanning civil society organisations, policymakers and companies) and producing research and policy pieces on adequate AI governance. A researcher on labor, AI and inequalities, Chloé spent 5 years as Policy Researcher at the O...
Malcolm Murray
Research Lead
Malcolm is Research Lead, leading our work on quantitative risk assessment of large language models on risks like cybersecurity and biosecurity. With twenty years of experience in risk and strategy, research, consulting and industry, he has a long track record of running research projects as a Chief...
Steve Barrett
Senior Researcher
Steve is a Senior Researcher at SaferAI working on AI risk management. He has experience in assessing confidence in frontier AI safety cases, alongside experience in cybersecurity and safety assurance in the automotive sector. He brings a strong track record in innovation and has spent 20+ years in ...
Radostina Karageorgieva
Communications and Operations Specialist
Radostina leads SaferAI's communications strategy, translating complex AI safety research into actionable narratives for policymakers, media, and public audiences. With over 10 years at the UN and Red Cross managing crisis communications across global humanitarian emergencies, she brings proven expe...
Lily Stelling
Policy Associate
Lily is Policy Associate (Risk Management Specialist), co-authoring SaferAI's Risk Management Ratings and contributing to policy research and stakeholder engagement to advance AI safety coordination. Previously, she was a Research Scholar at ML Alignment & Theory Scholars. With research experien...
Matt Smith
Research Scientist
Matt is a Research Scientist at SaferAI investigating methods for producing principled and verifiable quantitative risk models at the intersection of AI systems and society, in high-uncertainty, limited-data settings. He has ten years of experience in fundamental Machine Learning research and hol...
Jakub Krys
Research Scientist
Jakub is a Research Scientist focused on developing our quantitative risk management framework. His experience spans both technical and governance aspects of AI safety, having worked on adversarial ML, cybersecurity, compute governance and whistleblowing policies. Previously, he completed a PhD in P...
Max Schaffelder
Standards Associate
Max is a Talos fellow, contributing to SaferAI’s work on technical standards for AI. He holds a Master’s degree in Artificial Intelligence and has done technical research on the impact of synthetic training data on the behavior of large language models. Max has also been active as an organizer for t...
Advisors
Siméon Campos
Founder, Advisor
Siméon is the founder of SaferAI and currently serves as an advisor and chairman of the board, working on key aspects ranging from standardisation and risk management research to fundraising and external partnerships. With experience co-founding an organisation doing training in responsible AI, EffiSc...
Fabien Roger
Technical Advisor
Fabien is a Technical Advisor and a member of the board, providing crucial inputs to our technical projects such as our ratings and our standardization work in international institutions. Member of Technical Staff at Anthropic and formerly at Redwood Research, he has significant technical expertise ...
Gábor Szorad
Product Advisor
Gábor is Product Advisor, working to leverage SaferAI's expertise to increase our financial independence and deliver benefits to companies interested in responsible AI. With twenty years of experience in management, he has a long track record of scaling products and companies, from 0 to 8200 empl...
Cornelia Kutterer
Senior Advisor
Cornelia is a Senior Advisor, advising the team on institutional engagement and governance research. With twenty years of experience in research, tech and AI policy, she has a long track record of institutional engagement, law, and research management as Senior Director of EU Government Affairs at M...
Duncan Cass-Beggs
Senior Advisor
Duncan Cass-Beggs is a Senior Advisor, providing guidance on AI governance and strategic foresight. With over 25 years of experience in public policy, including as OECD's head of strategic foresight and executive director of CIGI's Global AI Risks Initiative, he brings deep expertise in anticipating...