About us

SaferAI is a non-profit organization aiming to incentivize the development and deployment of safer AI systems through better risk management.

The organization focuses on research to advance the state of the art in AI risk assessment and on developing methodologies and standards for the risk management of AI systems. Among other engagements, we actively contribute to the relevant working groups of the OECD, the EU (Code of Practice and JTC 21), and the US NIST AI Safety Institute Consortium.

If you want to learn more about our activities, read our publications, follow us on LinkedIn, subscribe to our newsletter "The SaferAI Roundup", or get in touch with us.

Transparency

We are extremely grateful to Founders Pledge and to Jaan Tallinn (through Lightspeed Grants), the main financial supporters of our work to tackle this incredibly pressing challenge.

We are a registered non-profit in France; our registration is public.

If you want to help reduce risks from AI, consider donating to support our mission.

Our Team

Please contact us through our LinkedIn profiles, or for general inquiries email [email protected].

Siméon Campos
Founder, Executive Director
LinkedIn

Siméon is the founder and Executive Director of SaferAI, working on key aspects of the organization, from standardization and risk management research to fundraising and external partnerships. Having co-founded EffiSciences, an organization that provides training in responsible AI, Siméon has closely followed the AI literature for several years, built partnerships with external stakeholders, and built an organization from the ground up.

Henry Papadatos
Managing Director
LinkedIn

Henry is Managing Director, working on key aspects of SaferAI, from ratings, AI and benchmark research, and risk management to management and operations. Having conducted research on AI sycophancy at UC Berkeley, Henry brings LLM research know-how that complements the team's risk management work and research experience.

Chloé Touzet
Head of Policy
LinkedIn

Chloé is Head of Policy, leading our engagement with external stakeholders (spanning civil society organizations, policymakers, and companies) and producing research and policy pieces on adequate AI governance. A researcher on labor, AI, and inequalities, Chloé spent five years as a Policy Researcher at the OECD, doing research and policy work in those domains. She holds a PhD in political economy from the University of Oxford.

Malcolm Murray
Head of Research
LinkedIn

Malcolm is Head of Research, leading our work on quantitative risk assessment of large language models for risks such as cybersecurity and biosecurity. With twenty years of experience in risk and strategy across research, consulting, and industry, he has a long track record of running research projects, including as Chief of Research, Risk and Audit at Gartner, Institute Senior Fellow at MBOSI, and Global Head of Business Operations at Reuters Media.

James Gealy
Head of Standardization
LinkedIn

James is Head of Standardization, contributing to the OECD G7 Hiroshima AI Process reporting framework and co-editing the AI risk management standard at CEN-CENELEC JTC21. He has fifteen years of experience as an electrical engineer in spacecraft testing and operations at Northrop Grumman and Airbus, with practical experience in quality and risk management, technical procedure writing, and information security practices.

Lauren Fried
Operations (external)
LinkedIn

Lauren is Operations Officer, a role she holds as part of her work at Akoneo, a consultancy that provides legal, operations, and HR support to small organizations.

Gábor Szorad
Head of Product
LinkedIn

Gábor is Head of Product, working to leverage SaferAI's expertise to increase our financial independence and deliver value to companies interested in responsible AI. With twenty years of experience in management, he has a long track record of scaling products and companies, having grown e-commerce companies from 0 to 8,200 employees as CEO.

Daniel Kossack
Policy Research Analyst
LinkedIn

Daniel is a Policy Research Analyst and Talos Fellow. In that role, he facilitates a special interest group on risk acceptability at JTC21, conducts interoperability research on the commonalities and differences between texts relating to GPAI risk management, and writes analyses and inputs for the multiple international processes SaferAI is part of.

Cornelia Kutterer
Senior Advisor
LinkedIn

Cornelia is Senior Advisor, advising the team on institutional engagement and governance research. With twenty years of experience in research, tech, and AI policy, she has a long track record in institutional engagement, law, and research management, including as Senior Director of EU Government Affairs at Microsoft, Managing Director of Considerati, and Head of Department at BEUC.

Fabien Roger
Technical Advisor
LinkedIn

Fabien is Technical Advisor, providing crucial input to our technical projects, such as our ratings and our standardization work in international institutions. A Member of Technical Staff at Anthropic and formerly at Redwood Research, he has significant technical expertise in AI research, especially interpretability, evaluations, and AI control methods.