Overview
SaferAI is seeking a Research Scientist with expertise in a CBRN domain (ideally biosecurity) and AI to work on CBRN risk modeling for the European Commission.

We are looking for someone with a strong ability to conduct technical research on risk modeling across AI and CBRN. Ideal candidates will have experience researching AI models; experience with CBRN risk assessment is a plus.
We have been awarded a tender from the European Commission for CBRN risk assessment of GPAI systems. SaferAI is responsible for risk modeling, risk monitoring, and briefing the AI Office. This is a unique opportunity for AI safety research to directly inform regulators as they enforce regulation.
As a Research Scientist, you will lead the core technical work to develop CBRN risk models and collaborate with consortium partners to deliver on the tender.
SaferAI is a fast-moving, mission-driven organization advancing AI risk management to reduce risks from AI, in particular extreme risks from advanced AI systems. We are uniquely positioned at the intersection of technical research and policy, ensuring that the knowledge we produce shapes AI risk management in practice.
As a primary contributor to EU standards and the GPAI Code of Practice, the only NGO member of a G7/OECD task force charged with writing a reporting framework for frontier AI, and a founding member of the US AI Safety Institute Consortium, we have made significant policy contributions to the risk management of general-purpose AI models.
Our technical work is key to maintaining and deepening the unique expertise we bring to governments and companies. We released the first AI risk management framework combining traditional risk management with frontier AI safety policies, cited in NVIDIA's risk assessment work. We co-drafted the G42 risk management policy. We also developed the first rating system for AI companies' risk management practices, featured twice in TIME and in Euractiv and informing the decisions of major investors.

Through this work, we identified risk modeling as a key blind spot in current AI risk management practices. We have proposed a methodology for quantitative risk modeling and applied it to develop nine cybersecurity risk models. We are now partnering with multiple AI Safety Institutes to support their cyber risk modeling efforts, collaborating with the European Commission to develop CBRN risk models, and conducting research on loss-of-control risk modeling.
Your core objective will be to own and carry forward our part of the tender, which covers CBRN risk modeling and risk monitoring.
Within CBRN, we are also excited for team members to shape our medium-term research directions beyond this tender, and we are keen to support and enable new research ideas that align with our mission.
Location: We have a central office in Paris and will be opening an office in London. We prefer candidates willing to relocate to France or the UK who can work from either office, but we welcome applications from candidates based anywhere and will consider remote arrangements for strong candidates. French language skills are not required for this position.
Salary Range: For US-based candidates, $65,000-$90,000. For candidates based outside the US, $65,000-$80,000.
Benefits:
To apply for this position, please complete this application form. We will evaluate candidates on a rolling basis until the role is filled.
Our hiring process consists of an initial screening call, followed by a paid work test with a follow-up interview to discuss your approach, and concludes with a 3-day paid work trial where you’ll collaborate directly with our team.
We encourage you to err on the side of applying, even if you don't have all of the skills and experience listed.
If you have any questions or concerns throughout the application process, please don’t hesitate to reach out to us at careers@safer-ai.org. We look forward to reviewing your application.