Overview
SaferAI is seeking a Policy Associate with a strong ability to engage and coordinate multiple stakeholders, including policymakers and scientists.

SaferAI is a fast-moving, mission-driven organization advancing and promoting AI risk management to reduce AI risks, in particular extreme risks from advanced AI systems. We’re uniquely positioned at the intersection of technical research work and policy, ensuring that the knowledge we produce has an impact on AI risk management.
As a primary contributor to EU standards and the GPAI Code of Practice, the only NGO member of a G7/OECD task force charged with writing a reporting framework for frontier AI, and a founding member of the US AI Safety Institute Consortium, we have made significant contributions to the risk management of general-purpose AI models in the policy realm.
Our technical work centers on advanced AI risk management. We released the first AI risk management framework combining traditional risk management and frontier AI safety policies, cited by NVIDIA’s risk assessment work. We co-drafted the G42 risk management policy. We developed the first rating system for AI companies’ risk management practices, informing the decisions of major investors. We have proposed a methodology for quantitative risk modeling and applied it to develop nine cybersecurity risk models, and we are now partnering with multiple AI safety institutes and the European Commission on risk modeling efforts.
Our policy work builds on and complements our technical research agenda. The policy team at SaferAI works on two main strands: first, government advice & contributions to public policy, and second, advocacy work aligned with our mission. The first part includes collaborating with technical governance bodies, such as the EU AI Office or national AISIs, on the basis of our technical research on risk management. It also includes disseminating risk management insights in public policy fora (e.g. international gatherings on AI policy) more broadly. The second part of the policy team’s work consists of identifying and engaging relevant stakeholders to push for various outcomes aligned with our mission, such as the development of safe-by-design AI technologies.
The Policy Associate will join a team led by Chloé Touzet, our Policy Lead, and work across these tasks in collaboration with our Senior Advisor Cornelia Kutterer and Senior Policy Associate Bruno Galizzi. A significant portion of their time will be spent on a project advocating for public investment in reliable, safe, and secure AI. A key responsibility will be coordinating a coalition of top international researchers, including Prof. Yoshua Bengio and Prof. Luke Ong, working on ambitious safe-by-design research agendas.
Candidates should possess strong stakeholder management skills, the ability to execute our policy agenda independently, strategically, and rapidly amid evolving policy landscapes, and the capacity to quickly grasp complex technical AI safety concepts.
Location: SaferAI’s policy team is currently based in Paris and London, and we have a preference for new joiners to be based there as well. We are willing to consider remote arrangements for particularly strong candidates. French language skills are not required for this position.
Wage Range: Competitive salary, commensurate with experience and location.
Benefits:
To apply for this position, please complete this application form. We will evaluate candidates on a rolling basis until the role is filled. We aim to fill the role as soon as possible and will take down this page on our website once it is.
Our hiring process consists of several stages: an initial screening, a first interview, a paid work test, further interviews, and a final 3-day paid work trial in which you would collaborate directly with the team.
We encourage you to apply even if you don’t have all of the listed skills and experience.
If you have any questions or concerns throughout the application process, please don’t hesitate to reach out to us at careers@safer-ai.org. We look forward to reviewing your application.