Frontier AI Risk Management — Governance Researcher

Overview

SaferAI is hiring for a Governance Researcher role on its Frontier AI Risk Management team. This role offers a direct opportunity to shape how the world's leading AI companies manage risk at a pivotal moment.

About SaferAI

SaferAI is a fast-moving, mission-driven organization advancing AI risk management to reduce risks from AI, in particular extreme risks from advanced AI systems. We’re uniquely positioned at the intersection of technical research and policy, ensuring that the knowledge we produce has an impact on AI risk management practice.

SaferAI’s work centers around frontier AI risk management, and we have made major contributions to the risk management of general-purpose AI models. We are a primary contributor to EU risk management standards, and we were influential in drafting the EU GPAI Code of Practice. We released the first AI risk management framework that combines traditional risk management with frontier AI safety policies, co-drafted G42’s Frontier AI Safety policy, and advised several other frontier AI companies on their policies and practices. We developed the first assessment of AI companies’ risk management practices, informing the decisions of major investors. We developed a methodology for quantitative risk modeling, applied it to build nine cybersecurity risk models, and are now implementing risk modeling as part of an EU Commission tender while partnering with multiple AI safety institutes globally.

Responsibilities

This role will join a team led by Malcolm Murray and engage directly in shaping how frontier AI risk management works in practice. The work of the frontier AI risk management team has two tracks. The first is advisory: collaborating with frontier AI companies to refine their risk management frameworks and support their internal implementation (e.g., implementing safety evaluations and mitigation measures). The second is research: advancing risk management methodology where key practices for frontier AI are still being defined and contributing to the emerging foundations of external AI assurance.

As a Governance Researcher, you will work directly with frontier AI companies and governments to assess and strengthen risk management frameworks and their methodology. You will contribute directly to advancing AI risk management methodology in areas that remain underexplored across the field, such as systematic risk identification. This research is similar in nature to the work behind our risk management framework. 

SaferAI is currently in active discussions with several major frontier developers across European, US and East Asian markets. This is a rare opportunity to shape how the world’s most capable AI companies manage risk.

Note that we are also hiring a Research Engineer.


What we’re looking for

AI governance literacy

  • Familiarity with AI safety and governance, alongside safety and policy issues related to technology and society more broadly.
  • Working knowledge of the AI governance landscape, including safety frameworks from AI companies, the EU AI Act, the EU Code of Practice for General-Purpose AI, and California’s SB-53 or comparable regulatory regimes.
  • Working knowledge of technical fundamentals of current frontier AI systems. 
  • Ability to read across regulatory requirements, company operating contexts, and technical constraints, and to synthesize them into coherent, actionable framework recommendations.

Background and professional experience

  • M.A./M.S./Ph.D. in a relevant field such as public policy, international relations, risk management, engineering, law, economics, or computer science.
  • Two or more years of experience in research, management consulting, or a similar field.
  • Significant experience supporting teams to deliver projects in fast-paced and constantly evolving environments.
  • Familiarity with established risk management standards such as ISO 31000, COSO ERM, ICAO SMS, or equivalent frameworks is a plus.

Client engagement and communication

  • Ability to write and edit research reports and contribute to a policy-focused research agenda.
  • Exceptional written and verbal communication skills, with a track record of translating technical risk concepts into language that resonates with senior management.
  • Comfortable with ambiguity: able to deliver “good enough” solutions to open-ended problems and to switch between problems in order to prioritize the most impactful opportunities.
  • Exceptional problem-solving skills, with the ability to think logically about the big picture, identify areas for improvement, and drive change.

Delivery and project management

  • Ability to produce clear written deliverables such as gap analyses, methodology documents, or implementation roadmaps under time pressure and with incomplete information.
  • Comfort operating in a fast-moving, under-defined field where precedent is scarce and judgment must substitute for established playbooks.
  • Highly collaborative and service-oriented, used to working in diverse teams.
  • Ability to work effectively in a remote, asynchronous, and international environment.

Mission alignment

  • Motivation to operate at the frontier of a field that does not yet have settled answers and to contribute to building a methodology that others will eventually follow.
  • Alignment with SaferAI’s mission to ensure that advanced AI technologies are safe through risk management practices, policy, and safe technology development.

Working Conditions

Location: SaferAI’s team members are mostly based in Paris and London, and we have a preference for new joiners to be based there as well. We are willing to consider remote arrangements for particularly strong candidates. We work in English, so French language skills are not required for this position.

Compensation: Competitive salary, commensurate with experience and location.

Benefits: 

  • Health insurance coverage and retirement plans adapted to the location
  • Home-to-work transportation costs covered at 50%
  • Productivity expenditures up to €2k annually
  • Office space if relevant

How to Apply

To apply for this position, please complete the application form. We evaluate candidates on a rolling basis and aim to fill the role as soon as possible; we will take the posting down from our website once the role is filled.

Our hiring process has several stages: an initial screening, a first interview, a paid work test, further interviews, and a final 3-day paid work trial in which you would collaborate directly with the team.

We encourage you to lean towards applying, even if you don’t have all the skills and experience required.
If you have any questions or concerns throughout the application process, please don’t hesitate to reach out to us at careers@safer-ai.org. We look forward to reviewing your application.
