Frontier AI Risk Management — Research Engineer

Overview

SaferAI is hiring for a Research Engineer role on its Frontier AI Risk Management (FAIRM) team. This role offers a direct opportunity to shape how the world's leading AI companies manage risk at a pivotal moment.

About SaferAI

SaferAI is a fast-moving, mission-driven organization advancing and promoting AI risk management to reduce risks from AI, in particular extreme risks from advanced AI systems. We’re uniquely positioned at the intersection of technical research and policy, ensuring that the knowledge we produce has a direct impact on AI risk management.

SaferAI’s work centers around frontier AI risk management, and we have been responsible for major contributions to AI risk management of general-purpose AI models. We are a primary contributor to EU risk management standards, and we were influential in drafting the EU GPAI Code of Practice. We released the first AI risk management framework that combines traditional risk management and frontier AI safety policies, co-drafted G42’s Frontier AI Safety policy, and advised several other frontier AI companies on their policies and practices. We developed the first AI risk management assessment of AI companies’ risk management practices, informing the decisions of major investors. We developed a methodology for quantitative risk modeling, applied it to develop nine cybersecurity risk models, and are now implementing risk modeling as part of the EU Commission tender and partnering with multiple AI safety institutes globally.

Responsibilities

In this role, you will join a team led by Malcolm Murray and engage directly in shaping how frontier AI risk management works in practice. The work of the frontier AI risk management team has two tracks. The first is advisory: collaborating with frontier AI companies to refine their risk management frameworks and support their internal implementation (e.g., implementing safety evaluations and mitigation measures). The second is research: advancing risk management methodology where key practices for frontier AI are still being defined and contributing to the emerging foundations of external AI assurance.

As a Research Engineer, you will help frontier AI companies operationalize their risk management frameworks. This includes designing and recommending safety evaluations, defining mitigation measures, red-teaming existing mitigations, and running third-party evaluations. The role sits on the technical and infrastructure side of AI safety: we’re looking for someone who can move between understanding a company’s risk management commitments and translating them into concrete evaluation and mitigation practices.

SaferAI is currently in active discussions with several major frontier developers across European, US, and East Asian markets. This is a rare opportunity to shape how the world’s most capable AI companies manage risk.

Note that we are also hiring a Governance Researcher.


What we’re looking for

Technical AI literacy

  • Deep familiarity with AI safety and evaluation methodologies, including capability evaluations, red-teaming, safety cases, and model cards, and the ability to assess their efficacy.
  • Hands-on experience running benchmarks.
  • Familiarity with AGI safety and governance work, alongside safety and policy issues related to technology and society more broadly, is a plus.

Background and professional experience

  • M.S. or Ph.D. in computer science, machine learning, AI safety, or a related technical field.
  • 1+ years of hands-on technical experience with AI systems development, deployment, or safety engineering at a frontier AI company, AI product company, or AI research organization.
  • Significant experience supporting teams to deliver projects in fast-paced and constantly evolving environments.

Client engagement and communication

  • Excellent technical understanding and communication skills, with the ability to distill sophisticated technical ideas to their essence.
  • Comfortable with ambiguity: able to provide “good enough” solutions to open-ended problems and to switch between problems to prioritize the most impactful opportunities.
  • Exceptional problem-solving skills, with the ability to think logically about the big picture, identify areas for improvement, and drive change.

Delivery and project management

  • Ability to produce clear written deliverables such as gap analyses, methodology documents, or implementation roadmaps under time pressure and with incomplete information.
  • Comfort operating in a fast-moving, under-defined field where precedent is scarce and judgment must substitute for established playbooks.
  • Highly collaborative and service-oriented, used to working in diverse teams.
  • Ability to work effectively in a remote, asynchronous, and international environment.

Mission alignment

  • Motivation to operate at the frontier of a field that does not yet have settled answers and to contribute to building a methodology that others will eventually follow.
  • Alignment with SaferAI’s mission to ensure that advanced AI technologies are safe through risk management practices, policy, and safe technology development.

Working Conditions

Location: SaferAI’s team members are mostly based in Paris and London, and we have a preference for new joiners to be based there as well. However, we are willing to consider remote arrangements for particularly strong candidates. We work in English, and French language skills are not required for this position.

Wage Range: Competitive salary, commensurate with experience and location.

Benefits: 

  • Health insurance coverage and retirement plans adapted to the location
  • Home-to-work transportation costs covered at 50%
  • Productivity expenditures up to €2k annually
  • Office space if relevant

How to Apply

To apply for this position, please complete this application form. We evaluate candidates on a rolling basis and aim to fill the role as soon as possible; we will take the posting down from our website once the role is filled.

Our hiring process proceeds in stages and usually consists of an initial screening, a first interview, a paid work test, further interviews, and a final 3-day paid work trial in which you would collaborate directly with the team.

We encourage you to lean towards applying, even if you don’t have all the skills and experience listed.
If you have any questions or concerns during the application process, please don’t hesitate to reach out to us at careers@safer-ai.org. We look forward to reviewing your application.
