Research Scientist – Cyber Risk Modeling

Overview

SaferAI is seeking a Research Scientist with expertise in cybersecurity and AI to advance our cyber risk modeling work. Ideal candidates will have a strong ability to perform technical research on risk modeling, with experience both in conducting research on large language models and in traditional cybersecurity.

As a Research Scientist, you will be responsible for the core technical work we conduct to advance quantitative risk modeling in cyber, i.e. the ability to quantify the probability of AI cyber risk scenarios and their constituent steps. By breaking down scenarios leading to harm into steps whose probability or quantity can be estimated, we intend to advance our understanding of AI risk and systematically translate empirical indicators such as benchmarks, evaluations, and incident reports into real-world risk estimates. We have already developed an initial set of cyber risk models, and seek to distribute and disseminate these, as well as create new models for specific use cases (e.g., national security, enterprise risk).
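To make the decomposition concrete, here is a minimal sketch of step-wise scenario quantification in Python. All step names and probabilities are hypothetical, and the independence assumption is an illustrative simplification rather than a description of our actual models.

```python
# A minimal sketch of step-wise scenario quantification. Step names and
# probabilities are hypothetical; assuming (for illustration only) that
# the steps are conditionally independent, the scenario probability is
# the product of the per-step probabilities.
scenario_steps = {
    "attacker_attempts_campaign_with_ai_uplift": 0.30,
    "initial_access_succeeds": 0.20,
    "privilege_escalation_succeeds": 0.40,
    "harm_threshold_exceeded": 0.10,
}

p_scenario = 1.0
for p in scenario_steps.values():
    p_scenario *= p

print(f"Estimated scenario probability: {p_scenario:.4%}")
```

Each step probability can then be tied to an empirical indicator, so that a change in, say, a benchmark score propagates into an updated scenario-level estimate.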


About SaferAI

SaferAI is a fast-moving, mission-driven organization advancing and promoting AI risk management to reduce risks from AI, in particular extreme risks from advanced AI systems. We're uniquely positioned at the intersection of technical research and policy, ensuring that the knowledge we produce has an impact on AI risk management.

As a primary contributor to EU standards and the GPAI Code of Practice, the only NGO member of a G7/OECD task force charged with writing a reporting framework for frontier AI, and a founding member of the US AI Safety Institute Consortium, we have made significant contributions to the risk management of general-purpose AI models in the policy realm.

Our technical work is key to maintaining and furthering the unique expertise we bring to governments and companies. We released the first AI risk management framework combining traditional risk management with frontier AI safety policies, cited in NVIDIA's risk assessment work. We co-drafted the G42 risk management policy. We developed the first rating system for AI companies' risk management practices, featured in TIME and Euractiv and informing the decisions of major investors.


Our current core research focus is to develop risk models that enable us to aggregate existing empirical measurements (benchmarks, evaluations, incident reports) and turn them into granular risk quantification. To make this scalable, we are accelerating the process by using LLMs to complement expert Delphi studies and generate predictions. We have developed a methodology for this research focusing on cybersecurity, and now plan to take this work further, as well as develop models of other risk landscapes.
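As one illustration of how such estimates can be combined, the sketch below pools per-step probability estimates from experts and LLM forecasters by averaging in log-odds space. This pooling rule is a common forecast-aggregation choice, not necessarily the one our methodology uses, and all numbers are hypothetical.

```python
import math

def logit(p: float) -> float:
    """Map a probability to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    """Map log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

# Hypothetical estimates for a single risk-model step: three experts
# from a Delphi round plus two LLM forecasting runs.
expert_estimates = [0.05, 0.10, 0.02]
llm_estimates = [0.08, 0.06]
estimates = expert_estimates + llm_estimates

# Average in log-odds space, then map back to a probability.
pooled = inv_logit(sum(logit(p) for p in estimates) / len(estimates))
print(f"Pooled step probability: {pooled:.3f}")
```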

Responsibilities

Your core objective will be to own and carry forward our cyber risk modeling work. Your responsibilities will include:

  • Developing new AI-uplifted cyber risk models to cover a wider area of the risk universe, including specific models tailored to key partners and novel models that capture risks uniquely enabled by AI
  • Developing the cyber risk modeling methodology further, integrating the effects of defenses, mitigations, and new LLM benchmarks
  • Maintaining and updating our current set of cyber risk models, ensuring they reflect real-world observed dynamics and integrate the latest LLM capabilities
  • Disseminating the cyber risk models through presentations and collaborations with key cyber agencies, governments, and AI safety institutes

We are excited about our team members shaping our medium-term research directions, and we are keen to support and enable new research ideas that align with our mission.


Skills and Experience Required

  • Detail-oriented and conscientious
  • Strong problem-solving skills
  • A significant AI and/or cyber research background
  • Programming and software development skills
  • Technical and research paper writing abilities

Nice-to-haves

  • Experience developing applications or enhanced workflows with LLMs
  • Advanced statistical proficiency 
  • High degree of creativity

Working Conditions

Location: We have a central office in Paris and will be opening an office in London. We prefer candidates who are willing to relocate to France or the UK and can work from either office, but we welcome applications from candidates based anywhere and will consider remote arrangements for strong candidates. French language skills are not required for this position.

Wage Range: For US-based candidates, the wage range is $65,000–$90,000. For candidates based outside the US, it is $65,000–$80,000.

Benefits: 

  • Health insurance coverage and retirement plans adapted to your location
  • Home-to-work transportation costs covered at 50%
  • Productivity-related expenses covered up to $2,000 annually
  • Office space where relevant

How to Apply

To apply for this position, please complete this application form. We will evaluate candidates on a rolling basis until the role is filled.

Our hiring process consists of an initial screening call, followed by a paid work test with a follow-up interview to discuss your approach, and concludes with a 3-day paid work trial where you’ll collaborate directly with our team.

We encourage you to lean towards applying, even if you don’t have all the skills and experience required.

If you have any questions or concerns throughout the application process, please don’t hesitate to reach out to us at careers@safer-ai.org. We look forward to reviewing your application.
