About SaferAI
SaferAI works to reduce large-scale AI risks by building the governance and technical infrastructure for effective AI risk management. We work across four integrated areas: frontier AI risk management, risk modeling, technical standards, and policy. Our ability to operate across all four areas simultaneously is what makes our impact greater than the sum of its parts.
Our standards work illustrates what that integration produces in practice.
Members of SaferAI currently hold leadership positions in two of the most consequential AI standards processes underway today: EN 18228 (the European AI risk management standard at CEN-CENELEC) and ISO/IEC TS 42119-8 (the global standard on LLM benchmarking and red-teaming). We engage actively across EU, US, and international venues at technical drafting depth, not just as observers. SaferAI is one of the very few independent actors with a seat at the table where the actual text gets written. This role builds on that position.
Responsibilities
As Standards Researcher, you will spend the majority of your time doing the hands-on work of standards development:
- Drafting standards text: writing and revising precise technical language for documents under development at ISO/IEC, CEN-CENELEC, and NIST, including on AI risk management, LLM evaluation, and frontier AI safety and security.
- Committee participation: attending standards meetings (in person and remotely) to advocate for specific text and decisions under the consensus process, representing an independent and prudent perspective.
- Technical analysis: thinking rigorously about the downstream implications of standards language: what requirements actually demand in practice, where ambiguity could render a standard ineffective, and where added specificity is needed to prevent incomplete compliance.
- Stakeholder engagement: building and maintaining productive working relationships with national delegations, industry representatives, regulators, and fellow experts across many countries and jurisdictions.
- Standards-adjacent policy and advocacy work: contributing to cross-cutting projects at the EU and international levels, including work such as the G7 Hiroshima AI Process Reporting Framework, where standards expertise directly informs policy efforts.
What we’re looking for
Standards work rewards a specific temperament. You do not need to have worked inside a formal standards body before — but you should recognize yourself in the following:
You care about the words. Standards are effective when they are specific and hard to game, and ineffective when they are vague enough to mean anything. You find it genuinely interesting to debate whether “shall” is stronger than “should,” whether “adequate” is defensible or a loophole, and how a definition of “risk” changes what a company is actually required to do.
You are tenacious and constructive. Writing effective text for a standard and seeing it to publication takes months or years of building trust, making technical arguments, and finding joint contributors. You are energized by incremental progress, resilient after setbacks, and able to work constructively with counterparts whose values or interests differ significantly from yours.
You hold the bigger picture. Standards are a means, not an end. You understand that a standard is only effective if it is not just well-written but also adopted and adhered to, and you are able to assess when standards are the right tool and when they are not.
You combine technical depth with clarity. Effective participation in standards committees requires the ability to engage credibly on technical content (AI systems and their development, risk management, evaluation methodology) and to communicate complex positions clearly and diplomatically across language and cultural barriers.
You are aligned with the mission. SaferAI’s presence in standards bodies is only valuable because we represent an independent perspective that is grounded in our mission. This role requires a genuine commitment to that perspective and mission — one that you maintain under institutional pressure, over time. SaferAI’s mission is to ensure that advanced AI technologies are safe through risk management practices, policy, and safe technology development.
You have strong technical language skills. The detailed textual work of this role demands an excellent command of English grammar and word choice, and the ability to wordsmith in the moment.
Specific qualifications we are looking for:
- A background in AI, computer science, or a quantitative science and engineering field
- Experience engaging with technical specifications, policy analysis, or structured argumentation (in standards, regulation, law, auditing, conformity assessment, or an industry where standards or technical specifications play a central role). Strong applicants would typically have 3-5 years of experience in one or more of these areas.
- Given SaferAI’s participation in the French standardization ecosystem, the ability to write and speak about AI and related technical topics in French is a plus
- Comfort working across jurisdictions and with international counterparts
- Working knowledge of technical fundamentals of frontier AI systems
- Ability to read across regulatory requirements, company operating contexts, and technical constraints, and to synthesize them into coherent, actionable recommendations
- Familiarity with AI risk management concepts, frontier AI evaluation methodology, or related areas of AI governance is a strong advantage
- Prior involvement in a formal standards process (ISO, IEC, CEN, CENELEC, NIST, IEEE, ITU or equivalent) is welcome but not required.
Working Conditions
Location: SaferAI’s main offices are in Paris and London, and we prefer this position to be based in one of those two cities. We are willing to consider remote arrangements for an exceptionally strong candidate.
Schedule: Standards work involves regular engagement with experts across the globe, which occasionally requires remote meetings outside local working hours.
Travel: Because standards writing is about building trust and consensus, meeting other experts face to face is beneficial. This role therefore involves periodic travel to standards committee meetings across Europe and occasionally further afield. We estimate 2-6 trips per year.
Salary: Competitive salary, commensurate with experience and location.
Benefits:
- Health insurance coverage and retirement plans adapted to the location
- Home-to-work transportation costs covered at 50%
- Productivity expenditures up to €2k annually
- Office space if relevant
How to Apply
To apply for this position, please complete this application form. We evaluate candidates on a rolling basis and aim to fill the role quickly; we will take down the page on our website as soon as the role is filled.
Our hiring process proceeds in stages and usually consists of an initial screening, a first interview, a paid work test, further interviews, and a final 3-day paid work trial in which you would collaborate directly with the team.
We encourage you to lean towards applying, even if you don’t meet every listed qualification.
If you have any questions or concerns throughout the application process, please don’t hesitate to reach out to us at careers@safer-ai.org. We look forward to reviewing your application.