How Can Biosafety Inform AI Safety?

Publication date: September 1, 2023
Authors: Olivia Jimenez
Abstract

Many AI scientists and other notable figures now believe that AI poses a risk of extinction on par with pandemics and nuclear war. A sufficiently powerful AI could self-replicate beyond developers’ control. Less powerful AI could also be misused. Given these risks, it is crucial that AI research be held to high standards of caution, trustworthiness, security, and oversight. 

To determine what AI research standards should be and how they should be implemented, it may be helpful to consider precedents from other fields conducting dangerous research. 

This memo outlines select standards in biosafety, focusing on how high-risk biological agents are handled in biosafety level (BSL) 3 and 4 labs in the United States. It then considers how similar standards could be applied to high-risk AI research. The standards covered include:

  1. High-risk research must be conducted in designated labs subject to stringent standards.
  2. Personnel must be trained and screened for reliability.
  3. Someone at each lab is responsible for safety, and they are empowered to shut projects down if they determine them to be unsafe.
  4. Physical and information security are prioritized.
  5. Labs record and respond to every incident and plan for emergencies.
  6. Labs have extensive oversight from governments and independent auditors.
