How Can Biosafety Inform AI Safety?
DESCRIPTION
This memo outlines select standards in biosafety, with a focus on how high-risk biological agents are treated in biosafety level (BSL) 3 and 4 labs in the United States. It then considers how similar standards could be applied to high-risk AI research.
When
September 2023
Who
Olivia Jimenez

Many AI scientists and other notable figures now believe that AI poses a risk of extinction on par with pandemics and nuclear war. A sufficiently powerful AI could self-replicate beyond its developers' control, and even less powerful systems could be misused. Given these risks, it is crucial that AI research be held to high standards of caution, trustworthiness, security, and oversight.

To determine what standards AI research should be held to and how they should be implemented, it may be helpful to consider precedents from other fields that conduct dangerous research.

This memo outlines select standards in biosafety, with a focus on how high-risk biological agents are handled in biosafety level (BSL) 3 and 4 labs in the United States. It then considers how similar standards could be applied to high-risk AI research. The memo covers six topics:

  1. High-risk research must be conducted in designated labs subject to stringent standards.
  2. Personnel must be trained and screened for reliability.
  3. A designated person at each lab is responsible for safety and is empowered to shut down any project they determine to be unsafe.
  4. Physical and information security are prioritised.
  5. Labs record and respond to every incident and plan for emergencies.
  6. Labs have extensive oversight from governments and independent auditors.

Read the complete document here:
