A short overview of AI-related biorisks

April 21, 2024

In late 2023, SaferAI ran a workshop on AI misuse to foster consensus amongst key experts in the field. As part of this workshop, we drafted a collaborative document providing a brief literature review on specific facets of AI risk, which we thought would be useful for the broader public.

Background: Some biosecurity experts believe that advanced AIs could weaken or lift some of the constraints that currently limit the number of people who could commit acts of bioterrorism. Advanced AI systems could be exploited by malicious individuals or groups with little domain-specific knowledge to develop and deploy existing or novel chemical and biological weapons. In the worst cases, AI-assisted bioterrorists could optimise the virulence and resistance of some pathogens to make them more harmful.

Here is a summary of the main claims underlying bioweapons threat models: 

  • There are actors actively trying to cause human extinction or catastrophic damage, e.g. Aum Shinrikyo, which tried to develop pathogens for that purpose in the 1990s. Esvelt (2023) provides an estimate of the number of actors that might pursue such dangerous R&D.
  • Most of these actors, especially the more numerous ones (e.g. “extremists” and “zealots”), have so far failed due to a lack of:
    • Adequate technology, material and/or resources
    • Technical capability and knowledge
  • Advances in biotechnology are making adequate material more available and weakening the technological and resource constraints.
    • “DNA constructs of length sufficient to generate infectious 1918 influenza virus can now be obtained for US$1,500; coronaviruses cost approximately US$2,000, but typically must be enzymatically stitched together by hand prior to virus generation, limiting (for now) the number of capable individuals to those also skilled at modern biotechnology. The laboratory equipment and reagents required for these tasks can typically be obtained for less than US$50,000.” (Esvelt, 2023)
  • Advances in AI capabilities threaten to substantially lower the minimum level of competence needed to reproduce the key technical steps involved in crafting a bioweapon.
    • An illustrative example of how current large language models (LLMs) can lower the barriers to creating biological weapons comes from the 'Safeguarding the Future' course at MIT, which tasked non-scientist students with exploring whether LLM chatbots could assist non-experts in causing a pandemic (Soice et al., 2023). Within one hour, the chatbots suggested four potential pandemic pathogens, along with instructions for how individuals lacking the necessary laboratory skills could acquire them and how to avoid detection by obtaining genetic material from providers that do not screen orders.
    • A more recent experiment with a version of LLaMa-2 fine-tuned to comply with any request (dubbed “Spicy”) showed that the model could provide non-experts with a large fraction of the key information needed to acquire a live infectious sample of the 1918 influenza virus; experts were able to retrieve all of the key information in under an hour (Gopal et al., 2023).

In other words, LLMs may lower the barriers to biological misuse. Beyond LLMs, advanced biological design tools could expand the capabilities of already sophisticated actors, enabling the creation of pandemic pathogens substantially worse than anything seen to date, as well as more predictable and targeted forms of biological weapons (Sandbrink, 2023).
