6 Expert-Backed Claims on AI Risk Management

Following a workshop on risk management frameworks and risk thresholds for frontier AI, which brought together leading experts in AI risk management and key policymakers, we present a set of expert-driven claims that emerged from the discussions, along with the specific experts who endorse each one.

Claim 1

Frontier AI risk management frameworks should include elements of commonly used risk management standards and frameworks (e.g., the NIST AI RMF and ISO/IEC 23894 & 42001), such as the following:

  • Defining risk tolerances
  • Performing risk assessments:
    • Identification of risks
    • Analysis of identified risks
    • Comparing risks to risk tolerance
  • Implementing risk mitigation or other controls
  • Monitoring risks
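The steps listed above can be sketched as a simple loop. This is an illustrative sketch only; the `Risk` structure, the tolerance value, and all names below are hypothetical and not drawn from any of the cited standards.

```python
# Hypothetical sketch of the risk management loop: define a tolerance,
# assess risks against it, mitigate what exceeds it, keep monitoring.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    estimate: float  # analyzed risk estimate (illustrative units)

RISK_TOLERANCE = 0.01  # defining a risk tolerance (hypothetical value)

def assess(risks: list[Risk]) -> list[Risk]:
    """Risk assessment: compare each analyzed risk to the tolerance."""
    return [r for r in risks if r.estimate > RISK_TOLERANCE]

def manage(risks: list[Risk]) -> None:
    """Mitigation step: flag out-of-tolerance risks for controls."""
    for r in assess(risks):
        print(f"mitigation required: {r.name} ({r.estimate} > {RISK_TOLERANCE})")

manage([Risk("model misuse", 0.05), Risk("training data leak", 0.002)])
```

In practice, monitoring would feed updated estimates back into `assess` on an ongoing basis rather than running once.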

Endorsed by:

Siméon Campos (SaferAI), Henry Papadatos (SaferAI), Heather Frase, PhD, Bill Anderson-Samways (IAPS), Malcolm Murray

Claim 2

Risk analysis should include, though not be restricted to, a semi-quantitative or quantitative estimate of risk (i.e., its severity and likelihood).
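A common semi-quantitative approach scores severity and likelihood on ordinal scales and combines them. The sketch below is one illustrative instance; the 1-5 scales and level thresholds are assumptions, not part of the claim.

```python
# Hypothetical semi-quantitative risk estimate: ordinal severity and
# likelihood scales combined into a score, then bucketed into levels.

SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "frequent": 5}

def risk_score(severity: str, likelihood: str) -> int:
    """Combine ordinal severity and likelihood into a single score."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

def risk_level(score: int) -> str:
    """Map a score onto a coarse level for comparison against a tolerance."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_level(risk_score("major", "possible")))  # 4 * 3 = 12 -> "medium"
```

A fully quantitative estimate would instead express likelihood as a probability and severity in concrete units (e.g., expected harm), but the comparison-to-tolerance step is the same.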

Endorsed by: 

Siméon Campos (SaferAI), Henry Papadatos (SaferAI), Heather Frase, PhD, Bill Anderson-Samways (IAPS)

Claim 3

Risk identification should be performed continuously throughout the training run and deployment, tightly integrating red-teaming and monitoring with standard risk identification methods (e.g., fishbone analysis, scenario analysis) applied to worrying findings.

Endorsed by: 

Siméon Campos (SaferAI), Henry Papadatos (SaferAI), Heather Frase, PhD, Bill Anderson-Samways (IAPS), Malcolm Murray

Claim 4

In the absence of a government-set risk tolerance, frontier AI developers should define their risk tolerance quantitatively or semi-quantitatively. Any substantial differences in tolerance from other industries should be clearly explained.

Endorsed by: 

Siméon Campos (SaferAI), Henry Papadatos (SaferAI), Malcolm Murray

Claim 5

Risk tolerance should be operationalized as a joint set of capability thresholds and mitigation objectives, with in-depth rationales for how these relate to the global risk thresholds.

Endorsed by: 

Siméon Campos (SaferAI), Henry Papadatos (SaferAI), Bill Anderson-Samways (IAPS), Malcolm Murray

Claim 6

Risk assessments should be validated by independent third-party auditors or oversight organizations to ensure objectivity, rigor, and adherence to industry standards and best practices.

Endorsed by:

Siméon Campos (SaferAI), Henry Papadatos (SaferAI), Heather Frase, PhD, Bill Anderson-Samways (IAPS), Malcolm Murray

July 10, 2024