The Role of Risk Modeling in Advanced AI Risk Management

Publication date
December 10, 2025
Authors
Chloé Touzet, Henry Papadatos, Malcolm Murray, Otter Quarks, Steve Barrett, Alejandro Tlaie Boria, Elija Perrier, Matthew Smith, Siméon Campos
Abstract

Rapidly advancing artificial intelligence (AI) systems introduce novel, uncertain, and potentially catastrophic risks. Managing these risks requires a mature risk management infrastructure whose cornerstone is rigorous risk modeling. We conceptualize AI risk modeling as the tight integration of (i) scenario building—causal mapping from hazards to harms—and (ii) risk estimation—quantifying the likelihood and severity of each pathway. We review classical techniques such as Fault and Event Tree Analyses, FMEA/FMECA, STPA, and Bayesian networks, and show how they can be adapted to advanced AI. A survey of emerging academic and industry efforts reveals fragmentation: capability benchmarks, safety cases, and partial quantitative studies are valuable but insufficient when divorced from comprehensive causal scenarios. Comparing the nuclear, aviation, cybersecurity, financial, and submarine domains, we observe that every sector combines deterministic guarantees for unacceptable events with probabilistic assessments of the broader risk landscape. We argue that advanced-AI governance should adopt a similar dual approach, and that verifiable, provably safe AI architectures are urgently needed to supply deterministic evidence where current models emerge from opaque end-to-end optimization rather than being specified by hand. In one potential governance-ready framework, developers conduct iterative risk modeling and regulators compare the results with predefined societal risk tolerance thresholds. The paper provides a methodological blueprint and opens a discussion on how best to embed sound risk modeling at the heart of advanced-AI risk management.

Read full paper

Highlights

Rapidly advancing artificial intelligence (AI) systems introduce novel and potentially catastrophic risks, and they are being deployed amid deep epistemic uncertainty. Safety-critical industries facing catastrophic hazards—such as nuclear power or aviation—have achieved dramatic safety gains by institutionalizing risk management, with rigorous risk modeling at its core.

In AI, practical risk modeling remains fragmented. We define risk modeling as the tight coupling of (i) scenario building, which maps causal pathways from hazard to harm, and (ii) risk estimation, which assigns likelihood and harm values to these scenarios, with explicit treatment of uncertainty and dependencies. Both components are necessary: estimation without scenarios cannot yield a comprehensive risk picture; scenarios without estimation cannot support real decision-making trade-offs.
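To make this coupling concrete, here is a minimal sketch of the two components working together: a hypothetical three-stage hazard-to-harm pathway whose uncertain stage probabilities are propagated by Monte Carlo sampling. All stage names, distributions, and parameter values are illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch only: a three-stage hazard-to-harm pathway with
# uncertain per-stage probabilities, propagated by Monte Carlo sampling.
# All stage names and distribution parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Scenario building: an ordered causal pathway from hazard to harm.
# Risk estimation: each stage gets an uncertain probability, here a Beta
# distribution standing in for calibrated expert elicitation.
stages = {
    "model_provides_uplift": (2, 8),    # Beta(a, b) hyperparameters
    "actor_attempts_misuse": (1, 9),
    "safeguards_fail":       (1, 19),
}

p_path = np.ones(N)
for a, b in stages.values():
    p_path *= rng.beta(a, b, size=N)    # independence assumed for simplicity

# Severity if the full pathway occurs, in arbitrary harm units
# (lognormal to reflect a heavy right tail).
severity = rng.lognormal(mean=2.0, sigma=1.0, size=N)
expected_harm = p_path * severity

print(f"mean P(pathway)     : {p_path.mean():.4f}")
print(f"95th pct P(pathway) : {np.quantile(p_path, 0.95):.4f}")
print(f"mean expected harm  : {expected_harm.mean():.3f}")
```

Even in this toy form, the scenario supplies the multiplicative structure while the estimation step supplies calibrated, uncertainty-bearing inputs; neither is useful alone.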

This paper touches on three nested questions. The outer question is governance: what risk management approach should society adopt for advanced AI? This includes questions about the roles of international bodies and national regulators, responsibility sharing with industry, transparency, and risk tolerance setting. The middle question asks how risk modeling should fit within that approach: what blend of deterministic and probabilistic requirements, and what concrete use of modeling outputs? The inner question is technical: how can risk modeling for advanced AI be done in practice? This paper focuses on the inner two: it discusses how to adapt classical scenario building and risk estimation tools to advanced AI, and it suggests one possible way to use risk modeling within risk management. It deliberately leaves final choices about institutional design and risk tolerance to policymakers, while making explicit the decisions they must settle.

On the technical question, we (i) translate foundational risk-modeling concepts to AI contexts; (ii) adapt scenario-building tools (FTA/ETA, FMEA/FMECA, STPA, bow-tie) to AI scenarios; (iii) review quantitative techniques (structured expert elicitation, Monte Carlo, Bayesian Networks, copulas) and show how to connect them to advanced AI scenarios; and (iv) survey emerging AI-specific practices and gaps. Two principles recur: integration over isolation—scenarios should be built to enable quantification, and quantification should respect scenario logic and dependencies; and rigor over impressionism—use structured elicitation with calibration and report uncertainty explicitly. We distinguish safety cases (argumentative assurance) from comprehensive scenario modeling and argue that risk models should feed—rather than be replaced by—safety cases. Given heavy tails, sparse data, and rapid change, modeling must be dynamic and iterative, updating with evaluations, incidents, and red-team results.
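As a small illustration of the dependency point, the sketch below compares the joint failure probability of two safeguard layers under an independence assumption versus under a Gaussian copula with positive correlation. The marginal failure probabilities and the correlation value are hypothetical, chosen only to show the effect.

```python
# Illustrative sketch: joint failure of two safeguards under independence
# vs. a Gaussian copula with positive correlation. Marginal failure
# probabilities and the latent correlation are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 1_000_000
p1, p2, rho = 0.05, 0.05, 0.6   # marginal failure probs, correlation

# Independence assumption: joint failure probability is just p1 * p2.
p_indep = p1 * p2

# Gaussian copula: correlated standard normals mapped to uniform marginals.
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=N)
u = norm.cdf(z)
joint_fail = (u[:, 0] < p1) & (u[:, 1] < p2)
p_copula = joint_fail.mean()

print(f"P(both fail), independent : {p_indep:.5f}")
print(f"P(both fail), copula      : {p_copula:.5f}")  # markedly higher
```

With positively correlated failures, the joint probability comes out several times the independence estimate, which is precisely the structure that naive per-component quantification misses.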

On the question of how risk modeling should fit into advanced AI risk management, our survey of five industries (nuclear, aviation, cybersecurity, finance, submarine operations) yields two lessons. First, mature sectors often mandate modeling aligned with international standards; for AI, unresolved governance choices include who models, who audits, how results are shared, and which international bodies set norms. We illustrate one coherent option: regulators mandate scenario-based, dependency-aware modeling by developers; independent experts audit; and regulators compare outputs to predefined risk tolerance thresholds during deployment certification. The second lesson is that every sector blends probabilistic and deterministic elements, and we argue that AI should do the same to meet safety-critical norms. Yet AI's intrinsic opacity hinders strong deterministic assurances, motivating investment in verifiable AI safety (provable components, interpretable mechanisms) to enable hard guarantees for the highest-severity risks.
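To make the certification step of that option concrete, a simplified, hypothetical regulator-side check might compare posterior samples of modeled risk against a predefined tolerance threshold. The threshold value, assurance level, and sampled distribution below are all illustrative assumptions, not proposals from the paper.

```python
# Illustrative sketch: compare a modeled annual probability of a
# catastrophic outcome against a predefined societal tolerance threshold.
# The threshold, assurance level, and modeled samples are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Posterior samples of the annual catastrophe probability, standing in
# for the output of a full scenario-based risk model.
p_catastrophe = rng.beta(2, 1000, size=50_000)

TOLERANCE = 1e-3   # hypothetical regulator-set threshold
ASSURANCE = 0.95   # required confidence that risk is below it

below = (p_catastrophe < TOLERANCE).mean()
verdict = "certify" if below >= ASSURANCE else "withhold certification"
print(f"P(risk < tolerance) = {below:.3f} -> {verdict}")
```

The point of such a check is that the subjective choice (the tolerance and assurance level) is made once, publicly, by the regulator, while the empirical work (the posterior) is supplied and updated by the developer's risk model.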

This paper’s original contributions include: (1) an operationalization of AI risk modeling as coupled causal pathways and dependency-aware estimation; (2) an adaptation of classical tools (FTA/ETA, STPA; expert elicitation, Monte Carlo, Bayesian networks, copulas) to AI; (3) a clarification of safety-case limits and of how explicit risk models should feed assurance; (4) a cross-industry map of modeling’s roles, from conservative design margins to best-estimate profiles; (5) a suggested governance-ready framing that links model outputs to tolerability thresholds; and (6) a case for research in verifiable AI safety to unlock deterministic guarantees despite black-box systems.

Future work should prioritize three directions.

  • First, we recommend further technical developments to sharpen advanced-AI risk methodology, including scalable, calibrated expert judgment; improved dependency and tail-risk methods; dynamic, iterative modeling with KRIs/KCIs (a minimal updating sketch follows this list); and validated mappings from lab capability evaluations to real-world risk.
  • Second, resolving the remaining subjective risk-management questions regarding responsibility sharing and risk tolerance is necessary to realize the safety benefits of risk modeling.
  • Third, research into provably safe AI models is needed to deliver the level of deterministic safety guarantees that is routine in other industries where technology is built from first principles.
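As a minimal illustration of the first direction, dynamic updating can be as simple as conjugate Beta-Binomial revision of one scenario-stage probability as new red-team or incident evidence arrives. The class name, prior, and trial counts below are hypothetical, not a method specified in the paper.

```python
# Illustrative sketch: Beta-Binomial updating of one scenario-stage
# probability as new evidence (e.g., red-team trials) arrives.
# Priors and trial counts are hypothetical.
from dataclasses import dataclass

@dataclass
class StageEstimate:
    alpha: float  # prior pseudo-counts: stage occurs
    beta: float   # prior pseudo-counts: stage blocked

    def update(self, occurred: int, blocked: int) -> None:
        """Fold in new trial outcomes (conjugate Beta-Binomial update)."""
        self.alpha += occurred
        self.beta += blocked

    @property
    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)

# Prior from structured elicitation; then two rounds of red-team results.
jailbreak = StageEstimate(alpha=2, beta=18)        # prior mean 0.10
jailbreak.update(occurred=7, blocked=43)           # eval round 1
jailbreak.update(occurred=3, blocked=97)           # eval round 2
print(f"updated P(stage) = {jailbreak.mean:.3f}")  # shrinks toward data
```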

Combined progress in these three strands of research would provide the stronger risk management apparatus that society expects for its most consequential technologies.


Related Content
  • 10.12.2025
  • Technical Report
Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse
Steve Barrett, Malcolm Murray, Otter Quarks, Matthew Smith, Jakub Krýs, Siméon Campos, Alejandro Tlaie Boria, Chloé Touzet, Sevan Hayrapet, Fred Heiding, Omer Nevo, Adam Swanda, Jair Aguirre, Asher Brass Gershovich, Eric Clay, Ryan Fetterman, Mario Fritz, Marc Juarez, Vasilios Mavroudis, Henry Papadatos
  • 10.12.2025
  • Paper
A Methodology for Quantitative AI Risk Modeling
Malcolm Murray, Steve Barrett, Henry Papadatos, Otter Quarks, Matt Smith, Alejandro Tlaie Boria, Chloé Touzet, Siméon Campos