The Case for European Investment in High-Risk, High-Reward AI Reliability Research

Publication date
March 24, 2026
Authors
Chloé Touzet, Lily Stelling, Bruno Galizzi
Abstract

This paper outlines the economic and strategic rationale for European investment, diagnoses the market failure that justifies public intervention, identifies the institutional design features that would maximise the likelihood of success, and explains why existing research and innovation instruments are insufficient for this purpose.


Europe faces a structural challenge in AI adoption. A substantial share of its economy is concentrated in safety-critical industries, such as aerospace, rail, and energy, where current AI systems cannot be deployed because they lack the formal reliability guarantees that safety practices and certification require. Safety-critical sectors account for 10.2% of gross value added in the EU, compared to 8.3% in the United States, and European listed corporate champions are nearly twice as concentrated in these sectors as their American counterparts.

This paper argues that Europe should seize a rare window of opportunity by investing in high-risk, high-reward research into verifiably reliable AI through an ARPA-like institution, potentially as the core mission of the Frontier AI Initiative. This research would aim to produce ex-ante formal guarantees that AI systems will behave within specified bounds, enabling deployment in industries where the alternative to reliable AI is no AI deployment at all.

AI reliability research is a comparatively low-cost bet with outsized potential returns. The investment required is modest relative to the scaling-focused expenditures of frontier AI companies. An initial research programme could begin at approximately €65 million per year, gradually ramping up to €1.8 billion per year, compared to the $100–200 billion that individual hyperscalers spend annually on compute infrastructure alone.

The case for European leadership rests on three pillars. First, the economic benefits are substantial: reliability guarantees would unlock AI adoption across Europe’s industrial champions. Second, the strategic benefits include reduced dependence on foreign AI providers in sovereignty-sensitive sectors, a powerful talent repatriation lever, and a leadership position in an emerging and potentially indispensable layer of the AI value chain. Third, a market failure in AI reliability research, driven by competitive dynamics, investor incentives, and sunk-cost lock-in among frontier providers, means that the private sector is structurally unlikely to supply this research at the necessary scale, creating an opening for public investment.

