Lessons from Our European Parliament Roundtable on Reliable AI

Publication date
April 22, 2026

SaferAI and Pour Demain brought together around 40 participants at the European Parliament on 23 March 2026 for a roundtable on Moonshots in Reliable, Safe & Secure AI: A Path to European Leverage and Strategic AI Adoption. MEPs Michael McNamara (Renew Europe) and Sergey Lagodinsky (Greens/EFA) hosted the meeting under the Chatham House Rule. The room included parliamentarians, EU officials, national government representatives, industry executives, and researchers. Turing Award winner Yoshua Bengio joined via video message.

We organized the event to test a thesis we’ve been developing for months: that Europe’s best path to AI competitiveness runs through reliability, safety, and security (RSS), and that getting there requires coordinated, ambitious public investment. What we heard confirmed the thesis and sharpened it considerably.

The problem everyone agrees on

Hundreds of billions of dollars are going into scaling AI infrastructure globally. Yet today’s most powerful AI systems still can’t provide the three guarantees high-stakes deployment requires: consistent performance under specified conditions (reliability), avoidance of unintended harm (safety), and protection against intrusion, tampering, and disruption (security).

Europe feels this acutely. Companies like Airbus, Thales, Alstom, Siemens, and Philips operate in sectors where failure is measured in lives as well as lost earnings. One participant with decades of experience in aviation safety put it plainly: no aircraft carries passengers unless it comes with a legally binding safety case. The US Federal Aviation Administration certifies catastrophic failure conditions at probabilities on the order of one in a billion flight hours. Nothing comparable exists for AI. The gap between what current AI systems can demonstrate and what safety-critical industries actually require is enormous, and closing it is a hard technical challenge.

European companies want to use AI but can’t get the reliability and safety guarantees they need. Providers recognize this but argue that verification is expensive. The result is a market failure: the scaling race leaves frontier providers with little incentive to fund the foundational research that would make AI systems reliably safe and secure.

A strategic reframing

An important idea to come out of the discussion was a distinction between model-level and system-level safety. Some researchers argued against focusing exclusively on building safer and more reliable LLMs — the difficulty of verifying properties in large models makes this a fragile bet, and pursuing it alone would tie Europe’s fortunes to the frontier model race. Europe should continue investing in compute, but an important and neglected opportunity is in verified system-level architectures around AI models.

Some participants focus on building verified system-level architectures around the model. Think of the LLM as an engine. The research challenge is designing the fuselage: systems where if the underlying model fails, the overall system fails safely. Formal specifications define what the system should and should not do in a given deployment context. The underlying model — whether European, American, or Chinese — becomes a replaceable component within a verified scaffolding.
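To make the engine-and-fuselage idea concrete, here is a minimal illustrative sketch, ours rather than anything presented at the roundtable: a guard wraps any model as a swappable component, checks its outputs against declarative specifications, and falls back safely on a violation instead of passing unverified output through. All names here (`Spec`, `guarded_answer`, `toy_model`) are hypothetical, and real verified scaffolding would rest on formal methods far beyond runtime predicate checks.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a "fuselage" around a replaceable model "engine".
# A Spec is a named predicate the system's output must satisfy in a given
# deployment context; any violation triggers a fail-safe fallback.

@dataclass
class Spec:
    name: str
    check: Callable[[str], bool]  # True iff the output satisfies the property

def guarded_answer(model: Callable[[str], str],
                   prompt: str,
                   specs: list[Spec],
                   fallback: str = "DEFER_TO_HUMAN") -> str:
    """Run any model behind the scaffold: outputs that violate the
    specification never reach the caller; the system fails safely."""
    output = model(prompt)
    for spec in specs:
        if not spec.check(output):
            return fallback  # fail-safe path instead of unverified output
    return output

# The model is a swappable component; here a trivial stand-in.
def toy_model(prompt: str) -> str:
    return "route train via track 7"

specs = [
    Spec("no_empty_output", lambda out: len(out.strip()) > 0),
    Spec("stays_in_domain", lambda out: "track" in out),
]

print(guarded_answer(toy_model, "plan the route", specs))
# A model whose output violates the spec is overridden by the guard:
print(guarded_answer(lambda p: "", "plan the route", specs))
```

The point of the sketch is the separation of concerns: the specification and the guard belong to the deployer and can be audited, while the underlying model (European, American, or Chinese) can be swapped without touching the safety argument.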

Some participants argued that Europe doesn’t need to win the frontier model race to lead on safe AI deployment; it needs to own the reliability and safety layer. And because the decisive inputs are world-class research talent and scientific ambition rather than massive compute clusters, this is genuinely achievable, provided Europe acts with urgency and focus.

What participants called for

Three sessions covered the guarantees required to unlock adoption in safety-critical industries, what research bets to make, and how to organize and fund the effort. A few themes kept surfacing across all three.

Reliable, safe, and secure AI is a competitive advantage. Above all, the event confirmed that stakeholders across policy, industry, and research converge on one message: reliability, safety, and security can be the foundation of a European competitive advantage in AI.

Foundational research, not incremental improvement. Multiple researchers argued that current AI development needs fundamental rethinking. Extending foundational mathematical theory to make verification tractable is where breakthrough speed-ups will come from. This calls for longer-term funding stability. Current EU funding cycles of three to five years are too short for this kind of deep fundamental work.

Focus over fragmentation. Investment in RSS AI is dispersed across disconnected national programs, underfunded academic labs, and isolated civil society efforts. Several participants were blunt: the European Competitiveness Fund risks spreading resources too thinly by trying to fund every technology domain. The same concern applies to the Frontier AI Initiative if its scope isn’t focused.

Human capital as a critical constraint. Participants characterized RSS as a human capital challenge. The expertise is scarce and scattered across Europe, and new funding models are needed that allow sustained, long-term investment in people, not just projects.

An ARPA-style institution. Broad support emerged for a dedicated European research institution built for speed, scientific risk-taking, and mission focus, with RSS as its central mandate. Participants drew parallels with early CERN, which started with just a few countries and grew. Key success factors: strong leadership, a focused mission, good IP frameworks, and sufficient resources. Estimates ranged from tens of millions to start, potentially scaling to billions across years of implementation, plus dedicated compute for research experiments.

Public procurement as a market signal. If governments buy trustworthy, homegrown AI solutions for the public sector, this validates the market for private adoption. One national representative reported that such projects are already underway.

The declaration and position paper

Following the roundtable, participants signed a Joint Declaration calling on the EU and its Member States to take three concrete steps: establish a dedicated European ARPA-style research institution for moonshot RSS AI research; integrate RSS priorities into existing European initiatives, including the Frontier AI Initiative and the European Competitiveness Fund; and commit dedicated, multi-year funding at scale — on the order of hundreds of millions of euros annually as a starting point.

Signatories include MEPs Michael McNamara, Sergey Lagodinsky, and Brando Benifei, alongside SaferAI’s Henry Papadatos, Patrick Stadler of Pour Demain, Yoshua Bengio of Mila, and representatives from LawZero, Bitkom, The Future Society, the Arq Foundation, TU Delft, the University of Oxford, and MIT, among others across research, industry, and civil society. A member of the French Senate also signed, adding to growing political support across multiple EU member states.

Alongside the declaration, we published a position paper, The Case for European Investment in High-Risk, High-Reward AI Reliability Research. It lays out the economic and strategic rationale in detail: according to MSCI regional index data, 48% of listed firms in the EU operate in safety-critical sectors, compared to 26% in the United States and 20% in China (see Appendix B of the paper). European corporate champions are disproportionately concentrated in industries where current AI systems simply can’t be deployed without formal reliability guarantees. The paper argues that a market failure in reliability research, driven by competitive dynamics, investor incentives, and sunk-cost lock-in among frontier providers, means the private sector won’t supply this research at scale, and that public investment through an ARPA-like institution could start at roughly €65 million per year and ramp to €1.8 billion, a fraction of what individual hyperscalers spend annually on compute alone.

Where we go from here

The Frontier AI Initiative’s scope and structure will be debated at an upcoming expert forum that gathers advice from a range of stakeholders in the field. The European Competitiveness Fund is being designed. Gigafactories are being planned. Each of these is a chance to embed RSS priorities from the ground up, or to let them become afterthoughts in yet another ill-focused funding program.

The roundtable confirmed that the coalition for this agenda exists and has weight behind it. Policymakers, industry leaders, and researchers converged on a shared assessment: reliable, safe, and secure AI is a competitive advantage, and Europe has a genuine window to lead. Whether that convergence translates into focused action is now the open question.

SaferAI will keep pushing. Our policy lead will join the upcoming Frontier AI Initiative expert forum, and we’ll continue engaging with the European Competitiveness Fund process, publishing further research, and working with partners across the European ecosystem to make sure the policy instruments being designed right now reflect the ambition and specificity this moment demands.

The Joint Declaration is available here.


