Why Europe Should Bet on Reliable AI Infrastructure

Publication date
November 21, 2025

Introduction

In August 2025, economics Nobel laureate Philippe Aghion called for the creation of a Franco-German ARPA and AI security institute. The timing couldn’t be better. As Europe rolls out its Apply AI and AI in Science strategies, it wrestles with a fundamental question: How can Europe compete in AI without needing to out-spend or out-scale the United States and China?

Here’s what we think: Europe doesn’t need to win that race. It needs a different kind of competitive advantage, one that plays to its strengths. Just as German cars became the gold standard for reliability, and Airbus built its reputation on rigorous safety testing, Europe can become the world leader in AI that is reliable, safe, and secure by design – AI that organisations can actually trust.

Some see security, safety and reliability as constraints that’ll slow Europe down. This misunderstands both the technology trajectory and the market opportunity.

Within the current AI ecosystem, Europe can address a concrete market problem: right now, many industries that could benefit greatly from AI – healthcare systems, financial institutions, critical infrastructure operators – can’t fully adopt it. They need reliability, safety, and security (RSS) guarantees that current systems cannot provide. Europe could own this market by building a European offering across the AI stack around RSS principles: safe-by-design models that are transparent and interpretable, hardware with built-in verification mechanisms enabling real-time monitoring of AI behaviour, secure datacenters and compute for sensitive applications, and AI systems developed specifically for cyberdefense. From models to chips to infrastructure, each layer presents opportunities for European leadership.

The policy window is now open. The Commission just published its Apply AI and AI in Science strategies. Over the next few months, we’ll see major decisions: the gigafactories call, how RAISE (Resource for AI Science in Europe) gets structured, what the new Frontier AI Initiative actually does. These are the infrastructure decisions that will determine whether Europe can translate its AI ambitions into reality.

The challenge is immediate. Europe’s AI ecosystem remains fragmented. Talent and funding are spread across member states. Research efforts scatter rather than compound. Meanwhile, competitor stacks are built with considerable and concentrated resources. The longer we wait, the more difficult catching up becomes. We’ve seen this pattern with consumer technology, where companies like Google and Meta moved quickly and established dominance early. Now European alternatives struggle to compete in markets those companies defined. The infrastructure decisions happening right now in AI will shape the competitive landscape for decades. Europe has a short window of opportunity to establish its distinct position before the market consolidates around standards and systems built elsewhere.

For the past several months, we’ve been working with researchers, policymakers, and industry leaders on a vision for how Europe can seize this moment. We see three key pillars that could help build a coherent, world-leading ecosystem built around reliability, safety, and security: positioning the Frontier AI Initiative as Europe’s RSS champion, transforming RAISE into a true ARPA-style institute, and aligning Gigafactories as strategic infrastructure.

Our vision

Pillar 1: The Frontier AI Initiative as Europe’s RSS champion

The Apply AI strategy aims to strengthen European competitiveness and technological sovereignty by accelerating AI adoption, particularly among SMEs and in strategic sectors. Within this framework, it introduces a Frontier AI Initiative with a clear and compelling mandate: “ensure that European models with cutting-edge capabilities reinforce sovereignty and competitiveness in a trustworthy and human centric manner.” This could be the vehicle that makes Europe the world leader in AI reliability, safety, and security.

Why we’re optimistic:

The language in the strategy is promising. It calls for “sovereign frontier models”, ensuring “safety is embedded by design.” It emphasises “European strategic presence at the various layers of the AI stack” and explicitly mentions secure infrastructure and AI for cybersecurity.

This framing opens up space for something genuinely different. Instead of building European clones of existing frontier models, Europe could lead in developing frontier AI where reliability, safety, and security are core capabilities – not features added on later.

The strategic opportunity and what it requires:

The competitive advantage is clear: strategic industries are currently stalled on AI adoption. Space systems, defense, healthcare, finance, critical infrastructure – they all need AI but can’t deploy current systems without stronger safety guarantees. An AI stack certified for reliability, safety, and security would give Europe a global premium position. Think of it as the Airbus approach: don’t compete on being biggest or cheapest, compete on being the choice when reliability, safety and security matter most.

For the Frontier AI Initiative to get there, we believe it needs four things:

  1. Clear ownership and expert leadership. The EU AI Office should pilot this initiative with dedicated resources, while prominent external chairs lead specific workstreams. This approach, similar to how Mario Draghi’s visible leadership elevated the competitiveness report, combines bureaucratic coherence with expert credibility. The AI Office provides structure; the chairs bring momentum and buy-in.
  2. ARPA-style funding for moonshot RSS research. Right now, fundamental research in AI reliability and security gets a tiny fraction of overall AI investment. An ARPA-style institute – substantial staged funding, frequent evaluation, lean paperwork – could rapidly advance the field. We’re already building a research coalition on safe-by-design AI with prominent international researchers, proving there’s both demand and capability.
  3. Multi-stakeholder convening that includes the demand side. The initiative must extend beyond AI developers and researchers to include representatives from safety-critical industries, specialized startups, and civil society. Understanding what guarantees these actors actually need would clarify the research agenda.
  4. Systematic bottleneck identification. Map the obstacles to building a reliable, safe, secure European AI stack – then propose concrete policy fixes using existing tools like AI Factories, European Digital Innovation Hubs (EDIHs), RAISE, and Gigafactories. With buy-in from AI deployers in key industries, this allows the Initiative to move quickly, consistently addressing bottlenecks.

Pillar 2: RAISE as an ARPA-Style institute

The AI in Science strategy lays out an ambitious vision for RAISE – pooling Europe’s talent, compute, data, and funding into one Resource for AI Science in Europe. It’s also realistic about the challenges, chief among them the fragmentation of resources, limited access to computational power, and fierce global competition for AI talent.

What’s promising in RAISE as it currently stands:

The Commission’s learned from past mistakes. They’re explicit that “network and coordination” approaches haven’t been sufficient, and that RAISE needs to “reduce fragmentation and better align research efforts” while actually attracting top talent. The funding behind it is substantial: up to €600 million from Horizon Europe for the pilot, with guaranteed access to AI Factories and Gigafactories for EU-funded research. Critically, they’re leaving the door open for RAISE to develop genuine institutional weight rather than be merely a “virtual institute”.

Where we still need clarity:

The current governance structure – involving Thematic Networks of Excellence, a secretariat established through Coordination and Support Actions, connection to the AI Board, representation from member states and private sector, a high-level academic advisory board, collaboration between the EU AI Office and Joint Research Centre – has too many moving parts. History suggests that when decision-making is fragmented across so many stakeholders, bold initiatives struggle to gain momentum.

Breakthroughs come from clear ownership. The US DARPA’s success came from empowered program managers with the authority to make bold bets, set clear milestones, and adjust course quickly based on results. They had substantial funding, frequent evaluation, and minimal bureaucracy. That model’s efficiency has been proven consistently, from DARPA to more recent adaptations worldwide, including Germany’s SPRIN-D.

For RAISE to actually work, we believe it needs three things:

  1. Program managers hired directly by RAISE, with authority to shape research agendas and make funding decisions rather than working through elaborate committees.
  2. A secretariat with executive authority. This will determine whether RAISE becomes a real institute or remains a virtual network. It will need full authority to manage the budget and lead on strategy, rather than simply coordinating between partners.
  3. Lean governance focused on accountability. The “European Network of Frontier AI Labs” could be transformative – a curated set of excellent teams with clear ownership. If the structure becomes too complex or includes too many stakeholders, the initiative will become slow and ineffective.

All the pieces are there. The question is whether implementation embraces the ARPA philosophy (“fund people, not projects” and “empower managers to take risks”), or defaults to the familiar but less effective patterns of fragmented European research funding.

Pillar 3: Gigafactories as part of a coherent strategy

The AI Factories announced in 2023 represented important infrastructure investment. The upcoming Gigafactories – at larger scale – could be even more significant. But potential isn’t the same as impact. For Gigafactories to truly contribute to European AI competitiveness, they need to be conceived as part of a coherent strategy built around Europe’s RSS advantage.

The current gap:

The Apply AI and AI in Science strategies treat Gigafactories primarily as computational resources that RAISE and other initiatives will access. This is important, but it understates the opportunity. Gigafactories could be the cornerstone of Europe’s RSS advantage, advancing the goal of building reliable, safe, and secure AI at every level of the stack.

Consider the full stack required for AI RSS:

  • Safe-by-design models that are transparent, auditable, and aligned with specified objectives;
  • Hardware-enabled safety mechanisms – chips designed to enable verification and monitoring of AI system behaviour;
  • Secure compute environments – datacenters with provable trust infrastructure and protection against unauthorised access or manipulation.

Gigafactories touch all three layers. They’ll train models and host experiments, influence hardware choices through procurement, and physically host the compute environments where Europe’s most critical AI systems run. Each of these is a chance to embed RSS principles from day one.

What’s needed from the Gigafactories call and selection:

The upcoming Gigafactories call should make RSS alignment explicit:

  1. Call design should reflect the broader vision. Proposals need to show not just computational capacity, but how their architecture, hardware choices, and operations could advance European leadership in AI RSS.
  2. Member states should think European, not just national. Proposals should show how they complement other member states’ capabilities and fit into the broader ecosystem.
  3. Security and verification should be core design principles. The AI in Science strategy talks about research security and “safeguards against unwanted technology transfer.” Gigafactories should be secure infrastructure where sensitive research, defense applications, and model inference on confidential intellectual property can all happen with appropriate protections.
  4. Hardware choices that enable safety. As Gigafactories procure chips and infrastructure, they should favour hardware that supports monitoring, auditing, and verification. This creates market demand that pulls hardware manufacturers toward RSS-enabling features and incentivises research and development in that direction.

The Gigafactories represent billions in infrastructure investment. That level of commitment allows Europe to aim high and establish itself as the global leader in trustworthy AI infrastructure. Such a competitive advantage only grows as AI systems become more powerful and integrate into more consequential decisions.

Why this creates competitive advantage

As AI gets deployed in higher-stakes contexts, demand for reliability guarantees will intensify. Hospitals can’t use diagnostic AI without proof it works across patient populations and keeps confidential data secure. Aerospace can’t put AI in flight systems without extensive reliability verification. Banks need to understand how AI trading systems behave under stress. Defense needs AI that works predictably in adversarial environments and infrastructure that protects against intellectual property theft.

These aren’t niche markets: they represent trillions in economic value and some of Europe’s strongest industrial sectors. And right now, these sectors face an adoption bottleneck. Frontier AI developers are building impressive systems, but there’s a blind spot: these systems weren’t designed with the verification and reliability properties that safety-critical applications require. The focus has been on capability and performance, not on the trustworthiness guarantees that would enable deployment in high-stakes contexts.

Europe can own this. An AI stack certified for reliability, safety, and security would capture high-value markets where trust isn’t optional, winning customers who cannot afford unreliable AI. Organisations handling sensitive data, running critical infrastructure, or operating in high-stakes environments need verification and auditing capabilities before they can adopt AI. Europe has built its reputation on exactly these qualities in other technology sectors. It just needs to apply that approach to AI.

What happens next (the decisions that matter)

Over the coming months, specific decisions will determine whether Europe actually leads in AI RSS, or whether these strategies remain well-intentioned but scattered.

The Gigafactories call is being drafted now – will it treat AI infrastructure as generic compute or as a strategic RSS asset?

RAISE’s governance is taking shape – will it embrace the ARPA model or become another coordination mechanism?

The Frontier AI Initiative’s scope is being defined – will it get clear Commission ownership or get distributed across competing bureaucracies?

Since May, SaferAI has been convening researchers, engaging policymakers, and building a coalition. The pieces are in place across the European ecosystem: aligned strategies, committed funding, and recognition of the problems. What we need now is decisive implementation – clear ownership, empowered program managers, and a coherent strategy across instruments. Europe has done this before, in aerospace and automotive, building infrastructure the world trusts. AI is next. The window is open. What matters now is execution.

