SaferAI selected to support EU AI Act implementation through CBRN risk assessment

Publication date
February 2, 2026

We’re pleased to announce that SaferAI has been selected as part of a consortium by the European Commission’s AI Office to provide technical assistance on assessing risks posed by GPAI models at EU level.

Our consortium won Lot 1 of the “Technical Assistance for AI Safety” tender (EC-CNECT/2025/OP/0032), which covers CBRN risk assessment and therefore addresses one of the most pressing questions in AI governance: how might advanced AI systems lower barriers to chemical, biological, radiological, and nuclear threats?

Within this consortium, SaferAI will lead risk modeling and monitoring efforts. Other partners include FAR.AI, SecureBio, GovAI, Nemesys Insights, and Equistamp.

The EU AI Act and CBRN risk assessment

The EU AI Act entered into force on August 1, 2024, creating the world’s most comprehensive regulatory framework for AI. Under the Act, the Commission’s AI Office must evaluate general-purpose AI models classified as posing systemic risk, including their potential to enable CBRN threats.

Strong technical expertise is paramount for effective enforcement of the Act, and this tender will play an important role in building that expertise.

Our role

SaferAI will lead two core workstreams over the next three years:

Risk Modeling: We’ll create risk models and facilitate risk modeling workshops with the AI Office to develop frameworks for understanding CBRN threats from advanced AI. This includes identifying specific threat scenarios, establishing risk thresholds, and creating structured approaches that will inform how models are evaluated.

Risk Monitoring: AI capabilities, and with them risks, change rapidly. We’ll track emerging developments and provide regular briefings to the Commission on new models, evolving risk sources, novel elicitation techniques, mitigation approaches, and relevant incidents in the field.


About SaferAI

SaferAI is a nonprofit research organization working to understand and reduce risks from advanced AI systems. We conduct technical research on AI evaluation methods, study how AI capabilities translate to real-world risks, and work with policymakers and industry to develop practical approaches to AI risk management.

