Developing the Risk Management Infrastructure for Foundation Models


We’re building the governance frameworks needed to develop and deploy general-purpose AI systems safely.

We’re writing AI risk management standards at CEN-CENELEC JTC 21, the European standardization committee drafting the technical standards that implement the EU AI Act.

With a focus on large language models and general-purpose AI systems, we want to ensure the EU AI Act covers all the important risks these systems pose.

This is a first step toward a sound AI auditing ecosystem, in Europe and worldwide.



We’re evaluating developers and deployers of AI systems

Distinguish responsible AI development from irresponsible AI development

Identify the risks and uncertainties of these systems, and the best existing practices for managing them.



We’ll develop and compile tests and tools for large language models, including:

Benchmarks to test compliance with laws and standards such as the EU AI Act or ISO/IEC 42001

Red-teaming techniques to ensure your model is robust against adversaries

Governance procedures to help you manage the risks that AI brings
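To make the benchmarking idea concrete, here is a minimal sketch of a compliance-benchmark harness. Everything in it is illustrative: the `model` callable, the refusal heuristic, and the prompts are hypothetical stand-ins, not part of any standard or of our actual tooling.

```python
# Minimal sketch of a compliance benchmark: given a set of prompts the
# model is expected to refuse, measure the fraction it actually refuses.
# The refusal heuristic and the stub model below are illustrative only.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")

def is_refusal(response: str) -> bool:
    """Heuristic check that the model declined a disallowed request."""
    return response.lower().startswith(REFUSAL_MARKERS)

def run_benchmark(model, disallowed_prompts) -> float:
    """Return the fraction of disallowed prompts the model refuses."""
    refusals = sum(is_refusal(model(p)) for p in disallowed_prompts)
    return refusals / len(disallowed_prompts)

# Usage with a stub model that refuses everything:
stub_model = lambda prompt: "I cannot help with that request."
score = run_benchmark(stub_model, ["disallowed prompt A", "disallowed prompt B"])
print(score)  # 1.0
```

A real harness would replace the string heuristic with calibrated classifiers and map each prompt set to a specific legal or standards requirement, but the shape of the evaluation loop is the same.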


Latest news

Is OpenAI's Preparedness Framework better than its competitors' "Responsible Scaling Policies"? A Comparative Analysis
January 19, 2024

Following the release of the ill-named Responsible Scaling Policy (RSP) by its rival Anthropic, OpenAI has just released its Preparedness Framework (PF), which fulfills the same role. How do the two compare?

RSPs Are Risk Management Done Wrong
October 25, 2023

We compare Anthropic's "Responsible Scaling Policy" with the risk management standard ISO 31000, identify gaps and weaknesses, and propose pragmatic improvements to the RSP.

Learn More On AI Risks
Contact Us