We’re writing AI risk management standards at JTC 21, the standardization body responsible for drafting the technical standards that implement the EU AI Act.
With a focus on large language models and general-purpose AI systems, we want to make sure the EU AI Act covers all the important risks arising from those systems.
This is a first step toward the development of a sound AI auditing ecosystem in Europe and worldwide.
Distinguish responsible AI from irresponsible AI
Discover the risks of these systems, the uncertainties, and the best existing practices to manage them.
Benchmarks to test compliance with laws and standards such as the EU AI Act or ISO/IEC 42001
Red-teaming techniques to ensure your model is robust to adversaries
Governance procedures to help you manage the risks that AI brings
OpenAI has just released its Preparedness Framework (PF), which fills the same role as the ill-named Responsible Scaling Policy (RSP) developed by its rival Anthropic. How do the two compare?
We compare Anthropic's "Responsible Scaling Policy" with the risk management standard ISO 31000, identify gaps and weaknesses, and propose some pragmatic improvements to the RSP.