Focusing on large language models and general-purpose AI systems, we work to ensure the EU AI Act covers the key risks these systems pose. We are drafting AI risk management standards at JTC 21, the body responsible for writing the technical standards that implement the EU AI Act.
We are also doing comparable work at the US NIST AI Safety Institute Consortium (AISIC) and in the OECD taskforce supporting the G7 Hiroshima Process.
We rate frontier AI companies' risk management practices.
Our objective is to strengthen the accountability of the private actors shaping AI as they develop and deploy their systems.
You can find our website with the complete results here.
We are conducting research on AI risk management, applying established practices from other domains to AI. Our current focus is quantitative risk assessment (QRA), i.e., quantifying the likelihood and severity of potential harmful events.
We are therefore developing this methodology and applying it to harms enabled by cyberoffensive LLM capabilities.
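To illustrate what QRA means in practice, here is a minimal sketch: each harm scenario gets an estimated annual likelihood and a severity, which combine into an expected annual loss. The scenarios, probabilities, and severity figures below are hypothetical placeholders for illustration, not results of our assessments.

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    """One potential harmful event with quantified likelihood and severity."""
    name: str
    annual_probability: float  # estimated probability of occurring in a year
    severity: float            # estimated harm if it occurs (e.g., in dollars)

    @property
    def expected_loss(self) -> float:
        # Expected annual loss contributed by this scenario.
        return self.annual_probability * self.severity


def total_expected_loss(scenarios: list[Scenario]) -> float:
    """Aggregate expected annual loss across independent scenarios."""
    return sum(s.expected_loss for s in scenarios)


# Hypothetical cyberoffensive-LLM harm scenarios, for illustration only.
scenarios = [
    Scenario("LLM-assisted phishing campaign", 0.30, 2_000_000),
    Scenario("LLM-generated malware variant", 0.05, 10_000_000),
]

# 0.30 * 2e6 + 0.05 * 1e7 = 1.1 million expected annual loss
print(total_expected_loss(scenarios))
```

A real assessment would replace the point estimates with distributions and propagate uncertainty, but the likelihood-times-severity structure is the core of the approach.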