SaferAI joins the US AI Safety Institute Consortium (NIST)!
Advancing AI risk management to safeguard our future

Standards

We’re writing AI risk management standards at JTC 21, the body responsible for drafting the technical standards that implement the EU AI Act.

With a focus on large language models and general-purpose AI systems, we want to make sure the EU AI Act covers all the important risks arising from those systems.


Evaluation

We evaluate AI system developers and deployers to help people distinguish responsible AI practices from irresponsible ones.

Our goal is to uncover the risks and uncertainties of these systems, and the best existing practices to manage them.


Outreach

We hold workshops and discussions with key AI actors and policymakers to build consensus and better understand how to foster safe and trustworthy AI.

Through these discussions, we aim to uncover what experts consider the most pressing AI risks, identify possible solutions, and translate these learnings into policy.

Latest news