Developing the Auditing Infrastructure for General Purpose AI Systems

Governing

We’re developing the governance ideas needed to develop and deploy general-purpose AI systems safely.

We’re writing AI risk management standards at JTC 21, the European standardization committee in charge of drafting the technical standards implementing the EU AI Act.

With a focus on large language models and general-purpose AI systems, we want to make sure the EU AI Act covers all the important risks arising from those systems.

This is a first step toward a sound AI auditing ecosystem in Europe and worldwide.


Consulting

We’re offering consulting services on large language models & their governance:

Improve your understanding of general-purpose AI and large language models.

Discover the risks of these systems, the uncertainties, and the best existing practices to manage them.


Auditing

We’ll develop and compile tests and tools on large language models that involve:

Benchmarks to test capabilities that may be important to track (e.g. generality or hacking abilities).

Red teaming techniques, to improve our ability to detect new failure modes and capabilities in models.

Interpretability tools, to understand how models make decisions and whether they’re actually reasoning the way we think they do.


Latest news

Geoffrey Hinton Voices His Concerns About Existential Risks
May 25, 2023

"There are very few examples of a more intelligent thing being controlled by a less intelligent thing." "It's not clear to me that we can solve this problem." Who said that? None other than Geoffrey Hinton, the godfather of AI, after he left Google to voice his concerns about AI existential risks.

Slowing Down AI: Rationales, Proposals, and Difficulties
May 25, 2023

Our world is one where AI advances at breakneck speed, leaving society scrambling to catch up. This has sparked discussions about slowing AI development. We explore this idea, delving into the reasons why society might want a slowdown in its policy toolbox. These include preventing a race to the bottom, giving society a moment to adapt, and mitigating some of the more worrisome risks that AI poses.

Learn More On AI Risks
Contact Us