General-Purpose AI Systems
DESCRIPTION
In this post, we work towards a qualitative definition that regulators could use to target the most dangerous AI systems in the EU, China, and the US.
When
March 2023
Who
Simeon Campos

In AI, 99% of the risk comes from 1% of the AI systems.

This means we can target all our AI risk management efforts and regulation at a very narrow set of systems: GPTs, Claude, and the other scaled general-purpose AIs.

Our aim is to provide a precise definition that enables regulators to maximize benefits by mitigating risks, and to minimize costs by remaining targeted.

This is why we are proud to announce the publication of our latest paper, "A Definition of General-Purpose AI Systems: Mitigating Risks from the Most Generally Capable Models" by Siméon Campos & Romain Laurent.

🔍 In this paper we offer a new definition that:
1. Clearly differentiates between narrow and general systems
2. Prevents GPAIS providers from exploiting it to dodge regulatory constraints

📑 Our paper is divided into two sections:
1. Analysis of specific risks of GPAIS (unpredictability, adaptability, and emergent capabilities)
2. Presentation of the new definition of GPAIS

We're working towards proposing a qualitative definition that regulators could use to target the most dangerous AI systems in the EU, China, or the US. In the future, such a definition will need to be turned into a quantitative one.
