How Can Nuclear Safety Inform AI Safety?
DESCRIPTION
As general-purpose AI systems (GPAIS) and foundation models surpass human abilities in diverse tasks and become more integrated into our daily lives, their potential risks grow, underscoring a pressing need for a robust regulatory framework at the international level. Drawing inspiration from the nuclear power industry, we explore lessons from nuclear safety and the International Atomic Energy Agency (IAEA) to inform the development and deployment of GPAIS.
When: September 2023
Who: Simeon Campos & James Gealy

As the growth in capabilities of general-purpose AI systems (GPAIS) and foundation models continues to accelerate, the risks from these systems will increase in lockstep. With GPAIS already outmatching humans in many domains, it is prudent to expedite sensible regulation by drawing on expertise and experience from other high-risk industries. This paper reviews the hard-won safety lessons of the nuclear power industry and identifies those most actionable and applicable to GPAIS regulation.

To begin, the formation of the International Atomic Energy Agency (IAEA) and its role in coordinating and improving nuclear safety were not inevitable at the dawn of the nuclear age. International coalition building began with the sharing of safety practices among a small group of nations and grew steadily; a similar approach can be taken with GPAIS safety. In addition, developing safety standards at the international level may grant the process a degree of independence, helping to avoid safety being compromised by individual countries writing standards that favour only their strategic interests.

The safe development and deployment of highly capable GPAIS requires that multiple best practices be implemented. As in the nuclear power industry, the foremost of these is a strong organisational safety culture among providers. It is therefore concerning that the Silicon Valley start-up mantra of "move fast and break things" currently drives the paradigm of scaling GPAIS without limit. A strong safety culture should be the top priority of the leadership of GPAIS providers, as it has been key to reducing risks from nuclear power.

As the experience of the nuclear power industry has also shown, regulatory outcomes can be improved and innovation encouraged through the graded approach and performance-based regulation. Performance-based regulation is a promising basis for the core regulatory framework for AI because it sets safety standards without dictating the specifics of implementation, encouraging innovation in how safety requirements are met without imposing excessive burden. Similarly, the graded approach, under which the amount of safety scrutiny depends on the severity of a potential failure, should be applied in order to reduce regulatory overhead.
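To make the graded approach concrete, here is a minimal sketch in Python of how review requirements might scale with the severity of a potential failure. The tier names, severity scores, and requirements are hypothetical illustrations, not drawn from the paper or from any real regulation.

```python
# Illustrative sketch only: one way a graded approach could tier review
# effort to the potential severity of a failure. Tier names, thresholds,
# and requirements below are hypothetical, not from any real framework.

REVIEW_TIERS = {
    "minimal":  {"max_severity": 1, "requirements": ["self-assessment"]},
    "standard": {"max_severity": 3, "requirements": ["third-party audit"]},
    "enhanced": {"max_severity": 5, "requirements": ["third-party audit",
                                                     "regulator pre-approval"]},
}

def required_reviews(severity: int) -> list[str]:
    """Return the review requirements for a failure-severity score (1-5)."""
    for tier in REVIEW_TIERS.values():
        if severity <= tier["max_severity"]:
            return tier["requirements"]
    raise ValueError("severity score out of range")

print(required_reviews(2))  # ['third-party audit']
```

The point of the sketch is that scrutiny scales with stakes: low-severity systems face light obligations, while the most dangerous failure modes trigger the heaviest review, keeping regulatory overhead proportionate.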

Principles from nuclear safety that should be considered for the development and deployment of GPAIS include post-accident investigations, through which safety practices undergo continuous improvement, and Probabilistic Risk Assessment (PRA), which evaluates the probability of negative outcomes in a piecemeal fashion and could reduce the likelihood that small failures combine into severe problems. Other principles include safety margins (an essential part of safety in complex systems, and one that may require the development of new GPAIS architectures beyond transformers) and defence in depth, which ensures that no single failure or error can lead to an accident. Applying these principles to AI safety would improve safety practices over time.
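As a concrete illustration of how PRA and defence in depth interact, here is a toy fault-tree calculation in Python. The layer structure and all probabilities are made-up placeholders, not estimates for any real AI or nuclear system.

```python
# Toy sketch (illustrative only): a minimal fault-tree style Probabilistic
# Risk Assessment, showing how defence in depth drives down the chance
# that independent safety layers all fail at once. All probabilities
# below are hypothetical placeholders.

def and_gate(probs):
    """P(all events occur), assuming the events are independent."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """P(at least one event occurs), assuming independence."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

# Hypothetical per-demand failure probabilities for three layers of defence.
p_layer_fails = [1e-2, 5e-3, 1e-3]

# Defence in depth: an accident requires every layer to fail (AND gate).
p_accident = and_gate(p_layer_fails)  # 5e-08 with these placeholder numbers

# A single initiating event can arise from several causes (OR gate).
p_initiator = or_gate([1e-3, 2e-3])

print(f"P(all defence layers fail): {p_accident:.2e}")
print(f"P(initiating event):        {p_initiator:.2e}")
```

Decomposing risk this way is what "piecemeal" evaluation means in practice: each layer's failure probability can be estimated and improved separately, and the AND-gate structure shows why adding independent layers multiplies down the overall accident probability.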

With the capabilities of GPAIS increasing at an ever faster pace, we should look for practical ways to reduce the time to implement sensible safety measures that will reduce the associated risks. Leveraging our experience in the field of nuclear safety is likely one of the best ways to do so.

Read the complete document here.
