“Development of Superhuman Machine Intelligence (SMI) is probably the greatest threat to the continued existence of humanity,” says Sam Altman, CEO of OpenAI. He argues that SMI is more dangerous than even the worst imaginable viruses.
Despite this, the labs explicitly attempting to build such AI systems do not consistently meet basic safety standards. Scaling large language models has yielded systems showing “sparks of artificial general intelligence,” yet the risks these systems pose remain unmanageable.
As a member of the European standardization working group (CEN-CENELEC) responsible for developing AI risk management standards for general-purpose AI systems, I believe it would be a disservice to recommend “countermeasures” when we have neither guarantees about nor a real understanding of their effects. One thing is certain: current and next-generation general-purpose AI systems must satisfy basic safety and risk management properties.