SaferAI joins the US AI Safety Institute Consortium (NIST)!
Is OpenAI's Preparedness Framework better than its competitors' "Responsible Scaling Policies"? A Comparative Analysis
Echoing the release of the ill-named Responsible Scaling Policies (RSPs) developed by its rival Anthropic, OpenAI has just released its Preparedness Framework (PF), which fulfills the same role. How do the two compare?
January 19, 2024
RSPs Are Risk Management Done Wrong
We compare Anthropic's "Responsible Scaling Policy" with the risk management standard ISO/IEC 31000, identify gaps and weaknesses, and propose some pragmatic improvements to the RSP.
October 25, 2023
SaferAI OECD Post: Basic Safety Requirements for AI Risk Management
There are three basic criteria that I think will make AI risks manageable. For good risk management, models need to be interpretable, boundable, and corrigible.
July 5, 2023
Slowing Down AI: Rationales, Proposals, and Difficulties
Our world is one where AI advances at breakneck speed, leaving society scrambling to catch up. This has sparked discussions about slowing AI development. We explore this idea, delving into the reasons why society might want to have a slowdown option in its policy toolbox. These include preventing a race to the bottom, giving society a moment to adapt, and mitigating some of the more worrisome risks that AI poses.
May 31, 2023
Geoffrey Hinton Voices His Concerns About Existential Risks
"There are very few examples of a more intelligent thing being controlled by a less intelligent thing." "It's not clear to me that we can solve this problem."‍ Who said that? No one else than Geoffrey Hinton, the godfather of AI, after he left Google to voice his concerns about AI existential risks.
May 25, 2023