This report analyses the commonalities and differences between the G7 Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems—herein referred to as the Hiroshima AI Process Code of Conduct (CoC)—and the EU AI Act (“Act” or “AIA”) text on general-purpose AI (GPAI) models. SaferAI has been heavily involved in both the Code of Conduct Reporting Framework and the drafting of the AI Act’s Code of Practice for GPAI models, and we draw on that experience with both texts throughout this report.
There is substantial commonality between the texts, though each contains requirements or recommendations not found in the other. Their overlap can be pictured as a Venn diagram: approximately 30% of the points of comparison show high or complete commonality, 50% show moderate commonality, and 20% do not overlap, meaning a requirement or recommendation from one text has no counterpart in the other.
For example, on copyright and intellectual property, the Act specifically requires providers to comply with EU copyright law. With regard to public disclosure and reporting to regulators, the CoC calls for public reporting (e.g., Action 3), while the Act requires documentation to be provided to the AI Office upon request, as well as to organisations further downstream in the value chain. That said, many points are the same or very similar, such as risk assessment, risk mitigation, and cybersecurity.
The CoC Actions tend to be more detailed than the requirements in the Act’s Articles, giving specific examples and expectations. If the Act’s Recitals are included, however, the level of detail is more comparable to the CoC. The Act is more detailed in certain respects, such as the documentation and transparency requirements in its Annexes. Moreover, many of the CoC’s more detailed requirements can be inferred from the Act’s text (e.g., Action 1’s secure testing environments requirement can reasonably be inferred from the Act’s cybersecurity and evaluation requirements).
Our analysis rests on three assumptions. Firstly, we include the AI Act’s Recitals and the Article 56 requirements, given the additional detail they provide (e.g., 56(2)(d)). Secondly, all CoC “shoulds” are treated as mandatory, in the sense that we assume they are all fulfilled. Thirdly, the CoC’s “advanced AI systems” and the Act’s GPAI “models with systemic risk” are assumed to be equivalent.
The following table shows the number of points of comparison between the Code of Conduct and the EU AI Act, per Code of Conduct Action: