G7 Hiroshima AI Process Code of Conduct and EU AI Act GPAI - Commonality Analysis
ABSTRACT
This report contains an analysis of the commonalities and differences between the G7 Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems and the EU AI Act text on general-purpose AI models. There is substantial commonality between the texts, though each has requirements/recommendations not found in the other. In essence, their relationship can be pictured as a Venn diagram, with approximately 30% high or complete commonality, 50% moderate commonality, and 20% non-overlapping, where requirements or recommendations in one text have no counterpart in the other.
PUBLICATION DATE
26/05/2025
AUTHORS
James Gealy, Daniel Kossack

Introduction

This report contains an analysis of the commonalities and differences between the G7 Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems—herein referred to as the Hiroshima AI Process Code of Conduct (CoC)—and the EU AI Act (“Act” or “AIA”) text on general-purpose AI (GPAI) models. SaferAI has been heavily involved in both the Code of Conduct Reporting Framework and the drafting of the AI Act’s Code of Practice for GPAI models. We leveraged this experience with both the CoC and Act to produce the following report. 

There is substantial commonality between the texts, though each has requirements/recommendations not found in the other. In essence, their relationship can be pictured as a Venn diagram, with approximately 30% high or complete commonality, 50% moderate commonality, and 20% non-overlapping, where requirements or recommendations in one text have no counterpart in the other.

For example, regarding copyright and intellectual property law, the Act specifically requires providers to comply with EU copyright law. With regard to public disclosure and reporting to regulators, the CoC calls for public reporting (e.g., Action 3), while the Act requires documentation to be provided to the AI Office upon request, as well as to organisations further downstream in the value chain. That said, many points are the same or very similar, such as risk assessment, risk mitigation, and cybersecurity.

The CoC Actions tend to be more detailed than the requirements in the Act’s Articles, giving specific examples and expectations. If the Act’s Recitals are included, however, the level of detail is more comparable to the CoC’s. The Act is more detailed in certain respects, such as the documentation and transparency requirements in the Annexes. Moreover, many of the CoC’s more detailed requirements can be reasonably inferred from the Act’s text (e.g., Action 1’s secure testing environments requirement can be inferred from the Act’s cybersecurity and evaluation requirements).

Our analysis is based on three assumptions. Firstly, we include the AI Act’s Recitals and Article 56 requirements, given their additional detail (e.g., 56(2)(d)). Secondly, all CoC “shoulds” are treated as mandatory, in the sense that they are all assumed to be fulfilled. Thirdly, “advanced AI systems” and GPAI “models with systemic risk” are assumed to be equivalent.

Summary Table

The following table shows the number of points of comparison between the Code of Conduct and the EU AI Act, per Code of Conduct Action:

  • 86 points of comparison in total
  • 31% of the comparisons have high or complete commonality
  • Just over 80% have at least some commonality
| Subject of the Code of Conduct Action | Action | High or complete commonality | Some commonality | Little or no commonality | Total |
|---|---|---:|---:|---:|---:|
| General and Introduction | General and Introduction | 0 | 1 | 2 | 3 |
| Risk management and evaluations | Action 1 | 7 | 5 | 5 | 17 |
| Identify and mitigate vulnerabilities | Action 2 | 2 | 7 | 2 | 11 |
| Transparency and documentation | Action 3 | 1 | 7 | 0 | 8 |
| Incident reporting and information sharing | Action 4 | 3 | 7 | 2 | 12 |
| Risk management framework | Action 5 | 1 | 3 | 2 | 6 |
| Cybersecurity | Action 6 | 6 | 4 | 0 | 10 |
| Content Authentication and Provenance Mechanisms | Action 7 | 3 | 1 | 1 | 5 |
| Investments in Research and Mitigation Measures | Action 8 | 0 | 3 | 0 | 3 |
| Developing AI for the Benefit of the Public | Action 9 | 1 | 3 | 1 | 5 |
| Development and Adoption of Technical Standards | Action 10 | 2 | 0 | 0 | 2 |
| Data input measures & protections for personal data and intellectual property | Action 11 | 1 | 2 | 1 | 4 |
| Total | | 27 | 43 | 16 | 86 |
| Percentage | | 31.4% | 50% | 18.6% | |
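The percentages above follow directly from the column totals. As a quick sanity check (a minimal sketch, using only the totals reported in the table):

```python
# Column totals from the summary table:
# high/complete, some, and little/no commonality.
totals = {"high or complete": 27, "some": 43, "little or no": 16}

overall = sum(totals.values())
print(overall)  # 86 points of comparison in total

# Share of each commonality level, rounded to one decimal place.
for label, n in totals.items():
    print(f"{label}: {n / overall:.1%}")

# "At least some commonality" = high/complete + some.
at_least_some = (totals["high or complete"] + totals["some"]) / overall
print(f"at least some commonality: {at_least_some:.1%}")
```

This reproduces the 31.4% / 50.0% / 18.6% split, and shows that 81.4% of comparison points have at least some commonality, matching the "just over 80%" figure above.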