An overview of existing and potential future GenAI/GPAI standards
Standards relevant to generative AI (GenAI) and general-purpose AI (GPAI) exist at various stages of development within ISO/IEC JTC 1/SC 42. These standards inform regulatory frameworks including the EU AI Act and California’s SB-53.
This overview focuses on standards specifically addressing GenAI and GPAI systems. Other applicable standards exist, particularly those covering technical aspects such as natural language processing and those with broader AI applicability. This is a high-level map of the terrain rather than a comprehensive analysis of each standard’s content.
Published and Upcoming Standards
ISO/IEC 22989:2022/DAmd 1 Artificial intelligence — Artificial intelligence concepts and terminology; Amendment 1: Generative AI
ISO/IEC 23053:2022/DAmd 1 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML); Amendment 1: Generative AI
Stage: 40.60 – Draft International Standard in the enquiry phase (likely less than 12 months from publication)
These two amendments to foundational terminology standards define or explain key concepts such as foundation models, GenAI, and Large Language Models (LLMs). These definitions and concepts are expected to be referenced by SC 42 standards, forthcoming EU standards on GPAI, and other standards and organizations.
ISO/IEC TS 42119-7 Artificial intelligence — Testing of AI — Part 7: Red teaming
Stage: 20.00 – Drafting (likely more than 12 months from publication)
This standard provides technology-agnostic guidance for conducting red teaming assessments on AI systems. It applies to all AI systems, and it will also serve as a basis for parts of ISO/IEC TS 42119-8.
ISO/IEC TS 42119-8 Artificial intelligence — Testing of AI — Part 8: Quality assessment of prompt-based text-to-text systems that utilize generative AI
Stage: 20.00 – Drafting (likely more than 12 months from publication)
This standard explains how to use benchmarking and red teaming to assess quality characteristics—including safety and risk identification—of GenAI models and systems. The current scope focuses on text-based models and systems that utilize GenAI, which includes frontier GPAI models.
Regulatory relevance: This standard is highly relevant for model evaluations and therefore AI Act Article 55(1)(a), CoP Safety and Security chapter Measure 3.2 and Appendix 3, and likely SB-53 as well. Parts of it might be suitable for providing presumption of conformity against Article 55(1)(a). At the very least, it will establish the state of the art which is relevant for the CoP, future EU harmonised standards, and emerging national standards.
ISO/IEC TS 25568 Artificial intelligence — Guidance on addressing risks in generative AI systems
Stage: 20.00 – Drafting (likely more than 12 months from publication)
This standard provides guidance on addressing risks in GenAI systems, which includes GPAI. It is intended to serve as a follow-on to ISO/IEC 23894:2023 Artificial intelligence — Guidance on risk management, which in turn is based on ISO 31000:2018 Risk management — Guidelines. These standards focus in particular on managing risks faced by organizations.
This standard will likely contain two important sets of information that further codify the state of the art. First, it will likely contain broadly applicable risk sources for GenAI/GPAI models/systems which refer to and extend the risk sources in ISO/IEC 23894. Second, it will likely list broadly applicable risk controls for GenAI/GPAI models/systems.
NIST “Zero Draft” on Documentation of AI Datasets and Models
Stage: Initial drafting (outside SC 42) (likely more than 12 months from publication)
This NIST initiative aims to deliver a draft standard to SC 42 at the Committee Draft (CD) stage. That is, instead of submitting the standard to SC 42 as a “Form 4” with a scope and outline, it will skip that step and the initial drafting process, delivering a feature-complete document.
The standard contains a dataset template and a model report template. Therefore, it is relevant to the requirements of AI Act Article 53 and the GPAI Code of Practice (CoP), in particular Commitment 7 of the CoP Safety and Security chapter. Parts of it might be suitable for providing presumption of conformity against Article 53. At the very least, it will establish the state of the art which is relevant for the CoP, future EU harmonised standards, and emerging national standards.
NIST has convened one meeting thus far. Discussions are ongoing about which parts of the standard should become recommendations (“should”) and which should become requirements (“shall”).
NIST “Zero Draft” on AI testing, evaluation, verification, and validation (TEVV)
Stage: Initial drafting (outside SC 42) (likely more than 12 months from publication)
This NIST initiative is similar to the one on Documentation of AI Datasets and Models. It is intended to serve as a foundational standard for the ISO/IEC 42119 series.
ISO/IEC 42001:2023 Artificial intelligence — Management system
Stage: 60.60 – Published
This is one of the most important standards on AI management; it establishes requirements for an AI management system (AIMS). Leading GPAI developers, including Microsoft and Anthropic, have achieved certification for certain products.
This standard is valuable for managing most AI systems at an organizational level, though it doesn’t explicitly align with emerging regulatory requirements on frontier GPAI models, such as specific requirements for safety margins or defined controls for halting development or deployment.
ISO/IEC 42005:2025 Artificial intelligence — AI system impact assessment
Stage: 60.60 – Published
This standard “provides guidance for organizations performing AI system impact assessments for individuals and societies that can be affected by an AI system and its foreseeable applications.” This guidance includes laying the groundwork for the impact assessment and the identification and analysis of actual and reasonably foreseeable impacts, including harms and benefits, for individuals, groups, and societies.
Proposed and Discussed Topics
In recent months, GenAI/GPAI standards on the following topics have been discussed or formally proposed in SC 42:
- Evaluation, Ethical Use, and Interoperability of LLMs in AI systems
- Multilingual AI Model Benchmarks
- GPAI model evaluation of risk-related properties
The standards landscape for GenAI and GPAI continues to develop rapidly. Organizations working with these technologies should monitor these standards as they progress, particularly those standards that may inform regulatory compliance approaches or document evolving state-of-the-art practices.
For those tracking this space closely, more standards might be proposed ahead of the next SC 42 plenary in Singapore in April 2026.