FPF and OneTrust publish the Updated Guide on Conformity Assessments under the EU AI Act
The Future of Privacy Forum (FPF) and OneTrust have published an updated version of their Conformity Assessments under the EU AI Act: A Step-by-Step Guide, along with an accompanying Infographic. The updated Guide reflects the final text of the EU Artificial Intelligence Act (EU AIA) as adopted in 2024.
Conformity Assessments (CAs) play a significant role in the EU AIA’s accountability and compliance framework for high-risk AI systems. The updated Guide and Infographic provide a step-by-step roadmap for organizations seeking to understand whether they must conduct a CA. Both resources are designed to support organizations as they navigate their obligations under the AIA and build internal processes that reflect the Act’s overarching accountability framework. However, neither resource constitutes legal advice for any specific compliance situation.
Key highlights from the Updated Guide and Infographic:
- An overview of the EU AIA and its implementation and compliance timeline. The AIA is a regulation whose obligations are tailored to the level of risk posed by AI systems and which applies in phases. Some provisions began to apply on 2 February 2025, such as the prohibitions on certain AI practices and the AI literacy requirements. By 2 August 2025, the governance infrastructure and the bodies involved in the conformity assessment process must be operational. The full set of obligations for high-risk AI systems, including the requirement to conduct CAs, will apply from 2 August 2026.
- Understanding when a conformity assessment is required. The Guide provides a detailed flowchart to help determine whether an AI system is subject to CA obligations. It outlines key steps, such as determining whether the system falls under the AIA, whether it is classified as “high-risk”, and who is responsible for conducting the CA (a simplified sketch of this decision sequence appears after this list). CAs are not new in the EU context; the AIA builds on product safety legislation under the New Legislative Framework (NLF) to ensure that high-risk AI systems meet both legal and technical standards before they are placed on the market and throughout their use.
- The CA should be understood as a framework of assessments (both technical and non-technical), requirements, and documentation obligations. The provider should assess whether the AI system poses a high risk and identify both known and potential risks as part of its risk management system. The provider should also ensure that certain requirements are built into the high-risk AI system, such as automatic event recording, human oversight capacity, and transparent operation. Finally, the provider should verify that documentation obligations, including the preparation of technical documentation, are met.
- The Guide highlights ongoing standardization efforts and the role of harmonized standards in streamlining the CA process. Systems developed in the context of regulatory sandboxes or certified under cybersecurity schemes may benefit from a presumption of conformity with certain AIA requirements.
- The CA is not a one-off exercise: compliance must be maintained throughout the high-risk AI system’s lifecycle. Providers must ensure ongoing compliance by establishing a post-market monitoring system that enables them to verify that the essential requirements continue to be met.
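To make the screening sequence described above more concrete, here is a minimal, non-authoritative sketch in Python of the kind of decision logic the Guide's flowchart walks through. The class, field names, and branching below are assumptions made purely for illustration; they oversimplify the Act's classification rules and are not the Guide's wording, nor legal advice.

```python
from dataclasses import dataclass

# Hypothetical, simplified illustration of the screening questions in the Guide's
# flowchart. All names and the decision logic are illustrative assumptions only.

@dataclass
class AISystemProfile:
    within_aia_scope: bool             # Does the system fall under the EU AIA at all?
    is_prohibited_practice: bool       # Does it involve a prohibited AI practice?
    is_annex_i_product: bool           # Safety component/product covered by Annex I (NLF) legislation?
    is_annex_iii_use_case: bool        # Listed in an Annex III high-risk use case?
    annex_iii_exemption_applies: bool  # Has the provider documented a non-high-risk exemption?

def conformity_assessment_needed(p: AISystemProfile) -> str:
    """Return a rough, illustrative indication of whether a CA obligation may arise."""
    if not p.within_aia_scope:
        return "Outside the AIA's scope: no CA obligation under the AIA."
    if p.is_prohibited_practice:
        return "Prohibited practice: the system may not be placed on the EU market at all."
    if p.is_annex_i_product:
        return "Likely high-risk via Annex I: the CA is typically integrated into the sectoral product-safety procedure."
    if p.is_annex_iii_use_case and not p.annex_iii_exemption_applies:
        return "Likely high-risk via Annex III: the provider generally must carry out a CA before placing the system on the market."
    return "Not classified as high-risk on these facts: no CA obligation, though other AIA duties may still apply."

# Example usage with a hypothetical Annex III use case
print(conformity_assessment_needed(AISystemProfile(
    within_aia_scope=True,
    is_prohibited_practice=False,
    is_annex_i_product=False,
    is_annex_iii_use_case=True,
    annex_iii_exemption_applies=False,
)))
```

In practice, each of these questions involves a detailed legal analysis; the Guide's flowchart and accompanying commentary, not a simple boolean check, should guide that assessment.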
You can also view the previous version of the Conformity Assessment Guide here.