Introduction
As artificial intelligence (AI) systems, including Generative AI (GenAI) and large language models (LLMs), become increasingly integrated into various sectors, the need for regulatory oversight has grown. The European Union’s AI Act introduces a framework for assessing and ensuring the safety, fairness, and transparency of AI systems. One of the key elements of this framework is conformity assessments—processes designed to verify compliance with the Act’s requirements before AI systems enter the market.
This article explores the conformity assessment process for AI, GenAI, and LLMs under the EU AI Act, outlining the key requirements and aspects that should be covered in these evaluations.

Conformity Assessments in the EU AI Act
The EU AI Act classifies AI systems into four risk categories:
Unacceptable Risk (e.g., social scoring, manipulative AI)—banned outright.
High Risk (e.g., critical infrastructure, healthcare, law enforcement)—requires stringent conformity assessments.
Limited Risk (e.g., chatbots, AI assistants)—subject to transparency requirements.
Minimal Risk (e.g., AI-powered video games)—little to no regulatory requirements.
For high-risk AI systems, conformity assessments are mandatory before deployment. General-purpose AI models, including GenAI and LLMs, may also be subject to compliance requirements depending on their potential risks and use cases.
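As a practical starting point, many teams encode this triage in internal tooling so that every new system is screened against the four tiers before deeper legal review. The sketch below is illustrative only, not a legal determination; the category names and domain lists are simplified assumptions rather than the Act's exact scoping rules.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative keyword sets; the Act's actual scoping provisions are far more detailed.
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"critical_infrastructure", "healthcare", "law_enforcement",
                     "education", "employment", "finance"}

def triage_risk_tier(practices: set, domains: set, interacts_with_humans: bool) -> RiskTier:
    """Rough first-pass triage of a system against the Act's four risk tiers."""
    if practices & PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE   # banned outright; no conformity assessment possible
    if domains & HIGH_RISK_DOMAINS:
        return RiskTier.HIGH           # mandatory conformity assessment before market entry
    if interacts_with_humans:
        return RiskTier.LIMITED        # transparency obligations apply
    return RiskTier.MINIMAL            # voluntary best practices only

# Example: a triage chatbot used in a hospital would screen as high risk.
print(triage_risk_tier(set(), {"healthcare"}, interacts_with_humans=True))  # RiskTier.HIGH
```

A positive match against the high-risk tier would then trigger the full conformity assessment workflow described in the sections that follow.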
Risk-Based Requirements for Conformity Assessments
Unacceptable Risk AI Systems
These systems are prohibited under the EU AI Act and cannot undergo conformity assessments.
Examples include AI used for social scoring or manipulative subliminal techniques.
High-Risk AI Systems
Pre-Market Conformity Assessment:
Comprehensive risk assessment and technical documentation (Article 9 and Annex IV of the EU AI Act).
Independent third-party assessment by notified bodies required for certain high-risk AI applications (Article 43).
Adherence to data governance, security, and transparency requirements (Articles 10-15).
Post-Market Monitoring:
Continuous risk assessment and performance evaluation.
Mandatory logging and traceability to ensure accountability (see the logging sketch after this list).
Periodic Reassessments:
Re-evaluation required when significant updates or modifications are made.
Ongoing compliance checks to align with emerging risks.
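For the logging and traceability obligation above, providers typically emit an append-only, structured record for every consequential model event. The snippet below is a minimal sketch assuming a JSON Lines file as the store; the field names and log path are hypothetical, and a production system would add retention policies, access control, and integrity protections.

```python
import json
import time
import uuid
from pathlib import Path

LOG_PATH = Path("ai_system_events.jsonl")  # hypothetical location for the audit trail

def log_event(event_type: str, model_version: str, payload: dict) -> str:
    """Append one traceability record per consequential event (JSON Lines, append-only)."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "event_type": event_type,        # e.g. "inference", "human_override", "model_update"
        "model_version": model_version,
        "payload": payload,
    }
    with LOG_PATH.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record["event_id"]

# Example: record a single decision so it can be traced back during an audit.
log_event("inference", "credit-scorer-1.4.2", {"request_id": "req-001", "decision": "approve"})
```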
Limited Risk AI Systems
Transparency Requirements:
Users must be informed that they are interacting with AI (Article 50).
AI-generated content must be disclosed to prevent misinformation (see the disclosure sketch after this list).
Minimal Documentation:
Basic records of AI system performance and functionality.
Internal Conformity Assessment:
No third-party audit required, but internal compliance teams must ensure adherence to transparency obligations.
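One lightweight way to meet the transparency items above is to attach both a machine-readable flag and a human-readable notice to every generated response. The sketch below assumes a simple wrapper in the serving layer; the notice text and field names are illustrative, not prescribed by the Act.

```python
DISCLOSURE_NOTICE = "You are interacting with an AI system; this response was machine-generated."

def with_ai_disclosure(generated_text: str) -> dict:
    """Attach transparency metadata to model output before it reaches the user."""
    return {
        "content": generated_text,
        "ai_generated": True,              # machine-readable flag for downstream consumers
        "user_notice": DISCLOSURE_NOTICE,  # human-readable notice rendered in the interface
    }

print(with_ai_disclosure("Your parcel is expected to arrive on Friday."))
```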
Minimal Risk AI Systems
No formal conformity assessment required.
Developers encouraged to follow ethical AI practices voluntarily.
No mandatory transparency or security requirements, though best practices such as fairness and robustness are recommended.
Key Requirements of AI Conformity Assessments
"Conformity assessment" is defined under Article 3 as the process of verifying and/or demonstrating that a high-risk AI system complies with the requirements enumerated under Title III, Chapter 2 of the Act. These requirements include:
Risk Classification and Intended Use
Define the AI system’s purpose and potential impact.
Determine whether it falls under the high-risk category.
Assess the system’s use in critical sectors such as healthcare, law enforcement, and finance.
Data Quality and Governance
Ensure datasets used for training are representative, unbiased, and high quality.
Implement robust data governance practices, including documentation and traceability.
Address potential biases and discriminatory outcomes.
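Bias checks like the one described above are usually automated as part of dataset sign-off. The following sketch computes a simple demographic parity gap across groups in a labelled dataset; the column names, review threshold, and the metric choice itself are assumptions, and real assessments would use a broader battery of fairness metrics.

```python
from collections import defaultdict

def selection_rates(records, group_key, label_key):
    """Per-group positive-outcome rates in a labelled dataset."""
    counts, positives = defaultdict(int), defaultdict(int)
    for row in records:
        counts[row[group_key]] += 1
        positives[row[group_key]] += int(row[label_key])
    return {group: positives[group] / counts[group] for group in counts}

def demographic_parity_gap(records, group_key="gender", label_key="approved"):
    """Largest difference in selection rate between any two groups (0 means perfectly balanced)."""
    rates = selection_rates(records, group_key, label_key)
    return max(rates.values()) - min(rates.values())

# Example: flag the dataset for review if the gap exceeds an internal threshold.
sample = [{"gender": "f", "approved": 1}, {"gender": "f", "approved": 0},
          {"gender": "m", "approved": 1}, {"gender": "m", "approved": 1}]
print(demographic_parity_gap(sample))  # 0.5 -> would exceed a hypothetical 0.2 review threshold
```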
Robustness, Accuracy, and Security
Conduct testing to verify system reliability and accuracy.
Implement security measures to prevent adversarial attacks.
Regularly audit performance to mitigate risks of errors or hallucinations.
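Reliability and robustness testing can be wired into the release process as a gate that must pass before deployment. The sketch below assumes a callable model and two evaluation sets, one clean and one adversarially perturbed; the accuracy thresholds are illustrative placeholders, not values mandated by the Act.

```python
def accuracy(model, dataset):
    """Fraction of (input, label) pairs the model classifies correctly."""
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

def release_gate(model, clean_set, perturbed_set,
                 min_clean_acc=0.95, min_perturbed_acc=0.85):
    """Block release unless accuracy holds on both clean and adversarially perturbed inputs."""
    return (accuracy(model, clean_set) >= min_clean_acc and
            accuracy(model, perturbed_set) >= min_perturbed_acc)
```

Running such a gate on every candidate release, and archiving the results, also produces evidence that can feed directly into the technical documentation.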
Transparency and Explainability
Provide clear documentation on model architecture, training methodologies, and decision-making processes.
Ensure users understand how AI decisions are made.
Meet transparency requirements for high-risk AI applications.
Human Oversight and Accountability
Define roles and responsibilities for AI system operators.
Ensure mechanisms for human intervention and control.
Establish accountability frameworks to address liability concerns.
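Human oversight is often implemented as a routing rule: decisions the system is unsure about, or that exceed an impact threshold, are held for a reviewer instead of being executed automatically. The sketch below assumes a confidence score is available; the threshold and queue mechanism are hypothetical.

```python
def decide_with_oversight(decision: str, confidence: float, review_queue: list,
                          threshold: float = 0.80) -> str:
    """Execute high-confidence decisions automatically; hold the rest for a human reviewer."""
    if confidence < threshold:
        review_queue.append({"decision": decision, "confidence": confidence})
        return "pending_human_review"
    return decision

# Example: a borderline prediction is parked for manual sign-off.
queue = []
print(decide_with_oversight("reject_claim", confidence=0.62, review_queue=queue))  # pending_human_review
```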
Compliance with Ethical and Legal Standards
Adhere to fundamental rights, including non-discrimination and privacy.
Implement mechanisms for redress in case of errors or harm.
Align with existing EU regulations such as the GDPR and the Digital Services Act.
Best Practices for Performing Conformity Assessments
How Should the Assessments Be Performed?
The EU AI Act provides two routes for conducting conformity assessments: internally, or through a third-party notified body.
Internal Conformity Assessments: If a provider demonstrates compliance with the high-risk AI system (HRAIS) requirements by fully applying the relevant harmonized standards, it may follow the internal conformity assessment procedure outlined in Annex VI. The provider verifies that its quality management system complies with Article 17, examines the technical documentation to assess compliance with the essential requirements, and verifies consistency between the design and development process and the post-market monitoring plan.
Third-Party Conformity Assessments: Where the provider cannot apply harmonized standards in full, where such standards do not exist, or where the provider deems that the nature, design, construction, or purpose of the AI system necessitates external verification, it must follow the third-party conformity assessment procedure established in Annex VII.
After the assessment, the provider must draw up a written EU Declaration of Conformity for each relevant system, keep it for ten years after the system has been placed on the market or put into service, and affix a physical or digital Conformité Européenne (CE) marking to the product.
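Internally, many providers keep the Declaration of Conformity as a versioned, machine-readable record alongside the technical documentation. The sketch below shows one possible shape for such a record; the fields are illustrative, and the legally required content of the declaration is set out in Annex V of the Act.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date, timedelta

@dataclass
class DeclarationOfConformity:
    """Illustrative record only; the legally required content is listed in Annex V of the Act."""
    system_name: str
    provider: str
    harmonised_standards: list
    date_of_issue: date
    retain_until: date  # roughly ten years after placing on the market or putting into service

def draft_declaration(system_name: str, provider: str, standards: list) -> str:
    issued = date.today()
    declaration = DeclarationOfConformity(system_name, provider, standards,
                                          issued, issued + timedelta(days=365 * 10))
    return json.dumps(asdict(declaration), default=str, indent=2)

print(draft_declaration("triage-assistant", "Example Provider Ltd.",
                        ["<harmonised standard reference>"]))  # placeholder reference
```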
Who Should Perform the Assessments?
Internal Compliance Teams: Organizations developing AI should have dedicated compliance officers to conduct initial assessments.
Third-Party Auditors: For high-risk AI applications, independent Notified Bodies must validate compliance.
Regulatory Authorities: Government agencies oversee compliance and issue necessary certifications.
When and How Often Should Assessments Be Performed?
Conformity assessments must be performed before a HRAIS is placed on the EU market or put into service, and a new assessment must be conducted whenever the system undergoes a substantial modification. Compliance should be maintained throughout the system's lifecycle, supported by ongoing post-deployment monitoring.
Pre-Market Conformity Assessment: Required before a high-risk AI system is placed on the market or put into service (Article 43 of the EU AI Act).
Regular Reassessments: Periodic reviews ensure ongoing compliance as AI models evolve.
Post-Market Monitoring: Continuous auditing and updates are necessary to address emerging risks.
Major System Updates: Whenever significant modifications are made, reassessments should be conducted.
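Deciding when a change counts as a substantial modification that triggers reassessment is usually codified in an internal change-management policy. The sketch below is a simplified illustration of such a policy check; the change categories are assumptions, not the Act's definition.

```python
# Hypothetical change categories an internal review board might treat as substantial
# modifications requiring a fresh conformity assessment.
SUBSTANTIAL_CHANGES = {"new_intended_purpose", "new_training_data_domain",
                       "architecture_change", "materially_degraded_accuracy"}

def needs_reassessment(change_tags: set) -> bool:
    """True if any recorded change falls into a category treated as a substantial modification."""
    return bool(change_tags & SUBSTANTIAL_CHANGES)

print(needs_reassessment({"architecture_change"}))     # True  -> new assessment required
print(needs_reassessment({"documentation_typo_fix"}))  # False -> no reassessment triggered
```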
Actions Following a Non-Compliant Conformity Assessment
If an AI system fails to meet the conformity assessment requirements under the EU AI Act, the following actions may be taken:
Remediation and Corrective Measures
AI providers are required to address non-compliance issues by implementing corrective actions.
Necessary changes may include modifying datasets, improving transparency, enhancing security, or refining human oversight mechanisms.
Documentation must be updated and the system re-evaluated before it is resubmitted for assessment.
Temporary or Permanent Market Restrictions
Non-compliant AI systems may be restricted from entering or remaining in the EU market.
If violations are severe, authorities may issue a temporary ban until compliance is achieved.
For critical risks, the AI system may be permanently prohibited.
Fines and Legal Consequences
The EU AI Act imposes significant financial penalties for non-compliance.
Up to €35 million or 7% of global annual turnover for violations involving prohibited AI practices.
Up to €15 million or 3% of global annual turnover for non-compliance with other obligations, including high-risk conformity requirements.
Up to €7.5 million or 1% of global annual turnover for supplying incorrect or misleading information to notified bodies or national authorities.
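The caps work on a "whichever is higher" basis, so for large providers the turnover percentage is usually the binding figure. A quick worked example using the penalty tiers listed above:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Fines are capped at the higher of a fixed amount or a share of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

turnover = 2_000_000_000  # hypothetical provider with €2 billion global annual turnover
print(max_fine(turnover, 35_000_000, 0.07))  # prohibited practices tier  -> 140,000,000.0
print(max_fine(turnover, 15_000_000, 0.03))  # high-risk conformity tier  -> 60,000,000.0
```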
Re-evaluation and Re-assessment
Companies may be required to undergo additional conformity assessments before resubmission.
Notified Bodies and regulatory authorities will verify whether corrective actions are sufficient.
AI systems must demonstrate compliance with all legal and ethical requirements.
Public Disclosure and Reputational Impact
Severe non-compliance cases may be made public by regulatory authorities.
Companies failing to comply may face reputational damage, loss of trust, and reduced market competitiveness.
Organizations are encouraged to proactively address non-compliance to avoid negative publicity.
Government and Regulatory Supervision
Regulatory bodies such as the European Artificial Intelligence Board (EAIB) will oversee high-risk AI systems.
Continuous monitoring may be required for AI systems previously flagged for compliance issues.
Additional post-market surveillance may be enforced to ensure adherence to the Act.
References to the Regulation
Article 16: Outlines obligations for AI providers, including compliance with conformity assessments.
Article 43: Describes conformity assessment procedures for high-risk AI systems.
Annex IV: Details the technical documentation requirements for high-risk AI systems.
Annex VI: Sets out the conformity assessment procedure based on internal control.
Other Global Regulations Impacting AI Conformity Assessments
While the EU AI Act is a significant regulatory framework, other global regulations also influence AI conformity assessments.
United States
The Blueprint for an AI Bill of Rights: Introduced by the White House Office of Science and Technology Policy, emphasizing safety, fairness, and accountability.
NIST AI Risk Management Framework: Provides guidelines for risk assessment and mitigation in AI systems.
Proposed Algorithmic Accountability Act: Seeks to impose transparency and impact assessments for AI-driven decision-making systems.
China
Regulations on Algorithmic Recommendation Services (2022): Enforces transparency and accountability in AI-based recommendation algorithms.
AI Ethics Guidelines: Promotes responsible AI development with an emphasis on security and social stability.
Canada
Artificial Intelligence and Data Act (AIDA): Part of Bill C-27, aiming to regulate high-impact AI systems and require conformity assessments.
OECD AI Principles
Encourages AI systems to be robust, fair, transparent, and accountable, influencing regulatory frameworks worldwide.
ISO/IEC 42001 (AI Management System Standard)
Establishes international best practices for AI governance and conformity assessments.
Provides a structured framework for organizations to implement AI management systems, ensuring risk-based governance and compliance.
Helps companies align AI development and deployment with regulatory requirements through documented policies, risk management, and continuous monitoring.
Supports conformity assessments by establishing standardized audit processes, accountability mechanisms, and security controls.
Encourages ongoing assessment cycles to maintain compliance with evolving AI regulations globally.
Conclusion
Conformity assessments play a crucial role in ensuring AI systems, including GenAI and LLMs, comply with the EU AI Act’s requirements. These evaluations help mitigate risks, enhance transparency, and foster trust in AI technologies. Organizations developing or deploying AI should proactively implement conformity assessment frameworks to ensure regulatory compliance and ethical AI deployment. Additionally, companies operating globally should align with multiple regulatory frameworks, including those from the U.S., China, Canada, and international standards like OECD AI Principles and ISO/IEC 42001.