The risk-based structure
The EU AI Act classifies AI systems into four risk tiers: prohibited systems (banned outright), high-risk systems (chiefly the use cases listed in Annex III, which carry strict compliance requirements), limited-risk systems (transparency obligations), and minimal-risk systems (no specific requirements). Understanding which tier each of your AI deployments falls into is the starting point for everything else.
Annex III: the high-risk categories you need to know
High-risk systems include AI used in biometric identification, critical infrastructure management, education and vocational training, employment and worker management, access to essential services, law enforcement, migration and asylum management, and administration of justice. If you deploy AI in any of these domains, the full compliance requirements apply.
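The first screening pass over an AI inventory can be mechanical: check each deployment's domain against the Annex III categories listed above. A minimal sketch in Python, with the category names paraphrased from this article rather than taken from the Act's official wording:

```python
# Annex III high-risk categories, paraphrased from the text above
# (not the Act's exact legal wording - use the official text for decisions).
ANNEX_III_DOMAINS = {
    "biometric identification",
    "critical infrastructure management",
    "education and vocational training",
    "employment and worker management",
    "access to essential services",
    "law enforcement",
    "migration and asylum management",
    "administration of justice",
}

def is_high_risk(domain: str) -> bool:
    """Return True if a deployment's domain matches an Annex III category."""
    return domain.strip().lower() in ANNEX_III_DOMAINS

print(is_high_risk("Law Enforcement"))  # True
print(is_high_risk("marketing copy"))   # False
```

A keyword match like this is only a triage step: borderline deployments still need legal review, since the Act's actual categories carry qualifiers and exceptions that a string lookup cannot capture.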
What high-risk compliance requires
High-risk systems must meet requirements across eight areas: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy and robustness, and cybersecurity. The cybersecurity requirements (Article 15, which also covers accuracy and robustness) map closely to existing security controls, making them a natural starting point for CISO-led compliance programs.
The provider vs. deployer distinction
Compliance obligations differ depending on whether you are a provider (you built the AI system and placed it on the market) or a deployer (you use a system built by someone else). Deployers carry lighter obligations but are not off the hook: for high-risk systems they must use the system in accordance with the provider's instructions, assign trained human oversight, monitor the system's operation, and retain the logs under their control. Conformity assessments, by contrast, are the provider's responsibility, not the deployer's.
Building your compliance program
Start with the system inventory and risk classification. Then build documentation templates for your high-risk systems. Establish a compliance monitoring cadence that reviews AI system behavior against regulatory requirements quarterly. And designate an AI compliance lead: someone who owns the relationship with regulators (the national market surveillance authorities and, where relevant, the EU AI Office) and tracks regulatory guidance as it evolves.
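The inventory and quarterly cadence above can be sketched as a small data model. This is a hypothetical illustration (the record fields, system names, and 90-day threshold are assumptions, not anything the Act prescribes), showing how a compliance lead might flag high-risk systems overdue for review:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AISystemRecord:
    """One inventory row per AI deployment (hypothetical schema)."""
    name: str
    tier: str         # "prohibited" | "high" | "limited" | "minimal"
    role: str         # "provider" or "deployer"
    last_review: date

    def review_overdue(self, today: date, cadence_days: int = 90) -> bool:
        """Quarterly cadence: flag high-risk systems unreviewed for ~90 days."""
        if self.tier != "high":
            return False  # the cadence here only tracks high-risk systems
        return today - self.last_review > timedelta(days=cadence_days)

# Example inventory (fictional systems)
inventory = [
    AISystemRecord("resume-screener", "high", "deployer", date(2025, 1, 10)),
    AISystemRecord("chat-assistant", "limited", "deployer", date(2024, 6, 1)),
]

overdue = [s.name for s in inventory if s.review_overdue(today=date(2025, 6, 1))]
print(overdue)  # ['resume-screener']
```

Keeping the tier and the provider/deployer role as first-class fields matters because both determine which obligations apply to each row; the review logic is then a simple query over the inventory rather than a separate tracking exercise.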