High-Risk AI Systems Under the EU AI Act: The Complete List
Introduction
The EU AI Act (Regulation (EU) 2024/1689) classifies AI systems into four risk tiers: unacceptable, high, limited, and minimal. The high-risk category is where the regulation's teeth are sharpest - these systems face the most demanding compliance obligations, from risk management and data governance to conformity assessments and post-market monitoring.
Understanding whether your AI system qualifies as high-risk is the foundational question for any AI Act compliance program. Get it wrong, and you face fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher (for a company with EUR 1 billion in annual turnover, that is EUR 30 million).
This article provides a complete breakdown of every high-risk AI system category under the AI Act, explains the two classification pathways, details provider and deployer obligations, and covers the exemption criteria that may apply.
If you are unsure whether your AI systems qualify as high-risk, take the free AI Act Readiness Assessment for a guided evaluation.
Two Pathways to High-Risk Classification
The AI Act defines high-risk AI systems through two distinct pathways under Article 6.
Pathway 1: Annex I - Product Safety Legislation (Article 6(1))
AI systems that are themselves products, or that serve as safety components of products, covered by the EU harmonized legislation listed in Annex I are classified as high-risk if the product must undergo a third-party conformity assessment under that legislation. This covers AI embedded in:
- Medical devices (Regulation (EU) 2017/745)
- In vitro diagnostic medical devices (Regulation (EU) 2017/746)
- Motor vehicles (Regulation (EU) 2019/2144)
- Aviation (Regulation (EU) 2018/1139)
- Marine equipment (Directive 2014/90/EU)
- Toys (Directive 2009/48/EC)
- Lifts (Directive 2014/33/EU)
- Pressure equipment (Directive 2014/68/EU)
- Radio equipment (Directive 2014/53/EU)
- Personal protective equipment (Regulation (EU) 2016/425)
- Machinery (Regulation (EU) 2023/1230)
These systems must comply with AI Act high-risk requirements by August 2, 2027 (one year later than standalone high-risk systems).
Pathway 2: Annex III - Standalone High-Risk Systems (Article 6(2))
AI systems used in specific areas of public interest are classified as high-risk under Annex III. These systems must comply by August 2, 2026.
This is where the majority of organizations will find their compliance obligations.
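For teams building an AI inventory, the pathway distinction and its two deadlines can be captured in a small data model. The following Python sketch is purely illustrative; the class and field names are our own convention, not anything defined by the Act:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class HighRiskPathway(Enum):
    """The two routes into the high-risk category under Article 6."""
    ANNEX_I = "safety component of a regulated product (Art. 6(1))"
    ANNEX_III = "standalone high-risk use case (Art. 6(2))"

# Compliance deadlines differ by pathway; Annex I systems get an extra year.
COMPLIANCE_DEADLINES = {
    HighRiskPathway.ANNEX_I: date(2027, 8, 2),
    HighRiskPathway.ANNEX_III: date(2026, 8, 2),
}

@dataclass
class AISystemRecord:
    """Minimal inventory entry tracking one system's classification."""
    name: str
    intended_purpose: str
    pathway: HighRiskPathway | None = None  # None = not classified high-risk

    def compliance_deadline(self) -> date | None:
        """Deadline by which high-risk obligations must be met, if any."""
        return COMPLIANCE_DEADLINES[self.pathway] if self.pathway else None
```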
The Complete Annex III: Eight High-Risk Categories
Category 1: Biometrics (Annex III, point 1)
AI systems intended for:
(a) Remote biometric identification systems - not including verification systems that merely confirm a person is who they claim to be.
Examples:
- Facial recognition systems used to identify individuals in a crowd
- Voice identification systems used to identify speakers across recordings
- Gait recognition systems
(b) AI systems intended for biometric categorization based on sensitive or protected attributes (to the extent not prohibited under Article 5).
Examples:
- Systems that infer ethnicity, political opinions, or religious beliefs from biometric data
- Age estimation systems used for access control
(c) AI systems intended for emotion recognition (to the extent not prohibited under Article 5).
Examples:
- Systems detecting emotional states in job interviews
- Customer sentiment analysis from facial expressions in retail settings (where not prohibited)
Key note: Real-time remote biometric identification in publicly accessible spaces for law enforcement is generally prohibited under Article 5, with narrow exceptions for missing children, imminent terrorist threats, and serious criminal suspects.
Category 2: Critical Infrastructure (Annex III, point 2)
AI systems intended for use as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity.
Examples:
- AI controlling electricity grid load balancing
- Smart grid optimization systems
- Water treatment plant AI monitoring systems
- AI-based traffic management systems (traffic lights, routing)
- AI systems managing autonomous vehicle fleets on public roads
- Network management AI for telecommunications infrastructure
- Predictive maintenance AI for gas pipeline infrastructure
Category 3: Education and Vocational Training (Annex III, point 3)
AI systems intended for use in:
(a) Determining access, admission, or assignment to educational and vocational training institutions at all levels.
Examples:
- University admissions screening algorithms
- Automated application scoring for vocational training programs
- AI systems deciding student placement in schools
(b) Evaluating learning outcomes, including systems used to steer the learning process.
Examples:
- Automated essay grading systems
- AI-driven examination assessment tools
- Adaptive learning platforms that determine student progression
(c) Determining the appropriate level of education an individual will receive or be able to access.
Examples:
- AI systems assigning students to educational tracks
- Systems recommending remedial education vs. advancement
(d) Monitoring and detecting prohibited behavior of students during tests.
Examples:
- AI-powered proctoring software during examinations
- Automated plagiarism detection with consequential actions
Category 4: Employment, Workers Management, and Access to Self-Employment (Annex III, point 4)
AI systems intended for use in:
(a) Recruitment or selection, including publishing targeted job advertisements, screening or filtering applications, and evaluating candidates.
Examples:
- CV screening algorithms
- AI-driven candidate ranking systems
- Video interview analysis tools
- Chatbot-based initial candidate screening
(b) Decisions affecting terms of work relationships, including promotion, termination, task allocation based on behavior or personal traits, and monitoring or evaluating performance.
Examples:
- Automated performance evaluation systems
- AI systems determining employee promotions or pay raises
- Workforce scheduling AI based on individual performance metrics
- Employee monitoring and productivity tracking AI
Category 5: Access to and Enjoyment of Essential Private and Public Services and Benefits (Annex III, point 5)
AI systems intended for use by or on behalf of public authorities or private entities to:
(a) Evaluate eligibility for essential public assistance benefits and services, including healthcare, and to grant, reduce, revoke, or reclaim such benefits and services.
Examples:
- AI determining welfare or social benefit eligibility
- Systems evaluating unemployment insurance claims
- Automated public housing allocation
(b) Evaluate the creditworthiness of natural persons or establish their credit score, except for systems used to detect financial fraud.
Examples:
- AI-driven credit scoring models
- Automated loan approval/denial systems
- Mortgage risk assessment algorithms
- Alternative credit scoring using non-traditional data
This category is particularly relevant for financial services. See our detailed guide on AI Act Compliance for FinTech and the intersection with DORA.
(c) Risk assessment and pricing for life and health insurance.
Examples:
- AI systems determining insurance premiums based on individual risk profiles
- Health insurance underwriting algorithms
- Life insurance risk scoring models
(d) Evaluating and classifying emergency calls, or dispatching and prioritizing the dispatch of emergency first response services (police, firefighters, medical aid), including emergency healthcare patient triage.
Examples:
- AI triage systems for 112/emergency call centers
- Automated priority classification of emergency requests
Note: AI systems used in the context of migration, asylum, and border control are covered separately under Annex III, point 7 (see Category 7 below).
Category 6: Law Enforcement (Annex III, point 6)
AI systems intended for use by or on behalf of law enforcement authorities for:
(a) Assessing the risk of a natural person becoming the victim of criminal offenses.
Examples:
- Victim risk assessment systems
(b) Polygraphs and similar tools to detect deception.
Examples:
- AI-powered lie detection systems
- Emotion detection during interrogations
(c) Evaluating the reliability of evidence in the course of criminal investigations or prosecutions.
(d) Assessing the risk of a natural person offending or re-offending, not solely on the basis of profiling, or assessing personality traits, characteristics, or past criminal behavior (to the extent not prohibited under Article 5).
Examples:
- Recidivism prediction tools
- Radicalization risk scoring
(e) Profiling of natural persons in the course of the detection, investigation, or prosecution of criminal offenses.
Category 7: Migration, Asylum, and Border Control (Annex III, point 7)
AI systems intended for use by or on behalf of competent public authorities for:
(a) Polygraphs and similar tools to detect deception.
(b) Assessing risks (security, irregular migration, or health risks) posed by a person intending to enter or who has entered the EU.
(c) Assisting examination of applications for asylum, visas, and residence permits, including evaluating the reliability of evidence.
(d) Detecting, recognizing, or identifying natural persons in border control contexts (except document verification).
Category 8: Administration of Justice and Democratic Processes (Annex III, point 8)
AI systems intended for:
(a) Use by judicial authorities, or on their behalf, to assist in researching and interpreting facts and law and in applying the law to specific facts, or for use in a similar way in alternative dispute resolution.
Examples:
- AI systems suggesting case outcomes based on precedent analysis
- Automated legal research tools used by courts
- AI-assisted sentencing recommendation systems
(b) Influencing the outcome of an election or referendum or the voting behavior of natural persons (excluding tools whose output people are not directly exposed to, such as systems used to organize, optimize, or structure political campaigns from an administrative or logistical standpoint).
The Article 6(3) Exemption
Not every AI system falling under Annex III is automatically high-risk. Article 6(3) provides an important exemption:
An AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This applies when the AI system is intended to:
- Perform a narrow procedural task (e.g., converting unstructured data into structured data, classifying documents by categories, or detecting duplicates)
- Improve the result of a previously completed human activity (e.g., checking the quality or accuracy of a document drafted by a human)
- Detect decision-making patterns or deviations from prior patterns without replacing or influencing human assessment (e.g., flagging anomalies for human review)
- Perform a preparatory task for an assessment relevant to the Annex III use cases
Critical condition: The exemption does not apply if the AI system performs profiling of natural persons as defined in the GDPR (Article 4(4)).
Providers claiming this exemption must document their assessment and register it in the EU database before placing the system on the market. National market surveillance authorities can challenge the classification.
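The exemption logic above reduces to a simple decision rule: profiling always defeats the exemption, and otherwise any one of the four conditions suffices. Here is a minimal Python sketch; the flag names are our own invention, and assessing each flag is the genuinely hard, legal part:

```python
def article_6_3_exemption_applies(
    *,
    narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    detects_patterns_without_influencing: bool,
    preparatory_task_only: bool,
    performs_profiling: bool,
) -> bool:
    """Sketch of the Article 6(3) derogation for an Annex III system.

    Profiling of natural persons (GDPR Art. 4(4)) always defeats the
    exemption; otherwise any one of the four conditions suffices.
    """
    if performs_profiling:
        return False  # critical condition: profiling blocks the exemption
    return any([
        narrow_procedural_task,
        improves_prior_human_activity,
        detects_patterns_without_influencing,
        preparatory_task_only,
    ])

# Example: a classifier that pre-sorts case documents for human review.
assert article_6_3_exemption_applies(
    narrow_procedural_task=True,
    improves_prior_human_activity=False,
    detects_patterns_without_influencing=False,
    preparatory_task_only=False,
    performs_profiling=False,
)
```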
Provider Obligations for High-Risk AI Systems
Providers of high-risk AI systems face the most extensive obligations. Here is a summary:
| Obligation | AI Act Article | Description |
|---|---|---|
| Risk management system | Art. 9 | Continuous risk identification, analysis, and mitigation |
| Data governance | Art. 10 | Quality criteria for training, validation, and testing data |
| Technical documentation | Art. 11 | Comprehensive documentation before market placement |
| Record-keeping | Art. 12 | Automatic logging of system events |
| Transparency | Art. 13 | Clear instructions for deployers |
| Human oversight | Art. 14 | Design for effective human oversight |
| Accuracy and robustness | Art. 15 | Appropriate levels of accuracy, robustness, and cybersecurity |
| Quality management | Art. 17 | Documented QMS covering the full lifecycle |
| Conformity assessment | Art. 43 | Self-assessment or third-party assessment |
| EU declaration of conformity | Art. 47 | Written declaration per system |
| CE marking | Art. 48 | Visible conformity marking |
| Registration | Art. 49 | Entry in the EU database |
| Post-market monitoring | Art. 72 | Systematic monitoring after market placement |
| Serious incident reporting | Art. 73 | Report serious incidents to authorities |
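For tracking purposes, the table above maps naturally onto a checklist structure. The following Python sketch is hypothetical; the dict keys mirror the table, but the status-tracking approach is our own convention, not something the Act prescribes:

```python
# Hypothetical checklist mirroring the provider-obligation table above.
PROVIDER_OBLIGATIONS: dict[str, str] = {
    "Art. 9": "Risk management system",
    "Art. 10": "Data governance",
    "Art. 11": "Technical documentation",
    "Art. 12": "Record-keeping",
    "Art. 13": "Transparency and instructions for deployers",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness, and cybersecurity",
    "Art. 17": "Quality management system",
    "Art. 43": "Conformity assessment",
    "Art. 47": "EU declaration of conformity",
    "Art. 48": "CE marking",
    "Art. 49": "EU database registration",
    "Art. 72": "Post-market monitoring",
    "Art. 73": "Serious incident reporting",
}

def open_items(evidence: dict[str, bool]) -> list[str]:
    """List obligations with no recorded compliance evidence yet."""
    return [
        f"{article}: {description}"
        for article, description in PROVIDER_OBLIGATIONS.items()
        if not evidence.get(article, False)
    ]

# Example: only the risk management system is evidenced so far.
print(len(open_items({"Art. 9": True})))  # -> 13
```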
Deployer Obligations for High-Risk AI Systems
Deployers (the organizations using high-risk AI systems) also have mandatory obligations under Articles 26-27:
- Use in accordance with instructions provided by the provider
- Assign human oversight to competent, trained individuals
- Ensure input data quality is relevant and representative for the intended purpose
- Monitor operation and report malfunctions or incidents to the provider
- Conduct fundamental rights impact assessment (Article 27) - mandatory for public bodies and certain private deployers
- Inform affected individuals that they are subject to a high-risk AI system
- Keep automatically generated logs for at least six months, unless applicable EU or national law provides otherwise
- Cooperate with national authorities during investigations
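The six-month log-retention duty is one of the few deployer obligations that lends itself to an automated check. A rough sketch, assuming each log record carries a creation timestamp and treating six months as 183 days (an assumption for illustration; the Act does not define the period in days):

```python
from datetime import datetime, timedelta, timezone

# Article 26 requires keeping automatically generated logs for at least
# six months; 183 days is our approximation, not a figure from the Act.
MIN_RETENTION = timedelta(days=183)

def may_delete(log_created_at: datetime, now: datetime | None = None) -> bool:
    """Return True once a log entry has met the minimum retention period."""
    now = now or datetime.now(timezone.utc)
    return now - log_created_at >= MIN_RETENTION
```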
How to Assess Your AI Systems
A structured approach to classification is essential (a sketch of the resulting decision flow follows the steps):
Step 1: Inventory all AI systems in your organization - developed, deployed, or procured.
Step 2: For each system, check the Article 5 prohibitions. If any apply, the system cannot be placed on the market or used - these prohibitions have applied since February 2, 2025.
Step 3: Check Annex I. Is the AI system a safety component of a product covered by EU harmonized legislation?
Step 4: Check Annex III. Does the system's intended purpose fall into any of the eight categories?
Step 5: If the system falls under Annex III, evaluate whether the Article 6(3) exemption applies. Document your reasoning.
Step 6: Classify remaining systems as limited or minimal risk and assess transparency obligations.
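Taken together, Steps 2 through 5 form a decision cascade. The Python sketch below encodes that flow; the enum names and boolean inputs are our own shorthand, and each input of course stands in for a substantial legal analysis rather than a simple flag:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH_RISK_ANNEX_I = "high-risk via Annex I (Art. 6(1))"
    HIGH_RISK_ANNEX_III = "high-risk via Annex III (Art. 6(2))"
    EXEMPT = "Annex III use case exempt under Art. 6(3)"
    OTHER = "limited or minimal risk"

def classify(
    *,
    article_5_prohibited: bool,
    annex_i_safety_component: bool,
    annex_iii_use_case: bool,
    exemption_6_3_applies: bool,
) -> RiskTier:
    """Steps 2-5 above as a decision cascade.

    Note: a system can fall under both Annex I and Annex III; this
    sketch checks Annex I first purely for illustration.
    """
    if article_5_prohibited:
        return RiskTier.PROHIBITED
    if annex_i_safety_component:
        return RiskTier.HIGH_RISK_ANNEX_I
    if annex_iii_use_case:
        if exemption_6_3_applies:
            return RiskTier.EXEMPT
        return RiskTier.HIGH_RISK_ANNEX_III
    return RiskTier.OTHER

# Example: an Annex III recruitment screener with no applicable exemption.
print(classify(
    article_5_prohibited=False,
    annex_i_safety_component=False,
    annex_iii_use_case=True,
    exemption_6_3_applies=False,
))  # -> RiskTier.HIGH_RISK_ANNEX_III
```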
For a structured, guided assessment, take the free AI Act Readiness Assessment. It walks you through the classification process and generates a compliance gap report.
Using a compliance management platform like Matproof allows you to centrally manage your AI system inventory, track risk classifications, and map controls across the AI Act alongside other frameworks like DORA, GDPR, and NIS2. Start a free trial to see how it works.
Frequently Asked Questions
Q: What is the difference between a provider and a deployer under the AI Act?
A: A provider is the entity that develops an AI system or has one developed on its behalf and places it on the market or puts it into service under its own name or trademark. A deployer is any entity that uses an AI system under its authority, except where the system is used in a personal, non-professional activity. Obligations differ significantly: providers bear the heaviest burden (conformity assessment, CE marking, documentation), while deployers must ensure proper use, human oversight, and monitoring.
Q: If I use a third-party AI tool, am I a deployer with obligations?
A: Yes. If you deploy a third-party AI system that is classified as high-risk, you have deployer obligations under Article 26. You must use the system according to the provider's instructions, assign human oversight, monitor its operation, and ensure input data quality. You may also need to conduct a fundamental rights impact assessment.
Q: Can an AI system be reclassified from high-risk to non-high-risk?
A: Yes, through the Article 6(3) exemption. If the system performs a narrow procedural task, improves a previously completed human activity, or detects decision-making patterns without replacing human assessment - and does not involve profiling - the provider can document that it does not pose a significant risk. However, this assessment can be challenged by national authorities.
Q: What is the deadline for high-risk AI system compliance?
A: Standalone high-risk systems under Annex III must comply by August 2, 2026. High-risk AI systems that are safety components of products under Annex I legislation have until August 2, 2027.
Q: Does the AI Act apply to AI systems already in use before 2026?
A: Systems already on the market or in service before August 2, 2026 become subject to the new requirements only if they undergo significant changes in their design after that date. An exception applies to high-risk AI systems intended to be used by public authorities: their providers and deployers must bring them into compliance by August 2, 2030, even without such changes. Providers should still prepare, as any substantial modification to an existing system can trigger the full set of obligations.