EU AI Act Fines and Penalties: What Non-Compliance Will Cost You
Introduction
The EU AI Act does not leave compliance to goodwill. It establishes one of the most severe penalty regimes in EU regulatory history - with maximum fines of EUR 35 million or 7% of global annual turnover, whichever is higher. For context, that exceeds even the GDPR's maximum of EUR 20 million or 4% of turnover.
With the full application date of August 2, 2026 approaching for high-risk AI system requirements, organizations need to understand exactly what non-compliance will cost them, which violations trigger which penalty tiers, how enforcement will work, and how penalties compare to other EU regulations.
This article provides a complete breakdown of the AI Act's penalty structure, enforcement mechanisms, and practical guidance on reducing regulatory risk.
If you are not yet sure whether your AI systems comply, take the free AI Act Readiness Assessment to identify your gaps before regulators do.
The Three Penalty Tiers
The AI Act establishes three tiers of administrative fines under Article 99, each corresponding to different categories of violations.
Tier 1: EUR 35 Million or 7% of Global Annual Turnover
Applies to: Violations of prohibited AI practices under Article 5.
This is the highest penalty tier, reserved for the most serious violations. It applies when an organization develops, deploys, or uses AI systems that are explicitly banned:
- Social scoring systems that evaluate or classify people based on social behavior or personal characteristics and lead to detrimental or unfavourable treatment (the final Act's prohibition covers public and private actors alike)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (outside the narrow exceptions)
- AI systems deploying subliminal manipulation techniques that cause harm
- AI exploiting vulnerabilities of specific groups due to their age, disability, or social or economic situation
- Emotion recognition in the workplace or educational institutions (beyond permitted exceptions)
- Untargeted scraping of facial images from the internet or CCTV to build recognition databases
- Biometric categorization inferring sensitive attributes (race, political opinions, religion, sexual orientation)
- Predictive policing based solely on profiling
Example scenario: A financial institution deploys a social scoring system that aggregates citizen behavior data to determine access to financial services. This directly violates Article 5 and exposes the organization to fines of up to EUR 35 million or 7% of its worldwide annual turnover.
For a company with EUR 1 billion in global revenue, 7% amounts to EUR 70 million - far exceeding the fixed EUR 35 million cap. The regulation applies whichever is higher.
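The "whichever is higher" mechanics are simple enough to express in code. A minimal sketch of the standard ceiling calculation (the function name and structure are illustrative, not taken from the Act itself):

```python
def fine_ceiling(fixed_cap_eur: int, turnover_pct: float, turnover_eur: int) -> float:
    """Standard AI Act rule (Article 99): the applicable ceiling is the
    HIGHER of the fixed cap and the percentage of global annual turnover."""
    turnover_based = turnover_eur * turnover_pct / 100
    return float(max(fixed_cap_eur, turnover_based))

# Tier 1 ceiling for a company with EUR 1 billion in global revenue:
print(fine_ceiling(35_000_000, 7, 1_000_000_000))  # 70000000.0

# For a company with EUR 100 million turnover, the fixed cap dominates:
print(fine_ceiling(35_000_000, 7, 100_000_000))    # 35000000.0
```

The same function covers all three tiers by swapping in the tier's fixed cap and percentage.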
Tier 2: EUR 15 Million or 3% of Global Annual Turnover
Applies to: Non-compliance with the requirements for high-risk AI systems and other substantive provisions.
This tier covers the broadest range of violations, including:
- Failure to comply with high-risk AI system requirements (Articles 6-49):
- Missing or inadequate risk management system (Art. 9)
- Data governance failures (Art. 10)
- Insufficient technical documentation (Art. 11)
- Inadequate record-keeping and logging (Art. 12)
- Failure to provide transparency information (Art. 13)
- Insufficient human oversight design (Art. 14)
- Inadequate accuracy, robustness, or cybersecurity (Art. 15)
- Missing quality management system (Art. 17)
- Failure to conduct conformity assessment (Art. 43)
- Failure to register high-risk AI system in EU database (Art. 49)
- Non-compliance with deployer obligations (Art. 26)
- Failure to meet transparency obligations for limited-risk systems (Art. 50)
- Non-compliance with GPAI model provider obligations (Chapter V; for GPAI providers, the Commission imposes fines up to the same ceiling under Article 101)
- Failure to cooperate with national authorities
Example scenario: A FinTech company deploys an AI-driven credit scoring system classified as high-risk under Annex III but fails to implement an adequate risk management system, does not provide proper human oversight, and lacks the required technical documentation. Each of these deficiencies constitutes a separate potential violation under Tier 2.
Tier 3: EUR 7.5 Million or 1% of Global Annual Turnover
Applies to: Supplying incorrect, incomplete, or misleading information to national competent authorities or notified bodies.
This tier targets dishonesty and obstruction in regulatory interactions:
- Providing false or incomplete data in responses to authority requests
- Misleading information in conformity assessment documentation
- Inaccurate registration information in the EU database
- Obstruction or misleading conduct during market surveillance activities
Example scenario: During an investigation, a company provides incomplete training data documentation to the national market surveillance authority, omitting records of known bias issues. This constitutes supplying misleading information and triggers Tier 3 penalties.
Penalty Calculation Factors
The AI Act specifies that fines must be "effective, proportionate, and dissuasive." Article 99(7) lists the factors authorities must consider when determining the specific fine amount:
- Nature, gravity, and duration of the infringement, including the number of affected persons and the level of damage
- Intentional or negligent character of the infringement
- Actions taken to mitigate the damage suffered by affected persons
- Degree of responsibility, considering technical and organizational measures implemented
- Previous infringements by the same operator
- Degree of cooperation with the supervisory authority
- Manner in which the infringement became known to the authority (self-reported vs. discovered)
- Size, annual turnover, and market share of the operator
- Financial benefits gained or losses avoided due to the infringement
- Other aggravating or mitigating factors applicable to the circumstances
These factors closely mirror the GDPR's penalty calculation criteria under Article 83(2), and enforcement practice under the GDPR provides useful precedent for how AI Act authorities may approach fine calculations.
Special Rules for SMEs and Startups
The AI Act includes proportionality provisions for smaller organizations. Under Article 99(6):
- For SMEs, including startups, the fine shall be the lower of the two amounts (percentage of turnover vs. fixed amount), not the higher
- This means a startup with EUR 2 million in turnover faces a maximum Tier 1 fine of EUR 140,000 (7% of EUR 2M) rather than EUR 35 million
This is a significant concession, but the fines remain material for small companies. A EUR 140,000 fine for a seed-stage startup could be existential.
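The SME rule simply inverts the comparison: the applicable ceiling becomes the lower of the two amounts. A sketch under the same illustrative conventions as above:

```python
def sme_fine_ceiling(fixed_cap_eur: int, turnover_pct: float, turnover_eur: int) -> float:
    """Article 99(6): for SMEs and startups the ceiling is the LOWER of the
    fixed cap and the turnover-based amount, not the higher."""
    turnover_based = turnover_eur * turnover_pct / 100
    return float(min(fixed_cap_eur, turnover_based))

# Tier 1 ceiling for a startup with EUR 2 million annual turnover:
print(sme_fine_ceiling(35_000_000, 7, 2_000_000))  # 140000.0
```

For any SME whose turnover is below the break-even point (EUR 500 million for Tier 1), the turnover-based amount is the one that applies.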
Penalties for EU Institutions and Agencies
When EU institutions, agencies, or bodies violate the AI Act, a separate enforcement mechanism applies. The European Data Protection Supervisor (EDPS) handles complaints and can impose fines of up to:
- EUR 1.5 million for violations of prohibited practices
- EUR 750,000 for other non-compliance
These lower caps reflect the public-sector context but still represent meaningful sanctions.
How Enforcement Will Work
National Market Surveillance Authorities
Each EU Member State must designate one or more market surveillance authorities to enforce the AI Act. These authorities have the power to:
- Conduct investigations and audits
- Request access to AI systems, data, documentation, and source code
- Order corrective measures, including withdrawal from the market
- Impose administrative fines
- Issue warnings and reprimands
Some Member States are establishing dedicated AI oversight bodies, while others are assigning responsibilities to existing regulators (data protection authorities, sector regulators, or consumer protection agencies).
The European AI Office
The European AI Office, established within the European Commission, coordinates enforcement at the EU level. Its responsibilities include:
- Supervising GPAI model providers directly (the Commission, acting through the AI Office, has exclusive enforcement authority for GPAI rules)
- Supporting national authorities in cross-border enforcement
- Issuing guidelines and best practices
- Managing the EU database for high-risk AI systems
- Coordinating with other EU bodies (EDPB, ENISA)
Enforcement Timeline
| Date | Enforcement Milestone |
|---|---|
| February 2, 2025 | Prohibitions apply - unacceptable-risk AI practices are banned (the administrative fines under Article 99 become applicable on August 2, 2025) |
| August 2, 2025 | GPAI rules enforceable - AI Office can investigate GPAI model providers |
| August 2, 2025 | Member States must have designated market surveillance authorities |
| August 2, 2026 | Full enforcement - high-risk AI requirements, transparency obligations, and deployer obligations enforceable |
Multiple Penalties for the Same Violation
The AI Act constrains cumulative punishment for the same conduct. Among the penalty factors in Article 99(7), authorities must take into account fines already imposed on the same operator - whether by other authorities under the AI Act, or under other Union or national law (e.g., GDPR, DORA, product safety law) - for the same activity or omission. Combined with the general EU-law principle of ne bis in idem, this limits the stacking of fines for a single factual violation. Different violations, however, can still be penalized separately.
For example, if an AI system violates both the AI Act's data governance requirements and the GDPR's data protection principles through the same conduct, a GDPR fine already imposed for that conduct must be factored into the AI Act penalty rather than simply added on top.
Comparison with Other EU Regulatory Penalties
| Regulation | Maximum Fine | Turnover Cap | Effective Since |
|---|---|---|---|
| EU AI Act (Tier 1) | EUR 35 million | 7% of global turnover | Feb 2025 (prohibitions), Aug 2026 (full) |
| GDPR | EUR 20 million | 4% of global turnover | May 2018 |
| DORA | Varies by Member State | Varies - up to 1% of average daily global turnover per day of non-compliance | Jan 2025 |
| NIS2 | EUR 10 million | 2% of global turnover | Oct 2024 |
| Digital Services Act | N/A | 6% of global turnover | Feb 2024 |
| Digital Markets Act | N/A | 10% of global turnover (20% for repeat) | May 2023 |
The AI Act's 7% of global turnover ceiling makes it the second-highest percentage-based penalty in EU digital regulation, behind only the Digital Markets Act.
For organizations subject to multiple frameworks, the cumulative regulatory risk is substantial. A large financial institution could face AI Act fines, GDPR fines, DORA penalties, and NIS2 sanctions simultaneously for overlapping failures.
Beyond Fines: Other Consequences of Non-Compliance
Financial penalties are only part of the picture. Non-compliance with the AI Act can trigger additional consequences:
Market Withdrawal and Recall
National authorities can order the withdrawal or recall of non-compliant AI systems from the EU market (Article 79). This means:
- Your AI system must be removed from sale and use across all 27 Member States
- Existing deployments may need to be deactivated
- Revenue loss from forced market exit
Corrective Actions
Authorities can require specific corrective actions with defined deadlines:
- Bringing the AI system into compliance
- Suspending the system until compliance is achieved
- Modifying the system's design, data practices, or deployment
Reputational Damage
Non-compliance findings are public. National authorities maintain records of enforcement actions, and significant cases will attract media attention. For B2B companies, particularly in regulated sectors like financial services, a finding of AI Act non-compliance can erode client trust and partnership opportunities.
Criminal Liability
While the AI Act itself establishes administrative penalties, Member States can implement additional sanctions, including criminal penalties, under their national laws. Some jurisdictions may choose to criminalize certain violations, particularly those involving prohibited AI practices.
How to Reduce Your Regulatory Risk
1. Start Compliance Now
The August 2, 2026 deadline is approaching. Demonstrating that you have been working toward compliance - even if not yet fully compliant - is a significant mitigating factor in penalty calculations. Regulators look more favorably on organizations that show good-faith effort.
2. Know Your Risk Classification
The most common compliance failure will be misclassifying (or failing to classify) AI systems. Conduct a thorough AI system inventory and risk classification exercise. Take the free AI Act Readiness Assessment for a structured approach.
3. Document Everything
Comprehensive documentation is both a compliance requirement and your best defense in an investigation. If you can demonstrate that you identified risks, implemented mitigations, and followed documented procedures, your penalty exposure drops significantly.
4. Invest in Compliance Infrastructure
Manual compliance does not scale. Use a compliance management platform like Matproof to automate evidence collection, maintain documentation, track controls, and manage compliance across the AI Act alongside DORA, GDPR, and other frameworks. Start a free trial.
5. Establish Cooperation Protocols
Your degree of cooperation with authorities is an explicit penalty factor. Establish internal protocols for responding to regulatory inquiries, including:
- Designated points of contact
- Document retention and retrieval procedures
- Incident escalation and reporting workflows
- Legal counsel engagement procedures
6. Self-Report Issues
The AI Act considers how a violation became known to the authority. Self-reporting issues before they are discovered externally is a mitigating factor. Establish internal monitoring and audit processes that catch problems early.
Frequently Asked Questions
Q: Can the AI Act fines be combined with GDPR fines for the same violation?
A: Not mechanically stacked. The penalty factors in Article 99(7) require authorities to take into account fines already imposed on the same operator under other Union or national law (such as the GDPR) for the same activity or omission, and the EU-law principle of ne bis in idem limits double punishment for the same conduct. However, different aspects of the same AI system can give rise to separate violations under each regulation, and those distinct violations can be penalized individually.
Q: Who exactly imposes the fines - the EU or national governments?
A: National market surveillance authorities in each Member State impose fines for most AI Act violations. The European AI Office has direct enforcement authority only for GPAI model rules (Chapter V). Each Member State will designate its own authority, which may be a new body or an existing regulator. For financial entities, the national financial supervisor may also play a role alongside the market surveillance authority.
Q: What happens if I am a small startup - are the fines proportionate?
A: The AI Act includes proportionality provisions for SMEs and startups. For smaller companies, the fine is capped at the lower of the two amounts (percentage of turnover vs. fixed amount). So a startup with EUR 500,000 in annual revenue faces a maximum Tier 1 fine of EUR 35,000 (7% of EUR 500K) rather than EUR 35 million. While significantly lower, this can still be material for early-stage companies.
Q: Are there any grace periods or warnings before fines are imposed?
A: The AI Act allows authorities to issue warnings and reprimands before imposing fines. In practice, regulators typically follow a graduated enforcement approach - especially in the early years of a new regulation - issuing guidance, warnings, and corrective orders before resorting to fines. However, egregious violations (particularly of prohibited practices) may trigger immediate penalties.
Q: If I withdraw my AI system from the EU market, can I avoid fines for past non-compliance?
A: No. Withdrawing a system does not retroactively cure past violations. If you operated a non-compliant AI system during a period when the requirements were in force, you can be fined for that period. Withdrawal may, however, be considered a mitigating factor in the penalty calculation.