
EU AI Act compliance, fully automated

The EU AI Act is the world's first comprehensive AI regulation, with high-risk obligations mandatory from August 2, 2026. Matproof covers risk classification, prohibited practice screening, data governance, human oversight, and conformity assessment.

Request a demo

Key Features

AI Risk Classification (Art. 6)

Automated AI system inventory with risk classification against Annex III categories. Continuous monitoring for prohibited, high-risk, limited-risk, and minimal-risk systems.

Prohibited Practice Screening (Art. 5)

Screen all AI use cases against Article 5 prohibitions - subliminal manipulation, social scoring, emotion recognition in workplaces and education, and biometric categorisation based on sensitive attributes. Automated alerts and remediation.

Risk Management System (Art. 9)

Build and maintain continuous AI risk management systems. Risk identification, analysis, evaluation, and treatment - with full audit trail and evidence collection.

Human Oversight (Art. 14)

Document human oversight measures for high-risk AI. Track operator roles, intervention procedures, override capabilities, and automation bias prevention.

Technical Documentation (Art. 11)

Generate and maintain Annex IV technical documentation. System design, development methodology, testing procedures, and performance metrics - all in one place.

Conformity Assessment (Art. 43)

Prepare for conformity assessments with automated gap analysis. EU Declaration of Conformity generation and CE marking documentation.

Why Matproof

Covers all EU AI Act obligations in a single platform
AI-powered risk classification against Annex III categories
Pre-built policies for all 8 key AI governance areas
100% EU data residency (hosted in Germany)

Customer stories

Teams that stopped dreading audit season.

85% less prep time

"Matproof saved us months of audit preparation. We connected our tools on Monday and had DORA-mapped evidence by Friday. Our auditor was impressed by the depth of the audit trail."


Katharina Steinbach

Head of Compliance · Novalend GmbH

4 weeks to compliance

"We were staring down a DORA deadline with three frameworks to cover. Matproof got us audit-ready in under four weeks. The policy generator alone was worth the subscription."


Florian Bergmann

CTO · Paymatic AG

100+ controls automated

"The cross-framework mapping is genuinely brilliant. We already had ISO 27001 - Matproof showed us exactly what DORA added on top without duplicating controls. No consultant could do this in the same time."


Dr. Annika Brandt

CISO · Kreditwerk Digital

0 audit findings

"Our last audit finished with zero findings. First time in company history. Matproof's continuous monitoring caught a configuration drift two weeks before the auditors arrived."


Maximilian Vogt

VP Engineering · Finova Technologies

Art. 28 register in 1 day

"Vendor risk was the section we dreaded most for DORA Article 28. Matproof auto-generated our entire ICT third-party register from existing contracts. What took our legal team weeks took Matproof an afternoon."


Julia Hoffmann

Legal & Compliance · FinLeap Connect

3 frameworks, one platform

"Three frameworks - DORA, ISO 27001, SOC 2 - running in parallel on one platform. Matproof's shared evidence library means we collect evidence once and it satisfies all three. The efficiency is remarkable."


Thomas Kessler

Head of IT Risk · Solaris SE

Ready to get started?


See how Matproof automates compliance for your organization.

Request a demo

What is the EU AI Act?

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework regulating artificial intelligence. Adopted by the European Parliament in March 2024 and published in the Official Journal on July 12, 2024, it establishes harmonised rules for the development, placing on market, and use of AI systems across the European Union. The regulation takes a risk-based approach, with obligations ranging from outright prohibitions to transparency and documentation requirements.

The AI Act classifies AI systems into four risk tiers: prohibited (AI practices that pose unacceptable risks to fundamental rights), high-risk (AI used in critical areas like biometrics, education, employment, law enforcement, and essential services), limited-risk (AI requiring specific transparency obligations such as chatbots and deepfakes), and minimal-risk (all other AI with no specific obligations beyond AI literacy).

For high-risk AI systems, the regulation establishes comprehensive requirements covering risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11), record-keeping (Art. 12), transparency (Art. 13), human oversight (Art. 14), and accuracy, robustness, and cybersecurity (Art. 15). Providers must also implement quality management systems and undergo conformity assessments before placing systems on the market.

The AI Act also introduces obligations for general-purpose AI (GPAI) models under Articles 53-55, including technical documentation, copyright compliance, and additional requirements for models posing systemic risks. The regulation is enforced through a combination of the European AI Office, national competent authorities, and market surveillance bodies.

Who Needs EU AI Act Compliance?

The EU AI Act applies to any organisation that develops, deploys, or uses AI systems within or serving the EU market. The scope is deliberately broad to capture the entire AI value chain:

AI Providers (Developers)

  • Companies developing AI systems for the EU market
  • GPAI model providers (e.g. foundation model developers)
  • Open-source AI developers (with exemptions for research)
  • Non-EU companies whose AI output is used in the EU
  • Product manufacturers integrating AI into regulated products
  • Companies adapting or fine-tuning third-party AI models

AI Deployers (Users)

  • Public sector bodies using high-risk AI systems
  • Financial institutions using AI for credit scoring
  • Employers using AI for recruitment and HR decisions
  • Healthcare providers using AI diagnostic systems
  • Companies using AI for biometric identification
  • Any organisation using high-risk AI within the EU

All organisations using AI in the EU must ensure AI literacy among their staff (Art. 4), regardless of whether their AI systems are classified as high-risk. This obligation applies from February 2, 2025. For high-risk AI providers and deployers, the full set of obligations applies from August 2, 2026.

EU AI Act Key Requirements

1. AI Risk Classification (Articles 5-7, Annex III)

The foundation of AI Act compliance is correctly classifying each AI system by risk level. Article 5 defines prohibited practices (effective February 2025). Articles 6-7 and Annex III define high-risk categories covering biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. Organisations must maintain an AI system inventory, classify each system, and regularly review classifications as use cases evolve.
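As a sketch, the inventory-and-classification step can be modelled as a lookup against category lists. The lists below are illustrative subsets only (the real Annex III and Article 5 enumerations are longer), and the system names and `classify` helper are hypothetical - a real classification requires legal review:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Art. 5 practices
    HIGH = "high"              # Annex III categories
    LIMITED = "limited"        # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"        # no specific obligations

# Illustrative subsets only -- not the full Annex III / Art. 5 lists
HIGH_RISK_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}

@dataclass
class AISystem:
    name: str
    use_case: str

def classify(system: AISystem) -> RiskTier:
    """Naive first pass; real classification needs legal review."""
    if system.use_case in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if system.use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

inventory = [
    AISystem("CV screener", "employment"),
    AISystem("Citizen score", "social_scoring"),
    AISystem("Spam filter", "spam_filtering"),
]
classified = {s.name: classify(s) for s in inventory}
```

Keeping the classification as data rather than prose makes the periodic review step a diff against the previous inventory rather than a fresh exercise.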

2. AI Risk Management System (Article 9)

Providers of high-risk AI must establish a continuous, iterative risk management system covering the entire lifecycle. This includes identifying and analysing known and foreseeable risks, estimating and evaluating risks from intended use and misuse, adopting risk mitigation measures, and testing to ensure residual risks are acceptable. The risk management system must be documented, regularly updated, and tested throughout the AI system's lifecycle.
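A minimal risk-register sketch, assuming a simple likelihood x severity scoring scheme and an illustrative acceptability threshold - the Act prescribes neither, so both are placeholders for your own risk policy:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    severity: int    # 1 (negligible) .. 5 (critical)
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

    def treat(self, mitigation: str, reassessed_likelihood: int) -> None:
        """Record a mitigation together with the re-assessed likelihood."""
        self.mitigations.append(mitigation)
        self.likelihood = reassessed_likelihood

ACCEPTABLE_RESIDUAL = 8  # illustrative threshold set by your risk policy

risk = Risk("Discriminatory output in credit scoring", likelihood=4, severity=4)
risk.treat("Pre-release bias testing on held-out cohorts", reassessed_likelihood=2)
acceptable = risk.score <= ACCEPTABLE_RESIDUAL
```

The point of the `treat` method is the audit trail: each mitigation and the re-assessment it justified stay attached to the risk entry for the lifecycle of the system.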

3. Data Governance (Article 10)

Training, validation, and testing datasets for high-risk AI must meet strict governance requirements. This includes ensuring data quality, relevance, representativeness, and freedom from errors. Datasets must be examined for potential biases, particularly those affecting protected characteristics. Where personal data processing is necessary for bias monitoring, specific safeguards must be implemented. Data governance practices must be documented and auditable.
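A crude representativeness check along a single protected attribute might look like the sketch below. The `representation_gaps` helper and its uniform-split baseline are assumptions for illustration - a first smoke test, not a full bias audit:

```python
from collections import Counter

def representation_gaps(records, attribute, tolerance=0.10):
    """Flag groups whose share of the dataset deviates from a uniform
    split by more than `tolerance`; returns {group: observed_share}."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    expected = 1 / len(counts)
    return {
        group: round(n / total, 3)
        for group, n in counts.items()
        if abs(n / total - expected) > tolerance
    }

# Toy training set: 20% / 80% split along a protected attribute
training_set = [{"gender": "female"}] * 200 + [{"gender": "male"}] * 800
gaps = representation_gaps(training_set, "gender")
```

A real examination would compare against the deployment population rather than a uniform baseline, and extend to intersections of attributes.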

4. Human Oversight (Article 14)

High-risk AI systems must be designed to enable effective human oversight. Human operators must be able to fully understand the system's capabilities and limitations, correctly interpret outputs, decide when not to use the system or override its output, and intervene or interrupt operation. For real-time biometric identification, additional safeguards apply including requiring at least two natural persons to confirm identification results.
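One way to sketch an oversight gate is to route low-confidence outputs to a human reviewer who can always override the model. The function name, threshold, and decision labels below are hypothetical:

```python
from typing import Callable

def overseen_decision(
    model_output: str,
    confidence: float,
    reviewer: Callable[[str], str],
    threshold: float = 0.9,
) -> tuple[str, bool]:
    """Return (final_decision, was_escalated). Outputs below the
    confidence threshold are escalated to the human reviewer, whose
    answer replaces the model's suggestion."""
    if confidence < threshold:
        return reviewer(model_output), True
    return model_output, False

# Simulated reviewer that sends the case to manual review instead
decision, escalated = overseen_decision(
    "deny_loan", confidence=0.62, reviewer=lambda suggestion: "manual_review"
)
```

Logging both the escalation flag and the reviewer's decision also feeds the record-keeping obligations under Article 12.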

5. Technical Documentation and Transparency (Articles 11-13)

Providers must prepare detailed technical documentation per Annex IV before placing a high-risk AI system on the market. This covers system design, development methodology, monitoring and testing, accuracy metrics, and known limitations. Deployers must receive clear instructions for use including the provider's identity, system characteristics, performance metrics, known risks, and necessary human oversight measures.

6. Conformity Assessment and CE Marking (Articles 43, 47-48)

Before placing a high-risk AI system on the EU market, providers must conduct a conformity assessment, prepare an EU Declaration of Conformity (Annex V), and affix the CE marking. For most high-risk systems the assessment can be self-performed under the internal-control procedure (Annex VI), but biometric AI systems generally require third-party assessment by a notified body (Annex VII). Systems must also be registered in the EU database.

7. General-Purpose AI Model Obligations (Articles 53-55)

Providers of GPAI models must prepare and maintain technical documentation, provide information to downstream providers, establish copyright compliance policies, and publish training data summaries. GPAI models posing systemic risk (based on cumulative compute or Commission designation) face additional obligations: model evaluation for systemic risks, adversarial testing, serious incident reporting to the AI Office, and adequate cybersecurity protections.
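The compute-based presumption can be sketched as a simple check: Article 51 presumes systemic risk above 10^25 cumulative training FLOPs, and the Commission may also designate models directly. A minimal sketch:

```python
# Art. 51: systemic risk is presumed above 10^25 cumulative training
# FLOPs, or on direct designation by the Commission.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(
    training_flops: float, commission_designated: bool = False
) -> bool:
    return commission_designated or training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

small = presumed_systemic_risk(3e24)      # below threshold
frontier = presumed_systemic_risk(2e25)   # above threshold
```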

Penalties for EU AI Act Non-Compliance

The AI Act establishes one of the most significant penalty regimes in EU technology regulation, with fines calculated as the higher of a fixed amount or a percentage of global annual turnover:

Up to EUR 35M / 7%

for violations of prohibited AI practices (Article 5) - the most severe penalties in the regulation

Up to EUR 15M / 3%

for non-compliance with high-risk AI obligations, GPAI model requirements, or notified body obligations

Up to EUR 7.5M / 1%

for supplying incorrect, incomplete, or misleading information to authorities or notified bodies

SME Proportionality

SMEs and startups face the lower of the fixed amount or percentage cap - ensuring penalties remain proportionate

Beyond financial penalties, non-compliance can result in withdrawal of AI systems from the market, restriction of use, and reputational damage. National market surveillance authorities have broad investigative and corrective powers, and the European AI Office directly oversees GPAI model providers.

How to Prepare for EU AI Act Compliance

With high-risk AI obligations applying from August 2, 2026, organisations should begin preparation now. Here is a structured approach to achieving compliance:

  1. AI System Inventory and Classification

    Create a comprehensive inventory of all AI systems used or developed by your organisation. Classify each system according to the risk tiers defined in Articles 5-7 and Annex III. Identify prohibited practices, high-risk use cases, and transparency obligations. This inventory becomes the foundation for all subsequent compliance activities.

  2. Gap Analysis Against Requirements

    For each high-risk AI system, assess current practices against the full set of Article 9-15 requirements. Identify gaps in risk management, data governance, technical documentation, logging, transparency, human oversight, and accuracy/robustness. Prioritise remediation based on risk and implementation complexity.

  3. AI Risk Management System

    Establish a continuous, iterative risk management process for each high-risk AI system. Document risk identification, analysis, evaluation, and treatment measures. Implement testing protocols to validate that residual risks are acceptable. Ensure the system covers the entire AI lifecycle from design through deployment and post-market monitoring.

  4. Data Governance and Bias Testing

    Implement data governance practices for training, validation, and testing datasets. Establish quality criteria, representativeness checks, and bias examination processes. Document data provenance, preprocessing steps, and any limitations. Set up ongoing bias monitoring for deployed systems.

  5. Conformity Assessment Preparation

    Prepare technical documentation per Annex IV, implement a quality management system per Article 17, and prepare the EU Declaration of Conformity per Annex V. For biometric identification systems used by law enforcement, engage a notified body for third-party assessment. Register systems in the EU database per Article 49.

  6. AI Literacy and Ongoing Compliance

    Implement an AI literacy programme for all staff per Article 4. Establish post-market monitoring (Article 72), serious incident reporting (Article 73), and ongoing human oversight processes. Integrate AI Act compliance into your organisation's existing governance, risk, and compliance framework.
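As a trivial illustration, the six preparation steps above can be tracked as a readiness checklist; the step identifiers below are shorthand, not official terms:

```python
PREPARATION_STEPS = [
    "inventory_and_classification",
    "gap_analysis",
    "risk_management_system",
    "data_governance_and_bias_testing",
    "conformity_assessment_prep",
    "ai_literacy_and_ongoing_compliance",
]

def readiness(completed: set[str]) -> float:
    """Fraction of preparation steps completed (0.0 .. 1.0)."""
    unknown = completed - set(PREPARATION_STEPS)
    if unknown:
        raise ValueError(f"Unknown steps: {unknown}")
    return len(completed) / len(PREPARATION_STEPS)

progress = readiness({
    "inventory_and_classification",
    "gap_analysis",
    "risk_management_system",
})
```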

Frequently Asked Questions about the EU AI Act

What is the EU AI Act?

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It establishes a risk-based approach to regulating AI systems, with obligations ranging from outright prohibitions to transparency requirements. High-risk AI systems face the most stringent requirements including risk management, data governance, and conformity assessments. Full application begins August 2, 2026.

Who needs to comply with the EU AI Act?

The AI Act applies to providers (developers) and deployers (users) of AI systems placed on the EU market or whose output is used in the EU. This includes EU-based companies, non-EU companies serving EU markets, and providers of general-purpose AI models. All organisations using AI in the EU must ensure AI literacy among their staff.

What are the penalties for non-compliance?

Penalties are severe: up to EUR 35 million or 7% of global annual turnover for prohibited AI practices, up to EUR 15 million or 3% for high-risk AI obligations, and up to EUR 7.5 million or 1% for providing incorrect information. SMEs and startups face proportionally lower caps.

What qualifies as a high-risk AI system?

High-risk AI systems are defined in Annex III and include AI used in: biometric identification, critical infrastructure management, education and vocational training, employment and worker management, access to essential services, law enforcement, migration and border control, and administration of justice.

What is the difference between the EU AI Act and ISO 42001?

The EU AI Act is a legally binding regulation with enforcement and penalties, while ISO 42001 is a voluntary international standard for AI management systems. ISO 42001 can help demonstrate compliance but does not guarantee it. The AI Act has specific requirements like conformity assessments, CE marking, and incident reporting that go beyond ISO 42001.

When does the EU AI Act become enforceable?

The AI Act entered into force August 1, 2024 with phased implementation: prohibited AI practices from February 2, 2025; GPAI model obligations from August 2, 2025; and full high-risk AI requirements from August 2, 2026.