EU AI Act Compliance for FinTech: What You Need to Know
Introduction
FinTech companies sit at the intersection of two powerful regulatory forces in 2026: the EU AI Act and the Digital Operational Resilience Act (DORA). Both are now in force, both carry significant penalties, and both directly target core FinTech operations.
The AI Act classifies several AI applications common in financial services - credit scoring, insurance risk assessment, fraud detection - as high-risk. At the same time, DORA imposes comprehensive ICT risk management obligations on virtually all EU financial entities. Where these two frameworks overlap, compliance becomes both more complex and more consequential.
This guide breaks down exactly how the AI Act affects FinTech operations, which systems are classified as high-risk, how DORA and the AI Act interact, what the compliance timeline looks like, and what practical steps FinTech companies should take.
Take the free AI Act Readiness Assessment to understand where your FinTech stands on AI Act compliance.
Which FinTech AI Systems Are High-Risk?
The AI Act's Annex III explicitly targets several AI applications that are standard in financial services. Here is a breakdown of the most relevant classifications for FinTech.
Credit Scoring and Creditworthiness Assessment
Classification: High-risk under Annex III, point 5(b)
AI systems intended to evaluate the creditworthiness of natural persons are explicitly listed as high-risk. This covers:
- Traditional credit scoring models using machine learning
- Alternative credit scoring systems using non-traditional data (social media activity, mobile phone usage, behavioral data)
- Automated loan approval or denial systems
- Mortgage risk assessment algorithms
- Buy-now-pay-later (BNPL) creditworthiness checks
- Real-time credit limit adjustment systems
Exception: AI systems used solely for the purpose of detecting financial fraud are explicitly excluded from this high-risk category. A system designed to flag fraudulent loan applications is not high-risk under this provision - but a system that simultaneously assesses creditworthiness and fraud risk must comply with high-risk requirements for the creditworthiness component.
Practical impact: Most FinTech lenders, neobanks, and BNPL providers use AI-driven credit decisions. These systems will need full compliance with Chapter III requirements: risk management, data governance, transparency, human oversight, technical documentation, and conformity assessment.
Insurance Risk Assessment and Pricing
Classification: High-risk under Annex III, point 5(c)
AI systems used for risk assessment and pricing in relation to natural persons in the case of life and health insurance are classified as high-risk. This includes:
- Health insurance underwriting algorithms
- Life insurance risk scoring models
- AI systems that adjust premiums based on individual risk profiles
- Predictive models assessing mortality or morbidity risk
Note: Property and casualty insurance AI systems are not explicitly listed as high-risk under this provision, though they may still qualify under other categories depending on their impact on fundamental rights.
Fraud Detection
Classification: Generally not high-risk
The AI Act explicitly carves out fraud detection from the high-risk credit scoring category. AI systems designed to detect financial fraud - transaction monitoring, anti-money laundering (AML) pattern detection, identity fraud prevention - are not classified as high-risk under Annex III, point 5(b).
However, there are important nuances:
- If a fraud detection system also makes decisions about access to financial services (e.g., automatically blocking accounts), it may still fall within Annex III, point 5 - for instance under point 5(b) - despite the fraud-detection carve-out
- Fraud detection systems using biometric identification (e.g., voice recognition to verify callers) may be high-risk under Annex III, point 1
- Fraud detection systems must still comply with GDPR requirements for automated decision-making under Article 22
Algorithmic Trading
Classification: Context-dependent
Algorithmic trading systems are not explicitly listed in Annex III. Their classification depends on their specific function:
- Pure execution algorithms (market making, order routing) are likely minimal risk
- AI systems that autonomously make investment decisions affecting natural persons' assets could fall under the high-risk category if they influence access to essential financial services
- Robo-advisors providing personalized investment advice may qualify as high-risk depending on their decision-making authority
The European Commission may issue further guidance or delegated acts clarifying the classification of trading-related AI systems.
Customer Onboarding and KYC
Classification: Context-dependent
The classification of AI systems used for Know Your Customer (KYC) checks and customer onboarding depends on their specific function:
- Biometric verification (facial recognition for identity verification): High-risk under Annex III, point 1, if used for remote biometric identification (not just verification)
- Document verification AI (reading and validating identity documents): Generally not high-risk if limited to narrow procedural tasks (Article 6(3) exemption may apply)
- Risk-based customer categorization for AML purposes: May be high-risk if it influences access to financial services
Chatbots and Virtual Assistants
Classification: Limited risk (transparency obligations only)
AI-powered customer service chatbots are classified as limited risk. The primary obligation is to inform customers that they are interacting with an AI system, not a human. This is a transparency requirement under Article 50, not a high-risk obligation.
However, if a chatbot makes consequential decisions (e.g., approving or denying a financial product), the decision-making component may be separately classified as high-risk.
How the AI Act Intersects with DORA
FinTech companies subject to DORA face a unique dual compliance challenge. DORA has been fully applicable since January 17, 2025, and the AI Act's high-risk provisions apply from August 2, 2026. Here is how they interact.
Overlapping Requirements
| Requirement Area | DORA | AI Act | Overlap |
|---|---|---|---|
| Risk management | ICT risk management framework (Art. 6-16) | AI risk management system (Art. 9) | Both require documented risk management. AI-specific risks should be integrated into the DORA ICT risk framework. |
| Incident reporting | Major ICT incident reporting to competent authorities (Art. 19) | Serious incident reporting for high-risk AI systems (Art. 73) | An AI system failure causing a major ICT incident triggers both reporting obligations - potentially to different authorities. |
| Testing | Digital operational resilience testing, including TLPT (Art. 24-27) | Testing and validation of high-risk AI systems (Art. 9, 15) | AI system testing can be integrated into DORA's resilience testing program. |
| Third-party risk | ICT third-party risk management (Art. 28-30) | Provider and deployer obligations for third-party AI (Art. 25-26) | Third-party AI providers should be included in the DORA Register of Information and subject to contractual provisions. |
| Documentation | Register of Information for ICT third-party arrangements (Art. 28(3)) | Technical documentation (Art. 11), EU database registration (Art. 49) | Documentation requirements can be aligned but serve different purposes. |
| Governance | Management body accountability for ICT risk (Art. 5) | Provider and deployer governance obligations (Art. 9, 26) | Board-level oversight should cover both ICT resilience and AI governance. |
Practical Integration Strategy
Rather than building two separate compliance programs, FinTech companies should integrate AI Act requirements into their existing DORA framework:
Extend your DORA ICT risk management framework to include AI-specific risks. Article 9 of the AI Act requires a risk management system - this can be a module within your DORA risk framework rather than a standalone system.
Add AI systems to your DORA Register of Information. If you use third-party AI providers (e.g., cloud-based credit scoring APIs), include them in your ICT third-party register with AI-specific contract clauses.
Integrate AI testing into your resilience testing program. DORA requires regular resilience testing. AI system testing (accuracy, robustness, bias) can be incorporated into this program.
Align incident reporting. Establish a single incident management process that evaluates whether an incident triggers DORA reporting, AI Act reporting, or both - a minimal triage sketch follows these steps.
Unify governance. Your management body should oversee both ICT resilience and AI governance through integrated reporting and accountability structures.
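To make the single-intake incident process concrete, here is a minimal sketch in Python. All names (`Incident`, `reporting_obligations`, the boolean criteria) are hypothetical; the actual classification thresholds come from DORA's incident-classification standards and Article 73 of the AI Act, and legal review decides what is ultimately reported.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """A hypothetical internal incident record; field names are illustrative."""
    is_major_ict_incident: bool   # meets DORA's major-incident criteria
    involves_high_risk_ai: bool   # a high-risk AI system contributed
    caused_serious_harm: bool     # meets the AI Act's "serious incident" bar

def reporting_obligations(incident: Incident) -> list[str]:
    """Evaluate which reporting duties an incident may trigger.

    One intake, two possible notification paths - mirroring the
    integrated process described above.
    """
    duties = []
    if incident.is_major_ict_incident:
        duties.append("DORA Art. 19 report to the financial supervisor")
    if incident.involves_high_risk_ai and incident.caused_serious_harm:
        duties.append("AI Act Art. 73 report to the market surveillance authority")
    return duties

# Example: an AI failure that is both a major ICT incident and a serious AI incident
print(reporting_obligations(Incident(True, True, True)))
```

The value of this design is the single intake: one incident record, evaluated once, with both notification paths checked side by side instead of in two disconnected workflows.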
A platform like Matproof manages both DORA and AI Act compliance in a single dashboard, mapping overlapping controls and eliminating duplication. Start a free trial.
Specific Obligations for FinTech AI Systems
For Credit Scoring Providers
If you develop or provide an AI-based credit scoring system, you are a provider of a high-risk AI system. Your obligations include:
Data governance (Article 10):
- Demonstrate that training data is representative and free of discriminatory bias (a minimal screening sketch follows this list)
- Document data sources, preprocessing steps, and labeling criteria
- Ensure datasets cover relevant demographic groups to prevent discriminatory outcomes
- Regularly validate data quality throughout the system lifecycle
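As a first screening step for the representativeness and bias points above, a data team might compare outcome rates across demographic groups in the training set. The sketch below is illustrative only: the field names are invented, and the four-fifths disparity ratio is a common industry heuristic borrowed from US employment testing practice, not a threshold the AI Act sets.

```python
from collections import defaultdict

def approval_rate_by_group(records: list[dict]) -> dict[str, float]:
    """Approval rate per demographic group in a labeled training set.

    `records` is a hypothetical list of {"group": ..., "approved": bool}
    rows; real data governance checks cover far more dimensions.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approvals[r["group"]] += int(r["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

data = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
rates = approval_rate_by_group(data)
# Flag when the lowest group rate falls below 80% of the highest
# (the "four-fifths rule" - a screening heuristic, not an AI Act threshold).
disparity = min(rates.values()) / max(rates.values())
print(rates, "disparity ratio:", round(disparity, 2))
```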
Transparency (Article 13):
- Provide deployers (banks, lenders using your system) with clear instructions for use
- Document the system's intended purpose, performance metrics, and known limitations
- Specify the level of human oversight required
- Explain the logic involved in the AI decision to the extent possible
Human oversight (Article 14):
- Design the system to allow human review of credit decisions
- Enable human overseers to override or reverse AI-generated credit assessments
- Provide information necessary for overseers to interpret the system's output
Conformity assessment (Article 43):
- Conduct a self-assessment (internal control under Annex VI) before placing the system on the market
- Draw up an EU declaration of conformity
- Register the system in the EU database
For FinTechs Deploying Third-Party AI
If you use a third-party AI system for credit scoring, fraud detection, or other purposes, you are a deployer with obligations under Article 26:
- Use the system strictly in accordance with the provider's instructions
- Assign trained personnel for human oversight of high-risk decisions
- Ensure the quality and relevance of input data you feed into the system
- Monitor the system's operation and report malfunctions to the provider
- Conduct a fundamental rights impact assessment (Article 27) if you are a body governed by public law, provide a public service, or deploy creditworthiness or life/health insurance pricing systems (Annex III, points 5(b) and 5(c))
- Inform individuals affected by AI-driven decisions (e.g., loan applicants)
- Keep automatically generated logs for at least 6 months (a structured logging sketch follows this list)
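For the log-retention point above, a minimal sketch of an append-only decision log with a retention horizon might look like the following. The schema is entirely hypothetical - the AI Act requires keeping the logs a high-risk system generates automatically, and says nothing about this particular format.

```python
import json
import time
import uuid

RETENTION_DAYS = 183  # at least six months, per Article 26(6); adjust to policy

def log_ai_decision(system_id: str, inputs_hash: str, output: str,
                    model_version: str, reviewer: str | None = None) -> dict:
    """Append one structured log entry for a high-risk AI decision.

    Field names are illustrative, not a mandated schema.
    """
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system_id": system_id,
        "model_version": model_version,
        "inputs_hash": inputs_hash,  # hash, not raw data, to limit GDPR exposure
        "output": output,
        "human_reviewer": reviewer,
        "retain_until": time.time() + RETENTION_DAYS * 86400,
    }
    with open("ai_decision_log.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_ai_decision("credit-scoring-v2", "sha256:abc123", "declined",
                model_version="2.4.1", reviewer=None)
```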
Interaction with Existing Financial Regulation
FinTech companies must also consider how the AI Act interacts with existing financial regulation:
- CRD/CRR: Banks using AI for credit risk must ensure AI Act compliance does not conflict with prudential requirements
- MiFID II: AI-based investment advice and algorithmic trading face both MiFID II and AI Act obligations
- PSD2/PSD3: Payment service providers using AI for transaction monitoring must comply with both frameworks
- Consumer Credit Directive: AI-driven credit decisions must comply with both the AI Act and consumer credit disclosure requirements
- GDPR Article 22: The right not to be subject to solely automated decision-making with legal effects applies alongside AI Act requirements - see our AI Act vs GDPR comparison
Timeline and Enforcement for FinTech
Key Dates
| Date | Milestone |
|---|---|
| January 17, 2025 | DORA fully applicable - ICT risk management, incident reporting, third-party risk requirements in force |
| February 2, 2025 | AI Act prohibitions in effect (e.g., social scoring) and AI literacy obligation |
| August 2, 2025 | GPAI model rules apply - relevant if your FinTech uses or provides foundation models |
| August 2, 2026 | High-risk AI system requirements fully applicable - credit scoring, insurance AI, and other Annex III systems must comply |
Who Enforces?
Enforcement involves multiple authorities:
- National financial supervisors (e.g., BaFin in Germany, the AMF in France) enforce DORA
- National market surveillance authorities (designated by each Member State) enforce the AI Act
- Data protection authorities enforce GDPR aspects of AI systems
- The European AI Office coordinates AI Act enforcement at the EU level
For FinTech companies, this means potential oversight from three or more regulatory bodies simultaneously. Maintaining clear, centralized compliance documentation is essential.
Practical Compliance Steps for FinTech
1. Conduct an AI System Audit
Map every AI system in your organization. For each system, document the following (a structured inventory sketch follows the list):
- What it does and its intended purpose
- What data it uses and how
- What decisions it influences or makes
- Who is affected by its outputs
- Whether it falls under Annex III high-risk categories
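One lightweight way to capture this audit is a structured inventory record per system. The sketch below uses invented field names that mirror the questions in the list; a real inventory would add owners, review dates, and links to documentation.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "high-risk"        # Annex III systems, e.g. credit scoring
    LIMITED = "limited risk"  # transparency duties only, e.g. chatbots
    MINIMAL = "minimal risk"  # e.g. internal analytics
    EXEMPT = "exempt"         # e.g. the fraud detection carve-out

@dataclass
class AISystemRecord:
    """One row of the AI inventory; fields mirror the audit questions above."""
    name: str
    intended_purpose: str
    data_sources: list[str]
    decisions_influenced: str
    affected_persons: str
    annex_iii_category: str | None  # e.g. "5(b)", or None if not listed
    risk_tier: RiskTier

inventory = [
    AISystemRecord(
        name="bnpl-credit-model",
        intended_purpose="BNPL creditworthiness checks at checkout",
        data_sources=["credit bureau data", "transaction history"],
        decisions_influenced="approve or decline the purchase",
        affected_persons="retail applicants",
        annex_iii_category="5(b)",
        risk_tier=RiskTier.HIGH,
    ),
]
print(inventory[0].name, "->", inventory[0].risk_tier.value)
```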
2. Classify Your Systems
Using the categories above, determine the following (a rough tiering helper is sketched after this list):
- Which systems are high-risk (credit scoring, insurance pricing)
- Which are exempt (fraud detection, narrow procedural tasks)
- Which have transparency obligations only (chatbots)
- Which are minimal risk (internal analytics tools)
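A first-pass tiering helper that encodes this taxonomy might look like the sketch below. It is deliberately naive: the category strings are invented, and real classification requires legal review of each system against Annex III and the Article 6(3) exemptions.

```python
def classify(use_case: str) -> str:
    """Rough first-pass tiering encoding the four buckets above.

    Purely illustrative - not a substitute for legal assessment.
    """
    high_risk = {"credit scoring", "life insurance pricing",
                 "health insurance pricing", "remote biometric identification"}
    likely_exempt = {"fraud detection", "document verification"}
    transparency_only = {"customer chatbot", "virtual assistant"}

    if use_case in high_risk:
        return "high-risk: full Chapter III compliance"
    if use_case in likely_exempt:
        return "carve-out likely, but check for high-risk side functions"
    if use_case in transparency_only:
        return "limited risk: Article 50 transparency duties"
    return "minimal risk or unclear: assess individually"

for uc in ["credit scoring", "fraud detection", "customer chatbot", "internal analytics"]:
    print(uc, "->", classify(uc))
```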
Take the free AI Act Readiness Assessment for a structured classification walkthrough.
3. Integrate with Your DORA Program
Do not build a standalone AI Act compliance program. Instead:
- Extend your DORA risk management framework to include AI-specific risks
- Add AI providers to your Register of Information
- Integrate AI testing into your resilience testing schedule
- Align incident reporting processes
4. Address Data Governance Gaps
For high-risk AI systems:
- Audit training data for bias and representativeness
- Document data lineage and preprocessing
- Implement ongoing data quality monitoring (a drift-screening sketch follows this list)
- Ensure GDPR compliance for personal data used in AI training
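For the ongoing-monitoring point, one widely used screen for input drift in credit models is the population stability index (PSI), which compares the binned distribution of a feature at training time against live data. The thresholds in the sketch (0.1 to watch, 0.25 to investigate) are industry conventions, not regulatory limits.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (proportions summing to 1).

    PSI = sum over bins of (actual - expected) * ln(actual / expected).
    """
    eps = 1e-6  # guard against empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Share of applicants per income bucket at training time vs. this month
train_dist = [0.25, 0.35, 0.25, 0.15]
live_dist = [0.15, 0.30, 0.30, 0.25]
psi = population_stability_index(train_dist, live_dist)
status = "investigate" if psi > 0.25 else "watch" if psi > 0.1 else "stable"
print(f"PSI = {psi:.3f} -> {status}")
```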
5. Implement Human Oversight
For credit decisions and other high-risk AI outputs:
- Define clear escalation paths for AI-driven decisions
- Train staff on AI system capabilities and limitations
- Implement mechanisms for human review and override (a routing sketch follows this list)
- Document oversight procedures
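A simple way to implement review-and-override is to route decisions by outcome and model confidence. The sketch below shows one possible policy, with invented names and thresholds: every adverse or low-confidence decision is escalated, and the human reviewer's verdict prevails.

```python
from dataclasses import dataclass

@dataclass
class CreditDecision:
    applicant_id: str
    ai_outcome: str     # "approve" or "decline"
    confidence: float   # the model's own score, 0..1

def route(decision: CreditDecision, review_threshold: float = 0.8) -> str:
    """Route AI outputs to a human when oversight is most needed.

    Illustrative rules only: adverse outcomes and low-confidence scores
    are always escalated, and the reviewer can override the model.
    """
    if decision.ai_outcome == "decline" or decision.confidence < review_threshold:
        return "escalate: human reviewer decides (override allowed)"
    return "auto-path: approved, logged for sampling review"

print(route(CreditDecision("A-1001", "decline", 0.93)))
print(route(CreditDecision("A-1002", "approve", 0.65)))
```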
6. Prepare Documentation
Start building the technical documentation required by Article 11:
- System description and intended purpose
- Risk management documentation
- Data governance practices
- Testing and validation results
- Human oversight measures
- Performance metrics and known limitations
7. Plan for Conformity Assessment
Most FinTech AI systems will use the self-assessment procedure (Annex VI). Prepare by:
- Establishing your quality management system
- Compiling all required documentation
- Conducting internal reviews against AI Act requirements
- Drafting the EU declaration of conformity
Frequently Asked Questions
Q: Is fraud detection AI high-risk under the AI Act?
A: No, fraud detection is explicitly excluded from the high-risk credit scoring category (Annex III, point 5(b)). However, if a fraud detection system also makes decisions about access to financial services, uses biometric identification, or performs profiling that materially affects individuals, it may qualify as high-risk under other Annex III categories. Each system must be assessed individually.
Q: Do I need to comply with both DORA and the AI Act?
A: Yes, if you are a financial entity subject to DORA and you develop or deploy AI systems covered by the AI Act. The good news is that many requirements overlap - risk management, documentation, testing, incident reporting. Building an integrated compliance program is more efficient than managing them separately. Matproof supports both frameworks with shared controls mapping.
Q: My FinTech operates in the UK. Does the EU AI Act apply to me?
A: The EU AI Act applies if you place AI systems on the EU market or if the output of your AI systems is used within the EU. If your FinTech serves EU customers - even from a UK base - you are likely in scope. The UK is developing its own AI regulatory framework, but it does not exempt you from the EU AI Act for EU market activities.
Q: What if my credit scoring model was built before August 2026?
A: AI systems already on the market before August 2, 2026 are only subject to the new requirements if they undergo significant changes in design or intended purpose. However, any retraining, model update, or scope change could constitute a significant change. In practice, most actively maintained credit scoring models will need to comply because they are regularly updated.
Q: How should I handle AI systems from third-party vendors?
A: As a deployer, you have obligations under Article 26 regardless of whether you built the system. Require your AI vendors to provide the documentation, instructions, and transparency information mandated by the AI Act. Include AI Act compliance clauses in vendor contracts. Add AI providers to your DORA third-party risk register and monitor their compliance status.