The FCA Has Not Written New AI Rules. It Has Reapplied Existing Ones.
If you are waiting for the FCA to publish a comprehensive AI regulation before taking action, you are misreading the regulator's approach. The FCA has been deliberate and consistent on this point since Discussion Paper DP22/4, published jointly with the Bank of England and the PRA in October 2022: the regulator's preferred stance is principles-based rather than prescriptive. Existing rules apply to AI-enabled activities just as they apply to any other method of conducting regulated business. There is no separate AI authorisation regime and no dedicated AI rulebook.
What this means in practice is that a fintech using AI for credit scoring, fraud detection, or customer communications is already subject to the full weight of FCA expectations: treating customers fairly, ensuring market integrity, maintaining operational resilience, and delivering good outcomes under Consumer Duty. The question is not whether AI is regulated but whether your governance of AI is proportionate to the risks it introduces.
This article sets out the FCA's stated priorities on AI risk, how Consumer Duty applies to AI-driven decisions, what firms are expected to do in practice, and the international dimension for UK fintechs with European operations.
The FCA's Principles-Based Framework
The FCA has been clear that it does not intend to regulate AI as a technology. Instead, it regulates outcomes. The existing Principles for Businesses (particularly Principle 6 on customers' interests, Principle 11 on disclosure to the regulator, and Principle 3 on management and control) apply to AI-enabled activities in exactly the same way they apply to manual processes; for retail business within scope of the Consumer Duty, Principle 12 now applies in place of Principle 6. What changes is the nature of the risk, not the regulatory obligation.
The FCA's Portfolio Letter on AI, directed at firms using algorithmic decision-making in customer-facing contexts, set out four areas where the regulator expects firms to demonstrate competence. First, that the firm understands how its AI models make decisions and can explain those decisions to customers and to the regulator. Second, that the firm has tested its models for bias and discriminatory outcomes, particularly in credit and insurance pricing. Third, that the firm has operational resilience arrangements that account for the failure or degradation of AI systems. Fourth, that AI-driven decisions are consistent with Consumer Duty's requirement to deliver good outcomes.
The AI Risks the FCA Is Focused On
The FCA has identified a specific set of AI-related risks that it considers most material for financial services firms. Understanding this risk taxonomy is the starting point for building proportionate governance.
Model Risk and Algorithmic Bias
Model risk is the risk that an AI system produces materially incorrect outputs due to flawed design, training data deficiencies, or deployment in a context different from the one for which it was trained. In financial services, this is particularly acute in credit scoring and insurance pricing, where biased outputs can lead to discriminatory outcomes that violate the Equality Act 2010 as well as FCA principles. The FCA expects firms to have a formal model risk management framework that covers model development, validation, approval, monitoring and decommissioning.
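To make this concrete, here is a minimal sketch of a lifecycle gate in code. The stage names, the `ModelRecord` fields, and the transition rules are illustrative assumptions, not an FCA-prescribed schema; the point is that a model cannot reach production without recorded validation and approval.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Stage(Enum):
    DEVELOPMENT = "development"
    VALIDATION = "validation"
    APPROVED = "approved"
    PRODUCTION = "production"
    DECOMMISSIONED = "decommissioned"


# Only these transitions are permitted: a model cannot reach
# production without passing validation and approval first.
ALLOWED = {
    Stage.DEVELOPMENT: {Stage.VALIDATION},
    Stage.VALIDATION: {Stage.APPROVED, Stage.DEVELOPMENT},  # failed validation -> rework
    Stage.APPROVED: {Stage.PRODUCTION},
    Stage.PRODUCTION: {Stage.DECOMMISSIONED},
    Stage.DECOMMISSIONED: set(),
}


@dataclass
class ModelRecord:
    name: str
    purpose: str
    accountable_owner: str       # a named senior individual
    training_data_vintage: str   # e.g. "applications 2022-2024"
    stage: Stage = Stage.DEVELOPMENT
    history: list = field(default_factory=list)

    def advance(self, new_stage: Stage, evidence: str) -> None:
        """Move the model to a new lifecycle stage, recording when and on what evidence."""
        if new_stage not in ALLOWED[self.stage]:
            raise ValueError(
                f"{self.name}: cannot move from {self.stage.value} "
                f"to {new_stage.value} without the intermediate steps"
            )
        self.history.append((date.today().isoformat(), new_stage.value, evidence))
        self.stage = new_stage


model = ModelRecord(
    name="credit-score-v2",
    purpose="retail credit limit decisions",
    accountable_owner="Head of Credit Risk",
    training_data_vintage="applications 2022-2024",
)
model.advance(Stage.VALIDATION, "validation report VR-014")
model.advance(Stage.APPROVED, "model risk committee minutes 2025-03")
model.advance(Stage.PRODUCTION, "deployment ticket DEP-88")
```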
Explainability
Explainability is the requirement to be able to explain to a customer, in plain language, why an AI-driven decision was made. This is most pressing in credit decisions, where adverse decisions (declining an application, reducing a credit limit, or pricing at a premium) must be explainable to the affected customer under the Consumer Credit Act, the UK GDPR's provisions on automated decision-making, and Consumer Duty. Black-box models that cannot produce human-interpretable explanations are incompatible with this requirement. The FCA has not mandated any specific explainability technique, but it expects firms to be able to produce meaningful explanations on demand.
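As a sketch of what a meaningful explanation can look like, the following maps the largest negative score contributions of a simple linear scorecard to plain-language reason codes. The features, weights, and wording are invented for illustration, and a non-linear model would need an attribution technique such as SHAP in place of the direct weight-times-value calculation.

```python
# A minimal sketch of reason-code generation for a linear scorecard.
# Feature names, weights, and reason wording are illustrative only.

WEIGHTS = {
    "missed_payments_12m": -45.0,
    "credit_utilisation": -60.0,
    "account_age_months": 0.5,
}

REASONS = {
    "missed_payments_12m": "Recent missed payments on existing credit agreements",
    "credit_utilisation": "A high proportion of available credit is in use",
    "account_age_months": "A relatively short credit history",
}

# A neutral reference applicant: contributions are measured against this point.
REFERENCE = {"missed_payments_12m": 0, "credit_utilisation": 0.3,
             "account_age_months": 60}


def explain_decline(applicant: dict, top_n: int = 2) -> list[str]:
    """Return the plain-language reasons that pulled the score down the most."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - REFERENCE[f]) for f in WEIGHTS
    }
    worst = sorted(contributions, key=contributions.get)[:top_n]  # most negative first
    return [REASONS[f] for f in worst if contributions[f] < 0]


print(explain_decline(
    {"missed_payments_12m": 2, "credit_utilisation": 0.9, "account_age_months": 14}
))
# ['Recent missed payments on existing credit agreements',
#  'A high proportion of available credit is in use']
```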
Data Quality
AI models are only as good as the data on which they are trained. The FCA expects firms to be able to demonstrate the provenance and quality of training data, the steps taken to identify and address bias in training data, and the ongoing monitoring of data quality as the model operates in production. This is particularly relevant for fintechs that train models on their own proprietary transaction data, which may be small, unrepresentative, or subject to survivorship bias in the firm's early years of operation.
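A minimal sketch of pre-training data quality checks follows. The field names, expected ranges, and missing-data tolerance are illustrative assumptions that a real firm would set and document with its model risk function.

```python
# A minimal sketch of pre-training data quality checks.
# Thresholds and field names are illustrative, not FCA figures.

MAX_MISSING_RATE = 0.05
EXPECTED_RANGES = {"income": (0, 1_000_000), "age": (18, 120)}


def check_training_data(rows: list[dict]) -> list[str]:
    """Return a list of human-readable data quality findings."""
    findings = []
    n = len(rows)
    for field, (lo, hi) in EXPECTED_RANGES.items():
        missing = sum(1 for r in rows if r.get(field) is None)
        if missing / n > MAX_MISSING_RATE:
            findings.append(f"{field}: {missing / n:.1%} missing exceeds tolerance")
        out_of_range = sum(
            1 for r in rows
            if r.get(field) is not None and not lo <= r[field] <= hi
        )
        if out_of_range:
            findings.append(f"{field}: {out_of_range} values outside [{lo}, {hi}]")
    return findings


sample = [
    {"income": 32_000, "age": 41},
    {"income": None, "age": 29},
    {"income": 55_000, "age": 17},   # under-age record should be flagged
]
for finding in check_training_data(sample):
    print(finding)
```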
Operational Resilience
AI systems introduce a specific form of operational resilience risk: they can fail silently. Unlike a system outage that is immediately visible, an AI model that has degraded in performance (due to data drift, model drift, or upstream data quality issues) may continue to produce outputs that appear normal but are in fact materially incorrect. The FCA expects firms to have monitoring and alerting mechanisms that detect model degradation, and contingency arrangements (including human review fallback) for when AI systems are unavailable or unreliable.
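One common screen for silent degradation is the population stability index (PSI), which compares the score distribution seen at deployment with the distribution seen in production. The sketch below uses the conventional 0.10/0.25 rule-of-thumb bands; these are industry heuristics, not regulatory thresholds.

```python
# A minimal sketch of silent-failure monitoring using the population
# stability index (PSI). Alert bands are conventional rules of thumb.

import math


def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """PSI between a baseline score distribution and a recent one."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def shares(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1
        # Floor at a tiny share so the log term is always defined.
        return [max(c / len(values), 1e-6) for c in counts]

    e, o = shares(expected), shares(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))


baseline = [i / 100 for i in range(100)]                    # scores at deployment
recent = [min(i / 100 + 0.25, 0.999) for i in range(100)]   # drifted upward

value = psi(baseline, recent)
if value > 0.25:
    print(f"PSI {value:.2f}: material drift - route decisions to human review")
elif value > 0.10:
    print(f"PSI {value:.2f}: investigate model inputs")
```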
Consumer Duty and AI-Driven Decisions
Consumer Duty, which came into force on 31 July 2023 for new and existing products open to sale or renewal and on 31 July 2024 for closed products, is arguably the most significant regulatory development for customer-facing AI since the FCA was established. The Duty requires firms to deliver good outcomes for retail customers across four outcome areas: products and services, price and value, consumer understanding, and consumer support.
Each of these outcome areas has direct implications for AI use. On products and services, the FCA expects that AI-driven product recommendations or pricing decisions are based on genuine assessment of customer needs rather than maximising short-term revenue. On price and value, the FCA has been explicit that algorithmic pricing models must not produce outcomes that represent poor value for customers, and that dynamic pricing that disadvantages vulnerable customers or exploits behavioural biases is inconsistent with the Duty. On consumer understanding, AI-generated communications must be clear, fair, and not misleading. On consumer support, AI-driven customer service tools must not create friction that prevents customers from accessing the support they need.
For fintechs using AI in credit scoring, the most immediate Consumer Duty implication is the requirement to demonstrate that credit limit decisions, pricing decisions, and adverse credit decisions are delivering good outcomes for customers over time. This requires a monitoring framework that tracks outcomes by customer cohort, identifies adverse trends, and triggers a management response. The FCA expects this to be a genuine risk management process, not a box-ticking exercise.
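A minimal sketch of cohort-level outcome monitoring with a management trigger is shown below. The metric, the cohorts, and the 20% deterioration threshold are illustrative choices that a firm would calibrate in its own Consumer Duty monitoring plan.

```python
# A minimal sketch of outcome monitoring by customer cohort.
# Metric, cohorts, and the trigger level are illustrative.

from statistics import mean

# Monthly complaint rates per 1,000 accounts, by onboarding cohort.
outcomes = {
    "2024-Q1": [2.1, 2.0, 2.3, 2.2],
    "2024-Q2": [2.2, 2.6, 3.1, 3.6],   # deteriorating trend
}

TRIGGER = 1.20  # assumed: latest month 20% above the cohort's prior average


def cohorts_needing_review(data: dict[str, list[float]]) -> list[str]:
    flagged = []
    for cohort, series in data.items():
        if series[-1] > TRIGGER * mean(series[:-1]):
            flagged.append(cohort)
    return flagged


for cohort in cohorts_needing_review(outcomes):
    print(f"{cohort}: adverse trend - escalate to product governance forum")
```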
"Consumer Duty does not prevent firms from using AI in customer-facing decisions. It requires them to demonstrate that those decisions are producing good outcomes for customers over time, backed by a monitoring framework that can identify problems early and a governance process that can act on them."
Vulnerable Customers and AI
The FCA's vulnerability guidance (FG21/1) predates Consumer Duty but is fully incorporated within it. Firms are expected to identify customers who may be vulnerable due to health, life events, resilience, or capability, and to make reasonable adjustments to the way they serve those customers. For AI-driven processes, this creates a specific challenge: automated systems tend to be less sensitive to vulnerability signals than well-trained human advisers, and customers who would benefit most from a different approach may be the least likely to proactively disclose their circumstances.
The practical implication is that AI systems should be designed with vulnerability identification as an explicit feature rather than an afterthought. This might include monitoring for behavioural indicators of financial stress, flagging accounts for human review where vulnerability signals are present, and ensuring that escalation pathways to human support are easily accessible.
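A minimal sketch of rule-based vulnerability flagging that routes affected accounts to a human before any adverse automated action follows. The signals and thresholds are illustrative assumptions: FG21/1 describes the drivers of vulnerability but prescribes no specific rules.

```python
# A minimal sketch of vulnerability flagging ahead of an automated decision.
# Signal names and thresholds are illustrative, not drawn from FG21/1.

def vulnerability_flags(account: dict) -> list[str]:
    flags = []
    if account.get("failed_direct_debits_90d", 0) >= 2:
        flags.append("repeated failed direct debits (financial stress)")
    if account.get("persistent_overdraft_days", 0) > 45:
        flags.append("persistent overdraft use")
    if account.get("customer_disclosed_vulnerability"):
        flags.append("customer has disclosed a vulnerability")
    return flags


def route_decision(account: dict) -> str:
    """Send flagged accounts to a human before any adverse automated action."""
    flags = vulnerability_flags(account)
    if flags:
        return f"human review required: {'; '.join(flags)}"
    return "automated decision permitted"


print(route_decision({"failed_direct_debits_90d": 3,
                      "persistent_overdraft_days": 60}))
```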
What Firms Must Do in Practice
The FCA's expectation for firms using AI can be summarised as four requirements: document, test, monitor, and govern.
- Document: maintain clear records of the purpose of each AI system, the training data used, the validation process, the approval governance, and the parameters within which the model is expected to operate.
- Test: run the model against historical data with known outcomes, test specifically for discriminatory or biased outputs, and conduct periodic challenger model testing.
- Monitor: track model performance metrics in real time (accuracy, precision, recall, and relevant business outcomes such as default rates or complaint rates) and define thresholds that trigger management action.
- Govern: assign clear accountability at senior management level for each AI system, report regularly to the board or risk committee on AI model performance, and operate a formal model change control process.
For a fintech at Series A or Series B, this does not require a large dedicated AI governance team. It does require that AI systems are included in the risk framework, that somebody is formally accountable for each model, and that the firm can demonstrate to the FCA, if asked, that it has tested its models for bias and has a monitoring process in place.
The EU AI Act: The International Dimension
The EU AI Act entered into force on 1 August 2024, with most provisions applying from 2 August 2026. It is directly relevant to UK fintechs that have EU-based customers, EU-based operations, or whose AI systems make decisions affecting EU persons. The Act takes a risk-based approach, classifying AI systems into prohibited, high-risk, limited-risk, and minimal-risk categories. Credit scoring and creditworthiness assessment of natural persons are explicitly classified as high-risk under Annex III of the Act.
High-risk AI systems under the EU AI Act are subject to mandatory requirements including: a quality management system, conformity assessment before deployment, registration in an EU database, ongoing monitoring and logging, and post-market monitoring plans. These requirements are more prescriptive than the FCA's principles-based approach and, for firms with both UK and EU operations, will require a dual compliance framework.
The practical implication for UK fintechs is that if you are deploying AI in credit scoring for EU customers, the EU AI Act's requirements are not optional even if you are FCA-authorised rather than EU-authorised. The extraterritorial reach of the Act means that where the output of an AI system is used in the EU, the Act applies to that system. Legal advice specific to your operating structure is essential.
Practical Guidance for Fintechs Using AI
For a fintech using AI in credit scoring, the immediate priorities are: document every model in production (name, purpose, training data vintage, last validation date, accountable owner), run a bias test on credit score outputs by protected characteristic and document the results, establish a monitoring dashboard that tracks model accuracy and adverse decision rates at least weekly, and ensure that every adverse credit decision can produce a plain-language explanation that could be shared with a customer or the FCA on request.
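One simple screening approach for that bias test is the adverse impact ratio: the approval rate of each group relative to the best-served group. The sketch below applies the US "four-fifths" heuristic as the review trigger purely for illustration; it is not an FCA or UK legal standard, and in practice firms often have to work with proxies for protected characteristics rather than directly collected data.

```python
# A minimal sketch of a bias screen on credit decisions using the
# adverse impact ratio. The 0.8 threshold is the US "four-fifths"
# heuristic, used here only as an illustrative screen.

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]


def approval_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    totals: dict[str, list[int]] = {}
    for group, approved in records:
        t = totals.setdefault(group, [0, 0])
        t[0] += int(approved)
        t[1] += 1
    return {g: a / n for g, (a, n) in totals.items()}


rates = approval_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval {rate:.0%}, impact ratio {ratio:.2f} [{status}]")
```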
For a fintech using AI in fraud detection, the priorities are slightly different: the explainability requirement is less acute (customers do not have a right to know the reasons for a fraud block in the same way as a credit decline), but the operational resilience requirements are higher because fraud detection failures have immediate and severe consequences. Firms should ensure that AI-based fraud models have human review escalation paths, that false positive rates are monitored and reported to the board, and that there is a contingency arrangement if the model is unavailable.
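A minimal sketch of the fail-safe routing decision follows, assuming a hypothetical `score_transaction` model endpoint: when the model errors or times out, transactions go to a manual review queue rather than defaulting to approval.

```python
# A minimal sketch of a contingency arrangement for fraud scoring.
# score_transaction is a hypothetical stand-in for the real model endpoint.

import random


def score_transaction(txn: dict) -> float:
    """Stand-in for the real fraud model; assume it can fail in production."""
    if random.random() < 0.1:
        raise TimeoutError("model endpoint unavailable")
    return random.random()


def decide(txn: dict, block_threshold: float = 0.9) -> str:
    try:
        score = score_transaction(txn)
    except Exception:
        # Fail safe: never default to "approve" when the model is down.
        return "manual_review"
    return "block" if score >= block_threshold else "approve"


outcomes = [decide({"amount": 120.0}) for _ in range(20)]
print({o: outcomes.count(o) for o in set(outcomes)})
```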
For a fintech using AI in customer service (chatbots, virtual assistants, automated complaint handling), the Consumer Duty implications are most direct. AI-generated responses must be accurate, clear, and must not mislead customers. Systems must be able to identify when a customer needs escalation to a human and must facilitate that escalation without friction. Complaint handling AI must not be used to reduce the rate of upheld complaints in a way that disadvantages customers with legitimate grievances.
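A minimal sketch of a friction-free escalation check for a support chatbot is shown below. The trigger phrases and the two-failed-turns rule are illustrative design choices, not FCA requirements.

```python
# A minimal sketch of escalation detection for an AI assistant.
# Trigger phrases and the two-turn rule are illustrative assumptions.

ESCALATION_PHRASES = {"speak to a human", "complaint", "vulnerable", "agent"}


def needs_human(message: str, unresolved_turns: int) -> bool:
    text = message.lower()
    if any(phrase in text for phrase in ESCALATION_PHRASES):
        return True
    return unresolved_turns >= 2  # don't loop the customer indefinitely


print(needs_human("I want to make a complaint", unresolved_turns=0))  # True
print(needs_human("What's my balance?", unresolved_turns=0))          # False
```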
Key Takeaways
- The FCA has not created new AI rules: it applies existing principles to AI-enabled activities. Firms using AI in regulated activities are already subject to the full weight of FCA expectations.
- The four AI risks the FCA focuses on are model risk and bias, explainability, data quality, and operational resilience. Each requires a proportionate governance response.
- Consumer Duty applies directly to AI-driven customer decisions. The requirement to deliver good outcomes and protect vulnerable customers cannot be delegated to an algorithm.
- The minimum governance requirement is: model inventory, bias testing, production monitoring, and board-level accountability. This is achievable for early-stage fintechs without large dedicated teams.
- The EU AI Act classifies credit scoring as high-risk AI. UK fintechs with EU operations must plan for dual compliance with FCA principles and EU AI Act requirements from 2026.
- Explainability in adverse credit decisions is non-negotiable under Consumer Duty. Black-box models that cannot produce plain-language explanations are incompatible with the regulatory framework.
- AI governance should be integrated into the existing risk management and compliance framework, not treated as a separate technology project.