Teaming up with... AVIVA

Welcome to the UKGI weekly regulation update service for Aviva ABC brokers

We hope you find the Updates useful. If you are interested in subscribing to our affordable ABC compliance support package, please email us at ABC@ukgigroup.com or call UKGI on our dedicated ABC contact line, 01925 767893.

FCA publishes speech on its long-term review into AI and retail financial services

Link(s):        The FCA’s long term review into AI and retail financial services: designing for the unknown | FCA
Review into the long-term impact of AI on retail financial services (The Mills Review) | FCA

Context

Sheldon Mills delivered a speech at the FCA’s Supercharged Sandbox Showcase event, discussing the FCA’s long-term review into AI and retail financial services.

Key points to note

In opening the speech, Mills highlighted that his past work with the FCA has taught him that “the real challenge in regulation isn’t dealing with what we already understand – it’s preparing for what we don’t”, confirming that the long-term review into AI is about the FCA “designing for the unknown”.

The speech covers the following topics:

  • Why the FCA needs to design for the unknown – and why now:
    • Millions of UK consumers now use AI tools to interpret information, plan their lives, and make decisions, such as managing money and finances, or even generating recipe ideas from a photo of fridge contents.
    • AI has long been used in financial services (e.g. fraud detection, trading systems, credit decisions).  By 2024, around 75% of firms were already using AI.
    • The past two years have seen major change due to generative AI, multimodal systems, and emerging AI agents.
    • Firms are already building AI tools for personalised financial guidance, improved customer journeys and better identification of vulnerable customers.
    • AI investment by firms will continue, and customer AI usage will keep increasing.
    • There is still uncertainty around which AI models will scale successfully, which risks will matter most and which mitigations will be effective.
  • Exploring uncertainty by considering a plausible scenario
    • Over time, consumers may increasingly use AI as an intelligent intermediary between themselves and firms.
    • Assistive AI is here today: tools that explain products, compare options, prefill forms and highlight risks, supporting consumers without taking decisions away from them.
    • Advisory AI is emerging: systems that nudge, recommend and encourage action – switching suppliers, reshaping budgets, refinancing at better rates. These tools promise better outcomes, but they also raise questions about transparency, neutrality and the basis of advice.
    • Autonomous AI is coming into view: agents that act within the boundaries the consumer sets – shifting money, negotiating renewals, reallocating savings, or spotting risks before the consumer even sees them. For many households, this will be transformative, reducing admin, improving decisions and cutting costs.
    • Agent autonomy brings deeper questions, such as: what happens when an AI agent makes a mistake? How do we ensure consumers understand enough to stay in control? And what happens if commercial incentives quietly shape the recommendations people see?
  • Consumer outcomes
    • The FCA wants to understand how firms can unlock opportunities to support better outcomes safely.
    • Risks include: consumers delegating decisions they don’t understand; people with patchy data histories facing new exclusions; and scammers exploiting AI to mimic voices or create synthetic identities.
    • Some risks may be less visible but just as important, such as the embedding or amplification of bias in an AI model, leading to systematically worse outcomes for some groups. AI could make decisions that are technically logical but misaligned with a consumer’s real-world needs. When decisions are powered by ever more data, firms must get transparency and data protection right.
  • Competition, market structure and new entrants
    • AI could change the drivers of market power in ways we need to understand early. It could be the great leveller, giving a start-up the analytical power of a global bank. Or it could entrench the biggest players with the most data and the deepest pockets.
    • Big Tech firms may capture parts of the value chain without ever becoming regulated providers.
    • Consumers, through their personal AI agents, may drive much more rapid switching, reshaping who holds power in ways we’ve not seen before.
  • What does all this mean for regulation?
    • Current regulatory frameworks were built for a world where systems updated occasionally, models behaved predictably and responsibility was clearly located within the firm. AI challenges all three of those assumptions.
    • Accountability under the SM&CR still matters – but what does ‘reasonable steps’ look like when the model you rely on updates weekly, incorporates components you don’t directly control, or behaves differently as soon as new data arrives?
    • As firms continue to develop AI assurance platforms to monitor, audit, and evaluate AI systems, what should the role of the FCA be?
    • The FCA wants to examine how AI will change the way it applies its rules and provide the clarity firms need. Designing for the unknown means building a regulatory model that can evolve with the technology – without compromising clarity or trust.