Summary
Global efforts to govern artificial intelligence are multiplying. The problem, as a landmark paper published this month in Global Policy makes clear, is that almost none of them carry sufficient authority to change the behaviour of governments, technology companies or the firms deploying AI at scale.
For compliance leads at FCA-regulated firms, the consequence is direct: there is no international body, no agreed global standard, and no imminent convergence that will do your internal framework-building for you. The external architecture is fragmented, but the internal architecture cannot wait.
This piece focuses on three practical themes: what the governance gap means, how the FCA has chosen to respond, and where governance failures are already appearing inside regulated firms.
1. The Global Governance Gap
A research team from the University of Oxford, the Alan Turing Institute, Yale and the University of Bologna published a framework this month evaluating why global AI governance initiatives consistently fail to change real-world behaviour.
Their diagnosis centres on three structural problems. States with the most advanced AI capabilities have the least incentive to accept binding constraints, because every multilateral agreement risks blunting their own competitive edge. The more institutions involved, the slower and vaguer the output, and consensus among governments, regulators, standards bodies and technology companies tends toward the lowest common denominator. And AI is not one thing: foundation models, fraud detection, risk profiling and autonomous systems raise entirely different regulatory questions across entirely different remits. No single framework can hold all of them properly.
The consequence for regulated firms, both within the UK and outside of it, is tangible. The EU AI Act, DORA’s ICT requirements, the FCA’s expectations, and the output of bodies like the Financial Stability Board and IOSCO were all produced in this fragmented environment. They are not coordinated. They do not share definitions. They were not designed to be implemented in sequence.
However, heads of compliance will know that the absence of settled international principles is no basis for waiting. Internal governance has to be built without that clarity, using a principles-based and risk-based approach.
What this means in practice
- Do not assume convergence is coming: the international frameworks your vendors reference (the EU AI Act, FSB guidance, IOSCO principles) carry different legal weights in different jurisdictions. Do not treat any of them as a definitive compliance standard for your firm.
- Map the frameworks that actually apply to your firm now: EU AI Act (if you have EU clients or use EU-based vendors), DORA (if AI underpins important business services), Consumer Duty and SM&CR (universally). These are live obligations, not future ones.
- Use the fragmentation as a gap analysis tool: where frameworks disagree or use different definitions, those gaps identify the areas where your internal governance needs to be most explicit.
2. The FCA’s Position
The FCA has launched the “Mills Review” into the long-term impacts of AI, but it has made a deliberate and consistent choice not to introduce AI-specific rules. Chief Executive Nikhil Rathi reaffirmed this position in December 2025, and it is embedded in the FCA’s 2025-2030 strategy. In some ways the logic is defensible: technology-neutral, outcomes-focused regulation is more durable when the technology is evolving as rapidly as AI.
But principles-based does not mean expectation-free. The FCA’s 2024 AI Update mapped its existing frameworks directly against the UK Government’s five AI principles. Transparency and explainability sit within the Consumer Duty’s consumer understanding outcome and UK GDPR requirements. Accountability maps to SM&CR. Safety and robustness sit within SYSC 7 and operational resilience. Fairness maps to Consumer Duty and the Equality Act.
The Treasury Committee’s January 2026 report sharpened the pressure, criticising regulators for not doing enough and recommending the FCA publish practical guidance on how consumer protection rules apply to AI, and what SM&CR assurance is expected when AI causes harm, by end of 2026. The FCA’s March 2026 work programme confirmed the expansion of AI Live Testing. The direction is clear: the regulator is building supervisory insight from firms’ actual AI use. What it observes will shape enforcement.
What this means in practice (through a technology-neutral lens)
- Name the senior manager accountable for technology outcomes: the SM&CR question of who is responsible when an AI-driven output causes harm is not theoretical. It needs a documented answer before the FCA asks for one.
- Map each technological deployment to the Consumer Duty’s four outcomes: products and services, price and value, consumer understanding, consumer support. Where AI touches any of these, the governance arrangements need to be explicit and recorded.
- Treat the audit trail as the governance: documented data lineage, model assumptions, human review checkpoints and escalation routes are not compliance paperwork; they are the evidence that the principles-based obligations are being met. Also map each system’s dependencies and the sensitivity of the data it holds. A sketch of what such a record might look like follows after this list.
- Consider engaging with FCA AI Live Testing: the second cohort opens in April 2026. For firms with material AI deployments, supervised testing builds regulatory confidence and surfaces governance gaps in a controlled environment.
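To make the audit-trail point concrete, the sketch below shows one way a per-deployment governance record could be structured. It is a minimal illustration in Python, and every field name (data_lineage, human_review_checkpoints, escalation_route and so on) is an assumption chosen for illustration, not an FCA-prescribed or industry-standard schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIGovernanceRecord:
    """Illustrative per-deployment audit-trail record; all field names are assumptions."""
    system_name: str
    business_function: str                # e.g. "suitability report drafting"
    accountable_senior_manager: str       # the named SM&CR owner for this deployment
    data_lineage: list[str]               # data sources feeding the system, in order
    model_assumptions: list[str]          # documented assumptions and known limitations
    human_review_checkpoints: list[str]   # points where a person reviews or signs off
    escalation_route: str                 # who is alerted when an output is challenged
    dependencies: list[str]               # vendors, models and infrastructure relied on
    data_sensitivity: str                 # e.g. "client PII", "special category", "public"
    last_reviewed: date = field(default_factory=date.today)
```

The specific fields matter less than the principle: each deployment should have a single record that can be produced, complete and current, when the FCA or a client challenges an output.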
3. Where Governance Gaps Are Appearing
Across FCA-regulated firms, the same governance failures are appearing consistently, and they are structural rather than technical.
Accountability without ownership. AI has created business functions that sit across multiple existing SM&CR responsibilities: technology, compliance, risk, investment management. In practice this means accountability is distributed across several individuals, which under SM&CR means no individual actually holds it. When the FCA asks who is responsible for an AI-driven compliance output, ‘the team’ is not an answer the framework accepts.
Policy that does not reflect practice. Many firms have AI governance policies, but fewer have documentation that reflects how AI is being used day to day. The gap between the policy and the practice is where supervisory risk accumulates.
Vendor dependency without vendor scrutiny. Third-party AI tools adopted by compliance and risk functions carry obligations under SYSC and, for firms within its scope, under DORA. Concentration risk, where a single vendor underpins multiple core processes, is a specific operational resilience concern that most firms have not yet stress-tested or documented.
What this means in practice
- Conduct an AI deployment audit: identify every AI system in operational use, classify by function and risk, map current governance arrangements, and identify the gaps. This is the baseline; without it, firms are managing exposure they cannot see. A sketch of what such a register might look like follows after this list.
- Review vendor contracts against operational requirements: audit rights, exit planning, concentration risk and business continuity provisions should be explicit for any AI vendor underpinning an important business service.
- Check explainability for every client-facing AI output: where AI generates suitability summaries, risk assessments or client communications, the firm must be able to document how the output was produced and what human review occurred. Consumer Duty requires it; the ability to demonstrate it is a governance question, not a technical one.
- Close the gap between policy and practice: if an AI governance policy exists but the documentation of actual deployments does not match it, that gap is the supervisory risk. An internal audit of AI use against stated policy is the starting point.
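As one way of picturing the deployment audit described above, the sketch below holds a simple AI register and flags entries missing the governance evidence a supervisor would expect, alongside a basic view of vendor concentration. The fields, risk labels and checks are illustrative assumptions, not a prescribed format.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDeployment:
    """One row in an illustrative AI register; fields are assumptions, not a standard."""
    name: str
    function: str                         # e.g. "transaction monitoring"
    risk_class: str                       # the firm's own tiering, e.g. "high" / "medium" / "low"
    vendor: Optional[str]                 # None if built in-house
    accountable_senior_manager: Optional[str]
    consumer_duty_outcomes: list[str]     # which of the four outcomes the system touches
    documented_human_review: bool

def governance_gaps(register: list[AIDeployment]) -> list[str]:
    """Flag entries missing the evidence the deployment audit is meant to surface."""
    gaps = []
    for d in register:
        if d.accountable_senior_manager is None:
            gaps.append(f"{d.name}: no named accountable senior manager")
        if not d.documented_human_review:
            gaps.append(f"{d.name}: no documented human review checkpoint")
    return gaps

def vendor_concentration(register: list[AIDeployment]) -> Counter:
    """Count how many deployments rely on each external vendor (concentration risk)."""
    return Counter(d.vendor for d in register if d.vendor is not None)
```

A vendor appearing against several core processes in the second view is exactly the concentration risk that operational resilience expectations ask firms to stress-test and document.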
What Firms Should Prioritise
Name accountability.
The SM&CR question is coming. Firms that have answered it proactively, with documented scope and embedded governance structures, are positioned to respond clearly. Firms that have not will be constructing the answer under supervisory pressure.
Build the audit trail as infrastructure.
Documentation of data lineage, model assumptions, human oversight and escalation pathways is not a compliance exercise. It is the operational architecture that makes AI use defensible when the FCA or a client challenges an output.
Classify AI deployments by EU AI Act risk tier as a starting point.
Whether or not EU obligations directly apply, this exercise identifies the systems carrying the highest governance requirements and creates a prioritised remediation roadmap. Core financial services use cases, notably creditworthiness assessment, sit in the high-risk category, and governance arrangements need to reflect that. A sketch of the classification exercise follows below.
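As a sketch of that starting point, the fragment below holds an illustrative classification against the commonly described four risk tiers and orders systems for remediation. The tier placements are assumptions a firm would need to confirm against the Act’s annexes and its own legal advice; they are not a legal determination.

```python
# Tiers ordered from most to least severe, per the EU AI Act's risk-based structure.
AI_ACT_TIERS = ["prohibited", "high", "limited", "minimal"]

# Illustrative placements only; confirm each use case against the Act and legal advice.
illustrative_classification = {
    "creditworthiness assessment of natural persons": "high",  # listed in Annex III
    "client-facing chatbot": "limited",                         # transparency obligations
    "fraud detection": "minimal",       # often carved out, but confirm the specific use
    "internal document search": "minimal",
}

def remediation_order(classification: dict[str, str]) -> list[str]:
    """Order use cases by tier severity to produce a prioritised remediation roadmap."""
    severity = {tier: i for i, tier in enumerate(AI_ACT_TIERS)}
    return sorted(classification, key=lambda use: severity[classification[use]])

print(remediation_order(illustrative_classification))
# ['creditworthiness assessment of natural persons', 'client-facing chatbot', ...]
```

The output is the prioritised roadmap: the systems at the top are the ones whose governance arrangements need attention first, whatever label the firm ultimately settles on.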
Do not confuse no AI-specific rules with no regulatory expectations.
Consumer Duty, SM&CR, SYSC and operational resilience requirements apply to AI use in full. The absence of a dedicated rulebook makes documented, demonstrable governance more important, not less, because the principles-based framework means the evidence of compliance is the compliance.
Map your technology and infrastructure governance.
AI governance cannot be built on guesswork about what is running across your systems. Firms need a structured inventory: every AI deployment mapped to the business function it supports, the data it processes, and the senior manager accountable for it. Without that visibility, it is impossible to know whether existing governance arrangements are adequate or where the gaps are.