Insights  ·  AI Governance

AI Governance in Financial Services: What Boards Must Decide Before the Rules Arrive

By Ali Zeb  ·  January 2026  ·  9 min read

There is a pattern I have observed consistently across the organisations I advise. AI adoption is accelerating at the operational level: in fraud detection, underwriting decisions, customer communication, credit assessment, and regulatory reporting. Governance of that adoption, meanwhile, remains almost entirely absent at board level. Boards are not deciding how AI should be used. They are learning, after the fact, about AI that has already been deployed.

This is not a new dynamic. Ten years ago, the same pattern applied to cyber security. Boards were learning about security programmes that had been running for years, operating to risk appetites that had never been defined at board level, and governed by frameworks the board had never approved. The consequences of that governance gap are now well documented and continue to surface in enforcement actions, incident disclosures, and supervisory findings.

The AI governance gap is larger. The technology is more pervasive, the risk dimensions are less well-understood, and the regulatory framework is less mature. Boards that wait for the rules to arrive before establishing AI governance will find themselves governing AI that has already been embedded at scale in the firm's most consequential decisions.

Why AI governance is different from technology governance

Technology governance has existed in financial services for decades. Boards have governance frameworks for IT risk, change management, systems resilience, and third-party technology dependency. AI sits within these frameworks in most organisations, treated as a category of technology risk.

This is a category error. AI introduces governance dimensions that conventional technology governance frameworks were not designed for.

Explainability. A conventional IT system does what it is programmed to do, and what it does can be documented and tested. AI systems, particularly machine learning models, make decisions through processes that are often not transparent even to their developers. When an AI system makes a consequential decision about a customer or a risk, the board must be able to govern the process by which that decision was made, not just audit the outcome.
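To make that contrast concrete, here is a minimal sketch of what a per-decision explanation looks like in the one case where it is trivial: a simple linear scoring model, where each feature's contribution to the score is just its weight multiplied by its value. The weights and applicant data are illustrative assumptions, not any real model. For gradient-boosted trees or deep networks, no such direct decomposition exists, and that is precisely the gap explainability governance has to close.

```python
# A deliberately simple linear scoring model (illustrative weights only).
# In this transparent case, "explaining" a decision is trivial: each
# feature's contribution is weight * value. Complex models lose this.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_at_address": 0.1}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return the score and a per-feature contribution breakdown."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"income": 52.0, "debt_ratio": 0.35, "years_at_address": 6}
)
print(f"score={score:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
```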

Drift and model degradation. AI models trained on historical data will produce different outputs as the environment they operate in changes. A credit model trained before a recession will behave differently in a downturn. A fraud model trained on pre-pandemic transaction patterns will misread post-pandemic behaviour. The governance question is not just whether the model worked at deployment; it is whether the board has oversight of the model's ongoing performance and the conditions under which it should be reviewed or retired.
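What board-level oversight of drift can rest on, operationally, is a monitored statistic with escalation thresholds. The sketch below uses the Population Stability Index (PSI), a common drift measure in credit risk, to compare the score distribution a model sees in production against the distribution it was validated on. The 0.10 and 0.25 thresholds are widely used rules of thumb, not regulatory values, and the data is simulated for illustration.

```python
# A minimal PSI drift check: compare production scores against the
# baseline distribution the model was validated on.

import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a production sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in sparse bins. Production
    # values falling outside the baseline range are ignored in this sketch.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)    # scores at model validation
production = rng.normal(585, 60, 10_000)  # scores this quarter

value = psi(baseline, production)
if value > 0.25:
    print(f"PSI={value:.3f}: material drift, escalate for model review")
elif value > 0.10:
    print(f"PSI={value:.3f}: moderate drift, monitor closely")
else:
    print(f"PSI={value:.3f}: stable")
```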

Accountability for automated decisions. Financial services regulation places specific accountability on firms for decisions made about customers and counterparties. AI does not transfer that accountability to a vendor or a model. When an AI system makes a decision that harms a customer or creates a regulatory liability, the accountability sits with the firm, and within the firm it sits with the management body. This is not a theoretical legal point. It is a governance obligation.

"Boards are not deciding how AI should be used. They are learning, after the fact, about AI that has already been deployed. That is not governance. It is retrospective awareness."

Ali Zeb

What the regulatory direction is actually signalling

The FCA's AI discussion papers, the NCSC's AI security guidance, and the EU AI Act together signal a consistent regulatory direction: boards and senior management will be expected to demonstrate active governance of AI, not passive awareness of it.

The FCA has been explicit that its Consumer Duty obligations extend to outcomes produced by AI systems. A firm cannot defend a poor customer outcome on the grounds that its AI model produced it unintentionally. The Consumer Duty requires firms to take positive steps to ensure good outcomes, and a board that cannot demonstrate it has governed its AI systems to that standard will struggle to show it has met the Duty.

The NCSC's work on AI security has identified a specific set of risks that boards need to understand: adversarial manipulation of AI systems, supply chain risks from AI components, and the security implications of large language model integration into operational systems. These are not distant, hypothetical risks. They are present in the current threat environment and they require board-level visibility.

The EU AI Act, which will affect financial services firms operating in EU markets, creates tiered obligations based on AI risk classification. High-risk uses in financial services (credit scoring, insurance underwriting, employment decisions) will require documented governance frameworks, conformity assessments, and human oversight mechanisms. The governance infrastructure needs to be built before those obligations become enforceable.

Four decisions boards must make now

The following four decisions are not a complete AI governance framework. They are the board-level decisions that create the foundation on which a governance framework can be built. Without them, governance frameworks produced by management will remain management frameworks, not board governance. A sketch of the register these decisions could produce follows the list.

1. Inventory approval. The board should approve a register of the AI systems the firm uses in consequential decisions: those that affect customers, counterparties, or regulatory obligations. This is not a technology inventory. It is a governance decision about what the firm is doing in its own name. Boards that do not know what AI they are running cannot govern it.

2. Risk classification. For each consequential AI use, the board should approve a risk classification: what is the potential harm if this system produces a wrong output? Who bears the accountability? What is the acceptable error rate? These classifications drive the governance requirements for each system, and they are board decisions, not management ones, because they involve the firm's risk appetite.

3. Human oversight thresholds. The board should define the categories of AI decision that require human review before they take effect. Not every AI output requires human oversight; mandating it universally would defeat the purpose of automation. But the board should define the threshold explicitly, rather than allowing operational teams to define it by default. In regulated financial services, that threshold is not a technical question. It is a governance question with regulatory implications.

4. A review and retirement policy. The board should approve a policy that defines how AI models are monitored, when they are reviewed, and under what circumstances they are retired or suspended. This is the AI equivalent of the credit risk review cycle. It provides the governance infrastructure that allows the board to demonstrate ongoing oversight, not just point to the original approval.
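Here is a minimal sketch, in code, of the artifact the four decisions above can converge on: one register entry per consequential AI system, carrying its board-approved risk classification, its human-review threshold, and its review and retirement policy. Every field name, tier, and value is an illustrative assumption, not a prescribed schema.

```python
# One register entry per consequential AI system, reflecting the four
# board decisions. All names, tiers, and values are illustrative.

from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # e.g. credit scoring, underwriting
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class AISystemRecord:
    name: str
    owner: str                     # accountable executive, not the vendor
    decision_domain: str           # what the system decides, in plain terms
    risk_tier: RiskTier            # decision 2: board-approved classification
    human_review_threshold: float  # decision 3: below this confidence, a human decides
    review_interval_days: int      # decision 4: scheduled performance review
    last_reviewed: date
    retirement_trigger: str        # decision 4: condition for suspension

    def requires_human_review(self, model_confidence: float) -> bool:
        """Decision 3 in operation: route low-confidence or high-risk
        decisions to a human before they take effect."""
        return (self.risk_tier is RiskTier.HIGH
                or model_confidence < self.human_review_threshold)

register = [  # decision 1: the board-approved inventory
    AISystemRecord(
        name="retail-credit-scoring-v4",
        owner="Chief Risk Officer",
        decision_domain="consumer credit limit decisions",
        risk_tier=RiskTier.HIGH,
        human_review_threshold=0.90,
        review_interval_days=90,
        last_reviewed=date(2025, 11, 3),
        retirement_trigger="PSI > 0.25 on two consecutive monthly checks",
    ),
]

assert register[0].requires_human_review(model_confidence=0.95)  # HIGH tier always escalates
```

The point of expressing it this way is not that boards write code. It is that each of the four decisions becomes a concrete, auditable field rather than a sentiment in a minute.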

Ali Zeb is an Executive Cyber Security Advisor with former advisory appointments at the FCA and NCSC. He advises financial services boards on AI governance, cyber risk, and regulatory compliance.

Disclaimer

The views and opinions expressed in these articles are those of the author, Ali Zeb, and are provided for general informational and educational purposes only. They are based on professional experience, independent research, publicly available information, and the use of artificial intelligence tools to support analysis and content development.

While every effort is made to ensure the accuracy and relevance of the information presented, no representation or warranty, express or implied, is made as to its completeness, accuracy, or suitability for any particular purpose. The content should not be relied upon as professional, legal, regulatory, or financial advice.

Readers are encouraged to seek appropriate independent advice specific to their organisation and circumstances before making any decisions based on the information contained in these articles.

To the fullest extent permitted by law, the author accepts no liability for any loss, damage, or consequences arising directly or indirectly from the use of, or reliance on, the information provided.

AI governance on your board's agenda?

Advisory on AI risk frameworks for financial services boards, informed by former FCA and NCSC advisory appointments. I respond personally to every enquiry.

Arrange a Conversation