AI Model Risk Is Now Cyber Risk: Why Security, Legal and Risk Committees Must Work Together
By Ali Zeb · April 2026 · 9 min read
Boards and senior executives in regulated organisations are accustomed to thinking about AI risk and cyber risk as related but distinct disciplines. Cyber security governs the integrity, confidentiality, and availability of systems and data. Model risk governs the performance, accuracy, and fairness of analytical and decisioning systems. For most of the past decade, that separation was workable. It is no longer.
AI model risk and cyber risk have converged. The attack vectors that cyber security functions are defending against now include model poisoning, prompt injection, and adversarial manipulation of AI outputs. The failures that model risk functions are monitoring for now include failures that are not operational drift but deliberate attack. And the regulatory frameworks that legal and compliance teams are navigating, DORA, NIS2, the FCA's operational resilience expectations, and the EU AI Act, treat AI model failure as a category of ICT risk event, not as a separate compliance matter.
The governance consequence is structural. Most regulated organisations have three committees that each own a fragment of this risk: the security or cyber committee, the risk or audit committee, and the legal or compliance function. None of them owns the full picture. Each can discharge its individual mandate without the others being aware of material exposures that sit at the intersection. That is not a theoretical concern. It is the condition most boards currently operate under, and it is the condition that regulators will increasingly examine.
Where model risk and cyber risk now meet
Understanding why this convergence matters requires understanding what AI-specific attack vectors actually look like, because they do not map neatly to either a traditional cyber incident or a traditional model failure.
Data and model poisoning. In a data poisoning attack, an adversary contaminates the training data used to build or update an AI model. The contamination does not need to be substantial. Analysis of documented poisoning attacks shows that as little as one per cent of corrupted training data can meaningfully degrade model accuracy or introduce systematic bias in outputs. The insidious quality of this attack is that the system continues to function. It produces plausible outputs. The fraud detection model still flags transactions. The credit underwriting model still generates scores. The outputs are wrong in patterned ways that may not be detectable without specifically designed monitoring. This is simultaneously a cyber incident, a model risk event, and, in a regulated institution, potentially a consumer harm issue triggering FCA scrutiny.
Prompt injection. Where data poisoning attacks the model at the training stage, prompt injection attacks it in operation. An attacker embeds instructions within content the AI system will process, and those instructions override or subvert the model's intended behaviour. A 2024 incident demonstrated the operational reality of this risk: a document containing hidden instructions caused an AI system integrated with industrial controls to execute unintended physical commands, resulting in equipment damage. In a financial services context, a prompt injection attack against an AI system processing customer communications, loan applications, or trade documentation could cause material financial decisions to be made on manipulated inputs. It is a cyber attack. It produces a model failure. It triggers operational resilience reporting obligations.
Model theft and IP exfiltration. Proprietary AI models represent significant intellectual and commercial value in institutions that have invested in developing them. They can be extracted through a class of attacks that mirror traditional data exfiltration: systematically querying the model and analysing outputs to reconstruct its parameters and decision logic. A stolen credit risk model, trading signal generator, or fraud detection system represents both a competitive loss and a security incident. It will not appear in either the cyber incident register or the model risk log unless there is a governance framework that explicitly covers the intersection.
"AI risk does not live in a single department. Security sees the exposure. Legal interprets the compliance. Risk monitors the performance. Without a structure that joins these views, each committee is governing with incomplete information."
Ali ZebThe regulatory framework that has removed the option of inaction
The convergence of model risk and cyber risk is no longer only a governance best practice question. It has become a regulatory compliance question, and the timelines are short.
Under DORA, which entered into force for EU financial services entities in January 2025, AI systems are ICT assets. Any disruption, manipulation, or failure of a material AI system is an ICT incident that may require regulatory notification. The management body, the board, carries direct accountability for the ICT risk management framework that governs those systems. A board that has delegated model risk oversight to one committee and cyber risk oversight to another, with no mechanism to consolidate the view, has a governance structure that cannot satisfy DORA's requirements in any incident scenario involving an AI system.
The FCA's new operational incident reporting rules, which take effect in March 2027, explicitly include AI systems within the scope of material third-party and operational arrangements requiring incident disclosure. An AI model failure that causes material harm to consumers, or that poses a risk to market integrity or to the firm's safety and soundness, must be reported within twenty-four hours. The senior manager accountable for the function affected carries personal accountability under SMCR for having taken reasonable steps to prevent the breach. That accountability cannot be discharged through a governance structure in which no one has a complete picture.
The PRA, following two roundtables with regulated firms in late 2025, has made clear that it expects existing model risk management frameworks, including SS1/23 on model risk management for banks — to be applied to AI and machine learning systems. The expectation is not new regulation. It is existing standards applied to new technology. The practical question for boards is whether their current governance structure can actually meet those standards as applied.
What joined-up governance actually requires
Boards do not need to create an entirely new governance architecture to address this. They need to identify the specific gaps in their current structure and close them with targeted interventions.
The diagnostic question is straightforward: if a data poisoning attack caused a material AI system to produce systematically incorrect outputs, which committee would be notified, who would make the decision to take the system offline, who would assess the regulatory reporting obligation, and who would be accountable at board level? If the answer involves three separate escalation paths with no predetermined coordination mechanism, the governance structure has a gap that regulators will find in the event of an incident.
The practical response does not require a new committee in most organisations. It requires three things. First, a shared AI risk taxonomy that all three functions — security, risk, and legal — use in common, so that an event described in cyber terms is immediately recognised as a model risk event and a regulatory reporting trigger simultaneously. Second, a single unified incident response plan for AI-related events that spans cyber containment, model impact assessment, and regulatory notification in a single playbook rather than three separate ones. Third, a board-level reporting mechanism that consolidates the view from all three functions into a coherent picture, rather than requiring the board to triangulate across separate committee reports to understand a risk it should be able to see as a whole.
None of this is architecturally complex. The complexity is in the coordination, and in overcoming the institutional inertia that keeps functions in their established lanes. That coordination is a leadership task that begins at board level, because only the board can direct all three functions to build the shared framework their current mandates do not require them to build independently.
Ali Zeb is an Executive Cyber Security Advisor, Non-Executive Director, and former advisory member at the FCA, NCSC, and Lloyds of London Market Cyber Risk Committee. He advises boards and executive committees on AI governance, cyber risk, and the organisational conditions required to manage both effectively.
Disclaimer
The views and opinions expressed in these articles are those of the author, Ali Zeb, and are provided for general informational and educational purposes only. They are based on professional experience, independent research, publicly available information, and the use of artificial intelligence tools to support analysis and content development.
While every effort is made to ensure the accuracy and relevance of the information presented, no representation or warranty, express or implied, is made as to its completeness, accuracy, or suitability for any particular purpose. The content should not be relied upon as professional, legal, regulatory, or financial advice.
Readers are encouraged to seek appropriate independent advice specific to their organisation and circumstances before making any decisions based on the information contained in these articles.
To the fullest extent permitted by law, the author accepts no liability for any loss, damage, or consequences arising directly or indirectly from the use of, or reliance on, the information provided.
Reviewing your AI and cyber governance structure?
The first conversation is about understanding your situation. I respond personally to every enquiry.
Arrange a Conversation