AI-Enabled Cyber Attacks: What Boards Must Understand Before the First Crisis
By Ali Zeb · April 2026 · 9 min read
Most boards have now accepted that cyber security is a governance responsibility. They receive periodic briefings, approve security budgets, and understand, at least in principle, that a significant incident would land on their agenda within hours of discovery. What most boards have not yet absorbed is that the threat landscape they are preparing for no longer exists. Artificial intelligence has changed the economics, the speed, and the precision of cyber attack in ways that make many existing governance frameworks structurally inadequate.
This is not a technology argument. It is a governance one. The nature of the threat has shifted in three specific ways that boards need to understand, not because they are expected to manage the technical response, but because the strategic decisions that flow from this shift, about investment, about risk tolerance, about third-party dependency, belong unambiguously at board level.
The NCSC has been explicit on this. Its April 2026 open letter to business leaders described AI-enabled cyber threats as a material and growing risk requiring board-level attention. The FCA and the Bank of England have reinforced the same message through their AI governance expectations. The regulatory direction is not ambiguous: boards are expected to understand the AI threat environment, not simply to be briefed on it after the fact.
Three shifts that change the governance calculus
AI has not created entirely new categories of attack. Phishing, impersonation, and vulnerability exploitation are not new. What AI has done is remove the constraints that previously limited their scale and effectiveness. Understanding that distinction is the starting point for board-level thinking on this threat.
Speed of exploitation. The window between a vulnerability being discovered and a working exploit being deployed has collapsed. Where it once took adversaries weeks to weaponise a newly disclosed vulnerability, analysis of 2025 incident data shows the average has fallen to under three days. In several documented cases, critical vulnerabilities were weaponised within hours of public disclosure. This is not because attackers have become more skilled. It is because AI systems can now automate the reverse engineering, testing, and deployment of exploits at a speed no human team can match. The practical consequence for boards is that patch cycles designed around a weekly or monthly cadence are structurally misaligned with the actual threat. That is an investment and prioritisation decision with a board dimension.
Personalised attacks at industrial scale. Social engineering has always been the most effective attack vector against organisations, and it has always relied on convincing a human to take an action they should not. The historical constraint was that convincing social engineering required skilled, native-language operators and significant time investment per target. Generative AI has eliminated that constraint entirely. A single AI system can now generate thousands of individually personalised, contextually accurate phishing communications per hour, drawing on publicly available information about targets, their roles, their contacts, and their organisations. The result is that the trade-off between volume and precision that previously favoured defenders no longer holds. Attacks can be simultaneously large in scale and high in quality. By early 2026, over 80 per cent of analysed phishing communications contained AI-generated elements. That is not an anomaly. It is the new baseline.
Deepfake impersonation as a financial control risk. The impersonation risk that AI introduces deserves particular attention from boards in regulated organisations. The technology that allows realistic synthesis of voice and video has moved from the theoretical to the operationally deployed. In 2024, a single deepfake video conference call convinced an employee at a global engineering firm to authorise a transfer of twenty-five million dollars. In the first quarter of 2025 alone, deepfake-enabled financial fraud resulted in losses exceeding two hundred million dollars globally. Human observers currently detect high-quality deepfake video less than 25 per cent of the time. This is a financial controls failure waiting to happen in organisations whose authorisation processes have not been updated to account for it. The question of whether existing financial controls are adequate in an environment where voice and video can be synthetically generated is a board question, not a technology question.
"Boards are governing AI-era threats with pre-AI frameworks. The threat has moved. The governance has not. That gap is where the next significant incident will find its opening."
Ali Zeb
Why existing governance frameworks are being outpaced
Most board-level cyber governance frameworks were designed for a threat environment in which the limiting factor was attacker capability. The assumption, often implicit, was that significant attacks required time, skill, and resources that constrained their frequency and precision. Defences built around detection windows, response times, and patch cycles reflected that assumption.
AI removes the capability constraint. The limiting factor is now attacker intent. Organisations that were previously too small, too obscure, or too well-defended to be an attractive target under the economics of traditional attack are now viable targets. The marginal cost of extending an AI-powered attack to an additional target is negligible. The implication is that boards cannot rely on the protection previously provided by size, sector, or relative obscurity.
If the first inadequacy is economic, the second is temporal. Board governance typically operates on quarterly cycles. Incident response frameworks are tested annually. The threat environment is now measured in hours. A governance model that reviews security posture every three months cannot be adequate against a threat that can establish a foothold, escalate privileges, and exfiltrate material data over a weekend. This does not mean boards need to meet more frequently. It means the standing framework that boards approve needs to be built for a different operating rhythm at the executive level, with appropriate board-level visibility when thresholds are crossed.
What boards should be asking now
The NCSC has published specific questions for boards to put to their technical leadership on AI and cyber security. Three are particularly diagnostic of whether governance is aligned with the actual threat environment.
Have our financial authorisation controls been reviewed in light of deepfake capability? Most financial authorisation frameworks were designed assuming that voice or video confirmation from a known individual provided reasonable assurance. That assumption is no longer reliable. The board should understand whether controls have been updated, what alternative verification mechanisms exist, and what the financial exposure would be if a deepfake impersonation of a senior executive or board member succeeded.
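To make the control-level implication concrete, the sketch below illustrates one principle that updated authorisation frameworks tend to adopt: above a value threshold, voice or video confirmation carries no weight at all, and release depends on out-of-band verification through a pre-registered channel plus an independent second approver. The threshold, field names, and workflow here are illustrative assumptions, not a description of any specific organisation's controls.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: the threshold, fields, and rule are hypothetical.
HIGH_VALUE_THRESHOLD = 50_000  # set by the organisation's own risk appetite

@dataclass
class TransferRequest:
    amount: float
    confirmed_by_voice_or_video: bool      # e.g. a video call appearing to be the CFO
    confirmed_out_of_band: bool            # callback to a pre-registered number or a signed workflow step
    second_approver: Optional[str] = None  # independent approver recorded in the payments system

def may_release(request: TransferRequest) -> bool:
    """Above the threshold, voice or video confirmation carries no weight:
    release requires out-of-band verification and an independent second approver."""
    if request.amount < HIGH_VALUE_THRESHOLD:
        return True  # below threshold, normal controls apply
    return request.confirmed_out_of_band and request.second_approver is not None

# A convincing video call alone does not release a high-value payment.
urgent = TransferRequest(amount=25_000_000,
                         confirmed_by_voice_or_video=True,
                         confirmed_out_of_band=False)
assert may_release(urgent) is False
```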
What is our effective patch deployment timeline, and how does it compare to current exploit development speeds? The answer should be specific. A board that understands its organisation can patch critical vulnerabilities across material systems within forty-eight hours is in a materially different position from one whose patch cycle runs to weeks. If the answer is not known, that is itself a significant governance finding.
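The metric behind that question can be computed directly from vulnerability management records. A minimal sketch follows, assuming hypothetical exported records with disclosure and deployment timestamps; the field names, example data, and the 72-hour comparison window are assumptions for illustration, not a standard.

```python
from datetime import datetime
from statistics import median

# Illustrative only: hypothetical records exported from a vulnerability management
# tool, one per critical vulnerability affecting material systems.
records = [
    {"cve": "CVE-2025-0001", "disclosed": "2025-06-02T09:00", "patched": "2025-06-03T17:30"},
    {"cve": "CVE-2025-0002", "disclosed": "2025-07-14T11:00", "patched": "2025-07-21T08:00"},
    {"cve": "CVE-2025-0003", "disclosed": "2025-09-01T00:00", "patched": "2025-09-02T12:00"},
]

def hours_to_patch(record: dict) -> float:
    disclosed = datetime.fromisoformat(record["disclosed"])
    patched = datetime.fromisoformat(record["patched"])
    return (patched - disclosed).total_seconds() / 3600

times = [hours_to_patch(r) for r in records]

# Compare against an assumed exploit-development window of roughly 72 hours.
EXPLOIT_WINDOW_HOURS = 72
print(f"Median time to patch critical vulnerabilities: {median(times):.0f} hours")
print(f"Slowest: {max(times):.0f} hours")
print(f"Share patched inside the exploit window: "
      f"{sum(t <= EXPLOIT_WINDOW_HOURS for t in times) / len(times):.0%}")
```

Reporting the median alongside the worst case matters: a fast average can hide the one slow system an attacker needs.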
How would a sophisticated AI-generated spear-phishing campaign targeting our finance, legal, or executive teams be detected? The answer should describe a specific technical control, not a general awareness training programme. Employee awareness remains important, but awareness training built for the pre-AI phishing environment is not adequate as a primary control against AI-generated attacks that are indistinguishable from legitimate correspondence.
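As one illustration of what a "specific technical control" can look like in practice, the sketch below flags inbound mail that fails domain authentication (the Authentication-Results header defined in RFC 8601) or arrives from a sender the mailbox has never corresponded with. It is a simplified example built on assumed header values and a hypothetical known-senders list, not a recommended product or a complete defence.

```python
from email import message_from_string

# Illustrative only: a hypothetical known-senders list and a sample inbound message.
known_senders = {"supplier@example-partner.com"}

raw = """\
From: "Chief Financial Officer" <cfo@examp1e-corp.com>
Subject: Urgent wire transfer before close of business
Authentication-Results: mx.example.com; spf=fail; dkim=none; dmarc=fail
To: payments@example.com

Please action the attached payment instruction immediately.
"""

msg = message_from_string(raw)
auth_results = msg.get("Authentication-Results", "").lower()
sender = msg.get("From", "")

flags = []
if "dmarc=pass" not in auth_results:
    flags.append("domain authentication did not pass (possible spoofed sender)")
if not any(addr in sender for addr in known_senders):
    flags.append("first-time sender for this mailbox")
if "urgent" in msg.get("Subject", "").lower():
    flags.append("urgency language in subject")

if flags:
    print("Quarantine for review:", "; ".join(flags))
```

The point of the example is the shape of the control: it does not try to judge whether the prose looks machine-written, because well-generated prose will pass that test; it checks properties the attacker cannot easily forge.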
These questions will not always produce reassuring answers. That is the point. The governance value is in understanding where the gaps are, making deliberate decisions about risk appetite and remediation investment, and being able to demonstrate to regulators that the board engaged with the threat environment in an informed and active way.
Ali Zeb is an Executive Cyber Security Advisor, Non-Executive Director, and former advisory member at the FCA, NCSC, and Lloyd's of London Market Cyber Risk Committee. He advises boards on how to govern cyber and AI risk at the level the current threat environment demands.
Disclaimer
The views and opinions expressed in these articles are those of the author, Ali Zeb, and are provided for general informational and educational purposes only. They are based on professional experience, independent research, publicly available information, and the use of artificial intelligence tools to support analysis and content development.
While every effort is made to ensure the accuracy and relevance of the information presented, no representation or warranty, express or implied, is made as to its completeness, accuracy, or suitability for any particular purpose. The content should not be relied upon as professional, legal, regulatory, or financial advice.
Readers are encouraged to seek appropriate independent advice specific to their organisation and circumstances before making any decisions based on the information contained in these articles.
To the fullest extent permitted by law, the author accepts no liability for any loss, damage, or consequences arising directly or indirectly from the use of, or reliance on, the information provided.
Reviewing your board's AI threat governance?
The first conversation is about understanding your situation. I respond personally to every enquiry.
Arrange a Conversation