Insights  ·  Fraud & Financial Crime

Cyber-Enabled Fraud: The Risk CEOs Are Now More Worried About Than Ransomware

By Ali Zeb  ·  March 2026  ·  8 min read

For most of the past decade, ransomware was the risk that occupied board agendas most consistently. It was visible, measurable, and produced headlines that brought the question of cyber investment directly into the boardroom. Organisations that had never seriously engaged with cyber governance found themselves doing so after a ransomware event, or after watching a peer organisation fail to recover from one. The threat was real, the damage was quantifiable, and the governance response, however belated, was at least legible.

Cyber-enabled fraud has overtaken ransomware as the risk that senior executives in regulated organisations now report as their primary concern, and it has done so almost silently. The WEF Global Risks Report has consistently placed fraud and cybercrime among the top concerns of CEOs across financial services, and the pattern in advisory conversations over the past eighteen months has been consistent: boards that have made reasonable progress on ransomware readiness are significantly less prepared for the fraud vector, and they are less prepared in ways that are structurally harder to fix.

The governance challenge fraud presents is different in kind from ransomware, not simply different in degree. Understanding that distinction is the starting point for any board that wants to govern this risk rather than simply respond to it after the fact.

Why fraud is a structurally different governance problem

Ransomware announces itself. Systems go offline, files are encrypted, a ransom demand appears. The organisation knows immediately that something has happened, knows broadly what has been affected, and can begin a response. The incident is acute, visible, and time-bounded. The governance response (incident response plans, communication protocols, regulatory notification procedures) is designed for exactly this kind of event.

Cyber-enabled fraud is frequently the opposite. A well-executed business email compromise or deepfake impersonation attack does not trigger a system alert. It triggers a human decision. An authorised person, acting on what appears to be a legitimate instruction from a known counterparty or senior executive, executes a transfer, changes payment details, or approves a transaction. The fraud may not be discovered for days, weeks, or in some documented cases, months. By the time it is identified, funds have moved through multiple jurisdictions and recovery is typically limited.

The financial scale is significant and growing. Business email compromise alone has consistently generated losses that exceed those from ransomware in absolute terms across major economies. AI has compounded this by removing the linguistic and contextual imperfections that previously made sophisticated fraud attempts detectable. A deepfake audio call using a CEO's voice to instruct a finance director to execute an urgent transfer is now technically achievable at a cost that makes it commercially viable for criminal organisations to deploy at scale. In early 2024, a single such attack cost a multinational firm twenty-five million dollars. It was not an isolated incident. It was an early indicator of a technique that has since become operationally routine.

"Ransomware gives you an incident to respond to. Fraud gives you a decision already made and money already moved. The governance challenge is not the response. It is preventing the decision from being made in the first place."

Ali Zeb

Where board-level governance is failing

Most organisations have financial controls designed for a world in which the authenticity of a senior executive's instruction could be reasonably assumed from the channel through which it arrived. An email from a known address, a phone call from a known number, a video call with a known face: these have historically provided sufficient assurance to authorise significant financial transactions. AI has invalidated all three as reliable verification mechanisms, and most organisations have not updated their controls accordingly.

The board-level governance failure is not usually a failure to understand that fraud exists. It is a failure to ask whether the financial authorisation framework, the controls designed to ensure that large transfers are properly sanctioned, remains adequate in an environment where voice, video, and email identity can be synthetically generated. That review has not happened in most organisations, because it sits uncomfortably between cyber security (which manages technical controls), finance (which manages authorisation processes), and risk (which manages fraud exposure). None of them owns the intersection.

The board questions that reveal the gap. Three questions, put to the executive team, will establish quickly whether the governance framework is adequate. First: has our financial authorisation framework been reviewed in light of deepfake capability, and what alternative verification mechanisms exist for large or urgent transactions? Second: do we have documented out-of-band verification protocols for high-value payment instructions that arrive through digital channels? Third: when did we last test those protocols against a realistic social engineering scenario, and what did we find?

Boards that receive confident answers to all three, answers backed by documented evidence rather than assurance, are in a materially stronger position. In most organisations I encounter in an advisory capacity, the answers to at least two of those three questions are either incomplete or uncomfortable. That discomfort is where the governance work needs to happen.

What a board-level response to fraud risk looks like

Governing fraud risk at board level is not primarily a technology problem. The technical controls that detect and prevent fraud attempts are necessary, but they are not sufficient: the most effective fraud attacks succeed not by defeating technical controls but by circumventing them through human decision-making. The governance response therefore needs to be structural.

Boards should direct their executive teams to conduct a specific review of financial authorisation processes with AI-enabled fraud as the explicit threat model. This is different from a general fraud risk review. It should ask: which of our current authorisation controls assume the authenticity of a digital communication or voice call, and what would we do if that assumption were no longer reliable? The output should be a set of mandatory out-of-band verification requirements for transactions above defined thresholds, with those thresholds set at a level that reflects the actual fraud exposure rather than historical convenience.
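The routing rule such a review should produce can be sketched as a simple policy check. Everything below is illustrative only: the function names, the threshold figure, and the channel list are assumptions for the sketch, not a prescribed control framework.

```python
from dataclasses import dataclass

# Channels whose identity signals AI can now synthesise convincingly.
# Illustrative set, not an exhaustive taxonomy.
DIGITAL_CHANNELS = {"email", "voice", "video"}

@dataclass(frozen=True)
class PaymentInstruction:
    amount: float          # transaction value in the firm's base currency
    channel: str           # channel the instruction arrived through
    urgent: bool = False   # urgency pressure is itself a fraud signal

def requires_out_of_band(instr: PaymentInstruction,
                         threshold: float = 50_000.0) -> bool:
    """Return True when the instruction must be re-verified through a
    separate, pre-agreed channel (e.g. a callback to a number on file)
    before funds can move. The threshold is a placeholder: the article's
    point is that it should reflect actual fraud exposure."""
    if instr.channel in DIGITAL_CHANNELS and instr.amount >= threshold:
        return True
    # Urgent digital instructions are verified regardless of value,
    # because manufactured urgency is a hallmark of BEC and deepfake attacks.
    if instr.channel in DIGITAL_CHANNELS and instr.urgent:
        return True
    return False

print(requires_out_of_band(PaymentInstruction(250_000, "video")))              # True
print(requires_out_of_band(PaymentInstruction(10_000, "email", urgent=True)))  # True
print(requires_out_of_band(PaymentInstruction(10_000, "email")))               # False
```

The design choice worth noting is that the rule keys on the channel, not the apparent sender: once voice, video, and email identity can be synthesised, the channel an instruction arrived through is the only attribute the control can still trust.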

The board should also ensure that fraud risk sits within a defined governance structure with clear ownership. The fragmentation between cyber, finance, and risk functions is the structural condition that allows this risk to be managed partially by all three and owned completely by none. A board that directs a named executive to own the intersection and report on it as a defined risk category has taken the single most important governance step available to it.

Ali Zeb is an Executive Cyber Security Advisor, Non-Executive Director, and former advisory member at the FCA, NCSC, and Lloyd's of London Market Cyber Risk Committee. He advises boards on cyber governance, fraud risk, and the organisational structures required to manage both effectively.

Disclaimer

The views and opinions expressed in these articles are those of the author, Ali Zeb, and are provided for general informational and educational purposes only. They are based on professional experience, independent research, publicly available information, and the use of artificial intelligence tools to support analysis and content development.

While every effort is made to ensure the accuracy and relevance of the information presented, no representation or warranty, express or implied, is made as to its completeness, accuracy, or suitability for any particular purpose. The content should not be relied upon as professional, legal, regulatory, or financial advice.

Readers are encouraged to seek appropriate independent advice specific to their organisation and circumstances before making any decisions based on the information contained in these articles.

To the fullest extent permitted by law, the author accepts no liability for any loss, damage, or consequences arising directly or indirectly from the use of, or reliance on, the information provided.

Reviewing your fraud governance framework?

The first conversation is about understanding your situation. I respond personally to every enquiry.

Arrange a Conversation