Why Shadow AI Belongs on the Board Agenda
The numbers on shadow AI are now everywhere. More than 80% of employees use unapproved AI tools at work;1 Harmonic Security catalogued 665 distinct generative AI applications across enterprise environments;2 one in five organisations has had a confirmed breach linked to unsanctioned AI, adding roughly $670,000 to the cost of the average incident, and only 37% have any policy in place to detect or manage it.3 Most coverage frames this as an IT control problem. That framing is wrong, and it is the reason shadow AI is getting worse rather than better.
The pattern is consistent across every mid-market and PE-backed business I work with. The board approves AI as a strategic priority. The CEO repeats it at the all-hands. The technology budget gets a small uplift, usually for licences. No operating model is funded underneath; no clear ownership is assigned; no policy distinguishes approved from unapproved tools; no approved alternative is deployed for the workflows employees actually need help with. Six months later the board asks the CIO why the AI strategy is not delivering, the CIO points to Copilot productivity numbers, and nobody mentions that two-thirds of the actual AI usage is happening on personal accounts the company cannot see, govern, or defend in an audit.
The board’s role in creating this is direct. Boards approved the priority without funding the discipline; they wanted the productivity gain without owning the governance cost; they treated AI as a tooling decision when it is an operating model decision. The result is the shadow IT problem from fifteen years ago replayed at higher speed: data exfiltration risk, regulatory exposure under the EU AI Act,4 model output reaching customers without review, and procurement that cannot tell the auditor what AI is in production because the AI was never procured.
The CIO and CISO cannot fix this from below. They can deploy detection, write policy, stand up a gateway, run awareness training; none of those moves closes the gap, because the gap is not technical. Employees use personal accounts because the approved alternatives are slower, narrower, or do not exist, and banning the unapproved tools does not work: nearly half of employees keep using them after a ban. The only intervention that changes behaviour is a credible approved alternative deployed against the actual workflow, and that requires capital and operating model investment only the board can authorise.
The board questions that move the problem are different from the ones most boards are currently asking. They are not “what is our AI strategy” or “are we using AI enough”; they are governance and capital questions. Which workflows are people using personal AI accounts to complete, and why have we not provided an approved alternative? Who owns the AI inventory, and can they produce it on demand? What is our policy for AI-generated content reaching customers, regulators, or the board itself? What capital have we allocated to the operating model that supports the AI strategy, separate from the licence cost? Who is accountable when an AI-generated output causes a breach, a regulatory event, or a misstatement?
There is a separate question for PE sponsors holding portfolio companies in this state. The shadow AI exposure sits on the balance sheet at exit, and within the next twelve months buyer diligence is going to cover AI inventory, AI governance, and unsanctioned tool exposure. This is downstream of the funding gap between AI mandates and AI budgets I wrote about earlier; the unsanctioned footprint is often the operational workaround to a funding decision made higher up the chain. Sponsors who treat it as an IT problem inside the portfolio CIO’s remit will pay for it later, the same way the ones who treated cybersecurity as an IT problem in 2018 paid for it during the 2022 to 2024 wave of insurance and diligence tightening.
The practical move is to stop asking the IT function to solve a problem it cannot solve from below. Put the AI inventory, the operating model, and the governance discipline on the board agenda; fund an approved alternative for the workflows where shadow AI is concentrated; assign single-point ownership at executive level; and treat the shadow AI metric as a leading indicator of governance maturity rather than a hygiene number for the security team. Shadow AI is the visible part of the problem; the invisible part is a board that approved a strategy without funding the operating model.