0330 055 2678 | Client Portal |

0330 055 2678 | Client Portal |

Shadow AI Risks: How Everyday AI Use Can Expose Business Data

Shadow AI Risks: How Everyday AI Use Can Expose Business Data

Shadow AI risks are growing inside organisations as employees use AI tools to move faster without fully understanding what should never be pasted, uploaded, summarised, or connected. Most organisations still picture cyber risk as something that starts outside the business: a phishing email, a stolen password, or a suspicious login attempt. But some of the most important cyber risks in 2026 begin much earlier, inside the organisation itself, when everyday behaviour creates exposure before anyone notices it. Microsoft’s UK research found that 71% of employees have used unapproved consumer AI tools at work, with 51% doing so every week. SAP’s UK research found that 60% of businesses say employees lack AI training, 68% report staff using unapproved AI tools at least occasionally, and 44% say they have already seen data or IP exposure as a result of shadow AI use (Microsoft UK Stories, 2025; SAP News, 2026a; SAP News, 2025).

Why shadow AI risks start with behaviour

The problem is not usually malicious intent. Most employees are not trying to create risk. They are trying to save time. They want help drafting emails, summarising notes, improving reports, restructuring proposals, or speeding through repetitive admin. That is exactly why shadow AI risks are so easy to miss. They often look like productivity first and security second.

The National Cyber Security Centre is clear that keeping AI systems secure is as much about organisational culture, process, and communication as it is about technical controls, and that security should be built into AI projects from the start rather than bolted on later (National Cyber Security Centre, 2024).

That matters because employee behaviour is now part of the attack surface. If teams are experimenting with AI faster than the organisation can govern it, the first problem is not the model itself. It is the absence of clear boundaries around what can be used, what can be shared, and what should stay out of prompts altogether. The NCSC’s guidance is aimed not just at security teams, but at managers and decision-makers, which underlines the point: AI risk is a leadership and operating model issue, not just a tooling issue (National Cyber Security Centre, 2024).

The employee awareness gap behind shadow AI risks

In many businesses, employees still do not have clear answers to simple but important questions. Can a customer email be pasted into an AI tool if names are removed? Can meeting notes be uploaded for a summary? Can AI be used to rewrite an internal report, analyse a spreadsheet, or improve a proposal that contains commercially sensitive detail? When those lines are unclear, employees draw them themselves, and that is where silent exposure begins.

The ICO’s guidance on AI and data protection makes the standard here much clearer than many organisations realise. It says organisations should assess whether training or input data contains identified or identifiable personal data, whether directly or indirectly, and should stay up to date with both attack methods and mitigations in what it describes as a rapidly developing area. Separately, the ICO’s data protection principles make clear that personal data must be adequate, relevant, and limited to what is necessary for the purpose. In practice, that means “it was only a quick prompt” is not a strong defence if personal or commercially sensitive information has been entered into the wrong service (Information Commissioner’s Office, 2023a; Information Commissioner’s Office, 2023b).

What shadow AI risks and data leakage look like in practice

AI-related leakage rarely looks dramatic at the point it happens. It usually looks ordinary. A team member pastes a customer exchange into a chatbot to improve wording. A sales user uploads proposal text to sharpen tone. An operations lead uses an assistant to summarise internal issues using real examples. Someone drops spreadsheet content into a tool for faster analysis. Another employee uses a consumer AI product because it is easier than the approved route.

That is exactly why the risk is growing. OWASP’s current guidance on LLM application risks warns that sensitive information disclosure can include personally identifiable information, financial details, health records, confidential business data, security credentials, and legal documents. It also warns that users may unintentionally provide sensitive data that could later be disclosed in outputs. SAP’s UK research reinforces the commercial side of that risk, with 44% of organisations reporting data or IP exposure and 43% reporting security vulnerabilities linked to shadow AI use (OWASP Gen AI Security Project, n.d.a; SAP News, 2025).

Why shadow AI risks are now a cyber issue

It is tempting to treat shadow AI as a policy or training problem. It is both of those things, but it is also now a cyber issue. The NCSC’s assessment of AI and cyber threat says organisations using AI systems will almost certainly need to maintain up-to-date cyber security measures on those systems and their dependencies. The same report warns that insecure data handling, poor configuration, weak identity management, and extensive data collection can all make AI-enabled risk worse (National Cyber Security Centre, 2025).

OWASP’s broader LLM risk framework also matters here. Prompt injection can lead to disclosure of sensitive information, manipulation of outputs, unauthorised access to connected functions, and interference with decision-making processes. That means the issue is not limited to what a person types into a model. It also includes what the model can reach, what it can surface, and what it might do with weakly governed context or connected tools (OWASP Gen AI Security Project, n.d.b).

Why approved AI still needs controls to reduce shadow AI risks

A lot of organisations assume the answer is simply to block public tools and approve an enterprise AI platform instead. That is better, but it is not enough on its own. Approved AI still needs permissions, data classification, governance, and technical controls. Microsoft’s current Purview guidance shows just how practical this problem has become inside enterprise environments. It says organisations can create DLP policies to stop Microsoft 365 Copilot and Copilot Chat from processing prompts that contain sensitive information, and can also exclude files and emails with sensitivity labels from being used in response summaries (Microsoft Learn, 2026).

There is also an important nuance in Microsoft’s own guidance that many teams will miss. While DLP can check the text typed directly into a prompt, Microsoft says it cannot scan the contents of files uploaded directly into prompts for sensitive data. That is a useful reminder that approved AI does not remove judgement calls. It simply moves the need for governance, architecture, and user awareness into a more formal environment (Microsoft Learn, 2026).

The NCSC’s secure AI system development guidance reinforces the same point from a different angle. Its executive summary says secure AI systems should function as intended, be available when needed, and work without revealing sensitive data to unauthorised parties. In other words, secure AI is not just about access to a model. It is about the full design, deployment, and operating context around it (National Cyber Security Centre, 2023).

How good organisations reduce shadow AI risks

The organisations handling this well are not necessarily the ones using the least AI. They are the ones creating the most clarity. They define which tools are approved. They set simple rules on what must never be pasted, uploaded, or connected. They train employees using real examples rather than vague warnings. They apply data minimisation more seriously. They use sensitivity labels, DLP, and permissions where appropriate. They also treat shadow AI risks as an everyday operational issue rather than a future governance project (National Cyber Security Centre, 2024; Information Commissioner’s Office, 2023a; Microsoft Learn, 2026).

Final thought on shadow AI risks

Some of the most important cyber risks in 2026 do not begin with a sophisticated attacker. They begin with a rushed task, a useful AI tool, unclear boundaries, and a well-meaning employee trying to work more efficiently. That is what makes shadow AI risks so difficult to spot early. They often do not feel like cyber incidents when they start. They just feel helpful, right up until the point the business realises sensitive data has gone somewhere it should not have gone (Microsoft UK Stories, 2025; SAP News, 2025; National Cyber Security Centre, 2025).


Practical next steps

Two free resources will help you understand your exposure today:

Phishing Defence Toolkit

Five practical steps you can implement immediately, covering MFA, inbox rule audits, external tagging, verification habits and recovery.

View the toolkit

Cyber Health Check

A two-minute assessment that highlights:

  • Phishing risk

  • Behavioural exposure

  • Metadata vulnerabilities

  • Digital hygiene gaps

Your report arrives instantly with clear next actions.

Start Cyber Health Check


Got any questions? Get in Touch


References