90% of AI Usage Is Invisible to IT. The Breach Has Started.

67% of executives report breaches from unapproved AI tools. 269 unsanctioned apps per 1,000 employees. Shadow AI is enterprise security's biggest blind spot.

By Rajesh Beri·April 15, 2026·12 min read
THE DAILY BRIEF


Your employees are using AI right now. Not the AI tools you approved, budgeted for, and deployed with governance frameworks. The other ones. The free ChatGPT tab open behind Slack. The Claude account on a personal email. The AI coding assistant a developer installed without telling anyone. The marketing intern who pasted your Q3 revenue projections into a prompt to generate a board deck.

According to multiple reports released this month, approximately 90 percent of enterprise AI usage is invisible to the organization. Reco AI's 2025 State of Shadow AI Report found that 98 percent of organizations have employees actively using unsanctioned AI applications. Companies with 11 to 50 employees average 269 unsanctioned AI tools per 1,000 workers. And 63 percent of those organizations have no AI governance policy at all.

This is not a future risk. Writer's 2026 Workplace Intelligence report, published this month, found that 67 percent of executives believe their company has already suffered a data breach due to unapproved AI tools. Thirty-five percent of employees admit to entering proprietary information into public AI systems. Fifty-five percent of organizations describe their AI usage as a "chaotic free-for-all."

Shadow AI is not shadow IT with a new name. It is categorically more dangerous, and the enterprise response so far has been categorically inadequate.

Why Shadow AI Is Not Shadow IT

The comparison is tempting but misleading. Shadow IT in the 2010s meant an employee using Dropbox instead of SharePoint, or a team spinning up a Trello board without IT's blessing. The risk was real but bounded: the data stayed in a defined application, the blast radius was limited, and discovery usually happened during an audit.

Shadow AI inverts every one of those assumptions.

When an employee pastes a customer support transcript into ChatGPT to draft a response template, that data leaves the enterprise perimeter permanently. Most free-tier AI tools retain user inputs for model training. The data does not sit in a database you can audit. It is absorbed into a model's weights, where it cannot be retrieved, deleted, or traced. There is no access log. There is no data processing agreement. There is no way to know what happened until it surfaces in a breach investigation or a regulator's inquiry.

Samsung learned this the hard way. In a case that became a defining example, engineers pasted proprietary semiconductor source code into ChatGPT to debug production issues. The data was ingested into OpenAI's training pipeline before anyone in security knew it had left the building. Samsung responded with a company-wide ban on external AI tools, a response that solved the immediate problem but created a new one: developers who had been 40 percent more productive with AI assistance were now slower, and the ones who complied resented the policy while the ones who did not simply found workarounds.

The Samsung pattern has repeated across industries throughout 2025 and into 2026. A financial services firm discovers an analyst used Claude to summarize client portfolio data. A healthcare organization finds a clinician pasted patient notes into a consumer AI tool to generate referral letters. A law firm realizes associates have been using AI to draft contract language using privileged client information. In each case, the data exposure happened weeks or months before discovery, the regulatory implications are severe, and the remediation options are limited.

For the technical audience, the attack surface is worth mapping precisely. Shadow AI creates data exfiltration vectors that bypass every traditional DLP control. Endpoint DLP monitors file transfers and email attachments; it does not inspect what an employee types into a browser-based AI interface. Network DLP watches for sensitive data patterns crossing the perimeter; most AI interactions happen over standard HTTPS to well-known domains that are not on any blocklist. CASB solutions can identify which SaaS applications are in use, but they cannot inspect the content of prompts submitted to those applications in real time.

The result is a class of data exposure that is simultaneously pervasive, invisible, and irreversible.

The Numbers Behind the Crisis

The data from multiple 2026 reports converges on the same conclusion: shadow AI is now the largest unmanaged risk in enterprise security.

The Cloud Security Alliance's "AI Gone Wild" report found that one in five organizations has experienced a data breach directly tied to shadow AI usage. IBM's 2025 Cost of a Data Breach Report calculates that organizations with extensive shadow AI face breach costs averaging $4.63 million, which is $670,000 more per incident than organizations with low or no shadow AI exposure. That 16 percent premium is not noise. It reflects the additional forensic complexity, regulatory penalties, and remediation costs that shadow AI breaches create.

Reco AI's analysis is even more pointed: 97 percent of AI-related security breaches in their dataset occurred in environments that lacked proper AI access controls. Not insufficient controls. No controls.

The confidence gap is equally alarming. The Purple Book Community's State of AI Risk Management 2026 report found that 92 percent of organizations express confidence in their ability to detect AI-generated code vulnerabilities in production. The same report found that 70.4 percent of organizations report confirmed or suspected vulnerabilities introduced by AI-generated code in their production systems. Nine out of ten believe they have the problem covered. Seven out of ten already have the problem.

BlackFog's Shadow AI Research adds a behavioral dimension that governance frameworks rarely address: 60 percent of employees say that using unsanctioned AI tools is worth the security risk if it makes them faster. More than a third have shared confidential company data with AI systems outside organizational oversight. And the majority say they would use approved AI tools if their employer provided them, a finding that reframes the problem from employee misconduct to institutional failure.

For business leaders, the Writer report delivers the strategic context. Seventy-nine percent of organizations report challenges adopting AI. Forty-eight percent of C-suite executives describe their AI adoption as a "massive disappointment," up from 34 percent in 2025. And 29 percent of employees admit to actively sabotaging their company's AI strategy, a number that rises to 44 percent among Gen Z workers. The enterprise is not just failing to govern AI. Parts of the enterprise are actively resisting the way AI is being governed.

The Compliance Cliff

Shadow AI is not just a security problem. It is a compliance problem that is about to get significantly worse.

The EU AI Act's obligations for general-purpose AI models take effect in August 2026. GDPR Article 28 requires documented data processing agreements with any processor handling personal data, a requirement that is automatically violated every time an employee submits personal data to a consumer AI tool. The SEC has begun requesting AI governance documentation during cybersecurity incident reviews. Italy's data protection authority opened an investigation in late 2025 into enterprise AI deployments lacking employee consent mechanisms.

For regulated industries, the exposure is acute. HIPAA audit controls under 45 CFR 164.312(b) require tracking of all access to protected health information. PCI DSS Requirement 10 mandates logging of access to cardholder data environments. SOC 2 CC7.2 requires monitoring of system components for anomalies. None of these controls are satisfied when a clinician pastes patient data into ChatGPT or an analyst feeds transaction records into an unsanctioned AI tool.

The compliance challenge is structural. You cannot document what you cannot see. You cannot apply controls to tools you do not know exist. And you cannot demonstrate governance to a regulator when 90 percent of your AI usage is invisible to your security team.

Organizations that have navigated this effectively share a common pattern: they established cross-functional ownership of AI governance early, before an incident forced the question. Those that waited, which is to say most of them, are now building governance frameworks under regulatory pressure with incomplete visibility into their actual AI footprint.

What Actually Works

The data across these reports converges on a framework that is less about restricting AI and more about making the right AI tools easier to use than the wrong ones.

Step one is visibility. You cannot govern what you cannot see. Query DNS and proxy logs for connections to known AI service domains. Review OAuth application consents in your enterprise identity systems. Deploy browser extension allowlisting through group policy. And critically, conduct anonymous employee surveys to understand what tools are actually in use. The gap between what IT thinks is happening and what employees report is consistently the most revealing data point in any shadow AI assessment.
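The log-review step above can be sketched as a minimal scan. The domain list and log schema here are illustrative assumptions; a real deployment would read your proxy or DNS resolver's actual export format and pull the domain inventory from a maintained SaaS/AI catalog feed.

```python
import csv
from collections import Counter

# Illustrative, deliberately incomplete list of AI service domains --
# a real inventory would come from a maintained catalog feed.
AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "perplexity.ai", "poe.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per AI domain in a CSV proxy export with
    columns: timestamp, user, domain (an assumed schema)."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in AI_DOMAINS:
                hits[domain] += 1
    return hits
```

Even a crude count like this usually surprises security teams; the point at this stage is a rough census, not enforcement.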

Step two is risk classification. Not all shadow AI carries equal risk. A marketer using Grammarly's AI features on a blog draft is categorically different from an engineer pasting production database schemas into a free-tier coding assistant. Implement a tiered framework: critical risk for tools processing regulated data (PCI, PHI, PII), high risk for tools accessing proprietary business data, medium risk for tools handling internal non-sensitive information, and low risk for tools with no sensitive data access. Apply controls proportional to the tier.
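One way to sketch that tiering, assuming a simple mapping from the data categories a tool touches to a tier (the category names here are illustrative, not a standard taxonomy):

```python
from enum import IntEnum

class RiskTier(IntEnum):
    LOW = 0       # no sensitive data access
    MEDIUM = 1    # internal, non-sensitive information
    HIGH = 2      # proprietary business data
    CRITICAL = 3  # regulated data: PCI, PHI, PII

# Assumed category-to-tier mapping for illustration only.
CATEGORY_TIERS = {
    "public": RiskTier.LOW,
    "internal": RiskTier.MEDIUM,
    "proprietary": RiskTier.HIGH,
    "pci": RiskTier.CRITICAL,
    "phi": RiskTier.CRITICAL,
    "pii": RiskTier.CRITICAL,
}

def classify_tool(data_categories: set[str]) -> RiskTier:
    """A tool inherits the highest tier of any data category it processes."""
    return max((CATEGORY_TIERS[c] for c in data_categories),
               default=RiskTier.LOW)
```

The max-tier rule matters: a tool that touches mostly internal data but occasionally sees PHI is a critical-tier tool, full stop.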

Step three is providing sanctioned alternatives. This is where most enterprise AI governance strategies fail. Blocking consumer AI tools without providing enterprise alternatives is the corporate equivalent of banning smartphones and handing out pagers. BlackFog's research found that most employees would use approved AI tools if their employer provided them. The demand is real. The productivity gains are real. The security risk comes not from AI usage itself but from AI usage that occurs outside institutional visibility and control.

Step four is data-layer enforcement. For the technical teams, this means implementing controls at the data layer, not the application layer. AI tools proliferate too fast for application-level blocklists to remain current. Instead, focus on what data can reach AI tools: endpoint controls that detect and block sensitive data patterns in browser input fields, API gateways that inspect outbound requests to AI endpoints, and DLP policies that understand the difference between a file upload and a prompt submission.
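A minimal sketch of the prompt-inspection idea, with deliberately simplistic patterns. Production DLP engines use checksum validation, exact-data matching, and ML classifiers rather than bare regexes, so treat this as a shape, not a detector.

```python
import re

# Illustrative sensitive-data patterns; real detectors are far richer.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def inspect_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound
    prompt; a non-empty result would block or redact the submission."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

The key design point is where this runs: at the endpoint or gateway, inspecting the prompt payload itself, rather than trying to enumerate every AI application URL.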

Step five is continuous monitoring, not point-in-time audits. Shadow AI usage patterns change weekly as new tools launch and employees discover new capabilities. Annual or quarterly assessments are useless. Implement continuous monitoring of AI tool usage across the enterprise, with automated alerts for new tools, anomalous usage patterns, and sensitive data exposure.
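The alerting loop can start as simply as a set difference for new tools plus a z-score check for usage spikes. A sketch, assuming daily request counts per tool are already being collected from the visibility step:

```python
from statistics import mean, pstdev

def detect_new_tools(baseline: set[str], observed: set[str]) -> set[str]:
    """Tools seen this window that are absent from the known inventory;
    each should open a review ticket rather than an automatic block."""
    return observed - baseline

def anomalous_usage(history: list[int], today: int, z: float = 3.0) -> bool:
    """Flag today's request count if it sits more than `z` standard
    deviations above the historical mean -- a crude spike heuristic."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), pstdev(history)
    return sigma > 0 and today > mu + z * sigma
```

Crude as it is, a daily job running these two checks already beats a quarterly audit, because it catches the tool that launched last Tuesday.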

The Leadership Problem

The deepest challenge in shadow AI governance is not technical. It is organizational.

Writer's data shows that 75 percent of executives acknowledge their AI strategy is "more for show" than actual guidance. Fifty-eight percent admit their fellow leaders lack the knowledge to make informed AI decisions. Seventy-three percent of CEOs report stress or anxiety about their AI strategy.

When leadership is uncertain, governance becomes performative. Policies get written but not enforced. Tools get approved but not deployed. Training gets scheduled but not attended. And employees, who are under constant pressure to be more productive, make rational individual decisions that create irrational collective risk.

The organizations that are managing shadow AI effectively share three characteristics. First, they treat AI governance as a business function, not a security function. The CISO provides controls. The CIO provides infrastructure. But the AI governance committee includes business unit leaders who understand the workflows AI is being used in and can make informed decisions about acceptable risk.

Second, they measure adoption of sanctioned tools as aggressively as they monitor for unsanctioned ones. If your approved AI platform has 15 percent adoption six months after deployment, you do not have a shadow AI problem. You have a product problem. Fix the product before you blame the users.

Third, they accept that some shadow AI usage is a signal, not a threat. When an employee discovers a consumer AI tool that solves a real workflow problem, that is market research delivered for free. The correct response is not to ban the tool. It is to understand the need it fills, evaluate whether a sanctioned alternative exists, and if not, bring the tool inside the governance perimeter or build something that does the same job under enterprise controls.

The Window Is Closing

The State of AI Risk Management 2026 report identifies the core problem: organizations are adopting AI faster than they can secure it, creating a growing gap between what they believe they control and what is actually happening in their environments.

That gap has a half-life. Every month without visibility is another month of unmonitored data exposure, untracked compliance violations, and unmanaged security risk. The EU AI Act enforcement deadline is 110 days away. The SEC is already asking for documentation. The breach that started last quarter with a single unsanctioned prompt submission is still unfolding.

The good news is that the playbook exists. Visibility, classification, sanctioned alternatives, data-layer enforcement, continuous monitoring. It is not technically novel. It does not require a massive capital investment. What it requires is organizational will: the willingness to look at what is actually happening with AI across your enterprise, accept that the answer will be uncomfortable, and build governance that works with human behavior rather than against it.

Ninety percent of your enterprise AI usage is invisible. The breach is not coming. For two-thirds of organizations, it has already arrived.


Rajesh Beri is Head of AI Engineering at Zscaler and writes about enterprise AI strategy, security, and the gap between what vendors promise and what production environments deliver.


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.

THE DAILY BRIEF

Enterprise AI insights for technology and business leaders, twice weekly.

thedailybrief.com

Subscribe at thedailybrief.com/subscribe for weekly AI insights delivered to your inbox.

LinkedIn: linkedin.com/in/rberi  |  X: x.com/rajeshberi

© 2026 Rajesh Beri. All rights reserved.

Photo by Christina Morillo on Pexels

Your employees are using AI right now. Not the AI tools you approved, budgeted for, and deployed with governance frameworks. The other ones. The free ChatGPT tab open behind Slack. The Claude account on a personal email. The AI coding assistant a developer installed without telling anyone. The marketing intern who pasted your Q3 revenue projections into a prompt to generate a board deck.

According to multiple reports released this month, approximately 90 percent of enterprise AI usage is invisible to the organization. Reco AI's 2025 State of Shadow AI Report found that 98 percent of organizations have employees actively using unsanctioned AI applications. Companies with 11 to 50 employees average 269 unsanctioned AI tools per 1,000 workers. And 63 percent of those organizations have no AI governance policy at all.

This is not a future risk. Writer's 2026 Workplace Intelligence report, published this month, found that 67 percent of executives believe their company has already suffered a data breach due to unapproved AI tools. Thirty-five percent of employees admit to entering proprietary information into public AI systems. Fifty-five percent of organizations describe their AI usage as a "chaotic free-for-all."

Shadow AI is not shadow IT with a new name. It is categorically more dangerous, and the enterprise response so far has been categorically inadequate.

Why Shadow AI Is Not Shadow IT

The comparison is tempting but misleading. Shadow IT in the 2010s meant an employee using Dropbox instead of SharePoint, or a team spinning up a Trello board without IT's blessing. The risk was real but bounded: the data stayed in a defined application, the blast radius was limited, and discovery usually happened during an audit.

Shadow AI inverts every one of those assumptions.

When an employee pastes a customer support transcript into ChatGPT to draft a response template, that data leaves the enterprise perimeter permanently. Most free-tier AI tools retain user inputs for model training. The data does not sit in a database you can audit. It is absorbed into a model's weights, where it cannot be retrieved, deleted, or traced. There is no access log. There is no data processing agreement. There is no way to know what happened until it surfaces in a breach investigation or a regulator's inquiry.

Samsung learned this the hard way. In a case that became a defining example, engineers pasted proprietary semiconductor source code into ChatGPT to debug production issues. The data was ingested into OpenAI's training pipeline before anyone in security knew it had left the building. Samsung responded with a company-wide ban on external AI tools, a response that solved the immediate problem but created a new one: developers who had been 40 percent more productive with AI assistance were now slower, and the ones who complied resented the policy while the ones who did not simply found workarounds.

The Samsung pattern has repeated across industries throughout 2025 and into 2026. A financial services firm discovers an analyst used Claude to summarize client portfolio data. A healthcare organization finds a clinician pasted patient notes into a consumer AI tool to generate referral letters. A law firm realizes associates have been using AI to draft contract language using privileged client information. In each case, the data exposure happened weeks or months before discovery, the regulatory implications are severe, and the remediation options are limited.

For the technical audience, the attack surface is worth mapping precisely. Shadow AI creates data exfiltration vectors that bypass every traditional DLP control. Endpoint DLP monitors file transfers and email attachments; it does not inspect what an employee types into a browser-based AI interface. Network DLP watches for sensitive data patterns crossing the perimeter; most AI interactions happen over standard HTTPS to well-known domains that are not on any blocklist. CASB solutions can identify which SaaS applications are in use, but they cannot inspect the content of prompts submitted to those applications in real time.

The result is a class of data exposure that is simultaneously pervasive, invisible, and irreversible.

The Numbers Behind the Crisis

The data from multiple 2026 reports converges on the same conclusion: shadow AI is now the largest unmanaged risk in enterprise security.

The Cloud Security Alliance's "AI Gone Wild" report found that one in five organizations has experienced a data breach directly tied to shadow AI usage. IBM's 2025 Cost of Data Breach Report calculates that organizations with extensive shadow AI face breach costs averaging $4.63 million, which is $670,000 more per incident than organizations with low or no shadow AI exposure. That 16 percent premium is not noise. It reflects the additional forensic complexity, regulatory penalties, and remediation costs that shadow AI breaches create.

Reco AI's analysis is even more pointed: 97 percent of AI-related security breaches in their dataset occurred in environments that lacked proper AI access controls. Not insufficient controls. No controls.

The confidence gap is equally alarming. The Purple Book Community's State of AI Risk Management 2026 report found that 92 percent of organizations express confidence in their ability to detect AI-generated code vulnerabilities in production. The same report found that 70.4 percent of organizations report confirmed or suspected vulnerabilities introduced by AI-generated code in their production systems. Nine out of ten believe they have the problem covered. Seven out of ten already have the problem.

BlackFog's Shadow AI Research adds a behavioral dimension that governance frameworks rarely address: 60 percent of employees say that using unsanctioned AI tools is worth the security risk if it makes them faster. More than a third have shared confidential company data with AI systems outside organizational oversight. And the majority say they would use approved AI tools if their employer provided them, a finding that reframes the problem from employee misconduct to institutional failure.

For business leaders, the Writer report delivers the strategic context. Seventy-nine percent of organizations report challenges adopting AI. Forty-eight percent of C-suite executives describe their AI adoption as a "massive disappointment," up from 34 percent in 2025. And 29 percent of employees admit to actively sabotaging their company's AI strategy, a number that rises to 44 percent among Gen Z workers. The enterprise is not just failing to govern AI. Parts of the enterprise are actively resisting the way AI is being governed.

The Compliance Cliff

Shadow AI is not just a security problem. It is a compliance problem that is about to get significantly worse.

The EU AI Act's obligations for general-purpose AI models take effect in August 2026. GDPR Article 28 requires documented data processing agreements with any processor handling personal data, a requirement that is automatically violated every time an employee submits personal data to a consumer AI tool. The SEC has begun requesting AI governance documentation during cybersecurity incident reviews. Italy's data protection authority opened an investigation in late 2025 into enterprise AI deployments lacking employee consent mechanisms.

For regulated industries, the exposure is acute. HIPAA audit controls under 45 CFR 164.312(b) require tracking of all access to protected health information. PCI DSS Requirement 10 mandates logging of access to cardholder data environments. SOC 2 CC7.2 requires monitoring of system components for anomalies. None of these controls are satisfied when a clinician pastes patient data into ChatGPT or an analyst feeds transaction records into an unsanctioned AI tool.

The compliance challenge is structural. You cannot document what you cannot see. You cannot apply controls to tools you do not know exist. And you cannot demonstrate governance to a regulator when 90 percent of your AI usage is invisible to your security team.

Organizations that have navigated this effectively share a common pattern: they established cross-functional ownership of AI governance early, before an incident forced the question. Those that waited, which is to say most of them, are now building governance frameworks under regulatory pressure with incomplete visibility into their actual AI footprint.

What Actually Works

The data across these reports converges on a framework that is less about restricting AI and more about making the right AI tools easier to use than the wrong ones.

Step one is visibility. You cannot govern what you cannot see. Query DNS and proxy logs for connections to known AI service domains. Review OAuth application consents in your enterprise identity systems. Deploy browser extension allowlisting through group policy. And critically, conduct anonymous employee surveys to understand what tools are actually in use. The gap between what IT thinks is happening and what employees report is consistently the most revealing data point in any shadow AI assessment.

Step two is risk classification. Not all shadow AI carries equal risk. A marketer using Grammarly's AI features on a blog draft is categorically different from an engineer pasting production database schemas into a free-tier coding assistant. Implement a tiered framework: critical risk for tools processing regulated data (PCI, PHI, PII), high risk for tools accessing proprietary business data, medium risk for tools handling internal non-sensitive information, and low risk for tools with no sensitive data access. Apply controls proportional to the tier.

Step three is providing sanctioned alternatives. This is where most enterprise AI governance strategies fail. Blocking consumer AI tools without providing enterprise alternatives is the corporate equivalent of banning smartphones and handing out pagers. BlackFog's research found that most employees would use approved AI tools if their employer provided them. The demand is real. The productivity gains are real. The security risk comes not from AI usage itself but from AI usage that occurs outside institutional visibility and control.

Step four is data-layer enforcement. For the technical teams, this means implementing controls at the data layer, not the application layer. AI tools proliferate too fast for application-level blocklists to remain current. Instead, focus on what data can reach AI tools: endpoint controls that detect and block sensitive data patterns in browser input fields, API gateways that inspect outbound requests to AI endpoints, and DLP policies that understand the difference between a file upload and a prompt submission.

Step five is continuous monitoring, not point-in-time audits. Shadow AI usage patterns change weekly as new tools launch and employees discover new capabilities. Annual or quarterly assessments are useless. Implement continuous monitoring of AI tool usage across the enterprise, with automated alerts for new tools, anomalous usage patterns, and sensitive data exposure.

The Leadership Problem

The deepest challenge in shadow AI governance is not technical. It is organizational.

Writer's data shows that 75 percent of executives acknowledge their AI strategy is "more for show" than actual guidance. Fifty-eight percent admit their fellow leaders lack the knowledge to make informed AI decisions. Seventy-three percent of CEOs report stress or anxiety about their AI strategy.

When leadership is uncertain, governance becomes performative. Policies get written but not enforced. Tools get approved but not deployed. Training gets scheduled but not attended. And employees, who are under constant pressure to be more productive, make rational individual decisions that create irrational collective risk.

The organizations that are managing shadow AI effectively share three characteristics. First, they treat AI governance as a business function, not a security function. The CISO provides controls. The CIO provides infrastructure. But the AI governance committee includes business unit leaders who understand the workflows AI is being used in and can make informed decisions about acceptable risk.

Second, they measure adoption of sanctioned tools as aggressively as they monitor for unsanctioned ones. If your approved AI platform has 15 percent adoption six months after deployment, you do not have a shadow AI problem. You have a product problem. Fix the product before you blame the users.

Third, they accept that some shadow AI usage is a signal, not a threat. When an employee discovers a consumer AI tool that solves a real workflow problem, that is market research delivered for free. The correct response is not to ban the tool. It is to understand the need it fills, evaluate whether a sanctioned alternative exists, and if not, bring the tool inside the governance perimeter or build something that does the same job under enterprise controls.

The Window Is Closing

The State of AI Risk Management 2026 report identifies the core problem: organizations are adopting AI faster than they can secure it, creating a growing gap between what they believe they control and what is actually happening in their environments.

That gap has a half-life. Every month without visibility is another month of unmonitored data exposure, untracked compliance violations, and unmanaged security risk. The EU AI Act enforcement deadline is 110 days away. The SEC is already asking for documentation. The breach that started last quarter with a single unsanctioned prompt submission is still unfolding.

The good news is that the playbook exists. Visibility, classification, sanctioned alternatives, data-layer enforcement, continuous monitoring. It is not technically novel. It does not require a massive capital investment. What it requires is organizational will: the willingness to look at what is actually happening with AI across your enterprise, accept that the answer will be uncomfortable, and build governance that works with human behavior rather than against it.

Ninety percent of your enterprise AI usage is invisible. The breach is not coming. For two-thirds of organizations, it has already arrived.


Rajesh Beri is Head of AI Engineering at Zscaler and writes about enterprise AI strategy, security, and the gap between what vendors promise and what production environments deliver.


Want to calculate your own AI ROI? Try our AI ROI Calculator — takes 60 seconds and shows projected savings, payback period, and 3-year ROI.

Continue Reading

Share:

THE DAILY BRIEF

shadow AIenterprise securityAI governancedata breachcomplianceCISOEU AI ActDLPunsanctioned AIBYOAI

90% of AI Usage Is Invisible to IT. The Breach Has Started.

67% of executives report breaches from unapproved AI tools. 269 unsanctioned apps per 1,000 employees. Shadow AI is enterprise security's biggest blind spot.

By Rajesh Beri·April 15, 2026·12 min read

Your employees are using AI right now. Not the AI tools you approved, budgeted for, and deployed with governance frameworks. The other ones. The free ChatGPT tab open behind Slack. The Claude account on a personal email. The AI coding assistant a developer installed without telling anyone. The marketing intern who pasted your Q3 revenue projections into a prompt to generate a board deck.

According to multiple reports released this month, approximately 90 percent of enterprise AI usage is invisible to the organization. Reco AI's 2025 State of Shadow AI Report found that 98 percent of organizations have employees actively using unsanctioned AI applications. Companies with 11 to 50 employees average 269 unsanctioned AI tools per 1,000 workers. And 63 percent of those organizations have no AI governance policy at all.

This is not a future risk. Writer's 2026 Workplace Intelligence report, published this month, found that 67 percent of executives believe their company has already suffered a data breach due to unapproved AI tools. Thirty-five percent of employees admit to entering proprietary information into public AI systems. Fifty-five percent of organizations describe their AI usage as a "chaotic free-for-all."

Shadow AI is not shadow IT with a new name. It is categorically more dangerous, and the enterprise response so far has been categorically inadequate.

Why Shadow AI Is Not Shadow IT

The comparison is tempting but misleading. Shadow IT in the 2010s meant an employee using Dropbox instead of SharePoint, or a team spinning up a Trello board without IT's blessing. The risk was real but bounded: the data stayed in a defined application, the blast radius was limited, and discovery usually happened during an audit.

Shadow AI inverts every one of those assumptions.

When an employee pastes a customer support transcript into ChatGPT to draft a response template, that data leaves the enterprise perimeter permanently. Most free-tier AI tools retain user inputs for model training. The data does not sit in a database you can audit. It is absorbed into a model's weights, where it cannot be retrieved, deleted, or traced. There is no access log. There is no data processing agreement. There is no way to know what happened until it surfaces in a breach investigation or a regulator's inquiry.

Samsung learned this the hard way. In a case that became a defining example, engineers pasted proprietary semiconductor source code into ChatGPT to debug production issues. The data was ingested into OpenAI's training pipeline before anyone in security knew it had left the building. Samsung responded with a company-wide ban on external AI tools. The ban solved the immediate problem but created a new one: developers who had been 40 percent more productive with AI assistance were now slower, employees who complied resented the policy, and those who did not simply found workarounds.

The Samsung pattern has repeated across industries throughout 2025 and into 2026. A financial services firm discovers an analyst used Claude to summarize client portfolio data. A healthcare organization finds a clinician pasted patient notes into a consumer AI tool to generate referral letters. A law firm realizes associates have been using AI to draft contract language using privileged client information. In each case, the data exposure happened weeks or months before discovery, the regulatory implications are severe, and the remediation options are limited.

For the technical audience, the attack surface is worth mapping precisely. Shadow AI creates data exfiltration vectors that bypass every traditional DLP control. Endpoint DLP monitors file transfers and email attachments; it does not inspect what an employee types into a browser-based AI interface. Network DLP watches for sensitive data patterns crossing the perimeter; most AI interactions happen over standard HTTPS to well-known domains that are not on any blocklist. CASB solutions can identify which SaaS applications are in use, but they cannot inspect the content of prompts submitted to those applications in real time.

The result is a class of data exposure that is simultaneously pervasive, invisible, and irreversible.

The Numbers Behind the Crisis

The data from multiple 2026 reports converges on the same conclusion: shadow AI is now the largest unmanaged risk in enterprise security.

The Cloud Security Alliance's "AI Gone Wild" report found that one in five organizations has experienced a data breach directly tied to shadow AI usage. IBM's 2025 Cost of a Data Breach Report calculates that organizations with extensive shadow AI face breach costs averaging $4.63 million, which is $670,000 more per incident than organizations with low or no shadow AI exposure. That 16 percent premium is not noise. It reflects the additional forensic complexity, regulatory penalties, and remediation costs that shadow AI breaches create.

Reco AI's analysis is even more pointed: 97 percent of AI-related security breaches in their dataset occurred in environments that lacked proper AI access controls. Not insufficient controls. No controls.

The confidence gap is equally alarming. The Purple Book Community's State of AI Risk Management 2026 report found that 92 percent of organizations express confidence in their ability to detect AI-generated code vulnerabilities in production. The same report found that 70.4 percent of organizations report confirmed or suspected vulnerabilities introduced by AI-generated code in their production systems. Nine out of ten believe they have the problem covered. Seven out of ten already have the problem.

BlackFog's Shadow AI Research adds a behavioral dimension that governance frameworks rarely address: 60 percent of employees say that using unsanctioned AI tools is worth the security risk if it makes them faster. More than a third have shared confidential company data with AI systems outside organizational oversight. And the majority say they would use approved AI tools if their employer provided them, a finding that reframes the problem from employee misconduct to institutional failure.

For business leaders, the Writer report delivers the strategic context. Seventy-nine percent of organizations report challenges adopting AI. Forty-eight percent of C-suite executives describe their AI adoption as a "massive disappointment," up from 34 percent in 2025. And 29 percent of employees admit to actively sabotaging their company's AI strategy, a number that rises to 44 percent among Gen Z workers. The enterprise is not just failing to govern AI. Parts of the enterprise are actively resisting the way AI is being governed.

The Compliance Cliff

Shadow AI is not just a security problem. It is a compliance problem that is about to get significantly worse.

The EU AI Act's obligations for general-purpose AI models take effect in August 2026. GDPR Article 28 requires documented data processing agreements with any processor handling personal data, a requirement that is automatically violated every time an employee submits personal data to a consumer AI tool. The SEC has begun requesting AI governance documentation during cybersecurity incident reviews. Italy's data protection authority opened an investigation in late 2025 into enterprise AI deployments lacking employee consent mechanisms.

For regulated industries, the exposure is acute. HIPAA audit controls under 45 CFR 164.312(b) require tracking of all access to protected health information. PCI DSS Requirement 10 mandates logging of access to cardholder data environments. SOC 2 CC7.2 requires monitoring of system components for anomalies. None of these controls are satisfied when a clinician pastes patient data into ChatGPT or an analyst feeds transaction records into an unsanctioned AI tool.

The compliance challenge is structural. You cannot document what you cannot see. You cannot apply controls to tools you do not know exist. And you cannot demonstrate governance to a regulator when 90 percent of your AI usage is invisible to your security team.

Organizations that have navigated this effectively share a common pattern: they established cross-functional ownership of AI governance early, before an incident forced the question. Those that waited, which is to say most of them, are now building governance frameworks under regulatory pressure with incomplete visibility into their actual AI footprint.

What Actually Works

The data across these reports converges on a framework that is less about restricting AI and more about making the right AI tools easier to use than the wrong ones.

Step one is visibility. You cannot govern what you cannot see. Query DNS and proxy logs for connections to known AI service domains. Review OAuth application consents in your enterprise identity systems. Deploy browser extension allowlisting through group policy. And critically, conduct anonymous employee surveys to understand what tools are actually in use. The gap between what IT thinks is happening and what employees report is consistently the most revealing data point in any shadow AI assessment.
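The log-scanning part of this step can be sketched in a few lines of Python. Everything below is illustrative: the domain list is a hand-picked stand-in for the maintained feed a real deployment would use, and the CSV column names are assumed, not a standard proxy-log schema.

```python
import csv
from collections import Counter

# Hypothetical set of consumer AI service domains. A production scanner
# would pull a continuously updated feed, not a hard-coded list.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "perplexity.ai",
}

def scan_proxy_log(path):
    """Count requests per (user, AI domain) in a CSV proxy log.

    Assumes columns: timestamp, user, dest_host. Matches both exact
    domains and their subdomains.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower().strip()
            if host in AI_DOMAINS or any(
                host.endswith("." + d) for d in AI_DOMAINS
            ):
                hits[(row["user"], host)] += 1
    return hits
```

The output is deliberately keyed by user and domain: the point of this step is a usage inventory, not enforcement, so the result feeds the risk-classification and survey work rather than a blocklist.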

Step two is risk classification. Not all shadow AI carries equal risk. A marketer using Grammarly's AI features on a blog draft is categorically different from an engineer pasting production database schemas into a free-tier coding assistant. Implement a tiered framework: critical risk for tools processing regulated data (PCI, PHI, PII), high risk for tools accessing proprietary business data, medium risk for tools handling internal non-sensitive information, and low risk for tools with no sensitive data access. Apply controls proportional to the tier.
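A minimal version of that tiering might look like the following sketch. The category names and the mapping are illustrative assumptions, not a standard taxonomy; the one structural choice worth copying is that a tool inherits the highest tier of any data it can reach.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    LOW = 0       # no sensitive data access
    MEDIUM = 1    # internal, non-sensitive information
    HIGH = 2      # proprietary business data
    CRITICAL = 3  # regulated data: PCI, PHI, PII

# Illustrative mapping from data categories to tiers.
DATA_CATEGORY_TIER = {
    "public": RiskTier.LOW,
    "internal": RiskTier.MEDIUM,
    "proprietary": RiskTier.HIGH,
    "pci": RiskTier.CRITICAL,
    "phi": RiskTier.CRITICAL,
    "pii": RiskTier.CRITICAL,
}

def classify_tool(data_categories):
    """A tool's tier is the maximum tier of any data category it touches."""
    return max(
        (DATA_CATEGORY_TIER[c] for c in data_categories),
        default=RiskTier.LOW,
    )
```

For example, a tool that sees both internal documents and customer PII classifies as CRITICAL, because the control regime must match the most sensitive data in scope, not the average.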

Step three is providing sanctioned alternatives. This is where most enterprise AI governance strategies fail. Blocking consumer AI tools without providing enterprise alternatives is the corporate equivalent of banning smartphones and handing out pagers. BlackFog's research found that most employees would use approved AI tools if their employer provided them. The demand is real. The productivity gains are real. The security risk comes not from AI usage itself but from AI usage that occurs outside institutional visibility and control.

Step four is data-layer enforcement. For the technical teams, this means implementing controls at the data layer, not the application layer. AI tools proliferate too fast for application-level blocklists to remain current. Instead, focus on what data can reach AI tools: endpoint controls that detect and block sensitive data patterns in browser input fields, API gateways that inspect outbound requests to AI endpoints, and DLP policies that understand the difference between a file upload and a prompt submission.
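The prompt-inspection idea can be illustrated with a toy detector. The regexes below are deliberately simple assumptions; real DLP engines add checksum validation, contextual scoring, and ML classifiers well beyond pattern matching.

```python
import re

# Illustrative sensitive-data patterns; not production-grade detectors.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def inspect_prompt(text):
    """Return the names of sensitive-data patterns found in an outbound
    prompt. A gateway would block or redact the request when this is
    non-empty, regardless of which AI tool the prompt is headed to.
    """
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))
```

The key design point is that the check keys on the data, not the destination: it works the same for a tool that launched yesterday as for one already on a blocklist.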

Step five is continuous monitoring, not point-in-time audits. Shadow AI usage patterns change weekly as new tools launch and employees discover new capabilities. Annual or quarterly assessments are useless. Implement continuous monitoring of AI tool usage across the enterprise, with automated alerts for new tools, anomalous usage patterns, and sensitive data exposure.
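The core comparison such monitoring performs each period can be sketched as follows. The spike threshold is an illustrative default, not a tuning recommendation, and real systems would baseline per user and per hour rather than per domain.

```python
def monitor(baseline_counts, current_counts, spike_factor=3.0):
    """Compare this period's per-domain AI request counts to a baseline.

    Returns (new_domains, spiking_domains): domains never seen before,
    and known domains whose volume exceeds spike_factor times baseline.
    Both are the automated-alert triggers the step above describes.
    """
    new = sorted(d for d in current_counts if d not in baseline_counts)
    spikes = sorted(
        d for d, n in current_counts.items()
        if d in baseline_counts and n > spike_factor * baseline_counts[d]
    )
    return new, spikes
```

Running this weekly against the visibility inventory turns the point-in-time audit into a standing control: new tools surface within days of first use instead of at the next annual review.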

The Leadership Problem

The deepest challenge in shadow AI governance is not technical. It is organizational.

Writer's data shows that 75 percent of executives acknowledge their AI strategy is "more for show" than actual guidance. Fifty-eight percent admit their fellow leaders lack the knowledge to make informed AI decisions. Seventy-three percent of CEOs report stress or anxiety about their AI strategy.

When leadership is uncertain, governance becomes performative. Policies get written but not enforced. Tools get approved but not deployed. Training gets scheduled but not attended. And employees, who are under constant pressure to be more productive, make rational individual decisions that create irrational collective risk.

The organizations that are managing shadow AI effectively share three characteristics. First, they treat AI governance as a business function, not a security function. The CISO provides controls. The CIO provides infrastructure. But the AI governance committee includes business unit leaders who understand the workflows AI is being used in and can make informed decisions about acceptable risk.

Second, they measure adoption of sanctioned tools as aggressively as they monitor for unsanctioned ones. If your approved AI platform has 15 percent adoption six months after deployment, you do not have a shadow AI problem. You have a product problem. Fix the product before you blame the users.

Third, they accept that some shadow AI usage is a signal, not a threat. When an employee discovers a consumer AI tool that solves a real workflow problem, that is market research delivered for free. The correct response is not to ban the tool. It is to understand the need it fills, evaluate whether a sanctioned alternative exists, and if not, bring the tool inside the governance perimeter or build something that does the same job under enterprise controls.

The Window Is Closing

The State of AI Risk Management 2026 report identifies the core problem: organizations are adopting AI faster than they can secure it, creating a growing gap between what they believe they control and what is actually happening in their environments.

That gap compounds. Every month without visibility is another month of unmonitored data exposure, untracked compliance violations, and unmanaged security risk. The EU AI Act enforcement deadline is 110 days away. The SEC is already asking for documentation. The breach that started last quarter with a single unsanctioned prompt submission is still unfolding.

The good news is that the playbook exists. Visibility, classification, sanctioned alternatives, data-layer enforcement, continuous monitoring. It is not technically novel. It does not require a massive capital investment. What it requires is organizational will: the willingness to look at what is actually happening with AI across your enterprise, accept that the answer will be uncomfortable, and build governance that works with human behavior rather than against it.

Ninety percent of your enterprise AI usage is invisible. The breach is not coming. For two-thirds of organizations, it has already arrived.


Rajesh Beri is Head of AI Engineering at Zscaler and writes about enterprise AI strategy, security, and the gap between what vendors promise and what production environments deliver.




© 2026 Rajesh Beri. All rights reserved.
