Shadow AI Is Already Inside Your Organization. Here’s What We’ve Seen.

May 05, 2026 | 5 min read

TL;DR — Key Takeaways

    1. Shadow AI means employees using AI tools that IT never approved — and it’s already happening at scale inside most enterprises.
2. 78% of AI users bring their own AI tools to work, per Microsoft and LinkedIn’s 2024 Work Trend Index.
    3. 77% of employees paste data into AI tools, and 40% of uploaded files contain PII or payment card data, per LayerX Security’s 2025 report.
    4. Samsung, and thousands of other organizations, have already had data exposed through unsanctioned AI use.
5. Organizations have zero visibility into 89% of their AI tool usage — including what data is being shared.
    6. Detection starts with monitoring, not blocking. Blocking just moves shadow AI somewhere you can’t see it at all.

The call that nobody wants to get goes something like this: a manager notices that a client report an employee just submitted contains phrasing that sounds nothing like them. Someone pulls the thread. It turns out the employee has been using a free ChatGPT account — logged in personally, no corporate controls — and pasting client data into it for weeks.

    The data is gone. There’s no audit trail. And the organization had a signed AI policy the whole time.

This isn’t a hypothetical. Variations of this scenario are playing out across enterprises right now. We covered why a written policy isn’t enough to stop it in our pillar post. This post gets into the specifics — what shadow AI in the enterprise actually looks like, how prevalent it is, and what the real-world damage has been when organizations found out too late.

    What is shadow AI in the enterprise?

    Shadow AI is any AI tool being used inside your organization without IT or security approval. It’s the AI equivalent of shadow IT — the unsanctioned apps and services that employees adopt on their own because they’re faster, easier, or more capable than the approved options.

    The difference is scale and risk. Shadow IT usually meant a team using Dropbox instead of the approved file share. Shadow AI means employees feeding business data, customer records, internal documents, and source code into external AI models that your organization has no visibility into and no contractual relationship with.

    A Gartner survey of 302 cybersecurity leaders found 69% of organizations suspect or have confirmed that employees are using prohibited AI tools. Gartner’s Arun Chandrasekaran put it bluntly: “CIOs should define clear enterprise-wide policies for AI tool usage and conduct regular audits for shadow AI activity.” The problem is that most organizations can’t audit what they can’t see — and right now, most can’t see much.

    How common is shadow AI really?

    The numbers from multiple independent surveys are strikingly consistent, and they’re high.

    Microsoft and LinkedIn’s 2024 Work Trend Index found 78% of AI users bring their own AI tools to work. That’s not rogue behavior — it’s standard practice. WalkMe’s survey of 1,000 U.S. adults found 78% of employees admit to using AI tools not approved by their employer. BlackFog research put the number at 49% using unsanctioned tools, with 51% connecting or integrating those tools with other work systems without IT approval.

    Perhaps the most telling number: Teramind found that 68% of workers using ChatGPT at work intentionally hide it from their employers. They know it’s against policy. They do it anyway. Because the alternative is slower, and nobody is checking.

    Dan Adika, CEO of WalkMe, summarized the situation plainly: “Beyond the productivity paradox, we’re facing a full-blown governance crisis.”

    What data is actually being shared with these tools?

    This is where shadow AI stops being an abstract policy problem and becomes a concrete security incident waiting to happen.

    LayerX Security’s 2025 Enterprise AI & SaaS Data Security Report, based on real enterprise browser telemetry, found 77% of employees paste data into AI tools. Of that activity, 82% comes from personal, unmanaged accounts — meaning your DLP policies, SSO controls, and corporate security tools are completely blind to it. Of files uploaded to AI tools, 40% contain PII or payment card data. On average, employees make 14 data pastes per day into non-corporate AI accounts, at least 3 of which contain sensitive information.

    BlackFog’s January 2026 research broke down exactly what’s being shared: 33% shared internal research or data sets, 27% shared employee data including payroll and performance information, and 23% shared financial statements or sales data.

    AI has become the number one data exfiltration channel in the enterprise. And most security teams aren’t watching it.

    Generative AI is now embedded across everyday workflows, often beyond traditional IT oversight.

    — Oliver Simonnet, Lead Cybersecurity Researcher, CultureAI

    What does shadow AI look like when it actually causes damage?

    The Samsung incident is the one that gets cited most often, and for good reason. In April 2023, three separate employees at Samsung’s semiconductor division fed proprietary data into ChatGPT within 20 days of each other. One pasted source code from an internal measurement database to debug it. Another uploaded code related to chip manufacturing defect detection. A third recorded a confidential meeting and asked ChatGPT to summarize the minutes. Samsung banned all generative AI tools across the company shortly after and began building an internal alternative.

    The employees weren’t malicious. They were trying to do their jobs faster. That’s what makes shadow AI so hard to address with policy alone — the intent is almost always benign.

    More recently, security researchers discovered that over 225,000 ChatGPT credentials had been harvested by LummaC2 infostealer malware and were being sold on dark web markets. Every one of those accounts contained a complete chat history — including whatever business data the account holder had previously shared with the model.

    CultureAI’s March 2026 research surveyed 300 senior technology and security leaders and found 1 in 5 organizations acknowledge their AI policies are not actively enforced. More than a third lack dedicated AI detection capabilities. And among organizations that experienced a near-miss involving AI data exposure, 17% changed nothing afterward.

    Think your AI policies are being followed? CI Digital can show you what’s actually happening.

    Book a discovery call with CI Digital

    Why can’t most organizations see their own shadow AI?

    Because the tools were built to catch different problems.

    Traditional DLP tools look for known patterns — a credit card number, a Social Security number, a specific file type. They don’t understand context or intent. An employee pasting a client proposal into ChatGPT doesn’t trigger a credit card alert. It doesn’t look like an exfiltration event. It looks like web traffic to a popular site.
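To make that gap concrete, here is a minimal sketch in Python of what pattern-based scanning amounts to. The patterns, names, and sample strings are illustrative assumptions, not any vendor’s actual detection logic, but they show why a pasted client proposal raises no alert while a card number does.

        import re

        # Illustrative only: a toy pattern-based check in the spirit of traditional DLP.
        # It flags the formats it was told about and nothing else.
        KNOWN_PATTERNS = {
            "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
            "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        }

        def classify(text: str) -> list[str]:
            """Return the names of any known sensitive patterns found in the text."""
            return [name for name, pattern in KNOWN_PATTERNS.items() if pattern.search(text)]

        # A pasted client proposal matches no known pattern, so nothing fires.
        proposal = "Q3 pricing proposal for Acme Corp: enterprise tier, 12% discount, renewal terms..."
        print(classify(proposal))                     # [] -> looks like ordinary web traffic
        print(classify("Card: 4111 1111 1111 1111"))  # ['credit_card'] -> what DLP was built to catch

To a check like this, an employee pasting that proposal into a chat window is indistinguishable from any other form submission.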

LayerX found that organizations have zero visibility into 89% of their AI tool usage despite having security policies on the books. 71% of connections to AI tools are made through personal, non-corporate accounts. 58% of corporate account connections bypass SSO entirely. And 20% of enterprise users have AI browser extensions installed — which can sidestep secure web gateways altogether.

    CultureAI’s research found a particularly sharp disconnect: 72% of organizations believe they have full visibility into AI usage, but 65% still detect unauthorized shadow AI when they actually look. That’s not a small gap. That’s most of the picture missing.

    This research is a stark indication not only of how widely unapproved AI tools are being used, but the level of risk tolerance amongst employees and senior leaders.

    — Dr. Darren Williams, CEO, BlackFog

    How do you actually detect shadow AI in your organization?

    Detection requires visibility at the right layer. Blocking AI domains doesn’t work — employees route around blocks using personal hotspots, home networks, or VPNs. And blocking legitimate AI tools entirely pushes employees toward less secure workarounds that are harder to monitor, not easier.

    What works is endpoint and browser-level monitoring that captures AI activity in context. That means seeing what tool was used, what data was submitted, through which account type, from which device, and whether the activity fell within policy boundaries.
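As a rough illustration of what “AI activity in context” could look like as data, here is a hypothetical event record sketched in Python. The field names and values are assumptions for the sake of the example, not the schema of Classie or any other product.

        from dataclasses import dataclass
        from datetime import datetime, timezone

        # Hypothetical event shape for illustration; field names are assumptions,
        # not any monitoring product's actual schema.
        @dataclass
        class AIActivityEvent:
            timestamp: datetime
            tool: str                 # e.g. "chatgpt", "claude", "gemini"
            action: str               # "paste", "file_upload", "extension_call"
            account_type: str         # "corporate_sso", "corporate_no_sso", "personal"
            device_id: str
            data_classification: str  # e.g. "public", "internal", "pii", "source_code"
            within_policy: bool

        # One paste of PII into a personal account, captured with enough context
        # to tell risky behavior apart from sanctioned use.
        event = AIActivityEvent(
            timestamp=datetime.now(timezone.utc),
            tool="chatgpt",
            action="paste",
            account_type="personal",
            device_id="LAPTOP-4821",
            data_classification="pii",
            within_policy=False,
        )
        print(event)

A record with this kind of context is what lets a security team answer “who sent what, where, and under which account” instead of just “someone visited an AI site.”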

    Classie’s shadow AI detection capabilities are built for exactly this. The platform builds a live inventory of every AI tool active in your environment — including tools your security team didn’t know about — and gives you the context to distinguish legitimate business use from risky behavior before it becomes an incident. Christine Lee, VP Research at Gartner, made the point directly at the 2025 Security and Risk Management Summit: “Having a good AI discovery process is the foundation of a versatile AI cybersecurity program.”

    If you’re a CISO or IT director trying to understand the specific security questions behind shadow AI risk, the previous post in this series covers the five questions your security team should already be asking. And if you’ve already deployed Microsoft Copilot, our next post covers the governance gap that follows almost every rollout.

    Frequently Asked Questions

    What is shadow AI in the enterprise?

    Shadow AI refers to AI tools employees use inside an organization without IT or security approval. This includes free-tier accounts on platforms like ChatGPT, Claude, or Gemini accessed through personal accounts, as well as AI browser extensions and third-party tools connected to work systems without IT oversight.

    How do I detect shadow AI in my organization?

    Detection requires monitoring at the endpoint and browser level — not just reviewing your approved tool list. Platforms designed for AI supervision can build a live inventory of all AI activity, including unsanctioned tools, and identify which accounts are corporate versus personal. Starting with an AI posture assessment gives you a baseline of what’s actually running in your environment.

    What data are employees sharing with unauthorized AI tools?

    According to LayerX Security’s 2025 research, 40% of files uploaded to AI tools contain PII or payment card data, and 22% of pasted text includes sensitive regulatory information. BlackFog’s research found employees are sharing internal research data sets, employee records including payroll information, and financial statements.

    What real-world incidents have happened because of shadow AI?

    The most widely reported was Samsung in 2023, where engineers pasted proprietary source code and confidential meeting transcripts into ChatGPT across three separate incidents in 20 days. More broadly, over 225,000 ChatGPT credentials containing complete chat histories were found being sold on dark web markets after being harvested by infostealer malware.

    Why doesn’t blocking AI tools solve the shadow AI problem?

    Blocking forces employees to access AI tools through personal networks and devices that sit entirely outside your security perimeter. It doesn’t reduce usage — it reduces visibility. Organizations that block AI tools without providing approved alternatives often end up with less visibility into AI activity, not more.

    How many organizations actually have shadow AI running right now?

    According to Gartner, 69% of organizations have confirmed or suspect unauthorized AI tool usage. WalkMe’s 2025 survey found 78% of employees admit to using unapproved AI tools. And LayerX’s research found that 89% of enterprise AI usage is invisible to security teams despite existing security policies.

    This post is part of The AI Governance Series

Author
Craig Taylor, Practice Lead, CI Digital