Shadow AI · IT Security · AI Governance

Shadow AI: The New Security Risk CISOs Can't Ignore

Over 70% of employees use unapproved AI tools at work. Learn what shadow AI is, why security tools miss it, and how to build a governance framework.

Coax Team · March 10, 2026 · 10 min read

What Is Shadow AI?

Shadow AI is the use of artificial intelligence tools and services within an organization without the knowledge, approval, or oversight of IT and security teams. It is the newest evolution of shadow IT — and arguably the most dangerous one.

Think of it this way: shadow IT gave employees the ability to store company data in unauthorized apps. Shadow AI gives employees the ability to feed company data into machine learning models that may retain, learn from, or expose that data in ways nobody anticipated.

Examples of shadow AI include:

  • Employees pasting confidential documents into ChatGPT, Claude, or Gemini to summarize or rewrite them
  • Marketing teams using AI image generators trained on proprietary brand assets
  • Developers using AI code assistants that send source code to external servers
  • Sales teams uploading customer lists to AI-powered CRM tools without IT approval
  • Finance teams using AI analytics tools to process sensitive financial data

The scale of the problem is staggering. Over 70% of knowledge workers now use generative AI tools at work, and the majority do so without any formal approval or governance in place.

Why Shadow AI Is Different from Shadow IT

Shadow IT has been a challenge for over a decade. CISOs have built programs and tools to discover unauthorized SaaS applications, assess their risk, and bring them under governance. So why can't those same programs handle shadow AI?

Data Flows in One Direction

With traditional shadow IT, data might be stored in an unauthorized app — but it stays there. With AI tools, data is actively processed, potentially stored in training datasets, and could resurface in responses to other users. Once company data has been absorbed into an AI model's training set, it is effectively impossible to retrieve or delete.

The Interface Is Invisible

Many AI tools are embedded within sanctioned applications. An employee might use an AI feature inside Google Docs, Notion, or Slack — tools IT already approved — without anyone realizing that data is being sent to a third-party AI model. Traditional SaaS discovery tools see the sanctioned app, not the AI layer underneath.

Speed of Adoption Is Unprecedented

SaaS adoption was fast. AI adoption is faster. New AI tools launch daily, many with free tiers that require nothing more than an email address. Employees adopt them in minutes, not days. By the time IT discovers a new AI tool, dozens of employees may already be using it.

Regulatory Uncertainty Amplifies Risk

GDPR, the EU AI Act, and emerging national regulations create a complex compliance landscape for AI usage. Unlike established SaaS categories, there are few clear precedents for how AI data processing is treated under European law. Getting it wrong carries significant financial and reputational risk.

The 5 Biggest Shadow AI Risks for European Companies

1. Data Leakage into AI Training Sets

When employees paste sensitive data into AI tools, that data may be used to train future model versions. This means your trade secrets, customer data, or strategic plans could influence the outputs other users receive.

The European dimension: Under GDPR, personal data processed by AI tools requires a legal basis, a documented purpose, and often a Data Protection Impact Assessment (DPIA). If an employee pastes customer personal data into an AI tool without these safeguards, your organization is in violation — even if the employee didn't realize it.

Most AI providers' terms of service for free tiers explicitly state that user inputs may be used for model training. Enterprise tiers often provide opt-outs, but employees on free accounts have no such protection.

2. EU AI Act Compliance Violations

The EU AI Act, which began phased enforcement in 2025, works alongside NIS2 to create a comprehensive regulatory framework. It classifies AI systems by risk level and imposes obligations on both providers and deployers. Organizations that use AI tools — even those built by third parties — are considered "deployers" and have legal obligations including:

  • Ensuring AI systems are used in accordance with their intended purpose
  • Maintaining human oversight over AI-assisted decisions
  • Conducting conformity assessments for high-risk AI systems
  • Maintaining documentation and logs of AI system usage

If your employees are using AI tools that IT doesn't know about, you cannot meet any of these obligations. Shadow AI makes EU AI Act compliance impossible by definition.

3. Intellectual Property Exposure

Code, designs, business strategies, and other intellectual property pasted into AI tools may lose trade secret protection. In most jurisdictions, a trade secret must be protected by "reasonable measures" to maintain its secrecy. Allowing employees to freely paste proprietary information into third-party AI models arguably fails this test.

Incidents have already been reported in which proprietary information surfaced in AI-generated outputs seen by users outside the organization. The legal landscape is still evolving, but the risk is real and immediate.

4. Decision-Making Without Oversight

AI tools are increasingly used to make or inform business decisions: hiring recommendations, financial forecasts, customer segmentation, risk assessments. When these decisions are made using unauthorized AI tools, there is:

  • No audit trail of how the decision was reached
  • No validation of model accuracy or bias
  • No human oversight as required by GDPR Article 22 (automated decision-making)
  • No ability to explain the decision to affected individuals

For European companies, GDPR grants individuals the right not to be subject to solely automated decisions with legal or significant effects. Shadow AI makes compliance with this requirement unverifiable.

5. Supply Chain and Third-Party AI Risk

Many SaaS applications are quietly embedding AI features powered by third-party models. Your approved project management tool might start using OpenAI's API for "smart suggestions." Your CRM might integrate an AI assistant that sends customer data to an external model.

These embedded AI features often launch without notification to customers. Your IT team approved the original SaaS application — but never approved the AI subprocessor that was added six months later.

Why Traditional Security Tools Miss Shadow AI

Most organizations rely on a combination of CASB (Cloud Access Security Broker), DLP (Data Loss Prevention), and SaaS management platforms to control shadow IT. These tools have significant blind spots when it comes to AI:

Browser-Based AI Is Invisible to Network Tools

Most AI tools run in the browser over standard HTTPS connections. Network-level monitoring tools see a connection to chat.openai.com but cannot inspect the encrypted content. They don't know whether an employee is asking for recipe suggestions or pasting your entire customer database.

Embedded AI Bypasses App-Level Controls

When AI features are embedded in sanctioned applications, app-level monitoring tools see authorized usage of an approved tool. They don't distinguish between a user typing a message in Slack and an AI bot processing that message through a third-party model.

Free Tiers Don't Appear in Financial Data

Expense-based SaaS discovery tools find applications by tracking payments. Most AI tools offer generous free tiers — employees can use ChatGPT, Claude, Perplexity, and dozens of other tools without ever triggering a financial signal.

API-Based Discovery Misses Ad-Hoc Usage

API-based SaaS management tools discover applications by connecting to sanctioned platforms and pulling usage data. They work well for managed applications but cannot detect tools that employees access directly through a browser without any integration.

Building a Shadow AI Governance Framework

A practical governance framework for shadow AI needs to balance security with usability. Blocking AI entirely is counterproductive — employees will find workarounds, and your organization will fall behind competitors who embrace AI effectively.

Tier 1: Establish an AI Acceptable Use Policy

Start with a clear, concise policy that employees can actually follow:

  • Approved tools: List specific AI tools that are sanctioned for use, along with approved use cases
  • Prohibited actions: Clearly define what data cannot be entered into AI tools (customer PII, financial data, source code, trade secrets)
  • Reporting requirements: Create a simple process for employees to request approval for new AI tools
  • Consequences: Define what happens when the policy is violated

Keep the policy to one page. A 30-page AI governance document will not be read.
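A one-page policy is also easier to enforce when its core rules are machine-readable. The sketch below shows one way to encode the approved-tools and prohibited-data rules; all tool names, data categories, and the contact address are illustrative placeholders, not recommendations.

```python
# A minimal, machine-readable sketch of an AI acceptable use policy.
# Every name below is a hypothetical placeholder.

AI_POLICY = {
    "approved_tools": {
        "chatgpt-enterprise": {"use_cases": ["drafting", "summarization"]},
        "internal-llm": {"use_cases": ["code-review", "analytics"]},
    },
    "prohibited_data": [
        "customer_pii",
        "financial_data",
        "source_code",
        "trade_secrets",
    ],
    "approval_contact": "ai-governance@example.com",  # hypothetical address
}

def is_request_allowed(tool: str, data_category: str) -> bool:
    """Allow a request only if the tool is approved and the data is not prohibited."""
    return (
        tool in AI_POLICY["approved_tools"]
        and data_category not in AI_POLICY["prohibited_data"]
    )
```

Encoding the policy this way means the same rules that employees read can feed the technical controls described below, instead of living only in a PDF.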

Tier 2: Deploy Enterprise AI Platforms

Provide employees with sanctioned AI tools that meet your security and compliance requirements:

  • Enterprise versions of major AI platforms (with data retention controls, no training on your data)
  • AI tools integrated into your existing approved stack
  • Internal AI services with appropriate data handling controls

When employees have approved tools that work well, the incentive to use unauthorized alternatives drops significantly.

Tier 3: Implement Technical Controls

Layer technical controls on top of policy:

  • Email and IdP scanning: Detect signups for AI services using email metadata analysis
  • OAuth monitoring: Identify AI applications that employees have authorized with corporate credentials
  • Browser extension policies: Block or control AI-related browser extensions on managed devices
  • DLP integration: Flag sensitive data patterns in clipboard or paste operations where supported
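The DLP integration in particular can start small. The sketch below flags sensitive-data patterns in text (for example, content about to be pasted into an AI tool); the regexes are simplified illustrations, and real DLP rules would be far more extensive and tuned to your organization's data.

```python
import re

# Illustrative patterns only — production DLP rules need far broader coverage.
SENSITIVE_PATTERNS = {
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of all sensitive-data patterns matched in `text`."""
    return [
        name for name, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]
```

A matched pattern does not have to mean a hard block: flagging with a warning ("this looks like customer data — is this an approved tool?") is often enough to change behavior without driving usage underground.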

Tier 4: Continuous Monitoring and Response

AI adoption is too fast for quarterly reviews. Implement continuous monitoring:

  • Real-time alerts when new AI tools are detected in your environment
  • Monthly reviews of AI usage patterns and trends
  • Quarterly risk assessments of approved AI tools (including changes to their terms of service and data processing practices)
  • Annual policy review and update
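At its core, the real-time alerting step is a diff between what discovery finds and what governance has approved. A minimal sketch, with hypothetical tool names:

```python
def detect_unapproved(discovered: set[str], sanctioned: set[str]) -> set[str]:
    """Return AI tools seen in the environment that are not sanctioned."""
    return discovered - sanctioned

# Hypothetical example: discovery has found three tools in use,
# two of which were never approved.
sanctioned = {"chatgpt-enterprise", "internal-llm"}
discovered = {"chatgpt-enterprise", "midjourney", "jasper"}
alerts = detect_unapproved(discovered, sanctioned)
```

In practice the `discovered` set would be refreshed continuously from email and IdP signals, and each alert would feed the fast-track review process rather than an automatic block.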

Detecting Shadow AI with SaaS Management Platforms

SaaS management platforms that use email metadata analysis are uniquely positioned to detect shadow AI. Here's why:

When an employee signs up for an AI tool — whether it's ChatGPT, Midjourney, Jasper, or any of the hundreds of AI SaaS applications — they receive confirmation emails, usage notifications, and billing receipts. Email-based discovery catches these signups automatically, even for free tiers that leave no financial footprint.

Combined with identity provider (IdP) integration, a SaaS management platform can identify:

  • Every AI application an employee has authorized with corporate credentials
  • The OAuth permissions granted to each AI application
  • New AI tool signups as they happen, not months later during an audit

This approach covers the critical gap that network-based and API-based tools miss: it detects all AI SaaS signups, including free tiers, personal browser usage with corporate email, and embedded AI features that use OAuth authentication.
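The core of email-based discovery can be sketched in a few lines: classify the sender domain of each message against a catalog of AI vendors. The domain list below is a tiny illustrative sample; a real platform would maintain a continuously updated catalog of thousands of AI SaaS providers.

```python
# Hypothetical sample of AI vendor domains — illustrative only.
AI_VENDOR_DOMAINS = {"openai.com", "anthropic.com", "midjourney.com", "jasper.ai"}

def ai_signups(email_senders: list[str]) -> set[str]:
    """Return AI vendor domains seen among email sender addresses.

    A sender like 'noreply@openai.com' typically indicates a signup
    confirmation or usage notification from that vendor.
    """
    found = set()
    for sender in email_senders:
        domain = sender.rsplit("@", 1)[-1].lower()
        if domain in AI_VENDOR_DOMAINS:
            found.add(domain)
    return found
```

Because the signal is the confirmation email itself, this works identically for free tiers and paid plans — no invoice or network traffic required.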

The 10-Step Shadow AI Checklist for CISOs

Use this checklist to assess and improve your organization's shadow AI posture:

Assessment

  1. Audit current AI usage: Use email and IdP scanning to discover all AI tools in use across the organization
  2. Map data exposure: For each discovered AI tool, determine what types of company data have been or could be entered
  3. Review OAuth grants: Identify all AI applications with OAuth access to corporate systems and assess their permission scopes
  4. Check vendor terms: Review the data processing terms of each discovered AI tool, particularly regarding training data and data retention

Policy and Governance

  5. Publish an AI acceptable use policy: Create a clear, one-page policy covering approved tools, prohibited data types, and reporting requirements
  6. Establish an AI review process: Create a fast-track approval process for new AI tools (aim for 48-hour turnaround)
  7. Appoint an AI governance owner: Designate a person or team responsible for AI tool evaluation, policy enforcement, and incident response

Technical Controls

  8. Deploy enterprise AI platforms: Provide sanctioned AI tools with appropriate security controls so employees don't need to seek unauthorized alternatives
  9. Enable continuous monitoring: Implement automated detection of new AI tool signups and OAuth grants
  10. Integrate with incident response: Add shadow AI scenarios to your incident response playbook, including procedures for data exposure through AI tools

The Bottom Line

Shadow AI is not a future risk — it's a current reality. Over 70% of your employees are likely already using AI tools that IT doesn't know about. Every day without governance is another day of uncontrolled data exposure, compliance violations, and intellectual property risk.

The approach that works is the same one that works for shadow IT: start with visibility. You can't govern what you can't see. Use automated discovery to build a complete picture of AI tool usage, then layer policy and technical controls on top. And be aware that every AI tool with OAuth access to your corporate data amplifies the risk.

The organizations that will thrive are not those that block AI entirely, but those that embrace it within a governance framework that protects the business while enabling innovation.


Want to discover which AI tools your employees are using? Book a demo and see your full AI and SaaS landscape in 15 minutes.
