The 5 Questions Your CISO Should Be Asking About AI Right Now

Apr 28, 2026 | 5 min read

  • CI Digital
  • Series: The AI Governance Series — Blog 2 of 4

    TL;DR — Key Takeaways

    • Most CISOs know AI is a growing risk. Most organizations are not ready to manage it.
    • 78% of CISOs are already seeing significant impact from AI-powered threats, per Darktrace’s 2025 survey.
    • Only 7% of organizations with AI tools deployed govern them with real-time enforcement.
    • The five questions in this post are a starting framework — not a checklist you complete once.
    • Three governance frameworks every security leader should know: NIST AI RMF, ISO 42001, and the EU AI Act.
    • Classie’s AI posture assessment gives security leaders a clear picture of where they actually stand.

    There’s a specific kind of meeting happening in security teams right now. Someone flags that an employee used an unsanctioned AI tool. The CISO asks what data was exposed. Nobody knows. The CISO asks what AI tools are actually running in the environment. Nobody knows that either.

    That’s not a hypothetical. It’s the situation at most enterprises in 2025 and 2026.

    Proofpoint’s 2025 Voice of the CISO Report, which surveyed 1,600 CISOs across 16 countries, found 76% of security leaders feel at risk of a material cyberattack in the next 12 months — yet 58% admit their organization is not ready to respond. That gap has a name: it’s the difference between knowing AI is a problem and having a plan to manage it.

    If you’re a CISO trying to get ahead of AI risk, the place to start is with the right questions. We covered why a written AI policy isn’t enough in the first post in this series. This post goes one level deeper — into the specific questions you should be asking your team right now, and what the answers actually tell you about your security posture.

    Question 1: Do we have a live inventory of every AI tool running in our environment?

    This is the baseline. Everything else depends on it. You cannot govern what you cannot see.

    Most organizations assume their approved tool list is close to complete. It isn’t. A Gartner survey of 302 cybersecurity leaders found 69% of organizations suspect or have confirmed that employees are using prohibited public AI tools. That means the gap between the approved list and the actual list is already large — and it grows every time a new tool launches.

    A live inventory isn’t a spreadsheet you update quarterly. It’s a system that automatically captures AI tool usage as it happens — including tools employees access through personal accounts, browser extensions, and third-party integrations. LayerX Security’s 2025 Enterprise GenAI Security Report found that organizations have zero visibility into 89% of their AI tool usage despite having security policies on the books.

    If you don’t have a live inventory, you don’t have a governance program. You have a document.

    Question 2: What data are our AI tools actually touching?

    The inventory tells you what tools exist. This question tells you what’s at risk.

    IBM’s 2025 Cost of a Data Breach Report found 13% of organizations reported breaches of AI models or applications in 2025 — and 97% of those organizations lacked proper AI access controls. IBM’s Suja Viswesan put it plainly: “The data shows that a gap between AI adoption and oversight already exists, and threat actors are starting to exploit it.”

    The data exposure problem is more specific than most security leaders realize. LayerX found that 77% of employees paste data into AI tools, with 82% of those pastes coming from personal, unmanaged accounts. Of uploaded files, 40% contain PII or payment card data, and 22% of pasted text includes sensitive regulatory information.

    That’s not edge-case behavior. That’s a daily workflow pattern across your organization.

    The question to ask your team: can you trace which AI tools accessed which data, when, and what the model did with it? If the answer is no, that’s your most important gap.

    “The cost of inaction isn’t just financial — it’s the loss of trust, transparency and control.”

    — Suja Viswesan, VP Security, IBM

    Question 3: Are we prepared for AI-specific attacks, not just AI-assisted ones?

    Most security teams are thinking about AI as a tool attackers use. Fewer are thinking about AI systems themselves as the attack surface.

    Darktrace’s State of AI Cybersecurity Report 2025 surveyed 1,500 security professionals and found 78% of CISOs are already seeing significant impact from AI-powered cyber threats — up 5 points from the year before. At the same time, only 42% of those professionals say they fully understand the AI tools in their own security stack.

    There’s a category of threat that gets less attention than it deserves: attacks on AI systems themselves. Prompt injection attacks trick AI tools into leaking data or taking unauthorized actions. Training data poisoning affects model behavior at scale. Adversa AI’s 2025 Security Incidents Report found 35% of all real-world AI security incidents in 2025 were caused by simple prompt manipulation — some resulting in losses exceeding $100,000.

    The question for your team: have you modeled what a prompt injection attack looks like against your AI environment, and do you have detection for it?

    Ready to map your actual AI risk exposure? CI runs the assessment.

    Book a discovery call with CI

    Question 4: Which AI governance framework applies to our organization?

    There are now three major frameworks security leaders need to understand. They are not interchangeable, and they do not all apply equally depending on your industry and geography.

    The NIST AI Risk Management Framework (AI RMF) is a voluntary U.S. framework built around four functions: Govern, Map, Measure, and Manage. It is not certifiable, but it’s the baseline that most U.S. regulators and auditors reference. It pairs well with NIST CSF and is the most practical starting point for organizations without an existing AI governance structure.

    ISO/IEC 42001:2023 is the first internationally certifiable AI management standard. It uses the same Plan-Do-Check-Act structure as ISO 27001, which means organizations already certified under information security frameworks can layer it in without rebuilding from scratch. It covers 38 controls across the full AI lifecycle.

    The EU AI Act is the one with hard deadlines. The ban on prohibited AI systems took effect February 2025. General-purpose AI obligations began August 2025. High-risk AI system requirements are fully in force August 2026. If your organization operates in Europe or handles data from EU residents, this is not optional.

    The question to ask: which of these frameworks applies to your organization, and do you have a gap assessment against it?

    Question 5: What happens when something goes wrong?

    This is the question most security teams skip until they need it.

    The Vorlon 2026 CISO Report surveyed 500 U.S. CISOs and found 99.4% experienced at least one SaaS or AI ecosystem security incident in 2025. That is not a typo. Only 0.8% of respondents felt adequately protected. And yet most organizations still lack a documented AI incident response plan.

    A real incident response plan for AI looks different from a standard one. It needs to answer: How do you stop an AI agent mid-task if it starts behaving unexpectedly? How do you reconstruct what a model accessed during an incident? Who owns the response when the tool involved is a vendor model running on shared infrastructure?

    Forrester’s Jeff Pollard put the agentic AI failure risk plainly:

    When something goes wrong with agentic AI, failures cascade through the system. That means that the introduction of one error can propagate through the entire system, corrupting it.

    If your incident response playbook doesn’t include AI-specific scenarios — shadow AI data exposure, prompt injection, agent misbehavior — you have a gap worth closing before you need it.

    AI adoption is happening quickly, often outside of security’s line of sight. Novelty should never bypass scrutiny.

    — Shyama Rose, CISO, Affirm

    So where do most organizations actually stand?

    The honest answer is: behind where they think they are.

    A 2026 industry report from 2toLead and Cybersecurity Insiders surveying 1,253 security professionals found 73% of organizations have deployed AI tools, but only 7% govern them with real-time policy enforcement. 68% describe their AI governance posture as reactive or still developing. 94% report gaps in their AI activity visibility.

    Those numbers reflect the same pattern we see when we talk to security leaders: the deployment is moving. The governance is not.

    The five questions above won’t close that gap on their own. But they’ll tell you exactly where your gaps are — which is the only honest starting point.

    If you want to know where your organization stands against each of them, CI Digital partners with Classie to run a structured AI posture assessment. It maps your environment, identifies your blind spots, and gives you a prioritized path forward.

    Ready to see where your organization actually stands? CI can run the assessment.

    Talk to CI about an AI posture assessment

    Frequently Asked Questions

    What are the biggest AI security risks CISOs face right now?

    The top three are shadow AI (unsanctioned tools employees are already using), AI-specific attacks like prompt injection, and data exposure through overpermissioned AI systems. Darktrace found 78% of CISOs are already seeing significant impact from AI-powered threats.

    What is the NIST AI Risk Management Framework?

    The NIST AI RMF is a voluntary U.S. framework that helps organizations identify, assess, and manage AI-related risks. It is structured around four functions: Govern, Map, Measure, and Manage. It’s the most widely referenced baseline for AI governance in the United States and maps to other standards including ISO 42001 and the EU AI Act.

    What is an AI risk assessment for enterprises and how do I start one?

    An AI risk assessment identifies every AI tool in your environment, maps what data each tool can access, evaluates behavior against your security policies, and flags gaps in governance or controls. Starting one requires a live inventory of AI usage — which most organizations don’t have yet. A structured posture assessment with a partner like CI Digital is the fastest way to get there.

    How does the EU AI Act affect my security program?

    If your organization operates in Europe or processes data from EU residents, the EU AI Act sets binding requirements for high-risk AI systems, with full enforcement beginning August 2026. It mandates risk management systems, human oversight, transparency, and technical documentation. Non-compliance carries fines of up to €30 million or 6% of global annual turnover.

    What should a CISO do first to address AI governance?

    Start with visibility. Before you can govern AI, you need to know what AI tools are running in your environment — including ones IT didn’t approve. From there, map data access, run a gap assessment against the most relevant governance framework for your industry, and build incident response plans that cover AI-specific failure modes.

    What is an AI governance checklist for security leaders?

    A practical checklist includes: live inventory of AI tools, data access mapping, policy enforcement controls, incident response planning, framework alignment (NIST AI RMF, ISO 42001, or EU AI Act), and regular audits for shadow AI activity. IBM found that only 34% of organizations with AI governance policies perform regular audits — which means having the policy and actually running it are two different things.

    This post is part of The AI Governance Series by CI Digital. Read the full series

    Prev. Blog: Your Company Has an Ai Policy. Here's Why That's Not Enough.

    Next Blog: (Coming soon)

    Author
    Headshot of Craig Taylor, Practice Lead at CI Digital
    Craig Taylor

    Share this article

    Subject Matter Expert
    Craig Taylor

    Practice Lead, CI Digital

    Speak With Our Team

    Share this article

    Let’s Work Together

    [email protected]