5 Questions Every CISO Should Ask Before Deploying AI Agents
AI agents introduce attack surfaces that traditional security tools were not designed for. These five questions help CISOs evaluate whether their organisation is ready to deploy AI agents securely.
1. What is the complete attack surface of our AI agents?
AI agents combine multiple attack vectors that traditional applications do not. They accept natural language inputs vulnerable to prompt injection. They call external tools that can be poisoned or compromised. They hold credentials that can be stolen. And they generate outputs that can leak sensitive data.
Before deploying any agent, CISOs need a complete threat model that covers every layer: input (direct and indirect prompt injection), tool access (tool poisoning, confused deputy attacks), identity (credential theft, privilege escalation), output (data leakage, harmful content), and supply chain (compromised models, poisoned vector databases, malicious MCP servers).
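The five layers above lend themselves to a checklist. The sketch below is illustrative only — the layer and vector names are taken directly from the list above, and the mitigation-tracking helper is a hypothetical structure, not any particular tool's API:

```python
# Hypothetical threat-model checklist for an AI agent deployment.
# Layer and vector names mirror the five layers described in the text.
AGENT_THREAT_MODEL = {
    "input": ["direct prompt injection", "indirect prompt injection"],
    "tool_access": ["tool poisoning", "confused deputy attacks"],
    "identity": ["credential theft", "privilege escalation"],
    "output": ["data leakage", "harmful content"],
    "supply_chain": ["compromised models", "poisoned vector databases",
                     "malicious MCP servers"],
}

def uncovered_layers(mitigations: dict) -> list:
    """Return layers where at least one vector has no documented mitigation.

    `mitigations` maps a vector name to a description of its control.
    """
    return [
        layer for layer, vectors in AGENT_THREAT_MODEL.items()
        if any(v not in mitigations for v in vectors)
    ]
```

A review is complete only when `uncovered_layers` comes back empty — a simple way to make "complete threat model" a testable claim rather than a slogan.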
Most security teams assess agents the same way they assess APIs — checking authentication and authorisation. But agents make autonomous decisions about which tools to call and what data to access. A comprehensive threat model must account for this autonomy.
2. Can we enforce security policies at runtime — not just at deployment?
Pre-deployment security reviews are necessary but insufficient. Agents make decisions continuously at runtime, often in ways their developers did not predict. A security review that approved an agent last month cannot prevent that agent from calling a restricted API today because a user asked it to.
CISOs should ask whether their organisation can enforce security policies at the moment an agent acts. This means runtime enforcement: intercepting every tool call, data access, and external interaction, evaluating it against policy, and blocking violations before they execute.
Without runtime enforcement, security depends entirely on the agent behaving as expected — which is a fragile assumption for autonomous systems. The question is not "did we review this agent?" but "can we stop this agent from doing something dangerous right now?"
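What "intercept, evaluate, block" looks like in practice can be sketched in a few lines. All names here are hypothetical — a minimal in-process hook, assuming a simple policy of blocked tools and an outbound domain allow-list:

```python
# Minimal sketch of a runtime policy-enforcement hook (names hypothetical).
# Every tool call is evaluated against policy BEFORE it executes.
from dataclasses import dataclass, field

@dataclass
class Policy:
    blocked_tools: set = field(default_factory=set)
    allowed_domains: set = field(default_factory=set)

class PolicyViolation(Exception):
    pass

def enforce(policy: Policy, tool_name: str, args: dict):
    """Raise PolicyViolation instead of letting a disallowed call run."""
    if tool_name in policy.blocked_tools:
        raise PolicyViolation(f"tool '{tool_name}' is blocked by policy")
    url = args.get("url", "")
    if tool_name == "http_request" and not any(
        url.startswith(f"https://{d}") for d in policy.allowed_domains
    ):
        raise PolicyViolation(f"request to '{url}' is outside the allow-list")

def guarded_call(policy: Policy, tool, tool_name: str, **args):
    enforce(policy, tool_name, args)   # block the violation, then execute
    return tool(**args)
```

In a real deployment this hook would live in a gateway or proxy in front of the agent runtime, not inside the agent's own code — otherwise a compromised agent could simply bypass it.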
3. Do we have an inventory of every AI agent in our environment?
Shadow AI is already a significant risk for most enterprises. Developers spin up agents for prototyping and forget to decommission them. Business users adopt AI tools with embedded agents without IT approval. Partners deploy agents that interact with internal systems.
CISOs need an agent registry — a centralised catalogue of every AI agent, who owns it, what model it uses, what tools it has access to, what data it can reach, and whether it has been security-reviewed. Without this inventory, security teams are defending a perimeter they cannot see.
The registry should integrate with deployment pipelines so new agents are registered automatically, and it should include discovery mechanisms to detect unregistered agents already operating in the environment.
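The registry record and the shadow-agent check can be captured concretely. This is a sketch under stated assumptions — the field names simply mirror the inventory attributes listed above, and `observed_agent_ids` stands in for whatever identifiers your gateway or logs actually surface:

```python
# Illustrative agent-registry record and in-memory store.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    owner: str              # accountable team or individual
    model: str              # underlying model in use
    tools: list             # tools the agent can call
    data_scopes: list       # data the agent can reach
    security_reviewed: bool = False

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord):
        """Called automatically from the deployment pipeline."""
        self._agents[record.agent_id] = record

    def unreviewed(self) -> list:
        """Agents operating without a completed security review."""
        return [a for a in self._agents.values() if not a.security_reviewed]

    def unregistered(self, observed_agent_ids) -> list:
        """Shadow agents: seen in traffic or logs but absent from the registry."""
        return sorted(set(observed_agent_ids) - self._agents.keys())
```

The two query methods are the point: an inventory is only useful if it can answer "what is running unreviewed?" and "what is running that we never registered?" on demand.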
4. How do we detect and respond to agent-specific incidents?
Traditional SIEM and incident response playbooks were not designed for AI agent incidents. When an agent is compromised or behaves unexpectedly, the response requires agent-specific capabilities: the ability to immediately suspend the agent (kill switch), forensic traces showing every action the agent took (including reasoning steps, tool calls, and policy decisions), and playbooks that cover agent-specific scenarios like prompt injection exploitation and credential compromise.
CISOs should evaluate whether their security operations team has the visibility and tooling to detect agent anomalies — unusual tool call patterns, unexpected data access, abnormal token consumption — and respond before damage occurs.
Mean time to detect and mean time to respond are the metrics that matter. For autonomous agents operating at machine speed, both need to be measured in seconds, not hours.
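One way to get response time into seconds is to wire a simple detector directly to the kill switch. The sketch below is deliberately naive — the one-minute window and call-rate threshold are illustrative assumptions; production detection would baseline each agent and tool individually:

```python
# Sketch: sliding-window anomaly check wired to an agent kill switch.
import time
from collections import deque

class AgentMonitor:
    def __init__(self, max_calls_per_minute: int = 60):
        self.max_calls = max_calls_per_minute
        self.calls = deque()          # timestamps of recent tool calls
        self.suspended = False        # kill-switch state

    def record_tool_call(self, now: float = None):
        now = time.time() if now is None else now
        self.calls.append(now)
        while self.calls and now - self.calls[0] > 60:
            self.calls.popleft()      # keep a one-minute sliding window
        if len(self.calls) > self.max_calls:
            self.kill()               # suspend in-line, at machine speed

    def kill(self):
        # Downstream gateway rejects all calls from a suspended agent.
        self.suspended = True
```

The in-line check matters: if detection runs as a batch job over yesterday's logs, MTTD is measured in hours by construction, no matter how good the analytics are.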
5. Can we prove to regulators that our agents are governed?
Regulatory scrutiny of AI systems is intensifying. The EU AI Act imposes direct obligations on organisations deploying AI, DORA covers it as part of ICT risk for financial entities, and sector regimes like HIPAA and PCI DSS apply whenever agents touch health or cardholder data. Regulators will not accept policy documents as evidence of governance — they will want to see operational controls and audit trails.
CISOs should ask whether their agent governance produces the compliance evidence regulators expect: immutable logs of every agent action and policy decision, demonstrable runtime enforcement (not just documentation), evidence that security reviews were conducted and findings addressed, and traceability from agent actions to business outcomes.
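"Immutable logs" has a concrete, checkable meaning: each record commits to the one before it, so tampering with any entry invalidates everything after it. A minimal sketch of that hash-chaining idea, using only standard hashing (the record fields are illustrative):

```python
# Hash-chained audit log: each entry commits to the previous entry's hash,
# so altering any record breaks verification of the entire suffix.
import hashlib
import json

GENESIS = "0" * 64

def append_entry(log: list, action: dict) -> list:
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps(action, sort_keys=True)  # canonical serialisation
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"action": action, "prev_hash": prev_hash, "hash": entry_hash})
    return log

def verify(log: list) -> bool:
    """Recompute the chain; any tampered entry makes this return False."""
    prev_hash = GENESIS
    for entry in log:
        body = json.dumps(entry["action"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

Production systems would add signed timestamps and external anchoring, but even this basic chain turns "trust our logs" into "verify our logs" — the distinction regulators care about.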
The organisations that invest in governance infrastructure now will be well-positioned when regulatory enforcement begins. Those that treat governance as a future problem will face costly remediation under time pressure.