5 Questions Every Risk Manager Should Ask About AI Agent Deployments
AI agents introduce risk categories that traditional risk frameworks do not cover. These five questions help risk managers evaluate and mitigate the risks that agent deployments introduce.
1. What is the blast radius if an AI agent fails or is compromised?
Risk managers are trained to think in terms of worst-case scenarios and blast radius. For AI agents, this analysis is essential but often overlooked.
Consider what happens if a customer-facing agent is compromised: it could leak customer data, provide harmful advice, execute unauthorised transactions, or cause reputational damage at scale. The blast radius depends on what the agent can access (data, tools, APIs), who it interacts with (internal users, customers, partners), and what actions it can take (read-only queries vs. irreversible transactions).
Risk managers should require a blast radius assessment for every agent deployed to production. This assessment maps the maximum potential damage if the agent is compromised or malfunctions, and ensures that controls are proportional to the risk. High-blast-radius agents need tighter permissions, more monitoring, and faster incident response capabilities.
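A blast-radius assessment can be reduced to the three axes above: what the agent can access, who it talks to, and what actions it can take. The rubric below is a minimal sketch; the scores, tier names, and thresholds are illustrative assumptions to be calibrated against your own risk framework, not a standard.

```python
from dataclasses import dataclass

# Each axis is scored 1-3; labels and weights are illustrative assumptions.
ACCESS = {"public_data": 1, "internal_data": 2, "customer_data": 3}
AUDIENCE = {"internal": 1, "partners": 2, "customers": 3}
ACTIONS = {"read_only": 1, "reversible_writes": 2, "irreversible": 3}

@dataclass
class Agent:
    name: str
    access: str    # what the agent can reach (data, tools, APIs)
    audience: str  # who it interacts with
    actions: str   # what it can do

def blast_radius(agent: Agent) -> str:
    """Combine the three axes into a coarse tier."""
    score = ACCESS[agent.access] * AUDIENCE[agent.audience] * ACTIONS[agent.actions]
    if score >= 18:
        return "high"    # tighter permissions, more monitoring, fast kill switch
    if score >= 6:
        return "medium"
    return "low"

print(blast_radius(Agent("support-bot", "customer_data", "customers", "irreversible")))  # high
```

Multiplying rather than adding the axes reflects that the factors compound: a customer-facing agent with irreversible actions on customer data is far more than three times as risky as a read-only internal helper.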
2. Can we quantify and track AI agent risk in our existing risk framework?
Most enterprise risk frameworks use established methodologies to quantify and track risk: likelihood times impact, risk registers, heat maps, and key risk indicators. AI agents need to be integrated into these frameworks — not managed in a separate, informal process.
Risk managers should work with AI and security teams to define agent-specific risk indicators: number of unregistered agents (shadow agent risk), percentage of agent actions covered by runtime enforcement (governance gap), rate of policy violations (control effectiveness), mean time to detect and respond to agent incidents (operational risk), and agent cost variance (financial risk).
These indicators should be tracked alongside traditional risk metrics and reported to the same governance bodies. AI agent risk is not a separate category — it is operational risk, compliance risk, and technology risk expressed through a new vector.
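The indicators above are simple ratios over inventory and telemetry counts. A minimal sketch, assuming hypothetical field names (the input counts and any thresholds you alarm on are yours to define):

```python
def agent_kris(total_agents: int, registered_agents: int,
               actions_total: int, actions_enforced: int,
               violations: int, budget: float, spend: float) -> dict:
    """Compute illustrative agent-specific key risk indicators.

    Field names are assumptions for illustration, mirroring the indicators
    in the text: shadow agents, enforcement coverage, violation rate,
    and cost variance.
    """
    return {
        "shadow_agent_count": total_agents - registered_agents,
        "enforcement_coverage_pct": 100.0 * actions_enforced / actions_total,
        "violation_rate_pct": 100.0 * violations / actions_total,
        "cost_variance_pct": 100.0 * (spend - budget) / budget,
    }

kris = agent_kris(total_agents=40, registered_agents=34,
                  actions_total=10_000, actions_enforced=9_200,
                  violations=37, budget=5_000.0, spend=6_100.0)
print(kris)
```

Indicators in this form drop straight into an existing KRI dashboard: each is a number with a trend, a threshold, and an owner, reported to the same governance bodies as any other operational risk metric.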
3. How do we assess third-party and supply chain risks specific to AI agents?
AI agents depend on a complex supply chain: foundation model providers, agent frameworks, tool integrations, data sources, and hosting infrastructure. Each dependency introduces risk that may not be captured by traditional vendor risk assessments.
A model provider outage can disable every agent in the organisation. A compromised tool server can feed malicious data to agents. A poisoned vector database can cause agents to produce dangerous outputs. A framework vulnerability can be exploited across every agent built on that framework.
Risk managers should extend their third-party risk assessment process to cover AI-specific dependencies. This includes evaluating model provider reliability and security practices, assessing the security of MCP tool servers and other integrations, maintaining an AI bill of materials that catalogues every dependency, and monitoring for vulnerabilities in the agent supply chain.
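An AI bill of materials can start as a plain catalogue that answers the key supply-chain question: if this dependency fails or is compromised, which agents are in scope? The sketch below is a minimal illustration; the entry fields and component names are assumptions, not a formal BOM standard.

```python
# Illustrative AI bill of materials: component names and fields are
# hypothetical placeholders for your actual model providers, frameworks,
# and MCP tool servers.
ai_bom = [
    {"component": "model-provider-a", "type": "foundation_model",
     "used_by": ["support-bot", "triage-agent"], "criticality": "high"},
    {"component": "framework-x", "type": "agent_framework",
     "used_by": ["triage-agent"], "criticality": "medium"},
    {"component": "crm-tool-server", "type": "mcp_tool",
     "used_by": ["support-bot"], "criticality": "high"},
]

def agents_affected_by(component: str) -> set[str]:
    """Map a compromised or failed dependency to the agents it can reach."""
    return {agent for entry in ai_bom
            if entry["component"] == component
            for agent in entry["used_by"]}

print(agents_affected_by("model-provider-a"))
```

Even this flat list makes the scenarios in the text concrete: a model provider outage maps to every agent in its `used_by` list, and a framework vulnerability maps to every agent built on that framework.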
4. Do our agents produce the audit evidence that regulators will require?
Regulatory requirements for AI are becoming concrete. The EU AI Act mandates transparency, human oversight, and detailed logging for high-risk AI systems. DORA requires financial institutions to test and monitor AI-driven processes. HIPAA requires audit trails for systems handling protected health information.
Risk managers should verify that agent governance infrastructure produces the specific evidence these regulations require: complete logs of every agent action and policy decision, demonstrable human oversight for high-risk actions, traceability from agent outputs to source data and reasoning, incident records with root cause analysis and remediation evidence, and regular risk assessments with documented findings.
The time to build this evidence infrastructure is before regulators ask for it — not after. Organisations that wait for an audit or enforcement action to force the issue will face both remediation costs and potential penalties.
5. What is our incident response plan for AI agent failures?
Traditional incident response plans cover scenarios like data breaches, system outages, and cyberattacks. AI agent incidents require additional playbooks that address agent-specific failure modes.
Risk managers should ensure that incident response plans cover: compromised agents (an attacker has gained control of an agent through prompt injection or credential theft), malfunctioning agents (an agent is producing incorrect outputs, accessing wrong data, or consuming excessive resources), data exposure through agents (an agent has leaked sensitive data through its outputs or tool calls), and cascade failures in multi-agent systems (one agent's failure triggers failures in agents that depend on it).
Each scenario should have defined response procedures: who is notified, how the agent is isolated (kill switch), how evidence is preserved (audit logs), how affected parties are communicated with, and how the incident is resolved and reviewed.
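The scenario-to-procedure mapping can be written down as a simple dispatch table so the playbook is unambiguous under pressure. This is a sketch: the scenario names mirror the four failure modes above, and step names like `kill_switch` stand in for whatever isolation, notification, and evidence-preservation tooling you actually operate.

```python
# Hypothetical playbooks: ordered response steps per agent failure mode.
PLAYBOOKS: dict[str, list[str]] = {
    "compromised": ["notify_security", "kill_switch", "preserve_logs",
                    "rotate_credentials", "notify_affected", "postmortem"],
    "malfunctioning": ["notify_owner", "pause_agent", "preserve_logs",
                       "postmortem"],
    "data_exposure": ["notify_security", "kill_switch", "preserve_logs",
                      "notify_affected", "regulatory_report", "postmortem"],
    "cascade": ["notify_owner", "pause_downstream_agents", "preserve_logs",
                "postmortem"],
}

def respond(scenario: str) -> list[str]:
    """Return the ordered response steps for a scenario, failing loudly on
    an unplanned one -- an incident with no playbook is itself a finding."""
    if scenario not in PLAYBOOKS:
        raise ValueError(f"no playbook for scenario: {scenario}")
    return PLAYBOOKS[scenario]
```

Note that evidence preservation appears in every playbook and isolation comes before communication: you cannot investigate an agent whose logs were lost when it was torn down.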
The most important capability is speed. Autonomous agents can cause damage in seconds that takes months to remediate. The incident response plan must enable detection and response at machine speed, not human speed.