The Six Agentic AI Risks in AIGF v2.0
AIGF v2.0, the second version of the FINOS AI Governance Framework, introduced six risk categories specific to agentic AI: threat vectors that do not arise in single-model generative AI systems. They address the reality that agentic systems take actions, call tools, coordinate with other agents, and carry state across sessions.
These risks are now part of the procurement and operating vocabulary of financial institutions contributing to FINOS.
The Six Risks
Each risk addresses a specific attack surface that emerges in agentic architectures:
- Multi-agent trust boundary violations — trust between coordinating agents becomes an attack surface
- Agent action authorization bypass — agents exceeding their approved action scope through chained calls
- Tool chain manipulation — adversaries compromising one tool in an agent's chain to pivot to others
- MCP server supply chain compromise — tainted Model Context Protocol servers feeding corrupted data or execution environments
- Agent state persistence poisoning — malicious modification of an agent's long-running context to influence future decisions
- Agent-mediated credential harvesting — agents inadvertently exfiltrating credentials through chained tool interactions
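To make the second risk concrete, here is a minimal sketch of how an agent action authorization bypass through chained calls can occur, and how checking the full call chain closes it. All names (agents, scopes, the `authorize` helper) are illustrative assumptions, not part of AIGF or any real SDK.

```python
# Minimal sketch: per-agent action scopes, checked on every tool call,
# including calls one agent triggers on behalf of another (chained calls).
# Agent names, scopes, and the authorize() helper are hypothetical.

class ActionNotAuthorized(Exception):
    pass

# Approved action scope per agent; anything absent is denied by default.
AGENT_SCOPES = {
    "research-agent": {"web.search", "docs.read"},
    "payments-agent": {"ledger.read"},
}

def authorize(agent_id: str, action: str, call_chain: list[str]) -> None:
    """Deny unless *every* agent in the chain is approved for the action.

    Checking the whole chain blocks the bypass where a low-privilege
    agent routes a request through a higher-privilege peer.
    """
    for caller in [*call_chain, agent_id]:
        if action not in AGENT_SCOPES.get(caller, set()):
            raise ActionNotAuthorized(f"{caller} lacks scope for {action!r}")

# Direct call within scope: allowed.
authorize("payments-agent", "ledger.read", call_chain=[])

# Chained call: research-agent asks payments-agent to run ledger.read.
# research-agent is not approved for that action, so the chain is denied.
try:
    authorize("payments-agent", "ledger.read", call_chain=["research-agent"])
except ActionNotAuthorized as exc:
    print("blocked:", exc)
```

The deny-by-default scope table and the whole-chain check are the essential points; a production system would resolve scopes from policy, not a hard-coded dict.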
Why These Risks Cannot Be Handled by Generative-Era Tools
The prompt-response AI governance paradigm assumed a narrow control surface: inspect the input, monitor the output, log after the fact. Agentic AI expands that surface to every tool, every system, every other agent. The decision is an action, not a piece of text — and by the time an action is logged, it has already executed.
Mitigating the six risks requires runtime enforcement at the moment of tool invocation and agent action, plus continuous evidence production. That is a different architecture, not an extension of the old one.
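The difference between after-the-fact logging and runtime enforcement can be sketched in a few lines: the policy decision happens before the tool executes, and an evidence record is produced for every decision, allowed or denied. The policy shape, tool registry, and record fields below are illustrative assumptions, not a description of any real gateway's API.

```python
# Minimal sketch: enforce policy *before* a tool runs, and emit an
# evidence record for every decision. POLICY, TOOLS, and the record
# fields are hypothetical, chosen only to illustrate the pattern.
import time

POLICY = {"allowed_tools": {"search", "summarize"}}
EVIDENCE_LOG: list[dict] = []

TOOLS = {
    "search": lambda query: f"results for {query}",
    "summarize": lambda text: text[:20],
}

def invoke_tool(agent_id: str, tool: str, args: dict):
    decision = "allow" if tool in POLICY["allowed_tools"] else "deny"
    # Evidence is produced at decision time, not reconstructed later.
    EVIDENCE_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "tool": tool,
        "args": args,
        "decision": decision,
    })
    if decision == "deny":
        raise PermissionError(f"tool {tool!r} denied for {agent_id}")
    return TOOLS[tool](**args)  # runs only after the policy check passes

print(invoke_tool("agent-1", "search", {"query": "rates"}))
try:
    invoke_tool("agent-1", "shell.exec", {"cmd": "rm -rf /"})
except PermissionError as exc:
    print("blocked:", exc)
```

Contrast this with output monitoring: a post-hoc log of the `shell.exec` attempt would record an action that had already run, while the pre-execution check above never lets it execute.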
How Reign Addresses the Six Risks
Reign, iTmethods' AI governance platform, is built for agentic systems from the operational path up. The AI Gateway enforces policy in real time at the moment of tool invocation — including MCP server governance, tool chain integrity, multi-agent boundary enforcement, and agent action authorization. The Evidence Engine produces regulator-grade compliance artifacts automatically.
