
    Most AI Governance Tools Were Built for Generative AI. Agentic AI Requires a Different Architecture.

    Agentic AI is in production in regulated industries. The governance stack most institutions spent the last three years building was designed for a world that no longer exists. FINOS Open RegTech shows the architecture pattern that actually works.

Paul Goldman, CEO, iTmethods
    April 21, 2026 · 12 min read

    Securing the Agentic Era — Article 5

Self-Hosted Agents · Fluxnova Fork · 114 Days · This Article: The Bridge

    Agentic AI is no longer theoretical in regulated industries.

    Banks and insurers are deploying autonomous agents for trade execution, credit decisioning, fraud investigation, customer onboarding, and regulatory reporting workflows. Healthcare and life sciences are deploying agents for clinical trial coordination, lab data orchestration, and regulatory submissions. Public sector and critical infrastructure are not far behind. These systems do not just generate outputs. They take actions, call tools, maintain state across multiple steps, and interact with other agents.

    That shift breaks the governance stack most institutions spent the last three years building.

    Generative AI governance — the prompt-response, single-model, content-moderation paradigm — had a tractable problem: inspect the input, monitor the output, log what happened. The control surface was narrow. The decision was a piece of text. The auditability was after the fact.

    Agentic AI removes every one of those simplifications. The control surface is every tool the agent can call, every system it can touch, every other agent it can coordinate with, and every piece of state it carries across a session. The decision is an action. The auditability has to be real-time, because by the time the action is logged, it has already executed.

Most existing AI governance tools — and most existing AI governance frameworks — were built for the prompt-response world. They are now being asked to govern a fundamentally different system. They cannot.

25 AIGF risk categories · 6 new agentic risks (v2.0) · EU AI Act high-risk enforcement Aug 2, 2026 · $85B RegTech market by 2035 · 3 of 6 Canadian D-SIBs in FINOS · 99.95% Forge SLA on managed infrastructure

    The New Threat Surface

    In late 2025, FINOS — the Linux Foundation’s Fintech Open Source Foundation — released Version 2.0 of its AI Governance Framework with a dedicated agentic AI risk catalogue. It defines six new risks specific to autonomous architectures: multi-agent trust boundary violations, agent action authorization bypass, tool chain manipulation, MCP server supply chain compromise, agent state persistence poisoning, and agent-mediated credential harvesting.

    These are not theoretical. MCP server supply chain compromise — where a tainted Model Context Protocol server feeds corrupted data or execution environments to agentic AI systems — is already surfacing in enterprise environments. Agent-mediated credential harvesting is appearing in red-team exercises at multiple G-SIBs. The other four are arriving as fast as enterprise agent deployment is.

    What is striking about the FINOS work is not the risk taxonomy itself — most serious AI security teams have parts of this list — but that it has been formally codified by a body whose institutional contributors include the largest banks in the world, and that it now maps to global regulatory frameworks including the EU AI Act, DORA, NIST AI RMF, ISO 42001, OWASP, and MITRE ATLAS.
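    To make "codified" concrete, here is roughly what a machine-readable entry for the six agentic risks could look like. This is a minimal Python sketch: the identifiers, descriptions, and control names are illustrative, not the AIGF's official catalogue format.

```python
# Illustrative only: keys, descriptions, and control names are hypothetical,
# not the official AIGF v2.0 catalogue numbering or wording.
AGENTIC_RISKS = {
    "multi_agent_trust_boundary_violation": {
        "description": "Agent accepts instructions or data from an agent outside its trust boundary.",
        "runtime_control": "enforce_agent_allowlist",
    },
    "agent_action_authorization_bypass": {
        "description": "Agent executes an action its principal was never entitled to perform.",
        "runtime_control": "check_delegated_entitlements",
    },
    "tool_chain_manipulation": {
        "description": "Tool definitions or outputs are tampered with between calls.",
        "runtime_control": "verify_tool_manifest_hash",
    },
    "mcp_server_supply_chain_compromise": {
        "description": "A tainted MCP server feeds corrupted data or execution context.",
        "runtime_control": "pin_and_attest_mcp_servers",
    },
    "agent_state_persistence_poisoning": {
        "description": "Persisted agent memory is seeded with malicious instructions.",
        "runtime_control": "scan_state_on_load",
    },
    "agent_mediated_credential_harvesting": {
        "description": "Agent is induced to exfiltrate credentials it can read.",
        "runtime_control": "redact_secrets_in_tool_io",
    },
}
```

    A structure like this, however it is ultimately expressed, is what turns a risk taxonomy into something procurement checklists and runtime enforcement can actually consume.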

    Codified risks become procurement requirements. Procurement requirements become operating standards.

    That progression is happening now.


    This Is Not a New Problem. The Template Already Exists.

    Here is what most people commenting on agentic governance are missing.

    This is not the first time a regulated industry has had to build an evidence architecture for a novel control surface. Trade reporting solved a structurally identical problem over the last decade. Banks had millions of reportable events per day, generated by complex systems, governed by national variants of global regulations, with regulators demanding machine-readable proof that controls had run.

    The solution was not a dashboard. It was an architecture.

    The Common Domain Model (CDM), now hosted as a FINOS project, gave the industry a shared, machine-readable representation of trades and lifecycle events. Morphir, contributed by Morgan Stanley, gave the industry a way to express regulatory rules as executable code rather than as PDF interpretations. Digital Regulatory Reporting (DRR) wove the two together into a production layer that takes structured trade data, applies the rule logic, and produces regulator-grade reports automatically. JP Morgan, RBC, LSEG, Natixis CIB, and Japan’s central counterparty JSCC are now running this architecture in production.

    The pattern is precise: take a regulatory requirement, express it as enforceable logic, run it continuously against operational data, and produce evidence as a byproduct of normal operations.
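    As a rough illustration of that pattern (not of Morphir or Rosetta themselves), here is a minimal Python sketch: a requirement expressed as executable logic, run against operational trade records, with the evidence record emitted as a byproduct of every evaluation. The rule, field names, and identifiers are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Trade:
    trade_id: str
    counterparty_lei: str | None   # hypothetical reportable field

@dataclass
class Evidence:
    rule_id: str
    subject_id: str
    passed: bool
    checked_at: datetime
    detail: str

def rule_lei_present(trade: Trade) -> tuple[bool, str]:
    """An EMIR-style requirement expressed as executable logic:
    every reportable trade must carry a counterparty LEI."""
    ok = trade.counterparty_lei is not None
    return ok, "LEI present" if ok else "missing counterparty LEI"

def run_control(trades: list[Trade]) -> list[Evidence]:
    """Run the rule continuously against operational data; the evidence
    record falls out of every evaluation, not out of a separate audit."""
    out: list[Evidence] = []
    for t in trades:
        passed, detail = rule_lei_present(t)
        out.append(Evidence("emir.lei.required",   # illustrative identifier
                            t.trade_id, passed,
                            datetime.now(timezone.utc), detail))
    return out
```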

    That pattern is exactly what agentic AI governance now needs. And the people who built the trade-reporting version of it — the same FINOS community — have just built the AI governance version with the AIGF. The bridge between the two is the strategic gap nobody has built.


    Two Regulatory Domains. One Evidence Architecture.

    A large regulated institution today has two governance problems that look completely different and are actually the same problem.

    On one side: a trading or transaction operation generating millions of reportable events per day, governed by increasingly specific national implementations of global regulations.

    On the other: an AI operation — dozens or hundreds of models in production, agentic workflows in development, third-party AI embedded in SaaS across every business line — governed by an entirely separate and rapidly expanding regulatory framework.

    Today these are separate compliance programs. Separate teams. Separate systems. Separate evidence collection processes. Separate audit responses.

    But the underlying infrastructure problem is identical: take a requirement, express it as an enforceable control, run that control continuously against operational data, collect the evidence that it ran, and produce that evidence on demand.

    CDM and DRR have proven this model works for trade reporting. The AIGF defines the equivalent for AI governance. What neither provides is the production layer that enforces the controls, collects the evidence, and makes it retrievable — across both domains, on unified infrastructure.
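    One way to picture the unified layer is a single evidence shape, whichever regulatory domain produced the record. A minimal sketch, with field names and framework references that are purely illustrative rather than any published schema:

```python
from datetime import datetime, timezone

def evidence_record(domain: str, control_id: str, subject: str,
                    outcome: str, framework_refs: list[str]) -> dict:
    """One evidence shape for both domains. Illustrative field names only."""
    return {
        "domain": domain,                  # "trade_reporting" or "ai_governance"
        "control_id": control_id,
        "subject": subject,
        "outcome": outcome,                # "pass" / "fail" / "quarantined"
        "framework_refs": framework_refs,  # illustrative regulatory anchors
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# The same pipeline stores both kinds of record:
trade_ev = evidence_record("trade_reporting", "emir.lei.required",
                           "trade-123", "pass", ["EMIR Art. 9"])
ai_ev = evidence_record("ai_governance", "aigf.tool_chain_integrity",
                        "agent-run-456", "quarantined", ["EU AI Act Art. 12"])
```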

    Two regulatory domains. One evidence architecture.

    The institution — or platform — that builds a unified evidence pipeline across both domains will define the architecture of regulatory technology for the next decade. Not because the regulations are the same, but because the infrastructure pattern is.


    The Regulatory Clock Has Already Started

    2026 is the year that trade reporting obligations and AI governance obligations hit the same institutions, under the same regulators, on overlapping timelines.

    DORA has been fully in force since January 17, 2025. Financial institutions and their critical ICT service providers must demonstrate ICT risk management, incident reporting, resilience testing, and third-party risk management. DORA’s tiered major incident reporting — initial notification within four hours of classification (or 24 hours after detection, whichever is earlier), intermediate reports at 72 hours, final reports within one month — has already proven challenging for organizations without mature SOC operations.
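    As a worked example of how tight those windows are, here is a small Python sketch that computes the deadlines from the summary above. The anchor points for the intermediate and final reports are assumptions here (the DORA technical standards define the precise reference points), so treat this as illustration rather than a compliance calculator.

```python
from datetime import datetime, timedelta, timezone

def dora_report_deadlines(detected_at: datetime, classified_at: datetime) -> dict:
    """Tiered DORA major-incident timeline as summarized in this article.
    Assumption: intermediate is anchored on the initial notification and
    final on the intermediate report; verify against the RTS text."""
    initial = min(classified_at + timedelta(hours=4),
                  detected_at + timedelta(hours=24))
    intermediate = initial + timedelta(hours=72)
    final = intermediate + timedelta(days=30)   # "within one month", approximated
    return {"initial": initial, "intermediate": intermediate, "final": final}

# An incident detected at 02:10 UTC and classified as major at 09:40 UTC:
detected = datetime(2026, 3, 3, 2, 10, tzinfo=timezone.utc)
classified = datetime(2026, 3, 3, 9, 40, tzinfo=timezone.utc)
print(dora_report_deadlines(detected, classified))
# Initial notification is due 13:40 the same day: four hours after
# classification, which lands earlier than 24 hours after detection.
```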

    EU AI Act high-risk enforcement arrives August 2, 2026. Organizations deploying AI in credit scoring, recruitment, critical infrastructure, or biometric identification must demonstrate conformity assessments, risk management systems, and human oversight mechanisms. The evidence requirements are operational, not documentary — regulators will ask for audit trails that prove controls ran, not policies that say they exist.

The EU Parliament has already flagged the overlap. Regulators are concerned about how the EU AI Act interacts with existing EU financial services legislation, and the lack of sufficient guidance introduces complexity and legal uncertainty for institutions trying to comply with both at once.

    A large financial institution now faces DORA’s ICT resilience requirements, MiFID / EMIR / SFTR trade reporting obligations, and EU AI Act governance mandates — all enforced by overlapping regulatory authorities, all requiring continuous evidence, all running on infrastructure that must itself be governed.

    Treating these as three separate compliance programs is the expensive way. The institutions that recognize the shared infrastructure pattern will build once and produce evidence across all three.


    What the Right Architecture Actually Looks Like

    It is worth being precise about what the right architecture is, because it is meaningfully different from what most institutions have today.

    It is not a dashboard. Dashboards visualize what already happened. Agentic AI requires intervention before the action.

    It is not a risk assessment tool. Risk assessments produce documents. Regulators are increasingly asking for runtime evidence.

    It is not a policy management platform. Policies are inputs. Evidence is the output.

    It is an engine that sits in the operational path of AI workloads — between the agent and the tools it can call, between the agent and the data it can read, between the agent and the systems it can act on. It evaluates every action against policy in real time. It allows, denies, or quarantines. And it produces, as a byproduct of every operation, the audit packs, control attestations, and regulator-ready exports that satisfy DORA, the EU AI Act, the AIGF, and the trade-reporting frameworks running on the same managed infrastructure.
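    Here is a minimal sketch of what "sits in the operational path" means in practice, with hypothetical agent, tool, and policy names: the gate evaluates the action before it runs, returns allow, deny, or quarantine, and writes the audit record as a byproduct of the decision itself.

```python
from datetime import datetime, timezone
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"
    QUARANTINE = "quarantine"

# Illustrative policy: which tools each agent may call, and which are held for review.
POLICY = {
    "payments-agent": {"allowed_tools": {"lookup_balance", "draft_payment"},
                       "quarantined_tools": {"execute_payment"}},
}

AUDIT_LOG: list[dict] = []

def gate_tool_call(agent_id: str, tool: str, args: dict) -> Verdict:
    """Sits between the agent and the tool: decide before execution,
    and emit the audit record as part of the decision itself."""
    policy = POLICY.get(agent_id, {})
    if tool in policy.get("allowed_tools", set()):
        verdict = Verdict.ALLOW
    elif tool in policy.get("quarantined_tools", set()):
        verdict = Verdict.QUARANTINE   # held for human oversight
    else:
        verdict = Verdict.DENY
    AUDIT_LOG.append({
        "agent_id": agent_id, "tool": tool, "args": args,
        "verdict": verdict.value,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    })
    return verdict

# The agent only reaches the real tool if the gate allows it.
if gate_tool_call("payments-agent", "execute_payment", {"amount": 25000}) is Verdict.ALLOW:
    pass  # only here would the actual tool call run
```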

    The parallel to DRR is exact. DRR takes CDM-structured trade data and produces regulatory reports. An AIGF evidence engine takes AI operational data — model invocations, agent actions, policy decisions, tool calls — and produces governance reports. Same pattern. Same infrastructure needs. Different regulatory domain.


    Why Most Current Tools Will Fail

    The gap is real because nobody is positioned to close it today.

    Traditional GRC platforms — ServiceNow GRC and its peers — were built for policy management and checkbox compliance. They have no FINOS alignment, no AI-native architecture, and no ability to enforce policy at agent runtime. The evidence requirements coming from DORA and the EU AI Act are operational and continuous. GRC was built for a different problem.

    The independent AI governance startups that existed eighteen months ago have largely been absorbed into network and application security portfolios — Cisco acquired Robust Intelligence, and others have been folded into similar security stacks. None has FINOS alignment. None maps to the AIGF. None plays in the RegTech space. The acquisition pattern itself is the tell: the market recognized AI governance is infrastructure, but the acquirers embedded it in security stacks rather than regulatory architectures.

    Model monitoring tools track performance and drift. They do not enforce policy at the moment of tool invocation, and they do not generate regulator-grade evidence as a byproduct of operation.

    The Big 4 consultancies implement compliance projects at seven and eight figures per engagement. They build bespoke; they do not operate managed platforms; they do not produce automated evidence at runtime. In a RegTech market growing at roughly 16% CAGR toward $85 billion by 2035, the project-based model is the model being disrupted.

    REGnosys built Rosetta DSL, the language behind DRR. They are essential to the trade-reporting layer but have no AI governance capability and no managed infrastructure. Morphir is open source with no commercial offering. Together they prove the FINOS pattern; they do not operate it.

    The position at the intersection of agent-native runtime enforcement, full AIGF v2.0 mapping, FINOS-aligned managed infrastructure, automated evidence generation, and twenty-plus years of regulated-enterprise delivery is, as far as I can see, unoccupied.

    That will not last.


    The Canadian Signal

    There is an institutional dimension worth attention.

    Three of Canada’s six Domestic Systemically Important Banks are now active in the FINOS ecosystem. RBC co-chairs the Open RegTech Special Interest Group. TD joined FINOS as a Platinum member on April 13, 2026, with Rajesh Raman, Head of Credit Card and Unsecured Lending, joining the FINOS Governing Board — and TD’s announcement specifically cited Fluxnova, the FINOS fork of the last Apache 2.0-licensed version of Camunda 7, as a primary driver alongside industry-wide AI governance. BMO is active in the FINOS community through its technology and innovation leadership.

    Canada’s banking regulator, OSFI, has been increasingly explicit about expectations for AI governance, model risk management, and operational resilience — E-23 on model risk, B-13 on technology and cyber risk, and forthcoming guidance on AI in financial services. These are not distant frameworks. They are the operating reality Canadian banks are building infrastructure to satisfy.

    When three of six D-SIBs are investing in the open-source regulatory infrastructure that will underpin compliance with these guidelines, the signal is unambiguous. This is where the institutional money is going. And the opportunity for managed infrastructure — for the institutions that want the benefits without operating the stack themselves — is wide open.


    What Happens in the Next Twelve Months

    Three developments will shape the coming year.

    The FINOS RegTech Council will formalize. The institutions funding it will have outsized influence on the architecture of the next regulatory technology stack.

    The AIGF will move from framework to procurement requirement. Institutions that contributed to v2.0 will begin requiring vendors and internal teams to demonstrate compliance against it as a condition of deployment. The AIGF Training Workshops running now are the leading indicator: you do not build a training curriculum for a standard you intend to leave optional.

    Managed Open RegTech infrastructure will become an explicit market category. The largest global banks will continue building their own. The mid-tier banks, regional banks, insurers, asset managers, trust companies, and foreign-bank branches operating under regulators like OSFI, the FCA, BaFin, and FINMA will need someone to operate it for them — under SLA, with governance, with continuous evidence production.


    This Is What We Are Building

    I am going to be direct about our position, because the rest of this piece would be incomplete without it.

    At iTmethods, this is exactly what we are building with the Fortress Family — Reign, Forge, and BioCompute — and it is why we have been aligning the platform to FINOS Open RegTech.

    Reign is our enterprise AI governance platform, built for agentic systems from the operational path up. It maps to all 25 AIGF risk categories, including the six new agentic risks in v2.0. The AI Gateway enforces policy in real time at the moment of tool invocation and agent action — including MCP server governance, tool chain integrity, multi-agent boundary enforcement, and agent action authorization. The Evidence Engine produces the compliance artifacts as a byproduct of operation: audit packs, control attestations, regulator-ready exports. Continuous. Automatic. Regulator-grade.

    Forge is the managed infrastructure layer underneath. The same platform that runs Fluxnova for business process orchestration also runs AI workloads under governance. One operating model. One SLA. One pane of evidence across both domains.

    BioCompute extends the same pattern into sovereign AI for regulated life sciences — the third domain where the evidence architecture applies, with the same FDA, EMA, and HIPAA-grade requirements that financial services regulators are now demanding of AI systems.

    The pattern is deliberate. The same managed infrastructure that runs FINOS open-source orchestration also runs FINOS-aligned AI governance, also runs sovereign AI for regulated science. One architecture. Three regulated domains. Unified evidence.

    We will not be the only ones who build this. The point of this piece is that the opportunity is structural — and someone will define the category. But we are the only platform I know of today that combines agent-native runtime enforcement, full AIGF v2.0 mapping, FINOS-aligned managed infrastructure, an evidence engine in production, and twenty-one years of regulated-enterprise delivery.

    If you run technology, risk, or compliance at a financial institution, an insurer, or a life sciences organization trying to get ahead of the DORA / EU AI Act / MiFID collision — or if you are inside FINOS thinking about how the pieces fit together — I would like to have that conversation.



    The Question

    Most institutions today are still building governance for the world of 2024 — the prompt-response world where the control surface was narrow and the audit was after the fact.

    The agentic era is already in production. The regulatory timelines have converged. The evidence architecture that will satisfy what is coming is the same across trade reporting and AI governance, because the infrastructure pattern is the same.

    The components are proven. The institutional money is moving. The pieces are on the board.

    The question is who assembles them first.


    Paul Goldman is CEO of iTmethods and creator of the Fortress Family — Reign, Forge, and BioCompute — the trust layer for enterprise AI. He has been building managed infrastructure for regulated enterprises for 21 years and writes weekly on AI governance and what regulated enterprises need to build safely in the agentic era.

    Previously in this series: The Camunda 7 Fork · 114 Days · The Platform Engineering Pivot

