    WHY NOW. TWO CONVERGING CYCLES

    Why Now. Two Cycles Converge.

    A compliance cycle binds regulated industries between 2026 and 2028. A substrate rebuild cycle is forcing enterprises to take sovereign control of data, compute, model, and agent runtime. The two cycles arrive at the same time. The enterprise stacks built for the SaaS era cannot answer either one.

    CYCLE 1. THE COMPLIANCE CYCLE

    Seven frameworks. One window.

    The frameworks below carry binding deadlines or live supervisory expectations inside a single 24-month window. Regulated industries must produce machine-verifiable evidence at runtime, not policy decks at audit.

    EU AI Act

    High-risk AI obligations

    Binding 2026 to 2028

    High-risk AI obligations phase in across 2026, 2027, and 2028. Risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, and post-market monitoring become operational requirements, not policy intent.

    OSFI E-23

    Enterprise model risk management

    In force 2027

    OSFI's revised guideline on enterprise-wide model risk management binds federally regulated financial institutions in Canada. AI and ML in scope. Validation, ongoing monitoring, and documented effective challenge expected at runtime.

    Federal Reserve SR 26-2

    Revised model risk guidance

    Live April 2026

    SR 26-2, replacing SR 11-7, is the revised joint model risk guidance from the US Federal Reserve, OCC, and FDIC, live from April 2026. Its expanded scope explicitly captures AI and machine-learning models. The supervisory bar moves from documented governance to evidenced governance.

    FDA PCCP

    Predetermined Change Control Plans

    Operationalised

    FDA's Predetermined Change Control Plan framework is now operational guidance for AI/ML-enabled medical devices. Pre-specified modifications, validation protocols, and change-control evidence are submission prerequisites.

    DORA

    Digital Operational Resilience Act

    In force 2025

    EU DORA binds financial entities and their critical ICT third parties. Operational resilience, incident reporting, threat-led penetration testing, and third-party risk all extend to AI and agent runtime workloads.

    ISO 42001

    AI Management Systems

    Standard in adoption

    The international AI management system standard. Audit-grade governance of the AI lifecycle: risk, controls, continuous improvement, and evidence. Procurement teams at regulated buyers are starting to require it.

    BCBS 239

    Risk data aggregation

    Standard in force

    Basel Committee principles for effective risk data aggregation and risk reporting. As AI moves into the credit, market, and operational risk stack, BCBS 239's lineage and data-quality disciplines apply to the AI surface too.

    CYCLE 2. THE SUBSTRATE REBUILD CYCLE

    Agentic AI breaks the assumptions enterprise stacks were built on.

    The enterprise stack was designed for human-driven workflows on SaaS perimeters. Identity for people. Audit logs for clicks. Vendors that train on customer data by default. None of those assumptions hold once autonomous agents start invoking tools, calling models, and acting on production systems on behalf of the enterprise.

    Boards have noticed. The mandate landing on CISOs, CIOs, and Chief AI Officers is consistent across regulated industries: take sovereign control of the four substrate layers (data, compute, foundation models, and agent runtime), keep them inside the customer envelope, and produce audit-grade evidence for every decision the AI stack makes.

    That is a substrate rebuild, not a procurement exercise. Hyperscaler primitives and SaaS overlays are necessary, but they do not add up to a governed runtime. Something has to operate the substrate to a service-level standard regulators will accept.

    THE INTERSECTION

    One stack answers both cycles.

    iTmethods operates the practice and ships the platform. Three legs hold up the answer.

    REIGN

    Regulator-grade evidence.

    The Trust Layer for Enterprise AI. AI Gateway, Model Risk Validation, Audit Ledger (CAVR), and Assurance Packs. Pre-mapped to every framework above.

    FORGE

    Governed runtime infrastructure.

    Modern DevOps, the AI Substrate (agent runtimes, governed model access, MCP and tool operations, sovereign control plane), and Forge Secure AI. Operated inside the customer envelope.

    SOVEREIGN SUBSTRATE ENGINEERING

    The 21-year practice.

    The codified discipline of operating regulated runtime under customer ownership and customer control. The reason Reign and Forge can deliver audit-grade outcomes hyperscalers and SaaS overlays cannot.

    Pick the entry point that matches your mandate.

    Reign for governance and evidence. Forge for governed runtime. Frameworks for the regulator vocabulary you already speak.