
    96 Days, Maybe 600 — Why the EU AI Act Delay Talk Doesn't Change What You Need to Build

    Brussels failed to agree on a 16-month delay today. Even if the delay eventually passes, regulated enterprises building governance infrastructure for the original August 2 deadline are still right.

Paul Goldman · CEO, iTmethods
April 28, 2026 · 11 min read

    Securing the Agentic Era — Article 7 · AI Governance

114 Days · Wrong Problem · Vendor AI Data · This Article: 96 Days, Maybe 600

    The second political trilogue between the European Parliament, the Council, and the Commission on the Digital Omnibus on AI ended today without agreement.

    The Digital Omnibus is the proposal to defer the EU AI Act’s high-risk obligations from August 2, 2026 — 96 days from now — to December 2, 2027. For Annex I systems embedded in regulated products, the deferral runs further still: August 2, 2027 to August 2, 2028. The headline number is a 16-month slip on the high-risk deadline that the entire compliance industry has been building toward for two years.

    Today’s failed trilogue is the second consecutive negotiation to break down. A third trilogue is scheduled for mid-May. Even if it succeeds, the resulting compromise text must clear the Council, Parliament rapporteurs, and the Commission, then publish in the Official Journal before August 2, 2026 to take legal effect. If the Omnibus is not formally adopted before that date, the original AI Act applies from that date as written. That is the legal reality, not a rhetorical flourish — the deferral is a proposal, not an entitlement, and the legal-practitioner consensus is to treat August 2, 2026 as the actual deadline and the December 2027 date as a strategic gamble.

    In conversations across financial services, healthcare, life sciences, and regulated infrastructure over the past two weeks, I have heard a consistent question: “Should we slow down the governance build?”

    The answer is no. And the reasons are worth being precise about, because the next eight weeks will produce a lot of well-meaning but expensive advice in the other direction.

At a glance:
96 days remaining to the original August 2, 2026 EU AI Act high-risk enforcement deadline
December 2, 2027: proposed deferral under the Digital Omnibus
January 17, 2025: DORA in force, an immovable deadline
2 of 3 trilogues failed: the proposal remains uncertain
12-month governance build: the math does not work backwards from a deferred date

    What the Digital Omnibus actually proposes

    Three layers, often conflated, deserve separate treatment.

    The first layer is the high-risk obligations under Article 6. These are the rules that govern AI used in credit scoring, recruitment, education, healthcare triage, biometric identification, law enforcement risk assessment, and critical infrastructure. The Commission’s proposal moves the application date from August 2, 2026 to December 2, 2027 — a delay of approximately 16 months.

    The second layer is Annex I — AI systems embedded in regulated products like medical devices, industrial machinery, and autonomous vehicles. Application moves from August 2, 2027 to August 2, 2028.

The third layer is the watermarking and deepfake provisions for general-purpose AI under Article 50. The Parliament’s position moved these from February 2027 back to November 2, 2026 — a much shorter extension and a reminder that the Parliament’s mood is uneven across provisions.

    The proposal also retains a feature most published commentary has not focused on: the Commission can end any pause at any time. The deferral, if adopted, is conditional, not absolute.


    Why today’s trilogue failure matters

    Two consecutive trilogues have now failed. A third is scheduled for mid-May. The remaining political calendar before August 2 is tight — even if the May trilogue produces an agreement, the resulting compromise has to clear the Council, Parliament rapporteurs, and the Commission, then publish in the Official Journal before the original deadline triggers.

    That is not a procedural impossibility. It is, however, a meaningful execution risk. Regulated institutions planning their AI governance program around a December 2027 deadline are betting on a legislative timeline that is currently failing to converge.

    The smart institutions are not making that bet. The reason has nothing to do with optimism about Brussels. It has to do with what the governance program actually needs to produce.

    Four reasons the delay doesn’t change what you need to build

    1. DORA is already in force, and DORA absorbs most of what high-risk AI Act compliance requires.

    The Digital Operational Resilience Act has been enforceable since January 17, 2025. Every regulated financial institution operating in the EU is already required to maintain ICT risk management programs, third-party AI risk assessments, incident reporting under tiered windows, threat-led penetration testing for systemically important institutions, and continuous evidence of control effectiveness.

    The overlap with high-risk AI Act obligations is substantial. Article 9 risk management systems, Article 10 data governance, Article 12 record-keeping, Article 14 human oversight, and Article 15 accuracy and robustness requirements all map to obligations a financial institution is already meeting under DORA.

    If the Digital Omnibus passes and the AI Act high-risk date moves to December 2027, the DORA obligations do not move with it. A bank that paused its AI governance program for 16 months is still in scope for DORA examinations on the same timeline, on the same evidence, against the same architecture.

    The marginal cost of also being AI Act high-risk ready is small once DORA-grade evidence architecture is in place. The marginal cost of pausing and restarting is large.

2. OSFI E-23 and B-13 are already in force, and AI-specific guidance is coming.

    Canada’s banking regulator does not wait for Brussels. The E-23 guideline on model risk management has been in force since 2017 and is being expanded in scope. B-13 on technology and cyber risk is in force. The forthcoming OSFI guidance on AI in financial services will be examination-relevant on its own timetable.

    Three of Canada’s six Domestic Systemically Important Banks are already investing in the FINOS Open RegTech infrastructure that produces the evidence these guidelines will examine. RBC co-chairs the FINOS Open RegTech Special Interest Group. TD joined as a Platinum FINOS member on April 13, 2026 — fifteen days ago. BMO is active in the FINOS community. The Canadian D-SIB cohort is not waiting for the EU AI Act. The Canadian D-SIB cohort is building.

    For the FCA, BaFin, FINMA, MAS, JFSA, OCC, and the Federal Reserve, the same logic applies. AI Act high-risk obligations are the harmonized European version of a regulatory expectation already in motion across the OECD. The deadline that matters is the regulator’s, not the legislator’s.

    3. The Commission can end any pause at any time.

This provision is buried in the Digital Omnibus’s technical commentary, but it is the most important political fact about the proposal. The deferral, if adopted, is not a contractual entitlement to wait until December 2027. It is a conditional grace period the Commission retains the power to terminate.

If a high-profile incident occurs in the interim — an autonomous credit-decisioning system that produces a discriminatory pattern, an AI-mediated medical decision that harms a patient, an MCP server supply chain compromise that hits a Tier 1 institution — the Commission has every reason and every authority to end the pause unilaterally. The institutions that paused are then caught short, with 90, 60, or 30 days of operational runway to a deadline they had assumed was 18 months away.

This is not hypothetical. The MCP server vulnerabilities disclosed in April 2026 across more than 7,000 servers, the Vercel breach attributed to a third-party AI supply chain compromise, and the OutSystems data showing 97% of enterprises now run AI agents while only 12% have centralized control are exactly the kind of forcing functions that move regulators from accommodation to enforcement on short notice.

    A governance program built for August 2, 2026 absorbs the surprise. A governance program built for December 2027 does not.

    4. Governance is a 12-month build. The math does not work backwards from a deferred date.

    This is the operational reality most published commentary on the delay misses entirely.

    A real AI governance program — agent inventory, identity for non-human actors, runtime enforcement at the tool-call boundary, evidence collection mapped to regulatory clauses, third-party AI risk management, incident response — is a 12 to 18 month build for a large institution. It is not a 90-day sprint. It involves infrastructure choices, vendor commitments, organizational redesign, and operational maturity that does not compress.

    If an institution waits until the trilogue resolves in June, plans through Q3, procures in Q4, builds through 2027, and pilots in early 2028, it is launching after the deferred December 2027 date — not before. The delay has not produced extra time. It has produced the illusion of extra time, and the institutions that fell for the illusion are the ones that miss the deadline twice.

    The institutions that planned for August 2, 2026 are working on a timeline that produces compliant operations whether the deadline lands in 2026 or 2027. They have optionality. The institutions that paused do not.

    96 days — or maybe 600

    Either way, the architecture that satisfies the deadline is the same. Most regulated enterprises are still building for the wrong control surface.

    Get an AI governance readiness assessment

    What to build now, regardless of the trilogue outcome

    The architectural answer is the same in every scenario the next sixty days could produce.

    Build the agent inventory. The Vision Compliance 2026 EU AI Act Readiness Report found that 78% of enterprises are unprepared and 83% have no AI inventory. The OutSystems data is consistent — 97% running, 12% governing. Inventory is the first line of every AIGF, NIST AI RMF, ISO 42001, and DORA control. It is also the only line that has to come first; everything else is downstream.
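To make the inventory concrete, here is a minimal sketch of what one entry might capture, in Python. The field names and the example agent are illustrative assumptions, not a schema any framework or regulator prescribes; the point is that every agent gets a stable identifier, an accountable owner, and an explicit record of the tools and data it touches.

# Illustrative sketch of an agent inventory record. Field names and the
# example entry are assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AgentInventoryEntry:
    agent_id: str                                   # stable identifier, not a display name
    owner: str                                      # accountable human or team
    business_purpose: str                           # why the agent exists
    model: str                                      # underlying model and version
    tools: list[str] = field(default_factory=list)  # tool / MCP endpoints it may call
    data_classes: list[str] = field(default_factory=list)  # e.g. "PII", "PCI"
    risk_tier: str = "unassessed"                   # e.g. high-risk under AI Act Article 6
    last_reviewed: date | None = None

inventory = [
    AgentInventoryEntry(
        agent_id="credit-triage-agent-01",
        owner="retail-credit-risk",
        business_purpose="pre-screen consumer credit applications",
        model="example-llm-2026-01",
        tools=["credit_bureau.lookup", "crm.update_case"],
        data_classes=["PII"],
        risk_tier="high",
        last_reviewed=date(2026, 4, 1),
    ),
]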

    Build identity for non-human actors. The typical regulated enterprise now has between 50 and 150 non-human identities for every human user, depending on agent deployment maturity. Each one is a potential authorization decision. Most enterprises do not yet treat agents as users. The AI Act Article 14 human oversight obligations and the AIGF v2.0 agent action authorization controls both presume that they do.
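A sketch of what treating agents as first-class principals can look like, assuming short-lived, narrowly scoped credentials minted per agent rather than a shared service account. The helper name, scope strings, and TTL are hypothetical.

# Illustrative only: an agent as its own principal with a short-lived,
# narrowly scoped credential. Names, scopes, and the TTL are assumptions.
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentCredential:
    principal: str        # the agent's identity, distinct from any human user
    scopes: list[str]     # the only tool calls this credential authorizes
    expires_at: datetime
    token: str

def issue_agent_credential(principal: str, scopes: list[str], ttl_minutes: int = 15) -> AgentCredential:
    """Mint a credential bound to one agent and an explicit scope list."""
    return AgentCredential(
        principal=principal,
        scopes=scopes,
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
        token=secrets.token_urlsafe(32),
    )

cred = issue_agent_credential("credit-triage-agent-01", ["credit_bureau.lookup"])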

    Build runtime enforcement at the tool-call boundary. Logging is not enforcement. A control plane that records what an agent did is necessary, but not sufficient, for an examination that asks whether the action was allowed at the moment it executed. The architectural primitive is an authoritative policy decision point in the operational path of every model invocation, every MCP request, and every agent handoff.
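The difference between logging and enforcement is easiest to see in code. Below is a deliberately small sketch of a policy decision point that sits in the call path and denies by default; the policy table, function names, and tool identifiers are assumptions for illustration, not Reign's API or any standard's required interface.

# Sketch of a policy decision point in the operational path of a tool call.
# Deny by default: the call proceeds only if an explicit rule allows it.
# The policy table and names are illustrative assumptions.

POLICY = {
    # (agent principal, tool) -> data classes the call may touch
    ("credit-triage-agent-01", "credit_bureau.lookup"): {"PII"},
}

def authorize_tool_call(principal: str, tool: str, data_classes: set[str]) -> bool:
    allowed = POLICY.get((principal, tool))
    return allowed is not None and data_classes <= allowed

def invoke_tool(principal: str, tool: str, args: dict, data_classes: set[str]):
    if not authorize_tool_call(principal, tool, data_classes):
        raise PermissionError(f"{principal} is not authorized to call {tool}")
    # only now is the request handed to the actual tool or MCP server
    ...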

    Build evidence collection as a byproduct of operations. Manual audit response does not scale to thousands of agent invocations per day. The evidence has to be tamper-resistant, identity-attributed, mapped to regulatory clauses, and produced continuously. This is what the FINOS AI Governance Framework v2.0 specifies. It is the same artifact regardless of which date the EU AI Act ultimately enforces.
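One way to make evidence a byproduct rather than an afterthought is to have the enforcement point emit a record for every decision, hash-chained so tampering is detectable and tagged with the clauses it supports. The sketch below is an illustration under those assumptions; the clause labels and field names are not an official mapping.

# Illustrative sketch: every authorization decision emits an identity-attributed,
# hash-chained evidence record tagged with the regulatory clauses it supports.
# Clause tags and field names are assumptions, not an official mapping.
import hashlib
import json
from datetime import datetime, timezone

evidence_log: list[dict] = []

def record_decision(principal: str, tool: str, allowed: bool, clauses: list[str]) -> dict:
    prev_hash = evidence_log[-1]["hash"] if evidence_log else "genesis"
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "principal": principal,
        "tool": tool,
        "allowed": allowed,
        "clauses": clauses,          # e.g. ["EU AI Act Art. 12", "DORA Art. 9"]
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    evidence_log.append(entry)
    return entry

record_decision("credit-triage-agent-01", "credit_bureau.lookup", True,
                ["EU AI Act Art. 12", "DORA Art. 9"])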

    Build third-party AI risk management. Vendor unilateral AI data policy changes are now classified material ICT risk events under DORA. The supply chain attacks of April 2026 demonstrated that third-party AI risk is not theoretical. Every regulated enterprise needs the ability to assess, monitor, and produce evidence of vendor AI risk on its own timetable, not the vendor’s.
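As a sketch of what "on its own timetable" can mean in practice, the record below tracks a vendor's AI data posture and flags when a unilateral policy change or a stale review should trigger escalation. The fields, the 90-day threshold, and the escalation rule are assumptions for illustration, not a regulatory requirement.

# Illustrative third-party AI risk record. Fields, the 90-day review window,
# and the escalation rule are assumptions, not a regulatory requirement.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VendorAIRiskRecord:
    vendor: str
    service: str                                    # the AI capability consumed
    data_shared: list[str] = field(default_factory=list)
    trains_on_customer_data: bool = False           # flips on a unilateral policy change
    dora_material: bool = False                     # would a change be a material ICT risk event?
    last_policy_review: date | None = None

def needs_escalation(record: VendorAIRiskRecord, today: date) -> bool:
    """Escalate if the vendor now trains on customer data, or a material vendor's review is stale."""
    stale = (record.last_policy_review is None
             or (today - record.last_policy_review).days > 90)
    return record.trains_on_customer_data or (record.dora_material and stale)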

    These five capabilities are the architectural answer regardless of whether the trilogue resolves in May, June, or never. They are the answer DORA already requires, the answer OSFI is moving toward, the answer the FINOS AIGF codifies, and the answer the EU AI Act specifies for every scenario in which it eventually applies.


    Who actually benefits from the delay

    It is worth being specific about this, because the published commentary has been imprecise.

    The delay does not benefit the institutions that have been investing in governance infrastructure. They get to ship the same architecture either way, and they accumulate operational maturity that becomes a competitive advantage in audits, in procurement, in partnership conversations with regulators, and in the early-mover credibility that increasingly matters in regulated industries.

    The delay benefits, narrowly, the institutions that have not started — and only on the condition that the trilogue eventually succeeds, that the Commission does not invoke its early-termination clause, and that no high-profile incident accelerates enforcement. Each of those conditions is uncertain. The compound probability is significantly less than 100%.

    For institutions in that second category, the rational response to today’s failed trilogue is not relief. It is to start the governance build now, on the assumption that the original August 2 deadline applies, and to treat any eventual delay as a margin of safety rather than a license to pause.


    The infrastructure problem is the same problem

    The AI Act high-risk obligations describe an evidence architecture: continuous, machine-readable, regulator-ready evidence that AI systems operate within their authorized scope, with human oversight, with documented data governance, and with auditable accuracy and robustness.

    That description does not change if the deadline moves. The evidence architecture required for August 2, 2026 is the same evidence architecture required for December 2, 2027, for the FINOS AIGF v2.0, for DORA, for OSFI E-23, for the FDA’s expectations on AI-enabled medical devices, and for HIPAA’s security rule as it applies to clinical AI.

    The infrastructure problem is not the deadline. The deadline is a forcing function for the infrastructure problem.

    The institutions that solve the infrastructure problem are deadline-agnostic. The institutions that wait for the deadline to clarify discover, late, that the deadline was never the binding constraint.


    What we are building

At iTmethods, this is the infrastructure problem we are building the Fortress Family — Reign, Forge, and BioCompute — to solve.

    Reign is the AI governance and runtime enforcement layer. The AI Gateway evaluates policy in the operational path of every model invocation and every tool call. The Evidence Engine produces the regulator-grade artifacts that DORA, the FINOS AIGF v2.0, and the EU AI Act high-risk requirements all specify. Foundation in production today; full agentic-runtime coverage and the dual-regulator evidence architecture in active development.

    Forge is the managed infrastructure layer. The same operating model that runs Fluxnova for business process orchestration runs AI workloads under governance. One SLA. One evidence pane. Reign-aligned by default.

    BioCompute extends the same pattern into sovereign AI for regulated life sciences, where the FDA, EMA, HIPAA, and the new RAPID coverage pathway are imposing the same evidence requirements on AI-enabled diagnostics that the AI Act imposes on financial services.

    We are deadline-agnostic by design. The platform is built to satisfy whichever deadline lands, in whichever jurisdiction, on whichever date the political process produces. We are honest about the building-versus-available distinction because regulators will be honest about it too.


    The bottom line

    The EU AI Act trilogue failed today. It may fail again. It may eventually succeed and the high-risk deadline may move to December 2, 2027. None of that changes the infrastructure problem.

    DORA is live. OSFI is live. The FINOS AIGF v2.0 is the procurement standard the institutions that wrote it are now requiring. The Commission retains the power to end any pause at any time. Governance is a 12-month build. The math does not work backwards from a deferred date.

    Build now, for the deadline that exists today, with the architecture that satisfies the deadlines you can foresee tomorrow. The institutions that do are the ones that look prepared whether the date is August 2, 2026 or December 2, 2027 — and they are the ones who will be ready when the next forcing function arrives, which it will, on a timeline no trilogue controls.

    96 days. Maybe 600. The answer is the same.


    Paul Goldman is CEO of iTmethods and creator of the Fortress Family — Reign, Forge, and BioCompute — the trust layer for enterprise AI. He has been building managed infrastructure for regulated enterprises for 21 years and writes weekly on AI governance and what regulated enterprises need to build safely in the agentic era.

    Reign is the AI governance and runtime enforcement layer. The Evidence Engine produces regulator-grade compliance artifacts mapped to the FINOS AIGF v2.0. Forge operates Fluxnova and AI workloads on managed infrastructure. BioCompute extends sovereign AI to regulated life sciences. Enterprise AI. Governed. Learn more at itmethods.com.


    Previously in this series: Vendor AI Data Governance Problem · Why the AI Governance Stack Was Built for the Wrong Problem · 114 Days

Paul Goldman · CEO, iTmethods
Creator of the Fortress platform for AI governance in regulated industries. Previously published "MCP Is Exploding. Your Governance Isn't Ready."
