Third-Party AI Risk Management
Third-party AI risk is the compliance exposure regulated institutions carry for AI systems they did not build: AI embedded in SaaS platforms, AI services consumed from cloud providers, and AI capabilities added to tools across the operational stack. Under the EU's Digital Operational Resilience Act (DORA), the EU AI Act, and the FINOS AI Governance Framework (AIGF), managing that risk is not optional: institutions must demonstrate that third-party AI data controls are enforced, that model provenance is verifiable, and that material changes to vendor AI practices are tracked and governed.
In the past 18 months, third-party AI risk has expanded sharply: SaaS vendors across productivity, collaboration, developer tools, CRM, and communications have updated their terms to include customer content in AI model training. These changes often arrive with default opt-in settings and notification timelines that do not match enterprise compliance review cycles.
Regulatory Anchors
Four overlapping frameworks define third-party AI risk obligations:
- DORA — third-party ICT risk management, material-change tracking, documented risk assessments (in force since January 17, 2025)
- EU AI Act — Article 16 provider obligations and Article 26 deployer obligations extending to how third-party AI was trained
- FINOS AIGF — model provenance, training data lineage, cross-tenant signal leakage, and six agentic AI risks in v2.0
- OSFI E-23 / B-13 — Canadian equivalents for model risk and technology / cyber risk
What Third-Party AI Risk Requires Operationally
Satisfying third-party AI risk obligations is not merely a contractual exercise. It requires continuous, evidence-producing controls:
- Inventory of every third-party AI system used across the organization, including AI features in non-AI SaaS (a minimal inventory record is sketched after this list)
- Documented risk assessment per vendor mapped to AIGF risk categories
- Material-change tracking — a trigger pipeline for vendor policy updates
- Contractual clauses covering training data provenance, audit rights, exit terms, and regulatory cooperation
- Runtime evidence that third-party AI controls are enforced — not just declared
- CC4AI-style machine-readable attestations from vendors wherever available
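As a concrete illustration of what such an inventory could capture, here is a minimal Python sketch of one record. The field names and AIGF category labels are assumptions for illustration; the FINOS AIGF does not prescribe this schema.

```python
# Illustrative sketch of a third-party AI inventory record.
# Field names and category labels are assumptions, not a published schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class AIGFRiskCategory(Enum):
    """Hypothetical labels for FINOS AIGF risk categories."""
    MODEL_PROVENANCE = "model-provenance"
    TRAINING_DATA_LINEAGE = "training-data-lineage"
    CROSS_TENANT_LEAKAGE = "cross-tenant-signal-leakage"


@dataclass
class ThirdPartyAISystem:
    vendor: str
    product: str
    ai_capability: str                  # e.g. "email draft generation"
    processes_customer_content: bool    # does customer data reach the vendor's AI?
    training_opt_in_default: bool       # is training on customer content on by default?
    risk_categories: list[AIGFRiskCategory] = field(default_factory=list)
    attestation_ref: str | None = None  # pointer to a machine-readable attestation, if any
    terms_last_reviewed: date | None = None


# One inventory entry; an AI feature inside a non-AI SaaS product gets its own record.
entry = ThirdPartyAISystem(
    vendor="ExampleVendor",             # hypothetical vendor
    product="ExampleCRM",
    ai_capability="email draft generation",
    processes_customer_content=True,
    training_opt_in_default=True,
    risk_categories=[AIGFRiskCategory.TRAINING_DATA_LINEAGE,
                     AIGFRiskCategory.CROSS_TENANT_LEAKAGE],
    terms_last_reviewed=date(2025, 1, 17),
)
```

Treating each AI feature in an otherwise non-AI product as its own entry is what makes the inventory exhaustive rather than nominal.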
The Vendor AI Data Pattern
The current wave of default opt-in AI training across SaaS platforms is a third-party AI risk event at scale, one that touches every regulated institution. Each vendor policy change is a material change to the ICT service relationship that DORA requires the institution to assess, document, and govern, yet most third-party risk programs today are not operationalized to catch these changes at the pace they occur.
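A material-change trigger can be as simple as hashing each vendor's published AI terms on a schedule and opening a documented risk assessment when the hash moves. The sketch below assumes a plain public terms URL and a hypothetical open_risk_assessment helper; a production pipeline would add authentication, retries, diffing, and ticketing integration.

```python
# Minimal sketch of a material-change trigger for vendor policy updates.
# The URL handling and the risk-assessment hook are placeholders.
import hashlib
import urllib.request


def fetch_terms(url: str) -> bytes:
    # Placeholder fetch; real pipelines need auth, retries, and rate limiting.
    with urllib.request.urlopen(url) as resp:
        return resp.read()


def check_for_material_change(url: str, last_hash: str) -> str:
    """Return the current hash; trigger an assessment if it differs from the stored one."""
    current = hashlib.sha256(fetch_terms(url)).hexdigest()
    if current != last_hash:
        # DORA treats this as a material change to the ICT relationship:
        # open a documented risk assessment and preserve both versions as evidence.
        open_risk_assessment(url, old=last_hash, new=current)
    return current


def open_risk_assessment(url: str, old: str, new: str) -> None:
    # Hypothetical hook: in practice this would create a ticket and an evidence record.
    print(f"material change detected at {url}: {old[:8]} -> {new[:8]}")
```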
How Reign Addresses Third-Party AI Risk
Reign's Evidence Engine automates FINOS AIGF-aligned compliance evidence across third-party AI vendor practices — mapping vendor attestations, contract terms, and operational signals into a continuous evidence architecture. Material changes trigger documented risk assessments automatically. Regulator-facing reports are produced as a byproduct of operation.
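For orientation only, the sketch below shows the general shape of a continuous evidence record: an attestation reference, a contract clause, and a runtime signal bundled into one timestamped, regulator-readable artifact. All names and values are illustrative; this is not Reign's implementation or API.

```python
# Conceptual sketch of a continuous evidence record. Illustrative only.
import json
from datetime import datetime, timezone


def evidence_record(vendor: str, control: str, signal: dict) -> str:
    """Bundle an attestation reference, contract clause, and runtime signal
    into one timestamped artifact suitable for regulator-facing reporting."""
    record = {
        "vendor": vendor,
        "control": control,  # e.g. an AIGF-aligned control identifier (hypothetical)
        "attestation_ref": signal.get("attestation_ref"),
        "contract_clause": signal.get("contract_clause"),
        "runtime_signal": signal.get("runtime_signal"),
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)


print(evidence_record(
    "ExampleVendor",                       # hypothetical vendor
    "AIGF-training-data-lineage",          # hypothetical control ID
    {"attestation_ref": "cc4ai/example.json",
     "contract_clause": "§12.3 training-data provenance",
     "runtime_signal": "egress logs show no customer content sent to vendor AI endpoint"},
))
```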
