REIGN · SPINE · MODEL RISK VALIDATION
Validated. Versioned. Change-controlled.
Continuous Observability + Continuous Assurance. The Trust Layer for Enterprise AI.
Reign Model Risk Validation is component 2 of the Reign Spine — SR 11-7, OSFI E-23, and FDA PCCP fluent. Approved-model registry, cryptographic version control, predetermined change-control plans, drift detection, and validation harnesses.
Three Lines of Defense
Model Risk Validation is the 2nd Line of Defense — independent challenge to models in production. AI Gateway is the 1st Line · Audit Ledger (CAVR) is the 3rd Line · Assurance Packs provide independent assurance.
Deployment
Same governance layer across every tier.
Regulated industries do not get to treat model change-control as paperwork
AI models do not stand still. Versions change. Underlying data drifts. Prompts get rewritten. Fine-tunes retrain. Each change shifts what the model actually does at runtime — often in ways that would invalidate a regulatory submission, break a customer-facing contract, or fail a risk-management audit. Today, most enterprise AI deployments treat model change-control as a documentation exercise after the fact. Regulated industries do not get that option.
- Models drift in production; performance and behavior shift over time without explicit signal.
- Versions of the same model behave differently — what shipped earlier is not what is running today.
- Changes are made by data scientists, MLOps engineers, prompt engineers, fine-tune teams, and tool authors — all touching different parts of the same effective model.
- Regulators (SR 11-7, OSFI E-23, FDA PCCP, EU AI Act Art. 15, ISO 42001 §9.1) require predetermined change-control plans and continuous performance evaluation. Most platforms treat this as paperwork.
- When an audit asks “what version was running on the day of decision?”, the answer often takes weeks of log forensics.
Reign Model Risk Validation provides
Approved-model registry
Registry of models authorised for production use, with attestation, provenance, and approval status.
Version control
Cryptographic versioning — every model, prompt template, fine-tune, and configuration captured with integrity.
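One way to picture cryptographic versioning: fold every component that defines the effective model — weights, prompt template, configuration — into a single digest, so any change to any component yields a new, verifiable version. This is a minimal illustrative sketch, not the actual Reign implementation; all names are hypothetical.

```python
import hashlib
import json

def version_fingerprint(weights: bytes, prompt_template: str, config: dict) -> str:
    """Hash every component of the effective model into one SHA-256 digest."""
    h = hashlib.sha256()
    h.update(weights)
    h.update(prompt_template.encode("utf-8"))
    # Canonical JSON: key order must never change the fingerprint.
    h.update(json.dumps(config, sort_keys=True).encode("utf-8"))
    return h.hexdigest()

v1 = version_fingerprint(b"...weights...", "Answer concisely.", {"temperature": 0.2})
v2 = version_fingerprint(b"...weights...", "Answer concisely.", {"temperature": 0.3})
assert v1 != v2  # even a config tweak is a new version
```

The point of hashing the prompt template and configuration alongside the weights is that "same model" becomes a verifiable claim rather than a label someone typed into a spreadsheet.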
PCCP-aligned change control
Predetermined Change Control Plan packets, structured for SR 11-7, OSFI E-23, FDA PCCP, EU AI Act Art. 15, and ISO 42001 alignment.
Drift detection
Continuous monitoring of model behavior over time — distribution drift, performance drift, fairness drift, and cost drift.
Validation harness
Challenge sets and replay testing — every change can be re-evaluated against a frozen test corpus before promotion.
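The replay idea above can be sketched as a promotion gate: re-run the candidate over a frozen challenge set and block the change if performance regresses past a tolerance. A toy sketch with illustrative names — the real harness, corpus format, and thresholds are product-specific.

```python
def replay_gate(model, frozen_cases, baseline_accuracy, max_regression=0.01):
    """Re-run a candidate model over the frozen challenge set; return
    (promote?, accuracy). Promotion is blocked if accuracy falls more
    than max_regression below the recorded baseline."""
    passed = sum(1 for case in frozen_cases if model(case["input"]) == case["expected"])
    accuracy = passed / len(frozen_cases)
    return accuracy >= baseline_accuracy - max_regression, accuracy

# Toy challenge set and model stand-ins for illustration.
cases = [{"input": x, "expected": x * 2} for x in range(10)]
ok, acc = replay_gate(lambda x: x * 2, cases, baseline_accuracy=1.0)
assert ok and acc == 1.0
```

Freezing the corpus is what makes the comparison meaningful: the baseline and the candidate are scored against exactly the same cases, so a drop is attributable to the change, not to the test set.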
Change packets
Every change is captured as a change packet with reviewer, rationale, and impact assessment, and links into the Audit Ledger (CAVR) chain.
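How a change packet can "link into" an audit chain: each packet's hash covers its contents plus the previous packet's hash, so the packets form an append-only chain in which tampering with any earlier packet is detectable. A minimal sketch under assumed field names, not the CAVR schema itself.

```python
import hashlib
import json

def seal_packet(packet: dict, prev_hash: str) -> dict:
    """Seal a change packet by hashing its fields together with the
    previous packet's hash, forming an append-only chain."""
    body = dict(packet, prev_hash=prev_hash)
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return dict(body, packet_hash=digest)

genesis = "0" * 64
p1 = seal_packet({
    "change": "raise temperature 0.2 -> 0.3",
    "rationale": "reduce refusal rate on benign queries",
    "reviewer": "j.doe",
    "impact": "replay gate passed, no regression on frozen set",
}, prev_hash=genesis)
p2 = seal_packet({
    "change": "prompt template v7",
    "rationale": "clarify scope instructions",
    "reviewer": "a.lee",
    "impact": "replay gate passed",
}, prev_hash=p1["packet_hash"])
```

Because `p2` commits to `p1`'s hash, rewriting `p1` after the fact would break every later packet — the same property that lets an auditor verify the chain end to end.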
The registry of record for what models are allowed to run
Model Risk Validation is the registry of record for what models are allowed to run. The AI Gateway enforces this at the policy edge — only models in the approved registry are eligible for production AI calls. Every change packet flows into the Audit Ledger (CAVR), which captures the change, the reviewer, and the impact assessment as part of the audit chain. Assurance Packs export Model Risk Validation artifacts as part of the regulator-ready submission packet.
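Enforcement at the policy edge reduces to a simple check: a production call is eligible only if its exact model and version appear in the registry with approved status. An illustrative sketch — the registry entries, statuses, and lookup keys here are hypothetical.

```python
# Hypothetical registry keyed by (model, version fingerprint).
APPROVED = {
    ("gpt-4o", "a1b2c3"): {"status": "approved"},
    ("llama-3-70b", "d4e5f6"): {"status": "retired"},
}

def eligible(model: str, version: str) -> bool:
    """Only an exact (model, version) pair with approved status may serve
    production traffic; unknown or retired versions are blocked."""
    entry = APPROVED.get((model, version))
    return entry is not None and entry["status"] == "approved"

assert eligible("gpt-4o", "a1b2c3")
assert not eligible("llama-3-70b", "d4e5f6")   # retired version blocked
assert not eligible("gpt-4o", "zzz999")        # unknown version blocked
```

Keying on the version fingerprint rather than the model name alone is what prevents the "what shipped earlier is not what is running today" failure mode described above.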
Mapped to the frameworks regulators expect
SR 11-7
Federal Reserve guidance on Model Risk Management for U.S. banks.
OSFI E-23
Canadian OSFI Enterprise-Wide Model Risk Management.
FDA PCCP
Predetermined Change Control Plan for AI/ML-enabled SaMD.
EU AI Act Art. 15
Accuracy, robustness, and cybersecurity over the full lifecycle of high-risk AI.
EU AI Act Art. 9
Continuous, iterative risk-management system across the full AI lifecycle.
ISO 42001 §9.1
Performance evaluation and ongoing monitoring.
FDA GMLP
Good Machine Learning Practice — Principles 1, 8, 9, 10.
21 CFR Part 11
Electronic records integrity, audit trail, change control (life sciences).
FINOS AIGF v2.0
Model risk and lifecycle controls in financial services AI governance.
NIST AI RMF
Manage and Measure functions across the AI lifecycle.
Frequently Asked Questions
What does a change packet contain?
Change description, change rationale, impact assessment against frozen test sets, reviewer attestations, the model and data versions involved, the framework citations the change implicates, and the cryptographic hash linking it to the audit chain.
Which models can the registry hold?
Yes, any of them. The approved-model registry supports any model the customer authorises — commercial foundation models (OpenAI, Anthropic, Google, Cohere, Mistral), open-source and open-weight models (Llama, Mistral, DeepSeek, Qwen), and specialised biomedical or vertical models (AlphaFold 3, ESM3, Med-Gemini, NVIDIA BioNeMo, and others).
Does this apply outside FDA-regulated products?
Yes. PCCP is the strictest framework Model Risk Validation is built around, but the same change-control discipline applies to any regulated environment — SR 11-7 / OSFI E-23 banking model risk, DORA-compliant financial services, ISO 42001 certified AI management systems, or internal enterprise risk-management programs.
How does drift detection work?
Continuous monitoring of model outputs against frozen baselines and live distributions. Customers configure drift thresholds and alerting policy. Detected drift triggers a change-packet workflow.
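One standard statistic for comparing a live distribution against a frozen baseline is the Population Stability Index (PSI). The sketch below is illustrative of the general technique, not the Reign drift engine; the 0.2 alert threshold is a common industry rule of thumb, not a product default.

```python
import math

def psi(baseline_counts, live_counts, eps=1e-6):
    """Population Stability Index between a frozen baseline histogram and
    a live-traffic histogram over the same bins. Higher = more drift."""
    b_total, l_total = sum(baseline_counts), sum(live_counts)
    score = 0.0
    for b, l in zip(baseline_counts, live_counts):
        p = max(b / b_total, eps)  # baseline bin proportion
        q = max(l / l_total, eps)  # live bin proportion
        score += (q - p) * math.log(q / p)
    return score

# Identical distributions score near zero; a heavily shifted one
# crosses the conventional 0.2 alert threshold.
assert psi([50, 30, 20], [50, 30, 20]) < 1e-9
assert psi([50, 30, 20], [20, 30, 50]) > 0.2
```

A drift check like this runs per metric — output distributions, accuracy, fairness slices, cost — and a threshold breach is what would open the change-packet workflow described above.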
Does it work alongside existing MLOps platforms?
Yes. Reign Model Risk Validation is designed to complement MLflow, Vertex AI, Amazon SageMaker, Azure ML, and other MLOps platforms. We surface the governance layer they do not provide.
How do I get access?
Reign Model Risk Validation is available by application through the Reign Design Partner Program. Design partners shape the regulatory mappings, the change-packet schema, and the validation harness defaults.
