REIGN · MODEL LIFECYCLE
Validated. Versioned. Change-controlled.
Policy by default. Evidence by design. Enforced at runtime.
Reign Model Lifecycle is the model-governance layer of the Reign platform — approved-model registry, version control, predetermined change-control plans, drift detection, and validation harnesses. Built for FDA PCCP, EU AI Act Article 15, and ISO 42001 Section 9 from the architecture up.
Deployment
Same governance layer across every tier.
Regulated industries do not get to treat model change-control as paperwork
AI models do not stand still. Versions change. Underlying data drifts. Prompts get rewritten. Fine-tunes retrain. Each change shifts what the model actually does at runtime — often in ways that would invalidate a regulatory submission, break a customer-facing contract, or fail a risk-management audit. Today, most enterprise AI deployments treat model change-control as a documentation exercise after the fact. Regulated industries do not get that option.
- Models drift in production; performance and behavior shift over time without explicit signal.
- Versions of the same model behave differently — what shipped in Q1 is not what is running in Q4.
- Changes are made by data scientists, MLOps engineers, prompt engineers, fine-tune teams, and tool authors — all touching different parts of the same effective model.
- Regulators (FDA PCCP, EU AI Act Art. 15, ISO 42001 §9.1) require predetermined change-control plans and continuous performance evaluation. Most platforms treat this as paperwork.
- When an audit asks “what version was running on the day of decision?”, the answer often takes weeks of log forensics.
Reign Model Lifecycle provides
Approved-model registry
Registry of models authorised for production use, with attestation, provenance, and approval status.
Version control
Cryptographic versioning — every model, prompt template, fine-tune, and configuration captured with integrity.
PCCP-aligned change control
Predetermined Change Control Plan packets, structured for FDA PCCP, EU AI Act Art. 15, and ISO 42001 alignment.
Drift detection
Continuous monitoring of model behavior over time — distribution drift, performance drift, fairness drift, and cost drift.
Validation harness
Challenge sets and replay testing — every change can be re-evaluated against a frozen test corpus before promotion.
Change packets
Every change is captured as a change packet with reviewer, rationale, and impact assessment, and links into the Evidence Engine audit chain.
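Reign's internal implementation is not public, but the cryptographic-versioning idea above can be sketched in a few lines: hash every input that defines a model's runtime behaviour (weights, prompt template, configuration), and chain each version to its parent so history is tamper-evident. All names below are illustrative, not Reign's API.

```python
import hashlib
import json

def model_version_hash(artifact_bytes: bytes, prompt_template: str,
                       config: dict, parent_hash: str = "") -> str:
    """Fingerprint one model configuration.

    Any change to weights, prompt, or config yields a new hash, and
    including the parent hash chains versions into a tamper-evident
    sequence, much like a git commit.
    """
    h = hashlib.sha256()
    h.update(parent_hash.encode())
    h.update(hashlib.sha256(artifact_bytes).digest())
    h.update(prompt_template.encode())
    # Canonical JSON so key order cannot change the hash.
    h.update(json.dumps(config, sort_keys=True).encode())
    return h.hexdigest()

v1 = model_version_hash(b"weights-v1", "You are a careful assistant.",
                        {"temperature": 0.2})
v2 = model_version_hash(b"weights-v1", "You are a careful assistant.",
                        {"temperature": 0.3}, parent_hash=v1)
assert v1 != v2  # a config tweak alone produces a distinct version
```

Note that a prompt edit or a single config key is enough to mint a new version, which is exactly the point: "the same model" with a different temperature is not the same effective model.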
The registry of record for what models are allowed to run
Model Lifecycle is the registry of record for what models are allowed to run. The AI Gateway enforces this at the policy edge — only models in the approved registry are eligible for production AI calls. Every change packet flows into the Evidence Engine, which captures the change, the reviewer, and the impact assessment as part of the audit chain. Data Vault links data versions to model versions, so regulators can trace any output back to the exact model and exact data in scope at the moment of decision. Evidence Packs export Model Lifecycle artifacts as part of the regulator-ready submission packet.
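At the policy edge, the registry-of-record contract above reduces to an eligibility check before any production AI call. This is a hypothetical sketch of that check, not Reign's API; the model names and statuses are invented for illustration.

```python
# Hypothetical approved-model registry: (model, version) -> approval status.
APPROVED_REGISTRY = {
    ("gpt-4o", "2024-08-06"): "approved",
    ("llama-3.1-70b", "instruct-v2"): "approved",
    ("experimental-ft", "rc1"): "pending-review",
}

def gateway_check(model: str, version: str) -> bool:
    """Only a (model, version) pair explicitly approved in the registry
    is eligible to serve production traffic; unknown or pending
    versions are refused at the gateway."""
    return APPROVED_REGISTRY.get((model, version)) == "approved"

assert gateway_check("gpt-4o", "2024-08-06")
assert not gateway_check("experimental-ft", "rc1")   # pending, not eligible
assert not gateway_check("gpt-4o", "unknown-build")  # unregistered version
```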
Mapped to the frameworks regulators expect
FDA PCCP
Predetermined Change Control Plan for AI/ML-enabled SaMD.
EU AI Act Art. 15
Accuracy, robustness, and cybersecurity over the full lifecycle of high-risk AI.
EU AI Act Art. 9
Risk management system across iteration.
ISO 42001 §9.1
Performance evaluation and ongoing monitoring.
FDA GMLP
Good Machine Learning Practice — Principles 1, 8, 9, 10.
21 CFR Part 11
Electronic records integrity, audit trail, change control (life sciences).
FINOS AIGF v2.0
Model risk and lifecycle controls in financial services AI governance.
NIST AI RMF
Manage and Measure functions across the AI lifecycle.
Frequently Asked Questions
What goes into a change packet?
Change description, change rationale, impact assessment against frozen test sets, reviewer attestations, the model and data versions involved, the framework citations the change implicates, and the cryptographic hash linking it to the audit chain.
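Reign's change-packet schema is not published; purely as an illustration of the fields listed above, a packet and its audit-chain link could be modelled like this (all field names and values are hypothetical):

```python
from dataclasses import dataclass, field, asdict
from typing import List
import hashlib
import json

@dataclass
class ChangePacket:
    description: str                 # what changed
    rationale: str                   # why it changed
    impact_assessment: str           # result vs. frozen test sets
    reviewer: str                    # who attested
    model_version: str               # exact model version touched
    data_version: str                # exact data version in scope
    framework_citations: List[str] = field(default_factory=list)

    def chain_hash(self, prev_hash: str) -> str:
        """Hash the packet contents together with the previous entry's
        hash, linking this packet into a tamper-evident audit chain."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

packet = ChangePacket(
    description="Raise temperature 0.2 -> 0.3 for summarisation prompt",
    rationale="Reduce truncated summaries reported by QA",
    impact_assessment="No regression on frozen corpus v7 (512 cases)",
    reviewer="j.doe",
    model_version="sha256:ab12...",
    data_version="corpus-v7",
    framework_citations=["FDA PCCP", "ISO 42001 9.1"],
)
```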
Can we register models from any provider?
Yes. The approved-model registry supports any model the customer authorises — commercial foundation models (OpenAI, Anthropic, Google, Cohere, Mistral), open-source and open-weight models (Llama, Mistral, DeepSeek, Qwen), and specialised biomedical or vertical models (AlphaFold 3, ESM3, Med-Gemini, NVIDIA BioNeMo, and others).
Do we need to be subject to FDA PCCP to use Model Lifecycle?
No. PCCP is the strictest framework Model Lifecycle is built around, but the same change-control discipline applies to any regulated environment — DORA-compliant financial services, ISO 42001-certified AI management systems, or internal enterprise risk-management programs.
How does drift detection work?
Continuous monitoring of model outputs against frozen baselines and live distributions. Customers configure drift thresholds and alerting policy. Detected drift triggers a change-packet workflow.
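Distribution drift between a frozen baseline and live production outputs is commonly quantified with the Population Stability Index (PSI). The sketch below uses the conventional 0.1 / 0.25 alert thresholds as illustrative defaults; Reign's actual drift metrics and configured thresholds are not public.

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a frozen baseline sample and
    a live production sample of a model output metric."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Small epsilon avoids log(0) for empty bins.
        return [(c + 1e-6) / (len(sample) + bins * 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # frozen reference sample
shifted = [0.1 * i + 3.0 for i in range(100)]   # drifted live sample
assert psi(baseline, baseline) < 0.1            # stable: no alert
assert psi(baseline, shifted) > 0.25            # drift: open a change packet
```

In a deployment like the one described above, crossing the upper threshold would not just raise an alert — it would open a change-packet workflow so the response to drift is itself change-controlled.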
Does Model Lifecycle work with our existing MLOps stack?
Yes. Reign Model Lifecycle is designed to complement MLflow, Vertex AI, Amazon SageMaker, Azure ML, and other MLOps platforms. We surface the governance layer they do not.
When is Reign Model Lifecycle available?
Reign Model Lifecycle is available today through the Reign Design Partner Program, with broader release scheduled for Q4 2026. Design partners shape the regulatory mappings, the change-packet schema, and the validation harness defaults that ship at GA.
