Your auditors are asking about AI. Reign makes the answer auditable.
Policy by default. Evidence by design. Enforced at runtime.
Reign maps every AI control to the COSO and SOX frameworks your auditors expect — with continuous evidence, submission-ready packets, and CAVR-aligned design. Built for audit committees, internal audit, and external auditors.
The audit committee problem
AI is in scope. The audit infrastructure isn't.
The audit committee carries oversight for every control in the financial-reporting chain — and AI is now in that chain. Models drive credit decisions, fraud detection, vendor onboarding, expense classification, and revenue recognition. SOX 404(b) requires those controls to be effective and tested. COSO ERM requires the underlying AI risk to be assessed and reported. Most enterprises are deploying AI faster than their audit infrastructure can adapt.
Logs ≠ evidence. Application logs show what happened. Auditors need to test that controls were designed and operating effectively. Most AI logs cannot satisfy a SOX walkthrough.
AI deploys in days. Audit prep takes weeks. The cycle mismatch creates audit gaps. Manual evidence compilation can't keep pace with the speed of AI deployment.
Sample testing fails for AI. Auditors typically test a sample of transactions; AI runs thousands of transactions per day. Population-level evidence is the only audit-defensible approach.
Materiality thresholds shift. An AI-driven decision that is non-material on day 1 can become material as the system scales. Without continuous monitoring tied to materiality, the audit committee is flying blind.
COSO and SOX, mapped to Reign
Every framework citation, every component, every artifact.
Reign's four-component spine maps directly to the framework citations your audit committee cares about. Each row below is a real audit question — and the Reign component that delivers the evidence to answer it.
| Framework / Citation | Audit Question | Reign Component | Evidence Reign Delivers |
|---|---|---|---|
| SOX 404(b) — Internal Controls over Financial Reporting | Are AI control activities effective at the assertion level? | AI Gateway | Real-time policy enforcement at the gateway. Every AI call gated, logged, and tied to identity. Tested by inspection of policy logs and selective control walkthroughs. |
| COSO ERM — Risk Assessment | Has AI risk been identified, assessed, and tracked over time? | Model Lifecycle | Approved-model registry with risk classification per model. Drift detection. PCCP-aligned change packets. Every model change logged with reviewer and impact assessment. |
| COSO ERM — Information & Communication / Monitoring | Is AI activity transparent, monitored, and reported to the audit committee? | Evidence Engine | Continuous audit chain across LLM and agent layers. Anomaly detection with materiality thresholds. Board-ready dashboards. Real-time alerts for findings that meet escalation criteria. |
| External Audit (Big 4) / Regulator-Facing Submissions | Is the evidence submission-ready for our external auditors and regulators? | Regulator Packs | Hash-verified, framework-mapped artifacts. SOX 404(b), COSO ERM, EU AI Act Art. 11, DORA, AIGF, ISO 42001 — exportable in the formats your auditors and regulators expect. |
CAVR — the four assertions, AI-native
Completeness · Accuracy · Validity · Restricted Access.
CAVR is the foundation auditors use to test IT general controls. For AI, CAVR is harder — but Reign is built so each assertion is testable by design.
Completeness
Did the system capture every relevant transaction?
Every model call, every agent action, every tool invocation, every prompt — captured by the AI Gateway and logged to the Evidence Engine. The audit chain has no gaps because every AI action goes through Reign.
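As a rough illustration of how completeness can be made testable rather than asserted, an auditor can check that an evidence chain has no gaps by verifying that entry sequence numbers are contiguous. This is a minimal sketch under assumed field names (`seq` is illustrative, not Reign's actual schema):

```python
def chain_is_complete(entries: list[dict]) -> bool:
    # Completeness check: sequence numbers must be contiguous.
    # A missing number means an AI action escaped the audit chain.
    if not entries:
        return True
    seqs = sorted(e["seq"] for e in entries)
    return seqs == list(range(seqs[0], seqs[0] + len(seqs)))
```

Population-level tests like this replace sampling: the auditor verifies every entry, not a subset.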
Accuracy
Is the captured evidence accurate, complete, and immutable?
Cryptographic hash chains (SHA-256) with tamper-evident archives. ALCOA+ data integrity attributes (Attributable, Legible, Contemporaneous, Original, Accurate, plus Complete, Consistent, Enduring, Available). Auditor-verifiable in under five minutes.
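To show why a SHA-256 hash chain is tamper-evident, here is a minimal sketch of the general technique (the record layout and genesis value are illustrative assumptions, not Reign's actual format): each entry's hash covers both the record and the previous hash, so altering any record invalidates every hash after it.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    # Chain each evidence entry to the one before it by hashing
    # the canonical record together with the previous hash.
    payload = json.dumps(record, sort_keys=True).encode() + prev_hash.encode()
    return hashlib.sha256(payload).hexdigest()

def verify_chain(entries: list[dict]) -> bool:
    # Recompute every hash from scratch; any tampered or
    # reordered entry breaks the chain.
    prev = "0" * 64  # illustrative genesis value
    for entry in entries:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True
```

Verification needs only the entries themselves, which is what makes the chain auditor-verifiable in minutes.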
Validity
Were AI actions authorized by policy at the time they occurred?
AI Gateway enforces policy at the edge before any AI action. Policy decisions are logged with every call. Unauthorized actions are blocked, not just logged after the fact. Every approval — and every denial — is part of the audit chain.
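A toy sketch of the pattern — evaluate policy before the action runs, and log approvals and denials alike (identities, actions, and the policy structure here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    allowed: bool
    policy_id: str

audit_log: list[dict] = []

def gate(identity: str, action: str, policies: dict[str, set]) -> Decision:
    # Evaluate policy BEFORE the AI action executes; unauthorized
    # actions are blocked, not logged after the fact.
    allowed = action in policies.get(identity, set())
    decision = Decision(allowed, policy_id=f"pol-{identity}-{action}")
    # Both approvals and denials join the audit chain.
    audit_log.append({"identity": identity, "action": action,
                      "allowed": allowed, "policy_id": decision.policy_id})
    return decision
```

The key design point is that the denial itself is evidence: an auditor can test that blocked actions were blocked.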
Restricted Access
Was access to AI systems and data properly restricted, controlled, and segregated?
Identity-bound to every AI call. Role-based access control at the Gateway. SSO + SAML + SCIM. Segregation of duties enforced — the same person cannot both develop and approve a model change. Every access decision logged and auditable.
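The two checks above — deny-by-default RBAC and segregation of duties — reduce to simple, testable predicates. A minimal sketch (role names and change fields are illustrative assumptions):

```python
ROLE_PERMISSIONS = {
    "developer": {"submit_change"},
    "approver": {"approve_change"},
}

def can(role: str, action: str) -> bool:
    # Role-based access control: deny by default.
    return action in ROLE_PERMISSIONS.get(role, set())

def sod_violation(change: dict) -> bool:
    # Segregation of duties: the same identity cannot both
    # develop and approve a model change.
    return change["developed_by"] == change["approved_by"]
```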
For your role in the audit chain
Built for the four hands that touch AI audit.
Audit Committee Chair
Confidence that AI is in oversight scope and that management has effective controls. Reign delivers board-ready evidence on demand.
Chief Audit Executive (CAE)
An AI control framework you can audit against. Reign maps SOX 404, COSO, and CAVR to specific Reign components — your audit plan writes itself.
Internal Audit
Testable, reproducible evidence for IT general controls and AI-specific controls. Reign's continuous evidence chain replaces sample-based testing with population-level evidence.
External Auditor (Big 4)
Submission-ready, framework-mapped artifacts in the formats you ingest. Regulator Packs export hash-verified evidence packages tied to the assertions you test.
Mapped to your audit cadence
Continuous evidence, on every audit timeline.
Quarterly Board / Audit Committee Reviews
Live evidence pull from Evidence Engine. Real-time dashboards showing AI policy violations, model changes, drift events, and material incidents — pulled directly from the audit chain. No manual prep window.
Annual SOX / COSO Audit
Submission-ready Regulator Packs. Framework-mapped, hash-verified artifacts. Walkthrough-ready evidence for control activities. Test-of-design and test-of-operating-effectiveness packets generated on demand.
Real-Time Materiality Alerts
AI Gateway escalation triggers. When an AI event meets materiality thresholds — set by you — the AI Gateway flags it immediately to the audit committee, CAE, and CRO with full context and the policy that was triggered.
Explore the four components
The four components that make AI auditable.
AI Gateway
MCP-native gateway. Policy at the edge. Validity and Restricted Access by design.
Model Lifecycle
Approved-model registry. Change control. The COSO Risk Assessment layer.
Evidence Engine
Continuous audit chain. Completeness and Accuracy by construction.
Regulator Packs
Framework-mapped artifacts. Submission-ready for SOX, COSO, EU AI Act, and more.
Frequently asked
