Chapter 5 . Signal & Exposure
Quiet alerts are silent failures.
See everything before it becomes risk.
Ring 5 of RING:1000:2026 . Second-outermost protection layer
Edition v0.1 . Draft for working group review . Lead author: Derris Taylor . Working group masthead pending ratification
1 . The Opening Forensic
In late 2022 a multinational technology company received a Google Cloud invoice carrying a $72,000 line item from a single Cloud Function that had run for nine days. The function had been deployed to production by a junior engineer testing a webhook integration. It had a recursive call to itself, no rate limit, no execution ceiling, and no anomaly detector watching it. The function ran in production for nine days because no one was looking. The forensic report from the post-mortem identified the technical fix in seven minutes: add a max-instances limit, add a rate ceiling, fix the recursion. The report identified the structural fix in seven pages: there was no signal in the corporate ledger that an unbounded workload had appeared. The cost did not surface as an anomaly until the invoice landed. The invoice landed nine days late.
The federation includes this case in the Ring 5 corpus because it is the cleanest possible illustration of the doctrine. There was no breach. There was no malicious actor. There was no architectural failure of Ring 6: the function deployed through the proper pathway, the procurement channel was respected, the IP classification was correct, the dependency manifest held. Ring 6 was clean.
What failed was Ring 5. The institution had no real-time signal that a runaway workload had appeared in its estate. By the time the signal arrived, in the form of an invoice, the exposure had already compounded for nine days.
A practitioner reading this chapter and walking into their own organization will find a Ring 5 gap inside a week. The shape of the gap will vary. The doctrine that names it will not.
2 . The Doctrine
Ring 5 sits one layer in from Ring 6. The relationship between them is precise and the federation has been opinionated about it through three rounds of working group review.
Ring 6 governs what is possible to occur in the environment. Ring 5 governs what is visible when something does occur. The two rings are independent and both are necessary. A practitioner who has done Ring 6 well but not Ring 5 has an environment that has been hardened against the conditions for failure but cannot see when failure begins to form anyway. A practitioner who has done Ring 5 well but not Ring 6 has a vantage point from which to observe failure compounding without the architecture to prevent it.
The federation's preferred phrasing of the doctrine is the working group's, on the public record from the v0.7 review:
Signal latency is financial exposure. Every minute between the formation of an event and the institution's awareness of the event is a minute the exposure is compounding without governance.
The principle becomes a quantitative argument. Mean time to detect is a financial metric, not just an operational one. Reducing MTTD by a factor of ten reduces the average exposure window by a factor of ten, which under most operating models reduces the loss-given-event by a factor of ten as well. Ring 5 is the work of compressing MTTD across every signal class the institution generates.
Three principles run through this chapter.
Visibility is a financial function. The work of seeing is not separable from the work of governing.
Signals are not events. A signal is the institution's representation of an event. Signal quality is its own discipline, and signal quality is what makes the difference between data and noise.
Coverage gaps are exposure. A part of the estate that generates no signal is not a part of the estate that has zero exposure. It is a part of the estate where exposure is invisible.
3 . The Standard
Ten Ring 5 controls. Seven mandatory. Two recommended. One adaptive.
3.1 Asset Discovery & Continuous Inventory
Category: Detection. Enforcement: Mandatory.
Automated, real-time discovery of all cloud resources, SaaS subscriptions, data stores, and financial commitments. The control is the foundation of Ring 5: an institution cannot generate signal on assets it has not discovered.
Discovery in this control is continuous, not scheduled. A scheduled scan introduces a discovery latency equal to the scan interval. A continuous discovery posture keeps the latency at minutes or below. The federation's reference implementation runs against the cloud provider APIs, the SaaS portfolio scanners, the data store catalogs, and the procurement registry at intervals of no more than fifteen minutes.
The boundary condition is asset definition. Practitioners often discover what they classify as resources and miss what they classify as configuration. The federation's standard requires discovery of resources, configurations, identities, contracts, and commitments. Five surfaces. Each one continuously scanned.
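The five-surface coverage requirement can be made concrete. The sketch below is illustrative, not a federation reference implementation: the surface names follow the standard's list, but the data shapes and the `discovery_coverage` helper are assumptions.

```python
# Sketch of a five-surface discovery coverage check, per the standard above.
# Surface names follow the chapter; sample data is illustrative.
SURFACES = ("resources", "configurations", "identities", "contracts", "commitments")

def discovery_coverage(discovered: dict, estate: dict) -> dict:
    """Per-surface coverage: discovered assets over known estate assets."""
    coverage = {}
    for surface in SURFACES:
        known = set(estate.get(surface, ()))
        found = set(discovered.get(surface, ()))
        coverage[surface] = 1.0 if not known else len(found & known) / len(known)
    return coverage

estate = {"resources": ["vm-1", "fn-webhook"], "contracts": ["c-acme"]}
discovered = {"resources": ["vm-1"], "contracts": ["c-acme"]}
report = discovery_coverage(discovered, estate)
# fn-webhook is the gap: resources coverage is 0.5, far below the 99 percent target
```

A real implementation would populate `estate` from a ground-truth reconciliation source (billing, procurement) rather than assume one exists.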
KPI. Asset discovery coverage. Target: 99 percent.
3.2 Cost Anomaly Detection
Category: Detection. Enforcement: Mandatory.
Real-time detection of spending anomalies, unexpected cost spikes, and budget threshold breaches. The control is the operating expression of the doctrine that signal latency is financial exposure.
Cost anomaly detection in the federation's standard does three things. It establishes a baseline per cost-center, per workload, per vendor. It defines anomaly thresholds calibrated to the volatility of each baseline. It triggers an alert with a defined response time when the threshold is breached. The response time is part of the control. An anomaly that triggers an alert no one reads is not a control. It is theater.
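A minimal sketch of the three steps, assuming a simple statistical baseline: the threshold `k` stands in for the volatility calibration the standard requires, since a noisy workload class widens its own band through its standard deviation. The function name and the z-score approach are illustrative, not the federation's reference detector.

```python
import statistics

def is_cost_anomaly(history, latest, k=3.0):
    """Flag `latest` if it sits more than k standard deviations above the
    per-class baseline. The stdev makes the threshold volatility-aware:
    a noisy class gets a wider band automatically."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest > mean  # flat baseline: any increase is anomalous
    return (latest - mean) / stdev > k

hourly_spend = [10, 11, 9, 10, 12, 10, 11, 10]  # steady hourly baseline
assert not is_cost_anomaly(hourly_spend, 12)  # within normal volatility
assert is_cost_anomaly(hourly_spend, 40)      # runaway workload fires the alert
```

The alert-routing and response-time half of the control is organizational wiring that no detector sketch can supply.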
The $72K Cloud Function incident from the opening forensic is a direct violation of this control. The function generated a spending anomaly within hours of deployment. There was no detector watching. The remediation is a detector wired against every workload class the institution operates, with thresholds calibrated to each class's volatility and response cadences that match the threshold severity.
KPI. Mean time to detect cost anomaly. Target: under 15 minutes.
3.3 Shadow IT and Shadow Spend Identification
Category: Detection. Enforcement: Mandatory.
Continuous discovery of unauthorized tools, services, and accounts. The control is the Ring 5 partner to Ring 6's procurement gates. Ring 6 makes shadow IT structurally hard to acquire. Ring 5 ensures that any shadow IT that does form is detected before it compounds.
The federation's reference implementation correlates four data sources: SSO logs (anomalous service-provider IDs), expense reports (unexpected vendor names), DNS resolution (queries to non-enrolled vendors), and corporate-card classification (merchants outside the approved category list). A shadow tool that survives all four detectors is the kind of frontier event the institution should treat as a Ring 6 architectural failure rather than a Ring 5 detection failure.
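The four-source correlation can be sketched as a set operation against the enrolled portfolio. This is a simplification of the reference implementation described above: real SSO logs, DNS telemetry, and card feeds need per-source normalization before vendor names can be compared, and the identifiers here are invented.

```python
def shadow_it_candidates(sso_ids, expense_vendors, dns_domains, card_merchants, enrolled):
    """Union of vendor names surfaced by any of the four detectors, minus the
    enrolled portfolio. A vendor hit by even one source is a shadow candidate."""
    seen = set(sso_ids) | set(expense_vendors) | set(dns_domains) | set(card_merchants)
    return sorted(seen - set(enrolled))

enrolled = {"okta", "slack"}
hits = shadow_it_candidates(
    sso_ids=["slack", "notion"],      # anomalous service-provider ID
    expense_vendors=["notion"],       # unexpected vendor on an expense report
    dns_domains=["airtable"],         # queries to a non-enrolled vendor
    card_merchants=[],                # nothing outside the approved categories
    enrolled=enrolled,
)
# notion surfaced in two sources, airtable in one; both are shadow candidates
```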
KPI. Shadow IT detection rate within 24 hours of emergence. Target: 95 percent.
3.4 Contract and Commitment Surveillance
Category: Tracking. Enforcement: Mandatory.
Real-time monitoring of all contractual obligations, renewal dates, and scope triggers. The control is the Ring 5 work of seeing the institution's contractual exposure as a live surface, not a quarterly review.
A contract carries five live surfaces. The renewal date. The scope expansion trigger. The commitment ceiling. The minimum-spend covenant. The penalty schedule. Each surface generates signal as the institution's usage approaches it. Ring 5 wires the signal so the institution sees the approach before the trigger fires.
A practitioner satisfying this control has a contract registry in which every active contract carries the five surface markers and is wired to consumption telemetry, so the practitioner knows in real time which contracts are approaching which thresholds. The practitioner does not discover a renewal four days before it auto-renews. The practitioner sees the approach ninety days out and decides deliberately.
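The renewal surface, the simplest of the five, can be sketched as a lead-window query over the registry. The contract names and the tuple shape are illustrative; a real registry would carry all five surface markers per contract and compare consumption telemetry against the other four the same way.

```python
from datetime import date, timedelta

def renewal_alerts(contracts, today, lead_days=90):
    """Contracts whose renewal date falls inside the lead window.
    Each contract is (name, renewal_date). The 90-day default matches
    the chapter's lead-time target."""
    horizon = today + timedelta(days=lead_days)
    return [name for name, renewal in contracts if today <= renewal <= horizon]

contracts = [("c-acme", date(2026, 4, 1)), ("c-globex", date(2026, 12, 1))]
due = renewal_alerts(contracts, today=date(2026, 2, 1))
# c-acme renews inside the 90-day window; c-globex does not
```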
KPI. Contract surface coverage and lead time on renewal events. Target: 100 percent of active contracts, 90-day lead time.
3.5 Risk Signal Classification
Category: Classification. Enforcement: Mandatory.
Automated classification of all financial signals by severity, domain, and required response time. The control is the discipline of taking raw signal and turning it into actionable intelligence at the speed of the operating cadence.
Signals without classification are noise. The federation's standard requires three classification dimensions: severity (the magnitude of exposure), domain (which ring and which workload class), and response time (how fast the institution must act). Classification produces a triage queue. Triage produces a routing decision. Routing reaches an actor. The actor acts.
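The three-dimension classification can be sketched as a severity band lookup that derives the required response time. The dollar bands and SLA minutes below are hypothetical placeholders, not the federation's published severity bands.

```python
# Hypothetical severity bands; the federation's published bands would replace these.
RESPONSE_SLA_MINUTES = {"critical": 15, "high": 60, "moderate": 480, "low": 1440}

def classify(signal):
    """Three dimensions: severity (from dollar exposure), domain (carried
    through), and required response time (derived from severity)."""
    exposure = signal["exposure_usd"]
    if exposure >= 50_000:
        severity = "critical"
    elif exposure >= 10_000:
        severity = "high"
    elif exposure >= 1_000:
        severity = "moderate"
    else:
        severity = "low"
    return {"severity": severity, "domain": signal["domain"],
            "respond_within_min": RESPONSE_SLA_MINUTES[severity]}

triaged = classify({"exposure_usd": 72_000, "domain": "cloud"})
# a $72K anomaly classifies as critical with a 15-minute response requirement
```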
A practitioner who has not classified is a practitioner whose alert fatigue has overwhelmed the operating cadence. The classification is the discipline that converts a stream of raw events into a finite queue of governance decisions.
KPI. Signal classification accuracy. Target: 90 percent true-positive rate at the published severity bands.
3.6 Exposure Quantification
Category: Measurement. Enforcement: Mandatory.
Real-time calculation of total financial exposure across all signals and risk vectors. The control is the work of producing a single, defended number that answers the question "what is our exposure right now."
Exposure quantification rolls up every active signal into an exposure register. The register carries each signal's severity, the calibrated dollar exposure per severity band, the confidence interval, and the time-to-resolution. The register is queryable in real time. The CFO who asks "what is our cloud exposure right now" receives a defensible number, not a backward-looking report.
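A minimal roll-up over such a register might look like the sketch below. The field names (`status`, `exposure_usd`, `age_minutes`) are assumptions; the register described above also carries severity and confidence intervals, omitted here for brevity.

```python
def exposure_rollup(register):
    """Aggregate dollar exposure across active signals, plus the register's
    freshness (age of the stalest active signal). This is the single number
    the CFO queries in real time."""
    active = [s for s in register if s["status"] == "active"]
    total = sum(s["exposure_usd"] for s in active)
    stalest = max((s["age_minutes"] for s in active), default=0)
    return {"total_exposure_usd": total, "active_signals": len(active),
            "register_freshness_min": stalest}

register = [
    {"status": "active", "exposure_usd": 72_000, "age_minutes": 12},
    {"status": "active", "exposure_usd": 4_500, "age_minutes": 45},
    {"status": "resolved", "exposure_usd": 9_000, "age_minutes": 300},
]
snapshot = exposure_rollup(register)  # freshness 45 min: inside the 1-hour target
```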
The federation's standard treats exposure quantification as the bridge between Ring 5 and the boardroom. Ring 5 is where the signal is generated. The exposure register is where the signal becomes a board-grade number.
KPI. Exposure register freshness. Target: under 1 hour from signal to register update.
3.7 Build and Release Artifact Monitoring
Category: Detection. Enforcement: Mandatory.
Every release artifact is scanned for anomalous file types and classified content before publication. The control is the Ring 5 partner to Ring 6's build environment governance.
Ring 6 governs what is structurally possible to release. Ring 5 generates signal on what is being released so that anomalies surface for review. The two are independent: an environment that has done Ring 6 well still benefits from Ring 5 monitoring because Ring 6 protects against known structural conditions and Ring 5 generates signal on conditions Ring 6 has not yet acknowledged.
The reference scanner runs against three classes of anomaly. File types that should not appear in release artifacts (raw datasets, model weights, internal documentation). Content classifications that should not be in the public path (PII, secrets, proprietary IP). Volume anomalies (release artifacts that are an order of magnitude larger than the historical baseline). Each class triggers a hold-publish action.
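Two of the three anomaly classes can be sketched with a suffix block-list and a volume baseline; the content-classification pass (PII, secrets, proprietary IP) is stubbed out because it requires a real classifier. The suffix list and the order-of-magnitude multiplier are illustrative policy values, not federation-published ones.

```python
# Illustrative block-list; a real release policy supplies the actual suffixes.
BLOCKED_SUFFIXES = (".csv", ".parquet", ".safetensors", ".pt", ".docx")

def scan_artifact(files, total_bytes, baseline_bytes):
    """Checks two of the three anomaly classes from the chapter: blocked
    file types and volume an order of magnitude above the historical
    baseline. Any finding triggers a hold-publish action."""
    findings = [f for f in files if f.endswith(BLOCKED_SUFFIXES)]
    if total_bytes > 10 * baseline_bytes:
        findings.append("volume-anomaly")
    return {"hold_publish": bool(findings), "findings": findings}

result = scan_artifact(["app.tar.gz", "weights.safetensors"],
                       total_bytes=2_000_000, baseline_bytes=1_800_000)
# the stray model-weight file holds the publish even though volume is normal
```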
KPI. Release artifact monitoring coverage. Target: 100 percent of public-facing releases.
3.8 Cross-Domain Signal Correlation
Category: Analysis. Enforcement: Recommended.
Signals across cloud, SaaS, data, AI, and supply chain are correlated to identify compound risks. The control is recommended rather than mandatory because cross-domain correlation is the Ring 5 maturity ceiling: institutions reach this control after the seven mandatory controls are in steady state.
The principle is that individual signals appear benign but compose into systemic risk. A modest cost anomaly in cloud + a contract renewal trigger in SaaS + a vendor financial-health signal in supply chain are three separate signals at moderate severity. Composed, they are a single pattern that says the institution's primary cloud-and-SaaS supplier is eight quarters from a financial event that will become an operational event for the institution's estate.
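The composition logic can be sketched as a grouping-and-summing pass: signals that each sit below the single-detector threshold compose into a finding when their combined score crosses a compound threshold. The scoring scale, the thresholds, and the vendor-keyed grouping are all assumptions for illustration.

```python
def compound_findings(signals, single_threshold=0.7, compound_threshold=1.5):
    """Surface subjects whose signals are each benign alone (below the
    single-detector threshold) but whose combined score crosses the
    compound threshold. Grouping key and thresholds are illustrative."""
    by_vendor = {}
    for s in signals:
        by_vendor.setdefault(s["vendor"], []).append(s)
    findings = []
    for vendor, group in by_vendor.items():
        if all(s["score"] < single_threshold for s in group):
            if sum(s["score"] for s in group) >= compound_threshold:
                findings.append(vendor)
    return findings

signals = [
    {"vendor": "acme", "domain": "cloud", "score": 0.6},   # modest cost anomaly
    {"vendor": "acme", "domain": "saas", "score": 0.5},    # renewal trigger
    {"vendor": "acme", "domain": "supply", "score": 0.5},  # financial-health signal
]
risks = compound_findings(signals)  # no single detector fires; the composition does
```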
KPI. Compound-risk identification cadence. Target: at least one compound finding per quarter that no single domain detector would have produced alone.
3.9 Vendor and External Risk Signals
Category: External. Enforcement: Recommended.
External monitoring of vendor financial health, compliance status, and security posture. The control is the recognition that Ring 5 visibility extends beyond the corporate boundary into the institution's vendor surface.
Vendor signals come from public filings, ratings agencies, security feeds, news monitoring, and the federation's Tooling Matrix. The institution wires these external feeds into the same exposure register that holds internal signals. Vendor financial distress is a signal that produces a Ring 5 alert before the distress becomes a Ring 6 architectural failure (vendor goes dark, contract terms invalidate).
KPI. External signal coverage across the top 80 percent of OpEx vendors. Target: 100 percent.
3.10 Predictive Signal Analysis
Category: Predictive. Enforcement: Adaptive.
Machine-learning models predict future cost trends, anomalies, and risk exposures. The control is adaptive because the cadence and the model class depend on the maturity of the rest of Ring 5. Practitioners deploy predictive analysis after the seven mandatory controls are in steady state and the cross-domain correlation has been operating for at least four quarters.
The federation publishes a calibration table for predictive models. Models are required to publish their training methodology, their feature set, their accuracy metrics, and their drift cadence. A predictive control without published methodology is theatrical. The federation will not accept it as a Ring 5 claim.
KPI. Predictive model accuracy and recency. Target: above 80 percent on published accuracy bands, drift-rechecked monthly.
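The monthly drift recheck in the KPI can be sketched as a band comparison over the model's accuracy history. The function name and the sample numbers are illustrative; the 80 percent band is the chapter's published target.

```python
def drift_check(published_band_pct, monthly_accuracy_pct):
    """Monthly drift recheck: the model falls out of compliance the first
    month its accuracy drops below the published band. Returns the
    offending month index, or None if the model is in good standing."""
    for month, acc in enumerate(monthly_accuracy_pct):
        if acc < published_band_pct:
            return month
    return None

# Against the chapter's 80 percent band, this model drifts out in month index 2.
breach = drift_check(80, [85, 83, 79, 67])
```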
4 . The Pattern Library
Ring 5 across the five canonical stacks.
| Stack | Ring 5 Pattern |
|---|---|
| Public Cloud | Hourly cost telemetry. Anomaly mean-time-to-detect under fifteen minutes. Zero discovery blind spots across regions and services. Workload-level cost anomaly detection wired to on-call. |
| SaaS Portfolio | License utilization monitored per seat. Renewal dates tracked ninety days out. Inactive seats auto-suspend after the published threshold. Vendor financial-health feeds wired to the exposure register. |
| On-Prem and Hybrid | Datacenter power draw per rack. Utilization heatmaps. Cooling efficiency signals integrated with the cost dashboard. Asset register reconciled to floor-plan inventory continuously. |
| AI and ML | Token counts, inference latency, model cost per request, hallucination rate surfacing as live KPIs. Training run cost telemetry wired to the exposure register. Prompt-pattern anomaly detection. |
| Data Platform | Every dataset access logged with cost attribution. BigQuery slots, Snowflake warehouses, Databricks clusters tracked real-time. Query-level anomaly detection. Schema-drift signals piped to Ring 4. |
5 . Industry Applications
Cloud Infrastructure. Discover all resources and commitments across multi-cloud. Real-time cost anomaly detection across AWS, Azure, GCP, and OCI accounts. Cross-account resource inventory and commitment tracking. The federation's Cloud Conformance Pack publishes the reference signal-coverage matrix per provider.
AI and ML Operations. Detect GPU utilization anomalies and training cost spikes. Model inference cost monitoring and endpoint exposure tracking. Training pipeline cost-signal classification. Frontier-model commitments tracked as financial obligations.
SaaS Portfolio. Shadow SaaS discovery and subscription surveillance. License utilization signal monitoring. Contract renewal date tracking and scope-trigger alerts. SSO-anomaly detection wired to procurement.
Government. Real-time visibility into agency cloud spend across accounts. Appropriation tracking and obligation monitoring. Anti-deficiency signal detection across programs. The Public Sector Conformance Pack adds the program-fund signal layer that civil agencies require.
Supply Chain. Vendor financial-health monitoring and contract surveillance. Procurement commitment tracking across supplier tiers. Logistics spend anomaly detection. Concentration-risk signals at the tier-one supplier band.
6 . The Adversarial Audit
Five vectors the auditor will use to challenge a Ring 5 claim.
Vector 1: "Show me the asset that exists in your estate that does not appear in your inventory."
The practitioner runs the discovery query against every layer of the estate and produces zero asset gaps or, if any exist, the time-to-discovery contract for each. If the practitioner cannot run the query, Ring 5 has not been claimed.
Vector 2: "What is your mean time to detect a cost anomaly, and how is the number measured?"
The practitioner produces the MTTD distribution across the last quarter, with the measurement methodology documented. The auditor looks for two failure modes: a number with no methodology behind it, and a methodology that excludes the long tail of slow-developing anomalies. The federation treats both as fatal to a Ring 5 claim.
Vector 3: "Walk me through a signal from generation to action."
The practitioner picks an arbitrary signal from the last week and walks the chain: detection, classification, routing, action, resolution. The auditor checks for handoff latency at each step. A chain with handoff latency above the published threshold has a control gap somewhere in 3.5 (Risk Signal Classification).
Vector 4: "How do you know about a vendor's financial distress before it becomes an operational event?"
The practitioner produces the external-signal feed, the vendor coverage, and a recent example of an external signal that produced an internal action. If the practitioner cannot produce the example, 3.9 has been claimed but not implemented.
Vector 5: "Describe a compound risk you identified in the last quarter that no single detector would have caught."
The practitioner produces a recent compound finding with the constituent signals named. The auditor verifies that each constituent was below the single-detector severity threshold and that the composition crossed it. If the practitioner cannot produce a compound finding for the last four quarters, 3.8 has been claimed but is not active.
7 . The Working Capital Math
Ring 5's quantitative spine is the relationship between mean time to detect and average exposure window.
For an institution with annualized loss exposure $E$ and mean time to detect $T$ on the dominant signal class, the expected loss-given-event is approximately:
Expected loss ≈ $E \times (T / 8760)$
where $T$ is in hours and 8760 is the annual hour count. The practitioner reading the formula sees that compressing MTTD from 24 hours to 1 hour reduces expected loss by a factor of 24 on the dominant signal class. The math is the federation's calibration anchor for Ring 5 investment.
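The calibration formula above can be checked directly. The annualized exposure figure below is illustrative; the function is a transcription of the formula, nothing more.

```python
def expected_loss(annual_exposure_usd, mttd_hours):
    """Expected loss ≈ E × (T / 8760), per the calibration formula above."""
    return annual_exposure_usd * (mttd_hours / 8760)

# Compressing MTTD from 24 hours to 1 hour cuts expected loss by a factor of 24,
# exactly as the chapter states. $1M annualized exposure is an illustrative figure.
baseline = expected_loss(1_000_000, 24)  # invoice-cadence detection
improved = expected_loss(1_000_000, 1)   # hourly detection
```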
| Ring 5 Maturity | Dominant-class MTTD | Expected loss reduction vs. Phase 1 baseline |
|---|---|---|
| Phase 1 (Blind) | Days to weeks. Signal arrives via invoice. | Baseline. The exposure runs unchecked until the invoice lands. |
| Phase 2 (Reactive) | Hours to days. Signal arrives via weekly review. | 5 to 8 times reduction. |
| Phase 3 (Coordinated) | Minutes to hours on dominant signals. | 24 to 50 times reduction. |
| Phase 4 (Proactive) | Sub-fifteen-minute MTTD on critical-class signals. | 100 to 200 times reduction. |
| Phase 5 (Adaptive) | Predictive signal arrives before the event forms. | The federation does not publish a multiplier here. The exposure window is conceptually pre-formation. The math becomes the institution's option-value argument for predictive control. |
The CFO's question is "what does Ring 5 maturity earn us." The answer is the math above, anchored to the institution's own loss exposure register.
8 . The 13 Modes of Failure
M1. Asset discovery on a schedule. Remedy: continuous discovery, latency under fifteen minutes.
M2. Cost anomaly detection without baselines per workload class. Remedy: baseline calibration per class with volatility-aware thresholds.
M3. Shadow IT correlated from one signal source rather than four. Remedy: SSO + expense + DNS + card classification correlation.
M4. Contract surveillance manual rather than wired. Remedy: registry wired to consumption telemetry with five surface markers per contract.
M5. Risk signal classification missing severity bands. Remedy: federation-published severity bands enforced in the classifier.
M6. Exposure quantification produced as a quarterly report rather than a live register. Remedy: register with under-1-hour update cadence.
M7. Build artifact monitoring scoped to secrets only. Remedy: extension to file types, content classifications, and volume anomalies.
M8. Cross-domain correlation absent. Remedy: federation reference correlator across cloud + SaaS + data + AI + supply chain.
M9. External vendor signals not wired to the exposure register. Remedy: vendor signal feed into the same register that holds internal signals.
M10. Predictive analysis without published methodology. Remedy: methodology, training data, accuracy bands, drift cadence published.
M11. Alert fatigue collapsing the response cadence. Remedy: classification severity bands tightened so the actionable queue is finite.
M12. Discovery coverage gaps in the procurement and identity surfaces. Remedy: discovery extended to all five surfaces, not just resources and configurations.
M13. MTTD measured on the median signal rather than the long tail. Remedy: distribution reporting at the 50th, 90th, and 99th percentile.
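The M13 remedy can be sketched with the standard library's quantile function. The function name is illustrative; the p50/p90/p99 bands are the ones the remedy names.

```python
import statistics

def mttd_percentiles(detect_minutes):
    """M13 remedy: report the MTTD distribution at p50/p90/p99, not the
    median alone, so the long tail of slow-developing anomalies stays
    visible in the reported number."""
    qs = statistics.quantiles(detect_minutes, n=100, method="inclusive")
    return {"p50": qs[49], "p90": qs[89], "p99": qs[98]}

# A distribution with a healthy median can still hide a long detection tail.
report = mttd_percentiles(list(range(1, 101)))
```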
9 . Sidebars (Working Group co-authored, ratification pending)
> Sidebar 5.A . Signal latency is financial exposure. Co-authored, signed at ratification. The phrase that anchors Ring 5 is the phrase the federation argues most about with traditional security frameworks. Security teams have long treated mean time to detect as an operational metric. The federation re-grounds it as a financial metric. Every minute between the formation of an event and the institution's awareness of it is a minute the exposure compounds. The math is the same whether the event is a cost anomaly or a security incident. Ring 5 is the institution's commitment to compress that latency on every signal class, not just the cost class or the security class.
> Sidebar 5.B . The exposure register. Co-authored, signed at ratification. The single artifact that distinguishes a Phase 4 Ring 5 from a Phase 3 Ring 5 is the live exposure register. A Phase 3 institution can answer the question "what is our exposure right now" with a one-day-old number. A Phase 4 institution can answer it with a number that is under one hour old and rolls up every active signal across every domain. The register is the bridge between Ring 5 work and the boardroom. Practitioners building Ring 5 should treat the register as the goal, not the dashboards that feed it.
> Sidebar 5.C . Why Ring 5 sits outside Ring 4. Co-authored, signed at ratification. Visitors to the methodology often ask why visibility (Ring 5) is upstream of attribution (Ring 4). The reasoning is that you must see before you can attribute. An institution that has not done Ring 5 cannot claim Ring 4 because its attribution graph is built on incomplete signals. The reading order is principled, not arbitrary. Ring 5 is the discovery and signal layer. Ring 4 names what it has discovered.
10 . The Founder's Annotation Track
> I want the reader to know that Section 3.6 (Exposure Quantification) was the section the working group spent the most ink on. The first draft positioned exposure quantification as an aggregate dashboard, which is what most institutions ship today. The working group's dissent was that an aggregate dashboard is not a register, because a dashboard is a presentation layer and a register is a data primitive. The chapter now reflects the dissent. The exposure register is a primitive. The dashboard is a view onto the primitive. Practitioners building Ring 5 should know which one they are building.
>
> Section 3.10 (Predictive Signal Analysis) is the section I expect to revise most heavily in v0.2. The methodology calibration table is undercooked in this edition. Practitioners running predictive controls today should treat their own methodology as the canonical reference until the federation's table catches up.
11 . The Capstone Artifact
The Ring 5 capstone is the Live Exposure Register for the candidate's organization.
The register contains, at minimum:
- The discovery manifest. Every asset class the institution operates, with the discovery cadence and the latency contract per class.
- The signal taxonomy. Every signal class the institution generates, with the classification bands per class.
- The detection inventory. Every detector running against every signal class, with MTTD distributions per detector.
- The exposure roll-up. The aggregate dollar exposure across active signals, with the methodology behind the roll-up.
- The shadow-IT discovery report.
- The vendor external-signal feed.
- The compound-risk findings from the last four quarters.
- The predictive model methodology, if 3.10 is claimed.
- The named gaps under remediation.
Submitted, signed, and dated. Federation Standards Council reviews. Accepted registers are filed against the candidate's CFO-R credential. The federation builds the public corpus of Ring 5 reference implementations from the accepted registers.
12 . Doctrine Q&A
Fifteen calibrated questions. Forty-eight in the proctored examination.
Q1. A cost anomaly is detected within thirty minutes of formation. The institution's published MTTD target is under fifteen minutes. Has the control fired correctly?
A. The control fired but did not meet the threshold. A correctly performing Ring 5 implementation would treat the gap as a finding to remediate. The thirty-minute detection is acceptable as a one-time event but unacceptable as a steady-state pattern.
Q2. Asset discovery runs nightly and catches new resources within 24 hours. Does this satisfy 3.1?
A. No. 3.1 requires continuous discovery with latency in minutes. Nightly is failure mode M1.
Q3. Shadow IT detection runs against expense reports only. Has 3.3 been satisfied?
A. Partially. 3.3 requires correlation across SSO, expense, DNS, and card classification. Single-source detection is failure mode M3.
Q4. A vendor's financial distress was reported in a Wall Street Journal article on Tuesday. The institution's exposure register reflected the signal on Friday. Has 3.9 been satisfied?
A. No. The three-day latency violates the spirit of 3.9. External signals must reach the exposure register on the same operating cadence as internal signals.
Q5. An institution publishes a quarterly exposure report to the board. Is the report acceptable as a 3.6 implementation?
A. No. 3.6 requires a live register with under-1-hour freshness. A quarterly report is a presentation artifact, not a register.
Q6. A predictive model is in production but its training methodology is undocumented. Has 3.10 been claimed correctly?
A. No. 3.10 requires published methodology. An undocumented predictive control is theatrical and will not be accepted as a Ring 5 claim.
Q7. Compound risk findings have not been produced for the last six quarters. Is 3.8 active?
A. Doubtful. 3.8 is a recommended control and the absence of findings over six quarters is a strong signal that cross-domain correlation is not running, or is running too narrowly. Federation review will treat this as a yellow flag.
Q8. A contract auto-renews three days before the renewal date is surfaced to the procurement team. Has 3.4 been satisfied?
A. No. 3.4 requires 90-day lead time on renewal events. A three-day lead time is failure mode M4.
Q9. An on-call engineer dismisses fourteen low-severity cost anomaly alerts in a single shift. Is the system performing correctly?
A. No. The volume suggests the severity bands are too loose, producing alert fatigue. Failure mode M11. Remedy: tighten classification.
Q10. A release artifact is published that contains a model weight file. Has 3.7 been satisfied?
A. No. 3.7 requires file-type anomaly detection that should have flagged the weight file. The artifact monitor needs extension.
Q11. A Cloud Function runs unbounded for nine days before its cost surfaces in the monthly invoice. Which controls failed?
A. Primarily 3.2 (Cost Anomaly Detection) failed. Secondary failure in 3.1 (Asset Discovery & Continuous Inventory) if the function was not in the discovery manifest. Possible Ring 6 failure in 6.9 if the function bypassed a commitment ceiling.
Q12. Discovery coverage is reported at 92 percent. Can the institution claim 3.1?
A. Approaching the bar. The federation target is 99 percent. A 92 percent coverage is Phase 3 and the practitioner should target Phase 4 with a remediation plan for the missing 7 percent.
Q13. Cross-domain correlation surfaces a compound risk involving cloud + SaaS + supply chain. Which ring owns the response?
A. Ring 5 surfaces the signal. The response routes to Ring 3 (Policy & Control) for the operating decision. Ring 5 closes the loop by re-measuring the exposure register after the response.
Q14. A predictive model's accuracy has drifted from 85 percent to 67 percent over the last two months. The model has not been re-trained. Is 3.10 in good standing?
A. No. 3.10 requires drift-rechecked monthly with re-training when accuracy crosses the published band. The model is out of compliance.
Q15. What is the canonical Ring 5 forensic the federation uses to ground the chapter?
A. The $72,000 Cloud Function runaway of 2022. Reconstruction signed and registered.
End of Chapter 5 . Edition v0.1 draft . Working group review pending . Ratification target Q3 2026 . Public comment window opens at vot.ifo4.org on the chapter publication date.