Chapter 2 . Optimization & Efficiency
Money already spent is the test.
Every dollar must work. Waste is a governance failure.
Ring 2 of RING:1000:2026
Edition v0.1 . Draft for working group review . Lead author: Derris Taylor . Working group masthead pending ratification
1 . The Opening Forensic
In 2019, Pinterest filed an S-1 prospectus ahead of its public offering and disclosed a six-year, $750 million commitment to Amazon Web Services. The disclosure included contractual minimums, scaling commitments, and a published exit ramp that gave both parties visibility into the multi-year economic relationship. Investors, analysts, and competitors read the filing and arrived at varying conclusions about Pinterest's cloud economics. The federation reads the filing because it shows what mature optimization discipline looks like at scale.
A year and a half before the S-1, Pinterest's cloud spend had been growing at a rate that a public-company-bound finance organization could not justify. The internal forensic from that period showed three patterns. First, a meaningful share of compute capacity was idle outside business hours but not scheduled to scale down. Second, the storage tier was over-provisioned by a factor that compounded across regions because no lifecycle policy moved cold data to nearline. Third, reserved-instance commitments had been purchased on consumption forecasts that never materialized, leaving the institution paying for capacity it did not consume while paying on-demand for capacity that exceeded the commitment band. The forensic was textbook Ring 2.
The remediation took eighteen months. Right-sizing across the production fleet. Lifecycle policies on the storage tier. Re-negotiation of reserve commitments aligned to the actual consumption curve. Architecture reviews that produced an order-of-magnitude reduction in cross-region traffic. By the time Pinterest filed the S-1, the company had a defended commitment structure backed by a measurable optimization program that the federation now treats as a reference implementation.
The federation includes the case in the Ring 2 corpus because it shows that optimization is not a campaign. Optimization is a discipline. Pinterest did not declare a "cost-cutting initiative" and ship a press release. Pinterest built a continuous engineering practice that compressed waste over time and produced a defensible commitment structure when the institution's external accountability surface (the IPO) demanded one. Ring 2 is the work of building that practice.
2 . The Doctrine
The doctrine of Ring 2 is the line the federation has held against every working group dissent that wanted to soften it.
Every dollar must work. Waste is not an operating reality. Waste is a governance failure.
The phrase reads strong because the federation refuses the alternative reading. Most institutions treat waste as an inherent cost of operation, something to manage, something that "is what it is." The federation's position is that this framing produces a culture in which waste compounds because no one is paid to compress it. Ring 2 is the institution's commitment to treating waste as a measurable governance failure, with named owners, published metrics, and an enforced reduction cadence.
The reading order places Ring 2 inside Ring 3 because optimization without policy enforcement produces inconsistent outcomes. A team that optimizes without a budget guardrail can over-optimize and break the workload. A team that optimizes without a rate-card enforcement layer can leave negotiated rates unrealized. A team that optimizes without a unit-economics measurement can claim savings that do not survive a board-grade review. Ring 3 produces the policy floor; Ring 2 builds on top of it.
Three principles run through this chapter.
Optimization is continuous, not episodic. The institution that runs optimization as a quarterly campaign produces savings that erode by the next quarter. The institution that runs optimization as a continuous engineering practice produces savings that compound.
Right-sizing is not the goal. Right-sizing is the floor. The mature Ring 2 institution treats right-sizing as table stakes and competes on architecture efficiency, commitment portfolio strategy, and unit economics.
Waste reduction without measurement is performance theatre. Every Ring 2 claim is anchored to a measurable baseline, a measurable target, and a published methodology. Claims without methodology are not claims.
3 . The Standard
Ten controls. Five mandatory. Three recommended. Two adaptive.
3.1 Right-Sizing Engine
Category: Sizing. Enforcement: Mandatory.
Continuous analysis of resource utilization with automated or recommended right-sizing actions. The control is the operating expression of the doctrine that every dollar must work.
The engine evaluates every resource against a utilization profile (CPU, memory, network, storage I/O, license seats, etc.) on a continuous cadence and produces three classes of recommendation: automated right-size (the engine applies the change directly under published policy), recommended right-size (the engine produces a ticket for human review), and architectural right-size (the resource's pattern indicates a re-design opportunity that exceeds simple sizing).
The federation's reference implementation runs the engine against the cloud provider's utilization telemetry, the SaaS portfolio license-utilization signal, the on-prem fleet inventory, and the AI/ML workload throughput metrics. Each surface has its own right-sizing semantics. The engine treats each surface as a first-class domain rather than forcing them into a single rubric.
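The classification step lends itself to a direct implementation. A minimal sketch in Python, assuming a hypothetical `UtilizationProfile` record fed by whichever telemetry surface the engine is evaluating; the thresholds and the `redesign_signals` field are illustrative, not federation calibration:

```python
from dataclasses import dataclass

# Hypothetical profile record; a real engine would populate this from
# provider telemetry, SaaS usage APIs, fleet inventory, or ML throughput.
@dataclass
class UtilizationProfile:
    resource_id: str
    cpu_p95: float         # 95th-percentile CPU utilization, 0.0 to 1.0
    memory_p95: float      # 95th-percentile memory utilization, 0.0 to 1.0
    redesign_signals: int  # e.g., chronic idle-then-burst patterns observed

def classify(profile: UtilizationProfile) -> str:
    """Map a profile to one of the three 3.1 recommendation classes."""
    if profile.redesign_signals > 0:
        # Pattern exceeds simple sizing: route to architecture review (3.7).
        return "architectural-right-size"
    if profile.cpu_p95 < 0.20 and profile.memory_p95 < 0.20:
        # Low enough that the engine applies the change under published policy.
        return "automated-right-size"
    if profile.cpu_p95 < 0.40 or profile.memory_p95 < 0.40:
        # Borderline: produce a ticket for human review.
        return "recommended-right-size"
    return "no-action"
```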
KPI. Right-sizing coverage and applied recommendations. Target: 95 percent of resources covered by the engine; 70 percent of recommendations applied within the published cadence.
3.2 Waste Elimination
Category: Waste. Enforcement: Mandatory.
Detection and removal of idle, orphaned, and unused resources across all environments. The control is Ring 2's continuous backstop against the resources that drift into uselessness without producing the cost signal that would have triggered earlier action.
Idle resources are running but produce no measurable work. Orphaned resources are running but have no current owner. Unused resources are provisioned but never consumed. Each class is detected by a different signal: idle by utilization telemetry, orphaned by ownership reconciliation (Ring 4), unused by access-log analysis. The federation's standard requires that all three classes be detected continuously and remediated within published SLAs.
The control is not "delete everything that looks idle." Mature Ring 2 implementations evaluate idle resources against three criteria before remediation: business purpose (some workloads are correctly idle), seasonal pattern (some workloads are correctly idle outside business hours), and re-architecture opportunity (some idle resources signal an architectural redesign). Auto-remediation is reserved for resources that fail all three criteria.
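A minimal sketch of the detection and gating logic, continuing the Python sketches in this chapter; the field names, thresholds, and SLA comments are illustrative assumptions layered on the three signals and three criteria named above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    resource_id: str
    utilization: float          # signal: utilization telemetry
    owner: Optional[str]        # signal: ownership reconciliation (Ring 4)
    last_access_days: int       # signal: access-log analysis
    has_business_purpose: bool  # criterion: some workloads are correctly idle
    is_seasonal: bool           # criterion: correctly idle outside business hours
    redesign_candidate: bool    # criterion: idleness signals a redesign

def waste_class(r: Resource) -> Optional[str]:
    """Detect the three 3.2 waste classes, each from its own signal."""
    if r.owner is None:
        return "orphaned"       # remediate within the Ring 4 SLA
    if r.last_access_days > 30:
        return "unused"         # remediate within thirty days
    if r.utilization < 0.05:    # illustrative idle threshold
        return "idle"           # remediate within fourteen days
    return None

def auto_remediate(r: Resource) -> bool:
    """Auto-remediation fires only when all three criteria fail."""
    return waste_class(r) == "idle" and not (
        r.has_business_purpose or r.is_seasonal or r.redesign_candidate
    )
```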
KPI. Waste-elimination remediation rate. Target: idle resources reclaimed within fourteen days, orphaned within the Ring 4 SLA, unused within thirty days.
3.3 Commitment Portfolio Management
Category: Commitments. Enforcement: Mandatory.
Reserved instance, savings plan, and commitment discount portfolio management. The control is the institution's commitment to actively managing its multi-year commitments as a portfolio rather than as one-off purchases.
A commitment portfolio carries five live surfaces. Coverage (what percentage of consumption is covered by commitments). Utilization (what percentage of committed capacity is actually consumed). Maturity (the time-to-expiration distribution of the portfolio). Vendor concentration (the share of commitments locked to a single vendor). Risk-adjusted savings (the savings produced by the portfolio after accounting for unused capacity).
The federation's standard requires continuous portfolio review with quarterly rebalancing. The Pinterest forensic in Section 1 is the canonical case: an institution that purchased reserved instances on consumption forecasts that did not materialize ended up paying for unused capacity. The remediation was treating the portfolio as a portfolio rather than a series of one-off bets.
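A minimal sketch of the five-surface computation; the rate assumptions (on-demand at par, committed capacity at a 35 percent discount) and the record shape are illustrative, not a statement of any provider's pricing:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Commitment:
    vendor: str
    committed_usd: float  # committed spend for the period
    consumed_usd: float   # committed spend actually consumed
    expires: date

def portfolio_surfaces(commitments: list[Commitment],
                       total_consumption_usd: float,
                       on_demand_rate: float = 1.00,
                       committed_rate: float = 0.65) -> dict:
    """Compute the five live surfaces of a 3.3 commitment portfolio."""
    committed = sum(c.committed_usd for c in commitments)
    consumed = sum(c.consumed_usd for c in commitments)
    # Risk-adjusted savings: the discount earned on consumed capacity,
    # net of the cost of committed capacity that went unused.
    gross_savings = consumed * (on_demand_rate - committed_rate)
    unused_cost = (committed - consumed) * committed_rate
    by_vendor: dict[str, float] = {}
    for c in commitments:
        by_vendor[c.vendor] = by_vendor.get(c.vendor, 0.0) + c.committed_usd
    return {
        "coverage": consumed / total_consumption_usd,   # target: 0.70 to 0.85
        "utilization": consumed / committed,            # target: 0.92 to 0.98
        "maturity": sorted(c.expires for c in commitments),
        "vendor_concentration": max(by_vendor.values()) / committed,
        "risk_adjusted_savings": gross_savings - unused_cost,
    }
```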
KPI. Coverage, utilization, and risk-adjusted savings. Target: 70 to 85 percent coverage band, 92 to 98 percent utilization, risk-adjusted savings reported quarterly.
3.4 Unit Economics Tracking
Category: Measurement. Enforcement: Mandatory.
Cost per transaction, per user, per API call, per inference, per shipment, per unit-of-business-output. Understanding true unit costs. The control converts the institution's gross spend into a unit-level signal that engineering teams can act on.
A unit economic is the cost of one unit of business output. For a SaaS company, the unit might be cost per active user. For an e-commerce company, cost per order. For a manufacturer, cost per unit produced. For an AI company, cost per inference. The federation's standard requires that every revenue-generating product line carry a unit economic that is measured continuously, reported per quarter, and trended against the institution's strategic targets.
The control is upstream of every architecture decision. Engineering teams that operate against unit economics build different systems than engineering teams that operate against gross spend. The unit economic is the bridge between Ring 2 (Optimization) and Ring 0 (Outcome).
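The calculation itself is simple; the discipline is in the allocation, the cadence, and the named owner. A minimal sketch with hypothetical figures:

```python
def unit_economic(allocated_cost_usd: float, units_of_output: float) -> float:
    """Cost of one unit of business output for one product line."""
    return allocated_cost_usd / units_of_output

def trend(quarterly_series: list[float]) -> str:
    """Quarter-over-quarter direction for the owner's report."""
    return "improving" if quarterly_series[-1] < quarterly_series[0] else "degrading"

# Hypothetical quarter: a product line allocated $1.8M of spend serving
# 600k monthly active users runs at $3.00 per active user.
cost_per_mau = unit_economic(1_800_000, 600_000)
print(cost_per_mau, trend([3.40, 3.25, 3.10, cost_per_mau]))  # 3.0 improving
```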
KPI. Unit-economic coverage and trend. Target: every revenue-generating product line covered; trend reported quarterly with named owners for each unit.
3.5 License Right-Sizing
Category: Licensing. Enforcement: Mandatory.
Right-sizing software licenses based on actual usage patterns and feature requirements. The control extends 3.1's right-sizing logic into the SaaS and software portfolio.
License right-sizing operates on three signals. Active-seat utilization (seats assigned but not used). Tier utilization (premium seats used at standard tier). Feature utilization (paying for feature bundles whose features are not consumed). Each signal triggers a remediation: seat reclamation, tier downgrade, or feature unbundling.
The control is Ring 2's recognition that the SaaS portfolio is now a substantial line item in most institutions and deserves the same continuous optimization rigor as the cloud fleet. License waste is invisible in many institutions because the SaaS subscription is treated as a fixed cost rather than a variable one. Mature Ring 2 implementations treat every SaaS contract as a continuously optimizable surface.
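A minimal sketch mapping the three signals to their remediations; the subscription record is a hypothetical shape, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Subscription:
    name: str
    seats: int
    active_seats: int            # signal: active-seat utilization
    premium_seats: int
    premium_feature_users: int   # signal: premium seats using premium features
    paid_features: set[str]
    consumed_features: set[str]  # signal: feature utilization

def license_findings(s: Subscription) -> list[str]:
    """Map the three 3.5 signals to seat, tier, and feature remediations."""
    findings = []
    if s.active_seats < s.seats:
        findings.append(f"reclaim {s.seats - s.active_seats} unused seats")
    if s.premium_feature_users < s.premium_seats:
        findings.append(f"downgrade {s.premium_seats - s.premium_feature_users} "
                        "seats to standard tier")
    unconsumed = s.paid_features - s.consumed_features
    if unconsumed:
        findings.append(f"unbundle unused features: {sorted(unconsumed)}")
    return findings
```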
KPI. License utilization and reclamation. Target: 92 to 98 percent active-seat utilization, monthly reclamation cycle, tier accuracy reviewed quarterly.
3.6 Workload Scheduling and Tiering
Category: Scheduling. Enforcement: Recommended.
Time-based scheduling of non-critical workloads and tiering to reduce peak cost. The control is recommended rather than mandatory because not all workloads tolerate scheduling, and the institution's risk appetite for tiering varies by workload class.
Scheduling is the practice of running workloads when capacity is cheaper. Batch jobs at off-peak hours. Development environments shut down outside business hours. Reporting jobs scheduled to run when reserved capacity is otherwise idle. Tiering is the practice of moving workloads to lower-cost tiers when their performance requirements permit. Cold storage tiers. Spot or preemptible compute for non-critical jobs. Lower-availability tiers for non-revenue workloads.
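A minimal sketch of a scheduling predicate; the workload classes and windows are illustrative policy, and production is deliberately absent from the schedule so it always runs:

```python
from datetime import datetime, time

# Illustrative published schedule: only non-critical classes are eligible.
SCHEDULES = {
    "development": (time(8, 0), time(19, 0)),  # up 08:00-19:00, weekdays only
    "reporting":   (time(1, 0), time(5, 0)),   # off-peak batch window
}

def should_run(workload_class: str, now: datetime) -> bool:
    """True if the workload should be up at `now` under the schedule.
    Unscheduled classes (e.g., production) always run."""
    window = SCHEDULES.get(workload_class)
    if window is None:
        return True
    if workload_class == "development" and now.weekday() >= 5:
        return False  # weekends: development environments stay down
    start, end = window
    return start <= now.time() < end
```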
KPI. Scheduled workload coverage and savings. Target: 30 to 50 percent of eligible workloads scheduled or tiered.
3.7 Architecture Efficiency Reviews
Category: Architecture. Enforcement: Recommended.
Periodic reviews of system architecture for cost-efficiency opportunities. The control is the Ring 2 layer that evaluates whether the system's design itself is efficient, beyond the right-sizing of individual components.
Architecture review evaluates patterns that right-sizing cannot reach. Cross-region traffic that could be eliminated through edge caching. Read-replica configurations that could be consolidated. Microservice boundaries that produce inter-service network costs disproportionate to their value. Database choice that does not match the access pattern. Each finding is an architecture-level redesign opportunity that produces savings beyond what right-sizing can reach.
The federation's reference cadence is quarterly review for high-spend systems and annual review for the rest. The output is a remediation backlog that flows into engineering planning rather than being discarded.
KPI. Architecture review cadence and remediation. Target: high-spend systems reviewed quarterly with measurable remediation in the next planning cycle.
3.8 Rate Optimization
Category: Pricing. Enforcement: Recommended.
Continuous evaluation of pricing tiers, negotiated rates, and volume discounts. The control extends Ring 3's rate-card enforcement (control 3.5 of that ring) into a proactive optimization posture.
Rate optimization is the work of finding pricing inefficiencies that the institution's current contracts do not capture. New volume tiers as consumption grows. Multi-year commitment discounts as the institution's confidence in its consumption curve increases. Vendor-consolidation discounts when the institution can shift volume from a non-discounted vendor to a discounted one. Each opportunity is identified through continuous monitoring of the institution's consumption curve against the published rate cards.
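A minimal sketch of the volume-tier check; the rate card and figures are illustrative, and a real implementation would read the published card from the Ring 3 enforcement layer:

```python
# Illustrative published rate card: (monthly volume floor, unit rate in USD).
RATE_TIERS = [
    (0,         0.100),
    (1_000_000, 0.085),
    (5_000_000, 0.070),
]

def qualified_rate(monthly_volume: int) -> float:
    """Unit rate the consumption curve already qualifies for."""
    rate = RATE_TIERS[0][1]
    for floor, tier_rate in RATE_TIERS:
        if monthly_volume >= floor:
            rate = tier_rate
    return rate

def tier_opportunity(monthly_volume: int, contracted_rate: float) -> float:
    """Annualized savings left uncaptured when the contracted rate lags
    the tier the institution's volume has grown into."""
    return max(0.0, (contracted_rate - qualified_rate(monthly_volume))
               * monthly_volume * 12)

# e.g., 6M units/month contracted at $0.085 while qualifying for $0.070:
print(tier_opportunity(6_000_000, 0.085))  # 1_080_000.0 uncaptured per year
```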
KPI. Rate-optimization opportunities identified and captured. Target: 80 percent of identified opportunities captured within two procurement cycles.
3.9 Efficiency Benchmarking
Category: Benchmarking. Enforcement: Adaptive.
Comparison of efficiency metrics against industry benchmarks and internal historical performance. Adaptive because benchmarking methodology varies with industry and spend class.
The federation publishes the IFO4 Index Terminal, which includes industry benchmarks for cloud spend per revenue dollar, SaaS spend per employee, AI compute cost per inference at scale, and similar unit economics. Practitioners use the Index as the external benchmark and the institution's historical performance as the internal benchmark. Both benchmarks anchor the optimization conversation.
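A minimal sketch of the dual-track comparison; the index value and history are hypothetical, not published IFO4 figures:

```python
def benchmark_position(metric: float,
                       external_index: float,
                       internal_history: list[float]) -> dict:
    """Dual-track 3.9 position: external index and internal baseline."""
    internal_baseline = sum(internal_history) / len(internal_history)
    return {
        "vs_external": metric / external_index,     # < 1.0 beats the index
        "vs_internal": metric / internal_baseline,  # < 1.0 beats own history
    }

# Hypothetical: cloud spend per revenue dollar of $0.062 against an index
# value of $0.070 and a trailing four-quarter internal history.
print(benchmark_position(0.062, 0.070, [0.081, 0.076, 0.071, 0.066]))
```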
KPI. Benchmarked metrics published quarterly. Target: at least three Ring 2 metrics tracked against external benchmarks; at least five tracked against internal historical performance.
3.10 Automation ROI Tracking
Category: Measurement. Enforcement: Adaptive.
Measuring the return on investment from optimization automation and tooling. Adaptive because ROI calculation methodology varies with automation class.
Optimization automation is itself a cost. Right-sizing engines, waste elimination workflows, commitment portfolio managers, and license-utilization scanners all consume engineering time, vendor spend, and operational overhead. The federation's standard requires institutions to measure the ROI of their optimization tooling and refuse to fund tooling that does not produce measurable returns.
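A minimal sketch of the measurement and the refusal decision; the loaded hourly rate and the funding threshold are illustrative assumptions, not federation-published values:

```python
def tooling_roi(measured_savings_usd: float,
                vendor_cost_usd: float,
                engineering_hours: float,
                loaded_hourly_rate_usd: float = 150.0) -> float:
    """Return multiple for one optimization tool over one cycle."""
    total_cost = vendor_cost_usd + engineering_hours * loaded_hourly_rate_usd
    return measured_savings_usd / total_cost

def funding_decision(roi: float, threshold: float = 1.0) -> str:
    """Tools below the threshold are re-scoped or sunset within two cycles."""
    return "fund" if roi >= threshold else "re-scope or sunset"

# The Q11 case from Section 12: $3M measured savings against $1.2M tooling
# cost gives a 2.5x return.
print(funding_decision(tooling_roi(3_000_000, 1_200_000, 0)))  # fund
```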
KPI. ROI measurement coverage and refusal rate. Target: every optimization tool measured against ROI; tools that fail to produce returns either re-scoped or sunset within two cycles.
4 . The Pattern Library
Ring 2 across the five canonical stacks.
| Stack | Ring 2 Pattern |
|---|---|
| Public Cloud | Right-sizing every twenty-four hours. Commitment management with quarterly rebalancing. Idle reclamation inside fourteen days. Spot or Preemptible for non-critical workloads. Architecture review on high-spend systems quarterly. |
| SaaS Portfolio | Overlapping tools collapsed. Seats optimized by actual usage. Contract tier stepped down on low utilization. Feature bundles audited for actual consumption. |
| On-Prem and Hybrid | Workloads migrated to cleaner-grid regions. Old hardware consolidated. Cooling optimization via AI-driven thermal models. Power budgeting per rack. |
| AI and ML | Prompt compression. Model routing (small first, big on fail). Cached inference. Batch windows for non-real-time jobs. Token-budget allocation per tenant. |
| Data Platform | Query optimization enforced. Data lifecycle hot to nearline to cold to archive. Dedup jobs scheduled nightly. Warehouse vs. lake decisions reviewed against access pattern. |
5 . Industry Applications
Cloud Infrastructure. Right-sizing engine wired against cloud-provider utilization. Commitment portfolio managed across reserved instances, savings plans, and committed-use discounts. Workload scheduling for non-critical environments. Cross-region traffic optimization through edge caching.
Software Development. Build-time optimization. Test-environment scheduling. CI/CD pipeline cost analysis. Per-engineer cost telemetry to drive architecture decisions.
SaaS Portfolio. Seat reclamation on offboarding. Tier downgrade on low utilization. Feature-bundle unbundling. Contract consolidation when usage patterns favor a single vendor.
Government. Cost-per-mission unit economics. Fiscal-year commitment alignment. Anti-deficiency-aware optimization. Cross-agency consolidation opportunities under shared-services frameworks.
Supply Chain. Vendor consolidation when usage patterns permit. Logistics route optimization. Inventory-carry cost reduction. Procurement-tier alignment to consumption volume.
AI and ML Operations. Model right-sizing per inference latency requirement. Token-budget allocation per tenant or per use-case. Inference caching for repeated queries. Training-run scheduling against capacity availability.
6 . The Adversarial Audit
Five vectors.
Vector 1: "Show me a workload that is not covered by your right-sizing engine."
The practitioner runs the coverage query and produces zero gaps or, if gaps exist, the time-to-onboard contract for each. If the practitioner cannot run the query, 3.1 has not been claimed.
Vector 2: "Walk me through your commitment portfolio's risk-adjusted savings calculation."
The practitioner produces the portfolio, the methodology, the coverage and utilization metrics, and the risk-adjusted savings. The auditor verifies that the methodology is documented and that the savings number survives the methodology. Claims without methodology fail Ring 2 review.
Vector 3: "What is your unit economic for this product line, and how is it trending?"
The auditor picks an arbitrary revenue-generating product line. The practitioner produces the unit economic, the trend over the last four quarters, and the named owner. If the unit economic does not exist, 3.4 has not been satisfied.
Vector 4: "Show me an architecture review finding from last quarter that is in remediation."
The practitioner produces the finding, the engineering owner, the remediation plan, and the current status. The auditor verifies that the finding is in active remediation rather than archived. Architecture findings that remain unaddressed are failure mode M7.
Vector 5: "Reconcile this license-utilization report against actual usage."
The auditor picks a SaaS subscription. The practitioner produces the seat count, the active-seat utilization, the tier utilization, and the feature utilization. The auditor verifies that all three signals are tracked and that low utilization triggers remediation. Subscriptions tracked only by seat count fail 3.5.
7 . The Working Capital Math
Ring 2's quantitative spine is the relationship between optimization maturity and recoverable waste plus unit-economic improvement.
For an institution with annualized spend $S$ and current Ring 2 phase, the recoverable optimization band is approximately:
$$\text{Recoverable waste} \approx S \times (w_{\text{current}} - w_{\text{target}})$$

where $w_{\text{current}}$ is the institution's current waste rate and $w_{\text{target}}$ is the waste rate of the target maturity phase.
The federation's calibration is that institutions in Phase 1 typically carry 20 to 30 percent waste in cloud and SaaS spend. Phase 4 institutions carry 4 to 8 percent. The compression band is the recoverable waste pool.
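A worked example under this calibration (figures illustrative): an institution with $S = \$100\text{M}$ annualized spend moving from the Phase 1 midpoint ($w_{\text{current}} = 0.25$) to the Phase 4 midpoint ($w_{\text{target}} = 0.06$) carries a recoverable pool of roughly $\$100\text{M} \times (0.25 - 0.06) = \$19\text{M}$ per year.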
| Ring 2 Maturity | Waste Rate Band | Practical Posture |
|---|---|---|
| Phase 1 (Blind) | 20 to 30 percent | No right-sizing engine. No commitment portfolio. No unit economics. Spend grows with consumption without the institution measuring the slope. |
| Phase 2 (Reactive) | 14 to 22 percent | Quarterly cost-cutting campaigns. Idle reclamation when invoices spike. No continuous practice. |
| Phase 3 (Coordinated) | 8 to 14 percent | Right-sizing engine running. Waste elimination wired. Commitment portfolio managed but not yet rebalanced quarterly. |
| Phase 4 (Proactive) | 4 to 8 percent | Continuous optimization across all five domains. Unit economics tracked. Architecture reviews producing remediation. License portfolio actively managed. |
| Phase 5 (Adaptive) | Under 4 percent | Optimization is an engineering practice, not a finance function. Unit economics drive architecture decisions. Benchmarking against external indices. ROI measurement on the optimization tooling itself. |
8 . The 13 Modes of Failure
M1. Right-sizing run quarterly rather than continuously. Remedy: continuous engine with hours-cadence recommendations.
M2. Idle reclamation deferred until invoices spike. Remedy: continuous detection with published SLAs per resource class.
M3. Reserved-instance purchases on forecasts that never materialize. Remedy: commitment portfolio review with quarterly rebalancing.
M4. Unit economics treated as a finance metric rather than an engineering metric. Remedy: unit economics published to engineering teams with named owners and trend reporting.
M5. SaaS subscriptions tracked only by seat count. Remedy: license utilization tracked across active-seat, tier, and feature dimensions.
M6. Workload scheduling absent for non-critical environments. Remedy: scheduling applied to development, staging, and reporting workloads where tolerable.
M7. Architecture reviews producing findings that get archived rather than remediated. Remedy: findings flow into engineering planning with named owners.
M8. Rate optimization treated as a contract-renewal event rather than a continuous discipline. Remedy: continuous rate monitoring against published cards.
M9. Optimization "campaigns" that produce one-time savings that erode by the next quarter. Remedy: continuous engineering practice replacing episodic campaigns.
M10. Optimization claims without published methodology. Remedy: every claim anchored to a documented methodology that survives audit review.
M11. Architecture efficiency confused with right-sizing. Remedy: distinguish component-level right-sizing from system-level architectural redesign.
M12. Benchmarking against external indices but not internal historical performance. Remedy: dual-track benchmarking with both surfaces tracked.
M13. Optimization tooling unfunded after initial deployment. Remedy: ROI measurement on the tooling with re-funding decisions made on measured returns.
9 . Sidebars
> Sidebar 2.A . Right-sizing is the floor, not the goal. Co-authored, signed at ratification. The federation reviews many Ring 2 implementations that proudly report their right-sizing coverage and treat the metric as the institution's optimization claim. Right-sizing is necessary but it is the entry point to Ring 2, not the destination. Mature institutions compete on commitment portfolio strategy, unit economics, architecture efficiency, and rate optimization. Practitioners building Ring 2 should treat right-sizing as the floor they have to clear before the more consequential work begins.
> Sidebar 2.B . The Pinterest discipline. Co-authored, signed at ratification. Pinterest's pre-IPO optimization arc is the federation's reference case for what mature Ring 2 looks like at scale. The lesson is not that the company saved money. The lesson is that the company built a continuous engineering practice that produced a defended commitment structure when external accountability demanded one. Practitioners should expect that their own institutions will face equivalent moments: an IPO, a board scrutiny cycle, a regulatory disclosure, an acquirer's due diligence. Ring 2 is the work of being ready for those moments without a campaign.
> Sidebar 2.C . Why optimization sits inside policy. Co-authored, signed at ratification. Ring 2 sits inside Ring 3 in the methodology's reading order. The reasoning is that policy without optimization produces compliance theatre. A team that complies with the policy but operates an inefficient architecture produces a budget that is compliant on paper and wasteful in operation. Ring 2 is the engineering layer that ensures the institution's policy-compliant operations are also operationally efficient. The two rings together produce governance that is both legal and economical.
10 . The Founder's Annotation Track
> I want the reader to know that the doctrine of Section 2 was where I most resisted the working group's preferred phrasing. The first draft said "every dollar must produce a measurable return." The working group's dissent was that "must work" reads sharper and "produce a measurable return" is a Ring 0 phrasing that pre-empts the Ring 0 chapter. I lost the editorial fight. The current language is the working group's. I think they were right.
>
> Section 3.4 (Unit Economics Tracking) is the section I expect to revise most heavily in v0.2. The current treatment is light on the methodology of unit-economic calculation. The federation's working group on unit economics is producing a reference methodology that will likely become an annex to RING:1000:2026, and this section will integrate the annex when it ratifies. Practitioners building unit economics today should treat the working group's draft as the reference until the annex publishes.
11 . The Capstone Artifact
The Ring 2 capstone is the Optimization Portfolio for the candidate's organization.
The portfolio contains, at minimum:
- The right-sizing engine evidence. Coverage, recommendation throughput, applied-recommendation rate, and the per-resource utilization profile.
- The waste-elimination report. Idle, orphaned, and unused resources with remediation status and the reclamation timeline.
- The commitment portfolio. Coverage, utilization, maturity distribution, vendor concentration, and risk-adjusted savings.
- The unit-economics dashboard. Per-product-line unit cost, the trend, the named owner, and the underlying methodology.
- The license-portfolio report. Per-subscription seat utilization, tier utilization, and feature utilization.
- The workload-scheduling configuration. Scheduled workloads, the schedule logic, and the savings produced.
- The architecture-review findings backlog. Findings, owners, remediation plans, and current status.
- The rate-optimization tracker. Identified opportunities, captured opportunities, and the optimization timeline.
- The benchmarking report. Federation Index references, internal historical references, and the institution's position.
- The optimization-tooling ROI report.
Submitted, signed, and dated. Federation Standards Council reviews. Accepted portfolios are filed against the candidate's CFO-R credential and contribute to the federation's public corpus of Ring 2 reference implementations.
12 . Doctrine Q&A
Fifteen calibrated questions. Forty-eight in the proctored examination.
Q1. A team reports 95 percent right-sizing coverage. Is the institution claiming Phase 4 Ring 2?
A. Not on coverage alone. Phase 4 requires continuous coverage plus unit economics, architecture reviews, license portfolio management, and commitment portfolio rebalancing. Right-sizing coverage is necessary but not sufficient.
Q2. A reserved-instance purchase made eighteen months ago is at 62 percent utilization. What action is required?
A. The portfolio rebalancing under 3.3 should have triggered when utilization dropped below the published band. Remediation is reviewing the purchase against the current consumption curve and either re-allocating or letting the commitment expire without renewal.
Q3. Idle resources are detected weekly with a 30-day reclamation cycle. Is 3.2 satisfied?
A. Detection cadence is acceptable. Reclamation cycle is too long for production-tier idle resources where the federation's reference SLA is fourteen days. Adjust the SLA to match the published target.
Q4. A SaaS subscription has 100 seats with 78 active. The subscription is at the premium tier where 22 of the seats use only standard-tier features. Which controls are firing?
A. 3.5 (License Right-Sizing) should produce two recommendations: reclaim the 22 unused seats and downgrade the 22 over-tiered seats to standard. The institution can claim 3.5 only if both recommendations are produced and acted upon within the published cadence.
Q5. Unit economics are tracked at the company-aggregate level (cost per active user across the whole company). Has 3.4 been satisfied?
A. Partially. 3.4 requires per-product-line unit economics, not company-aggregate. Aggregate signals do not give engineering teams the granularity to act on. Remediation is decomposition by revenue-generating product line.
Q6. An architecture review identified a $4M annualized cost from cross-region traffic. The finding has been in the backlog for three quarters. Is 3.7 active?
A. No. 3.7 requires findings to flow into engineering planning with active remediation. Three quarters of inaction is failure mode M7. Remediation is escalating the finding to engineering leadership and producing a planning artifact.
Q7. A workload-scheduling configuration shuts down development environments outside business hours. Production is unscheduled. Is 3.6 satisfied?
A. Yes. 3.6 is recommended and applies to non-critical workloads. Production correctly remains unscheduled. The control's coverage target is 30 to 50 percent of eligible workloads.
Q8. Spot or Preemptible instances cover 80 percent of batch workloads. Is the institution operating at Phase 4 or Phase 5?
A. The metric is consistent with Phase 4 or Phase 5. Phase distinction depends on the rest of the Ring 2 surface (commitment portfolio, unit economics, architecture reviews, license portfolio, ROI measurement on the tooling).
Q9. A right-sizing recommendation reduced an instance from m5.4xlarge to m5.large. Three weeks later the workload's latency degraded. What happened?
A. Either the right-sizing did not account for peak-load patterns or the recommendation was applied without regard for the workload's tolerance. Mature 3.1 implementations include rollback paths for right-sizing actions. The remediation is reverting and tightening the engine's evaluation criteria.
Q10. Rate-optimization opportunities are identified annually at contract renewal. Is 3.8 satisfied?
A. No. 3.8 requires continuous evaluation. Annual identification is failure mode M8. Remediation is monitoring consumption against the rate cards continuously and capturing opportunities mid-contract where the contract permits.
Q11. Optimization tooling produces savings of $3M annually at a tooling cost of $1.2M annually. Is the tooling worth funding?
A. Likely yes, depending on the federation's published ROI threshold and the institution's risk profile. The 2.5x return is within typical funding bands but the practitioner should also evaluate the tooling's coverage, the savings durability, and the alternative use of the engineering time.
Q12. Benchmarking against external indices is active. Internal historical performance is not tracked. Has 3.9 been satisfied?
A. Partially. 3.9 requires both external and internal benchmarking. Internal historical performance is the foundation against which external benchmarks are interpreted.
Q13. A right-sizing campaign produced $8M in annualized savings last year. The current year's savings rate is $1M. Is the institution improving?
A. No. The pattern is failure mode M9 (campaign-driven savings that erode). The remediation is moving from episodic campaign to continuous practice.
Q14. A license tier downgrade was identified by 3.5 but the team has not acted because the downgrade requires a contract amendment. What is the federation's standard response?
A. The amendment is a procurement task that should flow through Ring 3's vendor approval workflow. The 3.5 finding is correctly identified; the remediation runs through Ring 3's control 3.3 (Approval Workflow Automation). If the workflow is taking too long, the institution has a Ring 3 latency issue, not a Ring 2 detection issue.
Q15. What is the canonical Ring 2 forensic the federation uses to ground the chapter?
A. The Pinterest pre-IPO optimization arc and S-1 disclosure of 2019. The federation reads the arc as a reference implementation of mature Ring 2 discipline at scale.
End of Chapter 2 . Edition v0.1 draft . Working group review pending . Ratification target Q3 2026 . Public comment window opens at vot.ifo4.org on the chapter publication date.