Core Layer · ai-compute
The capital intensity of intelligence. GPU and TPU fleets, training runs, inference endpoints, model lifecycle, and the per-token economics that now drive line items previously hidden inside platform engineering. This is where the next decade of capital escape will happen if practice is not formalised now.
Score V2 weight: 10%
Layer: Core
Sub-areas: 8
Owner roles: 4
Published playbooks: 3
Sub-areas
Each sub-area is its own ledger. Each has a corresponding chapter in the playbook engine, sequenced for release in order of practitioner demand.
LLM inference cost-per-token
GPU and TPU fleet allocation
Training run governance
Model lifecycle and deprecation
Embedding and vector store costs
Carbon-aware training scheduling
Agent-driven inference budgets
Model card and capital provenance
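The first two sub-areas above, cost-per-token and fleet attribution, reduce to a small tagging-and-rollup exercise. The sketch below is a minimal illustration, not a production ledger: the tag schema, model names, and per-1K-token prices are all hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical per-1K-token prices; real rates come from the provider's price sheet.
PRICE_PER_1K_USD = {"model-large": 0.03, "model-small": 0.002}

@dataclass
class InferenceCall:
    product: str   # e.g. "support-agent" vs "marketing-copy"
    model: str
    cohort: str    # customer cohort tag
    tokens: int

def attribute(calls):
    """Roll tagged calls up into a per-(product, model, cohort) cost ledger."""
    ledger = defaultdict(float)
    for c in calls:
        ledger[(c.product, c.model, c.cohort)] += (
            c.tokens / 1000 * PRICE_PER_1K_USD[c.model]
        )
    return dict(ledger)

calls = [
    InferenceCall("support-agent", "model-large", "enterprise", 12_000),
    InferenceCall("marketing-copy", "model-small", "self-serve", 50_000),
    InferenceCall("support-agent", "model-large", "enterprise", 8_000),
]
for key, cost in attribute(calls).items():
    print(key, f"${cost:.2f}")
```

Once every call carries these three tags, the same rollup answers the per-feature, per-customer, and per-model questions with one grouping key change.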
Practice Spectrum for this domain
Locate yourself on the five stages below, from least to most mature.

Stage 1: AI inference is one consolidated invoice from the provider. Nobody can tell you the cost-per-token of the customer-support agent versus the marketing copy generator.
Stage 2: API keys are split per project, but attribution stops there. Per-feature, per-customer, and per-model cost is unknown.
Stage 3: Every inference call carries a tag for product, model, and customer cohort. Per-feature unit economics are reported monthly.
Stage 4: Cost-per-inference is computed per call, streamed to a real-time ledger, and exposed to the product team. Budget guardrails fire at the agent level.
Stage 5: AI cost is allocated per token, per call, per customer, in real time, with model card lineage and carbon disclosure attached. The agent budget is itself a controlled object.
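The agent-level budget guardrail described in the upper stages of the spectrum can be sketched in a few lines. This is a minimal sketch, assuming a per-call cost feed already exists; the class name, dollar limit, and costs are illustrative.

```python
class AgentBudget:
    """Per-agent spend guardrail; limit and cost values are illustrative."""

    def __init__(self, limit_usd: float):
        self.limit_usd = limit_usd
        self.spent_usd = 0.0

    def charge(self, cost_usd: float) -> None:
        """Record one call's cost; refuse the call once the cap would be breached."""
        if self.spent_usd + cost_usd > self.limit_usd:
            raise RuntimeError("agent budget exhausted")
        self.spent_usd += cost_usd

budget = AgentBudget(limit_usd=1.00)
budget.charge(0.40)
budget.charge(0.40)
try:
    budget.charge(0.40)  # third call would breach the $1.00 cap
except RuntimeError as err:
    print(err)  # prints "agent budget exhausted"
```

Treating the budget as an object with its own state is what makes it a "controlled object" in the Stage 5 sense: its limit, spend, and breach events can all be versioned, audited, and attested.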
Featured Playbooks
Each playbook ends with a measurable Score V2 delta and a signed evidence trail. Reusable, executable, attestable.

AI inference invoices arrive as a single consolidated charge per provider per month. [best · intermediate]
Training runs are scheduled around GPU availability and engineer convenience, not grid carbon intensity. [elite · elite]
Tag policies exist on paper. [elite · advanced]
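The carbon-aware scheduling playbook above amounts to choosing a training start time from a grid carbon-intensity forecast rather than from engineer convenience. A minimal sketch, assuming a forecast is already available; the hours and intensity values below are made up.

```python
# Hypothetical hourly forecast of grid carbon intensity (gCO2/kWh); real data
# would come from a grid operator or a carbon-intensity API.
forecast = {0: 420, 3: 310, 6: 180, 9: 240, 12: 390}

def greenest_start_hour(forecast: dict[int, int]) -> int:
    """Pick the forecast hour with the lowest carbon intensity."""
    return min(forecast, key=forecast.get)

print(greenest_start_hour(forecast))  # → 6
```

A real scheduler would also weigh GPU availability and deadline pressure, but the core move is the same: make carbon intensity an explicit input to the scheduling decision.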
Score V2 contribution
10% of the institutional score.
Sustained AI-compute practice raises Score V2 across the cost, efficiency, and value vectors. The GreenOps playbook also drives the regulatory disclosure modifier.
Maturity pillars affected
Practice in this domain primarily strengthens visibility, allocation, governance, and optimisation. Each playbook names the pillars it specifically exercises.
Owner roles
AI capital practice is shared across these roles. Each role has its own role-aware home view tuned to its primary concerns.
AI and ML Lead: Cost-per-inference governance
Chief Information / Technology Officer: Technology cost trajectory
FinOps Lead: Cost visibility and attribution
Platform Engineer: Policy-as-code coverage
Suggested next move
One playbook. Five days of work. A measurable change in how AI spend is attributed across products, agents, and customer cohorts.
Open the playbook