Beyond GDP: How AI Forces a Complete Rethink of National Productivity Measurement
The most important economic statistics in any country are among the least understood. Productivity growth — measured as total factor productivity (TFP) or labour productivity — determines long-run living standards. Yet the frameworks governments use to measure productivity were designed for economies dominated by factories, machines, and physical capital.
Artificial intelligence exists almost entirely outside those frameworks.
An AI model trained on a million customer interactions is productive capital. It generates measurable economic returns. But under current national accounting standards, it is expensed immediately as a cost, not capitalised as an investment. A large dataset that competitors cannot access is a valuable competitive asset. But it does not appear in capital stock calculations. Organizational knowledge codified into AI systems represents real productive capacity. But it is invisible in GDP statistics.
The result is that national productivity statistics are increasingly disconnected from the actual productive capacity of the economy. We measure what we can see (tangible goods output) but miss what we cannot (digital productivity). This is not a technical problem unique to AI — it is a structural flaw in the measurement framework that AI has made impossible to ignore.
- 0.3%: OECD TFP growth (2024)
- 60%: UK intangible investment as a share of tangible investment
- 92%: S&P 500 value attributable to intangible assets
Why GDP Cannot Capture AI Productivity
Gross domestic product measures the monetary value of goods and services produced in a country during a specific period. It is an aggregate, designed for simplicity and international comparability. But that simplicity comes at a cost: GDP systematically misses entire categories of economic value.
The Free Digital Services Problem
When Google provides search, YouTube provides video, or WhatsApp provides messaging, no transaction occurs. The service is free from the user's perspective. There is no money changing hands, so there is no GDP impact recorded, even though the service has clear economic value.
An economist comparing two firms can capture automation gains when revenue is preserved. Suppose one firm employs 100 customer service representatives and a second has partially automated customer service with an AI chatbot. The first firm has revenue of £10 million with 100 staff; labour productivity is £100k per worker. The second has the same £10 million revenue with 50 staff; labour productivity is £200k per worker. The improvement is captured.
But consider the case where an AI product entirely displaces the need for human interaction. A firm provides a service that previously required humans but now requires only an AI system. If users do not pay for the service (it is free), or pay much less because the cost base has collapsed, GDP declines even though productivity has improved radically.
The measurement framework captures the revenue loss but misses the productivity gain.
This problem predates AI (it has affected the internet economy for decades), but AI has made it acute. As more services are AI-enabled and offered at zero or near-zero marginal cost, the disconnect between economic value created and GDP recorded will widen.
The Intangible Investment Expensing Problem
Under current accounting standards (IAS 38, ASC 350), most intangible investment is expensed immediately. When a firm spends £1 million developing a proprietary AI model, the £1 million is treated as a cost in the period incurred, not capitalised as an investment in productive capital.
This creates a systematic undercount of investment and capital stock. A manufacturing firm that spends £1 million on machinery capitalises it and depreciates it over 10 years. A software firm that spends £1 million on code and AI models expenses it immediately. The same economic activity (investment in productive capital) is treated completely differently depending on whether the capital is tangible or intangible.
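The asymmetry can be made concrete. The sketch below uses illustrative figures only; the five-year useful life is an assumption for the example, not an accounting rule:

```python
def expensed_treatment(spend):
    """Intangible spend treated as a period cost (current practice under
    IAS 38 / ASC 350 for most internally generated intangibles)."""
    return {"investment_recorded": 0.0, "year_one_cost": spend}

def capitalised_treatment(spend, useful_life_years):
    """The same spend capitalised and depreciated straight-line,
    as it would be for machinery."""
    annual_depreciation = spend / useful_life_years
    return {"investment_recorded": spend, "year_one_cost": annual_depreciation}

# £1m spent developing an AI model, assumed 5-year useful life (illustrative).
expensed = expensed_treatment(1_000_000)
capitalised = capitalised_treatment(1_000_000, useful_life_years=5)

# Expensing records zero investment and a £1m year-one cost; capitalising
# records £1m of investment and a £200k first-year depreciation charge.
```

The same cash outflow produces entirely different measured investment and capital stock depending only on whether the asset is tangible or intangible.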
★ Key Takeaway
National accounts measure tangible capital at cost and depreciate it over time. They measure intangible investment as immediate cost. This creates a two-tiered system that systematically undervalues knowledge-intensive, AI-driven economies relative to asset-heavy industries.
The consequence is that countries and firms with high intangible investment (UK, US, Nordics — strong in software, AI, digital services) appear less investment-intensive and less capital-productive than they actually are. And the productivity growth that intangible investment enables does not show up in TFP statistics.
The Quality Adjustment Problem
Productivity statistics are supposed to adjust for quality improvements. If a car manufacturer produces cars with higher reliability, lower emissions, and advanced safety features, the productivity statistician should recognise that each "car" today is not equivalent to each "car" 10 years ago.
In principle, this is right. In practice, quality adjustment is performed selectively and with considerable uncertainty, especially for digital goods. When AI models get better at specific tasks — higher accuracy, lower latency, more nuanced understanding — how should this be reflected in productivity statistics?
There is no standardised method. The productivity statistician might ignore the quality improvement entirely (underestimate productivity) or attempt an ad-hoc adjustment (introduce discretion and comparability issues). Either way, the measurement is imperfect.
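One way a statistician might attempt such an adjustment is to scale raw output by a quality proxy, which immediately exposes the discretion involved. This is a hypothetical illustration, not an official methodology; `quality_index` stands in for whatever accuracy or performance proxy the statistician chooses:

```python
def quality_adjusted_output(units, quality_index, base_quality=1.0):
    """Scale raw unit counts by a quality index relative to a base period.

    The choice of proxy for quality_index (task accuracy, latency,
    user ratings) is exactly the discretionary step described in the text.
    """
    return units * (quality_index / base_quality)

# Same number of AI-handled queries in both years, but measured task
# accuracy rises from 0.80 to 0.92 (illustrative figures).
base_year = quality_adjusted_output(1_000_000, quality_index=0.80)
later_year = quality_adjusted_output(1_000_000, quality_index=0.92)

growth = later_year / base_year - 1  # ~15% measured growth from quality alone
```

A different, equally defensible proxy would yield a different growth figure, which is why cross-country comparability suffers.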
What SNA 2025 Attempts to Address
The System of National Accounts (SNA) is the international standard framework for national accounting, revised periodically to reflect economic changes. SNA 2025 (released early 2025, implementation ongoing) represents the first major revision to explicitly address intangible capital and AI.
Key changes include:
Recognition of data as a capital asset. SNA 2025 formalises that data (purchased or generated) can be treated as capital formation rather than intermediate consumption in certain cases. This is significant: it means that investment in data collection, cleaning, and governance can be capitalised in national accounts.
Expanded treatment of software and R&D. SNA 2025 broadens the scope of what is capitalised as investment in software (not just sold packaged software, but custom development and embedded AI training) and R&D (including experimental AI development).
Organizational capital recognition. SNA 2025 acknowledges organizational capital — the value of documented processes, management practices, and institutional knowledge — as a capital asset that can be measured and capitalised.
AI investment guidance. The revision provides explicit guidance on treating AI model training, fine-tuning, and deployment as capital formation rather than intermediate expense.
These are genuine improvements. They represent a partial closing of the gap between what the economy actually invests in and what national accounts capture.
ℹ Note
SNA 2025 implementation is not automatic. Each country's national statistical office (ONS in the UK, BLS/BEA in the US) must adopt the recommendations and implement them in their national accounts. Full implementation typically takes 3-5 years. Until then, the measurement gap persists.
What SNA 2025 Still Misses
Despite these improvements, SNA 2025 does not fully resolve the measurement problem for AI-driven productivity.
First, measurement methodologies for AI capital are still nascent. SNA 2025 says that data and AI model investment can be capitalised, but does not provide detailed methodologies for determining the productive lifetime of an AI model, the appropriate depreciation curve, or how to value proprietary training data.
Second, the framework still relies on transactions data. GDP measures what is sold and purchased. But much of the value created by AI is not transacted. When a firm uses an internal AI model to improve operations, there is no transaction. When an AI system enhances organizational knowledge, there is no price. These remain invisible to transaction-based accounting systems.
Third, quality adjustment remains incomplete. Even with SNA 2025, the measurement of quality improvements from AI (better predictions, more nuanced language understanding, superior personalisation) is subjective and inconsistent across statistical offices.
Fourth, intangible capital depreciation is poorly understood. Tangible assets depreciate visibly and regularly: a factory depreciates from wear and tear. Intangible assets can appreciate or depreciate in less predictable ways. A proprietary dataset appreciates if it is kept current and curated, or depreciates if it becomes outdated. A trained model depreciates as the patterns it learned cease to reflect current dynamics, or appreciates if it continues to be fine-tuned and improved. Assigning standard depreciation curves to intangible assets is mathematically convenient but economically questionable.
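The two conventional curves make the problem visible. Neither straight-line nor geometric depreciation (the parameters below are illustrative, not drawn from any standard) can represent an asset whose value rises with curation or collapses overnight when its learned patterns go stale:

```python
def straight_line(initial_value, life_years, year):
    """Standard tangible-asset treatment: equal write-down each year."""
    return max(initial_value * (1 - year / life_years), 0.0)

def geometric(initial_value, annual_rate, year):
    """Geometric decay, often applied to fast-obsolescing assets
    such as software. The 30% rate below is purely illustrative."""
    return initial_value * (1 - annual_rate) ** year

# A £1m trained model under the two conventional curves:
for year in range(6):
    sl = straight_line(1_000_000, life_years=5, year=year)
    geo = geometric(1_000_000, annual_rate=0.30, year=year)
    print(f"year {year}: straight-line £{sl:,.0f}, geometric £{geo:,.0f}")
```

Both curves assume monotonic decline at a known rate; for a continuously fine-tuned model or a curated dataset, neither assumption holds.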
Proposals for Better Measurement: Satellite Accounts and Firm-Level Decomposition
The path toward better AI productivity measurement lies in two complementary directions: satellite accounts and firm-level productivity decomposition.
Satellite Accounts for AI
A satellite account is a set of statistics, compiled alongside the standard national accounts, that provides more detailed information on a specific topic. The UK has satellite accounts for tourism, culture, and the digital economy. An AI satellite account would provide:
AI investment by sector. Which sectors are investing in AI at scale? What is the composition of AI spending (models, infrastructure, talent, integration)?
AI capital stock by type. How much proprietary AI capital (trained models, datasets, algorithms) exists in the economy? How is it distributed across sectors?
Decomposed TFP for AI-adopting firms. For firms that have adopted AI, what portion of their productivity improvement is attributable to AI capital vs. labour quality improvements vs. other factors?
AI-related intangible assets. Measurement of organizational capital, training, and capabilities created by AI deployment.
Quality improvements from AI. Attempt to quantify the productivity gains from AI quality improvements (prediction accuracy, naturalness of language, etc.) that traditional GDP measurement misses.
✔ Example
The UK Digital Economy Satellite Account (maintained by the Office for National Statistics) provides detailed measurement of digital sector productivity, ecommerce penetration, and digital skills. An AI-focused satellite account would do the same for AI, providing visibility into AI adoption, investment, and productivity impact that the main national accounts omit.
A satellite account is not perfect — the data collection is resource-intensive, the methodologies are exploratory, and the figures are subject to revision. But it is far superior to ignoring AI productivity altogether.
Firm-Level Productivity Decomposition
At the firm level, the Corrado-Hulten-Sichel framework (developed by economists Carol Corrado, Charles Hulten, and Daniel Sichel) provides a methodology for decomposing measured productivity growth into components:
The Decomposition
Measured productivity growth = labour quality improvements + [capital deepening](/intangibles/glossary/capital-deepening) + intangible capital deepening + multi-factor productivity (residual). By measuring each component, you can isolate the contribution of intangible assets (including AI) to firm-level productivity.
If a SaaS firm reports 20% labour productivity growth, the decomposition might reveal:
- 4% from higher labour quality (better-trained staff)
- 8% from capital deepening (more servers, infrastructure per worker)
- 6% from intangible capital deepening (better AI models, improved data quality, superior processes)
- 2% from residual (unexplained)
This provides a far more granular understanding of where productivity is actually coming from.
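The arithmetic of the decomposition is simple; the hard work lies in estimating each component. A minimal sketch using the SaaS example's figures, with the residual defined as whatever measured growth the other components cannot explain:

```python
def chs_decomposition(measured_growth, labour_quality,
                      capital_deepening, intangible_deepening):
    """Split measured labour productivity growth into the
    Corrado-Hulten-Sichel components; the leftover is the
    multi-factor productivity residual."""
    residual = (measured_growth - labour_quality
                - capital_deepening - intangible_deepening)
    return {
        "labour_quality": labour_quality,
        "capital_deepening": capital_deepening,
        "intangible_deepening": intangible_deepening,
        "residual": residual,
    }

# The SaaS example from the text: 20% growth decomposed.
parts = chs_decomposition(0.20,
                          labour_quality=0.04,
                          capital_deepening=0.08,
                          intangible_deepening=0.06)
# parts["residual"] is the ~2% unexplained component.
```

In practice each input is itself an estimate (labour quality from wage and skills data, deepening terms from capital stock series), so the residual absorbs both genuine multi-factor productivity and measurement error.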
For private equity firms and growth company investors, this type of firm-level decomposition is becoming routine due diligence. It answers the question: "What portion of this firm's productivity growth is attributable to sustainable intangible assets vs. one-off efficiency gains?"
| Productivity Source | Sustainable? | Transferable? | Valuation Impact |
| --- | --- | --- | --- |
| Labour quality improvements | Moderate | Low | Salary cost reflects it |
| Capital deepening (more/better equipment) | High | High | Buyer acquires the capital |
| Intangible capital deepening (AI, data, processes) | High | Moderate to Low | Dependent on documentation and transferability |
| Residual (unexplained) | Unknown | Unknown | Discounted by risk-averse buyers |
Firms that can demonstrate that their productivity growth is driven by sustainable, transferable intangible assets (especially AI-enabled assets with documentation) achieve higher valuations than firms whose growth is driven by unexplained residual.
The Measurement Hierarchy: From National Accounts to Firm Level
Productivity measurement operates at multiple levels, each with different frameworks and limitations:
Level 1: National Accounts (GDP, TFP)
International comparability, but limited detail. Captures transactions-based activity. Misses free services, intangible investment, quality improvements. Updated annually with long lags (often 2+ years).
Level 2: Satellite Accounts
More detailed than main accounts, sector-specific measurement. AI satellite account in development in some countries. Provides visibility into AI investment, capital stock, and productivity impact.
Level 3: Firm-Level Decomposition
Most granular, but dependent on firm-level data sharing. Corrado-Hulten-Sichel framework used by private equity and institutional investors. Reveals which components of productivity are sustainable and attributable to intangible assets.
For policymakers, national accounts matter because they inform macroeconomic policy and international comparisons. For PE investors and growth company leadership, firm-level decomposition matters because it reveals intangible asset quality and sustainable competitive advantage.
The Policy Implication: Why This Matters Beyond Economics
If national productivity statistics cannot measure AI productivity, they cannot guide policy. Policymakers may believe the economy is stagnating (based on flat TFP statistics) when in fact it is being transformed by AI (a change that the statistics cannot yet capture).
The policy consequences are significant:
Misdirected fiscal stimulus. If statistics show low productivity growth, governments might conclude that additional public investment or R&D subsidies are needed. But if the statistics are mismeasured and actual productivity is improving through AI investment, stimulus is unnecessary and inflationary.
Inappropriate monetary policy. Central banks depend on productivity estimates to model potential output and inflation dynamics. If productivity measurement is systematically lagging AI-driven improvements, central banks might underestimate potential output and keep interest rates higher than warranted.
Regulatory overreach. If policymakers believe productivity is stagnant despite large AI investments, they might regulate AI more heavily out of frustration. Better measurement would clarify whether AI is delivering on its productivity promise.
Talent and education policy. If measurement frameworks do not reveal what types of skills are driving productivity, education and training policy cannot respond effectively.
★ Key Takeaway
The quality of national productivity measurement determines the quality of macroeconomic policy. AI productivity is real but invisible in current statistics. Until measurement frameworks catch up, policymakers will be flying partially blind.
What Needs to Happen: The Measurement Reform Roadmap
Based on SNA 2025, OECD guidance, and the state of AI measurement practice, here is what should happen:
Near term (2025-2026):
- Statistical offices adopt SNA 2025 guidelines and begin capitalising data and AI investment in national accounts
- Satellite accounts for AI are launched in leading economies (UK, US, EU, Australia)
- Standardised guidance is issued on AI model depreciation and data valuation
Medium term (2026-2028):
- Firm-level data collection on AI investment and capital becomes routine
- Quality adjustment methodologies for AI-driven improvements are developed and tested
- Corrado-Hulten-Sichel decomposition is applied systematically to AI-adopting sectors
Long term (2028+):
- Intangible asset reporting becomes part of standard financial disclosure (parallel to tangible asset reporting)
- Real-time, granular productivity data becomes available through data sharing and digital reporting
- AI-specific productivity metrics (accuracy improvements, latency reduction, cost-per-output metrics) are integrated into official statistics
Why This Matters for Opagio
The productivity measurement gap is real, persistent, and consequential. It affects how governments measure economic progress, how investors value companies, and how CEOs understand their own operational performance.
At Opagio, we believe that the firms and investors that measure intangible asset productivity with rigour and consistency will be those best positioned to extract value from AI. Measurement discipline is the foundation of capital allocation discipline.
Our growth platform provides firm-level intangible asset measurement (including AI-driven assets) in a standardised framework. We apply methodologies drawn from the Corrado-Hulten-Sichel decomposition and SNA 2025 guidance to help companies measure the productivity impact of their AI investments.
The gap between national accounts (which cannot yet see AI productivity) and firm-level measurement (which can) creates an opportunity. Companies that can quantify their AI-driven intangible asset productivity will achieve premium valuations with informed investors. And as national accounts evolve, the firms that have been disciplined about intangible measurement will be positioned to prove to regulators and policymakers alike that AI productivity is real and measurable.
David Stroll is Co-Founder and Chief Scientist at Opagio, specialising in productivity measurement frameworks and the economics of intangible capital. His work draws on SNA 2025, OECD, and ONS methodologies. He has published research on intangible asset data collection (ESCoE/ONS, 2021), innovation diffusion measurement (ISPIM, 2018), and intangible capital frameworks (Big Innovation Centre, 2017).