The question is deceptively simple, but the answer has profound implications for how enterprise value is measured and reported. When a company invests £50 million to train a proprietary language model, should that expenditure be expensed immediately as a period cost, or capitalised as an intangible asset on the balance sheet? Under current IFRS and US GAAP, the answer in most cases is: expense it. Under the emerging consensus reflected in SNA 2025 and the new FASB guidance, the answer is increasingly: it depends.
This is the accounting gap at the heart of the AI era. The standards by which publicly listed companies report their financial position have not caught up with the economic reality of AI investment. The result is systematically understated asset values, distorted return on capital metrics, and capital allocation decisions made on incomplete information.
IAS 38 (International Accounting Standard 38) governs the recognition and measurement of intangible assets and is the baseline framework under which most non-US companies operate. The standard neither prohibits the capitalisation of intangible assets nor permits it freely: it requires six criteria to be met before an asset can be recognised on the balance sheet.
An intangible asset arising from development activity can only be capitalised if:
Technical feasibility: It must be technically feasible to complete the intangible asset so that it will be available for use or sale. For AI systems, this means the technical team must have demonstrated that the model can achieve specified performance targets, not merely that development is theoretically possible.
Intention to complete: The entity must intend to complete development and use or sell the asset. Future possibility is not enough: there must be contemporaneous evidence of a committed plan. For a company training a model but uncertain whether it will deploy it, this criterion fails.
Ability to use or sell: The entity must have the ability to use the intangible asset internally or to sell it to a third party. For a proprietary model with no external market and unclear internal use case, this becomes contentious.
Future economic benefits: The entity must be able to demonstrate how the asset will generate future economic benefits — whether through cost reduction, revenue generation, or competitive advantage. This is where AI spending becomes difficult to defend under current standards. Training a model creates an asset, but quantifying its future economic contribution is inherently uncertain.
Adequate resources: The entity must have sufficient technical, financial, and other resources to complete development and use or sell the asset. A startup with £2 million invested in training might struggle to demonstrate this for a £50 million model.
Reliable measurement: The entity must be able to reliably measure the development costs attributable to the asset. This is theoretically straightforward — track compute, engineering time, data acquisition — but becomes complex when development spans multiple projects or when compute is shared across AI and non-AI development.
IAS 38 is theoretically applicable to AI development costs, but practical application is fraught. The standard was written for software development (code is measurable, functionality is verifiable) and patent applications (outcomes are definable). AI development is messier: the outputs are probabilistic, reusable across multiple applications, and their future value is speculative.
Consider a company investing in three AI projects. The decision process under IAS 38 looks like this:
| Project | Investment | Technical Feasibility | Intention | Ability to Deploy | Economic Benefit | Resources | Measurement | Decision |
|---|---|---|---|---|---|---|---|---|
| Fine-tuning existing model for customer support | £50K | Yes: proven methodology | Yes: documented roadmap | Yes: internal deployment in Q2 | Measurable cost savings in support | Yes | Yes: engineering time + compute tracked | Capitalise |
| Exploratory R&D into emerging model architecture | £500K | Uncertain: proof-of-concept stage | Uncertain: may not proceed | Unclear: might not be deployable | Speculative: outcome unknown | Limited resources allocated | Difficult: mixed with other research | Expense |
| Training proprietary LLM for competitive advantage | £50M | Yes: milestones achieved | Yes: core strategic asset | Yes: intended for product integration | Strong: trained model will improve product | Yes: dedicated team, committed budget | Difficult: complex allocation, multiple uses | Capitalise or Expense? |
The third case is where current standards break down. A £50 million investment in training a proprietary language model that will power your product for the next five years is economically an asset — it will generate returns over an extended period and has competitive value. Yet IAS 38's requirement for "reliable measurement" of development costs becomes problematic when a single model is used across multiple products, improved continuously, and its contribution to specific revenue streams is probabilistic.
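The all-six-or-nothing logic that drives the table above can be sketched in a few lines. This is a hypothetical illustration only: under IAS 38 each criterion is a matter of professional judgement and evidence, not a boolean flag, and the criterion names below are paraphrased.

```python
# Hypothetical sketch: IAS 38 development-cost recognition as a checklist.
# Capitalisation is only permitted when ALL six criteria are judged to hold.
IAS38_CRITERIA = (
    "technical_feasibility",
    "intention_to_complete",
    "ability_to_use_or_sell",
    "future_economic_benefits",
    "adequate_resources",
    "reliable_measurement",
)

def ias38_treatment(assessment: dict) -> str:
    """Return 'capitalise' only if every criterion is judged to be met."""
    if all(assessment.get(c, False) for c in IAS38_CRITERIA):
        return "capitalise"
    return "expense"

# The exploratory project from the table: feasibility is still unproven,
# so the whole investment falls to expense regardless of the other tests.
exploratory = {c: True for c in IAS38_CRITERIA}
exploratory["technical_feasibility"] = False
print(ias38_treatment(exploratory))  # expense
```

The point the sketch makes is structural: a single failed criterion forces expensing, which is why the ambiguous third case turns entirely on the "reliable measurement" test.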
In September 2025, the US Financial Accounting Standards Board (FASB) issued ASU 2025-06, providing updated guidance on accounting for AI development costs. This is the first major standard revision to address AI specifically, and it represents a significant shift toward capitalisation in certain contexts.
ASU 2025-06 establishes a two-tier framework:
Tier 1: Foundation models and large-scale AI systems can be capitalised if they meet modified recognition criteria.
This is more permissive than IAS 38. A company that has trained a frontier model and can document that it meets performance targets, that it will be deployed in product or sold externally, and that development costs can be tracked — can now capitalise that investment under US GAAP. The threshold is still rigorous, but achievable for serious, committed AI investments.
Tier 2: AI model fine-tuning and incremental improvement can be capitalised if they create distinct, measurable value.
This is entirely new ground relative to previous standards. Fine-tuning a pre-trained model for £50K, measuring the resulting performance improvement, and deploying it operationally can now be capitalised as an intangible asset under ASU 2025-06.
A financial services firm invests £400K to fine-tune a proprietary language model for anti-money laundering detection. They measure that the fine-tuned model improves detection accuracy by 18% and reduces false positives by 34%. They deploy it operationally in their AML systems, replacing a third-party vendor tool. Under ASU 2025-06, this £400K can be capitalised as an intangible asset, amortised over the expected useful life (typically 3-5 years for AI models subject to rapid change). This treatment is now permissible where it previously would have been expensed.
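As a rough arithmetic sketch, here is the resulting amortisation schedule for the £400K asset. Straight-line amortisation and a four-year useful life are assumptions for illustration; both are judgement calls under the standard.

```python
def straight_line_schedule(cost: float, useful_life_years: int) -> list[float]:
    """Spread a capitalised cost evenly over its useful life."""
    annual = cost / useful_life_years
    return [annual] * useful_life_years

# Hypothetical: the £400K fine-tuning asset over an assumed 4-year life.
schedule = straight_line_schedule(400_000, 4)
print(schedule)       # [100000.0, 100000.0, 100000.0, 100000.0]
print(sum(schedule))  # 400000.0 -- fully amortised by year 4
```

The P&L effect of capitalising rather than expensing is therefore timing: £100K per year for four years instead of £400K in year one.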
The System of National Accounts is the global standard for national economic accounting, followed by statistical offices including the ONS, Eurostat, and the BEA. The 2025 revision (SNA 2025) made a watershed decision: data is now recognised as a productive capital asset, and AI systems built on that data are treated as capital formation rather than intermediate consumption.
This has profound implications. When a company invests in collecting, cleaning, and structuring data for use in AI training, or when it invests in the AI system itself, these are now treated by national accountants as capital formation (investment) rather than operating expenditure. This is the opposite of current financial reporting standards, where most of this spending is expensed.
SNA 2025 implementation varies by country. The ONS (UK) will begin reflecting data capitalisation in national accounts from Q3 2026. Eurostat (EU) has committed to adoption across member states by 2027. This means published productivity statistics will begin to shift — organisations will appear more capital-intensive and labour-productivity statistics will improve.
The gap between SNA 2025 (data and AI systems are capital) and IAS 38/ASC 350 (expense most AI costs) is now explicitly documented. For companies reporting under IFRS in countries that follow SNA 2025 methodology, this creates a reporting anomaly: the national accounts will show rising intangible capital, while the company's financial statements will show these same investments as expensed costs.
Even where capitalisation is theoretically permitted, applying it in practice requires overcoming several technical challenges.
Allocation across multiple projects: A compute cluster training multiple models simultaneously generates costs that must be allocated to individual projects. How do you allocate £50 million in compute spending across five different models being trained in parallel? Standard methods (by GPU hours, by parameter count, by training time) are available but imperfect.
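One of the standard methods listed above, allocation in proportion to GPU hours, is simple to sketch. The project names and hour counts below are invented for illustration.

```python
def allocate_by_gpu_hours(total_cost: float,
                          gpu_hours: dict[str, float]) -> dict[str, float]:
    """Split a shared compute bill across projects pro rata to GPU hours."""
    total_hours = sum(gpu_hours.values())
    return {project: total_cost * hours / total_hours
            for project, hours in gpu_hours.items()}

# Hypothetical: £50M of cluster spend across five concurrent training runs.
usage = {"model_a": 400_000, "model_b": 250_000, "model_c": 200_000,
         "model_d": 100_000, "model_e": 50_000}
for project, cost in allocate_by_gpu_hours(50_000_000, usage).items():
    print(f"{project}: £{cost:,.0f}")
```

The mechanics are trivial; the audit difficulty lies in defending the driver itself, since GPU hours, parameter counts, and training time can produce materially different allocations for the same spend.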
The problem of continuous improvement: An AI system is rarely complete and static. The model is trained initially, then fine-tuned, then updated with new data, then re-trained with improved algorithms. Which costs are capitalised (part of the initial asset), and which are expensed (maintenance and improvement of existing asset)? Current standards offer guidance, but application is contentious.
Amortisation and useful life: Once capitalised, an AI model must be amortised over its useful life. What is the useful life of a language model? Three years? Five years? The answer depends on expected technological obsolescence, competitive dynamics, and intended usage. Frontier AI models have historically shortened useful lives (new models render previous generation partly obsolete), but this may be changing as model quality plateaus.
Impairment testing: An expensed cost cannot be impaired — it is already gone. A capitalised asset must be tested for impairment whenever there is evidence of loss of value. If a model is trained at cost and subsequently proves less effective than anticipated, or is superseded by a cheaper alternative, the capitalised amount must be written down.
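The write-down mechanics can be sketched in one function. This is a simplified view: in practice, determining the recoverable amount (the higher of value in use and fair value less costs to sell, under IAS 36) is where the real work sits, and the figures below are invented.

```python
def impairment_charge(carrying_amount: float, recoverable_amount: float) -> float:
    """Write-down required when an asset's recoverable amount falls below
    its carrying amount; zero otherwise."""
    return max(0.0, carrying_amount - recoverable_amount)

# Hypothetical: a model capitalised at £10M and amortised down to £6M,
# now superseded by a cheaper alternative worth only £2.5M in use.
print(impairment_charge(6_000_000, 2_500_000))  # 3500000.0
```

This asymmetry is the flip side of capitalisation: an expensed cost can never produce a later write-down surprise, while a capitalised model must be retested whenever a stronger or cheaper alternative emerges.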
Auditors are the gatekeepers of capitalisation. A company that proposes to capitalise £20 million in AI development costs will face significant auditor scrutiny. The audit firm must be satisfied that the IAS 38 or ASU 2025-06 criteria are met — that the asset is technically feasible, that its future economic benefit is probable, that costs are reliably measured, and that there is credible evidence of intended deployment. This is not impossible, but it requires rigorous documentation and a compelling narrative. Companies seeking to capitalise must build the evidence base before the investment, not after.
A technology company commits £100 million to training a large language model from scratch. They build a dedicated team of 50 researchers and engineers. They acquire compute capacity for two years. They commit to integrating this model into their product line.
IAS 38 treatment: The company can capitalise this investment if it can demonstrate that all six recognition criteria are met, from technical feasibility through reliable measurement of development costs.
Outcome: Capitalisation is defensible. The £100M is recorded as an intangible asset (technology capital) on the balance sheet. It is then amortised over the expected useful life (likely 4-5 years for a frontier model). Annual amortisation expense is £20-25M.
ASU 2025-06 treatment: Capitalisation is explicitly supported under Tier 1 (foundation model). The company must document the defined performance benchmarks and evidence that they have been met.
A professional services firm uses OpenAI's GPT-4 API but decides to fine-tune it with proprietary firm knowledge to improve its performance on firm-specific terminology and methodologies. They invest £50K in data preparation, fine-tuning, and integration.
IAS 38 treatment: This is the boundary case. Technically, the six criteria can be met.
However, auditors would likely challenge capitalisation. The asset's value is questionable: a fine-tuned model is vendor-dependent, not independently valuable, and if the vendor discontinues the base model, the fine-tuning becomes worthless. The investment is also at the lower end of the scale, where the administrative overhead of capitalisation (tracking, amortisation, impairment testing) may exceed the benefit of recognising the asset.
Likely outcome: Expensed as a period cost, despite meeting theoretical criteria.
ASU 2025-06 treatment: Explicitly permissible under Tier 2 if the fine-tuned model demonstrates measurable improvement and is separately tracked. This represents a shift from IAS 38 practice.
A financial services firm allocates £500K to an exploratory research project investigating whether AI can improve fraud detection in payment systems. The outcome is uncertain. They may succeed and deploy the system, or they may conclude that the AI approach is inferior to existing rule-based systems.
IAS 38 treatment: Capitalisation is not defensible.
Outcome: Expense as R&D cost. This is correct treatment. Exploratory research has uncertain outcomes and is properly expensed as incurred.
The capitalisation question and the valuation question are related but distinct. Even if a cost is expensed under current accounting standards, the asset it creates may have genuine value.
Consider the example of the £50 million proprietary language model. Suppose that under IAS 38, the company concludes it cannot reliably measure the future economic benefit and decides to expense the cost. From a balance sheet perspective, the asset is invisible — an intangible asset worth £50 million has not been recorded.
But from a valuation perspective, the asset is very real. If the company is sold, the buyer will pay for the proprietary model. They will conduct due diligence on its technical quality, its competitive advantage, its integration into the product, and its expected useful life. They will then apply a valuation methodology, whether a cost approach (what it cost to build), a market approach (what comparable models trade for), or an income approach (what value it generates), to determine its contribution to enterprise value.
The buyer's valuation of the model might be £50 million (cost recovery), or £200 million (income approach, if the model is critical to product differentiation), or £10 million (if better alternatives have emerged since it was built).
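Most of the spread in those figures comes from the income approach's inputs. Here is a minimal discounted cash flow sketch; the cash flows and discount rate are invented for illustration.

```python
def income_approach_value(annual_benefits: list[float],
                          discount_rate: float) -> float:
    """Present value of the incremental cash flows attributed to the asset."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(annual_benefits, start=1))

# Hypothetical: £60M of incremental product margin per year for five years,
# discounted at 15% to reflect model-obsolescence risk.
value = income_approach_value([60_000_000] * 5, 0.15)
print(f"£{value:,.0f}")
```

Under these assumptions the model is worth roughly £201 million, in the region of the income-approach figure above; halving the assumed cash flows or shortening the horizon collapses the result toward the cost floor, which is why buyer and seller valuations of the same model can differ by an order of magnitude.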
The gap between what the balance sheet shows (nothing — it was expensed) and what the market values (potentially substantial) is where intangible asset measurement becomes critical for M&A, PE transactions, and investor valuation.
Expensing under accounting standards does not diminish economic value. It reflects conservative accounting practice. But it creates a measurement gap: investors and acquirers must look beyond the balance sheet to value the intangible assets that have been created by expensed spending. This is where structured intangible asset valuation becomes essential to informed capital allocation.
The divergence between IAS 38, ASU 2025-06, and SNA 2025 will not persist indefinitely. The IASB (International Accounting Standards Board) is monitoring the ASU 2025-06 changes and considering whether IFRS should converge with US GAAP on AI treatment. The next IFRS revision cycle (likely 2027-2029) will address this.
In the interim, companies face a choice: continue to expense AI investment conservatively, or build the documentation needed to defend capitalisation where the criteria can be met.
The commercial reality is shifting toward capitalisation. Companies that are serious about AI as a capital asset — training proprietary models, building moats through technology capital — need balance sheets that reflect that reality. As SNA 2025 implementation begins and as ASU 2025-06 gains traction, the pressure for IAS 38 convergence will increase.
For boards and CFOs, the decision on AI capitalisation is not purely an accounting one. It is a signal about how the organisation views AI: as an operating expense to be minimised, or as a capital investment to be measured, managed, and valued.
David Stroll is Co-Founder and Chief Scientist at Opagio, specialising in productivity measurement frameworks and the economics of intangible capital. His work draws on SNA 2025, OECD, and ONS methodologies.