AI and IAS 38: When Can You Capitalise AI Development Costs?

The accounting treatment of AI development costs remains one of the most contested areas in financial reporting. Under IAS 38, internally generated intangible assets can only be recognised on the balance sheet when six specific criteria are met. But IAS 38 was drafted in 1998, when "intangible assets" meant patents and brand names — not neural networks, training data pipelines, and reinforcement learning systems.

This article provides practical guidance for CFOs and accountants facing the capitalisation question. For the broader accounting gap and the emerging FASB/SNA response, see Should You Capitalise Your AI Investment?.

  • 85% of AI development costs are expensed immediately (PwC survey)
  • 6 IAS 38 capitalisation criteria must all be met
  • £127B of global corporate AI spending expensed in 2025

The IAS 38 Framework: Research vs Development

IAS 38 distinguishes between research and development phases. Research costs must always be expensed. Development costs can be capitalised — but only when all six recognition criteria are simultaneously satisfied. There is no partial credit.

The distinction between research and development is critical for AI projects because the boundary is far less clear than for traditional R&D. Training a model to test a hypothesis is research. Refining a validated model for deployment is development. But many AI projects iterate continuously between these phases, making clean separation difficult.

★ Key Takeaway

The research/development boundary in AI is not a line — it is a gradient. CFOs must work with technical teams to define project milestones that map to the IAS 38 transition point. Without clear phase gates, all costs default to the research phase and must be expensed.


The Six Criteria Applied to AI

Criterion 1: Technical feasibility of completing the asset

For traditional software, technical feasibility is typically demonstrated through a working prototype or proof of concept. For AI, the question is more nuanced: when is an AI model technically feasible?

A model that achieves target accuracy on a test dataset may still fail in production due to data drift, edge cases, adversarial inputs, or scaling challenges. Auditors are increasingly sceptical of "feasibility" claims based solely on benchmark performance.

Practical guidance: Define technical feasibility in terms of production-grade performance, not research-grade benchmarks. Document the specific performance thresholds, latency requirements, and reliability standards the model must meet. Do not claim feasibility until the model meets these standards in a production-representative environment.
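The phase-gate idea above can be sketched as an explicit, testable check. Everything here is illustrative: the threshold values, metric names, and the `is_feasible` helper are invented for this sketch and are not part of IAS 38 — the point is only that "technical feasibility" should be a documented gate rather than a judgement call.

```python
# Illustrative only: thresholds and metric names are assumptions, not
# IAS 38 requirements. A strong benchmark score alone does not pass.
from dataclasses import dataclass

@dataclass
class FeasibilityGate:
    min_accuracy: float        # production target, not a research benchmark
    max_p99_latency_ms: float  # tail-latency requirement
    min_uptime_pct: float      # reliability standard

def is_feasible(gate: FeasibilityGate, measured: dict) -> bool:
    """True only when every documented threshold is met in a
    production-representative environment."""
    return (
        measured["accuracy"] >= gate.min_accuracy
        and measured["p99_latency_ms"] <= gate.max_p99_latency_ms
        and measured["uptime_pct"] >= gate.min_uptime_pct
    )

gate = FeasibilityGate(min_accuracy=0.92, max_p99_latency_ms=200.0, min_uptime_pct=99.5)

# High accuracy but poor latency and uptime: benchmark-only "feasibility"
lab_results  = {"accuracy": 0.95, "p99_latency_ms": 450.0, "uptime_pct": 97.0}
# Meets all documented production thresholds
prod_results = {"accuracy": 0.93, "p99_latency_ms": 180.0, "uptime_pct": 99.7}

print(is_feasible(gate, lab_results))   # fails despite the better accuracy
print(is_feasible(gate, prod_results))
```

In practice the gate would be agreed between the finance and ML teams and recorded in the project documentation, so that the date on which every threshold was first met marks the start of the capitalisable development phase.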

Criterion 2: Intention to complete and use or sell the asset

This criterion is relatively straightforward for most AI projects — if the organisation is investing in development, there is typically a clear intention to deploy. However, exploratory AI projects where the end use is undefined do not satisfy this criterion. "We're building AI capability" is not an intention — "We are building a customer churn prediction model for the retail division" is.

Criterion 3: Ability to use or sell the asset

The organisation must have the infrastructure, talent, and operational readiness to deploy the AI system. This is not trivial. Many organisations build AI models that never reach production because they lack MLOps capability, data pipelines, or integration pathways.

| Criterion | Traditional software | AI systems | Key difference |
| --- | --- | --- | --- |
| Technical feasibility | Working prototype | Production-grade model performance | AI models may work in the lab but fail in production |
| Intention to complete | Project charter | Specific deployment plan | "Building AI capability" is insufficient |
| Ability to use/sell | Infrastructure exists | MLOps, data pipeline, integration ready | Many AI models never reach production |
| Future economic benefits | Revenue or cost savings | Measurable AI-driven improvement | Must demonstrate causation, not correlation |
| Resources available | Budget and team | ML engineers, compute, training data | Data availability is often the binding constraint |
| Reliable cost measurement | Standard project accounting | AI cost attribution | Shared infrastructure makes attribution complex |

Criterion 4: Probable future economic benefits

The asset must demonstrate probable future economic benefits — through revenue generation, cost savings, or other measurable improvements. For AI, this requires evidence that the model delivers measurable business impact, not just technical performance.

✔ Example

A logistics company develops an AI routing optimisation system. During the development phase, the model demonstrates a 12% reduction in fuel costs on historical route data. The company runs a 60-day live pilot on 20% of its fleet, confirming an 11% reduction in actual fuel costs. At this point — with live operational evidence of economic benefit — the criterion is satisfied. The development costs incurred after the pilot validation (but not before) are eligible for capitalisation.
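The pilot arithmetic in the example can be checked directly. The percentage figures come from the example above; the absolute fuel-spend amounts and the 10% acceptance threshold are invented for illustration.

```python
# Figures: the 11% reduction is from the worked example; the absolute
# amounts and the acceptance threshold are illustrative assumptions.
baseline_fuel_cost = 1_000_000.0   # hypothetical 60-day spend for the pilot fleet
pilot_fuel_cost    =   890_000.0   # observed spend during the live pilot

reduction = (baseline_fuel_cost - pilot_fuel_cost) / baseline_fuel_cost
print(f"Observed reduction: {reduction:.0%}")   # 11%, close to the 12% back-test

# The criterion asks for *probable* future benefit: live operational
# evidence, not just historical back-testing.
benefit_demonstrated = reduction >= 0.10   # illustrative materiality threshold
print(benefit_demonstrated)
```

The key accounting point survives the sketch: only costs incurred after the live evidence exists are eligible for capitalisation, so the pilot completion date becomes a hard boundary in the cost ledger.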

Criterion 5: Adequate resources to complete development

The organisation must have sufficient technical, financial, and human resources to complete the project. For AI, the binding constraint is often data rather than budget. If the training data required for the model is not available, or if key ML engineering talent has departed, this criterion may not be met regardless of financial resources.

Criterion 6: Ability to reliably measure development costs

AI development costs must be attributable to the specific asset. This is challenging when AI teams work on shared infrastructure, when compute resources are pooled across multiple projects, or when data preparation serves multiple models.

Practical guidance: Establish project-level cost tracking from day one. Tag cloud compute costs to specific projects. Track ML engineer time allocation at the project level. Without granular cost tracking, capitalisation is not possible because the "reliable measurement" criterion fails.
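A minimal sketch of the tagging discipline described above. The record fields, project names, and amounts are invented for illustration; the structural point is that any cost record without a project tag cannot be reliably attributed and therefore falls outside criterion 6.

```python
# Illustrative sketch: tag every cost record with a project ID so that
# per-project totals can be produced reliably (criterion 6).
from collections import defaultdict

cost_records = [
    {"project": "churn-model", "category": "compute",  "amount": 12_400.0},
    {"project": "churn-model", "category": "eng-time", "amount": 31_000.0},
    {"project": "routing-opt", "category": "compute",  "amount":  8_750.0},
    {"project": None,          "category": "compute",  "amount":  4_000.0},  # untagged
]

totals = defaultdict(float)
untagged = 0.0
for rec in cost_records:
    if rec["project"] is None:
        untagged += rec["amount"]   # not reliably attributable: must be expensed
    else:
        totals[rec["project"]] += rec["amount"]

print(dict(totals))
print(f"Untagged (must be expensed): {untagged}")
```

In a real environment the same tagging would be enforced at source — cloud cost-allocation tags, timesheet project codes — rather than reconstructed after the fact.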

ℹ Note

Under FRS 102, UK companies have a choice: they can follow IAS 38 treatment for development costs or expense all R&D. Many smaller companies choose the simpler expensing approach, but this understates the balance sheet value of genuine AI assets. See our IAS 38 explained guide for the full standard.


The Practical Decision Framework

Define clear phase gates

Establish explicit milestones that mark the transition from research to development. Document what "technical feasibility" means for each AI project in specific, measurable terms.

Implement project-level cost tracking

Tag all costs — compute, data preparation, engineer time, third-party services — to specific AI projects. Without this, criterion 6 (reliable cost measurement) automatically fails.

Gather economic benefit evidence

Run controlled pilots or A/B tests to demonstrate that the AI system delivers measurable business impact. Document results with sufficient rigour to satisfy auditors.

Assess all six criteria simultaneously

Capitalisation requires all six criteria to be met at the same point in time. If any single criterion is not satisfied, all development costs for that period must be expensed.
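The all-or-nothing logic above reduces to a single conjunction. The criterion names are paraphrased from IAS 38; the data structure and the failing example are illustrative.

```python
# All six IAS 38 criteria must hold at the same point in time.
# A single False means the period's development costs are expensed.
criteria = {
    "technical_feasibility": True,
    "intention_to_complete": True,
    "ability_to_use_or_sell": True,
    "probable_economic_benefit": True,
    "adequate_resources": True,
    "reliable_cost_measurement": False,  # e.g. no project-level cost tags
}

capitalise = all(criteria.values())
failed = [name for name, met in criteria.items() if not met]

print("Capitalise" if capitalise else f"Expense (failed: {failed})")
```

There is no weighting and no partial credit: five criteria at 100% and one at zero gives the same answer as six at zero.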

When Capitalisation Is Not Appropriate

There are AI development scenarios where capitalisation is clearly inappropriate, regardless of the investment magnitude:

  • Exploratory AI research without a defined commercial application
  • Model experimentation where multiple approaches are being tested and none has been validated
  • AI capability building where the output is team knowledge rather than a deployable asset
  • Fine-tuning third-party models where the resulting capability cannot be separated from the provider's platform

In these cases, expensing is not just the conservative choice — it is the correct accounting treatment under IAS 38. The Opagio Growth Platform helps organisations track and categorise AI investments across the capitalisation boundary, ensuring accurate financial reporting while still recognising the strategic value of AI spending.

The Bottom Line

Capitalising AI development costs under IAS 38 is possible but demanding. The six criteria were not designed for AI, and applying them requires careful interpretation, robust evidence, and meticulous cost tracking. Most organisations will find that the majority of their AI spending falls in the research phase and must be expensed. The minority that qualifies for capitalisation requires production-grade evidence of technical feasibility and economic benefit. Get the accounting right — your intangible assets deserve accurate recognition.


David Stroll is Co-Founder and Chief Scientist at Opagio. His research spans productivity economics, intangible capital measurement, and accounting standards for technology assets. Learn more about the Opagio team.


David Stroll — Chief Scientist, Co-Founder

PhD in Productivity | 40 years in strategy and technical systems delivery

Related Articles

AI for CFOs: Financial Reporting Implications of AI Investments
David Stroll · 2026-03-16

AI investments create complex financial reporting challenges across recognition, measurement, disclosure, and impairment. This practical guide helps CFOs navigate the accounting treatment of AI spending, from capitalisation decisions to board-level disclosure.

Read more →
