Scaling with AI: What Growth-Stage Companies Get Wrong About Intangible Asset Investment
Over the past three years, I have worked with dozens of growth-stage companies — Series A through Series C, £5 million to £50 million in revenue — that have made substantial AI investments. Most arrived at my office with the same problem: they had spent heavily on AI tools, hired expensive AI talent, and yet had no clear picture of the return on that investment. More concerning still, they had no framework for explaining that return to prospective investors or acquirers.
The issue is not that AI is not creating value. It is that most growth-stage companies are measuring the wrong things, investing in the wrong order, and failing to build the organisational capital required to turn AI capability into sustainable competitive advantage. The gap between AI investment and AI value is almost entirely explained by intangible asset blindness.
73% of growth-stage companies lack AI ROI measurement (McKinsey, 2025)
£2.4M average annual AI spend among Series B companies
8 months average time to first measurable AI-driven outcome
The Five Mistakes I See Repeatedly
In my experience advising PE-backed businesses and growth-stage companies through exits, five patterns emerge consistently when companies get AI investment wrong. Each is correctable, but only if addressed before investor due diligence or acquisition conversations begin.
Mistake 1: Over-Investing in Tools Without Measuring Impact
The most common error is acquiring AI tools and platforms without first establishing measurement frameworks. A company deploys ChatGPT Enterprise, buys a vector database, subscribes to multiple AI APIs — then has no clear picture of which tool is driving value, which is redundant, or whether the bundle actually improves productivity.
This is particularly damaging with investors because it signals poor capital discipline. When a PE buyer asks "Walk me through your AI spend and the measurable outcomes," the answer should be specific, quantified, and defensible. Instead, growth-stage founders often cannot articulate it at all.
The fix is straightforward: establish a measurement framework first. Define what "success" looks like in operational terms: faster customer onboarding (measure it), reduced support costs (track it), improved forecast accuracy (baseline and monitor it). Only then commit to tooling, and attribute each tool to specific, measurable business outcomes.
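A measurement framework of this kind can be as simple as a scorecard that ties each tool to one baselined metric. The sketch below is illustrative only: the tool names, costs, and metric values are hypothetical, and a real scorecard would draw its figures from finance and operations systems rather than hard-coded values.

```python
from dataclasses import dataclass

@dataclass
class AIInvestment:
    tool: str                       # hypothetical tool name
    annual_cost_gbp: float
    metric: str                     # the business outcome this tool is attributed to
    baseline: float                 # value measured before deployment
    current: float                  # latest measured value
    higher_is_better: bool = True

    def improvement_pct(self) -> float:
        """Relative change vs. baseline, signed so that positive = better."""
        change = (self.current - self.baseline) / self.baseline * 100
        return change if self.higher_is_better else -change

def scorecard(investments: list[AIInvestment]) -> list[tuple[str, float]]:
    """Return (tool, improvement %) sorted worst-first, exposing redundant spend."""
    return sorted(((i.tool, round(i.improvement_pct(), 1)) for i in investments),
                  key=lambda pair: pair[1])

portfolio = [
    AIInvestment("support-copilot", 120_000, "avg support handle time (min)",
                 baseline=18.0, current=13.5, higher_is_better=False),
    AIInvestment("forecasting-model", 90_000, "forecast accuracy (%)",
                 baseline=64.0, current=71.0),
    AIInvestment("vector-db", 60_000, "onboarding time (days)",
                 baseline=14.0, current=14.0, higher_is_better=False),
]
print(scorecard(portfolio))
```

Sorting worst-first is deliberate: the tool with zero measured improvement surfaces at the top of the list, which is exactly the conversation a capital-disciplined board should be having.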
★ Key Takeaway
Growth-stage companies that cannot quantify the impact of individual AI investments lack the measurement discipline that PE buyers expect. Establish baselines and track outcomes before deploying tools, not after.
Mistake 2: Under-Investing in Organisational Capital
This is the most costly mistake, and it is almost always invisible until the company reaches due diligence. A company invests heavily in AI tools and hires talented ML engineers, but fails to invest in the organisational capital required to operationalise AI: documented processes, decision governance, data standards, change management, and training.
The result is that AI capability lives in isolated teams. The ML engineers build something excellent, but it does not integrate into broader business operations. Customer success cannot explain how the AI system works to customers. Finance cannot calculate the cost savings. Product cannot prioritise features because they do not understand the AI constraints.
When a buyer assesses this company, they see expensive ML talent sitting atop a non-scalable operation. They estimate the cost of reorganising, retraining, and operationalising the AI capability post-acquisition, and they discount the offer accordingly.
The fix: organisational capital investment should match or exceed tool investment. For every £1 spent on AI infrastructure and tools, spend £1 on processes, governance, training, and operational integration. Document how AI fits into customer journeys, support processes, and decision workflows. Measure process adoption and user confidence. Build this explicitly as an intangible asset.
Mistake 3: Ignoring Data Quality as a Prerequisite
No AI system performs better than the data it operates on. Yet I have seen multiple growth-stage companies invest in sophisticated AI models while their underlying data infrastructure remains chaotic: siloed spreadsheets, inconsistent definitions, missing values, poor data governance.
The AI system becomes a liability rather than an asset. It produces outputs that cannot be trusted. Users revert to manual processes. The expensive capability sits idle. And when investors inquire about data quality, the answer is evasive.
The prerequisite for meaningful AI investment is clean, structured, governed data. This is unsexy work. It does not attract venture capital or capture investor imagination. But it is the foundation on which all subsequent AI capability depends.
Before deploying AI, conduct a data maturity assessment. Establish data governance standards. Invest in data infrastructure. Only then deploy models. This is not optional — it is the cost of doing AI work that produces measurable value.
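A data maturity assessment starts with basic mechanical checks: how complete are the required fields, and are customer identifiers unique across sources? The function below is a minimal sketch of those two checks only; the field names and sample records are hypothetical, and a full assessment also covers definitions, lineage, access control, and governance.

```python
from collections import Counter

def data_quality_report(records: list[dict], required_fields: list[str],
                        id_field: str = "customer_id") -> dict:
    """Minimal data maturity check: field completeness and duplicate identifiers."""
    n = len(records)
    completeness = {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / n
        for f in required_fields
    }
    id_counts = Counter(r.get(id_field) for r in records)
    duplicates = [k for k, c in id_counts.items() if c > 1]
    return {"rows": n, "completeness": completeness, "duplicate_ids": duplicates}

# Hypothetical CRM extract: one empty email, one null segment, one duplicated ID.
crm = [
    {"customer_id": "C1", "email": "a@x.com", "segment": "smb"},
    {"customer_id": "C2", "email": "", "segment": "mid"},
    {"customer_id": "C1", "email": "a@x.com", "segment": None},
]
report = data_quality_report(crm, ["email", "segment"])
print(report)
```

Running checks like these across every source system, before any model is deployed, is what the data maturity assessment amounts to in practice: a quantified statement of how far the foundation is from being AI-ready.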
✔ Example
I worked with a B2B SaaS company that had invested £600k in a predictive churn model, but the underlying customer data was scattered across four different CRM systems with no unified customer identifier. The model could not run reliably. We spent six months unifying the data infrastructure, then the same model produced 94% accuracy. The AI investment was good; the data foundation was the missing piece. Investors explicitly valued the clean data architecture in their acquisition model.
Mistake 4: Hiring AI Talent Without a Strategy
Growth-stage companies often hire senior ML engineers or AI-focused product managers before they have clearly defined what problems those people should solve. The hire is made defensively — "all our competitors are hiring AI talent, so we should too" — rather than strategically.
The result is expensive talent with nothing to work on, or talent working on problems that do not connect to business strategy. Within 18 months, they leave. The company has burned cash and has nothing to show for it.
The correct approach: define the AI strategy first, then hire to execute it. The strategy should answer specific questions: Which business processes benefit most from AI? Which customer problems can AI solve that competitors cannot? What capabilities need to be built internally vs. acquired off-the-shelf? Only then hire for execution.
Most growth-stage companies need far less AI talent than they assume. A fractional Chief AI Officer and one senior engineer, combined with external partnerships and off-the-shelf tools, often delivers more value than a bloated internal AI team.
Mistake 5: Failing to Build Intangible Asset Visibility for Investors
This is the error that directly impacts exit value. A company has built genuine AI capability — better forecasting models, superior customer churn prediction, more efficient operations. But it has not documented or measured that capability as an intangible asset. There is no asset inventory. No valuation. No structured narrative connecting AI capability to enterprise value.
When investors conduct due diligence, they cannot see the AI-driven competitive advantage. It is not on the balance sheet. It is not quantified in the business case. It exists in the heads of engineers and product teams, which means it disappears the moment those people leave.
The most straightforward way to address this is to conduct a formal intangible asset assessment 12-18 months before a planned fundraise or exit. Identify AI-driven assets: proprietary models, trained datasets, augmented processes, customer relationships enhanced by AI, technology capital. Measure them. Establish valuation frameworks (relief-from-royalty for proprietary models, excess earnings for customer relationships). Document the connection between each asset and business performance.
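For readers unfamiliar with relief-from-royalty, the mechanics are simple: the asset is worth the present value of the after-tax royalties the business avoids paying by owning it rather than licensing it. The sketch below shows the arithmetic; the revenue forecast, royalty rate, tax rate, and discount rate are illustrative assumptions, not benchmarks, and a defensible valuation would source each input from market comparables.

```python
def relief_from_royalty(revenues: list[float], royalty_rate: float,
                        tax_rate: float, discount_rate: float) -> float:
    """PV of after-tax royalty savings attributable to owning the asset."""
    return sum(
        rev * royalty_rate * (1 - tax_rate) / (1 + discount_rate) ** t
        for t, rev in enumerate(revenues, start=1)
    )

# Illustrative inputs only: £10M revenue growing 20%/yr over 5 years,
# 3% royalty rate, 25% tax, 12% discount rate.
revenues = [10_000_000 * 1.2 ** t for t in range(5)]
value = relief_from_royalty(revenues, royalty_rate=0.03,
                            tax_rate=0.25, discount_rate=0.12)
print(f"£{value:,.0f}")  # roughly £1.16M under these assumptions
```

The same discounted-cash-flow skeleton underlies the excess-earnings method mentioned above; the difference is that excess earnings isolates the cash flows attributable to one asset after charging for the contribution of all the others.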
This transforms "we use AI" into "we have built £X of identifiable AI-driven intangible assets that reduce churn by 8%, improve forecast accuracy by 15%, and lower support costs by 22%."
The Five Mistakes Framework: Diagnosis and Fix
| Mistake | Root Cause | Investor Signal | Fix |
| --- | --- | --- | --- |
| Tool sprawl without measurement | Tool-first mindset | "We don't actually know the ROI" | Establish baselines; attribute tools to outcomes |
| Isolated AI capability | Under-invested in processes | "The AI sits in one team" | Operationalise AI; build organisational capital; measure adoption |
| Chaotic data foundation | Skipped data maturity work | "We cannot rely on model outputs" | Audit and govern data; standardise definitions; create unified customer view |
| Wrong talent hire | Defensive hiring | "Expensive AI engineer turned over" | Define strategy first; hire to execute; use external partnerships |
| No intangible asset visibility | No measurement framework | "We cannot see the AI value" | Asset audit; structured valuation; documented business linkage |
Building Investor-Ready AI Capability: The 18-Month Programme
The companies that achieve premium valuations for AI capability are those that have systematically built investor-ready intangible assets. Here is the programme I recommend for growth-stage companies 18-24 months from fundraise or exit.
Months 1-3: AI Strategy and Measurement Framework
Define which business processes benefit most from AI. Establish baseline metrics (cost per transaction, customer churn, forecast accuracy, support time). Create a simple scorecard mapping each AI investment to business outcomes.
Months 4-6: Data Infrastructure and Governance
Audit data maturity. Establish data governance standards. Unify customer identifiers, product definitions, transaction records. Build single source of truth for the data that AI systems will operate on. This is the foundation.
Months 7-12: Operational Integration and Organisational Capital
Document AI-enabled processes. Build decision governance frameworks. Train customer-facing teams on explaining AI-driven recommendations. Create change management programmes. Measure process adoption and user confidence. This is where AI becomes embedded rather than peripheral.
Months 13-15: Intangible Asset Audit and Valuation
Catalogue all AI-driven intangible assets: proprietary models, trained datasets, process improvements, customer relationship enhancements. Apply structured valuation methodologies. Document the linkage between each asset and measurable business outcomes (cost reduction, revenue uplift, competitive positioning).
Months 16-18: Due Diligence Readiness
Package the AI capability narrative into a structured data room section: strategy document, measurement framework, asset inventory, valuation summaries, and business impact case studies. Rehearse investor questions about data quality, model reliability, talent retention, and competitive moat.
ℹ Note
The difference between companies that achieve premium valuations for AI capability and those that do not is almost entirely explained by intangible asset visibility. Investors cannot value what they cannot see. The measurement discipline required to make AI value visible also improves operational execution.
What PE Buyers Actually Value in AI-Capable Portfolio Companies
Private equity firms assess AI capability through a specific lens: de-risked, measurable, sustainable competitive advantage. Here is what moves the needle with sophisticated PE buyers:
Measurable operational impact. Not "we deployed AI," but "we reduced customer acquisition cost by 18% and forecast accuracy improved from 64% to 89%." Outcomes tied to business metrics, with before/after evidence.
Data infrastructure that enables future value creation. A clean, unified customer database with standardised definitions is more valuable than any single AI model because it is the platform for future AI capability. Buyers assess whether the data architecture can support continued AI evolution post-acquisition.
Organisational capability independent of individuals. The AI capability is documented, operationalised, and embedded in processes — not dependent on three brilliant engineers. This reduces acquisition risk and increases the likelihood of realising synergies.
Transparent talent plan. The company has been explicit about AI talent retention risk, has retention agreements in place, and has a succession plan. Buyers want to know exactly which people are essential and what happens if they leave.
Valuation framework for intangible assets. The company has applied consistent, defensible methodologies to value proprietary models, datasets, and customer relationships. When a buyer asks "Why is this company worth £50 million?", the answer should include specific intangible asset values that support the multiple.
The AI-Enabled Exit
The quality of AI investment — not just the size of it — determines exit multiple. Two companies with identical revenue and EBITDA can achieve dramatically different valuations depending on the quality of their AI capability and the visibility of the intangible assets it creates.
The company that invests carefully in AI, builds clean data infrastructure, operationalises AI into processes, measures outcomes consistently, and documents intangible assets rigorously achieves 12-15x EBITDA multiples and attracts strategic acquirers.
The company that deploys AI tools without measurement, isolates AI in specialist teams, ignores data quality, hires defensively, and fails to document intangible assets achieves 7-10x multiples and appeals only to financial buyers.
That gap — 40-50% difference in exit value — is almost entirely explained by intangible asset discipline. The investments required to close that gap are not primarily in AI tools or talent. They are in measurement, governance, and organisational capital.
★ Key Takeaway
Growth-stage companies that approach AI investment with the same discipline they apply to financial management — measurement, governance, documentation, investor communication — build sustainable competitive advantage that markets value. Those that treat AI as a technology category rather than an intangible asset investment programme leave value on the table.
For growth-stage companies at Series B or C, the 18-month window before exit is the critical period to build investor-ready AI capability. The Opagio platform provides the structured framework for this work: asset identification, valuation, measurement, and documentation. The difference is measurable, defensible, and worth millions at exit.
Mark Hillier is Co-Founder and CCO of Opagio. He has spent 30+ years advising businesses through growth and PE exit, with institutional clients including Legal & General, AEW UK Investment Management, and Salmon Harvester. He specialises in making intangible assets visible to investors and structuring businesses for premium valuations.