Valuing AI in M&A: Why 62% of Deals Fail and How Intangible Asset Frameworks Fix It
During my decades in structured finance at NM Rothschild & Sons, we built elaborate due diligence frameworks for asset-backed securities and complex international acquisitions. The frameworks were sophisticated, but they rested on a conceptually simple foundation: the underlying assets could be identified, inspected, and valued using agreed methodologies. You could see the ship in dry dock. You could audit the lease portfolio. You could price the infrastructure asset against comparable transactions.
Technology acquisitions have obliterated that simplicity. When a buyer acquires an AI company, what exactly is it acquiring? The proprietary models? The data that trained those models? The team that built them? The customer relationships the models serve? Traditional M&A frameworks — financial audit, legal due diligence, commercial assessment — were not designed to answer these questions. The result is predictable: 62% of technology and software-weighted M&A deals fail to achieve the financial targets set at acquisition.
The 62% figure comes from McKinsey research on technology deal integration, but the pattern is consistent across deal databases: acquirers overpay for intangible assets they cannot value, discover critical gaps in due diligence they did not know to conduct, and struggle to extract value from acquisitions that looked compelling on the spreadsheet.
- 62% of AI/tech-weighted deals fail post-acquisition targets
- $750B+ in annual software/AI M&A value at risk
- 3 critical due diligence dimensions traditional frameworks miss
The Traditional Framework Failure Mode
Standard M&A due diligence follows a predictable sequence: financial audit, legal review, commercial assessment, and management interviews. For an asset-heavy or service-heavy business, this framework works reasonably well. You can assess the quality of the assets, the enforceability of the contracts, the stability of the customer base, and the capability of the management team.
For an AI company, the same framework becomes dangerously incomplete.
Financial audit assesses cash flows and balance sheets. For an AI company, the balance sheet is nearly empty: some servers (physical capital), perhaps a small office lease, maybe some capitalised software. The enormous value in the business does not appear on the balance sheet — it lives in intangible assets: proprietary models, training datasets, customer relationships, and the organisational knowledge to deploy these at scale. A financial audit that treats the P&L as the primary source of truth will massively undervalue a target that has not yet monetised its AI capability, and will lead the buyer to overpay for one that has, because the intangible assets generating the revenue were never separately valued.
Legal due diligence focuses on contracts and regulatory compliance. For an AI company, the critical legal question is not "are the customer contracts enforceable" — it is "do we actually own the proprietary models and the data that trained them." In acquisition after acquisition, buyers have discovered that the models they believed they were acquiring were trained on licensed data that cannot be transferred, or were built using open-source frameworks with licence terms that constrain commercialisation, or were developed by contractors with residual IP claims. These are legal failures of catastrophic proportions, but they do not show up in standard contract review.
Commercial assessment examines market position and customer stability. For an AI company, the critical commercial question is not "do we have a good customer list" — it is "are these customers dependent on specific AI models, and if we change the models, will the customers stay." A company might have a large customer base that looks solid until the acquirer tries to integrate the acquired AI capability with its own platforms, at which point the customers discover that the unique value they were paying for was dependent on the specific model architecture they cannot replicate at scale. The commercial assessment missed this because it did not assess the model-specific value versus the platform-specific value.
★ Key Takeaway
Traditional M&A due diligence frameworks were designed for asset-heavy or service-heavy businesses. When applied to AI companies, they systematically miss the intangible assets that explain most of the value and risk. The result is mispricing and post-acquisition integration failures — the 62% of deals that fall short of their financial targets.
Three Critical Due Diligence Gaps
The 62% failure rate breaks down into three dominant failure modes, each rooted in a due diligence gap that traditional frameworks do not address.
Gap 1: Model Governance and Data Provenance
When a buyer acquires an AI company, the first question should be: what proprietary models exist, how were they trained, and what data do they depend on?
The reality in most acquisitions is far murkier. Models are often trained on a combination of proprietary data, licensed third-party datasets, and open-source corpora. When ownership of the business changes, the licence terms for third-party data may be triggered, or may prohibit commercialisation under the new owner, or may require renegotiation at significantly higher cost.
I have seen a £50 million acquisition of an AI company where 60% of the model training relied on datasets licensed from a competitor, and the competitor immediately triggered a non-transferability clause in the licence agreement upon news of the acquisition. The buyer was forced to retrain the models using proprietary data alone, which reduced model performance by 25% and extended the integration timeline by 18 months. The value destruction was not a refinancing problem — it was a due diligence problem that should have been identified before the deal closed.
What AI-enhanced due diligence looks like: A rigorous model governance assessment includes:
| Element | Traditional Due Diligence | AI-Enhanced Due Diligence |
| --- | --- | --- |
| Data provenance | Assumed to be owned or licensed | Explicit audit of every dataset source, licence terms, transferability |
| Model inventory | Not assessed | Detailed mapping of all models in production and development |
| Model performance | Not assessed | Baseline performance metrics, degradation timelines, sensitivity analysis |
| Retraining capability | Not assessed | Assessment of whether models can be retrained on proprietary data alone |
| API and integration | Not assessed | Documentation of how models are called, what data flows, what latency/throughput |
| Model dependencies | Not assessed | Explicit mapping of dependencies between models and data pipelines |
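The data provenance audit described above can be sketched as a simple screening routine. This is an illustrative sketch only — the dataset names, fields, and shares below are hypothetical placeholders, not a real audit schema or a real deal's data:

```python
# Illustrative data-provenance screen for an AI due diligence checklist.
# All records and field names are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    source: str                  # "proprietary", "licensed", or "open_source"
    transferable_on_sale: bool   # does the licence survive a change of control?
    share_of_training: float     # fraction of training data this set represents

def provenance_risk(datasets):
    """Flag datasets whose licences may not survive the acquisition."""
    at_risk = [d for d in datasets
               if d.source == "licensed" and not d.transferable_on_sale]
    exposure = sum(d.share_of_training for d in at_risk)
    return at_risk, exposure

# Hypothetical portfolio mirroring the £50m example: 60% of training data
# sits in a licensed dataset with a non-transferability clause.
portfolio = [
    Dataset("customer_logs", "proprietary", True, 0.40),
    Dataset("vendor_corpus", "licensed", False, 0.60),
]
flagged, exposure = provenance_risk(portfolio)
print(f"{len(flagged)} dataset(s) at risk, {exposure:.0%} of training data exposed")
# → 1 dataset(s) at risk, 60% of training data exposed
```

A screen like this does not replace legal review of the licence agreements themselves; it simply forces the target to enumerate every dataset and its transferability before close.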
Gap 2: Team Dependency and Knowledge Transfer Risk
AI company value is often highly concentrated in the technical team that built the models. A traditional due diligence assessment might note the team composition and ask questions about retention. But it does not assess the distribution of knowledge: how much of the model architecture, training methodology, and debugging capability lives in specific individuals versus in documentation and systems.
I have encountered acquisitions where the acquirer identified the head of AI research as critical, negotiated a retention package, and believed the risk was mitigated. But the knowledge of how to adapt the models to different data distributions, how to debug performance degradation, and how to integrate the models with other systems lived elsewhere — in a principal engineer who had already quit before the acquisition closed, in tribal knowledge about hyperparameter choices made in 2021 and never documented, in ad hoc scripts written by interns who had moved on.
What AI-enhanced due diligence looks like:
| Assessment Area | Risk Signal | Mitigation Required |
| --- | --- | --- |
| Model development knowledge | Concentrated in <3 people | Explicit knowledge transfer and documentation programme pre-close |
| Debugging capability | Resides with model builders | Documented debugging frameworks and troubleshooting playbooks |
| Hyperparameter and architecture decisions | Not documented | Complete documentation of why specific choices were made |
| Data pipeline understanding | Resides with data engineers | Pipeline architecture documentation and redundancy assessment |
| Integration experience | Existing team has not integrated with other systems | Proof-of-concept integration before acquisition closes |
✔ Example
In one acquisition I advised on, the acquirer required the AI team to spend 60 days pre-close producing documentation of model architecture, retraining procedures, and integration pathways. The exercise cost £200,000 and put pressure on the deal timetable, but it revealed that the proprietary model's performance advantage was entirely due to a specific retraining procedure that was not patentable, not easily described, and highly dependent on the skill of the person implementing it. That discovery allowed the buyer to adjust the purchase price downward and negotiate extended retention of the key engineer. Without the documentation requirement, the buyer would only have discovered this post-close.
Gap 3: Open-Source Model Risk and Competitive Moat Assessment
A third critical gap is assessment of the proprietary moat around AI models when the underlying models are built on open-source foundations (like Llama, Mistral, or other public models).
Many AI companies claim proprietary AI capability when what they actually have is fine-tuning of a public model using proprietary data or novel training methodology. This is not necessarily without value — fine-tuning can produce genuine performance advantages. But it is radically different from a proprietary base model, and the durability of the advantage is very different.
Open-source model releases can obsolete a fine-tuned competitive advantage in months. A buyer that pays £50 million for a company built on fine-tuned Llama 2 can discover, six months post-acquisition, that Llama 3 has been released, is free, and matches the performance of the acquired company's fine-tuned model without requiring any proprietary data or methodology.
What AI-enhanced due diligence looks like: Assessment of open-source model dependency includes:
- Complete inventory of all open-source models used, their release dates, and their licences
- Explicit assessment of whether proprietary advantage is in the base model or in fine-tuning/integration
- Competitive moat analysis: if a new version of the base model is released by the provider, does the acquired company's competitive advantage persist?
- Licence compliance review: are there restrictions on commercialisation or on derivative works?
Anatomy of a Failed AI Acquisition
To illustrate how these gaps combine to destroy value, here is a realistic example drawn from actual transactions I have advised on.
A buyer acquires an AI company for £40 million. The company has £5 million in annual revenue, is growing 40% year-on-year, and claims proprietary AI capability. The buyer's financial model assumes 70% margins once integration is complete and the AI models are deployed across the buyer's customer base.
Post-acquisition, three problems emerge:
Problem 1: Data provenance crisis. The buyer discovers that the acquired company's model was trained on a dataset licensed from a third-party AI training company. The original agreement included a non-transferability clause. Renegotiation requires a 3x price increase. The buyer must either pay the higher price (reducing integration value by £2.5 million) or retrain the model using proprietary data alone (delaying time to value by 12 months and reducing peak performance by 18%).
Problem 2: Team departure. The principal engineer responsible for model training and optimisation leaves three months post-acquisition. The buyer believed the team was retained, but did not assess whether specific domain knowledge was transferable. Models begin to degrade in performance as new data arrives and nobody understands how to retrain them. The buyer must hire a replacement engineer at significantly elevated cost and accept 6 months of performance degradation while the new engineer gets up to speed.
Problem 3: Competitive moat erosion. Six months after acquisition, the foundation model provider (OpenAI, Anthropic, or another) releases a new base model that performs equivalently to the acquired company's fine-tuned model. The acquirer's competitive advantage evaporates. The buying company cannot sell the integration to customers as a proprietary capability anymore — it is just integration with a public model.
The cumulative effect: the buyer paid £40 million for a company that was supposed to deliver £15 million in value uplift (on a £5 million base growing at 40%). Instead, it faces £2.5 million in unanticipated data licensing costs, 6-12 months of delayed time-to-value, and loss of the proprietary moat that justified the purchase multiple. The deal value destruction is £10-15 million, and the buyer blames "integration challenges" rather than due diligence gaps.
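The cumulative arithmetic of this scenario can be tallied explicitly. The figures below are the illustrative deal numbers from the example above, not market data, and the split between the three problems is only as precise as the narrative allows:

```python
# Back-of-envelope tally of the illustrative failed acquisition (£ millions).
# All inputs come from the worked example above.
expected_uplift = 15.0        # value uplift projected at acquisition

licensing_cost = 2.5          # Problem 1: renegotiated data licence
# Problems 2 and 3 (delayed time-to-value, moat erosion) are harder to pin
# down individually; the scenario puts total destruction at £10-15m, so the
# residual beyond the licensing cost is attributed to them.
destruction_low, destruction_high = 10.0, 15.0

realised_uplift_low = expected_uplift - destruction_high    # worst case
realised_uplift_high = expected_uplift - destruction_low    # best case
print(f"Realised uplift: £{realised_uplift_low:.1f}m to "
      f"£{realised_uplift_high:.1f}m of £{expected_uplift:.1f}m projected")
# → Realised uplift: £0.0m to £5.0m of £15.0m projected
```

In other words, even the best case leaves two-thirds of the projected uplift undelivered — which is how a deal that "looked compelling on the spreadsheet" ends up in the 62%.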
AI-Enhanced Due Diligence Framework
A proper due diligence approach for AI companies adds layers to traditional frameworks without replacing them. Here is the structure:
The Five-Layer AI-Weighted Due Diligence Model
Layer 1: Model Inventory and Governance. Complete catalogue of all models (production, development, deprecated). For each model: source code location, training data sources and licence terms, performance baselines, update frequency, integration points. Assessment of data provenance and transferability risk.
Layer 2: Data Asset Assessment. Detailed audit of all datasets used for training and inference. For each dataset: source, licence terms, volume, quality metrics, governance, sensitivity classification. Assessment of whether datasets are proprietary competitive advantage or commodity elements that could be replaced.
Layer 3: Team Knowledge Mapping. Explicit documentation of where critical knowledge resides. For each major component (model training, data pipeline, integration, operations): which team members carry essential knowledge, how is it documented, what happens if key individuals depart. Knowledge transfer plan and retention requirements.
Layer 4: Competitive Moat Analysis. Assessment of whether proprietary advantage is sustainable or open-source-dependent. For each claimed advantage: is it in the base model, the fine-tuning, the data, or the integration/deployment. How long would it take a well-resourced competitor to replicate using public models and standard techniques.
Layer 5: Integration Readiness. Proof-of-concept integration of acquired models with buyer's systems pre-close. Detailed assessment of latency, throughput, data requirements, and custom integration costs. Identification of incompatibilities or architectural clashes that only appear under integration load.
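As a minimal sketch, the five-layer model can be expressed as a scored checklist. The layer names come from the framework above; the completion flags in the example are hypothetical, and a real engagement would score each layer on far richer criteria than a boolean:

```python
# Minimal sketch: the five-layer model as a scored due diligence checklist.
# Layer names follow the framework above; status values are hypothetical.
LAYERS = [
    "Model inventory and governance",
    "Data asset assessment",
    "Team knowledge mapping",
    "Competitive moat analysis",
    "Integration readiness",
]

def coverage(status: dict) -> float:
    """Fraction of layers with a completed assessment; missing layers count as 0."""
    return sum(1 for layer in LAYERS if status.get(layer)) / len(LAYERS)

# Example: a deal where the last two layers were never assessed.
status = {
    "Model inventory and governance": True,
    "Data asset assessment": True,
    "Team knowledge mapping": True,
    "Competitive moat analysis": False,
    "Integration readiness": False,
}
print(f"Due diligence coverage: {coverage(status):.0%}")
# → Due diligence coverage: 60%
```

The value of even this crude score is that it makes gaps visible: a deal team cannot claim complete diligence while two of the five layers are unassessed.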
The Cost-Benefit of Rigorous Due Diligence
The cost of this level of due diligence is material. A complete AI-enhanced due diligence exercise for a £30-50 million acquisition requires 3-4 months and costs £500,000-£1 million. Staffing it — internal teams plus external advisors (AI researchers, data engineers, integration architects, structuring specialists) — is not trivial.
But the alternative cost of inadequate due diligence is far higher. If rigorous AI-focused due diligence prevents one failed acquisition in every three attempted, the cost of £750,000 per deal is radically cheaper than the value destruction of a £40 million acquisition that fails to deliver 40% of projected value. That is a £15 million cost of failure, and it happens repeatedly across the technology M&A market.
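The cost-benefit claim above can be made concrete with a simple expected-value calculation, using the article's own figures (a ~£750,000 diligence cost, a £15 million cost of failure, and one failed deal prevented in every three):

```python
# Expected-value sketch of the cost-benefit argument (£ millions).
# Assumes rigorous diligence prevents one failed deal in three, per the text.
dd_cost_per_deal = 0.75           # midpoint of the £500k-£1m range
failure_cost = 15.0               # value destruction in the failed-deal example
prevention_rate = 1 / 3           # failures prevented per deal attempted

expected_saving = prevention_rate * failure_cost      # £5.00m per deal
net_benefit = expected_saving - dd_cost_per_deal      # £4.25m per deal
print(f"Expected saving £{expected_saving:.2f}m vs cost £{dd_cost_per_deal:.2f}m "
      f"= net £{net_benefit:.2f}m per deal")
# → Expected saving £5.00m vs cost £0.75m = net £4.25m per deal
```

On these assumptions the diligence spend returns roughly 6-7x in expected value per deal, before counting the deals it helps reprice rather than abandon.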
ℹ Note
Acquirers in other sectors routinised this kind of rigour decades ago. Pharmaceutical companies do extensive clinical and regulatory due diligence because they know the cost of missing risks in Phase III data. Infrastructure investors do asset inspection and engineering assessment because physical assets can hide structural defects. Technology buyers have been slower to develop equivalent discipline around intangible assets, but the cost of that lapse has become too high to ignore.
Building AI-Enhanced Due Diligence Capability
For acquirers without in-house AI expertise, the alternative is to build external advisory relationships with specialists who can conduct this layer of due diligence. The investment in building this capability (or in retaining advisors with this expertise) pays for itself in the first properly evaluated acquisition.
At Opagio, we are building tools specifically designed to support this due diligence process. Our intangible asset valuation framework includes modules for technology capital assessment, data asset inventory, and human capital evaluation — precisely the dimensions that traditional due diligence frameworks miss.
The businesses that will win in technology acquisition going forward will not be those that close deals fastest. They will be those that can thoroughly value the intangible assets they are acquiring, identify integration risks that others miss, and price accordingly.
The 62% failure rate in tech M&A is not destiny. It is a symptom of due diligence frameworks that have not caught up with the intangible-asset reality of modern technology companies. The acquirers that fix this gap will significantly outperform the market.
Tony Hillier is co-founder of Opagio. He holds an MA from Balliol College, Oxford and an MBA with distinction. Tony held executive board positions at NM Rothschild & Sons and GEC Finance, and a non-executive directorship at Financial Security Assurance in New York, where he specialised in structured finance, asset-backed securities, and cross-border tax-leveraged transactions.