Pre-Exit: Preparing the Business for Sale
The 12–18 months before exit define how much value you capture — data room preparation, normalised EBITDA, quality of earnings reports, earn-outs, and intangible asset documentation.
I have spent 30 years advising PE firms on portfolio company management. My clients have included leading mid-market sponsors managing portfolios of 10-20 companies, with enterprise values ranging from £50 million to £500 million per company. The playbook is well-established: identify value creation levers, install financial controls, drive organic growth and bolt-on M&A, and prepare for exit.
But something has changed in the last 18 months. Every founder I talk to claims their company is deploying AI. Every pitch includes a slide on AI capability. Every quarterly review includes an update on AI initiatives. The challenge for operating partners is separating companies that are genuinely building defensible AI intangible assets from those that are pursuing trendy but value-destructive AI-washing.
The distinction matters enormously. A portfolio company that has built genuine AI capability — proprietary models, customer-specific datasets, AI-augmented processes — enters an exit process with significantly higher enterprise value. A portfolio company that has pursued AI-washing without developing measurable competitive advantage exits with wasted capital and diluted brand.
Traditional value creation levers for PE are well-understood: improve operations (reduce costs, improve service delivery), grow revenue (market share gains, add new customer segments), improve margins (pricing, scale). These are familiar, measurable, and repeatable across portfolio companies.
AI represents something different. It is simultaneously a cost reduction opportunity (automate repetitive work), a revenue opportunity (new products, better customer experience), and a strategic asset (defensible competitive advantage). But it is also a domain where the difference between genuine capability and superficial deployment is difficult for a non-technical operating partner to assess.
The challenge is structural. Operating partners are not AI researchers. They cannot and should not evaluate the technical quality of machine learning models or the statistical sophistication of algorithms. What they can do is establish a portfolio-level assessment framework that identifies which companies are building measurable AI value and which are wasting capital.
I recommend that operating partners assess portfolio company AI capability along five dimensions, each measurable without deep technical expertise.
Dimension 1: AI maturity
What you are assessing: Does the company have real AI capability, or is it using generic AI tools (ChatGPT, standard cloud APIs) without differentiation?
The five-level maturity scale:
| Level | Description | Examples | Value Signal |
|---|---|---|---|
| 1. Generic Tools | Using off-the-shelf AI services (OpenAI, Anthropic) without customisation | ChatGPT API plugged into customer support | Low — easily replicable, no competitive moat |
| 2. Customised Integration | Fine-tuning or integrating public models with company-specific data | Proprietary prompts and retrieval systems | Moderate — some defensibility, but limited |
| 3. Proprietary Models | Building custom models trained on proprietary data | Company-specific recommendation engines, prediction models | High — genuine competitive advantage |
| 4. Integrated AI Platform | AI models embedded in core product, tightly integrated with customer workflows | AI fully embedded in customer operations, high switching cost | Very High — significant exit value uplift |
| 5. AI-Driven Business Model | Core business model depends on proprietary AI capability; AI is the primary value proposition | AI as the entire product (like a proprietary research tool or decision-support system) | Extreme — potential for 3-5x multiple uplift |
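The maturity scale above can be encoded as a simple lookup for portfolio screening. This is an illustrative sketch only: the level names and value signals come from the table, while the helper functions and the "level 3 and above is defensible" threshold are my own reading of the scale, not part of the source framework.

```python
# Illustrative encoding of the five-level AI maturity scale.
# Level names and value signals are taken from the table above;
# the is_defensible threshold (level >= 3) is an assumption drawn
# from the table's "genuine competitive advantage" language.

MATURITY_SCALE = {
    1: ("Generic Tools", "Low"),
    2: ("Customised Integration", "Moderate"),
    3: ("Proprietary Models", "High"),
    4: ("Integrated AI Platform", "Very High"),
    5: ("AI-Driven Business Model", "Extreme"),
}

def value_signal(level: int) -> str:
    """Return the value signal for a 1-5 maturity level."""
    _name, signal = MATURITY_SCALE[level]
    return signal

def is_defensible(level: int) -> bool:
    """Levels 3+ indicate a genuine competitive moat per the scale."""
    return level >= 3
```

A screening pass over a portfolio can then flag, at a glance, which companies sit below the defensibility line and warrant the red-flag questions that follow.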
Red flag questions:
Dimension 2: Data assets
What you are assessing: Does the company own genuinely differentiated data that creates competitive advantage, or is its data commodity or licensed?
Key questions:
Red flag signals:
High-performing companies:
Dimension 3: Customer dependency on AI
What you are assessing: How much of the customer stickiness and willingness to pay is driven by AI capability versus other factors?
Key questions:
Red flag signals:
High-performing companies:
Dimension 4: Capital efficiency
What you are assessing: Is the company investing in AI with discipline and measurable ROI, or is it pursuing AI as an open-ended investment?
Key questions:
Red flag signals:
High-performing companies:
Dimension 5: Technical risk
What you are assessing: How much technical risk and integration complexity does the AI capability introduce, and is the company adequately prepared for it?
Key questions:
Red flag signals:
High-performing companies:
Rather than assessing AI capability in isolation for each company, I recommend that operating partners develop a portfolio-level AI intangible asset dashboard that compares companies and identifies where to allocate operating partner time and capital.
The dashboard tracks each company across the five dimensions, with a simple 1-5 rating:
| Portfolio Company | Maturity | Data Assets | Customer Dependency | Capital Efficiency | Technical Risk | Overall Score | Recommendation |
|---|---|---|---|---|---|---|---|
| Company A | 4 | 4 | 5 | 4 | 3 | 4.0 | Hold — strong AI strategy, minor technical risk. Prepare for exit with AI as value driver. |
| Company B | 2 | 2 | 2 | 3 | 4 | 2.6 | Refocus — AI deployment is undisciplined. Kill initiatives not showing ROI in 12 months. |
| Company C | 1 | 1 | 1 | 1 | 5 | 1.8 | Exit AI strategy. Reposition as service company. AI-washing is creating false expectations. |
| Company D | 3 | 4 | 3 | 3 | 2 | 3.0 | Invest — strong data assets and low technical risk. Opportunity to move from 3 to 4 maturity. |
| Company E | 2 | 3 | 4 | 2 | 3 | 2.8 | Review — customers value AI but capital efficiency is poor. Tighten investment discipline. |
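The dashboard arithmetic is an equal-weighted mean of the five 1-5 ratings, mapped to the three review bands used in the quarterly discussions. A minimal sketch follows — equal weighting is an assumption (the table is consistent with it, but no weighting scheme is stated), and the function names and band wording are illustrative:

```python
# Sketch of the portfolio dashboard scoring: an equal-weighted mean of
# the five 1-5 dimension ratings, mapped to the three review bands.
# Equal weighting is an assumption; the article does not specify weights.

DIMENSIONS = ("maturity", "data_assets", "customer_dependency",
              "capital_efficiency", "technical_risk")

def overall_score(ratings: dict) -> float:
    """Equal-weighted mean of the five dimension ratings, to one decimal."""
    return round(sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS), 1)

def review_band(score: float) -> str:
    """Map an overall score to the quarterly-review band."""
    if score > 3.5:
        return "lean in: position AI as an exit value driver"
    if score >= 2.5:
        return "tighten discipline: require measurable AI ROI"
    return "reposition: drop AI claims, compete on other strengths"

# Ratings from the dashboard rows above.
company_a = {"maturity": 4, "data_assets": 4, "customer_dependency": 5,
             "capital_efficiency": 4, "technical_risk": 3}
company_c = {"maturity": 1, "data_assets": 1, "customer_dependency": 1,
             "capital_efficiency": 1, "technical_risk": 5}

print(overall_score(company_a), review_band(overall_score(company_a)))
```

Recomputing each row this way is a useful sanity check before the dashboard goes into a quarterly pack, and makes the band thresholds explicit rather than implicit.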
The dashboard becomes the framework for quarterly discussions with portfolio company management:
For companies scoring >3.5:
For companies scoring 2.5-3.5:
For companies scoring <2.5:
A PE sponsor with £300 million in AUM across 12 portfolio companies conducted an AI portfolio assessment. The results: 3 companies scored above 3.5, 5 scored between 2.5 and 3.5, and 4 scored below 2.5.
The sponsor's decisions:
For the 3 high-performing companies: Identified them as potential joint exits or consolidation targets. Increased operating partner time investment on go-to-market and customer expansion.
For the 5 mid-performing companies: Required each to establish AI ROI targets for the next 12 months. Three improved to >3.5; two were repositioned away from AI. Capital allocation decisions were now data-driven rather than anecdotal.
For the 4 low-performing companies: Eliminated AI-focused investor communications. Stopped allowing "AI initiative" as a line item in quarterly updates. Three companies refocused on core operations. One company was merged with a stronger technology platform.
The result: exit valuations improved by 15-20% on average because the sponsor could credibly position AI capability where it existed and avoid credibility damage from overstating capability where it did not.
The operating partner's job is not to evaluate AI technology. It is to ensure that portfolio companies are making disciplined, measurable investments in AI as an intangible asset, and that those investments are creating defensible competitive advantage. The framework above enables that assessment without requiring the operating partner to be a data scientist.
The AI intangible asset assessment directly informs exit positioning. A company scoring 4+ in AI maturity should be positioned to buyers as having genuine competitive advantage through proprietary models, defensible data assets, and customer lock-in from AI integration. This supports a premium multiple.
A company scoring <2.5 should not be positioned with AI claims. Buyers will quickly detect overstated AI capability and will discount the valuation as a result. Better to compete on other dimensions — operational efficiency, market position, customer relationships — where the story is credible.
The portfolio dashboard is the tool that separates companies with real AI intangible assets from those with aspirational positioning. For operating partners, that distinction is the difference between exits that command premium multiples and exits that underperform expectations.
Mark Hillier is Co-Founder and Chief Commercial Officer of Opagio. He brings 30+ years of experience advising businesses through growth, scaling, and successful PE exits. His client roster includes Legal & General, AEW UK Investment Management, and Salmon Harvester. He leads go-to-market strategy and client acquisition across the SME and investor markets at Opagio.