The PE Operating Partner's Guide to AI Intangible Assets Across a Portfolio


I have spent 30 years advising PE firms on portfolio company management. My clients have included leading mid-market sponsors managing portfolios of 10-20 companies, with enterprise values ranging from £50 million to £500 million per company. The playbook is well-established: identify value creation levers, install financial controls, drive organic growth and bolt-on M&A, and prepare for exit.

But something has changed in the last 18 months. Every founder I talk to claims their company is deploying AI. Every pitch includes a slide on AI capability. Every quarterly review includes an update on AI initiatives. The challenge for operating partners is separating companies that are genuinely building defensible AI intangible assets from those that are pursuing trendy but value-destructive AI-washing.

The distinction matters enormously. A portfolio company that has built genuine AI capability — proprietary models, customer-specific datasets, AI-augmented processes — enters an exit process with significantly higher enterprise value. A portfolio company that has pursued AI-washing without developing measurable competitive advantage exits with wasted capital and diluted brand.

  • 73% of portfolio companies claim "AI capability" in investor updates
  • 18% have demonstrable competitive advantage from AI
  • 2-3x exit multiple premium for genuine AI capability

The Operating Partner's Challenge

Traditional value creation levers for PE are well-understood: improve operations (reduce costs, improve service delivery), grow revenue (market share gains, add new customer segments), improve margins (pricing, scale). These are familiar, measurable, and repeatable across portfolio companies.

AI represents something different. It is simultaneously a cost reduction opportunity (automate repetitive work), a revenue opportunity (new products, better customer experience), and a strategic asset (defensible competitive advantage). But it is also a domain where the difference between genuine capability and superficial deployment is difficult for a non-technical operating partner to assess.

The challenge is structural. Operating partners are not AI researchers. They cannot and should not evaluate the technical quality of machine learning models or the statistical sophistication of algorithms. What they can do is establish a portfolio-level assessment framework that identifies which companies are building measurable AI value and which are wasting capital.


The Five-Dimension Assessment Framework

I recommend that operating partners assess portfolio company AI capability along five dimensions, each measurable without deep technical expertise.

Dimension 1: AI Capability Maturity

What you are assessing: Does the company have real AI capability, or is it using generic AI tools (ChatGPT, standard cloud APIs) without differentiation?

The five-level maturity scale:

Level 1 — Generic Tools. Using off-the-shelf AI services (OpenAI, Anthropic) without customisation. Example: ChatGPT API plugged into customer support. Value signal: Low — easily replicable, no competitive moat.

Level 2 — Customised Integration. Fine-tuning or integrating public models with company-specific data. Example: proprietary prompts and retrieval systems. Value signal: Moderate — some defensibility, but limited.

Level 3 — Proprietary Models. Building custom models trained on proprietary data. Examples: company-specific recommendation engines, prediction models. Value signal: High — genuine competitive advantage.

Level 4 — Integrated AI Platform. AI models embedded in the core product and tightly integrated with customer workflows. Example: AI fully embedded in customer operations, with high switching costs. Value signal: Very high — significant exit value uplift.

Level 5 — AI-Driven Business Model. The core business model depends on proprietary AI capability; AI is the primary value proposition. Example: AI as the entire product (such as a proprietary research tool or decision-support system). Value signal: Extreme — potential for 3-5x multiple uplift.

Red flag questions:

  • "What would happen if OpenAI released a model that performed equally well on your use case?" If the answer is "we would be at a disadvantage," you are at level 1-2.
  • "How much of your competitive advantage comes from proprietary AI versus proprietary data versus superior implementation of standard techniques?" If AI is <30% of the answer, you are at level 1-2.
  • "If we removed the AI component, what revenue would we lose?" If the answer is "some customer features would degrade," you are at level 1-2. If it is "the entire value proposition collapses," you are at level 4-5.

Dimension 2: Data Asset Quality and Ownership

What you are assessing: Does the company own genuinely differentiated data that creates competitive advantage, or is its data commoditised or licensed?

Key questions:

  • What percentage of the data used for AI is proprietary versus licensed versus public? (>70% proprietary is strong; <40% proprietary suggests a weak moat)
  • How much data does the company accumulate per customer per month? (Higher frequency means stronger competitive advantage, because the data asset improves with scale)
  • Can the company use customer data for model improvement without violating contracts or regulations? (If heavily constrained, the data asset has limited value)
  • How defensible is the data advantage? (Would a new competitor take 6 months or 6 years to accumulate equivalent data?)

Red flag signals:

  • Data governance is an afterthought, not a strategic priority
  • Data quality issues force frequent model retraining or manual intervention
  • Customers own the data the AI is trained on (you are renting, not owning)
  • No structured data collection pipeline — data is manual, sporadic, or fragmented

High-performing companies:

  • Treat data as a strategic asset on par with code
  • Have explicit data governance frameworks
  • Accumulate data at scale with every customer interaction
  • Have a clear roadmap to make the data asset more defensible over time

Dimension 3: Customer Dependency and Retention Impact

What you are assessing: How much of the customer stickiness and willingness to pay is driven by AI capability versus other factors?

Key questions:

  • What percentage of net revenue retention is attributable to AI capability? (>20% is significant)
  • If the AI features were removed, what percentage of customers would stay? (If >80% would stay, the AI is a nice-to-have, not a lock-in)
  • How would customers respond to a 10% price increase specifically for the AI capability? (If customers do not perceive the AI as differentiated, they will not pay for it)
  • Are customers contractually locked into AI performance SLAs? (Multi-year contracts with SLAs indicate customer dependency)

Red flag signals:

  • AI is a feature announcement, but customer churn and NRR are unchanged
  • Customers use the AI but would not object strongly if the feature were removed
  • No measurement of customer willingness to pay specifically for AI
  • High churn after AI deployment suggests the feature did not meet expectations

High-performing companies:

  • Can quantify the revenue attributable to AI capability
  • See measurable improvements in NRR after AI deployment
  • Have customers contractually locked into AI-enabled service levels
  • Can demonstrate customer demand for more AI capability, not demand to remove it

Dimension 4: Capital Efficiency and Runway

What you are assessing: Is the company investing in AI with discipline and measurable ROI, or is it pursuing AI as an open-ended investment?

Key questions:

  • What is the total capital invested in AI over the past 18 months, as a percentage of revenue?
  • What is the measured return on that capital? (Revenue uplift, cost savings, margin improvement)
  • How long is the remaining runway before AI initiatives should show measurable ROI? (6-12 months is healthy; >24 months suggests undisciplined investment)
  • How is AI investment gated and measured? (Clear metrics and disciplined go/no-go decisions are healthy; an open-ended R&D budget is problematic)
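To make the gating questions above concrete, here is a minimal ROI-gate sketch in Python. The figures and the 12-month payback gate are hypothetical examples chosen only to show the arithmetic, not targets from any specific mandate.

```python
# Hypothetical ROI gate for an AI initiative. All figures and the
# 12-month payback threshold are illustrative assumptions.

def ai_roi(capital_invested: float, revenue_uplift: float,
           cost_savings: float) -> float:
    """Annual measured benefit divided by capital deployed."""
    return (revenue_uplift + cost_savings) / capital_invested

def passes_gate(capital_invested: float, annual_benefit: float,
                max_payback_months: float = 12) -> bool:
    """Go/no-go: does the initiative pay back within the gate period?"""
    payback_months = 12 * capital_invested / annual_benefit
    return payback_months <= max_payback_months

# Example: £500k invested; £300k revenue uplift + £150k cost savings per year
print(ai_roi(500_000, 300_000, 150_000))  # 0.9 -> 90% annual return
print(passes_gate(500_000, 450_000))      # ~13.3-month payback -> False
```

A gate like this turns "AI is a strategic investment" into a quarterly go/no-go decision: initiatives that miss the payback threshold get refocused or shut down.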

Red flag signals:

  • "AI is a strategic investment, we are not measuring ROI yet" (after 18+ months)
  • AI investments consuming >5-10% of EBITDA with no clear payoff timeline
  • AI projects that have run for 24+ months without delivering measurable ROI
  • No clear ownership or accountability for AI project outcomes

High-performing companies:

  • Invest in AI with discipline and measurable ROI targets
  • Gate funding based on achievement of milestones
  • Shut down AI initiatives that are not delivering on timelines
  • Have a clear runway to ROI (12-18 months typical)

Dimension 5: Technical Risk and Integration Complexity

What you are assessing: How much technical risk and integration complexity does the AI capability introduce, and is the company adequately prepared for it?

Key questions:

  • How integrated is the AI into the core product and operations? (Deeper integration = higher risk if something fails)
  • How frequently are models updated? (Frequently updated models require continuous monitoring and maintenance; static models are simpler but may degrade over time)
  • How dependent is the company on specific vendors or frameworks? (Dependency on proprietary vendor technology = higher risk; open source = lower risk but potentially less support)
  • How strong is the technical talent bench? (Can the company maintain and improve its AI systems if key people leave?)

Red flag signals:

  • AI is deeply integrated into core product but the company lacks technical depth to maintain it
  • Model updates are ad hoc and not systematically tested before production
  • Heavy dependency on a single AI researcher or engineer
  • No monitoring or alerting on model performance degradation
  • Data infrastructure is fragile or manual

High-performing companies:

  • Have well-documented AI architecture and dependencies
  • Have processes for model monitoring, update, and rollback
  • Have technical bench strength to maintain systems if key people leave
  • Have a clear roadmap for improving and evolving AI capability

The Portfolio Dashboard

Rather than assessing AI capability in isolation for each company, I recommend that operating partners develop a portfolio-level AI intangible asset dashboard that compares companies and identifies where to allocate operating partner time and capital.

The dashboard tracks each company across the five dimensions, with a simple 1-5 rating:

Company A — Maturity 4, Data Assets 4, Customer Dependency 5, Capital Efficiency 4, Technical Risk 3. Overall: 4.0. Recommendation: Hold — strong AI strategy, minor technical risk. Prepare for exit with AI as a value driver.

Company B — Maturity 2, Data Assets 2, Customer Dependency 2, Capital Efficiency 3, Technical Risk 4. Overall: 2.6. Recommendation: Refocus — AI deployment is undisciplined. Kill initiatives not showing ROI in 12 months.

Company C — Maturity 1, Data Assets 1, Customer Dependency 1, Capital Efficiency 1, Technical Risk 5. Overall: 1.8. Recommendation: Exit AI strategy. Reposition as a service company. AI-washing is creating false expectations.

Company D — Maturity 3, Data Assets 4, Customer Dependency 3, Capital Efficiency 3, Technical Risk 2. Overall: 3.0. Recommendation: Invest — strong data assets and low technical risk. Opportunity to move from level 3 to level 4 maturity.

Company E — Maturity 2, Data Assets 3, Customer Dependency 4, Capital Efficiency 2, Technical Risk 3. Overall: 2.8. Recommendation: Review — customers value AI but capital efficiency is poor. Tighten investment discipline.
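The scoring and banding behind the dashboard can be sketched in a few lines of Python. The five dimensions, the simple 1-5 ratings, and the review thresholds (>3.5, 2.5-3.5, <2.5) come from this article; the function and variable names are illustrative, not part of any real tool.

```python
# Sketch of the portfolio AI dashboard scoring. Names are illustrative;
# the dimensions and thresholds follow the framework in the article.
from statistics import mean

DIMENSIONS = ["maturity", "data_assets", "customer_dependency",
              "capital_efficiency", "technical_risk"]

def overall_score(ratings: dict) -> float:
    """Simple unweighted average of the five 1-5 dimension ratings."""
    return round(mean(ratings[d] for d in DIMENSIONS), 1)

def recommendation(score: float) -> str:
    """Map the overall score to the quarterly-review bands."""
    if score > 3.5:
        return "Hold — prepare exit with AI as explicit value driver"
    if score >= 2.5:
        return "Refocus — tighten discipline, set 12-month ROI targets"
    return "Reposition — exit the AI narrative"

# Company B from the dashboard: ratings 2, 2, 2, 3, 4 -> overall 2.6
company_b = dict(zip(DIMENSIONS, [2, 2, 2, 3, 4]))
score = overall_score(company_b)
print(score, recommendation(score))  # 2.6, falls in the refocus band
```

Keeping the scoring this simple is deliberate: a transparent average that management can reproduce is more useful in a quarterly review than a weighted model nobody trusts.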

Using the Dashboard in Quarterly Reviews

The dashboard becomes the framework for quarterly discussions with portfolio company management:

For companies scoring >3.5:

  • These companies are building genuine AI intangible assets
  • Operating partner focus: optimise go-to-market, expand customer base, prepare for exit with AI as explicit value driver
  • Exit positioning: lead with AI capability, highlight data asset defensibility, emphasise customer lock-in from AI integration
  • Acquisition potential: prepare to be acquired by a strategic buyer interested in AI capability (2-3x premium likely)

For companies scoring 2.5-3.5:

  • These companies have AI capability but need focused execution
  • Operating partner focus: improve capital discipline, connect AI investment to revenue/margin impact, reduce technical risk
  • Decision point: if company has not moved from 2.5 to 3.5 in 12 months, consider whether AI strategy is viable or should be redirected

For companies scoring <2.5:

  • These companies are engaged in AI-washing without building defensible assets
  • Operating partner focus: either dramatically refocus AI strategy with clear milestones and discipline, or exit AI strategy entirely
  • Exit strategy: if AI cannot be credibly positioned, exit AI narrative and compete on other dimensions

Real-World Example: Portfolio Review in Action

A PE sponsor with £300 million in AUM across 12 portfolio companies conducted an AI portfolio assessment. Results:

  • 3 companies scored >3.5 in AI maturity
  • 5 companies scored 2.5-3.5 (meaningful AI capability but undisciplined)
  • 4 companies scored <2.5 (effectively AI-washing)

The sponsor's decisions:

For the 3 high-performing companies: Identified them as potential joint exits or consolidation targets. Increased operating partner time investment on go-to-market and customer expansion.

For the 5 mid-performing companies: Required each to establish AI ROI targets for the next 12 months. Three improved to >3.5; two were repositioned away from AI. Capital allocation decisions were now data-driven rather than anecdotal.

For the 4 low-performing companies: Eliminated AI-focused investor communications. Stopped allowing "AI initiative" as a line item in quarterly updates. Three companies refocused on core operations. One company was merged with a stronger technology platform.

The result: exit valuations improved by 15-20% on average because the sponsor could credibly position AI capability where it existed and avoid credibility damage from overstating capability where it did not.

★ Key Takeaway

The operating partner's job is not to evaluate AI technology. It is to ensure that portfolio companies are making disciplined, measurable investments in AI as an intangible asset, and that those investments are creating defensible competitive advantage. The framework above enables that assessment without requiring the operating partner to be a data scientist.


Preparing for Exit with AI Intangible Assets

The AI intangible asset assessment directly informs exit positioning. A company scoring 4+ in AI maturity should be positioned to buyers as having genuine competitive advantage through proprietary models, defensible data assets, and customer lock-in from AI integration. This supports a premium multiple.

A company scoring <2.5 should not be positioned with AI claims. Buyers will quickly detect overstated AI capability and will discount the valuation as a result. Better to compete on other dimensions — operational efficiency, market position, customer relationships — where the story is credible.

The portfolio dashboard is the tool that separates companies with real AI intangible assets from those with aspirational positioning. For operating partners, that distinction is the difference between exits that command premium multiples and exits that underperform expectations.


Mark Hillier is Co-Founder and Chief Commercial Officer of Opagio. He brings 30+ years of experience advising businesses through growth, scaling, and successful PE exits. His client roster includes Legal & General, AEW UK Investment Management, and Salmon Harvester. He leads go-to-market strategy and client acquisition across the SME and investor markets at Opagio.


BSc (Hons) Estate Management, Oxford Brookes | MRICS Chartered Surveyor

