AI Washing: How to Spot Fake AI Claims in Due Diligence

In a market where "AI-powered" claims add a 25-40% valuation premium, the incentive to exaggerate AI capability is enormous. The SEC has made AI washing an enforcement priority, and for good reason: the gap between AI marketing and AI reality is widening as more capital flows into AI-adjacent investments.

This article focuses specifically on due diligence — the structured process of verifying AI claims before committing capital. Whether you are a private equity firm evaluating an acquisition target, a venture investor conducting technical diligence, or a board member assessing management's AI strategy, these techniques will help you separate substance from spin.

  • 40% of EU "AI startups" have no material AI (MMC Ventures)
  • 25-40% valuation premium for "AI-powered" claims
  • $4.3B in SEC AI-related enforcement actions (2024-2025)

The AI Washing Spectrum

Not all AI misrepresentation is outright fraud. AI washing exists on a spectrum, and understanding where a target falls on this spectrum is the first task of due diligence.

  • Level 1: Genuine AI. Real ML models, proprietary training data, measurable impact. Example: custom NLP models trained on domain-specific corpora. Risk level: low.
  • Level 2: AI-assisted. Uses third-party AI APIs with light customisation. Example: GPT API calls with company-specific prompts. Risk level: medium.
  • Level 3: AI-enhanced marketing. Rules-based systems rebranded as "AI". Example: if-then decision trees labelled "machine learning". Risk level: high.
  • Level 4: No AI. Zero AI capability despite AI-forward marketing. Example: static algorithms presented as "AI-powered". Risk level: very high.
★ Key Takeaway

The most common form of AI washing is not fabrication — it is exaggeration. Level 2 and Level 3 companies constitute the majority of AI washing cases. They have some automation or analytics capability but position it as proprietary AI to capture valuation premiums they have not earned.
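To make Level 3 concrete, here is a minimal, hypothetical sketch of what such a system often looks like under the hood: fixed thresholds with no training data and no learned parameters, despite being marketed as "AI-powered churn prediction". The function name and thresholds are invented for illustration.

```python
# Hypothetical "Level 3" system: an if-then rule set marketed as
# "AI-powered churn prediction". There is no model, no training data,
# and no learned parameter anywhere in this code path.
def churn_risk(days_since_login: int, support_tickets: int) -> str:
    if days_since_login > 30 or support_tickets >= 3:
        return "high"
    if days_since_login > 14:
        return "medium"
    return "low"

print(churn_risk(45, 0))  # a threshold firing, not a prediction
```

In diligence, asking "show me the training data and the learned parameters" exposes this kind of system in minutes: there is nothing to show.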

Technical Red Flags

The team test

Genuine AI capability requires genuine AI talent. During due diligence, request the organisational chart for the technology function and specifically identify:

  • Machine learning engineers (not software engineers who "also do ML")
  • Data scientists with published research or demonstrable ML project history
  • ML operations (MLOps) engineers responsible for model deployment and monitoring
  • Data engineers maintaining the training data pipeline

If the company claims "AI-powered" products but employs fewer than three people with dedicated ML titles, the claim warrants deep scrutiny. AI is not a side project that a generalist software team builds during spare cycles.

The infrastructure test

Real AI systems leave infrastructure footprints. Ask to see:

  • GPU or TPU usage history and cloud compute bills
  • Model training logs with dates, durations, and resource consumption
  • Model versioning and experiment tracking systems (MLflow, Weights & Biases, etc.)
  • Data storage volumes and growth trajectories

A company genuinely training and deploying ML models will have substantial compute costs, version-controlled model artifacts, and systematic experiment tracking. If none of this exists, the "AI" is likely a thin wrapper around API calls or rules-based logic.
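The evidence items above can be folded into a rough weighted checklist for the diligence team. This is an illustrative sketch; the specific items and weights are assumptions, not an industry standard.

```python
# Illustrative weighted checklist of infrastructure evidence; the
# items and weights are assumptions for this sketch, not a standard.
EVIDENCE = {
    "gpu_compute_bills": 3,    # sustained GPU/TPU spend on cloud bills
    "training_logs": 3,        # dated training runs with resource usage
    "experiment_tracking": 2,  # MLflow, Weights & Biases or similar
    "model_registry": 2,       # version-controlled model artifacts
    "data_growth_history": 1,  # training data volume over time
}

def infrastructure_score(found: set[str]) -> float:
    """Fraction of weighted evidence the target actually produced."""
    total = sum(EVIDENCE.values())
    return sum(w for item, w in EVIDENCE.items() if item in found) / total

# Target produced compute bills and training logs, nothing else
print(infrastructure_score({"gpu_compute_bills", "training_logs"}))
```

A low score is not proof of washing on its own, but it tells the diligence team exactly where to press for documentation.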

✔ Example

During a recent PE due diligence engagement, a target company claimed to use "proprietary AI" for demand forecasting. When the technical team examined the infrastructure, they found a single Python script running a linear regression model updated quarterly by a business analyst. Monthly cloud compute costs were £47. The company's AI claims had contributed to a £15 million valuation premium that was entirely unjustified.


Commercial Red Flags

Technical due diligence is necessary but not sufficient. Commercial red flags often reveal AI washing more efficiently than code review.

Vague benefit claims

Genuine AI systems produce specific, measurable results. When a company describes its AI impact using vague language — "enhanced efficiency," "improved insights," "next-generation analytics" — without specific metrics, the underlying capability is likely thin.

Ask for three things:

  1. Before-and-after metrics for specific processes where AI has been deployed
  2. A/B test results comparing AI-driven and non-AI-driven approaches
  3. Customer testimonials that reference specific AI-driven improvements (not generic satisfaction)
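For item 2, the arithmetic is straightforward to check yourself. Below is a minimal sketch, using hypothetical conversion numbers, of turning A/B results into a relative uplift and a two-proportion z-score; a z-score above roughly 2 suggests the difference is unlikely to be noise.

```python
from math import sqrt

def ab_uplift(conv_control, n_control, conv_ai, n_ai):
    """Relative uplift and two-proportion z-score for an A/B test
    comparing an AI-driven flow against the existing process."""
    p_c, p_a = conv_control / n_control, conv_ai / n_ai
    pooled = (conv_control + conv_ai) / (n_control + n_ai)
    se = sqrt(pooled * (1 - pooled) * (1 / n_control + 1 / n_ai))
    return (p_a - p_c) / p_c, (p_a - p_c) / se

# Hypothetical: 400/10,000 baseline conversions vs 480/10,000 with AI
uplift, z = ab_uplift(400, 10_000, 480, 10_000)
print(f"uplift={uplift:.1%}, z={z:.2f}")  # 20% uplift, z between 2 and 3
```

If a target cannot produce numbers at this level of granularity, the "AI impact" claim is unmeasured by definition.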

Disproportionate AI marketing

Compare the prominence of AI in the company's marketing with the proportion of its technology budget spent on AI. If AI features prominently in investor decks, website copy, and press releases but represents less than 10% of the technology budget, the company is marketing AI rather than building it.

No model degradation plan

Companies with genuine AI capability have plans for what happens when models degrade, when training data becomes stale, or when the competitive landscape shifts. Ask: "What is your model retraining schedule? How do you detect model drift? What happens if your primary data source becomes unavailable?" Companies that cannot answer these questions do not operate AI systems at scale.
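One drift-detection technique a credible team should be able to describe is the Population Stability Index (PSI), which compares a feature's training-time distribution with its live distribution. The sketch below uses synthetic data; the thresholds quoted are a common rule of thumb, not a standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time (expected) and live (actual)
    distribution. Rule of thumb: < 0.1 stable, 0.1-0.25 moderate
    drift, > 0.25 significant drift."""
    # Bin edges taken from the reference (training-time) distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, 10_000)  # training-time distribution
shifted = rng.normal(1.0, 1.0, 10_000)    # live data has drifted
print(population_stability_index(reference, reference[:5_000]))  # near zero
print(population_stability_index(reference, shifted))  # well above 0.25
```

An answer along these lines, with a named metric, an alert threshold, and a retraining trigger, is what operating AI at scale sounds like.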

⚠ Warning

Do not confuse API integration with proprietary AI. A company that calls the OpenAI API, applies a prompt template, and returns the result is not an "AI company." It is a company that uses AI as a service. This distinction matters enormously for valuation because API-based approaches have no competitive moat — any competitor can replicate the integration in days.


The Due Diligence Protocol

Request the AI capability inventory

Ask the target to list every AI/ML system in production, its purpose, the team responsible, and the measurable business impact. This document reveals scope and specificity simultaneously.

Conduct a technical architecture review

Have an independent ML engineer review the model architecture, training pipeline, data infrastructure, and deployment systems. This cannot be delegated to generalist consultants.

Verify with infrastructure evidence

Cross-reference AI claims against cloud bills, compute usage, model versioning logs, and data storage growth. Infrastructure evidence is harder to fabricate than presentations.

Assess AI talent depth

Review LinkedIn profiles, publication records, and project histories of the AI team. Genuine capability is built by people with demonstrable experience, not by titles on an org chart.

Valuation Implications

AI washing creates a specific valuation risk: the premium paid for AI capability that does not exist. When the AI claims unravel post-acquisition — through failed integration, customer complaints, or regulatory action — the valuation premium evaporates and the acquirer absorbs the loss.

The practical response is to decompose the target's valuation into AI and non-AI components. What is the business worth without any AI capability? What incremental value does verified, genuine AI add? If the gap between the asking price and the non-AI valuation exceeds the verified AI value, the premium is unjustified.
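That decomposition is simple arithmetic. A sketch with hypothetical figures (the function and numbers are invented for illustration):

```python
def unjustified_premium(asking_price, non_ai_value, verified_ai_value):
    """Decompose an asking price into non-AI and AI components and
    return the unjustified premium (zero if the price is supported)."""
    implied_ai_premium = asking_price - non_ai_value
    return max(0.0, implied_ai_premium - verified_ai_value)

# Hypothetical: £60m asking price, £45m business without AI,
# £5m of independently verified AI value
print(unjustified_premium(60e6, 45e6, 5e6))  # £10m of unsupported premium
```

The point of the exercise is not precision; it is forcing the AI premium to be stated explicitly so it can be challenged.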

The Opagio questionnaire provides a structured assessment of technology intangible assets, including AI capability, proprietary data, and human capital in the AI function. It is designed to complement traditional due diligence with systematic intangible asset evaluation.

The Bottom Line

AI washing is a valuation risk disguised as a technology problem. The due diligence protocol — capability inventory, architecture review, infrastructure verification, and talent assessment — provides a systematic defence. In a market where AI claims carry 25-40% valuation premiums, the ability to distinguish genuine AI capability from marketing is worth millions in avoided overpayment. Every investor and board member needs this skill.


Ivan Gowan is Founder and CEO of Opagio. He spent 15 years as a senior technology leader at IG Group (LSE: IGG), where he evaluated hundreds of technology claims from vendors, acquisition targets, and internal teams. Learn more about the Opagio team.


