AI-Washing: How to Tell If Your Portfolio Company's AI Claims Are Real

In March 2024, the SEC charged two investment advisers with making false and misleading statements about their use of artificial intelligence. The companies had marketed themselves as using AI-driven investment strategies when, in reality, no meaningful AI capability existed behind the claims. These were not fringe operations — they were SEC-registered firms managing real capital. Since then, SEC enforcement actions around AI-washing have accelerated, and the European Union's AI Act introduces additional disclosure obligations from August 2026.

This matters far beyond regulatory compliance. AI-washing distorts capital allocation, inflates valuations, and erodes investor trust. When a portfolio company claims to be "AI-powered," the board has a fiduciary obligation to verify that claim. In my experience building and evaluating technology platforms at IG Group over 15 years, the gap between genuine AI capability and AI marketing is often obvious — if you know what to look for.

  • $2.6T: global M&A value in 2025, 28% of it AI-driven
  • 62%: deals that fail to meet financial targets because of poor technical diligence
  • 92%: share of S&P 500 value held in intangible assets

What AI-Washing Actually Looks Like

AI-washing is the practice of overstating or fabricating a company's use of artificial intelligence to attract investment, increase valuations, or gain competitive positioning. It exists on a spectrum from mild exaggeration to outright fraud.

At the mild end, companies rebrand existing rules-based systems as "AI-powered" — a recommendation engine built on simple if-then logic becomes "our proprietary AI," or a basic statistical model gets marketed as "machine learning." At the severe end, companies claim AI capabilities that do not exist at all, using the term purely as a valuation multiplier.

The SEC has been explicit about its position: representing that a product uses AI when it does not is securities fraud. The enforcement signal is clear, and boards that fail to verify AI claims face both regulatory and reputational risk.

⚠ Warning

AI-washing is not limited to startups seeking funding. Established firms across financial services, healthcare, and enterprise software have been found rebranding legacy analytics as AI. The reputational and regulatory consequences apply regardless of company size or maturity.


The 7-Point AI Authenticity Checklist

Over 15 years at IG Group, I evaluated hundreds of technology claims — from vendor pitches to acquisition targets to internal project proposals. The following checklist distils that experience into a practical framework for assessing whether AI claims are genuine.

1. Is there a dedicated ML/AI engineering team?

Genuine AI requires specialist talent. A company claiming AI capability should employ machine learning engineers, data scientists, or AI researchers as distinct roles — not just software developers who have "learned some Python." Ask for the org chart. If there is no dedicated AI function, the claims warrant scrutiny.

2. Can they describe their training data?

Every legitimate AI system is built on data. If a company cannot describe the data their models are trained on — its source, volume, quality controls, labelling methodology, and update frequency — the AI claim is suspect. Proprietary training data is one of the most valuable intangible assets a company can build. Companies with genuine AI capability talk about their data with precision.

3. Do they publish model performance metrics?

Real AI teams measure model performance rigorously: accuracy, precision, recall, F1 scores, AUC-ROC curves, latency benchmarks. If a company cannot produce these metrics — or deflects with vague claims about "continuous improvement" — the AI is likely superficial or non-existent.
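To make the expectation concrete, here is a minimal sketch, in pure Python with made-up labels, of the core classification metrics any genuine ML team should be able to report on a held-out test set:

```python
# Illustrative only: the labels and predictions below are invented,
# not from any real model or company under evaluation.

def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical held-out test set results
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))  # each metric is 0.75 here
```

A team with real capability will produce numbers like these (typically via a library such as scikit-learn) on demand, for a named test set, with the evaluation methodology documented.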

★ Key Takeaway

The single most reliable indicator of genuine AI capability is the ability to produce specific, quantitative model performance metrics. Companies that cannot do this are almost certainly overstating their AI claims. Genuine AI practitioners measure obsessively — it is fundamental to the discipline.

4. Is there evidence of A/B testing or controlled experiments?

Companies that genuinely use AI to drive business outcomes run experiments. They test AI-driven decisions against baselines and measure the delta. If the company has no experimentation infrastructure — no A/B testing framework, no control groups, no measured lift — the AI is either decorative or non-functional.
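As an illustration of what "measured lift" means in practice, here is a minimal sketch of a two-proportion z-test comparing an AI-driven variant against a control. The conversion numbers are hypothetical:

```python
import math

# Illustrative sketch: conversion counts are invented for the example.

def ab_test(conv_control, n_control, conv_variant, n_variant):
    """Return relative lift and z-statistic for a two-proportion test."""
    p_c = conv_control / n_control
    p_v = conv_variant / n_variant
    lift = (p_v - p_c) / p_c
    # Pooled standard error under the null hypothesis of no difference
    p_pool = (conv_control + conv_variant) / (n_control + n_variant)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_variant))
    z = (p_v - p_c) / se
    return lift, z

lift, z = ab_test(conv_control=480, n_control=10_000,
                  conv_variant=560, n_variant=10_000)
print(f"lift = {lift:.1%}, z = {z:.2f}")  # |z| > 1.96: significant at 5%
```

A company that genuinely runs experiments will have this machinery (usually far more sophisticated) built into its platform, and will be able to show the historical results.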

5. What is the AI architecture?

This question separates genuine capability from API wrappers. There are three broad categories:

AI architecture categories

Category | Description | Intangible Asset Value | AI-Washing Risk
Proprietary models | Custom-trained models on proprietary data | High — creates durable competitive advantage | Low
Fine-tuned foundation models | Foundation models (GPT, Claude, Gemini) adapted with company data | Medium — value depends on fine-tuning depth | Medium
API wrappers | Third-party AI APIs called with minimal customisation | Low — no proprietary capability, easily replicated | High

An API wrapper calling OpenAI's API with a branded front-end is not meaningfully "AI-powered." It is a user interface. The value resides in the underlying model, which belongs to someone else. This does not mean API integrations are worthless — but they should not be valued as proprietary AI capability.

6. Can they demonstrate the AI working on unseen data?

Request a live demonstration using data the company has not previously processed. Prepared demos are easy to stage; genuine AI systems perform consistently on novel inputs. If the company resists or offers only scripted demonstrations, treat the resistance as a signal.

7. Is AI mentioned in their engineering job postings?

This is a simple but revealing signal. Companies building real AI capability recruit for it — they post roles for ML engineers, data scientists, MLOps engineers, and AI researchers. If the company's careers page shows no AI-related hiring, the AI claims may be more marketing than engineering.
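The checklist lends itself to a simple scoring rubric. The sketch below is illustrative only: the item names and thresholds are mine, not a formal methodology.

```python
# Illustrative scoring of the 7-point checklist. Item names and the
# low/medium/high thresholds are hypothetical, chosen for the example.

CHECKLIST = [
    "dedicated_ml_team",
    "can_describe_training_data",
    "publishes_performance_metrics",
    "runs_controlled_experiments",
    "proprietary_or_fine_tuned_models",
    "demo_on_unseen_data",
    "hiring_for_ai_roles",
]

def ai_washing_risk(answers: dict) -> str:
    """Map yes/no checklist answers to a coarse risk rating."""
    score = sum(1 for item in CHECKLIST if answers.get(item, False))
    if score >= 6:
        return "low"
    if score >= 4:
        return "medium"
    return "high"

# Example: hiring signal and a team, but no metrics or experiments
answers = {"dedicated_ml_team": True, "hiring_for_ai_roles": True}
print(ai_washing_risk(answers))  # prints "high"
```

The value of a rubric like this is less the number itself than the discipline of asking every question and recording the evidence behind each answer.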


Red Flags and Green Flags

Beyond the 7-point checklist, there are broader patterns that distinguish genuine AI capability from performative claims.

Red Flags

  • "AI-powered" appears in marketing but not in technical documentation
  • No ML engineers or data scientists on staff
  • Cannot explain what type of AI they use (supervised, unsupervised, reinforcement, generative)
  • AI features were added after a funding round announcement
  • Claims of "proprietary AI" but the product could function identically without it
  • No published benchmarks, whitepapers, or technical blog posts
  • Defensive or evasive responses to technical questions

Green Flags

  • Published model performance metrics with methodology
  • Documented training data sources and data governance
  • A/B test results showing AI-driven improvement
  • Dedicated ML engineering team with named leadership
  • Technical blog posts or conference presentations by the AI team
  • Clear explanation of model architecture and trade-offs
  • Active hiring for AI-specific roles

✔ Example

A portfolio company in financial services claimed their "AI-driven risk engine" was a core differentiator. Technical diligence revealed the system was a set of manually maintained rules tables — the same architecture used since the 1990s — with a GPT-based chatbot bolted on top for customer queries. The chatbot was useful but represented roughly 2% of the platform's functionality. The valuation had been premised on AI capability that did not exist in any meaningful sense.

The Intangible Asset Test

There is a deeper principle at work here: genuine AI creates measurable intangible assets. AI-washing does not. When a company builds real AI capability, it accumulates proprietary training data, trained model weights, institutional ML knowledge, and experimentation infrastructure. These are durable assets that compound in value over time and create defensible competitive advantage.

When a company is AI-washing, none of these assets exist. There is no proprietary data pipeline, no model training infrastructure, no institutional knowledge. Remove the marketing language and the product is unchanged.

This is why the Opagio valuator assesses technology capital as a distinct intangible asset category. The presence or absence of genuine AI assets is a material factor in company valuation — and a company that claims AI capability without the underlying assets is, by definition, overvalued.

What Boards Should Do

Boards have a governance obligation to verify AI claims, particularly when those claims influence valuation, investor communications, or regulatory filings. Here are five practical steps.

Commission an independent technical review

Engage an external technical adviser to assess AI claims against the 7-point checklist. This should be independent of the management team making the claims.

Request quarterly AI capability reporting

Require the CTO or Head of AI to present model performance metrics, data quality indicators, and experimentation results quarterly. If they cannot produce these reports, that is itself a finding.

Map AI claims to intangible assets

For every AI capability claimed, ask: what intangible asset does this create? If the answer is "none" — if removing the AI label would not change the product — the claim is cosmetic.

Review AI references in investor materials

Audit pitch decks, investor updates, and public statements for AI claims. Ensure every claim can be substantiated with technical evidence. The SEC is watching.

Benchmark against genuine AI competitors

Compare the company's AI team size, data assets, and published metrics against competitors with verified AI capability. Context reveals whether claims are proportionate.

From My CTO Experience: How I Evaluate AI Claims

At IG Group, I managed technology teams for 15 years and evaluated countless vendor claims, acquisition targets, and internal proposals. The single most reliable signal of genuine technical capability — whether AI or otherwise — is specificity. Teams that have built something real describe it with precision: the architecture, the trade-offs, the failure modes, the metrics. Teams that are performing describe it with adjectives: "cutting-edge," "industry-leading," "next-generation."

The same principle applies to AI. When I hear a company describe their "proprietary AI" in specific terms — the model architecture, the training data source, the performance on held-out test sets, the failure modes they have observed and how they handle them — I am confident the capability is genuine. When the description is abstract and marketing-driven, with no specificity beneath the surface, the capability almost certainly is not.

The Opagio questionnaire is designed to assess exactly these dimensions — whether a company's technology investments are creating genuine, measurable intangible assets or merely generating marketing claims.

The Bottom Line

AI-washing is not merely a regulatory risk — it is a valuation risk. Companies that overstate AI capability attract capital at inflated valuations, creating exposure for investors and boards. The 7-point checklist provides a practical, repeatable framework for distinguishing genuine AI assets from performative claims. In an era where SEC enforcement is accelerating and AI-driven M&A is at record levels, the ability to verify AI claims is a core competency for any investor or board member.


Ivan Gowan is Founder and CEO of Opagio. He spent 15 years as a senior technology leader at IG Group (LSE: IGG), growing the engineering organisation from 4 to 250 people during the company's rise from £300m to £2.7bn. He built IG's first online and mobile trading platforms, launched the world's first Apple Watch trading app, and holds an MSc from Edinburgh (2001), where his research focused on neural networks. Learn more about the Opagio team.

