AI-Washing: How to Detect Performative AI Claims

Companies overstate AI capabilities. They claim data they don't have. They promise productivity gains that don't materialise. The SEC is watching—and enforcement is accelerating. Learn the red flags, the SEC timeline, and how to audit AI claims in your portfolio or team.

The Definition: What Is AI-Washing?

AI-washing is the deliberate or reckless exaggeration of a company's artificial intelligence capabilities, data assets, or AI-driven productivity gains. It mirrors the mechanics of environmental greenwashing—making inflated claims of sustainability to attract capital or boost valuation—but applied to AI.

Three common forms:

1. Feature Exaggeration

Claiming a feature is 'AI-powered' when it is actually rule-based heuristics or simple threshold logic. Calling a demand forecast built on IF-THEN rules 'machine learning' in internal shorthand is sloppy but not, by itself, AI-washing; telling investors it uses deep learning when it does not is.
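
A minimal sketch of the distinction, assuming a hypothetical demand-forecast feature (all names and numbers below are illustrative, not from any real product):

```python
# The same 'demand forecast' implemented two ways; names and numbers
# are hypothetical, for illustration only.

def forecast_rules(last_week_units: float, is_holiday: bool) -> float:
    """Threshold logic. Pitching this to investors as 'machine
    learning' is feature exaggeration: nothing is learned from data."""
    if is_holiday:
        return last_week_units * 1.5
    return last_week_units * 1.1

def forecast_learned(history: list[tuple[float, float]]) -> float:
    """A genuinely fitted model: ordinary least squares on one
    feature. Simplistic, but its parameters come from the data."""
    n = len(history)
    mean_x = sum(x for x, _ in history) / n
    mean_y = sum(y for _, y in history) / n
    var_x = sum((x - mean_x) ** 2 for x, _ in history)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in history)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return intercept + slope * history[-1][0]
```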

2. Capability Inflation

Asserting the AI can do things it cannot. A language model that scores 60% accuracy on internal tests being marketed as 'state-of-the-art' with implied 90%+ performance. A recommendation engine that works for 40% of customer segments being sold as a universal AI solution.

3. Data Claims

Overstating the quantity, quality, or uniqueness of training data. 'Trained on billions of data points' when the actual figure is millions. 'Proprietary dataset' when the data is purchased from a vendor and used by competitors.

★ Key Takeaway

AI-washing is difficult to distinguish from marketing enthusiasm until you demand evidence. The SEC's standard: material misstatement made to investors or the public with knowledge (or reckless disregard) of falsity.

The motivation is straightforward. AI commands valuation premiums: investors pay more for 'AI-enabled companies'. A commodity SaaS tool gains 20–30% valuation uplift if rebranded as AI. A struggling company becomes acquisition bait if it claims breakthrough AI. This is why AI-washing is rampant.


The SEC's First Enforcement Actions (March 2024)

Delphia and Global Predictions: $400K in Combined Penalties

In March 2024, the SEC charged two investment advisers, Delphia and Global Predictions, with making false and misleading statements about their use of AI. Delphia claimed its algorithm used collective client data to sharpen its investment predictions when it did not; Global Predictions marketed itself as the 'first regulated AI financial advisor' and overstated what its platform delivered. The firms settled for a combined $400,000 in civil penalties.

These cases signal a policy shift. The SEC had largely ignored AI claims in the 2020–2023 period; now it is actively investigating. Why? Three reasons:

Prevalence

A 2024 survey found roughly 60% of venture-backed AI startups make capability claims they cannot substantiate in third-party testing. The volume of AI-washing has grown too large to ignore.

Investor Harm

Retail investors have already lost significant capital to AI-washing schemes, from cryptocurrency-based AI scams to AI startups that raised on inflated claims.

Competitive Concern

When honest AI companies compete against washers, the market fails. The SEC views enforcement as a credibility mechanism.

✓ Example

A fintech startup, Beacon AI, claimed its proprietary model predicted stock moves with 73% accuracy. Auditors found the 73% figure was based on cherry-picked backtests on a single stock, with significant look-ahead bias. Out-of-sample accuracy was 52%—no better than random. The SEC fined the company and ordered it to disclose methodology and risk factors in future marketing.
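
The mechanics behind a number like that are easy to reproduce. The sketch below, on synthetic data, shows how cherry-picking a leaky backtest manufactures a headline figure while an honest out-of-sample run stays near chance (the percentages are illustrative and will not match the example exactly; the direction is the point):

```python
# Synthetic demonstration of cherry-picking plus look-ahead bias.
import random

def run_backtest(seed: int, leak_future: bool) -> float:
    """Score a coin-flip 'model' on 250 synthetic daily moves. With
    leak_future=True the prediction peeks at the outcome 10% of the
    time -- a crude stand-in for look-ahead bias in a backtest."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(250):
        actual = rng.choice([+1, -1])
        predicted = rng.choice([+1, -1])
        if leak_future and rng.random() < 0.10:
            predicted = actual  # information leaking from the future
        hits += predicted == actual
    return hits / 250

# 'Marketing' number: the best leaky backtest across 50 candidate stocks.
marketing = max(run_backtest(s, leak_future=True) for s in range(50))
# Honest number: one pre-registered, leak-free out-of-sample run.
honest = run_backtest(seed=999, leak_future=False)
print(f"cherry-picked, leaky backtest: {marketing:.0%}")  # ~60%+
print(f"out-of-sample, no leakage:     {honest:.0%}")     # ~50%
```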


Why AI-Washing Is So Common: The Measurement Gap

Most AI projects fail to deliver measurable productivity gains. Global consulting estimates suggest 90% of AI initiatives show zero or negligible productivity improvement in Year 1. This is the AI productivity paradox—trillions spent, little measured return.

When a company invests in AI and sees no ROI, two paths emerge:

Path A: Honest

Acknowledge the gap, reframe the AI as a long-term investment, adjust expectations, and iterate.

Path B: Washing

Exaggerate the gains achieved, redefine success metrics to make the numbers work, or simply claim the promised gains are still materialising. This buys time and maintains investor confidence.

The measurement gap creates opportunity for washers. Because AI's actual productivity impact is hard to quantify, false claims are harder to disprove—at least until an auditor, acquirer, or regulator demands proof.


The Seven Red Flags Checklist

Use this checklist when evaluating a company's AI claims in due diligence, investor pitches, or board reviews; a minimal scoring sketch follows the list:

  1. Vague AI Claims Without Measurable Output: Phrases like 'AI-powered' or 'leveraging machine learning' without specific metrics, accuracy figures, or use-case outcomes. Legitimate AI claims come with numbers: 'Reduces processing time by 40%', '88% classification accuracy on out-of-sample data', 'Delivers £5M annual cost savings'.
  2. Data Claims Without Source Documentation: 'Trained on millions of data points' without disclosure of where the data came from, licensing terms, or data quality validation. 'Proprietary dataset' claims without exclusive licensing or data-rights agreements to back them.
  3. Capability Claims Unmatched by Patent Filings: Announcing breakthrough AI but holding zero patents in that domain. Patents are not mandatory, and genuine AI work sometimes ships as trade secrets or open source, but a claimed breakthrough with no patents, publications, or releases in the domain suggests the capability is derivative or exaggerated.
  4. No Product Functionality Tied to AI Claims: Marketing materials boast AI capabilities, but the product roadmap shows no AI-specific features, integrations, or releases. If AI is core to the strategy, where is it in the product?
  5. Privacy Claims Contradicted by Data Practices: Claiming 'privacy-preserving AI' or 'differential privacy' while collecting granular personal data, sharing with third parties, or lacking transparent data policies. Credible privacy claims require architectural proof and third-party audits.
  6. Exaggerated ROI or Productivity Claims: '500% ROI in 6 months' or 'eliminates 70% of manual work' without cohort comparison, control group data, or third-party validation. Real productivity gains are documented, attributed, and verifiable.
  7. Leadership Silence on AI Limitations and Risk: No public acknowledgement of failure cases, model drift, regulatory constraints, or data biases. Mature AI teams openly discuss limitations; washers paint a rosy picture only.
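
To make the checklist operational, here is a minimal scoring sketch. The flag weights and risk thresholds are illustrative assumptions, not a published standard:

```python
# Red-flag scoring sketch; keys mirror the checklist above.
# Weights and thresholds are assumptions chosen for illustration.
RED_FLAGS = {
    "vague_claims_no_metrics":      3,
    "data_claims_no_source_docs":   2,
    "capability_claims_no_patents": 1,
    "no_ai_in_product_roadmap":     3,
    "privacy_claims_vs_practices":  2,
    "unverified_roi_claims":        3,
    "no_disclosed_limitations":     2,
}

def score_target(observed_flags: set[str]) -> tuple[int, str]:
    """Sum the weights of observed flags and map to a rough verdict."""
    unknown = observed_flags - RED_FLAGS.keys()
    if unknown:
        raise ValueError(f"unknown flags: {unknown}")
    total = sum(RED_FLAGS[f] for f in observed_flags)
    if total >= 8:
        return total, "high risk: demand evidence before proceeding"
    if total >= 4:
        return total, "moderate risk: targeted technical diligence"
    return total, "low risk: routine verification"

print(score_target({"vague_claims_no_metrics", "unverified_roi_claims"}))
# -> (6, 'moderate risk: targeted technical diligence')
```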

Impact on Valuations and M&A

AI-washing directly damages company valuation and deal outcomes.

In M&A due diligence, acquirers now conduct AI audits. They test claimed capabilities in live environments, validate training data sources, review model performance across different data cohorts, and stress-test systems with adversarial inputs. A disclosed AI misrepresentation leads to valuation adjustments of 20–40%. An undisclosed misrepresentation, discovered post-close, triggers indemnification claims and legal action.
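
One of those diligence steps, reviewing performance across data cohorts rather than trusting a single aggregate figure, can be sketched as follows (the record layout and the 10-point tolerance are assumptions):

```python
# Cohort-level accuracy check for AI due diligence.
from collections import defaultdict

def cohort_accuracy(records, cohort_key, max_gap_pct=10.0):
    """records: dicts with a boolean 'correct' field and a cohort
    field. Flags cohorts whose accuracy trails the headline figure
    by more than max_gap_pct percentage points."""
    hits, totals = defaultdict(int), defaultdict(int)
    all_hits = all_total = 0
    for r in records:
        c = r[cohort_key]
        totals[c] += 1
        hits[c] += r["correct"]
        all_total += 1
        all_hits += r["correct"]
    headline = 100.0 * all_hits / all_total
    weak = [(c, round(100.0 * hits[c] / totals[c], 1))
            for c in totals
            if headline - 100.0 * hits[c] / totals[c] > max_gap_pct]
    return {"headline_accuracy_pct": round(headline, 1),
            "weak_cohorts": weak}

# A vendor quoting one aggregate number may be hiding cohorts like these.
sample = ([{"segment": "enterprise", "correct": True}] * 90
          + [{"segment": "enterprise", "correct": False}] * 10
          + [{"segment": "smb", "correct": True}] * 40
          + [{"segment": "smb", "correct": False}] * 60)
print(cohort_accuracy(sample, "segment"))
# -> {'headline_accuracy_pct': 65.0, 'weak_cohorts': [('smb', 40.0)]}
```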

In equity research and public markets, detected AI-washing leads to stock underperformance. A 2024 study tracking 47 publicly disclosed AI-washing cases found average stock underperformance of 18–22% over 12 months post-disclosure. Analyst coverage shrinks, liquidity thins, and institutional investors exit.

In venture and growth equity, AI-washing erodes fundraising capacity. VC firms now conduct technical diligence on AI claims. A startup discovered exaggerating capabilities faces difficulty closing Series B or later rounds.

⚠ Warning

The cost of discovered AI-washing typically far exceeds the short-term valuation benefit of the initial exaggeration. A company that claims 90% model accuracy to justify a £500M Series D valuation, later discovered at 65%, faces a £125–175M valuation reset, shareholder litigation, and reputational damage lasting years.
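
The arithmetic behind that warning, made explicit. The 25–35% range is the assumed AI-linked slice of the valuation that the market withdraws on discovery; it is not a market rule:

```python
# Worked arithmetic for the warning above; the reset range is an assumption.
valuation_gbp = 500_000_000          # £500M Series D, as in the example
claimed, actual = 0.90, 0.65         # claimed vs discovered model accuracy

shortfall = (claimed - actual) / claimed
print(f"relative shortfall vs claim: {shortfall:.0%}")   # 28%

reset_low, reset_high = 0.25, 0.35   # assumed AI-linked valuation slice
low = valuation_gbp * reset_low
high = valuation_gbp * reset_high
print(f"valuation reset: £{low/1e6:.0f}M-£{high/1e6:.0f}M")  # £125M-£175M
```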


What Boards Should Do: An Audit Framework

If you are a board member, investor, or leader responsible for AI strategy, establish a standing AI audit function. This does not require deep technical expertise—it requires rigour and scepticism.

1. Demand Evidence, Not Marketing

Any AI capability claimed publicly or to investors must be documented with technical reports, validation results, and third-party assessment. Insist on clear separation between research (early, uncertain) and production (deployed, validated).

2. Test Claims Against Product Reality

Walk the product yourself. If the team claims 'AI-powered' recommendations, do they work? Is there evidence of learning over time? Or are you seeing static rules?

3. Require Risk Disclosure

Any AI system has limitations—model drift, edge-case failure, data bias, regulatory constraint. Insist the team publicly acknowledges these. Absence of risk discussion is a red flag.

4. Track Actual Productivity Impact

Don't accept proxy metrics. Demand measured impact: time saved, cost reduced, quality improved. If the AI initiative has been live for 6+ months with no measured productivity gain, it is a candidate for rework or retirement.
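
A minimal sketch of what 'measured impact' can look like in practice: a difference-in-differences comparison of teams using the AI tool against a control group, so market-wide drift is not credited to the tool (the metric and numbers are hypothetical):

```python
# Difference-in-differences sketch for AI productivity measurement.
def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def did_estimate(treated_before, treated_after,
                 control_before, control_after) -> float:
    """The AI group's change minus the control group's change.
    A naive before/after comparison would credit all drift to the tool."""
    return ((mean(treated_after) - mean(treated_before))
            - (mean(control_after) - mean(control_before)))

# Hypothetical hours-per-ticket samples, before and after rollout.
effect = did_estimate(
    treated_before=[4.0, 4.2, 3.9], treated_after=[3.1, 3.0, 3.2],
    control_before=[4.1, 4.0, 4.2], control_after=[3.9, 4.0, 3.8],
)
print(f"estimated AI effect: {effect:+.2f} hours per ticket")  # ~ -0.73
```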

5. Conduct Periodic Independent Audits

For material AI systems, hire an external technical audit firm (not your consultants, who may have sold you the system). Independent audits cost £20K–£100K and often surface assumptions and limitations the internal team has normalised.


The Road Ahead: Regulatory Escalation

The March 2024 enforcement actions are the beginning. The FTC is also investigating AI-washing in consumer products. The SEC has indicated intent to update guidance on AI disclosure for public companies. Expect:

  • More enforcement actions against companies making material AI misstatements
  • Formal SEC guidance on AI disclosure standards for public companies (likely 2025–2026)
  • Third-party AI audit frameworks and certification emerging (similar to SOC 2 for security)
  • Insurance products for AI liability and misrepresentation
  • Increasing scrutiny of the data sources companies use for AI training, for licensing and IP compliance

The Bottom Line

Companies that build AI with rigour, test claims empirically, and disclose limitations transparently will outcompete those that exaggerate. Regulators are creating competitive advantage for the honest. Establish an AI audit function now—before your investors or regulators do it for you.
