The AI Due Diligence Checklist for Investors

Every AI-enabled company looks impressive in a pitch deck. The challenge is separating genuine AI capability from AI marketing. This checklist provides the structured framework to do exactly that.

Built on the four-dimension AI due diligence framework, this checklist distils the assessment into 40 specific, actionable verification points. Each can be evaluated through document review, management interviews, technical assessment, or independent verification.

Use it as a scoring framework: rate each item Red (fail/missing), Amber (partial/unclear), or Green (verified/satisfactory). A company with more than 5 Red ratings carries material AI risk that must be addressed before investment.

  • 40 due diligence verification points
  • 5 assessment categories
  • 2-4 weeks typical AI due diligence timeline

Category 1: Technology Verification (10 points)

The technology assessment verifies that AI claims are substantiated by genuine capability.

AI system inventory

  1. Complete AI system catalogue exists — every AI/ML system in production is documented with purpose, architecture, team, and business impact
  2. Each system has a defined owner — a named individual responsible for the system's performance, maintenance, and governance
  3. Production monitoring is active — automated monitoring tracks model performance, data quality, and system health in real time

Architecture and quality

  4. Model architecture is documented — model cards or equivalent documentation describe architecture, training methodology, and design decisions
  5. Model performance is measured on production data — not just research benchmarks or curated test sets
  6. Version control and rollback capability exists — the team can revert to previous model versions if new deployments underperform
  7. Retraining processes are defined and scheduled — models are updated on a regular cadence, with drift detection triggering ad-hoc retraining

Verification

  8. Independent model evaluation confirms claims — an independent ML engineer has verified that the models perform as described
  9. Infrastructure evidence supports claims — cloud compute bills, GPU usage, and training logs are consistent with claimed AI activity
  10. Non-AI alternatives have been evaluated — the team can demonstrate that AI outperforms simpler approaches (rules-based, statistical) for each use case
★ Key Takeaway

The technology verification category is designed to detect AI washing. Items 8-10 are the most revealing: companies with genuine AI capability welcome independent evaluation and can demonstrate that AI adds measurable value over simpler alternatives. Companies that resist these assessments likely have something to hide.


Category 2: Data Asset Assessment (8 points)

Data assets are often the most valuable and most overlooked component of AI capability.

  1. Training data sources are documented — every dataset used for model training is catalogued with source, volume, update frequency, and quality metrics
  2. Data provenance and legal rights are verified — the company has clear legal rights to use all training data for AI purposes
  3. Data quality monitoring is active — automated checks track data completeness, accuracy, consistency, and freshness
  4. Proprietary data creates defensible advantage — at least one key dataset is proprietary and cannot be replicated by competitors within 12 months
  5. Data refresh pipeline is operational — new data flows continuously into the training pipeline without manual intervention
  6. Data governance framework exists — policies and procedures for data handling, privacy, retention, and access control are documented and enforced
  7. GDPR/privacy compliance is verified — data processing activities are lawful, documented, and subject to data protection impact assessments where required
  8. Data backup and recovery is tested — training data can be recovered from backup in a defined timeframe

Category 3: Talent and Organisation (8 points)

AI capability is embodied in people. Talent assessment is as important as technology assessment.

  1. Dedicated AI/ML team exists — at least three people with ML-specific titles and demonstrable ML experience
  2. Team composition covers the ML lifecycle — data engineering, model development, MLOps, and evaluation roles are filled
  3. Key-person dependency is manageable — no single individual's departure would cripple AI capability
  4. Retention risk is assessed and mitigated — AI team compensation is benchmarked against market rates, and retention mechanisms are in place
  5. Hiring pipeline is active — the company is actively recruiting AI talent and has a track record of successful ML hires
  6. Technical leadership is credible — the AI team leader has demonstrable ML experience (publications, prior ML roles, or measurable project delivery)
  7. Knowledge documentation exists — model decisions, training procedures, and operational runbooks are documented — not stored only in individuals' heads
  8. Collaboration between AI and business teams — AI projects are driven by business requirements, not technology exploration
✔ Example

A PE firm evaluated an AI-powered logistics company. The 4-person AI team looked strong on paper. Deeper investigation revealed that 3 of the 4 had joined within the last 6 months, the founding ML engineer had departed, and critical model training knowledge was undocumented. Despite a functioning production system, the team could not retrain or improve the models. This key-person knowledge loss was a material risk that reduced the acquisition offer by 20%.


Category 4: Regulatory and Compliance (6 points)

Regulatory risk in AI is growing and must be assessed as part of due diligence.

  1. EU AI Act risk classification completed — all AI systems have been classified according to the risk tier framework
  2. High-risk system compliance documented — systems classified as high-risk have conformity assessments, risk management documentation, and human oversight mechanisms
  3. AI transparency obligations met — users interacting with AI systems are informed they are interacting with AI (EU AI Act limited-risk requirement)
  4. No AI washing in public materials — AI claims in marketing, investor decks, and public statements are substantiated and proportionate to actual capability
  5. Bias testing and fairness monitoring implemented — AI systems that affect individuals are tested for discriminatory outcomes and monitored in production
  6. Incident response plan for AI failures — documented procedures for responding to AI errors, bias detection, and regulatory inquiries

Category 5: Commercial Validation (8 points)

Technology capability is necessary but not sufficient. The commercial assessment verifies that AI creates measurable business value.

  1. Revenue attribution to AI is measurable — the company can quantify how much revenue or cost saving is directly attributable to AI
  2. AI-driven KPIs are tracked — specific metrics (not generic "improved efficiency") demonstrate AI's business impact
  3. Customer dependency on AI features — customers actively use and value AI features, evidenced by usage data and feedback
  4. Competitive moat is assessable — the company can articulate why competitors cannot replicate its AI capability and in what timeframe
  5. AI roadmap is realistic — planned AI investments have clear business cases, defined timelines, and allocated resources
  6. Total cost of ownership is understood — the company accounts for all AI costs including hidden integration expenses, not just licensing
  7. AI governance framework exists — the company has an AI governance structure appropriate to its scale and risk profile
  8. AI is integrated into business strategy — AI is not a standalone initiative but is embedded in the company's growth strategy and operational planning

Strong AI Profile (30+ Green)

  • Documented, monitored AI systems
  • Proprietary data with legal rights verified
  • Deep AI team with no critical key-person risk
  • EU AI Act compliance underway
  • Measurable revenue attribution to AI

Weak AI Profile (10+ Red)

  • Undocumented or unmonitored AI
  • Data provenance unclear
  • Small team with key-person dependency
  • No regulatory compliance activity
  • AI claims without measurable impact
⚠ Warning

This checklist is a screening tool, not a substitute for deep technical due diligence. A company that scores well across all 40 points warrants further investigation to confirm the quality of each element. A company that scores poorly should trigger either a significant valuation adjustment or a decision to pass on the investment.
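For teams that track ratings in a spreadsheet or script, the scoring rules above (more than 5 Red signalling material risk, 30+ Green indicating a strong profile, 10+ Red a weak one) can be sketched in a few lines of Python. The `classify` function and the ratings data shape are illustrative assumptions, not part of any Opagio tooling:

```python
from collections import Counter

def classify(ratings):
    """Tally ratings across the 40 checklist items and apply the
    article's thresholds. `ratings` maps item number -> "Red",
    "Amber", or "Green". Returns the tally and any findings."""
    counts = Counter(ratings.values())  # missing colours count as 0
    findings = []
    if counts["Red"] > 5:
        findings.append("material AI risk: address before investment")
    if counts["Green"] >= 30:
        findings.append("strong AI profile")
    if counts["Red"] >= 10:
        findings.append("weak AI profile")
    return counts, findings
```

For example, a target with 30 Green and 10 Amber ratings classifies as a strong AI profile, while one with 12 Red ratings trips both the material-risk and weak-profile thresholds.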

Using the Checklist in Practice

The checklist is most effective when:

  1. Completed before the offer stage — AI risk findings should influence valuation and deal terms
  2. Assessed by independent technical experts — not the target's own team or the vendor
  3. Documented with evidence — each rating should cite specific documents, interviews, or test results
  4. Shared with the investment committee — as a structured risk summary alongside financial due diligence

The Opagio questionnaire provides a systematic digital assessment of technology intangible assets, including AI capability, that complements this due diligence checklist. The Growth Platform tracks these assessments across portfolio companies over time.

The Bottom Line

AI due diligence is now a mandatory component of investment evaluation for any AI-enabled target. The 40-point checklist provides a structured, repeatable framework that covers technology, data, talent, regulatory, and commercial dimensions. Use it to separate genuine AI capability from AI marketing — and to price AI risk accurately into investment decisions. In a market where AI premiums can reach 25-40%, rigorous due diligence is worth every hour invested.


Ivan Gowan is Founder and CEO of Opagio. He spent 15 years at IG Group (LSE: IGG) evaluating technology capability across vendors, acquisition targets, and internal teams. The due diligence methodology in this article reflects that practical experience. Learn more about the Opagio team.


