The Board's AI Accountability Checklist: Governance for the Age of Intangible Capital

When I joined IG Group in 2008, the regulatory environment for financial services firms was hardening. Basel III was coming. The FSA was intensifying oversight. Senior management was accountable not just for financial results but for the systems and controls that generated those results. The board's job shifted from strategic vision to structured accountability.

I am watching the same shift happen now with artificial intelligence. Except this time, the stakes are higher, the risks are less understood, and the governance frameworks are further behind. A significant share of the companies spending billions on AI have no board-level framework for assessing whether those investments are creating value, how much risk they are introducing, whether they are being deployed ethically, or how they are governed.

This is a governance failure. It is not a technology problem. It is a board problem.

  • 67% of boards lack formal AI governance frameworks (MIT, 2025)
  • £500B+ in corporate AI spending globally (2025)
  • 43% of executives report zero AI ROI tracking

Ten Governance Questions Every Board Should Ask

Here is the accountability framework I would recommend for any board overseeing significant AI investment. It is structured as 10 questions that should be on the agenda at least quarterly, with clear owners and measurable answers.

1. AI Strategy Alignment: Is AI Integrated Into Corporate Strategy?

The Question: What is the company's AI strategy, and how does it integrate with core business objectives?

What You Are Assessing: Many companies have AI initiatives that are disconnected from strategic priorities. They are pursuing AI because competitors are, or because the technology is available, not because it serves a deliberate business objective. This results in orphaned projects, wasted capital, and fragmented technology investments.

The Board Should Know:

  • What are the top 3-5 AI initiatives currently underway
  • How does each initiative connect to a specific business outcome (revenue, cost, customer experience, risk mitigation)
  • What is the total capital committed to AI across the company
  • How is AI investment evaluated against alternative uses of capital

Red Flags:

  • "AI is a strategic priority" without specific business outcome attached
  • AI initiatives owned by isolated innovation teams, not core business units
  • No quantified objectives for AI projects (e.g., "improve customer satisfaction" with no baseline or target)
  • AI investment treated as R&D expense rather than capital investment

Example Question: "Walk us through the top 5 AI investments. For each one, name the business objective it addresses, the owner within the core business, the expected ROI, and what would happen if we cancelled it."


2. ROI Measurement: Can You Prove AI Investments Are Creating Value?

The Question: How does the company measure the return on AI investment, and what is the current evidence that AI projects are delivering value?

What You Are Assessing: If the board cannot measure AI ROI, it cannot govern AI spending. This is not an academic point. Deloitte research shows that 43% of executives cannot confidently measure AI ROI. This means boards are making capital allocation decisions on hunches, not data.

The Board Should Know:

  • For each material AI initiative, what is the baseline business metric before AI, the target after AI, and the current performance
  • What is the evidence that any performance improvement is due to AI deployment versus other factors
  • What is the total capital consumed by AI projects, year-to-date and projected
  • What is the measured return on that capital

Red Flags:

  • "We measure success by project completion, not business impact"
  • No baseline established before AI deployment (impossible to measure impact)
  • Projects that have been running for 18+ months without measurable outcomes
  • ROI measured as "engagement" or "sentiment" rather than business metrics (revenue, cost, risk)

Example Question: "Show us the top 10 AI projects by capital invested. For each, show us: 1) the baseline metric, 2) the current metric, 3) when we will have final data on whether this project delivered ROI. If you cannot show these for all 10, why not?"
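The baseline-versus-current discipline described above can be sketched as a minimal tracking structure. This is an illustrative sketch only; the field names, figures, and the 1% materiality threshold are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AIInitiative:
    """Minimal ROI record for one AI initiative (illustrative fields only)."""
    name: str
    capital_invested: float   # total capital consumed to date
    baseline_metric: float    # business metric measured before deployment
    current_metric: float     # the same metric, measured now

    def uplift(self) -> float:
        """Relative movement against the pre-AI baseline."""
        return (self.current_metric - self.baseline_metric) / self.baseline_metric

# Hypothetical board pack: flag initiatives with no measurable movement.
initiatives = [
    AIInitiative("churn-model", 2_000_000, baseline_metric=0.12, current_metric=0.09),
    AIInitiative("doc-summariser", 500_000, baseline_metric=40.0, current_metric=40.0),
]
flagged = [i.name for i in initiatives if abs(i.uplift()) < 0.01]
print(flagged)  # initiatives with less than 1% movement from baseline
```

The point of the structure is that an initiative without a recorded `baseline_metric` simply cannot be entered, which is exactly the discipline the red flags above call for.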

★ Key Takeaway

The absence of measured AI ROI is not a measurement problem — it is a governance problem. Boards that allow AI spending to continue without ROI measurement are failing their fiduciary duty to shareholders.


3. Risk Assessment: What Are the Specific Risks of Your AI Systems?

The Question: What are the material risks that AI systems introduce to the company's operations, customer relationships, or regulatory compliance, and how are those risks being mitigated?

What You Are Assessing: AI systems introduce specific risk categories that traditional technology governance does not address: model accuracy degradation, data bias, adversarial manipulation, regulatory change. These risks need explicit assessment and mitigation strategies.

The Board Should Know:

  • What are the top 5 operational risks introduced by AI systems (model failure, bias, data breach, regulatory change)
  • For each risk, what is the mitigation strategy and who owns it
  • What is the tolerance level for model accuracy degradation in production systems
  • How would the company detect if an AI model is producing biased decisions
  • What is the consequence if a material AI system fails

Red Flags:

  • "AI risk is IT risk" (it is not — AI has specific risk categories)
  • No baseline accuracy metrics for production AI systems
  • Decisions to deploy AI models without documented risk assessment
  • No monitoring of model performance post-deployment
  • Regulatory risk not explicitly modelled (e.g., AI Act compliance)

Example Question: "For your top 3 customer-facing AI systems, show us: 1) the accuracy baseline at deployment, 2) the minimum acceptable accuracy level, 3) how you monitor for bias, 4) the business impact if the model's accuracy drops 5% post-deployment, 5) how you would detect that."
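The accuracy-degradation tolerance asked about above is simple to operationalise. A minimal sketch, assuming a recorded deployment baseline and an illustrative 5-point tolerance (both numbers are assumptions for this example, not recommendations):

```python
def check_accuracy_drift(baseline: float, current: float,
                         tolerance: float = 0.05) -> str:
    """Compare production accuracy against the accuracy recorded at deployment.

    Returns "ok" while the drop stays within tolerance, "alert" once it
    exceeds it. The threshold here is illustrative, not prescriptive.
    """
    drop = baseline - current
    return "alert" if drop > tolerance else "ok"

# Hypothetical model deployed at 92% accuracy, now measuring 85% in production.
status = check_accuracy_drift(baseline=0.92, current=0.85)
print(status)  # a 7-point drop exceeds the 5-point tolerance
```

A check this small only works if the baseline was captured at deployment, which is why "no baseline accuracy metrics for production AI systems" appears in the red flags.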


4. Ethical AI and Fairness: Are You Deploying AI Responsibly?

The Question: What governance exists to ensure that AI systems do not discriminate unfairly, that they operate with appropriate transparency, and that they align with company values?

What You Are Assessing: This is no longer a nice-to-have. Regulatory frameworks (AI Act in EU, executive orders in US) are embedding fairness and explainability requirements. Companies deploying AI without fairness governance are introducing regulatory and reputational risk.

The Board Should Know:

  • Are AI systems that make consequential decisions (credit, hiring, resource allocation) being tested for bias
  • If bias is detected, what is the process for remediating it
  • How is the company communicating AI use to customers where it affects their experience
  • What external fairness frameworks (EU AI Act, FairML, etc.) is the company preparing for
  • Who is accountable for fairness oversight

Red Flags:

  • "Fairness is the data scientist's responsibility" (it is a board-level responsibility)
  • No testing for bias in AI systems
  • Fairness assessments done ad hoc, not systematically
  • No transparency to customers about AI use in decisions affecting them
  • Fairness and ethics treated as compliance checkbox, not strategic imperative

Example Question: "Show us your fairness testing framework for AI systems. What specific bias tests do you run? When you find bias, how do you decide whether to accept it, mitigate it, or remove the system? Give us a specific example from the last 12 months."
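One example of a "specific bias test" management should be able to name is a demographic parity check. The sketch below is a first-pass illustration with hypothetical hiring-model decisions; it is one metric among many, not a complete fairness assessment:

```python
def demographic_parity_gap(outcomes_a: list[int], outcomes_b: list[int]) -> float:
    """Absolute difference in positive-outcome rates between two groups.

    A gap near zero suggests parity on this one metric; a large gap is a
    prompt for the remediation process, not an automatic verdict.
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)

# Hypothetical hiring-model decisions (1 = advanced to interview).
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5/8 advanced
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2/8 advanced
gap = demographic_parity_gap(group_a, group_b)
print(round(gap, 3))  # 0.375, large enough to trigger a remediation review
```

The governance question is then exactly the one above: who decides, and by what process, whether a gap of this size is accepted, mitigated, or grounds for removing the system.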


5. Data Governance: Is the Company's Data Ready for AI?

The Question: What governance exists over data quality, access, and compliance to support responsible AI deployment?

What You Are Assessing: AI systems are only as good as the data they use. Poor data governance leads to poor models, model drift, compliance failures, and security vulnerabilities. This is a foundational risk.

The Board Should Know:

  • What is the company's data governance framework
  • What percentage of data used for AI meets quality and compliance standards
  • What are the major data access and compliance risks (GDPR, CCPA, consent for use)
  • Who is accountable for data governance and how are they measured

Red Flags:

  • Data governance is reactive rather than proactive
  • No consistent classification of data (sensitive, personal, proprietary)
  • Data lineage not documented (you do not know where data comes from)
  • Significant data used for AI that is not subject to quality controls
  • No audit trail of who accessed what data and when

Example Question: "Take your top 5 AI models. For each, trace the data lineage: where does each dataset come from, what is its quality score, what compliance requirements apply (GDPR, CCPA, etc.), and who is accountable for that data? If you cannot do this, what does that tell us about governance?"
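The lineage trace above implies a dataset register. A minimal sketch of what such a register and its gap check could look like; the entries, keys, and 0.9 quality threshold are all hypothetical, not a standard:

```python
# Hypothetical dataset registry; keys and thresholds are assumptions.
registry = {
    "customer_transactions": {
        "source": "core-banking-export",
        "quality_score": 0.97,
        "compliance": ["GDPR"],
        "owner": "head-of-data",
    },
    "web_clickstream": {
        "source": "analytics-vendor",
        "quality_score": 0.71,
        "compliance": [],   # no documented compliance review
        "owner": None,      # no accountable owner: a governance gap
    },
}

def lineage_gaps(registry: dict) -> list[str]:
    """Datasets failing the basic checks a board would ask about:
    an accountable owner, a documented compliance basis, and quality controls."""
    return [
        name for name, meta in registry.items()
        if meta["owner"] is None
        or not meta["compliance"]
        or meta["quality_score"] < 0.9
    ]

print(lineage_gaps(registry))  # ["web_clickstream"]
```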


6. Talent and Capability: Does the Company Have the Talent to Deploy AI Responsibly?

The Question: What talent strategy exists to ensure the company can develop, deploy, and maintain AI systems effectively?

What You Are Assessing: AI is still a talent-constrained domain. Companies without deep data science, ML engineering, and AI research expertise will struggle to build defensible AI capability or assess the quality of AI deployments.

The Board Should Know:

  • What is the company's AI talent strategy (build vs partner)
  • How many senior data scientists and ML engineers does the company employ
  • How is AI talent distributed across business units versus centralised
  • What is the retention rate for critical AI roles
  • How is the company training non-specialist staff to work alongside AI systems

Red Flags:

  • Heavy reliance on contractors for AI development (retention/knowledge transfer risk)
  • No seniority in AI roles (all junior, all outsourced)
  • High turnover in data science and ML engineering teams
  • AI talent concentrated in one person or small team
  • No training for business users to work effectively with AI systems

Example Question: "Show us your AI talent pyramid. How many senior (10+ years) practitioners do you have? What is your retention rate in core AI roles? If your head of AI left tomorrow, could the company continue maintaining and developing its AI systems?"


7. Vendor Management: Are Third-Party AI Systems Properly Governed?

The Question: For AI systems and models sourced from vendors (cloud providers, SaaS platforms, third-party models), what governance exists to assess their quality, manage dependency, and ensure compliance?

What You Are Assessing: Most companies do not build their own foundational AI models. They rely on OpenAI, Anthropic, Google, Microsoft, or other vendors. This introduces dependency risk that needs explicit governance.

The Board Should Know:

  • What AI systems and models does the company rely on from external providers
  • What is the company's dependency on each provider (could we replace it if needed)
  • What SLAs and uptime guarantees exist
  • How are vendor models audited for bias, accuracy, and compliance
  • What is the cost structure and how sensitive is the company to price changes

Red Flags:

  • No inventory of third-party AI systems in use
  • Significant business dependency on a single vendor's model
  • No SLAs or uptime guarantees for critical vendor systems
  • Vendor models accepted "as-is" without bias or fairness testing
  • No contingency plan if a vendor system becomes unavailable or changes terms

Example Question: "Show us all third-party AI systems your company uses. For each, identify: 1) how critical it is to operations, 2) what the switching cost would be if we needed to replace it, 3) what SLA we have, 4) how we audit the vendor's fairness and accuracy. Where would we be most vulnerable if a vendor changed their terms or pricing?"
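The vendor inventory above reduces to a simple register plus one question: which critical vendors lack both contractual protection and a fallback? A hypothetical sketch; vendor names and fields are illustrative:

```python
# Hypothetical third-party AI vendor register (all fields are illustrative).
vendors = [
    {"name": "llm-provider-a", "critical": True,  "has_sla": True,  "has_fallback": True},
    {"name": "ocr-saas-b",     "critical": True,  "has_sla": False, "has_fallback": False},
    {"name": "translation-c",  "critical": False, "has_sla": False, "has_fallback": False},
]

def single_points_of_failure(vendors: list[dict]) -> list[str]:
    """Critical vendors missing an SLA or a contingency plan: the places
    the business is most exposed to a change in vendor terms or pricing."""
    return [
        v["name"] for v in vendors
        if v["critical"] and not (v["has_sla"] and v["has_fallback"])
    ]

print(single_points_of_failure(vendors))  # ["ocr-saas-b"]
```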


8. Regulatory and Compliance Readiness: Is the Company Positioned for AI Regulation?

The Question: What is the company's strategy for evolving AI regulations (EU AI Act, FairML standards, industry-specific rules), and how is compliance being embedded into AI development?

What You Are Assessing: The regulatory landscape for AI is hardening rapidly. Companies that do not anticipate and prepare for regulatory requirements are introducing legal and financial risk.

The Board Should Know:

  • What regulatory frameworks apply to the company's AI systems (EU AI Act, FairML, industry-specific)
  • What is the company's compliance strategy for each framework
  • Are compliance requirements being built into AI development or bolted on post-hoc
  • What is the potential regulatory impact if the company fails to comply (fines, restrictions, reputational damage)

Red Flags:

  • "Regulation does not apply to us" (it increasingly does)
  • Compliance activities treated as separate from AI development
  • No audit programme for regulatory compliance
  • Executive and board not familiar with specific compliance requirements that apply to their industry
  • No budget allocated for compliance infrastructure

Example Question: "Walk us through the regulatory requirements for your top 5 AI systems. Which frameworks apply? What would happen if we failed to comply? What are we doing now to prepare?"


9. Reporting and Transparency: Does the Board Have Clear, Timely Visibility Into AI Performance?

The Question: What is the reporting cadence and content that the board receives on AI initiatives, risks, and performance?

What You Are Assessing: The board cannot govern what it cannot see. Transparent, regular reporting on AI is essential to effective oversight.

The Board Should Know:

  • How often does the board review AI performance (should be quarterly minimum)
  • What metrics are reported (ROI, risk, compliance, talent, vendor performance)
  • Are reports consistent and comparable over time
  • Is there a single source of truth for AI initiatives and performance, or are reports fragmented across business units

Red Flags:

  • AI performance reported quarterly in a general technology update (should have dedicated agenda time)
  • Inconsistent metrics or definitions across reports
  • Reports from different business units contradict each other on AI spending or ROI
  • No structured risk or compliance reporting on AI
  • Board members unfamiliar with key AI initiatives or metrics

Example Question: "Show us the AI reporting package you are giving us this quarter. How does this compare to last quarter? What changed? What are we not measuring that we should be?"


10. Board Composition and Expertise: Does the Board Have the Competency to Govern AI?

The Question: Does the board have sufficient technical expertise and AI fluency to effectively oversee AI investment and risk?

What You Are Assessing: Board members do not need to be data scientists. But they need sufficient fluency in AI capabilities, risks, and governance to ask intelligent questions and hold management accountable.

The Board Should Know:

  • How many board members have meaningful AI/technology experience
  • Is there at least one board member who can credibly challenge management on AI strategy and risk assessment
  • Are board education programmes in place to build AI fluency across the full board
  • What gaps exist in board competency around AI and intangible assets

Red Flags:

  • No board members with technology background
  • Board questions about AI that suggest lack of foundational understanding
  • No investment in board education on AI
  • Over-reliance on a single board member to validate all AI decisions
  • Pressure from management to "just approve" AI initiatives without substantive challenge

Example Question: "If I asked each board member to explain what a large language model is, could they do it? If I asked them to articulate the specific risks of your company's AI systems, what would they say?"


Implementing the Checklist

These 10 questions should structure a quarterly governance process:

  1. Q1: Complete the initial assessment. For each question, identify the current state, the desired state, and the gap.
  2. Q2-Q4: Track progress against each question. Update the board on metrics, risks, and accountability.
  3. Annual: Refresh the assessment and adjust priorities based on business changes and regulatory evolution.

The checklist is not a one-time exercise. It is a governance framework that keeps AI strategy, performance, risk, and compliance on the board's regular agenda.

✔ Example

A manufacturing company with £500 million in revenue spent £15 million on AI initiatives over two years. When the board applied this checklist, they discovered: 1) No clear connection between AI initiatives and business strategy, 2) No measurement of ROI on £15 million spent, 3) No governance over fairness in a hiring AI system, 4) Heavy reliance on a single consultant for all AI strategy and oversight. The board immediately halted new AI spending until governance was in place. Within 12 months, with proper governance, the company doubled its measured AI ROI and reduced uncontrolled spending by 40%.


Why This Matters

The era when boards could treat AI as a "technology problem" for management to solve is over. AI investment is now material to capital allocation, risk management, and strategy. It demands the same board-level scrutiny and accountability that boards apply to financial controls, regulatory compliance, and M&A.

The 67% of boards without formal AI governance frameworks are allowing billions in capital to be deployed without clear accountability. That is a governance failure.

The 10-point checklist is a starting point. It is not comprehensive. It is designed to establish a cadence and structure that keeps AI where it belongs: on the board agenda, regularly reviewed, with clear accountability for strategy, performance, risk, and compliance.

Boards that implement this level of governance will not only protect shareholder value — they will enable their companies to capture AI opportunity more effectively than competitors who are operating without governance.

The question is not whether boards will govern AI. Regulation, shareholder pressure, and risk management will force that. The question is whether boards will govern AI intentionally now, or reactively after the damage is done.


Ivan Gowan is the founder and CEO of Opagio. He spent 15 years as a senior technology leader at IG Group (LSE: IGG), growing the engineering team from 4 to 250 people during the company's rise from £300m to £2.7bn market capitalisation. He holds an MSc from Edinburgh with research in neural networks (2001).

