The SME CEO's First 90 Days with AI: A Practical Roadmap
Most SME leaders understand that AI is important and that they need to do something about it. Yet the transition from understanding to action is unclear. AI is not like other technology investments where you can buy the software, allocate budget, and wait. AI requires continuous decisions about where to deploy it, how to measure impact, how to integrate it into existing workflows, and how to develop the intangible assets — capability, data, processes — that make AI valuable.
The result is that many SME leaders delay. They wait for clarity that never comes. They assume they will understand AI better in six months. Meanwhile, their competitors are running pilots, learning, and building capabilities that create competitive advantage.
After 30 years of advising businesses through transformational change, I have learned that the best time to act is not when you have perfect clarity. It is when you have enough clarity to move, and you can learn through structured experimentation.
Here is the playbook I recommend for SME CEOs implementing AI for the first time. The 90-day structure is arbitrary — your business might move faster or slower — but it provides a framework for progressing from assessment to measurable pilots to scaling decisions.
- 73% of SMEs have launched AI pilots (McKinsey, 2025)
- 34% of SME AI pilots scale to full implementation (Deloitte, 2025)
- 2.3x average productivity gain from successful AI implementation (12-month view)
Days 1-30: Audit and Assess
Week 1: Foundational Activities
Day 1 (Monday): Announce the AI initiative to your leadership team. This is not optional — if the top of the organisation is unclear about the importance of AI, the rest of the organisation will not take it seriously. Your message should be: We are going to understand where AI can create value in our business. We will run pilots. We will measure impact. This is a learning journey, not a one-time project.
Schedule your first working session: yourself, your COO or head of operations, and ideally one person from finance. This is the core team that will shepherd the initiative. You will add functional leaders later, but this core team will coordinate.
Days 2-4 (Tuesday-Thursday): Conduct deep dives with your functional leaders. Spend two hours with the head of sales, two hours with the head of operations, two hours with the head of customer success, and so on. Ask each the same questions:
- In your function, where are people spending time on routine, repetitive work?
- Where are we making decisions based on incomplete information that AI could help improve?
- Where do we have data that AI could help us extract insight from?
- What would it be worth to us if we could do X faster or better?
Document the answers. These are your initial AI opportunity hypotheses. You will refine these, but this gives you a realistic picture of what people actually think, not what they think you want to hear.
Day 5 (Friday): Synthesise the week's conversations into a preliminary opportunity map. Do not overthink this. Create a simple list:
- Function / Process
- Opportunity (what could AI do better)
- Potential Impact (time saved, quality improved, cost reduced)
- Relative Difficulty (is this easy to pilot or complex?)
This becomes your working document for the next three weeks.
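A spreadsheet is fine for this working document, but if you prefer something scriptable, the opportunity map can be kept as a small structured list and ranked for quick wins. A minimal Python sketch — every entry and the 1-3 difficulty scale are illustrative assumptions, not recommendations:

```python
# Minimal opportunity map: one record per function/process.
# All entries below are illustrative examples, not prescriptions.
opportunities = [
    {"function": "Sales", "process": "Proposal drafting",
     "opportunity": "AI-generated first drafts",
     "impact_hours_per_week": 10, "difficulty": 1},  # 1 = easy pilot, 3 = complex
    {"function": "Finance", "process": "Invoice processing",
     "opportunity": "Automated categorisation",
     "impact_hours_per_week": 6, "difficulty": 2},
    {"function": "Operations", "process": "Anomaly detection",
     "opportunity": "AI flags unusual transactions",
     "impact_hours_per_week": 4, "difficulty": 3},
]

# Rank by impact per unit of difficulty so quick wins float to the top.
ranked = sorted(opportunities,
                key=lambda o: o["impact_hours_per_week"] / o["difficulty"],
                reverse=True)
for o in ranked:
    print(f'{o["function"]}: {o["process"]} '
          f'({o["impact_hours_per_week"]}h/wk, difficulty {o["difficulty"]})')
```

The ratio is deliberately crude; the point is a consistent, explicit ranking rule you can debate with your team, not a precise model.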
Weeks 2-3: Process Mapping
Objective: Understand in detail where your people spend their time and what opportunities exist for AI augmentation.
For your top 3-5 functional processes (pick based on the impact assessment from Week 1), conduct a detailed process mapping exercise:
- Map each step of the process
- Identify where judgment is applied vs. where it is routine
- Identify data flows (what information is used at each step)
- Identify decision points (where is AI potentially most valuable)
- Estimate time spent on each step
This is best done with the team that actually does the work, not with managers. The people doing the work will identify inefficiencies and bottlenecks that managers miss.
Example: In a legal practice, map the process of contract review. Identify: reading the contract (routine), extracting key terms (routine), comparing to standard terms (routine), identifying unusual provisions (judgment-intensive), flagging risks (judgment-intensive), drafting comments (hybrid). Estimate time. For a 50-page contract: 4 hours reading and extracting, 1 hour comparing to standard, 1 hour flagging risks. AI might reduce the first 4 hours to 30 minutes. The value becomes visible.
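The arithmetic in the contract-review example can be made explicit. A quick sketch using the illustrative figures above:

```python
# Contract review: hours per step, before and after AI assistance.
# Figures mirror the illustrative 50-page-contract example above.
before = {"read_and_extract": 4.0, "compare_to_standard": 1.0, "flag_risks": 1.0}
after = dict(before, read_and_extract=0.5)  # AI cuts reading/extraction to 30 min

total_before = sum(before.values())  # 6.0 hours per contract
total_after = sum(after.values())    # 2.5 hours per contract
saving = total_before - total_after
print(f"Saved {saving:.1f}h per contract "
      f"({saving / total_before:.0%} of review time)")
```

Multiply that per-contract saving by your monthly contract volume and the fee-earner hourly rate and the pilot's potential value is a one-line calculation.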
Week 4: Intangible Asset Audit and Budget Planning
Objective: Understand your current intangible asset base and allocate budget.
For each process you have mapped, ask: What intangible assets currently support this process?
- Technology capital (tools, systems, platforms we own)
- Data assets (data we have accumulated that could train AI)
- Organisational capital (processes, documentation, standards)
- Human capital (skills, expertise)
In parallel, work with finance to allocate a budget for the AI initiative. I recommend budgeting for:
- External consulting/fractional CTO (typically 15-20% of total budget)
- Software subscriptions and tools (30-40%)
- Internal training and capability building (20-25%)
- Contingency (15-20%)
For a £5m revenue business, I would allocate £50-75k for a 90-day pilot programme including initial scaling. For a £20m business, £150-200k. This is not trivial, but it is material, not transformational.
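Those percentage ranges translate into a quick allocation check. A sketch assuming one split that sits within the ranges above and sums to 100% (the 20/40/25/15 choice is my assumption, not a prescription):

```python
# One allocation within the recommended ranges (illustrative):
# consulting 15-20%, software 30-40%, training 20-25%, contingency 15-20%.
SPLIT = {"consulting": 0.20, "software": 0.40, "training": 0.25, "contingency": 0.15}

def allocate(total_budget: float) -> dict:
    """Split a pilot-programme budget across the four categories."""
    return {category: round(total_budget * pct) for category, pct in SPLIT.items()}

# e.g. a £5m-revenue business budgeting £60k for the 90 days
print(allocate(60_000))
```

Adjust the percentages to your situation; the value of the exercise is forcing an explicit decision about where the money goes before the first tool is bought.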
Days 31-60: Pilot and Learn
Weeks 5-6: Tool Selection and Team Training
Objective: Select your AI tools and build basic capability across your team.
By now you have identified your top 2-3 pilot opportunities. For each, evaluate available tools:
- If the opportunity is document processing or analysis, you will likely use a language model API (OpenAI, Anthropic, etc.) or a consumer tool (ChatGPT, Claude, etc.)
- If the opportunity is data analysis or reporting, you might use a general assistant's built-in analysis features or a dedicated analytics platform
- If the opportunity is complex (custom model building), you might need external help
Do not overthink this. The goal is to move fast and learn, not to select the perfect tool on the first try.
Conduct basic training for your core team and the functions involved in pilots. This is not "everyone in the company learns about AI." This is targeted training:
- Three-hour workshop on AI capabilities and limitations
- Hands-on practical session with the tools you have selected
- Role-specific training: How will AI change your function's work?
Key point: People's fears are usually overblown and different from the actual challenges. Get hands-on quickly to move past anxiety into productive engagement.
Weeks 7-8: Pilot Execution
Objective: Run 2-3 parallel pilots. The goal is not perfection — it is to generate actual experience and data.
Each pilot should have:
- Clear success criteria: We will reduce time spent on X by Y%, or we will improve quality metric by Z%
- Defined scope: Pilot a specific process or decision type, not the entire function
- Real stakes: Process real work through the pilot, do not create artificial test cases
- Weekly measurement: Track the metrics you committed to every week
- Weekly review: Meet with the pilot team weekly to assess results, identify issues, adjust approach
Sample pilots for different functions:
Sales: Use AI to generate first drafts of proposals. Measure: time spent drafting (before vs. after), win rate (is proposal quality maintained?), customer reaction to AI-assisted proposals.
Operations: Use AI to analyse operational data and flag anomalies. Measure: time spent on anomaly detection, false positive rate, whether flagged anomalies are actionable.
Finance: Use AI to automate invoice processing and expense categorisation. Measure: time spent on processing, error rate, cash flow improvement from faster processing.
Customer Success: Use AI to summarise customer interactions and flag at-risk accounts. Measure: time spent on analysis, accuracy of at-risk predictions, improvement in early intervention.
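Success criteria of the form "reduce time spent on X by Y%" can be checked mechanically at each weekly review. A minimal sketch — the baseline, weekly figures, and 30% target are all illustrative:

```python
def pilot_on_track(baseline_hours: float, current_hours: float,
                   target_reduction_pct: float) -> bool:
    """True if the measured reduction meets or beats the committed target."""
    actual_reduction = (baseline_hours - current_hours) / baseline_hours * 100
    return actual_reduction >= target_reduction_pct

# Sales proposal pilot: committed to a 30% cut in weekly drafting time.
weekly_hours = [20, 18, 15, 13]  # baseline, then three pilot weeks (illustrative)
baseline = weekly_hours[0]
for week, hours in enumerate(weekly_hours[1:], start=1):
    status = "on track" if pilot_on_track(baseline, hours, 30) else "below target"
    print(f"Week {week}: {hours}h drafting -> {status}")
```

The discipline matters more than the code: a pilot that is "below target" for three weeks running is telling you something, and the weekly review is where you decide whether the problem is the tool, the process, or the expectation.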
Days 61-90: Scale and Measure
Week 9: Assess Pilot Results and Make Go/No-Go Decisions
Objective: Decide which pilots to expand, which to iterate, and which to abandon.
Review the data from your pilots. Be rigorous:
- Did the metrics improve as expected? If not, why not? Is it a tool problem, an implementation problem, or a process problem?
- Did people adopt the AI tool or resist it? Resistance is data. If a team loves the tool, that is a green light. If they find it annoying or unreliable, that is useful feedback.
- Is the ROI positive? For simple pilots, the ROI calculation might be: time saved × hourly rate. Do not expect massive savings yet — first pilots often break even.
- Did the intangible assets improve? Did you build better processes? Did people develop AI literacy? Did your data assets improve?
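The simple "time saved × hourly rate" ROI calculation mentioned above can be sketched in a few lines. All inputs here are illustrative assumptions:

```python
def pilot_roi(hours_saved_per_week: float, hourly_rate: float,
              weeks: int, pilot_cost: float) -> float:
    """Simple pilot ROI: value of time saved minus what the pilot cost."""
    value_of_time_saved = hours_saved_per_week * hourly_rate * weeks
    return value_of_time_saved - pilot_cost

# Illustrative: 10h/week saved at £60/h over an 8-week pilot costing £5,000.
net = pilot_roi(10, 60, 8, 5_000)
print(f"Net pilot value: £{net:,.0f}")
```

A small negative or near-zero number here is consistent with the point above: first pilots often break even in cash terms, and the real return is the capability and process learning.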
Make go/no-go decisions:
- Go: Expand this pilot to the full function. Develop supporting processes. Start training all people in the function on the tool.
- Iterate: This approach has potential but needs refinement. Adjust the tool, the process, or the implementation. Run a second iteration of the pilot.
- No-go: This is not working or is not valuable enough to continue. Learn from it and move on.
★ Key Takeaway
Even pilots that fail to achieve their target metrics are valuable. Failure teaches: it tells you something about your processes, your data, or your expectations that was wrong. Use that learning to improve.
Week 10: Establish KPIs and Measurement Systems
Objective: Build the measurement infrastructure that will allow you to track AI impact over the next 12 months.
For each function that is implementing AI, establish a set of KPIs:
| Function | Core KPI | Augmented KPI (AI-relevant) |
| --- | --- | --- |
| Sales | Revenue per salesperson | Time spent on proposal writing (should decrease); win rate (should maintain or improve) |
| Operations | Cost per transaction | Error rate (should improve); variance in process execution (should decrease) |
| Finance | Days sales outstanding | Invoice processing time (should decrease); anomaly detection accuracy (track separately) |
| Customer Success | Churn rate | Engagement score (should improve); time spent on analysis vs. customer contact (should shift toward customer contact) |
Do not add too many metrics. Each person should have 3-4 KPIs they are accountable for. Make sure AI impact metrics are in there alongside traditional performance metrics.
Establish a measurement cadence:
- Weekly: Check that AI tools are being used (utilisation metrics)
- Monthly: Assess whether key performance metrics are moving as expected
- Quarterly: Deeper analysis of ROI, intangible asset development, and strategic learning
Weeks 11-12: Board Reporting and Next-Phase Planning
Objective: Report results to your board and plan the next phase of AI implementation.
Prepare a board paper or investor update covering:
- What we launched: The pilots we ran
- What we learned: The results, both positive and negative
- What we changed: Process improvements, capability improvements
- What we measured: Intangible asset creation (new skills, new processes, improved data)
- What's next: Scaling plans, budget requirements, expected ROI over the next 12 months
Do not oversell the results. If a pilot broke even but taught you something valuable, say that. Board-level confidence in your AI strategy comes from credibility, not from overstated results.
Plan the next phase:
- Which pilots will you expand to full implementation?
- Which functions will you tackle next?
- What infrastructure (tools, training, governance) do you need to build?
- What additional budget do you need?
The Intangible Asset Tracking Imperative
Throughout this 90-day process, you should be tracking intangible asset development, not just financial metrics:
Organisational capital:
- Did you develop new processes? Document them.
- Did you build new capabilities? Measure adoption and proficiency.
- Did you establish new decision-making systems? Track their quality.
Human capital:
- How many people developed AI literacy? Track through skill assessments.
- Did people develop new skills (prompt engineering, model evaluation)? Document.
- Did engagement improve due to capability building? Survey before and after.
Technology capital:
- Did you invest in AI-integrated tools that are now part of your stack? Value the improvement.
- Did you build proprietary integrations or customisations? Document their value.
Data capital:
- Did your data quality improve? That is measurable.
- Did you accumulate new data assets? Document what you have.
These intangible assets are the real value of the AI initiative. Financial savings are often modest in pilots. But capability building, process improvement, and data accumulation compound over time. An organisation that has developed strong AI literacy and solid AI-integrated processes will outcompete an organisation that is still figuring out what AI is.
Managing Resistance and Adoption
Throughout the 90 days, you will encounter resistance. Some will come from fear (my job will be automated), some from scepticism (this won't actually work), and some from real friction (the tools are annoying to use).
Address fear directly: Be transparent about where you see AI augmenting roles vs. replacing roles. For most knowledge work, AI augments — a person with AI is more productive than a person without. But some lower-value work will be automated. Be honest about that.
Address scepticism with evidence: Do not try to convince people before pilots. Let pilots convince them. Data is more persuasive than arguments.
Address friction quickly: If a tool is annoying or unreliable, fix it or change tools. Do not force adoption of a broken tool.
The most important thing is to build a culture of learning. Position the 90 days as "we are learning together" rather than "we have to implement AI." That reframe is subtle but powerful.
Beyond 90 Days
After your initial 90 days, the roadmap becomes function-specific. But the principles remain:
- Measure continuously. Track how AI is affecting your KPIs and your intangible assets.
- Develop capability systematically. Build AI literacy across your organisation.
- Iterate based on feedback. The most successful AI implementations are those that evolve based on real usage and feedback.
- Build organisational capital. Document what you learn. Make processes repeatable.
The SME that treats AI as a 90-day project will get 90 days of value. The SME that treats AI as an ongoing capability-building programme will get compounding value for years.
The Opagio Support Structure
Throughout this 90-day journey, measurement is essential. The Opagio Growth Platform provides the framework for tracking intangible asset development alongside financial metrics. As you build AI capability, you can measure:
- Human capital development (AI literacy, new skills)
- Organisational capital improvement (new processes, documented knowledge)
- Data asset quality and growth
- Technology capital from AI tools and integrations
- ROI linkage (how intangible assets drive financial performance)
For SME CEOs running this roadmap, having a clear measurement framework is not optional. It is the difference between learning and flailing.
Mark Hillier is Co-Founder and CCO of Opagio. He brings 30+ years of experience advising businesses through growth, scaling, and successful PE exits. His client roster includes Legal & General, AEW UK Investment Management, and Salmon Harvester. At Opagio, Mark leads go-to-market strategy and client acquisition across the SME and investor markets.