Building AI-Ready Engineering Teams: What 15 Years of Hiring 250 Engineers Taught Me
Between 2000 and 2015, I built an engineering organisation at IG Group from 4 to 250 engineers. We grew during periods of explosive growth and severe market contraction. We deployed agile at scale before that term was standardised. We achieved continuous deployment with zero-downtime releases — a rarity in 2010 and still uncommon now. We navigated multiple technology transitions: the shift from desktop to mobile, the adoption of cloud infrastructure, the integration of data science into product development.
Throughout these transitions, two lessons became clear. First, the team is the product. The quality of software a team produces is a direct reflection of the team's capability, culture, and decision-making systems. Second, team composition matters more than headcount. A well-composed team of 30 engineers executing against clear direction will outperform a poorly composed team of 100.
The transition to AI forces another structural reset in engineering team design. The skills that were critical in 2020 are still valuable, but they are no longer sufficient. The team composition that worked before AI will not be optimal after. And the hiring, promotion, and development systems that were designed for a pre-AI engineering culture will not attract or retain the talent that matters most in 2026.
The Shift in What Engineers Need to Know
In 2015, the competency profile for a strong software engineer looked like this:
- Core language proficiency (usually Java, C++, or Python)
- Understanding of distributed systems principles
- Database design and optimisation
- API design and service architecture
- Testing and deployment practices
- Problem-solving and architectural thinking
These fundamentals remain essential. But the competency profile for an AI-ready engineer in 2026 adds three critical areas:
1. Model Evaluation and Prompt Engineering
Engineers can no longer treat AI models as black boxes. They need to understand model behaviour well enough to assess whether a particular AI implementation is appropriate for a production use case. This means:
- How to evaluate model output quality for your specific task
- How to detect model drift and degradation
- How to structure prompts to extract reliable behaviour from language models
- How to recognise hallucination risk and design mitigations
- How to assess bias and fairness in model predictions
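The first of these skills can be made concrete. Below is a minimal sketch of a task-specific evaluation harness: score a model against a hand-labelled sample of real inputs before trusting it in production. The `evaluate` function, the threshold, and the toy lookup-table "model" are all illustrative assumptions, not a prescribed API.

```python
# A minimal sketch of a task-specific evaluation harness. The model_answer
# callable is a stand-in for whatever AI service you call in production.
from typing import Callable

def evaluate(model_answer: Callable[[str], str],
             labelled_sample: list[tuple[str, str]],
             threshold: float = 0.9) -> dict:
    """Score a model against a hand-labelled sample of production inputs."""
    correct = sum(
        1 for prompt, expected in labelled_sample
        if model_answer(prompt).strip().lower() == expected.strip().lower()
    )
    accuracy = correct / len(labelled_sample)
    return {
        "accuracy": accuracy,
        "sample_size": len(labelled_sample),
        "fit_for_production": accuracy >= threshold,  # a gate, not a guarantee
    }

# Toy stand-in "model": answers capital-city questions from a lookup table.
capitals = {"France": "Paris", "Japan": "Tokyo", "Peru": "Lima"}
fake_model = lambda q: capitals.get(q.split()[-1].rstrip("?"), "unsure")

sample = [("What is the capital of France?", "Paris"),
          ("What is the capital of Japan?", "Tokyo"),
          ("What is the capital of Mars?", "unknown")]
print(evaluate(fake_model, sample))
# → accuracy 2/3; fit_for_production False
```

The point is not the scoring logic, which would be task-specific in practice, but the habit: no AI integration ships without a labelled sample and an explicit quality gate.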
This is not a specialisation for ML engineers only. A backend engineer integrating an AI service into a product needs these competencies. A data engineer building a pipeline that feeds AI models needs them. A frontend engineer rendering AI-generated content needs them.
AI model evaluation is not a specialist competency. It is a foundational skill for any engineer working with AI systems in production. Engineers who cannot assess model quality cannot build reliable AI-integrated features.
2. Data Pipeline Architecture
The classical software engineering pipeline — code → build → test → deploy — remains essential. But around it, a parallel pipeline has emerged: data → feature engineering → model training → model evaluation → deployment.
Engineers need to understand:
- How training data flows through the system and what quality it must have
- How to construct reliable feature engineering pipelines
- How to instrument model training and monitoring
- How to design safe rollout and rollback for model updates
- How to separate model infrastructure from application infrastructure
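To make the data-quality point concrete, here is a minimal sketch of one pipeline stage with explicit quality gates before feature engineering. The field names, the 5% alert threshold, and the staleness bound are illustrative assumptions.

```python
# A minimal sketch of a pipeline stage that gates on data quality before
# feature engineering. Field names and thresholds are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Record:
    user_id: str
    event_ts: datetime
    amount: float

def quality_gate(records: list[Record],
                 max_staleness: timedelta = timedelta(hours=24)) -> list[Record]:
    """Drop records that would silently corrupt downstream features."""
    now = datetime.now(timezone.utc)
    clean = [
        r for r in records
        if r.user_id                            # no orphan events
        and r.amount >= 0                       # domain constraint
        and now - r.event_ts <= max_staleness   # freshness bound
    ]
    dropped = len(records) - len(clean)
    if dropped / max(len(records), 1) > 0.05:
        # A suspicious batch should alert, not silently train a worse model.
        raise ValueError(f"{dropped} of {len(records)} records failed quality checks")
    return clean

def build_features(records: list[Record]) -> dict[str, float]:
    """Toy feature engineering: total spend per user."""
    features: dict[str, float] = {}
    for r in records:
        features[r.user_id] = features.get(r.user_id, 0.0) + r.amount
    return features
```

The design choice worth noting: a failing quality gate raises rather than filters silently, because a model trained on a quietly corrupted batch is a harder failure to detect than a halted pipeline.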
In 2015, maybe 5% of engineers needed to understand these concepts deeply. In 2026, it is closer to 30%. This is not because we are shifting 30% of the engineering team to ML. It is because AI is distributed across the product.
3. Human-AI Collaboration Design
The most important competency shift is conceptual, not technical. It is the ability to design systems where humans and AI work together effectively.
This means understanding:
- Where AI should augment vs. replace human judgment
- How to design interfaces that help humans understand AI-generated output
- How to structure workflows where AI handles routine cases and humans handle exceptions
- How to measure the productivity gain from human-AI collaboration vs. purely automated approaches
- How to identify failure modes where human-AI collaboration breaks down
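The routine-cases/exceptions pattern in the third bullet can be sketched in a few lines. The confidence threshold and the `classify` signature here are assumptions for illustration, not a specific product's API.

```python
# A minimal sketch of "AI handles routine cases, humans handle exceptions":
# route by model confidence, escalating anything the model is unsure about.
def route(case: str, classify, confidence_threshold: float = 0.85) -> str:
    label, confidence = classify(case)
    if confidence >= confidence_threshold:
        return f"auto:{label}"        # AI resolves the routine case
    return "human_review"             # low confidence -> escalate to a human

# Toy classifier: confident only about refund requests.
def toy_classifier(text: str) -> tuple[str, float]:
    if "refund" in text.lower():
        return ("refund", 0.95)
    return ("other", 0.40)

print(route("Please refund my order", toy_classifier))        # → auto:refund
print(route("My account does something odd", toy_classifier)) # → human_review
```

The threshold is itself a product decision: raise it and humans see more routine work; lower it and the AI resolves cases it should not. Measuring where that line sits is exactly the collaboration-design competency described above.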
This is partly product design, partly engineering, partly organisational design. Engineers who can think clearly about how humans and machines should interact will be the most valuable in the AI era.
The Optimal Team Composition in the AI Era
Based on scaling engineering teams across multiple technology transitions, here is the team composition I would recommend for a technology-driven organisation in 2026:
| Role Category | 2015 Optimal Mix | 2026 Optimal Mix | Selection Criteria |
|---|---|---|---|
| Senior engineers (staff/principal level) | 8-10% | 10-12% | Deep expertise in architectural thinking, human-AI collaboration design |
| Full-stack and backend engineers | 50% | 40% | Core coding competency + basic AI model evaluation + data pipeline understanding |
| ML/AI specialists | 2-3% | 8-10% | Deep expertise in model training, evaluation, deployment; infrastructure for ML systems |
| Data engineers | 3-5% | 6-8% | Data quality, pipeline reliability, instrumentation for AI systems |
| Frontend/UX engineers | 20% | 15% | Reduced headcount due to AI-augmented design; focus on human-AI interaction patterns |
| Infrastructure/DevOps | 8-12% | 12-15% | Expanded scope to include ML infrastructure, model registry, feature stores |
The most significant changes:
First, ML engineer headcount roughly tripled as a proportion of the team. This is not because AI is a specialisation that only affects 8% of work. It is because effective AI deployment requires ML engineers embedded across teams, not siloed in a separate organisation.
Second, senior engineering headcount increased slightly, reflecting that architectural decisions in the AI era are more complex. Where to use AI vs. where to optimise existing code is an architectural decision, not an implementation detail. Designing effective human-AI workflows is an architectural challenge.
Third, frontend headcount decreased, reflecting that much work traditionally done by frontend engineers (form generation, simple UI state management, basic personalisation) can now be handled by AI-augmented approaches. The remaining frontend engineering is focused on complex interaction patterns and human-AI collaboration.
Fourth, infrastructure headcount increased, reflecting the new complexity of ML infrastructure. Model registries, feature stores, real-time serving infrastructure, and instrumentation for model monitoring all require skilled infrastructure engineers.
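To see what the 2026 mix means in absolute terms, here is a quick sketch translating the table's ranges into headcounts for a hypothetical 100-engineer organisation, using the midpoint of each range. The midpoints will not sum to exactly 100, since the ranges are guidance rather than a budget.

```python
# The 2026 mix from the table above, as midpoint headcounts for a
# hypothetical 100-engineer organisation. Ranges are (low%, high%).
mix_2026 = {
    "senior (staff/principal)": (10, 12),
    "full-stack/backend": (40, 40),
    "ML/AI specialists": (8, 10),
    "data engineers": (6, 8),
    "frontend/UX": (15, 15),
    "infrastructure/DevOps": (12, 15),
}
team_size = 100
for role, (lo, hi) in mix_2026.items():
    midpoint = (lo + hi) / 2
    print(f"{role:28s} ~{round(team_size * midpoint / 100)} engineers")
```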
The Competency Framework: What to Assess at Hire and at Promotion
The most consequential mistake organisations make when building AI-ready teams is failing to define clearly what competencies matter most and how they are assessed.
Here is the competency framework I would use to guide hiring and promotion decisions:
Tier 1: Foundational (Required for all engineers)
- Core coding competency: Ability to write clear, maintainable code in your team's primary language. Can be assessed in technical interviews through pair coding exercises.
- System thinking: Ability to reason about how components interact, identify coupling, and design for resilience. Assessed through architecture discussions, code review feedback, design doc quality.
- AI literacy: Can explain what a language model is, what it can reliably do, what failure modes exist. Can assess whether a particular use case is appropriate for AI or should be solved deterministically. Assessed through discussion, not coding exercises.
Tier 2: Specialised (Required for 30% of team)
- Data pipeline design: Can design reliable data flows from source through feature engineering to consumption. Understands data quality, staleness, and latency tradeoffs. Assessed through design discussions, code review of data systems.
- Model evaluation: Can assess model quality for a specific use case. Understands false positive/false negative tradeoffs, can interpret model metrics, can identify when a model is not suitable for production. Assessed through case studies, evaluation of actual models.
- Human-AI interaction design: Can reason about when to use AI to augment vs. when to solve deterministically. Can design workflows where humans and AI systems fail gracefully together. Assessed through design discussions, product review.
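The false positive/false negative tradeoff in the model-evaluation competency is easy to probe in an interview or a case study. Here is a minimal sketch: the same model scores, judged at different thresholds, yield different precision/recall balances. The scores and labels are made up for illustration.

```python
# A minimal sketch of the precision/recall tradeoff a Tier 2 engineer
# should reason about: one set of scores, two decision thresholds.
def confusion(scores: list[float], labels: list[int], threshold: float):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.95, 0.90, 0.70, 0.60, 0.40, 0.20]
labels = [1,    1,    0,    1,    0,    0]

# Strict threshold: every flagged case is right, but a positive is missed.
print(confusion(scores, labels, 0.8))   # → (1.0, 0.666...)
# Lenient threshold: all positives caught, at the cost of a false alarm.
print(confusion(scores, labels, 0.5))   # → (0.75, 1.0)
```

A candidate who can explain which threshold is right for, say, fraud screening versus content recommendation is demonstrating exactly this competency.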
Tier 3: Deep Specialisation (Required for ML engineers, ~8-10% of team)
- Model training and evaluation: Can train, evaluate, and iterate on models. Understands hyperparameter tuning, data augmentation, evaluation methodology. Can identify model quality issues from metrics.
- ML infrastructure: Can design systems to train, serve, and monitor models in production. Understands model registries, feature stores, serving patterns, and monitoring.
- Domain-specific ML: Deep expertise in specific domains — computer vision, NLP, recommendation systems, time series — with understanding of state-of-the-art techniques and their limitations.
How to Hire for AI Readiness
The hiring process for AI-ready engineering teams should differ in several deliberate ways from hiring for traditional software engineering:
Test AI literacy as a foundational gate. Before evaluating coding ability, assess whether a candidate understands what AI can and cannot do. Can they explain hallucination? Can they identify when a deterministic approach is better than an AI approach? Many excellent traditional engineers lack this intuition, and it cannot be built in a six-month onboarding. It is better to identify it early.
Emphasise data pipeline understanding in backend engineer interviews. Ask candidates to design a system that ingests data, applies transformations, and feeds it into models. Assess their thinking about data quality, freshness, and latency.
Use realistic AI case studies in interviews. Present candidates with a realistic scenario: "We trained a model that achieves 92% accuracy on a test set. The business wants to deploy it in production to assist our customer service team. What would you need to know before recommending deployment? What could go wrong?"
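Part of what a strong answer to that case study should surface is that a headline accuracy figure is meaningless without the class balance. A minimal sketch, with illustrative numbers that are not drawn from the scenario above:

```python
# Why "92% accuracy" alone is not a deployment case: on an imbalanced
# task, a model that never flags anything can score higher than that.
def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# 1000 support tickets, 5% of which genuinely need escalation.
labels = [1] * 50 + [0] * 950

# A "model" that always predicts the majority class (never escalate).
always_no = [0] * 1000
print(accuracy(always_no, labels))  # → 0.95, beating 92% while catching nothing
```

A candidate who immediately asks "92% against what baseline, on what class distribution, with what cost per error type?" is showing the evaluation instinct the interview is designed to find.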
Assess the quality of caution about AI. The best AI-ready engineers have healthy scepticism about AI. They understand limitations. They ask hard questions about data and drift. Overconfidence in AI is often a warning sign.
For ML engineers specifically, weight research experience and publications heavily. ML engineering is a field where knowing the literature matters. Papers from top conferences (NeurIPS, ICML, ICLR) signal engagement with the field. Publications signal the ability to communicate technical concepts clearly.
Structuring the Team for Continuous Learning
AI is moving faster than almost any domain in software engineering. A team that is static — that hires once and then relies on internal promotion — will degrade rapidly as the field evolves.
Here is how I would structure a team for continuous learning:
First, allocate time explicitly for learning and experimentation. At IG Group, we used a "10% time" model where engineers could spend 10% of their week on learning, experimentation, or side projects. For AI-intensive teams, this should be higher — 20% is not unreasonable if the team is responsible for staying current with a rapidly evolving field.
Second, create a learning pathway from junior to senior. The engineer who is competent in traditional backend engineering but new to AI should have a clear progression: learn AI literacy through structured onboarding; contribute to projects that require Tier 2 competencies; potentially progress to Tier 3 if they show aptitude and interest. This should be explicit and supported.
Third, rotate engineers through different specialisations. An engineer who spends their entire career writing deterministic backend code will remain a backend engineer. An engineer who rotates through data engineering, model evaluation, and infrastructure will develop a broader perspective on how to integrate AI effectively. Rotations build both competency and cross-team understanding.
Fourth, invest in senior hiring to anchor the team. The single most important hire for an AI-ready team is a senior engineer or principal engineer who has lived through AI adoption at scale. They become the anchor around which the rest of the team's learning orbits. This hire is expensive and usually worth every penny.
The Most Valuable Engineering Intangible Asset
When I look back at the engineering organisations I have built, the most valuable intangible asset was not a particular piece of code. It was the decision-making system — the mental models, the design principles, the quality standards — that the team shared.
The team that values simplicity over cleverness ships more reliable systems. The team that prioritises observability and monitoring catches production issues before customers do. The team that maintains strong boundaries between layers is easier to refactor than the team that has high coupling everywhere.
In the AI era, the most valuable engineering intangible asset is not a trained model or a data asset. It is the team's shared understanding of when and how to use AI effectively — and the discipline to not use it when a simpler, more deterministic approach is better.
For organisations building AI-ready teams, this is where the real investment should be. Not in trying to make every engineer an AI specialist. But in creating a shared competency framework, a culture that values both technical depth and breadth, and leadership that can guide the team through the transition successfully.
The teams that will be most effective in the AI era are those that treat AI as a tool to be integrated thoughtfully into the engineering discipline, not as a revolutionary force that makes everything that came before obsolete.
Ivan Gowan is the founder and CEO of Opagio. He spent 15 years as a senior technology leader at IG Group (LSE: IGG), overseeing engineering growth from 4 to 250 during the company's rise from £300m to £2.7bn market capitalisation. He holds an MSc from Edinburgh with research in neural networks (2001).