Technology and Product Quality Assessment
PE Due Diligence Programme — Lesson 6 of 10
Technology diligence has traditionally been the province of specialist advisors — brought in to "kick the tyres" on the codebase and provide a red/amber/green assessment. In too many deals, the technology report lands on the deal team's desk two weeks before completion, is 80 pages long, and is largely incomprehensible to anyone without a computer science degree.
This is a problem, because technology quality is no longer a niche IT concern. It is a strategic intangible asset that directly affects every dimension of the investment: growth potential (can the platform scale?), margin trajectory (how much will remediation cost?), competitive position (is the technology genuinely differentiated?), and integration risk (can this technology be combined with the platform?).
This lesson provides a commercially focused technology assessment framework — one that translates technical findings into deal-relevant language and helps PE professionals ask the questions that matter.
Technology diligence is not about whether the code is elegant. It is about whether the technology supports the business plan you are underwriting. A deal model that assumes 30% revenue growth requires a platform that can handle 30% more users, transactions, and data. A platform with significant technical debt may need 12-24 months of remediation before it can support growth — a delay that directly affects hold-period returns.
The Five Dimensions of Technology Assessment
Technology Assessment Dimensions
| Dimension | Core Question | Deal Impact |
|---|---|---|
| Architecture quality | Is the technology well-designed, modular, and maintainable? | Determines the cost and speed of future development |
| Technical debt | How much remediation work is required before the platform can support growth? | Directly affects the capex/opex plan and growth timeline |
| Scalability | Can the platform handle the growth the deal model assumes? | Constrains or enables the revenue growth assumption |
| Security and compliance | Are there security vulnerabilities or compliance gaps that create liability? | Potential for regulatory fines, breach costs, reputational damage |
| AI readiness | Is the technology architecture positioned to leverage AI, or will it be disrupted by it? | Increasingly determines competitive sustainability over the hold period |
Architecture Quality
Software architecture is the foundation on which everything else is built. Good architecture enables rapid feature development, reliable scaling, and efficient maintenance. Poor architecture creates compounding costs that grow worse over time.
Architecture Quality Indicators
| Indicator | Good | Concerning |
|---|---|---|
| Modularity | Loosely coupled services with clear interfaces | Monolithic application where changing one thing breaks others |
| Code quality | Consistent standards, automated testing, code review process | Inconsistent quality, no testing, no reviews |
| Documentation | Architecture decisions documented; API specifications maintained | Tribal knowledge; no documentation |
| Dependency management | Dependencies up to date; automated vulnerability scanning | Outdated frameworks; unpatched security vulnerabilities |
| Deployment | Automated CI/CD pipeline; frequent, low-risk releases | Manual deployments; infrequent, high-risk releases |
| Monitoring | Comprehensive logging, alerting, and performance monitoring | Reactive; problems discovered by customers |
A PE fund acquired a B2B SaaS company at 8x ARR based on a growth plan requiring rapid feature development for a new market segment. Post-deal, the technology assessment revealed that the application was a tightly coupled monolith built over 7 years with no automated testing. Every new feature risked breaking existing functionality. The engineering team spent 60% of its time on bug fixes and regression testing rather than new development. The growth plan was delayed by 18 months while the team refactored the architecture — 18 months of flat revenue against a plan that assumed 35% annual growth.
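The cost of that delay can be sized with simple compounding arithmetic. The figures below are illustrative assumptions (a £20m ARR base and 5-year hold are not from the case), but the mechanics are general: 18 months of flat revenue against a 35% growth plan does not just defer growth, it permanently shrinks the exit base.

```python
# Illustrative sizing of an 18-month growth delay (all figures assumed).
# Plan: 35% annual revenue growth from a £20m ARR base over a 5-year hold.
# Reality: revenue flat for 18 months of refactoring, then growth resumes.

base_arr = 20.0        # £m, assumed starting ARR
growth = 0.35          # planned annual growth rate
hold_years = 5
delay_years = 1.5      # refactoring period with flat revenue

planned_exit_arr = base_arr * (1 + growth) ** hold_years
actual_exit_arr = base_arr * (1 + growth) ** (hold_years - delay_years)

print(f"Planned exit ARR: £{planned_exit_arr:.1f}m")
print(f"Actual exit ARR:  £{actual_exit_arr:.1f}m")
print(f"Exit ARR shortfall: {1 - actual_exit_arr / planned_exit_arr:.0%}")
```

On these assumptions the shortfall is over a third of planned exit ARR, before any effect on the exit multiple from a slower growth story.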
Technical Debt Assessment
Technical debt is the accumulated cost of suboptimal technology decisions — shortcuts taken, maintenance deferred, architecture left unchanged as the business outgrew it. Like financial debt, it compounds: small amounts are manageable, but left unchecked, it can paralyse a business.
Technical Debt Categories
| Category | Description | Typical Cost to Remediate |
|---|---|---|
| Code debt | Poorly written, duplicated, or unmaintainable code | Moderate — can be addressed incrementally through refactoring |
| Architecture debt | Fundamental design limitations that constrain the platform | High — may require partial or full replatforming |
| Infrastructure debt | Outdated servers, unsupported operating systems, end-of-life frameworks | Moderate-High — migration projects with operational risk |
| Testing debt | Insufficient automated testing, leading to manual QA bottlenecks | Moderate — takes 3-6 months to build adequate test coverage |
| Documentation debt | Undocumented systems, APIs, and processes | Low-Moderate — but creates high ongoing cost through slower development |
| Security debt | Known vulnerabilities unpatched; security practices below standard | Variable — could be low (patching) or very high (architectural security redesign) |
Quantifying Technical Debt
Technical debt should be quantified in terms PE professionals understand: time and money.
The Technical Debt Equation
Ask the CTO two questions:

1. If you could rebuild this platform from scratch with your current team, how long would it take?
2. What percentage of your engineering team's time is currently spent on maintenance, bug fixes, and working around existing limitations rather than building new capabilities?

If the answer to question 2 is above 40%, the technical debt is material. If the answer to question 1 is "we could rebuild it better in 12 months," you should seriously consider whether you are paying for a platform or paying for a customer base that needs a new platform.
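The answer to the second question converts directly into an annual carrying cost. A minimal sketch, with assumed headcount, cost, and benchmark figures (none from the source): the share of fully loaded engineering payroll spent on maintenance is the "interest" the business pays on its technical debt each year.

```python
# Rough annual carrying cost of technical debt (all figures assumed).
# The maintenance share of fully loaded engineering payroll is the
# annual "interest" paid on the debt; the excess over a healthy
# benchmark is the remediation prize.

engineers = 25
cost_per_engineer = 120_000   # assumed fully loaded annual cost, £
maintenance_share = 0.60      # from the CTO's answer to question 2
healthy_share = 0.20          # assumed benchmark for a well-run platform

annual_debt_interest = engineers * cost_per_engineer * maintenance_share
excess_cost = engineers * cost_per_engineer * (maintenance_share - healthy_share)

print(f"Annual debt 'interest': £{annual_debt_interest:,.0f}")
print(f"Excess vs. healthy benchmark: £{excess_cost:,.0f} per year")
```

On these assumptions, £1.2m of engineering capacity per year is being consumed by debt rather than new development, a figure that belongs in the hold-period model alongside any one-off remediation capex.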
Scalability Assessment
Scalability is the technology's ability to handle increased load — more users, more transactions, more data — without degradation in performance or proportional increases in cost.
Scalability Assessment Framework
| Factor | Questions | Red Flags |
|---|---|---|
| Load capacity | What are the current usage levels vs. capacity? Has the system been load-tested? | Operating above 70% capacity with no headroom plan |
| Scaling approach | Can the system scale horizontally (add more servers) or only vertically (bigger servers)? | Vertical-only scaling approaching hardware limits |
| Database | Is the database architecture designed for the projected data volumes? | Single-server database with no replication or sharding strategy |
| Cost economics | How does infrastructure cost scale with usage? Linear, sublinear, or superlinear? | Infrastructure costs growing faster than revenue |
| Third-party limits | Are there rate limits, usage caps, or pricing cliffs on critical third-party services? | Critical dependency on a service with punitive pricing at scale |
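The cost-economics question in the table above can be tested with two data points from the target's management accounts. A minimal sketch, using hypothetical usage and cost figures: fit cost as a power law of usage and read off the exponent.

```python
import math

# Quick check on infrastructure cost scaling (hypothetical figures).
# Fit cost ~ k * usage^e from two observations; e > 1 means costs
# grow faster than usage (superlinear), a scalability red flag.

usage_then, cost_then = 1_000_000, 40_000    # monthly transactions, £ infra cost
usage_now,  cost_now  = 2_500_000, 130_000

exponent = math.log(cost_now / cost_then) / math.log(usage_now / usage_then)
print(f"Scaling exponent: {exponent:.2f}")
if exponent > 1:
    print("Superlinear: infrastructure cost is outpacing usage growth")
```

Two points only give a crude fit, but an exponent materially above 1 is enough to justify asking engineering why unit costs rise with scale.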
Security and Compliance
Security vulnerabilities and compliance gaps are intangible liabilities — they represent value that could be destroyed rather than value that exists. A data breach, a GDPR fine, or a compromised customer database can wipe out years of returns.
Security Diligence Priorities
| Area | Assessment | Data Sources |
|---|---|---|
| Vulnerability management | Are known vulnerabilities tracked and patched? What is the patch cadence? | Vulnerability scan reports, patch management records |
| Access controls | Are production systems properly secured? Who has admin access? | Access control lists, IAM configuration, audit logs |
| Data protection | Is personal data encrypted at rest and in transit? Is data processing GDPR-compliant? | Privacy impact assessments, data flow maps, DPA inventory |
| Incident history | Have there been security incidents? How were they handled? | Incident reports, breach notifications, forensic assessments |
| Penetration testing | When was the last pen test? What was found? Were findings remediated? | Pen test reports, remediation evidence |
A business that has never had a penetration test is not a business with no vulnerabilities — it is a business that does not know its vulnerabilities. In PE diligence, the absence of security testing is itself a red flag. Commission an independent pen test as part of the diligence process for any technology-intensive target.
AI Readiness
AI readiness is an increasingly important dimension of technology assessment. Not every business needs to be an AI company, but every business needs a technology architecture that can leverage AI tools and is not at risk of being disrupted by AI-native competitors.
AI Readiness Assessment
| Factor | AI-Ready | At Risk |
|---|---|---|
| Data accessibility | Clean, structured, accessible data in modern formats | Data trapped in silos, legacy databases, or unstructured formats |
| API architecture | Well-documented APIs enabling AI integration | No APIs; manual data extraction required |
| Automation potential | Clear opportunities for AI to automate or enhance core workflows | Manual processes that are candidates for AI disruption by competitors |
| Talent | Team with AI/ML experience or clear path to acquiring it | No AI capability and no plan to develop it |
| Competitive landscape | AI used as a competitive advantage or at parity with competitors | Competitors deploying AI that the target cannot match |
AI as Opportunity
- Proprietary data that could train AI models
- Repetitive processes that AI could automate
- Customer interactions that AI could personalise
- Pricing/underwriting that AI could optimise
AI as Threat
- Core service that AI could commoditise
- Competitor deploying AI to undercut pricing
- Manual processes that AI-native entrants automate
- Data moat that is eroding as alternatives emerge
Translating Technology Findings Into Deal Terms
Technology diligence findings should directly influence the deal model and structure.
Technology Finding to Deal Impact
| Finding | Deal Model Impact | Structural Response |
|---|---|---|
| Significant technical debt requiring remediation | Add remediation capex to the hold-period model; delay growth assumptions | Price adjustment; capex escrow; specific warranty on platform capability |
| Scalability constraints | Cap revenue growth assumptions until platform upgraded | Milestone-based earn-out; technology upgrade completion as condition |
| Security vulnerabilities | Model potential breach cost; insurance review | Specific indemnity for pre-completion vulnerabilities; mandatory pen test post-deal |
| AI disruption risk | Stress-test margins against AI-enabled competitor scenario | Shorter hold period assumption; AI investment plan in value creation plan |
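The AI disruption stress test in the last row can be made concrete with a simple operating-leverage calculation. The figures below are assumptions for illustration (revenue, margin, and price-cut depth are not from the source): a modest price cut to match an AI-enabled competitor falls almost entirely through to EBITDA.

```python
# Illustrative margin stress test against an AI-enabled competitor
# (all figures assumed). Fixed opex means a price cut hits EBITDA
# far harder than it hits revenue.

revenue = 50.0       # £m annual revenue
gross_margin = 0.70
opex = 25.0          # £m, assumed fixed over the stress horizon
price_cut = 0.15     # price reduction needed to hold share

ebitda_base = revenue * gross_margin - opex
ebitda_stressed = revenue * (1 - price_cut) * gross_margin - opex

print(f"Base EBITDA:     £{ebitda_base:.2f}m")
print(f"Stressed EBITDA: £{ebitda_stressed:.2f}m")
print(f"EBITDA decline:  {1 - ebitda_stressed / ebitda_base:.0%}")
```

On these assumptions a 15% price cut cuts EBITDA by more than half, which is why AI disruption risk belongs in the downside case rather than a footnote.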
What Comes Next
In Lesson 7: Goodwill Impairment Risk Analysis, we examine the residual — the portion of deal value that cannot be attributed to identifiable assets. Goodwill represents the gap between what you paid and what you can point to, and understanding the drivers and risks of that gap is essential for avoiding overpayment.
Mark Hillier is Co-Founder and CCO of Opagio. He brings more than 30 years' experience helping businesses scale, prepare for PE investment, and execute successful exits. He has sat across the table from PE buyers and knows what they need to see — and what they routinely miss.