Definition

A set of practices combining machine learning, DevOps, and data engineering to standardise and streamline the end-to-end lifecycle of machine learning models, from development through deployment to monitoring. MLOps encompasses version control for models and data, automated testing, continuous integration and deployment, and model performance monitoring in production.
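One core MLOps practice named above, automated testing before deployment, can be sketched as a promotion gate: a candidate model replaces the production model only if it beats it on a held-out evaluation set. This is a minimal illustrative sketch; all names (`promote_if_better`, the toy threshold models) are assumptions, not a specific tool's API.

```python
# Minimal sketch of an automated model-promotion gate, one MLOps
# practice: a candidate model is deployed only if it beats the current
# production model on held-out data. All names are illustrative.

def accuracy(model, examples):
    """Fraction of examples the model classifies correctly."""
    correct = sum(1 for features, label in examples if model(features) == label)
    return correct / len(examples)

def promote_if_better(candidate, production, eval_set, margin=0.01):
    """Return the model that should serve traffic.

    The candidate replaces production only when it improves accuracy
    by at least `margin`, a simple guard against noisy regressions.
    """
    cand_acc = accuracy(candidate, eval_set)
    prod_acc = accuracy(production, eval_set)
    return candidate if cand_acc >= prod_acc + margin else production

# Toy models: classify a number as 1 if it exceeds a threshold.
production_model = lambda x: 1 if x > 0.7 else 0
candidate_model = lambda x: 1 if x > 0.5 else 0

eval_set = [(0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1)]
chosen = promote_if_better(candidate_model, production_model, eval_set)
```

In a real pipeline this check runs inside the CI/CD system, with the evaluation set versioned alongside the model and data.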

Complementary Terms

Concepts that frequently appear alongside MLOps in practice.

Data Pipeline

An automated sequence of data processing steps that extracts, transforms, and loads data from source systems into target systems for analysis, reporting, or machine learning model training. Well-architected data pipelines are critical infrastructure assets that enable data-driven decision-making and AI deployment, and their reliability directly impacts downstream business processes.
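The extract-transform-load sequence described above can be sketched as three composable stages. This is a toy over in-memory lists; real pipelines would read from and write to actual source and target systems, and all names here are illustrative.

```python
# Minimal sketch of an extract-transform-load (ETL) data pipeline
# over in-memory data. Each stage is a separate, testable step.

def extract(source_rows):
    """Pull raw records from a source system (here, a list of dicts)."""
    return list(source_rows)

def transform(rows):
    """Clean and reshape: drop incomplete rows, normalise fields."""
    cleaned = []
    for row in rows:
        if row.get("amount") is None:
            continue  # incomplete record: exclude from the load
        cleaned.append({"customer": row["customer"].strip().lower(),
                        "amount": round(float(row["amount"]), 2)})
    return cleaned

def load(rows, target):
    """Append processed rows to the target store (here, a list)."""
    target.extend(rows)
    return target

source = [{"customer": " Acme ", "amount": "19.99"},
          {"customer": "Globex", "amount": None}]
warehouse = load(transform(extract(source)), target=[])
# warehouse now holds one cleaned row: {"customer": "acme", "amount": 19.99}
```

Keeping the stages separate is what makes the pipeline's reliability testable and its failures diagnosable, the property the definition flags as critical infrastructure.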

AI Governance

The framework of policies, procedures, and organisational structures that guide the responsible development, deployment, and monitoring of artificial intelligence systems. AI governance encompasses risk management, ethical guidelines, regulatory compliance, model validation, and accountability mechanisms.

Privacy by Design

An approach to systems engineering and product development that embeds data protection principles into the design and architecture of IT systems and business practices from the outset, rather than retrofitting them. Privacy by Design is codified as a legal requirement under GDPR Article 25 and encompasses data minimisation, pseudonymisation, and purpose limitation as default settings.
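Two of the defaults named above, pseudonymisation and data minimisation, can be sketched in a few lines: direct identifiers are replaced with keyed hashes before analysis, and fields that are not needed are simply never stored. The key handling and field names are illustrative assumptions; in production the secret would live in a key-management system, not in code.

```python
# Minimal sketch of pseudonymisation, a Privacy-by-Design default:
# a direct identifier is replaced with a keyed hash, so records can
# still be linked for analysis without exposing the raw identifier.
import hmac
import hashlib

SECRET_KEY = b"replace-with-managed-key"  # illustrative placeholder

def pseudonymise(identifier: str) -> str:
    """Deterministic keyed hash: same input always maps to the same pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchase_total": 42.50}

# Data minimisation: the analytical record carries the pseudonym,
# never the email address itself.
safe_record = {"user_id": pseudonymise(record["email"]),
               "purchase_total": record["purchase_total"]}
```

Because the hash is keyed, re-identification requires the secret key, which is what distinguishes pseudonymisation from plain hashing under GDPR.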

Model Drift

The degradation in a machine learning model's predictive accuracy over time as the statistical properties of the input data diverge from the training data distribution. Model drift requires ongoing monitoring and periodic retraining to maintain performance, and is a key operational risk in production AI systems.
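The ongoing monitoring mentioned above often compares a feature's live distribution against its training distribution. A common metric for this is the Population Stability Index (PSI); the thresholds 0.1 and 0.25 below follow a widely used rule of thumb, and the histograms are illustrative.

```python
# Minimal sketch of drift detection via the Population Stability
# Index (PSI): the binned distribution of a feature in production is
# compared against its distribution at training time.
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Sum over bins of (actual - expected) * ln(actual / expected)."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)  # guard against log(0)
        total += (a - e) * math.log(a / e)
    return total

training_bins = [0.25, 0.25, 0.25, 0.25]    # feature histogram at training time
production_bins = [0.10, 0.20, 0.30, 0.40]  # same bins, live traffic

score = psi(training_bins, production_bins)
if score > 0.25:
    action = "retrain"       # major shift: performance is likely degraded
elif score > 0.10:
    action = "investigate"   # moderate shift: inspect before it worsens
else:
    action = "ok"            # distributions still match
```

Running this check on a schedule, and triggering retraining when it fires, is the operational loop the definition describes.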

Feature Store

A centralised platform for storing, managing, and serving the engineered features (input variables) used by machine learning models in both training and real-time inference. Feature stores ensure consistency between training and production environments, enable feature reuse across multiple ML models, reduce duplication of feature engineering effort, and provide a governance layer for tracking feature lineage and ownership.
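The training/serving consistency guarantee at the heart of the definition can be sketched with a toy in-memory store: both the training job and the inference service read features through the same interface, so they cannot drift apart. A real feature store adds persistence, freshness guarantees, and the lineage tracking mentioned above; this `FeatureStore` class is an illustrative assumption.

```python
# Minimal sketch of a feature store's core contract: features are
# written once and the *same* values are served to both training and
# real-time inference, preventing training/serving skew.

class FeatureStore:
    def __init__(self):
        self._features = {}  # (entity_id, feature_name) -> value

    def write(self, entity_id, feature_name, value):
        self._features[(entity_id, feature_name)] = value

    def read(self, entity_id, feature_names):
        """Used identically by training jobs and the inference service."""
        return [self._features[(entity_id, name)] for name in feature_names]

store = FeatureStore()
store.write("customer_42", "avg_order_value", 58.20)
store.write("customer_42", "orders_last_30d", 3)

# Training and serving request the same feature vector the same way:
training_row = store.read("customer_42", ["avg_order_value", "orders_last_30d"])
serving_row = store.read("customer_42", ["avg_order_value", "orders_last_30d"])
```

Because every model reads through `read`, engineered features are reusable across models rather than re-derived per project, which is where the deduplication benefit comes from.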

Transfer Learning

A machine learning technique where a model trained on one task is repurposed as the starting point for a different but related task, significantly reducing the data and compute required for training. Transfer learning accelerates AI development timelines and reduces costs, making AI adoption more accessible to SMEs.
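The mechanism can be sketched without any ML framework: a "pre-trained" feature extractor is frozen and reused, and only a tiny task-specific head is fit on the new data. The extractor below is a deliberately crude stand-in for a network trained on a large related dataset; every name and the threshold-fitting head are illustrative assumptions.

```python
# Minimal sketch of transfer learning: freeze a pre-trained feature
# extractor, train only a small head on a handful of new examples.

def pretrained_extractor(text):
    """Frozen features 'learned' elsewhere (illustrative stand-in):
    total length and count of uppercase characters."""
    return [len(text), sum(c.isupper() for c in text)]

def fit_head(examples):
    """Train only the head: pick a threshold on one frozen feature,
    halfway between the two classes' uppercase counts."""
    upper_counts = {0: [], 1: []}
    for text, label in examples:
        upper_counts[label].append(pretrained_extractor(text)[1])
    midpoint = (max(upper_counts[0]) + min(upper_counts[1])) / 2
    return lambda text: 1 if pretrained_extractor(text)[1] > midpoint else 0

# Four labelled examples are enough, because the extractor is reused:
examples = [("hello there", 0), ("ok fine", 0),
            ("STOP NOW", 1), ("HELP", 1)]
classify = fit_head(examples)
```

The data saving is the point: only the head's single parameter is learned from the new task's four examples, while everything encoded in the extractor comes for free.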

Synthetic Data

Artificially generated data that mimics the statistical properties of real-world datasets, used to train machine learning models when actual data is scarce, sensitive, or expensive to obtain. Synthetic data enables AI development in privacy-constrained domains such as healthcare and finance, while reducing data acquisition costs and regulatory exposure.
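The simplest form of this idea can be sketched with one numeric column: fit the real data's summary statistics, then sample artificial records from the fitted distribution. Real generators model joint distributions and correlations across many columns; the single-column Gaussian below and its figures are purely illustrative.

```python
# Minimal sketch of synthetic data generation: fit the mean and
# standard deviation of a sensitive numeric column, then sample
# artificial records that track those statistics without containing
# any actual individual's value.
import random
import statistics

real_incomes = [32000, 41000, 38500, 55000, 47250, 36000]  # sensitive source data

mu = statistics.mean(real_incomes)
sigma = statistics.stdev(real_incomes)

rng = random.Random(0)  # seeded for reproducibility
synthetic_incomes = [rng.gauss(mu, sigma) for _ in range(1000)]

# The synthetic sample mirrors the real statistics, so a model
# trained on it sees a similar distribution.
synthetic_mu = statistics.mean(synthetic_incomes)
```

Because the synthetic rows are samples from a fitted distribution rather than copies of real records, the dataset can be shared more freely in privacy-constrained domains.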

Fine-Tuning

The process of further training a pre-trained machine learning model on a smaller, domain-specific dataset to adapt it for a particular task or industry. Fine-tuning allows organisations to leverage foundation models while creating proprietary, specialised AI capabilities that constitute identifiable intangible assets.

Put this knowledge to work

Use Opagio's free tools to measure and grow the intangible assets that drive your business value.