Transfer Learning

Definition

A machine learning technique in which a model trained on one task is reused as the starting point for a different but related task, significantly reducing the data and compute required for training. Transfer learning accelerates AI development timelines and reduces costs, making AI adoption more accessible to small and medium-sized enterprises (SMEs).
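The core idea can be sketched in a few lines: a minimal, illustrative example (not a production recipe) in which a toy "pretrained" feature extractor is frozen and only a small new head is trained on the target task's limited data. The extractor and dataset below are hypothetical stand-ins.

```python
def pretrained_features(x):
    # Stand-in for a network trained on a source task; its parameters are frozen.
    return [x, x * x]

def train_head(data, lr=0.1, epochs=300):
    # Only the new task head's weights are learned on the small target dataset.
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            feats = pretrained_features(x)
            err = sum(wi * f for wi, f in zip(w, feats)) - y
            w = [wi - lr * err * f for wi, f in zip(w, feats)]
    return w

# Target task: y = 2x + x^2, expressible as a linear map over the frozen features.
data = [(x, 2 * x + x * x) for x in [0.0, 0.5, 1.0, 1.5]]
w = train_head(data)
```

Because only the head is trained, far fewer parameters must be fitted, which is why transfer learning works with small target datasets.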

Complementary Terms

Concepts that frequently appear alongside Transfer Learning in practice.

Machine Learning Model

A mathematical model trained on data to identify patterns and make predictions without being explicitly programmed for each task. Machine learning models underpin many AI-driven business applications, from demand forecasting to fraud detection, and their development costs are increasingly recognised as intangible assets under IAS 38 when they meet the identifiability and future economic benefit criteria.

Federated Learning

A machine learning technique that trains models across multiple decentralised devices or servers holding local data, without transferring the raw data to a central location. Federated learning addresses data privacy and sovereignty concerns by keeping sensitive data on-device while still enabling collaborative model improvement.
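A minimal sketch of the federated averaging idea, under toy assumptions: each client fits a simple mean-estimation model on its local data, and the server combines only the resulting parameters, weighted by dataset size, so raw data never leaves the clients.

```python
def local_update(w, data, lr=0.5, steps=10):
    # Gradient descent on the client's local squared-error objective.
    for _ in range(steps):
        grad = sum(w - y for y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(clients, rounds=5):
    w = 0.0
    for _ in range(rounds):
        # Clients train locally; only parameters (not raw data) are sent back.
        updates = [(local_update(w, d), len(d)) for d in clients]
        total = sum(n for _, n in updates)
        w = sum(wi * n for wi, n in updates) / total
    return w

clients = [[1.0, 2.0], [3.0], [4.0, 6.0]]  # three devices with private data
w = fed_avg(clients)
```

With enough local steps, the weighted average converges to the same estimate a centralised model would reach (the global mean, 3.2, in this toy case).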

Technology Transfer

The process of transferring technological knowledge, intellectual property, or capabilities from one organisation or context to another. Technology transfer is central to the commercialisation of university research, licensing agreements, and cross-border investment, and its effectiveness depends on the quality of codified knowledge and absorptive capacity of the recipient.

Transfer Pricing

The rules and methods governing the pricing of transactions between related entities within a multinational group, designed to ensure that intercompany transactions reflect arm's-length prices. Transfer pricing is particularly significant for intangible assets, where the OECD Transfer Pricing Guidelines and BEPS Actions 8-10 address the allocation of profits arising from intangible asset development, ownership, and exploitation across jurisdictions.

Fine-Tuning

The process of further training a pre-trained machine learning model on a smaller, domain-specific dataset to adapt it for a particular task or industry. Fine-tuning allows organisations to leverage foundational models while creating proprietary, specialised AI capabilities that constitute identifiable intangible assets.
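The contrast with training from scratch can be sketched with a one-parameter toy model: fine-tuning starts from assumed pretrained weights and continues training on a small, hypothetical domain dataset, reaching a good fit in far fewer steps than a randomly initialised model given the same budget.

```python
def sgd(w, data, lr, epochs):
    # Stochastic gradient descent on squared error for a one-weight linear model.
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

pretrained_w = 2.0  # assumed weights from a large, generic source dataset
domain_data = [(1.0, 2.5), (2.0, 5.0)]  # small target-domain dataset: y = 2.5x

# Same small training budget for both; fine-tuning starts much closer.
fine_tuned = sgd(pretrained_w, domain_data, lr=0.05, epochs=5)
from_scratch = sgd(0.0, domain_data, lr=0.05, epochs=5)
```

The fine-tuned weights, adapted to proprietary domain data, are what distinguishes the resulting model from the publicly available foundation it started from.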

Synthetic Data

Artificially generated data that mimics the statistical properties of real-world datasets, used to train machine learning models when actual data is scarce, sensitive, or expensive to obtain. Synthetic data enables AI development in privacy-constrained domains such as healthcare and finance, while reducing data acquisition costs and regulatory exposure.
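"Mimicking the statistical properties" of real data can be illustrated with the simplest possible generator, a Gaussian fitted to a small real sample; production synthetic-data tools use far richer models, so this is only a sketch of the principle.

```python
import random
import statistics

def synthesize(real, n, seed=0):
    # Fit a simple Gaussian to the real data, then sample synthetic records from it.
    rng = random.Random(seed)
    mu = statistics.mean(real)
    sigma = statistics.stdev(real)
    return [rng.gauss(mu, sigma) for _ in range(n)]

real = [10.0, 12.0, 11.0, 13.0, 9.0]   # small, possibly sensitive dataset
fake = synthesize(real, 1000)           # larger dataset with similar statistics
```

The synthetic records carry the distribution's shape without reproducing any individual real record, which is the basis of the privacy argument.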

MLOps

A set of practices combining machine learning, DevOps, and data engineering to standardise and streamline the end-to-end lifecycle of machine learning models, from development through deployment to monitoring. MLOps encompasses version control for models and data, automated testing, continuous integration and deployment, and model performance monitoring in production.

Model Drift

The degradation in a machine learning model's predictive accuracy over time as the statistical properties of the input data diverge from the training data distribution. Model drift requires ongoing monitoring and periodic retraining to maintain performance, and is a key operational risk in production AI systems.
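The monitoring this implies can be sketched with a basic statistical check: compare live input statistics against the training distribution and flag a significant shift. The z-score test and threshold below are illustrative choices, not an industry standard.

```python
import statistics

def drifted(train, live, z_threshold=3.0):
    # Flag drift when the live mean sits far outside the training distribution,
    # measured in standard errors of the live sample mean.
    mu, sigma = statistics.mean(train), statistics.stdev(train)
    z = abs(statistics.mean(live) - mu) / (sigma / len(live) ** 0.5)
    return z > z_threshold

train = [float(i % 10) for i in range(100)]      # training inputs, mean 4.5
stable = [4.0, 5.0, 4.5, 5.5, 3.5, 4.5]          # live data matching training
shifted = [9.0, 8.5, 9.5, 10.0, 9.0, 8.8]        # live data that has drifted
```

In practice such a check runs continuously in production, and a triggered alert feeds the retraining decision described above.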

Put this knowledge to work

Use Opagio's free tools to measure and grow the intangible assets that drive your business value.