Algorithmic Bias

Definition

Systematic and repeatable errors in an AI system's outputs that create unfair outcomes for particular groups, typically arising from biased training data, flawed model design, or unrepresentative sampling. Algorithmic bias poses significant reputational, legal, and regulatory risks, and its identification and mitigation are core components of responsible AI governance.
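One common bias test the definition alludes to is checking whether a model's positive-prediction rate differs across groups. The sketch below (illustrative only; the function name and data are invented for this example) computes a simple demographic parity gap:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any
    two groups; 0.0 means the rates are perfectly balanced."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# A model that approves 80% of group A but only 40% of group B
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # roughly 0.4
```

A large gap does not prove unfairness on its own, but it flags outcomes that merit investigation under a responsible AI governance process.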

Complementary Terms

Concepts that frequently appear alongside Algorithmic Bias in practice.

Responsible AI

A framework for developing and deploying artificial intelligence systems that are fair, transparent, accountable, and aligned with human values and societal well-being. Responsible AI encompasses technical practices such as bias testing and model interpretability, alongside governance processes including ethical review boards, impact assessments, and stakeholder engagement.

Platform Business Model

A business model that creates value by facilitating exchanges between two or more interdependent user groups — typically producers and consumers — through a digital platform. Platform businesses generate powerful network effects and intangible assets including user data, algorithmic matching capabilities, and brand trust.

Generative AI

A category of artificial intelligence systems capable of creating new content — including text, images, code, music, and video — based on patterns learned from training data. Generative AI is transforming content production, product design, and software development, raising novel questions about intellectual property ownership and the valuation of AI-generated outputs.
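The core idea of "creating new content based on patterns learned from training data" can be shown with a deliberately tiny stand-in for a generative model: a first-order Markov chain over words. This is a toy sketch, not how modern generative AI is built, but the learn-then-sample loop is the same in spirit:

```python
import random
from collections import defaultdict

def build_model(text):
    """Learn which words follow which in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Generate new text by repeatedly sampling a learned successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the platform creates value and the platform creates trust"
model = build_model(corpus)
sample = generate(model, "the", 5)
```

Every word the toy model emits is drawn from patterns in its training corpus, which is why questions of intellectual property ownership follow generative systems so closely.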

Platform Economy

An economic model built around digital platforms that create value by facilitating exchanges between two or more user groups. Platform businesses derive the majority of their enterprise value from intangible assets including network effects, proprietary algorithms, user data, and brand trust.
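Network effects are often illustrated with a stylised back-of-envelope count (a Metcalfe-style approximation, not a valuation method from this glossary): possible connections grow roughly with the square of the user base, and on a two-sided platform with the product of the two sides.

```python
def one_sided_connections(n_users):
    """Possible pairwise links in a one-sided network: n * (n - 1) / 2."""
    return n_users * (n_users - 1) // 2

def two_sided_interactions(producers, consumers):
    """Possible producer-consumer matches on a two-sided platform."""
    return producers * consumers

# Doubling users roughly quadruples potential connections:
small = one_sided_connections(100)   # 4,950
large = one_sided_connections(200)   # 19,900
```

This super-linear growth in potential interactions is why network effects dominate the intangible value of platform businesses.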

Retrieval-Augmented Generation (RAG) Architecture

A technical architecture that enhances large language model outputs by retrieving relevant information from an external knowledge base before generating a response, grounding the model's output in verified, up-to-date, and domain-specific data. RAG reduces hallucination risk, enables LLMs to access proprietary or recent information not in their training data, and provides citation capabilities.
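The retrieve-then-generate flow can be sketched end to end in a few lines. The retriever below uses naive word overlap purely as a stand-in for the embedding-based similarity search a production RAG system would use, and the "generation" step simply assembles a grounded prompt; the knowledge-base contents are invented for illustration:

```python
def retrieve(query, knowledge_base, top_k=1):
    """Rank documents by word overlap with the query (a stand-in for
    embedding similarity search in a real RAG pipeline)."""
    q_words = set(query.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:top_k]

def answer(query, knowledge_base):
    """Ground the generation step in retrieved context; a real system
    would pass this context to an LLM prompt."""
    context = retrieve(query, knowledge_base)
    return f"Context: {context[0]}\nQuery: {query}"

kb = [
    "Fine-tuning adapts a pre-trained model to a domain.",
    "Data lineage documents the origin and transformations of data.",
    "RAG grounds model outputs in retrieved documents.",
]
prompt = answer("how does RAG ground outputs", kb)
```

Because the retrieved passage travels with the query, the system can cite its sources, which is the basis of RAG's citation capability.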


Fine-Tuning

The process of further training a pre-trained machine learning model on a smaller, domain-specific dataset to adapt it for a particular task or industry. Fine-tuning allows organisations to leverage foundational models while creating proprietary, specialised AI capabilities that constitute identifiable intangible assets.
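The "continue training from pre-trained weights" idea can be shown on a deliberately minimal one-parameter model (a toy sketch; real fine-tuning applies the same principle to millions of neural network weights, typically with a lower learning rate than pre-training):

```python
def train(w, data, lr, epochs):
    """Gradient descent on squared error for the model y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

# "Pre-training": broad data where y is roughly 2x
general = [(1, 2.0), (2, 4.0), (3, 6.0)]
w_pretrained = train(0.0, general, lr=0.01, epochs=200)

# "Fine-tuning": continue from the pre-trained weight on a small
# domain-specific dataset where y is roughly 2.5x, at a lower
# learning rate so the adaptation stays close to the starting point
domain = [(1, 2.5), (2, 5.0)]
w_finetuned = train(w_pretrained, domain, lr=0.005, epochs=200)
```

The fine-tuned weight, unlike the generic pre-trained one, encodes the organisation's domain data, which is why a fine-tuned model can constitute an identifiable proprietary intangible asset.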

Data Lineage

The documented lifecycle of data as it moves through an organisation's systems, showing its origin, transformations, dependencies, and destinations. Data lineage provides visibility into how data is created, processed, and consumed, enabling organisations to ensure data quality, comply with regulatory requirements (particularly GDPR's right to explanation), debug data pipeline issues, and assess the impact of system changes.
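A minimal lineage record simply travels with the data, appending one entry per transformation. The sketch below (dataset names and steps are invented for illustration) shows the origin-and-transformations chain the definition describes:

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    """A dataset paired with a log of how it was produced."""
    name: str
    rows: list
    lineage: list = field(default_factory=list)

def transform(dataset, step_name, fn):
    """Apply fn to the rows and append a lineage record
    linking the output dataset back to its input."""
    return Dataset(
        name=f"{dataset.name}/{step_name}",
        rows=fn(dataset.rows),
        lineage=dataset.lineage + [f"{step_name} <- {dataset.name}"],
    )

raw = Dataset("crm_export", [" Alice ", "bob", " Alice "])
cleaned = transform(raw, "strip_whitespace",
                    lambda rows: [r.strip() for r in rows])
deduped = transform(cleaned, "deduplicate",
                    lambda rows: sorted(set(rows)))
```

Reading `deduped.lineage` answers the regulator's question "where did this value come from?" without re-running the pipeline; production lineage tools capture the same chain automatically across systems.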

Put this knowledge to work

Use Opagio's free tools to measure and grow the intangible assets that drive your business value.