AI Hallucination

Definition

An output generated by an artificial intelligence system — particularly by large language models — that is factually incorrect, fabricated, or nonsensical, yet presented with apparent confidence. AI hallucinations pose significant risks in applications such as legal research, medical advice, and financial analysis, and mitigating them through grounding, retrieval-augmented generation, and human oversight is a key challenge in enterprise AI deployment.

Complementary Terms

Concepts that frequently appear alongside AI Hallucination in practice.

Retrieval-Augmented Generation (RAG) Architecture

A technical architecture that enhances large language model outputs by retrieving relevant information from an external knowledge base before generating a response, grounding the model's output in verified, up-to-date, and domain-specific data. RAG reduces hallucination risk, enables LLMs to access proprietary or recent information not in their training data, and provides citation capabilities.
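The retrieve-then-generate pattern described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the keyword-overlap retriever stands in for a real vector store, and the knowledge base and prompt template are invented for the example.

```python
# Minimal RAG sketch. A toy keyword-overlap retriever stands in for a
# real embedding/vector-store retriever; the documents are illustrative.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, documents):
    """Prepend retrieved context so the model answers from evidence."""
    context = retrieve(query, documents)
    sources = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(context))
    return (
        "Answer using ONLY the sources below and cite them.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

# Hypothetical knowledge base a company might ground its chatbot in:
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm, Monday to Friday.",
    "Gift cards are non-refundable under the refund policy.",
]
prompt = build_grounded_prompt("What is the refund policy?", docs)
```

The grounded prompt is then sent to the language model in place of the bare question; because the answer must come from the retrieved sources, the model has far less room to fabricate, and citations fall out naturally from the numbered source list.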

Natural Language Processing

A branch of artificial intelligence concerned with enabling computers to understand, interpret, and generate human language. NLP powers applications such as chatbots, sentiment analysis, document classification, and automated contract review.

Knowledge-Intensive Business Services (KIBS)

Firms that provide specialist knowledge-based services such as consulting, engineering, IT services, legal advisory, and financial analysis. KIBS firms are characterised by high intangible asset intensity, with the majority of their enterprise value derived from human capital, client relationships, proprietary methodologies, and reputation.

Human Capital Return on Investment (HCROI)

A metric that measures the financial return generated per unit of human capital expenditure, typically calculated as adjusted profit divided by total compensation and benefits costs. HCROI enables firms and investors to evaluate workforce productivity and benchmark the efficiency of human capital deployment across organisations.
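The calculation described above can be shown with a small worked example. The figures are made up for illustration, and the adjustment shown (adding compensation back before dividing by it, i.e. revenue minus non-labour costs over total pay) is one common formulation; exact definitions of "adjusted profit" vary across firms.

```python
# Illustrative HCROI calculation. Figures and the specific adjustment
# are assumptions for the example; definitions vary in practice.

def hcroi(revenue, non_labour_costs, compensation_and_benefits):
    """HCROI = adjusted profit per unit of compensation spend.

    Adjusted profit here is revenue minus all non-labour costs,
    which is equivalent to adding compensation back to operating
    profit before dividing by it.
    """
    adjusted_profit = revenue - non_labour_costs
    return adjusted_profit / compensation_and_benefits

# Hypothetical figures, in millions:
ratio = hcroi(revenue=50.0, non_labour_costs=20.0,
              compensation_and_benefits=15.0)
# (50 - 20) / 15 = 2.0, i.e. 2 units of adjusted profit per unit of pay
```

A ratio above 1.0 means the workforce generates more adjusted profit than it costs; comparing the ratio across periods or peer firms is the usual benchmarking use.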

Freedom to Operate (FTO) Analysis

A legal assessment that determines whether a product, process, or technology can be commercialised without infringing the intellectual property rights of third parties. FTO analysis involves searching and reviewing granted patents and pending applications in relevant jurisdictions to identify potential infringement risks.

Explainable AI

Artificial intelligence systems designed to provide human-interpretable explanations of their decision-making processes and outputs. Explainability is increasingly required by regulators — particularly in financial services, healthcare, and criminal justice — and is a key differentiator for AI products seeking enterprise adoption in regulated industries.

Edge Computing

A distributed computing paradigm that processes data near the source of generation rather than in a centralised data centre, reducing latency, bandwidth costs, and data privacy risks. Edge computing is essential for real-time AI applications such as autonomous vehicles, industrial IoT, and point-of-sale analytics.
