Edge Computing

Definition

A distributed computing paradigm that processes data near the source of generation rather than in a centralised data centre, reducing latency, bandwidth costs, and data privacy risks. Edge computing is essential for real-time AI applications such as autonomous vehicles, industrial IoT, and point-of-sale analytics.
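A minimal sketch of the idea, assuming a hypothetical sensor read and upload function: the edge device aggregates raw readings locally and ships only a compact summary upstream, so anomalies are acted on immediately and bandwidth is spent on kilobytes of summary rather than a raw stream.

```python
import statistics

# Minimal sketch of edge-side preprocessing. read_sensor_batch() and
# send_to_cloud() are illustrative placeholders, not a specific device API.

def read_sensor_batch():
    """Stand-in for reading a burst of raw measurements at the edge."""
    return [21.3, 21.4, 22.1, 35.9, 21.2]  # one anomalous value

def send_to_cloud(payload):
    """Stand-in for a network call to the central platform."""
    print("uploading:", payload)

readings = read_sensor_batch()
summary = {
    "mean": round(statistics.mean(readings), 2),
    "max": max(readings),
    "anomalies": [r for r in readings if r > 30],  # flagged locally, in real time
}
send_to_cloud(summary)  # a small summary instead of every raw reading
```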

Complementary Terms

Concepts that frequently appear alongside Edge Computing in practice.

Data Governance

The framework of policies, standards, and processes that ensures data assets are managed consistently, securely, and in compliance with regulations throughout their lifecycle. Strong data governance increases the reliability and value of data as an intangible asset, directly supporting analytics, AI applications, and data monetisation strategies.

Master Data Management (MDM)

The processes, governance, policies, and technology used to ensure that an organisation's critical shared data entities — such as customers, products, suppliers, and accounts — are accurate, consistent, and controlled across all systems and business units. MDM creates a single trusted source of master data, reducing duplication, resolving conflicts, and enabling reliable reporting and analytics.
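As a rough illustration (the record fields and the survivorship rule of "most recently updated non-empty value wins" are assumptions, not a specific MDM product), merging duplicated customer records into a single golden record might look like:

```python
from datetime import date

# Two copies of the same customer held in different systems.
crm_record = {"name": "ACME Ltd", "email": "", "updated": date(2024, 3, 1)}
billing_record = {"name": "Acme Limited", "email": "ap@acme.example", "updated": date(2024, 6, 15)}

def golden_record(records):
    """Merge duplicates so newer non-empty values overwrite older ones."""
    merged = {}
    for rec in sorted(records, key=lambda r: r["updated"]):  # oldest first
        for field, value in rec.items():
            if value:
                merged[field] = value
    return merged

print(golden_record([crm_record, billing_record]))
# {'name': 'Acme Limited', 'email': 'ap@acme.example', 'updated': datetime.date(2024, 6, 15)}
```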

Data Lake

A centralised repository that stores large volumes of raw data in its native format — structured, semi-structured, and unstructured — until it is needed for analysis. Unlike data warehouses, which store data in predefined schemas, data lakes use a schema-on-read approach that provides flexibility for diverse analytical workloads including machine learning, real-time analytics, and ad hoc exploration.
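A small sketch of schema-on-read, with an illustrative file name and event fields: events land in the lake as raw JSON lines with no predefined schema, and each workload imposes its own structure only when it reads them.

```python
import json

# Store first, structure later: raw events written as-is to the lake.
raw_events = [
    '{"user": "u1", "action": "click", "ts": "2024-06-01T10:00:00"}',
    '{"user": "u2", "action": "purchase", "amount": 49.99}',  # extra field is fine
]
with open("events.jsonl", "w") as f:
    f.write("\n".join(raw_events))

# A specific analysis applies its own schema at read time.
with open("events.jsonl") as f:
    events = [json.loads(line) for line in f]
purchases = [e for e in events if e.get("action") == "purchase"]
print(purchases)
```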

AI Hallucination

An output generated by an artificial intelligence system — particularly large language models — that is factually incorrect, fabricated, or nonsensical, yet presented with apparent confidence. AI hallucinations pose significant risks in applications such as legal research, medical advice, and financial analysis, and their mitigation through grounding, retrieval-augmented generation, and human oversight is a key challenge in enterprise AI deployment.
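The retrieval-grounding pattern can be sketched as follows; the document store, the naive keyword retrieval, and call_llm() are illustrative placeholders rather than a specific vendor API, and a production system would use embeddings and a real model call.

```python
# Minimal sketch of retrieval-augmented generation: answer only from
# retrieved sources, and instruct the model to refuse when they are silent.

documents = {
    "policy-12": "Refunds are available within 30 days of purchase.",
    "policy-18": "Gift cards are non-refundable.",
}

def retrieve(question, top_k=2):
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    overlap = lambda text: len(set(question.lower().split()) & set(text.lower().split()))
    ranked = sorted(documents.items(), key=lambda kv: overlap(kv[1]), reverse=True)
    return ranked[:top_k]

def call_llm(prompt):
    """Placeholder for a model call; echoes the prompt so the sketch runs."""
    return prompt

question = "Can I get a refund on a gift card?"
context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(question))
prompt = (
    "Answer using only the sources below and cite them. "
    "If the sources do not cover the question, say 'not found'.\n"
    f"Sources:\n{context}\n\nQuestion: {question}"
)
print(call_llm(prompt))
```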

Transfer Learning

A machine learning technique where a model trained on one task is repurposed as the starting point for a different but related task, significantly reducing the data and compute required for training. Transfer learning accelerates AI development timelines and reduces costs, making AI adoption more accessible to small and medium-sized enterprises (SMEs).
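A minimal sketch of the common recipe, assuming PyTorch and using a small placeholder network in place of a real pretrained model: freeze the backbone trained on the source task, attach a new head for the target task, and train only the head.

```python
import torch
import torch.nn as nn

class PretrainedBackbone(nn.Module):
    """Placeholder for a model trained on a source task (weights assumed loaded)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(128, 64), nn.ReLU())

    def forward(self, x):
        return self.features(x)

backbone = PretrainedBackbone()
for p in backbone.parameters():
    p.requires_grad = False              # freeze source-task knowledge

head = nn.Linear(64, 3)                  # new head for a 3-class target task
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 128)                 # a batch of target-task inputs
y = torch.randint(0, 3, (32,))           # target-task labels
optimizer.zero_grad()
loss = loss_fn(head(backbone(x)), y)     # only the head receives gradient updates
loss.backward()
optimizer.step()
print(float(loss))
```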

Computer Vision

A field of artificial intelligence that enables machines to interpret and extract information from visual inputs such as images, video, and documents. Computer vision is applied in quality inspection, medical imaging, autonomous vehicles, and document processing.

Data Pipeline

An automated sequence of data processing steps that extracts, transforms, and loads data from source systems into target systems for analysis, reporting, or machine learning model training. Well-architected data pipelines are critical infrastructure assets that enable data-driven decision-making and AI deployment, and their reliability directly impacts downstream business processes.
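A compact extract-transform-load (ETL) sketch of the idea; the CSV source, column names, and SQLite target are illustrative assumptions rather than a recommended stack.

```python
import csv
import sqlite3

# Create a small source file so the sketch is self-contained.
with open("customers.csv", "w", newline="") as f:
    f.write("customer_id,country\nC-001,de\nC-002,fr\n,us\n")

def extract(path):
    """Read raw rows from the source system."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Standardise values and drop rows missing a customer id."""
    return [
        {"customer_id": r["customer_id"].strip(), "country": r["country"].upper()}
        for r in rows
        if r.get("customer_id")
    ]

def load(rows, db_path="warehouse.db"):
    """Write cleaned rows into the target table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS customers (customer_id TEXT, country TEXT)")
    con.executemany("INSERT INTO customers VALUES (:customer_id, :country)", rows)
    con.commit()
    con.close()

load(transform(extract("customers.csv")))
```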

Data Mesh

A decentralised data architecture paradigm that treats data as a product owned by domain-specific teams rather than centralising all data management in a single platform team. Data mesh is built on four principles: domain ownership, data as a product, self-serve data infrastructure, and federated computational governance.
