Prompt Engineering
Definition
The practice of designing and optimising input instructions (prompts) to elicit desired outputs from large language models and other generative AI systems. Effective prompt engineering can significantly improve AI output quality and consistency, and documented prompt libraries are emerging as a form of organisational knowledge capital.
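A core prompt-engineering practice is keeping prompts as reusable, parameterised templates rather than ad-hoc strings. The sketch below is illustrative only: the template wording, variable names, and `build_prompt` helper are assumptions, not a standard.

```python
# A minimal prompt-template sketch. The template text and field names
# are illustrative, not a standard format.
PROMPT_TEMPLATE = (
    "You are a {role}.\n"
    "Task: {task}\n"
    "Constraints: answer in at most {max_words} words.\n"
    "Input: {user_input}"
)

def build_prompt(role: str, task: str, max_words: int, user_input: str) -> str:
    """Fill the template so every model call gets the same structure."""
    return PROMPT_TEMPLATE.format(
        role=role, task=task, max_words=max_words, user_input=user_input
    )

prompt = build_prompt(
    role="financial analyst",
    task="summarise the filing",
    max_words=100,
    user_input="Annual report text...",
)
print(prompt)
```

Versioning templates like this one in a shared library is what turns individual prompting skill into the organisational knowledge capital the definition mentions.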
Complementary Terms
Concepts that frequently appear alongside Prompt Engineering in practice.
Organisational Capital
The accumulated knowledge, processes, systems, and culture that enable a firm to operate effectively. Organisational capital includes management practices, internal processes, proprietary methodologies, quality systems, and the institutional knowledge that persists beyond individual employees.
Data Quality Score
A quantitative measure of data fitness for its intended use, typically assessed across dimensions including accuracy, completeness, consistency, timeliness, uniqueness, and validity. Data quality scores enable organisations to monitor and improve the reliability of their data assets, prioritise remediation efforts, and establish trust in analytical outputs.
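One common way to roll the dimensions above into a single score is a weighted average. The function and the example figures below are illustrative assumptions, not a standard scoring method.

```python
def data_quality_score(dimension_scores: dict, weights: dict = None) -> float:
    """Weighted average of per-dimension scores, each in [0, 1]."""
    if weights is None:
        # Default: every dimension counts equally.
        weights = {dim: 1.0 for dim in dimension_scores}
    total_weight = sum(weights[dim] for dim in dimension_scores)
    return sum(score * weights[dim]
               for dim, score in dimension_scores.items()) / total_weight

# Hypothetical per-dimension assessments for one dataset:
scores = {"accuracy": 0.98, "completeness": 0.90, "consistency": 0.95,
          "timeliness": 0.80, "uniqueness": 1.00, "validity": 0.97}
print(round(data_quality_score(scores), 3))  # → 0.933
```

A low composite score points remediation at the weakest dimensions (here, timeliness) rather than at the dataset as a whole.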
Retrieval-Augmented Generation (RAG)
A technical architecture that enhances large language model outputs by retrieving relevant information from an external knowledge base before generating a response, grounding the model's output in verified, up-to-date, and domain-specific data. RAG reduces hallucination risk, enables LLMs to access proprietary or recent information not in their training data, and provides citation capabilities.
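The retrieve-then-generate flow can be sketched end to end in a few lines. Everything here is a toy stand-in: the knowledge base is hard-coded, the retriever is naive word overlap (real systems use vector embeddings), and the final prompt is printed instead of being sent to an LLM.

```python
import re

# Toy knowledge base standing in for an external document store.
KNOWLEDGE_BASE = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
    "Premium accounts include priority phone support.",
]

def words(text: str) -> set:
    """Lowercased alphanumeric tokens, for crude relevance scoring."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Rank documents by word overlap with the query; keep the top k."""
    return sorted(docs, key=lambda d: len(words(query) & words(d)),
                  reverse=True)[:k]

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved context; a real system sends this to an LLM."""
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_grounded_prompt("What is the refund policy?"))
```

Because the answer is grounded in retrieved documents, the system can cite its sources and stay current as the knowledge base is updated, without retraining the model.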
Tokenisation
The process of breaking text, code, or other sequential data into discrete units (tokens) that serve as the input and output elements for large language models. Tokenisation determines how a model processes language and directly affects inference costs, since API pricing for large language models is typically based on token count.
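The cost link can be made concrete with a toy tokeniser. Note the hedge: production LLM tokenisers use subword schemes such as byte-pair encoding, so their counts differ from this word-level split, and the per-token price below is a made-up figure for illustration.

```python
import re

def tokenize(text: str) -> list:
    """Toy tokeniser: split into word and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text)

def estimated_cost(text: str, price_per_1k_tokens: float = 0.01) -> float:
    """Rough inference cost: token count times a per-token price."""
    return len(tokenize(text)) / 1000 * price_per_1k_tokens

tokens = tokenize("Tokenisation affects inference costs.")
print(tokens)       # → ['Tokenisation', 'affects', 'inference', 'costs', '.']
print(len(tokens))  # → 5
```

The same text can tokenise to very different counts under different schemes, which is why cost estimates should use the target model's own tokeniser.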
Labour Productivity
The amount of output produced per unit of labour input, commonly measured as gross value added (GVA) divided by labour costs or number of employees. Labour productivity is a key efficiency metric that reflects the quality of human capital, processes, and technology deployed by a firm.
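The per-employee variant of the formula is a single division; the figures below are hypothetical.

```python
def labour_productivity(gva: float, employees: int) -> float:
    """Gross value added per employee."""
    return gva / employees

# A hypothetical firm: £5m of GVA produced by 40 staff.
print(labour_productivity(5_000_000, 40))  # → 125000.0
```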
Generative AI
A category of artificial intelligence systems capable of creating new content — including text, images, code, music, and video — based on patterns learned from training data. Generative AI is transforming content production, product design, and software development, raising novel questions about intellectual property ownership and the valuation of AI-generated outputs.
Feature Store
A centralised platform for storing, managing, and serving the engineered features (input variables) used by machine learning models in both training and real-time inference. Feature stores ensure consistency between training and production environments, enable feature reuse across multiple ML models, reduce duplication of feature engineering effort, and provide a governance layer for tracking feature lineage and ownership.
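The train/serve consistency guarantee comes from having one write path and one read path for every feature. The in-memory class below is a minimal sketch of that idea with illustrative names; real feature stores add persistence, point-in-time lookups, and lineage tracking.

```python
class FeatureStore:
    """Toy in-memory feature store keyed by (entity_id, feature_name)."""

    def __init__(self):
        self._features = {}

    def put(self, entity_id: str, feature_name: str, value: float) -> None:
        """Single write path used by all feature-engineering jobs."""
        self._features[(entity_id, feature_name)] = value

    def get(self, entity_id: str, feature_name: str) -> float:
        # Training jobs and online inference both call this same method,
        # so they are guaranteed to see identical values.
        return self._features[(entity_id, feature_name)]

store = FeatureStore()
store.put("customer_42", "avg_order_value", 87.5)
print(store.get("customer_42", "avg_order_value"))  # → 87.5
```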
Data Assets
Proprietary datasets, analytics capabilities, and data infrastructure that provide competitive advantage. Data assets include customer behavioural data, market intelligence, training datasets for AI models, and proprietary databases that improve decision-making or product quality.