Meta prompting is an advanced prompt engineering technique where large language models (LLMs) are used to generate, refine, or optimize prompts for themselves or other models. Rather than manually crafting prompts, meta prompting treats prompt design as a task the LLM itself can perform, focusing on structural reasoning patterns rather than task-specific content.1)
Meta prompting leverages LLMs as “prompt engineers”: instead of answering a task directly, the model is asked to generate, critique, or iteratively refine the prompt that will be used for the task.
For example, a user might ask an LLM to “create an optimized prompt for JSON API processing.” The LLM generates a refined prompt with error handling, validation, and logging steps, which is then used for the actual task.
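This two-call pattern can be sketched as follows. `call_llm` is a hypothetical stand-in for any chat-completion API, stubbed here with canned replies so the flow is runnable:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; stubbed with canned replies for illustration."""
    if "Write an optimized prompt" in prompt:
        # Pass 1 reply: the model returns a refined prompt with structure added.
        return ("You are a careful JSON processor. For the input below:\n"
                "1. Validate the JSON and report any syntax errors.\n"
                "2. Apply the requested transformation.\n"
                "3. Log each step and return the result as valid JSON.\n"
                "Input: {payload}")
    return '{"status": "ok"}'

def meta_prompt(task_description: str) -> str:
    # Pass 1: ask the model to act as a prompt engineer.
    return call_llm(f"Write an optimized prompt for this task: {task_description}")

def run_task(payload: str) -> str:
    # Pass 2: use the generated prompt for the actual task.
    prompt_template = meta_prompt("JSON API processing")
    return call_llm(prompt_template.format(payload=payload))

result = run_task('{"user": "alice"}')
```

The key point is that the same model (or a stronger one) writes the task prompt in the first call and consumes it in the second.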
Several meta prompting frameworks have been developed:
The LLM generates its own step-by-step meta-prompt in a first pass, then solves the task using that prompt in a second pass. This adapts well to zero-shot and few-shot settings, but the final answer depends heavily on the quality of the initially generated plan.
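A minimal sketch of the two-pass flow, again with a stubbed `call_llm` standing in for a real model so both passes are runnable:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call, stubbed for illustration."""
    if prompt.startswith("Devise a step-by-step plan"):
        # Pass 1 reply: a self-generated meta-prompt (plan) for the task.
        return ("Step 1: Restate the problem in your own words.\n"
                "Step 2: Identify the known quantities.\n"
                "Step 3: Solve and verify the result.")
    return "Answer: 42"

def solve_with_meta_prompt(problem: str) -> str:
    # Pass 1: the model writes its own structured meta-prompt for the problem.
    plan = call_llm(f"Devise a step-by-step plan for solving: {problem}")
    # Pass 2: the model solves the problem while following its own plan.
    return call_llm(f"{plan}\n\nNow solve: {problem}")
```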
A central “conductor” LLM decomposes tasks and assigns specialized meta-prompts to different expert LLMs (e.g., coder, verifier, mathematician). This enables multi-agent collaboration for complex workflows.2)
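The conductor pattern can be sketched as a dispatcher over role-specific meta-prompts. The expert registry and the hard-coded decomposition below are illustrative assumptions; in a real system the conductor LLM would choose the decomposition and each expert would be a separate LLM call:

```python
# Hypothetical expert registry: each expert is an LLM persona reached via a
# role-specific meta-prompt. Lambdas stand in for real LLM calls.
EXPERTS = {
    "coder": lambda p: f"def solution(): ...  # code for: {p}",
    "verifier": lambda p: f"Checked: {p} looks correct.",
}

def conductor(task: str) -> str:
    """Central LLM decomposes the task and routes subtasks to experts."""
    # Hard-coded decomposition to keep the sketch runnable; normally the
    # conductor model would produce this list itself.
    subtasks = [("coder", task), ("verifier", task)]
    transcript = []
    for role, subtask in subtasks:
        meta = f"You are an expert {role}. {subtask}"  # role-specific meta-prompt
        transcript.append(EXPERTS[role](meta))
    # The conductor then synthesizes expert outputs into a final answer.
    return "\n".join(transcript)
```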
The LLM simulates multiple expert roles via a meta-prompt for multi-perspective problem solving. The model engages in iterative dialogue from different viewpoints before synthesizing a final answer.
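A sketch of the multi-persona loop, assuming a hypothetical `call_llm` API (stubbed so the code runs): each persona answers in turn, then a final synthesis call merges the viewpoints.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call, stubbed with a canned echo reply."""
    return f"[response to: {prompt[:40]}...]"

def multi_persona(question: str, personas: list[str]) -> str:
    # One meta-prompt per persona asks the model to answer from that viewpoint.
    views = [call_llm(f"As a {p}, analyse: {question}") for p in personas]
    # A synthesis pass then merges the perspectives into a final answer.
    joined = "\n".join(views)
    return call_llm(f"Synthesize these expert views into one answer:\n{joined}")
```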
Basic user requests are enhanced into detailed, structured instructions. The meta-prompt maps reasoning steps and includes self-checking mechanisms for safety and accuracy.
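The enhancement step can be pictured as wrapping the raw request in reasoning and self-check scaffolding. The wrapper template below is illustrative string formatting; a real system would have an LLM generate it:

```python
def enhance_request(user_request: str) -> str:
    """Expand a terse user request into a structured instruction.

    Illustrative sketch: real systems would generate this scaffolding
    with an LLM rather than a fixed template.
    """
    return (
        f"Task: {user_request}\n"
        "Reasoning steps:\n"
        "  1. Break the task into sub-goals.\n"
        "  2. Solve each sub-goal, showing intermediate work.\n"
        "Self-check:\n"
        "  - Verify the answer satisfies every stated constraint.\n"
        "  - Flag unsafe or uncertain output instead of guessing."
    )
```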
The paper “On Meta-Prompting” by Zhang et al. (2023) formalizes meta prompting as a technique where LLMs condition outputs via in-context learning without backpropagation.3) Key contributions: