Meta Prompting

Meta prompting is an advanced prompt engineering technique where large language models (LLMs) are used to generate, refine, or optimize prompts for themselves or other models. Rather than manually crafting prompts, meta prompting treats prompt design as a task the LLM itself can perform, focusing on structural reasoning patterns rather than task-specific content.1)

How It Works

Meta prompting leverages LLMs as “prompt engineers” through several mechanisms:

  1. Prompt generation: Given a high-level task description, the LLM produces a detailed, step-by-step prompt template.
  2. Iterative refinement: The LLM evaluates its own outputs and refines the prompt through feedback loops.
  3. Task decomposition: Complex tasks are broken into subtasks with specialized instructions, and outputs are synthesized.

For example, a user might ask an LLM to “create an optimized prompt for JSON API processing.” The LLM generates a refined prompt with error handling, validation, and logging steps, which is then used for the actual task.
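The two-stage flow above can be sketched as follows. This is a minimal illustration, not a specific library's API: `call_llm` is a hypothetical stand-in for any chat-completion client, stubbed with canned responses so the control flow runs offline.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    if "step-by-step prompt" in prompt:
        # Canned "generated prompt" for the meta-prompting request.
        return ("You are a careful JSON-processing assistant. "
                "1) Validate the input against the schema. "
                "2) Handle malformed fields with explicit errors. "
                "3) Log each transformation step.")
    # Canned task output for any other request.
    return '{"status": "ok", "records_processed": 3}'

def meta_prompt(task_description: str) -> str:
    """Stage 1: ask the model to write the prompt for the task."""
    request = ("Write a detailed, step-by-step prompt that an LLM could "
               f"follow to perform this task reliably: {task_description}")
    return call_llm(request)

def run_task(task_description: str, task_input: str) -> str:
    """Stage 2: use the generated prompt on the actual input."""
    generated_prompt = meta_prompt(task_description)
    return call_llm(f"{generated_prompt}\n\nInput:\n{task_input}")

result = run_task("JSON API processing", '[{"id": 1}, {"id": 2}, {"id": 3}]')
print(result)
```

The key design point is that the prompt used in stage 2 is model-generated in stage 1, not hand-written by the user.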

Key Frameworks

Several meta prompting frameworks have been developed:

Recursive Meta Prompting

The LLM generates its own step-by-step meta-prompt in a first pass, then solves the task using that prompt in a second pass. This adapts well to zero-shot and few-shot settings but depends heavily on the quality of the initial model.
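The two-pass structure can be sketched like this; `llm` is a hypothetical client stubbed with canned responses so the example runs offline.

```python
def llm(prompt: str) -> str:
    # Placeholder for a real LLM call, with canned responses.
    if prompt.startswith("Draft a step-by-step prompt"):
        return ("Think step by step. First restate the problem, "
                "then list knowns, then derive the answer.")
    return "answer: 42"

def recursive_meta_prompt(task: str) -> str:
    # Pass 1: the model writes the prompt it will follow.
    self_prompt = llm(f"Draft a step-by-step prompt for solving: {task}")
    # Pass 2: the model solves the task conditioned on its own prompt.
    return llm(f"{self_prompt}\n\nTask: {task}")

print(recursive_meta_prompt("What is 6 * 7?"))
```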

Conductor-Model Meta Prompting

A central “conductor” LLM decomposes tasks and assigns specialized meta-prompts to different expert LLMs (e.g., coder, verifier, mathematician). This enables multi-agent collaboration for complex workflows.2)
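The conductor pattern can be sketched as below. The role names, meta-prompts, and the fixed two-subtask decomposition are illustrative assumptions; a real conductor would itself be an LLM that chooses the decomposition, and each `expert` call would invoke a separate model.

```python
# Assumed per-role meta-prompts; a real system would tune these per expert.
EXPERT_PROMPTS = {
    "coder": "You are an expert programmer. Write clean, tested code.",
    "verifier": "You are a strict reviewer. Check the work for errors.",
}

def expert(role: str, subtask: str) -> str:
    # Placeholder for a per-role LLM call conditioned on that role's meta-prompt.
    return f"[{role}] {EXPERT_PROMPTS[role].split('.')[0]}: handled '{subtask}'"

def conductor(task: str) -> str:
    # Decompose the task, route subtasks to experts, then synthesize.
    subtasks = [("coder", f"implement {task}"), ("verifier", f"review {task}")]
    outputs = [expert(role, sub) for role, sub in subtasks]
    return "\n".join(outputs)

print(conductor("a date parser"))
```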

Meta-Expert

The LLM simulates multiple expert roles via a meta-prompt for multi-perspective problem solving. The model engages in iterative dialogue from different viewpoints before synthesizing a final answer.
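A rough sketch of the expert-simulation loop follows; the viewpoint list and synthesis step are assumptions, and `simulate` stands in for one LLM call per expert persona.

```python
def simulate(viewpoint: str, question: str) -> str:
    # Placeholder: a real system would issue one LLM call per viewpoint,
    # each prompted to answer in that expert's voice.
    return f"As a {viewpoint}, I note one consideration about: {question}"

def meta_expert(question: str, viewpoints: list[str]) -> str:
    # Gather each expert's perspective, then merge them in a synthesis step.
    perspectives = [simulate(v, question) for v in viewpoints]
    header = f"Synthesis of {len(perspectives)} perspectives:"
    return header + "\n" + "\n".join(perspectives)

print(meta_expert("Is this API design safe?",
                  ["security engineer", "API designer"]))
```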

Instruction Enhancement

Basic user requests are enhanced into detailed, structured instructions. The meta-prompt maps reasoning steps and includes self-checking mechanisms for safety and accuracy.
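One way to sketch this enhancement is as a template that wraps the raw request in explicit reasoning and self-check steps. The template below is illustrative, not a prescribed format; in practice the expansion would itself be produced by an LLM.

```python
def enhance(request: str) -> str:
    # Expand a terse request into a structured instruction with
    # reasoning steps and a self-checking section.
    return "\n".join([
        f"Task: {request}",
        "Steps:",
        "  1. Restate the request in your own words.",
        "  2. List the sub-steps needed to complete it.",
        "  3. Execute each sub-step, showing your reasoning.",
        "Self-check:",
        "  - Verify the output satisfies every stated requirement.",
        "  - Flag any unsafe or ambiguous instructions instead of guessing.",
    ])

print(enhance("summarize this log file"))
```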

The Zhang et al. Paper

The paper “On Meta-Prompting” by Zhang et al. (2023) formalizes meta prompting as a technique where LLMs condition outputs via in-context learning without backpropagation.3) Key contributions:

Practical Applications

Limitations

See Also

References

1), 3) Zhang et al. 2023, On Meta-Prompting
2) Developed through Stanford-OpenAI collaboration