Analogical Prompting

Analogical Prompting is a prompt engineering technique introduced by Yasunaga et al. (2023) from Google DeepMind and Stanford University that instructs large language models to self-generate relevant examples through analogical reasoning before solving a target problem. Inspired by how humans recall past experiences when facing new challenges, this method eliminates the need for manually labeled few-shot exemplars while adapting generated demonstrations to each specific problem.

Motivation

Existing chain-of-thought (CoT) prompting methods face a trade-off:

- Zero-shot CoT appends a generic instruction (e.g., "think step by step"), requiring no labeled examples but offering no problem-specific guidance.
- Few-shot CoT supplies labeled exemplars that guide reasoning more concretely, but producing them requires manual effort, and the same fixed set is reused for every problem.

Analogical Prompting achieves the best of both worlds: automatically generated, problem-specific exemplars with no manual labeling required.

Method

The approach follows three steps within a single LLM call:

Step 1: Problem Statement

Present the target problem to the LLM.

Step 2: Self-Generate Exemplars

Instruct the model to recall or generate 3-5 relevant problems (with solutions) that are structurally similar to the target. The prompt explicitly asks for distinct and relevant examples.

Step 3: Solve the Original Problem

The LLM uses its self-generated exemplars as context to solve the original problem.

```python
# Analogical Prompting implementation

def analogical_prompt(problem, llm, n_exemplars=3):
    """Build a single prompt that asks the model to self-generate
    exemplars before solving the target problem."""
    prompt = (
        f"Your task is to solve the following problem.\n\n"
        f"Problem: {problem}\n\n"
        f"Before solving, recall {n_exemplars} relevant and distinct problems "
        f"you have encountered before. For each:\n"
        f"1. State the problem\n"
        f"2. Explain the solution step by step\n"
        f"3. Identify the key principle or technique used\n\n"
        f"After generating these exemplars, solve the original problem using "
        f"insights from the analogies above.\n"
    )
    response = llm.generate(prompt)
    return response


# Self-Generated Knowledge + Exemplars variant (for code generation)
def analogical_prompt_with_knowledge(problem, llm, n_exemplars=3):
    """Variant that first elicits high-level concepts/tutorials,
    then exemplars, then the solution."""
    prompt = (
        f"Your task is to solve the following problem.\n\n"
        f"Problem: {problem}\n\n"
        f"First, identify the core concepts and techniques relevant to this problem.\n"
        f"Provide a brief tutorial or key takeaways for each concept.\n\n"
        f"Then, recall {n_exemplars} relevant and distinct problems. For each:\n"
        f"1. State the problem\n"
        f"2. Explain the solution step by step\n\n"
        f"Finally, solve the original problem using the knowledge and exemplars above.\n"
    )
    response = llm.generate(prompt)
    return response
```
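Because exemplars and the final answer arrive in one completion, downstream code often needs to separate them. A minimal post-processing sketch, assuming the model is additionally instructed to prefix its final solution with a marker such as `Final solution:` (the marker is an assumption, not part of the prompts above):

```python
# Hypothetical post-processing helper; the "Final solution:" marker is an
# assumption -- the model must be explicitly instructed to emit it.
def split_exemplars_and_answer(response, marker="Final solution:"):
    """Split a response into (exemplar section, final answer)."""
    head, sep, tail = response.partition(marker)
    if not sep:
        # Marker missing: treat the whole response as the answer.
        return "", response.strip()
    return head.strip(), tail.strip()

demo_response = (
    "Problem 1: ...\nSolution: ...\n"
    "Problem 2: ...\nSolution: ...\n"
    "Final solution: The answer is 42."
)
exemplars, answer = split_exemplars_and_answer(demo_response)
```

Falling back to the whole response when the marker is absent keeps the helper robust to models that ignore formatting instructions.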

Two Variants

The paper introduces two complementary approaches:

Basic Analogical Prompting

Generates relevant exemplar problems and solutions. Works well for mathematical reasoning and general problem-solving tasks.

Self-Generated Knowledge + Exemplars

For complex tasks like code generation, the model may over-rely on low-level exemplar patterns. This variant adds an instruction to first identify core concepts and provide high-level tutorials before generating exemplars. This mitigates overfitting to surface-level similarities.

How It Differs from Other Methods

| Method | Exemplars | Adaptability | Manual Effort |
|---|---|---|---|
| Zero-Shot CoT | None (generic instruction) | Low | None |
| Few-Shot CoT | Fixed, manually labeled | Low (same for all problems) | High |
| Retrieval-Augmented CoT | Retrieved from database | Medium | Medium (requires database) |
| Analogical Prompting | Self-generated per problem | High | None |

The key advantage is adaptability: generated exemplars are tailored to each problem's specific structure, providing more relevant guidance than any fixed set of demonstrations.

Cognitive Science Foundation

The approach draws from analogical reasoning in cognitive psychology (Vosniadou & Ortony, 1989): when people face a novel problem, they recall structurally similar problems they have solved before and transfer the solution strategy to the new situation.

Results

Evaluated with GPT-3.5-turbo and GPT-4 across diverse reasoning benchmarks:

| Benchmark | Task Type | Improvement over 0-shot CoT |
|---|---|---|
| GSM8K | Math reasoning | Significant |
| MATH | Advanced math | Significant |
| Codeforces | Code generation | Significant (with Knowledge variant) |
| BIG-Bench | Diverse reasoning | Average +5% accuracy |

Across these benchmarks, analogical prompting consistently outperformed zero-shot CoT, with the Self-Generated Knowledge variant accounting for the gains on code generation.

Prompt Template

A minimal template for analogical prompting:

```
[Insert problem here]

Instruction: Before solving, recall 3 relevant and distinct problems
as exemplars. For each, describe the problem and solution. Then solve
the initial problem step by step.
```
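The template can be filled programmatically before being sent to a model. A minimal sketch (the example problem and variable names are illustrative, not from the paper):

```python
# Minimal template-filling sketch; the sample problem is hypothetical.
TEMPLATE = (
    "{problem}\n"
    "Instruction: Before solving, recall {n} relevant and distinct problems\n"
    "as exemplars. For each, describe the problem and solution. Then solve\n"
    "the initial problem step by step.\n"
)

prompt = TEMPLATE.format(
    problem="Problem: What is the area of a square with side length 5?",
    n=3,
)
```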

```mermaid
graph LR
    A[Target Problem] --> B[LLM Self-Generates Exemplars]
    B --> C[Exemplar 1: Problem + Solution]
    B --> D[Exemplar 2: Problem + Solution]
    B --> E[Exemplar 3: Problem + Solution]
    C --> F[Solve Original Problem]
    D --> F
    E --> F
    F --> G[Final Answer]
```

References

- Yasunaga, M., Chen, X., Li, Y., Pasupat, P., Leskovec, J., Liang, P., Chi, E. H., & Zhou, D. (2023). Large Language Models as Analogical Reasoners. arXiv:2310.01714.
- Vosniadou, S., & Ortony, A. (Eds.). (1989). Similarity and Analogical Reasoning. Cambridge University Press.