Analogical Prompting is a prompt engineering technique introduced by Yasunaga et al. (2023) from Google DeepMind and Stanford University that instructs large language models to self-generate relevant examples through analogical reasoning before solving a target problem. Inspired by how humans recall past experiences when facing new challenges, this method eliminates the need for manually labeled few-shot exemplars while adapting generated demonstrations to each specific problem.
Existing chain-of-thought (CoT) prompting methods face a trade-off: zero-shot CoT needs no labeled exemplars but offers only generic guidance ("let's think step by step"), while few-shot CoT provides tailored demonstrations at the cost of manually labeling exemplars for every task.
Analogical Prompting achieves the best of both worlds: automatically generated, problem-specific exemplars with no manual labeling required.
The approach follows three steps within a single LLM call:
1. **Present the problem.** Give the target problem to the LLM.
2. **Generate exemplars.** Instruct the model to recall or generate 3-5 relevant problems (with solutions) that are structurally similar to the target. The prompt explicitly asks for distinct and relevant examples.
3. **Solve.** The LLM uses its self-generated exemplars as context to solve the original problem.
```python
# Analogical Prompting implementation
def analogical_prompt(problem, llm, n_exemplars=3):
    prompt = (
        f"Your task is to solve the following problem.\n\n"
        f"Problem: {problem}\n\n"
        f"Before solving, recall {n_exemplars} relevant and distinct problems "
        f"you have encountered before. For each:\n"
        f"1. State the problem\n"
        f"2. Explain the solution step by step\n"
        f"3. Identify the key principle or technique used\n\n"
        f"After generating these exemplars, solve the original problem using "
        f"insights from the analogies above.\n"
    )
    response = llm.generate(prompt)
    return response


# Self-Generated Knowledge + Exemplars variant (for code generation)
def analogical_prompt_with_knowledge(problem, llm, n_exemplars=3):
    prompt = (
        f"Your task is to solve the following problem.\n\n"
        f"Problem: {problem}\n\n"
        f"First, identify the core concepts and techniques relevant to this problem.\n"
        f"Provide a brief tutorial or key takeaways for each concept.\n\n"
        f"Then, recall {n_exemplars} relevant and distinct problems. For each:\n"
        f"1. State the problem\n"
        f"2. Explain the solution step by step\n\n"
        f"Finally, solve the original problem using the knowledge and exemplars above.\n"
    )
    response = llm.generate(prompt)
    return response
```
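A brief usage sketch with a stub backend. The `StubLLM` class is a placeholder, not part of the paper: any client exposing a `generate(prompt) -> str` method fits the interface the functions above assume, and the prompt builder is repeated in condensed form so the snippet runs on its own.

```python
class StubLLM:
    """Placeholder for a real LLM client; swap in any wrapper that
    exposes generate(prompt: str) -> str."""

    def generate(self, prompt: str) -> str:
        # A real model would return the self-generated exemplars
        # followed by a step-by-step solution to the target problem.
        return f"[model response to a {len(prompt)}-character prompt]"


def build_prompt(problem: str, n_exemplars: int = 3) -> str:
    # Condensed version of the analogical prompting template,
    # inlined here so this snippet is self-contained.
    return (
        f"Your task is to solve the following problem.\n\n"
        f"Problem: {problem}\n\n"
        f"Before solving, recall {n_exemplars} relevant and distinct "
        f"problems you have encountered before, with solutions.\n"
        f"Then solve the original problem using insights from the analogies."
    )


llm = StubLLM()
response = llm.generate(build_prompt("Find the GCD of 48 and 180."))
print(response)
```

Because everything happens in a single call, no retrieval infrastructure or exemplar database is needed; swapping `StubLLM` for a production client is the only integration step.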
The paper introduces two complementary approaches:
**Self-generated exemplars.** The model generates relevant exemplar problems and their solutions. This works well for mathematical reasoning and general problem-solving tasks.
**Self-generated knowledge + exemplars.** For complex tasks such as code generation, the model may over-rely on low-level patterns in the exemplars. This variant adds an instruction to first identify core concepts and provide high-level tutorials before generating exemplars, which mitigates overfitting to surface-level similarities.
| Method | Exemplars | Adaptability | Manual Effort |
|---|---|---|---|
| Zero-Shot CoT | None (generic instruction) | Low | None |
| Few-Shot CoT | Fixed, manually labeled | Low (same for all problems) | High |
| Retrieval-Augmented CoT | Retrieved from database | Medium | Medium (requires database) |
| Analogical Prompting | Self-generated per problem | High | None |
The key advantage is adaptability: generated exemplars are tailored to each problem's specific structure, providing more relevant guidance than any fixed set of demonstrations.
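This contrast can be made concrete with two toy prompt builders (both hypothetical, for illustration only): the few-shot builder prepends the same hand-written exemplar to every query, while the analogical builder defers exemplar creation to the model itself.

```python
# Hypothetical hand-labeled demonstration, reused verbatim for every
# target problem regardless of its structure (few-shot CoT).
FIXED_EXEMPLAR = (
    "Q: Tom has 3 apples and buys 2 more. How many apples now?\n"
    "A: He starts with 3 and adds 2, so 3 + 2 = 5.\n\n"
)


def few_shot_cot_prompt(problem: str) -> str:
    # One static exemplar set for all problems: low adaptability.
    return f"{FIXED_EXEMPLAR}Q: {problem}\nA: Let's think step by step."


def analogical_cot_prompt(problem: str) -> str:
    # No hand-written exemplars: the model is asked to generate
    # demonstrations tailored to this specific problem.
    return (
        f"Problem: {problem}\n\n"
        "Recall 3 relevant and distinct problems with solutions, "
        "then solve the problem above step by step."
    )


# The few-shot prompt carries the same arithmetic exemplar even for a
# geometry question; the analogical prompt adapts by construction.
print(few_shot_cot_prompt("What is the area of a circle with radius 2?"))
```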
The approach draws on analogical reasoning from cognitive psychology (Vosniadou & Ortony, 1989), in which people solve novel problems by recalling structurally similar past problems and transferring their solutions.
Evaluated with GPT-3.5-turbo and GPT-4 across diverse reasoning benchmarks:
| Benchmark | Task Type | Improvement over 0-shot CoT |
|---|---|---|
| GSM8K | Math reasoning | Significant |
| MATH | Advanced math | Significant |
| Codeforces | Code generation | Significant (with Knowledge variant) |
| BIG-Bench | Diverse reasoning | Average +5% accuracy |
Key findings:
A minimal template for analogical prompting:
```
[Insert problem here]

Instruction: Before solving, recall 3 relevant and distinct problems as exemplars. For each, describe the problem and solution. Then solve the initial problem step by step.
```
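The template can be filled programmatically; this minimal formatter (the name `render_template` is illustrative, not from the paper) substitutes the target problem into the template above:

```python
TEMPLATE = (
    "{problem}\n\n"
    "Instruction: Before solving, recall 3 relevant and distinct problems "
    "as exemplars. For each, describe the problem and solution. "
    "Then solve the initial problem step by step."
)


def render_template(problem: str) -> str:
    # Substitute the target problem into the minimal template.
    return TEMPLATE.format(problem=problem)


prompt = render_template("Compute 17 * 24 without a calculator.")
print(prompt)
```

The resulting string is sent as a single user message; no system prompt or multi-turn scaffolding is required.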