Core Concepts
Reasoning
Memory & Retrieval
Agent Types
Design Patterns
Training & Alignment
Frameworks
Tools
Safety & Security
Evaluation
Meta
Zero-shot prompting is a fundamental prompt engineering technique where a large language model (LLM) is asked to perform a task using only a natural-language instruction, without any task-specific examples provided in the prompt. The model relies entirely on knowledge acquired during pretraining to understand and execute the task.1)
In standard zero-shot prompting, the user provides a task description and input directly to the model. The prompt takes the form:
[Task instruction] [Input] [Output indicator]
For example, a sentiment classification task might be prompted as:
Classify the following text as positive or negative. Text: "The movie was absolutely wonderful." Sentiment:
The model generates its response based solely on patterns learned during pretraining, without any demonstrations of the expected input-output mapping.
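The `[Task instruction] [Input] [Output indicator]` template above can be sketched as a small prompt-builder. This is an illustrative helper, not part of any library; the function name and format are assumptions:

```python
def zero_shot_prompt(instruction: str, text: str, output_indicator: str) -> str:
    """Assemble a zero-shot prompt: [Task instruction] [Input] [Output indicator]."""
    return f'{instruction} Text: "{text}" {output_indicator}'

# Reproduces the sentiment-classification example from the text.
prompt = zero_shot_prompt(
    "Classify the following text as positive or negative.",
    "The movie was absolutely wonderful.",
    "Sentiment:",
)
```

The resulting string would be sent to the model as-is; no demonstrations are included.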
A landmark advancement in zero-shot prompting was introduced by Kojima et al. (2022) in their paper “Large Language Models are Zero-Shot Reasoners.”2) The authors discovered that simply appending the phrase “Let's think step by step” to a prompt triggers multi-step reasoning in LLMs without any demonstrations.
This technique, called Zero-Shot Chain-of-Thought (Zero-Shot-CoT), operates in two stages:

1. **Reasoning extraction** – the trigger phrase "Let's think step by step" is appended to the question, and the model generates a free-form reasoning chain.
2. **Answer extraction** – the question, trigger phrase, and generated reasoning are concatenated with an answer-extraction prompt (e.g., "Therefore, the answer is"), and the model produces the final answer.
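The two-stage pipeline can be sketched as a pair of prompt builders; Stage 1 elicits the reasoning chain, and Stage 2 feeds that chain back to extract the final answer. The function names are assumptions for illustration, and the exact answer-extraction phrasing varies by task in the original paper:

```python
def stage1_reasoning_prompt(question: str) -> str:
    # Stage 1: append the trigger phrase to elicit step-by-step reasoning.
    return f"Q: {question}\nA: Let's think step by step."

def stage2_answer_prompt(question: str, reasoning: str) -> str:
    # Stage 2: concatenate the question, trigger phrase, and generated
    # reasoning, then ask for the final answer.
    return (
        f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
        "Therefore, the answer is"
    )

p1 = stage1_reasoning_prompt("A juggler has 16 balls. Half are golf balls. How many golf balls?")
p2 = stage2_answer_prompt(
    "A juggler has 16 balls. Half are golf balls. How many golf balls?",
    "There are 16 balls in total. Half of 16 is 8.",
)
```

Each stage is a separate model call: the reasoning text passed to Stage 2 is whatever the model generated in response to the Stage 1 prompt.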
Zero-Shot-CoT produced dramatic improvements over standard zero-shot prompting on reasoning benchmarks:3)
| Task | Zero-Shot | Zero-Shot-CoT | Gain |
|---|---|---|---|
| MultiArith | 17.7% | 78.7% | +61.0% |
| GSM8K | 10.4% | 40.7% | +30.3% |
| AQUA-RAT | — | Substantial | — |
| SVAMP | — | Substantial | — |
The approach also outperformed standard few-shot prompting (without CoT) on GSM8K, improving from 17.9% to 58.1%.
Zero-shot prompting is most appropriate for rapid prototyping and for simple, well-defined tasks where curating demonstrations is impractical. The table below summarizes the trade-offs against few-shot prompting:
| Aspect | Zero-Shot | Few-Shot |
|---|---|---|
| Examples needed | None | 1-5+ demonstrations |
| Setup effort | Minimal | Requires example curation |
| Flexibility | Task-agnostic, single template | Task-specific, needs per-task examples |
| Performance | Good baseline, strong with CoT | Generally higher on complex tasks |
| Best for | Rapid prototyping, simple tasks | Production systems, complex reasoning |
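The "examples needed" row is the key structural difference. As an illustrative sketch (the helper names are assumptions, not a library API), a few-shot prompt simply prepends input-output demonstrations to the same template a zero-shot prompt uses:

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]],
                    text: str, output_indicator: str) -> str:
    """Prepend input-output demonstrations before the actual query."""
    demos = "\n".join(
        f'Text: "{x}" {output_indicator} {y}' for x, y in examples
    )
    return f'{instruction}\n{demos}\nText: "{text}" {output_indicator}'

fs = few_shot_prompt(
    "Classify the following text as positive or negative.",
    [("Great film!", "positive"), ("Terrible plot.", "negative")],
    "The movie was absolutely wonderful.",
    "Sentiment:",
)
```

With an empty `examples` list (apart from the leftover blank line), this degenerates to the zero-shot form, which is why zero-shot is the lower-effort starting point.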