Few-shot prompting is a prompt engineering technique where a small number of input-output examples are included in the prompt to demonstrate the desired task behavior. The model performs in-context learning, identifying patterns from the provided demonstrations and applying them to new inputs without any parameter updates or fine-tuning.1)
Few-shot prompting leverages the in-context learning capability of large language models. The process involves three steps:

1. Demonstration: a small number of input-output pairs are placed directly in the prompt.
2. Pattern inference: the model identifies the task format and the mapping from inputs to outputs.
3. Completion: the model applies the inferred pattern to the new input appended at the end of the prompt.

A typical few-shot prompt structure:
```
Input: "The food was amazing" -> Sentiment: Positive
Input: "Terrible service" -> Sentiment: Negative
Input: "Pretty good overall" -> Sentiment:
```
The model infers the pattern and completes the final example accordingly.
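Assembling such a prompt programmatically is straightforward. The sketch below mirrors the `Input:/Sentiment:` format from the example above; the helper name and structure are illustrative and not tied to any particular library or API.

```python
# Build a few-shot prompt from demonstration pairs plus a new query.
# The format mirrors the sentiment example above; names are illustrative.

def build_few_shot_prompt(examples, query):
    """examples: list of (input_text, label) pairs; query: new input to classify."""
    lines = [f'Input: "{text}" -> Sentiment: {label}' for text, label in examples]
    # The final line is left incomplete so the model fills in the label.
    lines.append(f'Input: "{query}" -> Sentiment:')
    return "\n".join(lines)

demos = [
    ("The food was amazing", "Positive"),
    ("Terrible service", "Negative"),
]
prompt = build_few_shot_prompt(demos, "Pretty good overall")
print(prompt)
```

The resulting string reproduces the three-line prompt shown above and would be sent to the model as-is; no parameters are updated.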
Few-shot prompting was formalized and extensively studied in the landmark GPT-3 paper by Brown et al. (2020).2) Key findings include:

- Few-shot performance improves smoothly with model scale, and larger models make markedly better use of in-context demonstrations.
- Few-shot prompting consistently outperforms zero-shot and one-shot prompting on most benchmarks tested.
- On some tasks, few-shot GPT-3 approaches the performance of fine-tuned models without any gradient updates.
The choice and arrangement of examples significantly impact performance: which demonstrations are selected, the order in which they appear, and the balance of labels among them can all shift accuracy by large margins.
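One common response to this sensitivity is to select, for each query, the demonstrations most similar to it. The sketch below uses simple word-overlap (Jaccard) similarity as a stand-in for the embedding-based retrieval often used in practice; the function names and the example pool are hypothetical.

```python
# Illustrative sketch: choose the k demonstrations most similar to the
# query. Jaccard word overlap stands in for embedding-based retrieval.

def jaccard(a, b):
    """Word-overlap similarity between two strings, in [0, 1]."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def select_examples(pool, query, k=2):
    """pool: list of (input_text, label) pairs; returns the k most similar."""
    return sorted(pool, key=lambda ex: jaccard(ex[0], query), reverse=True)[:k]

pool = [
    ("The food was amazing", "Positive"),
    ("Terrible service", "Negative"),
    ("The service was quick and friendly", "Positive"),
]
chosen = select_examples(pool, "The service was slow", k=2)
# The two lexically closest demonstrations are retained for the prompt.
```

Retrieval-based selection like this lets a fixed pool of labeled examples serve many different queries, with each prompt tailored to the input at hand.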
| Aspect | Few-Shot Prompting | Fine-Tuning |
| --- | --- | --- |
| Data required | 1–5 examples | Hundreds to thousands |
| Training needed | None (inference only) | Gradient updates required |
| Cost | Minimal (API calls) | Significant (compute + data) |
| Flexibility | Change examples per task | Retrain for each task |
| Performance ceiling | Good, model-dependent | Generally higher |
| Deployment speed | Immediate | Hours to days |
| Model modification | None | Weights updated |