Core Concepts
Reasoning
Memory & Retrieval
Agent Types
Design Patterns
Training & Alignment
Frameworks
Tools
Safety & Security
Evaluation
Meta
Generate Knowledge Prompting is a two-step prompting technique that improves commonsense reasoning by first generating relevant knowledge statements from a language model, then using those statements to augment the question-answering prompt. The method treats LLMs as flexible sources of external knowledge without requiring task-specific supervision or structured knowledge bases.1)
The method operates through two distinct phases:
A language model generates question-related knowledge statements through few-shot prompting. The generation prompt consists of three components: an instruction, a small set of fixed demonstrations (question–knowledge pairs for the task), and a placeholder for the new question.
The model generates multiple knowledge statements (M=20 in the original paper) using nucleus sampling with p=0.5. Generation is terminated when output exceeds 64 tokens or encounters a newline character. Repetitions and empty strings are discarded.2)
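The generation phase above can be sketched as follows. This is a minimal illustration, not the paper's code: `build_generation_prompt` and `clean_statements` are hypothetical helper names, and the raw completions would in practice come from M=20 nucleus-sampled (p=0.5) LLM calls.

```python
def build_generation_prompt(instruction, demonstrations, question):
    """Assemble the few-shot prompt: an instruction, fixed demonstrations
    (question-knowledge pairs), and a slot for the new question."""
    demo_block = "\n\n".join(
        f"Input: {q}\nKnowledge: {k}" for q, k in demonstrations
    )
    return f"{instruction}\n\n{demo_block}\n\nInput: {question}\nKnowledge:"

def clean_statements(raw_statements, max_tokens=64):
    """Apply the filters described above: stop at the first newline,
    cap length at 64 (whitespace) tokens, and discard empty strings
    and repetitions."""
    seen, kept = set(), []
    for s in raw_statements:
        s = s.split("\n")[0].strip()          # terminate at a newline
        s = " ".join(s.split()[:max_tokens])  # cap at max_tokens tokens
        if s and s not in seen:               # drop empties and repeats
            seen.add(s)
            kept.append(s)
    return kept
```

With a real model, the prompt would be sampled M times and the completions passed through `clean_statements` to obtain the final knowledge set.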
Each generated knowledge statement is concatenated with the original question to create M knowledge-augmented questions:
q_0 = q (the original question)
q_1 = [k_1 || q]
q_2 = [k_2 || q]
...
q_M = [k_M || q]
The model makes predictions on each augmented question, and the highest-confidence prediction across all versions is selected as the final answer. This approach requires no joint fine-tuning for knowledge integration.
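The integration and selection step can be sketched as below. This is an illustrative assumption of how the aggregation works, not the authors' implementation; `answer_confidence` is a hypothetical stand-in for querying the task model for its most likely answer and that answer's probability.

```python
def predict_with_knowledge(question, knowledge_statements, answer_confidence):
    """Score the original question plus each knowledge-augmented variant
    and return the single highest-confidence prediction."""
    # q_0 is the plain question; q_1..q_M each prepend one knowledge statement
    variants = [question] + [f"{k} {question}" for k in knowledge_statements]
    best_answer, best_conf = None, float("-inf")
    for variant in variants:
        answer, conf = answer_confidence(variant)
        if conf > best_conf:
            best_answer, best_conf = answer, conf
    return best_answer
```

Because each augmented question is scored independently and only the most confident prediction survives, no joint fine-tuning is needed to integrate the knowledge.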
Generate Knowledge Prompting achieved state-of-the-art results on multiple commonsense reasoning benchmarks, including NumerSense, CommonsenseQA, CommonsenseQA 2.0, and QASC.3)
The method outperformed template-based knowledge generation methods like Self-Talk while performing comparably to retrieval-based systems. It improved performance across all four commonsense reasoning tasks tested.
Three factors were identified as critical to effectiveness: the quality of the generated knowledge, the quantity of knowledge (performance improves as more statements are generated), and the strategy used to integrate knowledge during inference.
Qualitative analysis revealed that generated knowledge statements cover diverse knowledge types and can transform commonsense QA into explicit reasoning procedures (such as deduction) that language models process more effectively.