LLM+P is a framework that combines the natural language understanding capabilities of large language models with the formal guarantees of classical AI planners. Introduced by Liu et al. (2023), the approach uses an LLM to translate natural language problem descriptions into the Planning Domain Definition Language (PDDL), which is then solved by an established planner such as Fast Downward. This hybrid architecture leverages the strengths of both paradigms: LLMs handle ambiguous natural language input while classical planners provide optimal and correct solutions for well-defined planning problems.
The core LLM+P pipeline operates in three stages:
1. The LLM translates the natural language problem description into a PDDL problem file, given a PDDL description of the planning domain.
2. A classical planner such as Fast Downward searches for a plan that solves the generated PDDL problem.
3. The LLM translates the resulting symbolic plan back into natural language.
This separation of concerns ensures that the LLM handles what it excels at (language understanding, common sense, disambiguation) while the planner handles what it excels at (combinatorial search with correctness guarantees).
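For concreteness, here is a minimal sketch of that pipeline. The helper name `llm_complete` is a hypothetical placeholder for whatever LLM API is in use, and the sketch assumes Fast Downward is installed with its standard `fast-downward.py` driver script, which writes the plan it finds to a `sas_plan` file; real LLM+P implementations differ in prompting and planner configuration.

```python
"""Minimal LLM+P pipeline sketch (hypothetical helper names, not the reference code)."""
import subprocess
from pathlib import Path


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to your LLM provider of choice."""
    raise NotImplementedError


def llm_plus_p(nl_problem: str, domain_file: str = "domain.pddl") -> str:
    # Stage 1: the LLM translates the natural language task into a PDDL problem file,
    # conditioned on an existing PDDL domain description.
    domain_pddl = Path(domain_file).read_text()
    problem_pddl = llm_complete(
        "Translate this task into a PDDL problem file for the given domain.\n"
        f"Domain:\n{domain_pddl}\n\nTask:\n{nl_problem}\n"
        "Return only valid PDDL."
    )
    Path("problem.pddl").write_text(problem_pddl)

    # Stage 2: a classical planner (here Fast Downward with an A*/LM-Cut search)
    # solves the formal problem and writes the plan to `sas_plan`.
    subprocess.run(
        ["fast-downward.py", domain_file, "problem.pddl",
         "--search", "astar(lmcut())"],
        check=True,
    )
    plan = Path("sas_plan").read_text()

    # Stage 3: the LLM translates the symbolic plan back into natural language.
    return llm_complete(f"Explain this plan as step-by-step instructions:\n{plan}")
```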
A 2025 survey by Tantakoun, Muise, and Zhu (ACL Findings 2025) reframes LLMs not as planners themselves but as planning formalizers: rather than searching for plans directly, the LLM constructs and iteratively refines formal PDDL models, which are then handed to a classical planner to solve.
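The generate-validate-refine loop implied by this "formalizer" view can be sketched roughly as follows. Both `llm_complete` and `try_plan` are hypothetical placeholders (an LLM call and a planner/validator check, e.g. via Fast Downward or VAL), not the survey's actual tooling.

```python
"""Rough sketch of iterative PDDL model refinement (placeholder helpers)."""
from typing import Tuple


def llm_complete(prompt: str) -> str:
    """Placeholder for an LLM API call, as in the pipeline sketch above."""
    raise NotImplementedError


def try_plan(problem_pddl: str) -> Tuple[bool, str]:
    """Placeholder: run a planner or validator on the model and report errors."""
    raise NotImplementedError


def refine_pddl(nl_task: str, max_rounds: int = 3) -> str:
    """Generate a PDDL model with the LLM, then iteratively repair it."""
    pddl = llm_complete(f"Write a PDDL problem for this task:\n{nl_task}")
    for _ in range(max_rounds):
        solved, feedback = try_plan(pddl)  # e.g., parse errors or an unreachable goal
        if solved:
            return pddl                    # the planner accepted the model
        pddl = llm_complete(
            "The planner rejected this PDDL model. Return a corrected version.\n"
            f"Planner feedback:\n{feedback}\n\nCurrent model:\n{pddl}"
        )
    return pddl  # best effort once the refinement budget is exhausted
```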
Common planners used in LLM+P architectures include Fast Downward and the satisficing LAMA planner, which is distributed as a Fast Downward configuration; both supply the systematic search and correctness guarantees that the LLM itself lacks.
The 2025 International Planning Competition evaluation tested frontier LLMs (DeepSeek R1, Gemini 2.5 Pro, GPT-5) directly against LAMA on standard IPC domains. While GPT-5 was competitive on standard domains, all LLMs degraded significantly on obfuscated variants where semantic cues were removed, confirming that pure LLM planning relies heavily on pattern matching rather than formal reasoning.
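To illustrate what removing semantic cues means in practice, the hypothetical snippet below renames every non-keyword symbol in a PDDL file to an opaque token, so that names like `stack` or `block-a` no longer hint at the solution; the actual IPC obfuscation tooling is not shown here and may work differently.

```python
"""Illustrative obfuscation of PDDL symbol names (not the IPC tooling)."""
import re


def obfuscate(pddl: str) -> str:
    # PDDL keywords and syntax stay intact; every other identifier gets a
    # meaningless but consistent replacement, stripping semantic cues.
    keywords = {"define", "problem", "domain", "objects", "init", "goal", "and", "not"}
    mapping: dict[str, str] = {}

    def rename(match: re.Match) -> str:
        name = match.group(0)
        if name.lower() in keywords:
            return name
        if name not in mapping:
            mapping[name] = f"sym{len(mapping)}"
        return mapping[name]

    # Identifiers are runs of letters, digits, hyphens, and underscores.
    return re.sub(r"[A-Za-z][A-Za-z0-9_-]*", rename, pddl)
```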
The LLM+P paradigm has inspired several extensions, and active research directions include automated PDDL domain learning, end-to-end differentiable planning, and integration with reinforcement learning for problems that resist pure symbolic formulation.