The Cognitive Architectures for Language Agents (CoALA) framework, proposed by Sumers et al. (2023), provides a systematic taxonomy for organizing LLM-based language agents into modular components inspired by cognitive science. Drawing on decades of research in cognitive architectures such as Soar and ACT-R, CoALA formalizes the design space of language agents through memory modules, structured action spaces, and decision-making procedures.
As language model-based agents proliferate — from ReAct to Reflexion to Voyager — the field lacks a unifying framework to compare, categorize, and design them. CoALA addresses this by proposing a modular architecture that retrospectively organizes existing agents and prospectively identifies gaps in the design space. The framework defines an agent as a tuple:
$$A = (M_w, M_{lt}, \mathcal{A}_i, \mathcal{A}_e, D)$$
where $M_w$ is working memory, $M_{lt}$ is long-term memory, $\mathcal{A}_i$ is the internal action space, $\mathcal{A}_e$ is the external action space, and $D$ is the decision procedure.
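Concretely, the tuple can be sketched as a lightweight container. The field names below simply mirror the formal definition; this is an illustrative sketch, not an official CoALA implementation:

```python
from dataclasses import dataclass
from typing import Any, Callable

# Illustrative sketch of the CoALA agent tuple A = (M_w, M_lt, A_i, A_e, D).
# Field names mirror the formal definition; this is not an official API.
@dataclass
class LanguageAgent:
    working_memory: list[Any]            # M_w: transient state for the current cycle
    long_term_memory: dict[str, Any]     # M_lt: episodic, semantic, procedural stores
    internal_actions: list[str]          # A_i: e.g. reasoning, retrieval, learning
    external_actions: list[str]          # A_e: grounding actions on the environment
    decision_procedure: Callable[..., Any]  # D: selects the next action each cycle
```

Packaging the components this way makes the design space explicit: two agents can be compared by asking which memories, actions, and decision procedure each one instantiates.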
CoALA divides agent memory into working memory and three types of long-term memory, mirroring distinctions from cognitive psychology:

- **Episodic memory** stores the agent's past experiences, such as trajectories of observations and actions.
- **Semantic memory** stores knowledge about the world and about the agent itself.
- **Procedural memory** stores skills and procedures, including the agent's own code and the LLM weights.
Actions are partitioned into internal and external categories:

- **Internal actions** operate on memory: retrieval reads from long-term memory into working memory, reasoning updates working memory with LLM outputs, and learning writes new information back to long-term memory.
- **External actions** ground the agent in its environment, such as dialogue with humans, control of embodied systems, or calls to digital tools and APIs.
```python
# Simplified CoALA agent loop
class CoALAAgent:
    def __init__(self, llm, episodic_mem, semantic_mem, procedural_mem):
        self.llm = llm
        self.working_memory = []
        self.episodic = episodic_mem
        self.semantic = semantic_mem
        self.procedural = procedural_mem

    def decision_loop(self, observation):
        self.working_memory.append(observation)
        while not self.should_act_externally():
            # Internal actions: retrieve, reason, learn
            retrieved = self.retrieve(self.working_memory)
            reasoning = self.llm.reason(self.working_memory + retrieved)
            self.working_memory.append(reasoning)
        action = self.select_external_action(self.working_memory)
        result = self.execute(action)
        self.episodic.store(observation, action, result)
        return result
```
CoALA formalizes decision-making as a continuous loop with two stages:

- **Planning**: the agent uses internal actions to propose, evaluate, and select candidate actions.
- **Execution**: the selected action is carried out, its results enter working memory, and the cycle repeats.
This places agents on a spectrum from purely reactive (a single LLM call maps observation to action) to deliberative (multiple steps of internal planning before acting).
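The two ends of that spectrum can be contrasted in a short sketch; `llm` here is a stand-in for any prompt-to-text callable, and the prompt formats are invented for illustration:

```python
def reactive_agent(llm, observation: str) -> str:
    # Purely reactive: a single LLM call maps observation directly to action.
    return llm(f"Observation: {observation}\nAction:")

def deliberative_agent(llm, observation: str, steps: int = 3) -> str:
    # Deliberative: several internal reasoning steps enrich working memory
    # before any external action is committed.
    working_memory = [observation]
    for _ in range(steps):
        thought = llm("Reflect on: " + " | ".join(working_memory))
        working_memory.append(thought)
    return llm("Given reasoning: " + " | ".join(working_memory) + "\nAction:")
```

With a real model behind `llm`, the deliberative variant trades extra calls (latency and cost) for the chance to revise intermediate conclusions before acting.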
CoALA explicitly builds on classical cognitive architectures such as Soar and ACT-R, inheriting their modular memory systems, structured action spaces, and cyclic decision procedures.
The framework positions LLM agents within a 50-year lineage of AI research, arguing that cognitive architectures provide the missing organizational structure for the rapidly expanding space of language agents.