====== Task Decomposition Strategies ======

ACONIC (Analysis of CONstraint-Induced Complexity) is a systematic framework for decomposing complex LLM tasks by modeling them as constraint satisfaction problems (CSPs). By reducing tasks to 3-SAT instances and using formal complexity measures like **treewidth** to guide decomposition, ACONIC achieves 10-40 percentage point improvements on combinatorial tasks compared to heuristic decomposition methods like Chain-of-Thought and Tree-of-Thoughts.

<code>
graph TD
    A[Complex Task] --> B[Reduce to 3-SAT CSP]
    B --> C[Build Constraint Graph]
    C --> D[Measure Treewidth]
    D --> E[Tree Decomposition into Bags]
    E --> F[Solve Bag 1]
    F --> G[Solve Bag 2]
    G --> H[Solve Bag N]
    H --> I[Combine into Global Solution]
    F -.->|Shared variables| G
    G -.->|Shared variables| H
</code>

===== Background =====

LLMs struggle with combinatorial and constraint-heavy tasks because they attempt to satisfy all constraints simultaneously. Existing decomposition strategies (Chain-of-Thought, Divide-and-Conquer, Tree-of-Thoughts) rely on heuristic splitting without formal guarantees that local solutions compose into globally consistent answers. ACONIC addresses this gap by grounding decomposition in complexity theory. The key insight is that a task's structural complexity, measured by the treewidth of its constraint graph, determines the optimal decomposition strategy.

===== Formal Framework =====

=== Task Reduction to CSP ===

Given a task specification $P = \langle F, A, I, G \rangle$ with fluents $F$, actions $A$, initial state $I$, and goal $G$, ACONIC applies Planning-as-Satisfiability (PaS) to reduce it to a 3-SAT instance:

$$P \xrightarrow{\text{PaS}} \Phi = \{C_1, C_2, \ldots, C_m\}$$

where each clause $C_j$ encodes a constraint over task variables.

=== Constraint Graph and Treewidth ===

The constraint graph $G = (V, E)$ has variables as vertices and edges between variables appearing in the same clause.
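As a minimal worked example (the clause set here is invented for illustration, with DIMACS-style signed literals), the constraint graph's edges can be enumerated directly from the clauses:

```python
from itertools import combinations

def constraint_edges(clauses):
    """Edges between variables that co-occur in some clause."""
    edges = set()
    for clause in clauses:
        variables = {abs(lit) for lit in clause}  # ignore literal polarity
        for u, v in combinations(sorted(variables), 2):
            edges.add((u, v))
    return edges

# (x1 or ~x2 or x3) and (x3 or x4 or ~x5)
clauses = [(1, -2, 3), (3, 4, -5)]
print(sorted(constraint_edges(clauses)))
# [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5)]
```

Variable $x_3$ appears in both clauses, so it links the two triangles; every other variable interacts only within its own clause.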
The **treewidth** $\text{tw}(G)$ quantifies structural complexity:

$$\text{tw}(G) = \min_{T \in \mathcal{T}(G)} \max_{B \in T} (|B| - 1)$$

where $\mathcal{T}(G)$ is the set of all valid tree decompositions and $B$ ranges over bags in decomposition $T$. Lower treewidth means the task decomposes into weakly coupled subproblems amenable to sequential reasoning.

=== Tree Decomposition ===

Using Bodlaender's algorithm, ACONIC computes a tree decomposition $T = (\mathcal{B}, E_T)$ where each bag $B_i \in \mathcal{B}$ contains a subset of variables. The decomposition satisfies:

  * Every variable appears in at least one bag
  * Every constraint's variables appear together in some bag
  * Bags containing the same variable form a connected subtree (the running intersection property)

===== Decomposition-Guided Reasoning =====

The agent processes bags sequentially along the tree decomposition. For each bag $B_i$, the agent receives:

  * The sub-query restricted to variables in $B_i$
  * A variable mapping $\psi_i$ connecting shared variables to prior assignments
  * Violation status from consistency checks on bag intersections
  * Prior outputs from parent bags

Local consistency within each bag implies global satisfiability of the original CSP, provided the running intersection property holds.
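The running intersection property can be checked mechanically. The helper below is a minimal, dependency-free sketch (the bag contents and tree edges are invented for illustration): for each variable, it verifies that the bags holding it form a connected subgraph of the decomposition tree.

```python
def has_running_intersection(bags, tree_edges):
    """Check that the bags containing each variable form a
    connected subtree of the decomposition tree."""
    adj = {i: set() for i in range(len(bags))}
    for a, b in tree_edges:
        adj[a].add(b)
        adj[b].add(a)
    for var in set().union(*bags):
        holders = {i for i, bag in enumerate(bags) if var in bag}
        # BFS restricted to bags holding this variable
        start = next(iter(holders))
        seen, frontier = {start}, [start]
        while frontier:
            node = frontier.pop()
            for nxt in adj[node] & holders:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        if seen != holders:
            return False
    return True

# Valid path decomposition of a chain CSP: {1,2}-{2,3}-{3,4}
print(has_running_intersection([{1, 2}, {2, 3}, {3, 4}], [(0, 1), (1, 2)]))  # True
# Variable 1 appears in two non-adjacent bags -> property fails
print(has_running_intersection([{1, 2}, {2, 3}, {1, 3}], [(0, 1), (1, 2)]))  # False
```

When the check fails, assignments to a shared variable made in one bag cannot be reliably propagated to the other, which is exactly why local consistency would no longer imply global satisfiability.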
===== Code Example =====

A sketch of the ACONIC pipeline using networkx. The LLM-facing helpers (''restrict_query'', ''parse_assignments'', ''check_consistency'', ''repair_violations'', ''compile_solution'') are application-specific placeholders, not defined here:

<code python>
import networkx as nx
from networkx.algorithms import approximation as approx
from itertools import combinations


def build_constraint_graph(clauses):
    """Build the constraint graph from SAT clauses: one vertex per
    variable, one edge per pair of variables sharing a clause."""
    graph = nx.Graph()
    for clause in clauses:
        variables = {abs(lit) for lit in clause}  # treat x and ~x as one variable
        graph.add_nodes_from(variables)  # keep variables from unit clauses
        for u, v in combinations(variables, 2):
            graph.add_edge(u, v)
    return graph


def decompose_task(clauses):
    """Compute a tree decomposition for constraint-guided reasoning."""
    graph = build_constraint_graph(clauses)
    treewidth, decomposition = approx.treewidth_min_degree(graph)
    print(f"Treewidth: {treewidth} -> guides decomposition granularity")
    return list(decomposition.nodes())  # each node is a bag of variables


def solve_with_aconic(llm, task, clauses):
    """Solve a task using ACONIC decomposition-guided reasoning."""
    bags = decompose_task(clauses)
    assignments = {}
    for bag in bags:
        sub_query = restrict_query(task, bag, assignments)
        result = llm.generate(sub_query)
        assignments.update(parse_assignments(result, bag))
        if not check_consistency(assignments, clauses):
            assignments = repair_violations(llm, assignments, bag)
    return compile_solution(assignments)
</code>

===== Results =====

ACONIC demonstrates substantial improvements across benchmarks:

  * **SATBench**: completion rate improvements of 9-15 percentage points (Claude: 49.3% to 58.1%; LLaMA: 21.5% to 36.5%)
  * **Spider (NL2SQL)**: outperforms Tree-of-Thoughts by 3.6-7.5 points overall; LLaMA3-70B gains ~30-40 points over Chain-of-Thought across all difficulty levels
  * **Token efficiency**: uses fewer tokens than Divide-and-Conquer baselines due to structure-aware decomposition

The results confirm that formal complexity analysis provides a principled basis for task decomposition that outperforms heuristic methods.

===== Solvability Frontiers =====

A key contribution is the concept of **solvability frontiers** -- the boundary of task complexity where an LLM transitions from reliable to unreliable performance.
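A back-of-the-envelope comparison (a sketch, not a result from the paper) shows why decomposition moves this boundary: brute-force search over the whole formula scales as $2^n$, while bag-by-bag search scales as the number of bags times $2^{\text{tw}+1}$, since each bag holds at most $\text{tw}+1$ variables.

```python
def global_cost(num_vars):
    """Assignments examined by brute force over all variables."""
    return 2 ** num_vars

def decomposed_cost(num_bags, treewidth):
    """Upper bound on assignments examined bag by bag: each bag
    holds at most treewidth + 1 variables."""
    return num_bags * 2 ** (treewidth + 1)

# A 40-variable chain-structured CSP has treewidth 1 and ~39 bags
print(global_cost(40))         # 1099511627776
print(decomposed_cost(39, 1))  # 156
```

The gap grows exponentially in the number of variables but only linearly in the number of bags, which is the quantitative content of the frontier shift described below.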
ACONIC shifts these frontiers toward harder tasks by reducing effective per-step complexity through decomposition. For a task with treewidth $\text{tw}$, the effective complexity per reasoning step is bounded by $O(2^{\text{tw}})$, making previously intractable problems accessible when decomposition yields low treewidth.

===== References =====

  * [[https://arxiv.org/abs/2510.07772|ACONIC: An Approach for Systematic Decomposition of Complex LLM Tasks (2025)]]
  * [[https://arxiv.org/html/2510.07772v3|ACONIC Full Paper HTML]]
  * [[https://daplab.cs.columbia.edu/projects/aconic/|ACONIC Project Page -- Columbia University]]

===== See Also =====

  * [[mcts_llm_reasoning]] -- Tree search for structured reasoning
  * [[agentic_rag]] -- Autonomous retrieval planning with decomposition
  * [[formal_verification_llm_reasoning]] -- Verifying decomposed reasoning outputs