Collective Agent Behavior

Collective agent behavior studies the emergent phenomena that arise when networks of LLM-powered agents interact, communicate, and make decisions together. Unlike classical agent-based models using simple rules, cognitive agents powered by LLMs exhibit advanced reasoning that fundamentally changes how consensus, polarization, and swarm dynamics emerge at scale.

From Particles to Cognitive Agents

Classical collective behavior models (particle swarm optimization, opinion dynamics, Schelling segregation) use agents governed by formal mathematical rules. LLM agents differ in three critical ways:

The central question is whether embedded “intelligence” improves or degrades collective outcomes compared to simple rule-following particles.

The Nature npj AI Paper (March 2026)

De Marzo, Castellano, and Garcia published “Unraveling the emergence of collective behavior in networks of cognitive agents” in npj Artificial Intelligence (March 2026), comparing LLM agent swarms with classical particles across two tasks.

LLM Agent Swarm Optimization (llmASO): A swarm of interacting LLM agents acts as a function optimizer. Each agent proposes solutions, observes neighbors' results, and updates its strategy through natural language reasoning.
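The paper's exact prompting protocol is not reproduced here; the following is a minimal numeric sketch of an llmASO-style loop in which the natural-language reasoning step is replaced by a placeholder update rule (move toward the best-scoring neighbor, plus exploration noise). All names and parameters are illustrative assumptions, not values from the paper.

```python
import random

def llmaso_sketch(objective, n_agents=10, n_rounds=50, step=0.5, seed=0):
    """Toy swarm optimizer: each agent holds a candidate solution,
    observes all other agents' scores, and moves toward the current
    best candidate with some exploration noise. The natural-language
    reasoning step of llmASO is mocked by this deterministic rule."""
    rng = random.Random(seed)
    positions = [rng.uniform(-10, 10) for _ in range(n_agents)]
    for _ in range(n_rounds):
        scores = [objective(x) for x in positions]
        best = positions[scores.index(min(scores))]
        positions = [
            x + step * (best - x) + rng.gauss(0, 0.1)
            for x in positions
        ]
    return min(positions, key=objective)

# Minimizing a simple quadratic; the swarm should approach x = 3.
x_star = llmaso_sketch(lambda x: (x - 3) ** 2)
```

The placeholder rule makes the collective dynamics visible (contraction toward the incumbent best, noise-driven exploration) without any model calls; a real llmASO agent would instead reason in text about its neighbors' results.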

Key findings:

$$F_{majority} = \frac{n_{agree}}{n_{total}} \cdot w_{conformity}$$

The majority force coefficient governs how strongly agents conform to neighborhood consensus. Coordination breaks down beyond a critical group size $N_c$ that grows exponentially with model language ability.
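The coefficient can be transcribed directly from the formula; the function name and the guard for an empty neighborhood are assumptions added for safety.

```python
def majority_force(n_agree, n_total, w_conformity):
    """Majority force: fraction of neighbors agreeing with the
    local consensus, scaled by a conformity weight."""
    if n_total == 0:
        return 0.0  # assumption: no neighbors means no conformity pull
    return (n_agree / n_total) * w_conformity
```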

Consensus and Conformity

LLM agents exhibit conformity bias consistent with Social Impact Theory from social psychology, under which conformity pressure depends on the strength, immediacy, and number of influence sources.

Even agents that perform well in isolation falter under peer pressure near the boundaries of their competence. This has implications for multi-agent systems where agents vote or debate: the "wisdom of crowds" effect can reverse into groupthink.
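The flip from independent judgment to conformity can be sketched with a toy voting rule. The confidence and conformity parameters below are illustrative assumptions, not values from any paper: an agent's vote follows its private signal unless conformity-weighted peer pressure outweighs its margin of confidence.

```python
def vote(private_confidence, neighbor_votes, conformity=0.3):
    """Vote True if private confidence (in [0, 1], 0.5 = undecided)
    outweighs conformity-weighted peer pressure; illustrates how
    weakly-held correct judgments flip near the competence boundary.
    Thresholds are illustrative assumptions."""
    if not neighbor_votes:
        return private_confidence >= 0.5
    # Peer lean in [-1, 1]: +1 if all neighbors vote True, -1 if all False.
    peer_lean = sum(1 if v else -1 for v in neighbor_votes) / len(neighbor_votes)
    score = (private_confidence - 0.5) + conformity * peer_lean
    return score >= 0
```

With five unanimous dissenting neighbors, a strongly confident agent (0.95) holds its vote while a weakly confident one (0.6) flips, mirroring the competence-boundary effect described above.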

# Simulating opinion dynamics in an LLM agent network
import networkx as nx
import numpy as np
 
class CognitiveAgent:
    def __init__(self, agent_id, initial_opinion, stubbornness=0.3):
        self.id = agent_id
        self.opinion = initial_opinion  # continuous value in [0, 1]
        self.stubbornness = stubbornness
 
    def update_opinion(self, neighbor_opinions):
        compatible = [o for o in neighbor_opinions
                      if abs(o - self.opinion) < 0.3]  # confidence threshold
        if compatible:
            mean_neighbor = np.mean(compatible)
            self.opinion = (
                self.stubbornness * self.opinion +
                (1 - self.stubbornness) * mean_neighbor
            )
        return self.opinion
 
def simulate_opinion_dynamics(n_agents=50, n_steps=100, topology="watts_strogatz"):
    if topology == "watts_strogatz":
        G = nx.watts_strogatz_graph(n_agents, k=6, p=0.3)
    elif topology == "barabasi_albert":
        G = nx.barabasi_albert_graph(n_agents, m=3)
    else:
        G = nx.complete_graph(n_agents)
 
    agents = {i: CognitiveAgent(i, np.random.random()) for i in range(n_agents)}
    history = []
    for step in range(n_steps):
        # Asynchronous sweep: agents update in node order, so later agents
        # see neighbors' already-updated opinions within the same step.
        for node in G.nodes():
            neighbors = list(G.neighbors(node))
            neighbor_opinions = [agents[n].opinion for n in neighbors]
            agents[node].update_opinion(neighbor_opinions)
        history.append([agents[i].opinion for i in range(n_agents)])
    return np.array(history)
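A simple way to summarize a run is to count how many opinion clusters survive. The helper below is a sketch that groups sorted opinions using the same 0.3 confidence threshold as `update_opinion`; the function name is an assumption.

```python
def count_clusters(opinions, tol=0.3):
    """Count opinion clusters: sort the opinions and start a new
    cluster whenever the gap to the previous value exceeds tol."""
    values = sorted(opinions)
    if not values:
        return 0
    clusters = 1
    for prev, curr in zip(values, values[1:]):
        if curr - prev > tol:
            clusters += 1
    return clusters
```

For example, `count_clusters(history[-1])` on the array returned by `simulate_opinion_dynamics` gives the number of camps remaining at the final step.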

Swarm Patterns and Tribal Formation

In resource-constrained environments, diverse LLM agent populations (GPT-2, OPT, Pythia) spontaneously polarize into tribes.

Under scarcity ($C/N = 0.14$), tribal formation reduces demand variance and enables individual wins despite 92% system overload. Under abundance ($C/N = 0.86$), the same tribal structures hinder equitable resource distribution.
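The variance-reduction mechanism can be illustrated with a toy allocation model. The agent counts, resource counts, and request rules below are assumptions for illustration, not the paper's setup: uncoordinated agents all chase the same resource, while tribal agents split their requests across tribe-specific resources.

```python
import statistics

def demand_variance(requests, n_resources):
    """Population variance of per-resource demand, given each
    agent's requested resource index."""
    counts = [0] * n_resources
    for r in requests:
        counts[r] += 1
    return statistics.pvariance(counts)

# 50 agents, 5 resources (illustrative numbers).
uncoordinated = [0] * 50          # everyone chases resource 0
tribal = [i % 5 for i in range(50)]  # tribes claim distinct resources
```

Partitioning demand across tribes drives per-resource variance to zero here, which is the sense in which tribal structure smooths demand under scarcity; under abundance the same rigid partition can block agents from idle capacity outside their tribe's resource.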

Network Topology Effects

| Topology       | Convergence Speed | Solution Quality           | Premature Convergence Risk |
|----------------|-------------------|----------------------------|----------------------------|
| Complete graph | Fast              | Low (groupthink)           | High                       |
| Small-world    | Medium            | Good balance               | Medium                     |
| Scale-free     | Variable          | Hub-dependent              | High (hub influence)       |
| Ring/lattice   | Slow              | High (diverse exploration) | Low                        |

The optimal topology depends on the task. For optimization, sparse networks preserve diversity. For coordination tasks requiring agreement, denser networks accelerate consensus.
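Under the assumption that shorter average path length tracks faster consensus and higher density tracks groupthink risk, the table's topologies can be compared numerically with networkx. The graph parameters below are illustrative.

```python
import networkx as nx

def topology_stats(n=50):
    """Structural proxies for the table above: average shortest path
    length (consensus speed) and edge density (conformity pressure)."""
    graphs = {
        "complete": nx.complete_graph(n),
        # connected variant avoids rewiring into a disconnected graph
        "small_world": nx.connected_watts_strogatz_graph(n, k=6, p=0.3, seed=1),
        "scale_free": nx.barabasi_albert_graph(n, m=3, seed=1),
        "ring": nx.cycle_graph(n),
    }
    return {
        name: {
            "avg_path_length": nx.average_shortest_path_length(G),
            "density": nx.density(G),
        }
        for name, G in graphs.items()
    }

stats = topology_stats()
```

The complete graph has path length exactly 1 (maximal consensus pressure), while the ring's long paths slow information spread and preserve local diversity, consistent with the table's fast/slow convergence entries.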

Implications for Multi-Agent Systems
