Core Concepts
Reasoning
Memory & Retrieval
Agent Types
Design Patterns
Training & Alignment
Frameworks
Tools
Safety & Security
Evaluation
Meta
Graph prompting integrates graph structures – such as knowledge graphs, relational networks, and graph neural network (GNN) embeddings – into LLM prompts to enhance the model's ability to reason over structured, relational data. This addresses a key limitation of LLMs: their difficulty in precisely handling factual and relational information encoded in graph form.1)
Graphs are serialized into textual representations that LLMs can process directly. Google Research's “Talk Like a Graph” explores encoding strategies including node ordering, edge notation formats, and subgraph selection methods, and finds that the choice of encoding substantially affects LLM performance on graph reasoning tasks.2)
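To make the serialization idea concrete, here is a minimal sketch of two common graph-to-text encodings (edge list vs. per-node adjacency). The function names and prompt wording are illustrative, not taken from the paper:

```python
def encode_edge_list(nodes, edges):
    """Serialize a graph as an explicit edge list in natural language."""
    lines = [f"The graph has nodes: {', '.join(nodes)}."]
    for u, v in edges:
        lines.append(f"There is an edge between {u} and {v}.")
    return "\n".join(lines)

def encode_adjacency(nodes, edges):
    """Serialize the same graph as per-node adjacency statements."""
    adj = {n: [] for n in nodes}
    for u, v in edges:          # undirected: record both directions
        adj[u].append(v)
        adj[v].append(u)
    lines = []
    for n in nodes:
        neighbours = ", ".join(adj[n]) if adj[n] else "no other node"
        lines.append(f"{n} is connected to {neighbours}.")
    return "\n".join(lines)

nodes = ["A", "B", "C"]
edges = [("A", "B"), ("B", "C")]
# Either encoding can be prepended to a question about the graph:
prompt = encode_edge_list(nodes, edges) + "\n\nQuestion: Is A two hops from C?"
```

The same underlying graph yields noticeably different prompts under the two encodings, which is exactly the degree of freedom the encoding-strategy experiments vary.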
Graph Neural Prompting combines GNNs with LLMs in a multi-step process: a GNN encodes a knowledge subgraph retrieved for the input, the node representations are pooled, and the result is projected into the LLM's embedding space as a soft prompt.3)
This produces an instance-specific prompt for each query, unlike dataset-level methods such as standard prompt tuning.
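The pipeline above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the single mean-aggregation GNN layer, the pooling choice, and all shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_graph, d_llm = 8, 16            # assumed graph / LLM embedding sizes

def gnn_layer(node_feats, adj):
    """One round of mean-neighbour message passing (toy GNN)."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    return np.tanh((adj @ node_feats) / deg + node_feats)

def graph_neural_prompt(node_feats, adj, W_proj):
    h = gnn_layer(node_feats, adj)      # contextualize nodes in the subgraph
    pooled = h.mean(axis=0)             # pool node representations
    return pooled @ W_proj              # project into the LLM embedding space

node_feats = rng.normal(size=(5, d_graph))    # 5-node retrieved subgraph
adj = (rng.random((5, 5)) > 0.5).astype(float)
adj = np.maximum(adj, adj.T)                  # make the graph undirected
W_proj = rng.normal(size=(d_graph, d_llm))    # learned projector (random here)

soft_prompt = graph_neural_prompt(node_feats, adj, W_proj)
# `soft_prompt` would be prepended to the question's token embeddings;
# because the retrieved subgraph differs per question, so does the prompt.
```

The instance-specificity falls out naturally: every query retrieves its own subgraph, so every query gets a different soft-prompt vector.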
GraphICL uses structured prompt templates to capture graph structure in Text-Attributed Graphs, enabling in-context learning without training. It outperforms specialized graph LLMs in resource-constrained settings.4)
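A structured template of this kind can be sketched as follows. The slot layout and wording are an assumed illustration of the general approach, not GraphICL's actual template: node text, neighbour text, and a labelled demonstration fill fixed slots so the LLM can classify a new node purely in context.

```python
def build_graph_icl_prompt(target_text, neighbour_texts, demo):
    """Assemble an in-context node-classification prompt for a
    text-attributed graph from one labelled demonstration."""
    parts = [
        "Task: classify the category of the target node.",
        f"Example node: {demo['text']}",
        f"Example neighbours: {'; '.join(demo['neighbours'])}",
        f"Example label: {demo['label']}",
        "",
        f"Target node: {target_text}",
        f"Target neighbours: {'; '.join(neighbour_texts)}",
        "Label:",
    ]
    return "\n".join(parts)

demo = {
    "text": "Paper on convolutional networks for images",
    "neighbours": ["Paper on image classification"],
    "label": "Computer Vision",
}
prompt = build_graph_icl_prompt(
    "Paper on graph attention networks",
    ["Paper on node classification", "Paper on message passing"],
    demo,
)
```

Because the template only serializes text already attached to nodes and edges, no gradient updates or graph-specific training are needed, which is what makes the approach attractive when compute is constrained.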
Knowledge graphs provide factual and structural knowledge to augment LLMs: relevant triples or subgraphs are retrieved for a query and supplied to the model through retrieval-augmented mechanisms. Integration approaches fall into four broad categories, differing in how tightly the graph is coupled with the language model.
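The loosest form of coupling, prompt-level injection of retrieved triples, can be sketched as below. The triple store and the keyword-overlap retrieval rule are illustrative placeholders; real systems use entity linking and embedding-based retrieval.

```python
# Tiny in-memory knowledge graph as (subject, predicate, object) triples.
KG = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
]

def retrieve_triples(question, kg):
    """Keep triples whose subject or object is mentioned in the question."""
    q = question.lower()
    return [t for t in kg if t[0].lower() in q or t[2].lower() in q]

def kg_augmented_prompt(question, kg):
    """Serialize retrieved triples and prepend them as factual context."""
    facts = [f"{s} {p.replace('_', ' ')} {o}."
             for s, p, o in retrieve_triples(question, kg)]
    context = "\n".join(facts) if facts else "(no relevant facts found)"
    return f"Known facts:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = kg_augmented_prompt("What is the capital of France?", KG)
```

Tighter integration categories move this fusion earlier, e.g. into the embedding or attention layers, rather than into the prompt text.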
Graph Neural Prompting achieved significant improvements over baseline LLMs on commonsense and biomedical reasoning benchmarks.5)
GraphICL outperformed specialized graph LLMs and GNNs on out-of-domain text-attributed graph benchmarks.