Graph Prompting

Graph prompting integrates graph structures – such as knowledge graphs, relational networks, and graph neural network (GNN) embeddings – into LLM prompts to enhance the model's ability to reason over structured, relational data. This addresses a key limitation of LLMs: their difficulty in precisely handling factual and relational information encoded in graph form.1)

Approaches

Text-Based Graph Encoding

Graphs are serialized into textual representations that LLMs can process directly. Google Research's “Talk Like a Graph” explores encoding strategies including node ordering, edge notation formats, and subgraph selection methods.2) Key findings:

  * The choice of encoding function has a large effect on LLM graph-reasoning accuracy.
  * LLMs struggle even with basic graph tasks such as edge existence and node counting.
  * The best-performing encoding depends on both the task and the structure of the graph.
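As a minimal sketch of text-based encoding, the function below renders a small graph as English sentences, one per node's outgoing edges. The exact phrasing is illustrative; the paper compares several such encoding styles, and `encode_graph` is a hypothetical helper, not an API from the paper.

```python
def encode_graph(nodes, edges):
    """Render a directed graph as plain-English sentences for an LLM prompt."""
    lines = [f"G describes a graph among nodes {', '.join(nodes)}."]
    for node in nodes:
        neighbors = [v for u, v in edges if u == node]
        if neighbors:
            lines.append(f"Node {node} is connected to nodes {', '.join(neighbors)}.")
    return "\n".join(lines)

prompt = encode_graph(
    nodes=["A", "B", "C"],
    edges=[("A", "B"), ("A", "C"), ("B", "C")],
) + "\nQ: How many edges does node A have?"
print(prompt)
```

Swapping the sentence templates (e.g. adjacency lists vs. social-network phrasing) is exactly the kind of variation the paper measures.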

Graph Neural Prompting (GNP)

Graph Neural Prompting combines GNNs with LLMs through a multi-step process:3)

  1. Subgraph retrieval: Relevant subgraphs are retrieved from a knowledge graph based on query entities.
  2. GNN encoding: A graph neural network encodes the subgraph nodes into embeddings.
  3. Cross-modality pooling: Relevant nodes are selected through cross-attention with the text query.
  4. Domain projection: A projector aligns graph embeddings with the LLM's text embedding space.
  5. Prompt construction: The resulting graph neural prompt (a soft prompt) is prepended to text embeddings.

This produces an instance-specific prompt for each query, unlike standard prompt tuning, which learns a single dataset-level soft prompt.
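The pipeline above can be sketched numerically. The NumPy code below uses random weights purely for shape-level illustration; in real GNP the GNN, cross-attention, and projector are trained, and the dimensions here are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 subgraph nodes, graph dim 8, LLM text dim 16, 3 text tokens.
num_nodes, d_graph, d_text, num_tokens = 4, 8, 16, 3

# Steps 1-2: assume a GNN has already encoded the retrieved subgraph nodes.
node_emb = rng.normal(size=(num_nodes, d_graph))

# Step 3: cross-modality pooling -- attend from text tokens to graph nodes.
text_emb = rng.normal(size=(num_tokens, d_text))
W_q = rng.normal(size=(d_text, d_graph))           # maps text into graph space
scores = text_emb @ W_q @ node_emb.T               # (tokens, nodes)
attn = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)
pooled = attn @ node_emb                           # (tokens, d_graph)
graph_summary = pooled.mean(axis=0)                # single (d_graph,) vector

# Step 4: domain projection into the LLM's embedding space.
W_proj = rng.normal(size=(d_graph, d_text))
graph_prompt = graph_summary @ W_proj              # (d_text,)

# Step 5: prepend the soft prompt to the text embeddings.
llm_input = np.vstack([graph_prompt[None, :], text_emb])
print(llm_input.shape)  # (4, 16): one graph token + three text tokens
```

The resulting `llm_input` is what would be fed to the LLM in place of plain token embeddings, which is why the graph prompt is "soft": it lives in embedding space rather than the vocabulary.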

In-Context Learning with Graphs (GraphICL)

GraphICL uses structured prompt templates to capture graph structure in Text-Attributed Graphs, enabling in-context learning without training. It outperforms specialized graph LLMs in resource-constrained settings.4)
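A hedged sketch of such a template for node classification on a citation graph follows. The wording and field names (`Paper`, `Neighbors`, `Category`) are assumptions for illustration, not the paper's exact template.

```python
def build_prompt(target_text, neighbor_texts, demos, labels):
    """Assemble a GraphICL-style prompt: demonstrations, target node text,
    neighbor texts as structural context, and the label set."""
    parts = []
    for demo_text, demo_label in demos:          # in-context demonstrations
        parts.append(f"Paper: {demo_text}\nCategory: {demo_label}\n")
    parts.append(f"Paper: {target_text}")
    parts.append("Neighbors: " + "; ".join(neighbor_texts))
    parts.append(f"Choose a category from {labels}.\nCategory:")
    return "\n".join(parts)

prompt = build_prompt(
    target_text="Graph attention networks for citation graphs.",
    neighbor_texts=["GCN semi-supervised classification",
                    "Attention is all you need"],
    demos=[("Residual learning for image recognition.", "Computer Vision")],
    labels=["Machine Learning", "Computer Vision", "Databases"],
)
print(prompt)
```

Because the graph structure enters only through the serialized neighbor texts, no model training or gradient access is needed, which is what makes the approach attractive in resource-constrained settings.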

Knowledge Graph Integration

Knowledge graphs provide factual and structural knowledge to augment LLMs through retrieval-augmented mechanisms.
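A minimal sketch of this retrieval-augmented pattern, assuming a toy in-memory triple store and hypothetical helper names (`retrieve`, `kg_prompt`):

```python
# Toy knowledge graph as (head, relation, tail) triples.
TRIPLES = [
    ("Aspirin", "treats", "headache"),
    ("Aspirin", "interacts_with", "warfarin"),
    ("Ibuprofen", "treats", "fever"),
]

def retrieve(entities, triples=TRIPLES):
    """Return triples whose head or tail matches a query entity."""
    return [t for t in triples if t[0] in entities or t[2] in entities]

def kg_prompt(question, entities):
    """Inject retrieved triples into the prompt as factual context."""
    facts = "\n".join(f"- {h} {r.replace('_', ' ')} {t}"
                      for h, r, t in retrieve(entities))
    return f"Known facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

p = kg_prompt("Is aspirin safe with warfarin?", entities={"Aspirin", "warfarin"})
print(p)
```

Production systems replace the list scan with entity linking plus a SPARQL or vector-store lookup, but the prompt shape (facts first, question second) is the same.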

Integration approaches fall into four categories:

Benchmark Results

Graph Neural Prompting achieved significant improvements on commonsense and biomedical reasoning benchmarks: +13.5% over the baseline with the LLM frozen, and +1.8% with the LLM fine-tuned.5)

GraphICL outperformed specialized graph LLMs and GNNs on out-of-domain text-attributed graph benchmarks.

Practical Applications

Limitations

See Also

References

2) Google Research, “Talk like a Graph”
5) Tian et al. 2023, experimental results