====== Generative UI ======
**Generative UI** is a paradigm where AI agents dynamically generate or control interactive user interface components at runtime, adapting the interface based on context, user input, and agent state rather than relying on static, pre-designed layouts. This transforms UIs from passive wrappers around chat interfaces into active participants in agent execution, enabling task-specific rendering, structured input collection, and real-time progress visualization.
===== Overview =====
Traditional agent interfaces are limited to text-based chat -- the agent produces text, and the user reads it. Generative UI breaks this constraint by allowing agents to emit structured UI specifications that frontends render as interactive components: forms, charts, maps, media players, data tables, and custom widgets.
This is enabled by protocols like [[ag_ui_protocol|AG-UI]] that provide the transport layer for streaming UI events between agents and frontends, combined with UI specifications that define how components are described and rendered.
===== CopilotKit OpenGenerativeUI =====
**OpenGenerativeUI**, created by [[https://www.copilotkit.ai|CopilotKit]], is an open-source framework providing a universal runtime for multiple generative UI specifications. Built on the AG-UI protocol, it supports:
* **A2UI** -- Agent-to-UI specification for declarative component assembly
* **Open-JSON-UI** -- JSON-based UI descriptions for cross-framework rendering
* **MCP Apps (MCP-UI)** -- UI extensions surfaced through Model Context Protocol tool calls
* **Custom Specifications** -- Extensible architecture for domain-specific UI patterns
AG-UI itself is not a UI spec but the bidirectional event/state protocol handling real-time coordination. It manages tool lifecycles (started, streaming, finished, failed), user interactions (clicks, form submissions), and agent state updates (progress, partial results, next steps).
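A tool lifecycle on the wire can be pictured as a small stream of events. The sketch below is a simplified, illustrative rendering of that idea in Python; the event type names (`TOOL_CALL_START`, `TOOL_CALL_ARGS`, `TOOL_CALL_END`) and field names follow the AG-UI vocabulary but are emitted here as plain dicts rather than the protocol's typed events:

<code python>
import json

def tool_lifecycle_events(tool_call_id: str, tool_name: str, args: dict):
    """Yield a minimal started -> streaming -> finished event sequence."""
    yield {"type": "TOOL_CALL_START", "toolCallId": tool_call_id,
           "toolCallName": tool_name}
    # Arguments may arrive as incremental JSON deltas while the model streams.
    yield {"type": "TOOL_CALL_ARGS", "toolCallId": tool_call_id,
           "delta": json.dumps(args)}
    yield {"type": "TOOL_CALL_END", "toolCallId": tool_call_id}

events = list(tool_lifecycle_events("tc-1", "render_weather_card",
                                    {"city": "Oslo"}))
</code>

A frontend subscribed to this stream can open a placeholder component on the start event, hydrate it as argument deltas arrive, and finalize it on the end event.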
===== Three Generative UI Patterns =====
^ Pattern ^ Description ^ Developer Control ^ Agent Flexibility ^ Protocols ^
| **Static** | Agent fills data into predefined components | Maximum | Minimum | AG-UI tool lifecycle events |
| **Declarative** | Agent assembles UI from a component registry via JSON specs | Moderate | Moderate | A2UI / Open-JSON-UI + AG-UI |
| **Open-ended** | Agent outputs raw content (HTML, iframes) | Minimum | Maximum | MCP Apps + AG-UI |
Each pattern represents a different tradeoff. Static patterns ensure visual consistency and security; open-ended patterns maximize agent creativity but require sandboxing to prevent untrusted content injection.
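The static pattern's tradeoff can be made concrete with a short sketch. Here the frontend owns a fixed set of components and the agent only supplies data; the component name `WeatherCard` and its prop list are hypothetical:

<code python>
# Static pattern: predefined components with a fixed prop schema.
PREDEFINED_COMPONENTS = {"WeatherCard": ["city", "temperature", "conditions"]}

def fill_static_component(component: str, data: dict) -> dict:
    """Project agent tool output onto the props a predefined component accepts."""
    allowed_props = PREDEFINED_COMPONENTS[component]  # KeyError for unknown components
    return {
        "component": component,
        "props": {k: data[k] for k in allowed_props if k in data},
    }

ui = fill_static_component("WeatherCard", {
    "city": "Oslo", "temperature": 4, "conditions": "Cloudy",
    "raw_api_payload": {"debug": True},  # dropped: not in the prop schema
})
</code>

Because extra keys are silently discarded, the agent cannot extend the UI surface beyond what the developer predefined, which is exactly where the pattern's security and consistency guarantees come from.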
===== Code Example =====
Agent-side generative UI with LangGraph:
<code python>
from typing import Annotated, Sequence, TypedDict

from langchain_core.messages import AIMessage, BaseMessage
from langgraph.graph import add_messages


class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
    ui: Annotated[Sequence[dict], lambda a, b: a + b]


def push_ui_message(component, props, message):
    """Emit a UI component alongside an agent message."""
    message.additional_kwargs["ui"] = {
        "component": component,
        "props": props,
    }


async def weather_node(state: AgentState):
    """Agent node that generates both text and UI."""
    # fetch_weather is a placeholder for an application-specific data source.
    weather_data = await fetch_weather(state["messages"][-1].content)
    message = AIMessage(content=f"Weather for {weather_data['city']}")
    push_ui_message("WeatherCard", {
        "city": weather_data["city"],
        "temperature": weather_data["temp"],
        "conditions": weather_data["conditions"],
    }, message)
    return {"messages": [message]}
</code>
Declarative JSON UI specification emitted by an agent:
<code python>
# Agent emits this JSON; the frontend renders the matching components
generative_ui_spec = {
    "type": "form",
    "title": "Flight Booking",
    "components": [
        {
            "type": "date_picker",
            "id": "departure",
            "label": "Departure Date",
            "min_date": "2026-03-25",
        },
        {
            "type": "select",
            "id": "cabin_class",
            "label": "Cabin Class",
            "options": ["Economy", "Business", "First"],
        },
        {
            "type": "submit_button",
            "label": "Search Flights",
            "action": "search_flights",
        },
    ],
}
</code>
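The round trip completes when the user submits the form: the frontend reports the entered values as an interaction event, and the agent dispatches on the declared `"action"` name. The event shape and the `search_flights` handler below are hypothetical, a minimal sketch of that dispatch:

<code python>
def handle_interaction(event: dict) -> str:
    """Dispatch a form-submission event to the handler its action names."""
    actions = {
        "search_flights": lambda v: (
            f"Searching flights: {v['departure']}, {v['cabin_class']}"
        ),
    }
    handler = actions.get(event["action"])
    if handler is None:
        raise ValueError(f"Unknown action: {event['action']!r}")
    return handler(event["values"])

result = handle_interaction({
    "action": "search_flights",
    "values": {"departure": "2026-04-01", "cabin_class": "Business"},
})
</code>

Rejecting unknown actions matters here for the same reason as component whitelisting: the action string originates from agent-generated UI, so it should never select arbitrary backend behavior.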
===== Framework Integration =====
Generative UI is supported across major agent frameworks:
* **LangGraph** -- Native ''push_ui_message'' helper for colocating React components with graph nodes
* **AI SDK UI (Vercel)** -- Tool call results map directly to React components
* **CopilotKit** -- Full OpenGenerativeUI runtime with React component library
* **CrewAI / Mastra** -- AG-UI adapter support for generative UI emission
===== Security Considerations =====
Open-ended generative UI patterns require careful sandboxing:
* Agent-generated HTML should be rendered in sandboxed iframes
* Content Security Policies must restrict script execution
* User input from generated forms needs standard validation
* Component registries should whitelist allowed component types
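Registry whitelisting for declarative specs can be sketched as a recursive validation pass run before anything is rendered; the allowed type set below is illustrative:

<code python>
# Illustrative whitelist: only these component types may appear in a spec.
ALLOWED_COMPONENT_TYPES = {"form", "date_picker", "select", "submit_button"}

def validate_spec(node: dict) -> None:
    """Recursively reject specs containing non-whitelisted component types."""
    if node.get("type") not in ALLOWED_COMPONENT_TYPES:
        raise ValueError(f"Component type not allowed: {node.get('type')!r}")
    for child in node.get("components", []):
        validate_spec(child)

validate_spec({"type": "form", "components": [{"type": "select"}]})  # passes
</code>

Validating the whole tree up front, rather than per component at render time, ensures a malicious node deep in the spec (for example an injected ''iframe'') is rejected before any part of the form appears on screen.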
===== References =====
* [[https://www.copilotkit.ai/blog/the-developer-s-guide-to-generative-ui-in-2026|CopilotKit -- Developer Guide to Generative UI (2026)]]
* [[https://github.com/CopilotKit/generative-ui|OpenGenerativeUI GitHub Repository]]
* [[https://ai-sdk.dev/docs/ai-sdk-ui/generative-user-interfaces|Vercel AI SDK -- Generative User Interfaces]]
* [[https://docs.langchain.com/langsmith/generative-ui-react|LangGraph -- Generative UI with React]]
===== See Also =====
* [[ag_ui_protocol|AG-UI Protocol]]
* [[model_context_protocol|Model Context Protocol (MCP)]]
* [[human_in_the_loop|Human-in-the-Loop Agents]]
* [[agent_ux|Agent User Experience Design]]