Generative UI

Generative UI is a paradigm where AI agents dynamically generate or control interactive user interface components at runtime, adapting the interface based on context, user input, and agent state rather than relying on static, pre-designed layouts. This transforms UIs from passive wrappers around chat interfaces into active participants in agent execution, enabling task-specific rendering, structured input collection, and real-time progress visualization.

Overview

Traditional agent interfaces are limited to text-based chat – the agent produces text, and the user reads it. Generative UI breaks this constraint by allowing agents to emit structured UI specifications that frontends render as interactive components: forms, charts, maps, media players, data tables, and custom widgets.

This is enabled by protocols like AG-UI that provide the transport layer for streaming UI events between agents and frontends, combined with UI specifications that define how components are described and rendered.
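To make the streaming concrete, here is a minimal sketch of agent-to-frontend event framing. The event type names (`TOOL_CALL_START`, `TOOL_CALL_ARGS`, `TOOL_CALL_END`) follow AG-UI's tool-call events, but the exact field names and the server-sent-events framing below are illustrative assumptions, not the normative wire format:

```python
import json

# Illustrative event payloads for a streamed tool call. Field names
# beyond "type" are assumptions for this sketch, not the official schema.
events = [
    {"type": "TOOL_CALL_START", "toolCallId": "t1", "toolName": "render_chart"},
    {"type": "TOOL_CALL_ARGS", "toolCallId": "t1", "delta": '{"series": [1, 2, 3]}'},
    {"type": "TOOL_CALL_END", "toolCallId": "t1"},
]

def encode_sse(event: dict) -> str:
    """Frame one event as a server-sent-events chunk, a common transport choice."""
    return f"data: {json.dumps(event)}\n\n"

stream = "".join(encode_sse(e) for e in events)
print(stream.count("data:"))  # 3 framed events
```

The frontend consumes this stream incrementally, so a component can begin rendering from `TOOL_CALL_ARGS` deltas before the tool call finishes.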

CopilotKit OpenGenerativeUI

OpenGenerativeUI, created by CopilotKit, is an open-source framework providing a universal runtime for multiple generative UI specifications. It is built on the AG-UI protocol and supports the three generative UI patterns described below, along with their associated specifications (A2UI, Open-JSON-UI, and MCP Apps).

AG-UI itself is not a UI spec but the bidirectional event/state protocol handling real-time coordination. It manages tool lifecycles (started, streaming, finished, failed), user interactions (clicks, form submissions), and agent state updates (progress, partial results, next steps).
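The tool lifecycle named above can be sketched as a small state machine. The allowed transitions here are inferred from the states the text lists (started, streaming, finished, failed); they are an illustrative assumption, not the protocol's normative definition:

```python
# Assumed transitions: a tool call starts, may stream partial results,
# and ends in finished or failed; terminal states accept no transitions.
TRANSITIONS = {
    "started": {"streaming", "finished", "failed"},
    "streaming": {"streaming", "finished", "failed"},
    "finished": set(),
    "failed": set(),
}

def advance(state: str, event: str) -> str:
    """Move the tool call to a new lifecycle state, rejecting illegal jumps."""
    if event not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {event}")
    return event

state = "started"
for ev in ["streaming", "streaming", "finished"]:
    state = advance(state, ev)
print(state)  # finished
```

Tracking lifecycle explicitly lets the frontend show spinners during streaming and error states on failure without guessing from raw payloads.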

Three Generative UI Patterns

| Pattern     | Description                                                 | Control  | Flexibility | Protocols                   |
|-------------|-------------------------------------------------------------|----------|-------------|-----------------------------|
| Static      | Agent fills data into predefined components                 | Maximum  | Minimum     | AG-UI tool lifecycle events |
| Declarative | Agent assembles UI from a component registry via JSON specs | Moderate | Moderate    | A2UI / Open-JSON-UI + AG-UI |
| Open-ended  | Agent outputs raw content (HTML, iframes)                   | Minimum  | Maximum     | MCP Apps + AG-UI            |

Each pattern represents a different tradeoff. Static patterns ensure visual consistency and security; open-ended patterns maximize agent creativity but require sandboxing to prevent untrusted content injection.
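The static pattern's control guarantee can be illustrated with a short sketch: the frontend ships a fixed component catalog, and the agent only supplies data, which the runtime validates before rendering. The component name, prop schema, and validation helper here are hypothetical:

```python
# Hypothetical catalog of predefined components and their allowed props.
PREDEFINED = {"WeatherCard": {"city", "temperature", "conditions"}}

def fill_component(name: str, props: dict) -> dict:
    """Validate agent-supplied props against the fixed component schema."""
    allowed = PREDEFINED[name]
    unknown = set(props) - allowed
    if unknown:
        raise ValueError(f"unknown props for {name}: {unknown}")
    return {"component": name, "props": props}

msg = fill_component("WeatherCard", {"city": "Oslo", "temperature": 4})
```

Because the agent can never introduce new components or props, visual consistency and security follow directly from the catalog.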

Code Example

Agent-side generative UI with LangGraph:

from typing import Annotated, Sequence, TypedDict
from langchain_core.messages import BaseMessage, AIMessage
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
    # Reducer concatenates UI payloads emitted across agent steps
    ui: Annotated[Sequence[dict], lambda a, b: list(a) + list(b)]

def push_ui_message(component, props, message):
    """Attach a UI component spec to an agent message (simplified helper)."""
    message.additional_kwargs["ui"] = {
        "component": component,
        "props": props
    }

async def weather_node(state):
    """Agent node that generates both text and UI."""
    # fetch_weather is an application-specific async helper (not shown)
    weather_data = await fetch_weather(state["messages"][-1].content)
    message = AIMessage(content=f"Weather for {weather_data['city']}")
    push_ui_message("WeatherCard", {
        "city": weather_data["city"],
        "temperature": weather_data["temp"],
        "conditions": weather_data["conditions"],
    }, message)
    return {"messages": [message]}

Declarative JSON UI specification emitted by an agent:

# Agent emits this JSON, frontend renders the appropriate component
generative_ui_spec = {
    "type": "form",
    "title": "Flight Booking",
    "components": [
        {
            "type": "date_picker",
            "id": "departure",
            "label": "Departure Date",
            "min_date": "2026-03-25"
        },
        {
            "type": "select",
            "id": "cabin_class",
            "label": "Cabin Class",
            "options": ["Economy", "Business", "First"]
        },
        {
            "type": "submit_button",
            "label": "Search Flights",
            "action": "search_flights"
        }
    ]
}
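On the frontend side, a declarative spec like the one above is rendered by looking up each component `type` in a registry, so the agent can only assemble UIs from vetted building blocks. The renderer functions and HTML output below are a minimal sketch, not a real framework's API:

```python
# Hypothetical registry mapping spec "type" values to renderer functions.
def render_date_picker(c: dict) -> str:
    return f"<input type='date' id='{c['id']}' min='{c.get('min_date', '')}'>"

def render_select(c: dict) -> str:
    opts = "".join(f"<option>{o}</option>" for o in c["options"])
    return f"<select id='{c['id']}'>{opts}</select>"

def render_submit(c: dict) -> str:
    return f"<button data-action='{c['action']}'>{c['label']}</button>"

REGISTRY = {
    "date_picker": render_date_picker,
    "select": render_select,
    "submit_button": render_submit,
}

def render_form(spec: dict) -> str:
    """Assemble a form by dispatching each component to its registered renderer."""
    body = "".join(REGISTRY[c["type"]](c) for c in spec["components"])
    return f"<form><h2>{spec['title']}</h2>{body}</form>"
```

Passing the flight-booking spec above through `render_form` would yield the complete form markup; an unrecognized `type` fails fast with a `KeyError` instead of rendering untrusted content.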

Framework Integration

Generative UI is supported across major agent frameworks. The LangGraph example above shows one agent-side integration; CopilotKit's OpenGenerativeUI runtime provides the corresponding frontend support for AG-UI-based agents.

Security Considerations

Open-ended generative UI patterns require careful sandboxing. Because the agent emits raw content such as HTML or iframes, the frontend must isolate that content from the host page to prevent untrusted content injection, as noted in the pattern comparison above.
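One common mitigation is to render agent-emitted markup inside a sandboxed iframe. The sketch below builds such a wrapper in Python for illustration; the empty `sandbox` attribute (deny scripts and same-origin access) and `srcdoc` escaping are a minimal example, not a complete security policy:

```python
import html

def wrap_untrusted(content: str) -> str:
    """Embed untrusted agent output in an iframe with all sandbox permissions denied."""
    escaped = html.escape(content, quote=True)
    return (
        '<iframe sandbox="" '          # empty sandbox: no scripts, no same-origin
        f'srcdoc="{escaped}"></iframe>'
    )

frame = wrap_untrusted('<b>agent output</b><script>alert(1)</script>')
```

Real deployments layer further controls on top, such as a Content Security Policy and allowlisted origins for any embedded resources.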
