System Prompt Composition: Building Effective System Prompts from Identity and Skills

System prompt composition is the practice of building AI agent system prompts from modular, reusable components rather than writing monolithic instruction blocks. By decomposing prompts into discrete sections — identity, skills, constraints, output format, and context — teams can maintain, test, and evolve prompts like code modules. This approach reduces AI errors by up to 60% and cuts manual prompt maintenance time by 60-75%.1)

Why Modular Composition Matters

Monolithic system prompts — single large blocks of instructions — are fragile. Changing one section can break behavior in another. They are difficult to test in isolation, impossible to reuse across agents, and expensive to maintain as requirements evolve.2)

Modular composition instead treats prompts as structured assets: discrete components that can be versioned, tested, and reused independently, like any other code artifact.

Core Components

Every production system prompt should contain these components, assembled in order of priority:4)

1. Identity and Persona

Defines who the agent is, its role, expertise level, and communication style. This grounds the LLM in a consistent personality and knowledge domain.

You are a senior enterprise data analyst with 15 years of experience
in financial modeling. Respond professionally, concisely, and with
data-driven insights.

Best practices: Be specific about expertise level, domain knowledge, and tone. Avoid vague descriptors like “helpful assistant” in favor of concrete role definitions.5)

2. Skills and Capabilities

Explicitly lists the tools, knowledge areas, or functions the agent can use. This prevents hallucinated capabilities and guides tool selection.

Available tools: SQL querying, Python data analysis via pandas,
Salesforce API integration. Use tools only when the task requires
them — prefer direct answers for simple questions.

3. Constraints and Guardrails

Defines boundaries: what the agent must not do, token limits, safety rules, compliance requirements, and ethical guidelines.

Do not generate code without user approval. Limit responses to
500 words. Never disclose PII. Cite sources for factual claims.
If uncertain, say so rather than speculating.

4. Output Format

Specifies the structure, format, and length of responses for consistency across interactions.

Format responses as JSON:
{"summary": string, "recommendations": array, "confidence": number}

Structured output formatting is critical for downstream systems that parse agent responses programmatically.6)
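When a downstream system parses agent output, it helps to validate replies against the declared schema before using them. A minimal sketch, assuming the JSON schema above; the function name `parse_agent_response` and the sample reply are invented for illustration:

```python
import json

# Expected fields and types, mirroring the schema in the prompt above.
EXPECTED = {"summary": str, "recommendations": list, "confidence": (int, float)}

def parse_agent_response(raw: str) -> dict:
    # json.loads raises on malformed JSON, so the caller can retry early.
    data = json.loads(raw)
    for key, expected_type in EXPECTED.items():
        if key not in data:
            raise ValueError(f"missing field: {key}")
        if not isinstance(data[key], expected_type):
            raise ValueError(f"wrong type for field: {key}")
    return data

reply = '{"summary": "Q3 revenue up 12%", "recommendations": ["expand EMEA"], "confidence": 0.82}'
parsed = parse_agent_response(reply)
```

Rejecting incomplete output at the boundary keeps formatting failures from propagating into downstream systems.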

5. Context Injection

Provides dynamic background data that changes per request: user profile, conversation history, retrieved documents, or real-time data.

Current user: {user_profile}
Conversation history: {last_3_messages}
Retrieved context: {rag_results}
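In practice these placeholders are filled per request by simple string templating. A minimal sketch of that step, using the placeholders above; the function name `inject_context` and the sample values are invented:

```python
# Template mirroring the context-injection block above.
CONTEXT_TEMPLATE = (
    "Current user: {user_profile}\n"
    "Conversation history: {last_3_messages}\n"
    "Retrieved context: {rag_results}"
)

def inject_context(user_profile: str, last_3_messages: list, rag_results: list) -> str:
    # Flatten list-valued inputs into plain text before substitution.
    return CONTEXT_TEMPLATE.format(
        user_profile=user_profile,
        last_3_messages=" | ".join(last_3_messages),
        rag_results="\n".join(rag_results),
    )

block = inject_context(
    "Ana, finance team",
    ["Hi", "Show Q3 numbers", "Thanks"],
    ["Q3 report excerpt"],
)
```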

6. Few-Shot Examples

Input-output pairs that demonstrate desired behavior. These are particularly effective for complex formatting requirements or nuanced decision-making.
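Continuing the data-analyst persona and JSON format from the sections above, a single illustrative pair might look like this (the content is invented for illustration):

```
User: Summarize Q3 revenue performance by region.
Assistant: {"summary": "EMEA up 8%, APAC flat, AMER up 3%",
"recommendations": ["Investigate APAC pipeline stalls"],
"confidence": 0.9}
```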

7. Error Handling

Fallback instructions for when the agent encounters ambiguity, missing data, or tool failures.
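Illustrative fallback instructions, in the same style as the constraint examples above (wording invented):

```
If a tool call fails, report the failure plainly and suggest a manual
alternative. If the request is ambiguous, ask one clarifying question
before proceeding. If required data is missing, state what is missing
rather than guessing.
```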

Composition Patterns

Template-Based Composition

Use variables and templates to assemble prompts programmatically. A base template defines the structure, and variables inject role-specific content:

<identity>{persona}</identity>
<skills>{roster}</skills>
<constraints>{constraints}</constraints>
<context>{injection}</context>
<output>{format}</output>

This pattern enables a single template to generate prompts for dozens of different agents by swapping variable values.7)
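A minimal sketch of this assembly step in Python, using the tagged template above; the function name `compose_prompt` and the sample section values are invented for illustration:

```python
# Base template mirroring the tagged structure above.
PROMPT_TEMPLATE = (
    "<identity>{persona}</identity>\n"
    "<skills>{roster}</skills>\n"
    "<constraints>{constraints}</constraints>\n"
    "<context>{injection}</context>\n"
    "<output>{format}</output>"
)

def compose_prompt(**sections: str) -> str:
    # str.format raises KeyError if any template variable is missing,
    # which catches incomplete agent definitions at build time.
    return PROMPT_TEMPLATE.format(**sections)

analyst_prompt = compose_prompt(
    persona="Senior enterprise data analyst, 15 years in financial modeling.",
    roster="SQL querying, pandas analysis, Salesforce API.",
    constraints="Max 500 words. Never disclose PII.",
    injection="Current user: {user_profile}",  # left for per-request filling
    format="JSON with summary, recommendations, confidence.",
)
```

Note the two-stage templating: build-time variables are substituted now, while runtime placeholders such as `{user_profile}` pass through untouched for context injection at request time.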

The Five Primitives Pattern

A composition framework built from five reusable building blocks.

Composable Skills Pattern

Instead of one monolithic prompt, decompose capabilities into independent skill modules that are loaded on demand. This follows the Unix philosophy of small programs that do one thing well.9)

Benefits include progressive disclosure (loading only relevant skills per request), cleaner context windows, and the ability to test each skill independently.
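A minimal sketch of on-demand skill loading; the registry contents and the function name `load_skills` are hypothetical:

```python
# Hypothetical skill registry: each entry is an independent prompt module
# that can be tested and versioned on its own.
SKILLS = {
    "sql": "Skill: run read-only SQL queries against the analytics warehouse.",
    "pandas": "Skill: analyze tabular data with Python and pandas.",
    "salesforce": "Skill: look up accounts through the Salesforce API.",
}

def load_skills(requested: list) -> str:
    # Progressive disclosure: include only the modules this request needs,
    # keeping the context window free of irrelevant instructions.
    unknown = [name for name in requested if name not in SKILLS]
    if unknown:
        raise KeyError(f"unknown skills: {unknown}")
    return "\n\n".join(SKILLS[name] for name in requested)

skills_block = load_skills(["sql", "pandas"])
```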

The RTCCO Framework

The Role-Task-Context-Constraints-Output framework organizes prompts into five clear functional components: Role (who the agent is), Task (what it must accomplish), Context (relevant background), Constraints (the boundaries it must respect), and Output (the expected response format). Each component behaves like a LEGO block that can be swapped or adjusted independently.

Sandwich Method

For complex prompts, use a three-layer structure: top bun (intent and role), filling (detailed instructions and examples), bottom bun (restate intent and constraints). This redundancy helps LLMs prioritize the most important instructions.11)
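A skeletal three-layer example (wording invented for illustration):

```
Top bun:    You are a contracts analyst. Your goal is to extract key clauses.
Filling:    Detailed extraction rules, clause definitions, few-shot examples.
Bottom bun: Remember: extract key clauses only; never draft new legal text.
```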

Enterprise Best Practices

Common Mistakes

See Also

References