PromptFlow is an open-source development framework created by Microsoft for building, evaluating, and deploying LLM-based applications through executable workflows called flows.1) It connects LLMs, prompts, Python code, and external tools into directed acyclic graphs (DAGs) that can be tested, debugged, and iterated systematically. PromptFlow integrates tightly with Azure Machine Learning and Microsoft Foundry.
Flows are executable workflows represented as DAGs where each node is one of:

- an **LLM node**, which calls a language model with a templated prompt
- a **Prompt node**, which renders a prompt template without calling a model
- a **Python node**, which runs custom Python code (data processing, API calls, routing logic)
Flows chain these nodes together, passing outputs from one as inputs to the next, creating reproducible pipelines for LLM application logic.
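The chaining described above can be sketched independently of PromptFlow itself: executing a flow amounts to a topological walk over the DAG, feeding each node the outputs of its upstream nodes. The `run_flow` helper below is a hypothetical mini-executor for illustration, not part of the PromptFlow API.

```python
from graphlib import TopologicalSorter

# Hypothetical mini-executor (not the PromptFlow API): walk the DAG in
# topological order, passing each node the results of its dependencies.
def run_flow(nodes: dict, edges: dict, inputs: dict) -> dict:
    """nodes: name -> callable; edges: name -> list of upstream names."""
    results = dict(inputs)
    for name in TopologicalSorter(edges).static_order():
        if name in results:  # a flow input, already available
            continue
        upstream = [results[dep] for dep in edges[name]]
        results[name] = nodes[name](*upstream)
    return results

# Three toy nodes: normalize -> tokenize -> count
nodes = {
    "normalize": lambda text: text.lower(),
    "tokenize": lambda text: text.split(),
    "count": lambda tokens: len(tokens),
}
edges = {"text": [], "normalize": ["text"], "tokenize": ["normalize"], "count": ["tokenize"]}
out = run_flow(nodes, edges, {"text": "Hello PromptFlow World"})
print(out["count"])  # 3
```

Because node order is derived from the edges, nodes can be declared in any order, which is exactly how `flow.dag.yaml` definitions behave.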
PromptFlow supports a structured development lifecycle:2)

- **Develop:** author the flow's nodes and prompts, locally or in the visual editor
- **Test:** run the flow against sample inputs and debug individual nodes
- **Evaluate:** batch-run the flow over datasets and score outputs with evaluation flows
- **Deploy:** publish the flow as an endpoint for production use
The visual DAG editor makes complex prompt engineering accessible without deep coding expertise, while the SDK supports programmatic flow construction for advanced users.
PromptFlow excels in systematic LLM application testing:3)

- **Batch runs:** execute a flow over an entire dataset of test cases in one operation
- **Evaluation flows:** reusable flows that score outputs against criteria such as groundedness, relevance, or exact match
- **Variants:** define multiple versions of a prompt or LLM configuration and evaluate them against the same data
- **Metrics and comparison:** aggregate scores across runs so prompt variants can be compared side by side
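To make the systematic-testing idea concrete, the sketch below runs a flow over a labeled dataset and aggregates an accuracy metric in plain Python. The `evaluate_batch` helper and the toy `intent_flow` are hypothetical stand-ins for PromptFlow's batch runs and evaluation flows, not the framework's actual API.

```python
# Hypothetical stand-in for a PromptFlow batch run plus evaluation flow:
# run a flow callable over a labeled dataset and aggregate an accuracy metric.
def evaluate_batch(flow, dataset: list[dict]) -> dict:
    rows = []
    for case in dataset:
        prediction = flow(case["text"])
        rows.append({
            "input": case["text"],
            "prediction": prediction,
            "expected": case["expected"],
            "correct": prediction == case["expected"],
        })
    accuracy = sum(r["correct"] for r in rows) / len(rows)
    return {"rows": rows, "accuracy": accuracy}

# A trivial "flow" to evaluate: keyword-based intent classification.
def intent_flow(text: str) -> str:
    lowered = text.lower()
    if any(w in lowered for w in ("error", "debug", "api")):
        return "technical"
    if any(w in lowered for w in ("invoice", "payment")):
        return "billing"
    return "general"

dataset = [
    {"text": "My API call raises an error", "expected": "technical"},
    {"text": "Where is my invoice?", "expected": "billing"},
    {"text": "Tell me about your product", "expected": "general"},
    {"text": "Payment failed twice", "expected": "billing"},
]
report = evaluate_batch(intent_flow, dataset)
print(f"accuracy: {report['accuracy']:.2f}")  # accuracy: 1.00
```

PromptFlow performs the same loop at scale, persisting per-row results so failed cases can be inspected individually.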
In Microsoft Foundry (classic), additional features include advanced quality assessments and comprehensive tracking dashboards.
PromptFlow is deeply embedded in the Microsoft Azure ecosystem:4)

- Flows can be authored and run inside Azure Machine Learning workspaces and Microsoft Foundry
- Connections manage credentials for Azure OpenAI and other services
- Flows deploy as managed online endpoints with built-in monitoring
The cloud version handles backend complexity (compute, scaling, monitoring) while maintaining the same flow definitions used locally.
```python
from promptflow.core import tool


# Define individual tools (nodes) for the flow
@tool
def extract_keywords(text: str) -> list[str]:
    """Extract key topics from input text."""
    stop_words = {"the", "a", "is", "in", "to", "and", "of", "for", "on", "with"}
    words = text.lower().split()
    return [w for w in words if len(w) > 3 and w not in stop_words]


@tool
def classify_intent(keywords: list[str]) -> str:
    """Classify user intent based on extracted keywords."""
    intents = {
        "technical": ["code", "error", "debug", "api", "function", "deploy"],
        "billing": ["price", "cost", "subscription", "payment", "invoice"],
        "general": ["help", "question", "info", "about", "learn"],
    }
    for intent, triggers in intents.items():
        if any(kw in triggers for kw in keywords):
            return intent
    return "general"


@tool
def route_response(intent: str, original_text: str) -> dict:
    """Route to appropriate handler based on classified intent."""
    handlers = {
        "technical": "Routing to technical support team...",
        "billing": "Routing to billing department...",
        "general": "Routing to general assistant...",
    }
    return {
        "intent": intent,
        "handler": handlers.get(intent, handlers["general"]),
        "original_query": original_text,
    }


# Execute the DAG by hand: extract_keywords -> classify_intent -> route_response
user_input = "I need help debugging an API error in my deployment"
keywords = extract_keywords(user_input)
print(f"Keywords: {keywords}")

intent = classify_intent(keywords)
print(f"Intent: {intent}")

result = route_response(intent, user_input)
print(f"Routing: {result}")

# To run as a full PromptFlow flow, define flow.dag.yaml:
flow_yaml = """
inputs:
  text:
    type: string
nodes:
- name: extract_keywords
  type: python
  source: {type: code, path: flow_tools.py}
  inputs: {text: ${inputs.text}}
- name: classify_intent
  type: python
  source: {type: code, path: flow_tools.py}
  inputs: {keywords: ${extract_keywords.output}}
- name: route_response
  type: python
  source: {type: code, path: flow_tools.py}
  inputs: {intent: ${classify_intent.output}, original_text: ${inputs.text}}
outputs:
  result:
    value: ${route_response.output}
"""
print(f"\nFlow DAG YAML:\n{flow_yaml}")
```
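The `${node.output}` and `${inputs.name}` references in the YAML above are what define the DAG's edges. As an illustration (the `extract_edges` helper is not part of PromptFlow), the dependency graph can be recovered from those references with a small regex:

```python
import re

# Illustrative helper (not part of PromptFlow): recover DAG edges from the
# ${node.output} / ${inputs.name} references inside each node's inputs mapping.
def extract_edges(node_inputs: dict[str, dict[str, str]]) -> dict[str, list[str]]:
    ref = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\.")
    edges = {}
    for node, inputs in node_inputs.items():
        deps = set()
        for value in inputs.values():
            for match in ref.findall(value):
                if match != "inputs":  # ${inputs.*} is a flow input, not a node
                    deps.add(match)
        edges[node] = sorted(deps)
    return edges

# The inputs of each node, as declared in the flow.dag.yaml above
node_inputs = {
    "extract_keywords": {"text": "${inputs.text}"},
    "classify_intent": {"keywords": "${extract_keywords.output}"},
    "route_response": {
        "intent": "${classify_intent.output}",
        "original_text": "${inputs.text}",
    },
}
print(extract_edges(node_inputs))
# {'extract_keywords': [], 'classify_intent': ['extract_keywords'], 'route_response': ['classify_intent']}
```

This is why node order in the YAML is irrelevant: the runtime schedules nodes from the reference graph, not from their position in the file.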
| Aspect | PromptFlow | LangChain | Haystack |
|---|---|---|---|
| Paradigm | Visual DAG flows | Code-first chains (LCEL) | Pipeline-based |
| Evaluation | Built-in batch eval, metrics | Requires custom scripting | LLM-based evaluation |
| Accessibility | Low-code visual editor | Developer-oriented | Developer-oriented |
| Cloud | Native Azure integration | Cloud-agnostic | Cloud-agnostic |
| Best For | Microsoft ecosystem teams | General LLM orchestration | Search/RAG pipelines |