Orchestrating Complex Tasks with Microsoft Agent Framework Workflows

In our previous article, we built a practical IT Helpdesk agent using the Microsoft Agent Framework. We saw how the ChatAgent could intelligently interact with users, use tools to check system statuses, and manage a conversation's state. It was a perfect example of a smart, conversational assistant, capable of understanding user intent and taking immediate, reactive steps.

But real-world business processes are rarely so simple. The agent we built is good at handling in-the-moment tasks, but what happens when a process needs to span several days, like a manager's approval for a new software license? How do you handle a task that must pause, wait for a human decision, and then reliably resume without losing context? What about a multi-step data pipeline where each step must complete successfully before the next one begins, and the entire sequence needs to be auditable?

This is where the conversational ChatAgent paradigm meets its limits, and a more structured, robust concept is needed. Enter Microsoft Agent Framework Workflows.

A ChatAgent handles ad-hoc requests dynamically, deciding what to do based on the input it receives. A Workflow takes a different approach: it defines a fixed, graph-based sequence of steps that execute in a predictable order. This matters when you need to guarantee that specific steps always run, that approvals happen at the right points, and that you can trace exactly what happened after the fact. Workflows are designed for complex, long-running processes where reliability and auditability are requirements, not nice-to-haves.

In this deep dive, we'll move beyond the chat interface and explore the orchestration engine at the heart of the framework. We'll learn how to build, run, and manage these durable constructs, including the all-important "human-in-the-loop" pattern - admittedly a bit more complex, but what makes enterprise AI not just functional, but trustworthy.

Agents vs. Workflows: Choosing the Right Tool for the Job

Before we dive into writing code, it's important to understand a fundamental architectural decision within the Microsoft Agent Framework: when to use a conversational ChatAgent and when to build a structured Workflow. They are not interchangeable. Each is designed to solve a different class of problems, and choosing the right tool is the first step toward building a robust and maintainable system.

ChatAgent: The Conversationalist

As we saw in our previous guide, the ChatAgent is very good at open-ended, dynamic interaction. Its behavior is primarily steered by the Large Language Model at its core. You give it a set of instructions, a collection of tools, and a user prompt, and the agent uses its reasoning capabilities to decide the best course of action.

The path it takes is not pre-determined; it emerges from the dialogue.

  • Best for:

    • User-facing conversational interfaces (chatbots, copilots).
    • Ad-hoc task execution where the user's intent guides the process.
    • Scenarios where flexibility and natural language understanding are more important than a rigid, repeatable process.
  • Driving Logic: The LLM's turn-by-turn reasoning. It plans, executes a tool, observes the result, and re-plans in a continuous loop.

  • State Management: The agent's state is encapsulated within the AgentThread, which holds the conversation history. This is ideal for managing the context of a single, continuous interaction but is not inherently designed for long-term persistence across system restarts without additional engineering.

  • Key Strength: Flexibility. It can adapt to unexpected user requests and navigate complex conversations without a predefined script.

Workflow: The Structured Orchestrator

A Workflow, by contrast, is a deterministic, graph-based process defined by the developer. If an agent is a smart employee given a goal, a workflow is the documented business process they are required to follow.

The control flow is explicit, defined as a series of nodes (Executors) connected by edges. While an LLM can be used within a node to perform an intelligent task, it does not control the overall direction of the process.

This explicit structure is what - from our point of view - enables the framework’s best enterprise features.

  • Best for:

    • Automating established business processes (e.g., expense approvals, user onboarding).
    • Long-running tasks that need to be paused and resumed (e.g., waiting for human input, running a multi-hour data job).
    • Scenarios where auditability, reliability, and a predictable execution path are non-negotiable.
  • Driving Logic: A developer-defined execution graph. The flow moves from one Executor to the next based on pre-defined connections and conditional logic.

  • State Management: Built for durability. Workflows are designed to be checkpointed - their state can be serialized and saved to persistent storage (like a database) at any point. This allows them to survive restarts and wait indefinitely for external events.

  • Key Strength: Reliability. The process is predictable, auditable, and resilient to interruption.
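The framework's own checkpointing API is the topic of the next part of this series, but the underlying idea can be sketched without it: serialize the process state at a step boundary, persist it, and rebuild it later. The following is a framework-free toy illustration of that idea (the `WorkflowState` dataclass and file-based store are our own invention, not the Microsoft Agent Framework's API):

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class WorkflowState:
    """Toy stand-in for a workflow's serializable state."""
    current_node: str
    ticket: dict


def save_checkpoint(state: WorkflowState, path: str) -> None:
    # Persist the state so the process can survive a restart.
    with open(path, "w") as f:
        json.dump(asdict(state), f)


def load_checkpoint(path: str) -> WorkflowState:
    # Rebuild the state; the engine would resume from `current_node`.
    with open(path) as f:
        return WorkflowState(**json.load(f))


state = WorkflowState(current_node="enrich_ticket", ticket={"priority": "High"})
save_checkpoint(state, "checkpoint.json")
resumed = load_checkpoint("checkpoint.json")
assert resumed == state
```

Because the state lives outside the process, the workflow can wait hours or days between the save and the load - exactly the property a conversational `AgentThread` does not give you out of the box.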

The Best of Both Worlds: Embedding Agents in Workflows

This isn't an "either/or" decision. The most sophisticated solutions often combine both patterns. A Workflow can orchestrate the high-level process, and one of its nodes can be a ChatAgent tasked with handling a specific, agent-like sub-task.

Consider an intelligent document processing pipeline:

  1. Workflow Node 1 (Executor): A simple function fetches a new PDF from a SharePoint folder. This step is deterministic and reliable.
  2. Workflow Node 2 (Agent): The PDF is passed to a ChatAgent with the instruction: "You are a legal analyst. Read this document, summarize the key clauses, and extract the names of all involved parties into a JSON object." This step uses the LLM's advanced reasoning for a complex, unstructured task.
  3. Workflow Node 3 (Executor): The structured JSON output from the agent is then taken and saved into a database. This is another deterministic, reliable step.

Note: We see many projects where people simply use an agent for exactly the process described above. Why not slap a PDF-reading tool and a database-writing tool onto an agent and call it a day? Because LLMs are nondeterministic, and across hundreds or thousands of executions that matters. The more decisions you leave to the LLM, the higher the risk of failure. And it will fail. By using a Workflow to orchestrate the overall process, you ensure that the critical steps (fetching the PDF, saving to the database) always execute reliably, while still using the power of the ChatAgent for the complex task of document understanding.

This hybrid approach gives you the best of both worlds: the robust, auditable orchestration of a Workflow combined with the flexible reasoning of an Agent for specific, well-contained tasks.

Here is a simple rule of thumb for making the choice:

| Use a ChatAgent when... | Use a Workflow when... |
| --- | --- |
| The primary interface is a conversation with a user. | You are automating a back-end, multi-step business process. |
| The sequence of steps is unpredictable and LLM-driven. | The sequence of steps is known and should be predictable. |
| The task is relatively short-lived (seconds to minutes). | The process could be long-running (hours or days). |
| You need maximum flexibility to handle diverse inputs. | You need maximum reliability, auditability, and control. |

Note: As always, treat such tables as guidelines, not hard rules.

The Core Concepts of Workflows: Nodes, Edges, and Executors

To build a workflow, you construct an executable directed acyclic graph (DAG). The Microsoft Agent Framework provides a clear and robust set of components for defining the nodes, edges, and data flow of this graph. Understanding these components is essential to designing modular and maintainable automated processes, so let's explore each in turn.

Executors: The Nodes of the Graph

An Executor is the fundamental unit of computation in a workflow. It represents a single, self-contained node in the execution graph. Each executor should be designed with a single responsibility: to receive an input, perform a specific operation, and produce an output. This design promotes modularity, testability, and clear separation of concerns.

The framework supports two implementation patterns for executors:

  1. Class-based Executors: For components with complex logic, internal state, or significant configuration, you can define a class that inherits from agent_framework.Executor. This object-oriented approach is ideal for encapsulating non-trivial business logic.
  2. Function-based Executors: For stateless, single-purpose transformations, you can define an async function and apply the @executor decorator. This is a lightweight pattern for simple data mapping, filtering, or routing nodes.

Handlers (@handler): The Execution Entry Point

Within a class-based Executor, the @handler decorator designates the specific async method that the workflow engine will invoke. This method serves as the entry point for the node's logic.

For function-based executors, the function itself is implicitly the handler, making the @executor decorator sufficient. The use of @handler is a convention to explicitly declare the execution logic within a class structure.
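To make the decorator mechanics less magical, here is a framework-free sketch of how an engine can discover a `@handler`-marked method via introspection. This is our own toy illustration of the general pattern, not the framework's actual implementation:

```python
import inspect


def handler(func):
    """Toy decorator: tag a method as the executor's entry point."""
    func._is_handler = True
    return func


class ToyExecutor:
    @handler
    def run(self, message):
        return f"processed: {message}"


def find_handler(executor):
    # The engine scans the instance for the method carrying the tag.
    for _, method in inspect.getmembers(executor, predicate=inspect.ismethod):
        if getattr(method, "_is_handler", False):
            return method
    raise LookupError("no @handler-decorated method found")


node = ToyExecutor()
entry = find_handler(node)
print(entry("hello"))  # -> processed: hello
```

The decorator itself does no work at call time; it simply leaves a marker the engine can find later. This is why a plain function with `@executor` needs no extra annotation - the function is unambiguously the entry point.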

WorkflowBuilder and Edges: Defining the Graph Topology

A collection of executors is not a workflow until its structure is defined. The WorkflowBuilder is the fluent API used to define the graph's topology - the directed edges that dictate the flow of control and data between executors.

The API provides clear, declarative methods for constructing the graph:

  • set_start_executor(executor): Specifies the entry point node for the workflow.
  • add_edge(source_executor, destination_executor): Establishes a directed edge. When the source_executor emits a message, the workflow engine routes it as input to the destination_executor.
  • add_edge(source_executor, destination_executor, condition=...): Passing a condition function to add_edge creates branches in the graph, routing data based on the content of the message itself. This is fundamental for implementing business rules and conditional logic.

This declarative approach separates the orchestration logic (the graph structure) from the business logic (the executor implementations), which is a core tenet of building maintainable systems.

WorkflowContext: The Interface to the Workflow Engine

Executors do not interact with each other directly. Instead, they are fully decoupled and communicate through the workflow engine via the WorkflowContext object (ctx), which is passed as an argument to every handler.

This context object provides two essential methods for controlling data flow:

  1. ctx.send_message(data): This method is used to emit output from the current executor. The engine intercepts this call and routes the data payload to all downstream nodes connected by outgoing edges. This is the standard mechanism for passing data between nodes.
  2. ctx.yield_output(data): This method is used by terminal nodes to publish a final result for the entire workflow. Any data passed to yield_output is collected and returned to the external client that initiated the workflow run. A workflow can have multiple terminal nodes and can yield multiple outputs.
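The difference between these two methods is easiest to see with a toy model of the engine itself. The sketch below is a deliberately simplified, framework-free imitation of the routing behavior just described (`ToyWorkflow` and `ToyContext` are our own names; the real engine is asynchronous and far more capable):

```python
class ToyContext:
    """Collects what an executor emits so the engine can route it."""
    def __init__(self):
        self.messages = []   # data for downstream nodes (send_message)
        self.outputs = []    # final results for the client (yield_output)

    def send_message(self, data):
        self.messages.append(data)

    def yield_output(self, data):
        self.outputs.append(data)


class ToyWorkflow:
    def __init__(self, start):
        self.start = start
        self.edges = {}  # source -> list of (condition, destination)

    def add_edge(self, source, destination, condition=None):
        self.edges.setdefault(source, []).append((condition, destination))
        return self

    def run(self, data):
        outputs, frontier = [], [(self.start, data)]
        while frontier:
            node, payload = frontier.pop(0)
            ctx = ToyContext()
            node(payload, ctx)
            outputs.extend(ctx.outputs)
            # Route each emitted message along every matching edge.
            for msg in ctx.messages:
                for condition, dest in self.edges.get(node, []):
                    if condition is None or condition(msg):
                        frontier.append((dest, msg))
        return outputs


# A two-node pipeline: double the number, then publish it.
def double(x, ctx): ctx.send_message(x * 2)
def publish(x, ctx): ctx.yield_output(f"result: {x}")

wf = ToyWorkflow(start=double)
wf.add_edge(double, publish)
print(wf.run(21))  # -> ['result: 42']
```

Note that `double` never calls `publish` directly: it only hands data to the context, and the engine does the routing. That decoupling is what allows the real framework to checkpoint, branch, and audit the flow between nodes.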

Hands-On Part 1: Building a Multi-Step IT Request Workflow

Theory is essential, but the best way to understand the use of workflows is to build one. We'll now create a practical, multi-step workflow that automates the initial processing, enrichment, and logging of an IT support request.

The Goal: Our workflow will be a four-step pipeline:

  1. Categorize: It will first use an LLM to analyze a raw user request and extract structured details like the problem area and priority.
  2. Enrich: Next, it will take the extracted category and "enrich" the ticket by looking up the appropriate IT support team from a predefined knowledge base.
  3. Format: It will then combine all this information into a clean JSON object, ready for an API.
  4. Create Ticket: Finally, it will simulate a call to an external ticketing system API and yield a final confirmation message.

We will first build and run this entire workflow from the command line to see the end-to-end process. Then, we'll see how the framework's built-in Dev Web UI can provide a visual interface for debugging and interaction.

Prerequisites

Before we begin, ensure your environment is set up with all necessary components, including those for the web UI.

  1. Install the necessary packages:

```
pip install agent-framework --pre
```
  2. Configure Azure OpenAI Environment Variables: The workflow will use the AzureOpenAIResponsesClient. Ensure the following are set:

    • AZURE_OPENAI_ENDPOINT
    • AZURE_OPENAI_CHAT_DEPLOYMENT_NAME
    • AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME
    • AZURE_OPENAI_API_KEY

Step 1: Define the Data Models and Executors

Our workflow will pass strongly-typed data between nodes using Pydantic models. We will define four distinct executors for each step of our pipeline.

Create a file named executors.py.

```python
import asyncio
import time
from pydantic import BaseModel, Field
from typing import cast
from typing_extensions import Never

from agent_framework import Executor, WorkflowContext, executor, handler
from agent_framework.azure import AzureOpenAIResponsesClient

import os

os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-name>.openai.azure.com"
os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"] = "gpt-5"
os.environ["AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME"] = "gpt-5"
os.environ["AZURE_OPENAI_API_KEY"] = "<your-key>"


# --- Data Models ---
class CategorizedTicket(BaseModel):
    category: str = Field(
        description="The general category of the IT issue, e.g., 'Hardware', 'Software', 'Network', 'Access Request'."
    )
    priority: str = Field(
        description="The estimated priority, e.g., 'Low', 'Medium', 'High'."
    )
    summary: str = Field(description="A one-sentence summary of the user's issue.")


class EnrichedTicket(CategorizedTicket):
    assigned_team: str = Field(description="The IT team responsible for this category.")


# --- Executors ---


# Executor 1: Categorize the raw request using an LLM.
class CategorizeRequestExecutor(Executor):
    def __init__(self, id: str):
        super().__init__(id=id)
        self.responses_client = AzureOpenAIResponsesClient()

    @handler
    async def categorize(
        self, user_request: str, ctx: WorkflowContext[CategorizedTicket]
    ):
        """Takes raw text, extracts details, and sends a CategorizedTicket."""
        print("--- Node Log (Categorize): Analyzing request... ---")
        prompt = f'Analyze the IT support request and extract details.\nRequest: "{user_request}"'
        response = await self.responses_client.get_response(
            prompt, response_format=CategorizedTicket
        )
        if response.value:
            categorized_ticket = cast(CategorizedTicket, response.value)
            await ctx.send_message(categorized_ticket)
        else:
            raise ValueError("No structured response received from LLM")


# Executor 2: Enrich the ticket with internal business logic.
@executor(id="enrich_ticket_executor")
async def enrich_ticket(
    ticket: CategorizedTicket, ctx: WorkflowContext[EnrichedTicket]
):
    """Receives a ticket, assigns a team, and forwards an EnrichedTicket."""
    print(
        f"--- Node Log (Enrich): Assigning team for category '{ticket.category}'... ---"
    )
    team_assignments = {
        "Hardware": "Desktop Support L2",
        "Software": "Application Support",
        "Network": "Network Operations",
        "Access Request": "Identity & Access Management",
        "Default": "General Helpdesk L1",
    }
    assigned_team = team_assignments.get(ticket.category, team_assignments["Default"])
    enriched_ticket = EnrichedTicket(**ticket.model_dump(), assigned_team=assigned_team)
    await ctx.send_message(enriched_ticket)


# Executor 3: Format the enriched data into a JSON string for an API.
@executor(id="format_ticket_executor")
async def format_ticket_as_json(ticket: EnrichedTicket, ctx: WorkflowContext[str]):
    """
    Receives an EnrichedTicket and sends a JSON string to the next node.
    Note: The context type is WorkflowContext[str] because it passes data on;
    it is no longer a terminal node.
    """
    print("--- Node Log (Format): Formatting ticket into JSON... ---")
    json_output = ticket.model_dump_json(indent=2)
    await ctx.send_message(json_output)


# Executor 4: Simulate creating the ticket in an external system.
@executor(id="create_ticket_executor")
async def create_ticket_in_system(ticket_json: str, ctx: WorkflowContext[Never, str]):
    """
    Receives the ticket JSON, simulates an API call, and yields the final result.
    This is the terminal node of our workflow.
    """
    print("--- Node Log (Create Ticket): Calling ticketing system API... ---")
    # Simulate API call latency
    await asyncio.sleep(1)
    ticket_id = f"IT-{int(time.time())}"
    print(f"--- Node Log (Create Ticket): Successfully created ticket {ticket_id} ---")

    # Yield the final output for the entire workflow.
    await ctx.yield_output(
        f"Successfully created ticket {ticket_id} with details:\n{ticket_json}"
    )
```

Step 2: Build and Run the Workflow from the Console

With our four executors defined, we can now wire them together and run the workflow directly to test the end-to-end logic.

Create a main.py file with the following content.

```python
# main.py
import asyncio

from agent_framework import WorkflowBuilder
from executors import (
    CategorizeRequestExecutor,
    create_ticket_in_system,
    enrich_ticket,
    format_ticket_as_json,
)


async def run_workflow():
    # 1. Instantiate the executors.
    categorize_node = CategorizeRequestExecutor(id="categorize_request_node")
    enrich_node = enrich_ticket
    format_node = format_ticket_as_json
    create_ticket_node = create_ticket_in_system

    # 2. Build the four-step workflow graph.
    workflow = (
        WorkflowBuilder()
        .add_edge(categorize_node, enrich_node)
        .add_edge(enrich_node, format_node)
        .add_edge(format_node, create_ticket_node)
        .set_start_executor(categorize_node)
        .build()
    )

    # 3. Define the initial input and run the workflow.
    user_request = "My VPN client keeps disconnecting every 5 minutes, it's impossible to work."
    print(f"Starting workflow with input: '{user_request}'\n")
    events = await workflow.run(user_request)

    # 4. Retrieve and print the final output.
    final_outputs = events.get_outputs()
    print("\nWorkflow completed. Final output:")
    for output in final_outputs:
        print(output)

    print(f"\nFinal state: {events.get_final_state()}")


if __name__ == "__main__":
    asyncio.run(run_workflow())
```

Now, run this from your terminal: python main.py. You will see the log output from each of the four nodes in sequence, confirming the workflow executed correctly.

Expected Console Output:

```
Starting workflow with input: 'My VPN client keeps disconnecting every 5 minutes, it's impossible to work.'

--- Node Log (Categorize): Analyzing request... ---
--- Node Log (Enrich): Assigning team for category 'Network'... ---
--- Node Log (Format): Formatting ticket into JSON... ---
--- Node Log (Create Ticket): Calling ticketing system API... ---
--- Node Log (Create Ticket): Successfully created ticket IT-1732598400 ---

Workflow completed. Final output:
Successfully created ticket IT-1732598400 with details:
{
  "category": "Network",
  "priority": "High",
  "summary": "User's VPN client frequently disconnects, disrupting their work.",
  "assigned_team": "Network Operations"
}

Final state: WorkflowRunState.IDLE
```

Step 3: Visualizing the Workflow with the DevUI

While running from the console is effective for validation, the framework provides a very nice experience for development and debugging: the Dev Web UI. It allows you to visualize the execution graph, inspect the data flowing between nodes, and interactively run the workflow.

Let's adapt our main.py to launch this UI.

1. Create the Workflow Object: First, we'll define a reusable workflow object outside of our main execution block.

2. Modify main.py to Serve the UI: Replace the run_workflow function and the main execution block with the code to serve the UI.

```python
# main.py
from agent_framework import WorkflowBuilder
from agent_framework.devui import serve  # <-- Import the serve function
from executors import (
    CategorizeRequestExecutor,
    create_ticket_in_system,
    enrich_ticket,
    format_ticket_as_json,
)

# 1. Instantiate executors and build the workflow object.
# This part is the same as before, but defined at the top level.
categorize_node = CategorizeRequestExecutor(id="categorize_request_node")
enrich_node = enrich_ticket
format_node = format_ticket_as_json
create_ticket_node = create_ticket_in_system

workflow = (
    WorkflowBuilder(
        name="IT Ticket Triage Workflow",
        description="A four-step workflow to categorize, enrich, format, and create an IT support ticket.",
    )
    .add_edge(categorize_node, enrich_node)
    .add_edge(enrich_node, format_node)
    .add_edge(format_node, create_ticket_node)
    .set_start_executor(categorize_node)
    .build()
)


# 2. Define a main function to launch the DevUI.
def main():
    """Launch the IT Ticket Triage workflow in the Dev Web UI."""
    print("Starting IT Ticket Triage Workflow...")
    print("Navigate to http://localhost:8091 in your browser.")

    # The `serve` function starts a web server and hosts the UI for the provided entities.
    serve(entities=[workflow], port=8091, auto_open=True)


if __name__ == "__main__":
    main()
```

Now, run this updated script: python main.py. A browser window will open to http://localhost:8091.

First, you'll see your workflow in an interactive graph view.

Workflow Graph View

By clicking on the "Configure and Run" button, you can input a sample user request and execute the workflow.

Run Workflow in DevUI

Upon execution, you can observe the live data flowing through each node, inspect the inputs and outputs, and verify that each step behaves as expected.

Workflow Execution in DevUI

Hands-On Part 2: Adding Conditional Logic for High-Priority Alerts

Linear workflows are a good starting point, but real business processes often require branching. A critical support request should not follow the same path as a routine one. The Microsoft Agent Framework handles this with conditional edges, allowing the workflow to route data based on its content.

The Goal: We will enhance our IT Triage Workflow to handle high-priority tickets differently.

  • If a ticket is classified as "High" priority, the workflow will branch.
  • Instead of just creating a ticket, it will also trigger a separate "Send Alert" notification (which we'll simulate).
  • Low and Medium priority tickets will follow the standard ticket creation path.

This introduces a decision point into our graph, making our automation smarter and more responsive to the urgency of the situation.

Step 1: Update the Executors and Add a Condition

We need a new executor for sending alerts and a function to define the routing logic. We'll add these to our existing executors.py file.

1. Add the New SendAlertExecutor

This will be a new terminal node for our high-priority branch.

2. Create the Condition Function

This is a simple boolean function that inspects the data flowing through an edge and decides if that path should be taken.

Update your executors.py file with the following additions:

```python
# executors.py
# (Keep all existing code from the previous chapter)
...

# --- New Executor for High-Priority Path ---


# Executor 5: A terminal node for the high-priority branch to send an alert.
@executor(id="send_alert_executor")
async def send_alert(ticket: EnrichedTicket, ctx: WorkflowContext[Never, str]):
    """
    Simulates sending a high-priority alert to a monitoring channel (e.g., Slack, PagerDuty).
    This node also yields a final output for its branch.
    """
    print(f"--- Node Log (ALERT): Sending HIGH PRIORITY alert for team '{ticket.assigned_team}'... ---")
    alert_message = f"HIGH PRIORITY ALERT: Ticket with summary '{ticket.summary}' assigned to {ticket.assigned_team}."

    # In a real app, this would call a Slack or PagerDuty API.
    await asyncio.sleep(0.5)  # Simulate API call

    # Yield a specific output for the alert path.
    await ctx.yield_output(alert_message)


# --- New Condition Functions for Routing ---


def is_high_priority(ticket: EnrichedTicket) -> bool:
    """Condition function that returns True if the ticket priority is 'High'."""
    print(f"--- Condition Check: Is priority '{ticket.priority}' high? {ticket.priority.lower() == 'high'} ---")
    return ticket.priority.lower() == "high"


def is_normal_priority(ticket: EnrichedTicket) -> bool:
    """Condition function that returns True if the ticket priority is not 'High'."""
    print(f"--- Condition Check: Is priority '{ticket.priority}' normal? {ticket.priority.lower() != 'high'} ---")
    return ticket.priority.lower() != "high"
```

Step 2: Build the Branched Workflow

Now we update main.py to construct the new, non-linear graph. We will use add_edge with the condition parameter to create the two branches after the enrich_node.

  • The enrich_node now becomes our decision point.
  • If is_high_priority returns True, the data flows to send_alert_executor.
  • If is_normal_priority returns True, the data flows to the original format_ticket_executor path.

Modify your main.py to reflect this new structure.

```python
# main.py
from agent_framework import WorkflowBuilder
from agent_framework.devui import serve
from executors import (
    CategorizeRequestExecutor,
    create_ticket_in_system,
    enrich_ticket,
    format_ticket_as_json,
    send_alert,
    is_high_priority,
    is_normal_priority,
)

# 1. Instantiate all executors, including the new one.
categorize_node = CategorizeRequestExecutor(id="categorize_request_node")
enrich_node = enrich_ticket
format_node = format_ticket_as_json
create_ticket_node = create_ticket_in_system
alert_node = send_alert  # New alert node

# 2. Build the workflow with conditional branches.
workflow = (
    WorkflowBuilder(
        name="IT Ticket Triage Workflow with Priority Routing",
        description="A workflow that routes high-priority tickets to a special alert path.",
    )
    .add_edge(categorize_node, enrich_node)
    # --- Branching Logic from the enrich_node ---
    # Path 1: Normal priority tickets go through the standard creation process.
    .add_edge(enrich_node, format_node, condition=is_normal_priority)
    .add_edge(format_node, create_ticket_node)
    # Path 2: High priority tickets go directly to the alert node.
    .add_edge(enrich_node, alert_node, condition=is_high_priority)
    .set_start_executor(categorize_node)
    .build()
)


# 3. Serve the workflow using the DevUI (this part remains the same).
def main():
    """Launch the branching IT Ticket Triage workflow in the Dev Web UI."""
    print("Starting IT Ticket Triage Workflow with Priority Routing...")
    print("Navigate to http://localhost:8091 in your browser.")
    serve(entities=[workflow], port=8091, auto_open=True)


if __name__ == "__main__":
    main()
```

Step 3: Testing Both Branches in the DevUI

Run the updated main.py script. The DevUI will now display your new, branched graph.

Branched Workflow Graph View

Test Case 1: Normal Priority

  • Input: I need to reset my password for the HR portal, I forgot the old one.
  • Action: Click "Run".
  • Expected Behavior: The LLM should classify this as "Low" or "Medium" priority. You will see the workflow execute the right-hand branch in the DevUI graph: Enrich -> Format -> Create Ticket. The final output will be the standard ticket creation confirmation.

Test Case 2: High Priority

  • Input: The entire payment processing service is down! We cannot process any customer credit cards!
  • Action: Click "Run".
  • Expected Behavior: The LLM will classify this as "High" priority. You will see the workflow execute the left-hand branch: Enrich -> Send Alert. The final output in the UI will be the high-priority alert message.

By adding conditional edges, we have significantly increased the intelligence of our automation. The workflow is no longer a simple pipeline but a dynamic process that can adapt its behavior based on the data it is processing. This is a fundamental pattern for building automations that can handle the complexity and variability of real-world scenarios.

Wrapping Up: From Pipelines to Processes

In this guide, we have moved beyond the conversational paradigm of a ChatAgent to the structured orchestration capabilities of Workflows. We began by constructing a simple, linear data processing pipeline, demonstrating how to chain multiple executors to transform unstructured user input into enriched, structured data ready for a downstream system.

We then enhanced this design by introducing conditional logic. By adding branching based on the data's content - in our case, ticket priority - we transformed a simple pipeline into an intelligent business process capable of adapting its behavior to different scenarios. This ability to define explicit, auditable, and conditional execution paths is a cornerstone of building reliable enterprise automation.

However, the automations we've built so far are still entirely self-contained and execute from start to finish in a single run. The most critical enterprise processes often don't work this way. They need to persist state across system restarts, pause for extended periods to wait for external events, and, most importantly, incorporate human judgment for critical decisions.

In the next part of this series, we will tackle these advanced, production-critical requirements directly. We will explore:

  • Durability and State Management: How to configure workflows with checkpointing to make them durable. This allows a workflow to be paused, its state persisted to a database, and then resumed hours or even days later, ensuring resilience against system interruptions.
  • Human-in-the-Loop Integration: We will implement a proper approval step, where the workflow pauses and waits for an external signal - simulating a manager's decision - before proceeding. This is the key to building trustworthy AI systems that combine the speed of automation with the oversight of human governance.

See you in the next part!

Official Resources

  • Official Documentation on Workflows: The definitive source for deep dives into workflow concepts, advanced patterns, and API references.
  • GitHub Repository: Explore the source code, find more examples (including the ones used in this article), and contribute to the project.
  • Introductory Blog Post: Read the original announcement from Microsoft for more context on the vision and strategy behind the framework.

Further Reading

  • Building Enterprise-Grade AI Agents with Microsoft's Agent Framework: The first article in this series, a perfect starting point for understanding the ChatAgent pattern and the framework's core enterprise features.
  • AI Agents From Scratch: A foundational guide to understanding the core concepts of how AI agents work, from planning to tool execution, which provides context for the agentic components you can embed within workflows.
  • Langfuse: The Open Source Observability Platform: Observability is critical for production systems. Dive deeper into the tools and techniques for monitoring and debugging complex LLM applications and workflows.
  • Building a RAG Pipeline with Azure AI Search: Discover how to power your agents and workflows with internal knowledge by building a retrieval-augmented generation system on Azure, a perfect complement to the tools used in this guide.
  • AI Agents with n8n: Explore a different, low-code approach to building agentic workflows, providing a valuable comparison of methodologies for process automation.