
Building Enterprise-Grade AI Agents with Microsoft's Agent Framework - A Practical Guide


The excitement around AI agents is undeniable. We’ve all seen the impressive demos: autonomous systems that can plan, reason, and execute complex tasks. But moving from a compelling proof of concept to a reliable, production-ready application is a very different story. In an enterprise setting, you need more than good prompting and a well-designed agent loop. You need robust state management, ironclad security, reliable tool integration, and a clear, auditable trail of the agent's reasoning.

Many existing frameworks, while excellent for rapid prototyping, lack the scaffolding required for production systems. They often leave developers to solve the hard problems of observability, error handling, and long-running task management on their own (especially this last one). This creates a significant gap between a cool demo that works on a developer's machine and a resilient business process that can be trusted with critical operations.

This is precisely the gap that the new Microsoft Agent Framework is designed to fill. Positioned as a unification of the innovative multi-agent patterns from AutoGen and the enterprise-grade stability of Semantic Kernel, it aims to provide a single framework for the whole agent lifecycle - enterprise features included.

NOTE: One of the main advantages of the Microsoft Agent Framework is that it's one of the few agent frameworks that supports both Python and C# - in fact, it is basically the only major framework for C# and .NET developers.

The Core Architecture: An Operating System for Agents

To appreciate what makes the Microsoft Agent Framework special, it's helpful to think of it as an operating system for your AI agents. I'm aware that this sounds like click-bait, but let me explain: A standard OS provides the essential services that allow applications to run reliably: it manages memory, schedules tasks, and provides a consistent way for software to interact with hardware. Similarly, the Agent Framework provides a structured environment that manages an agent's memory (state), runs its programs (tools), and handles its communication with other systems (workflows).

And it sounds appealing, doesn't it? Having all the required components neatly packaged and managed by a robust framework, so developers can focus on building the agent's logic and capabilities. Let's find out if it lives up to the promise.

The Core Components: Building Blocks of an Agent

At its heart, the framework consists of four key components that work together to bring an agent to life.

  1. AI Agents: These are the core "workers" or the "brains" of the operation. Each agent uses an LLM to reason, plan, and act based on a given set of instructions. Microsoft has designed these agents as an evolution of the best concepts from its earlier projects, combining the multi-agent conversational patterns of AutoGen with the structured, tool-oriented approach of Semantic Kernel.

  2. Tools (AIFunctions): Tools are how agents interact with the outside world. Instead of relying on fragile text parsing, the framework allows you to define tools as strongly-typed Python or C# functions. By decorating a standard function, you expose it to the agent with its specific inputs and outputs. This makes tool creation safer, more predictable, and easier to debug, as the framework handles the complex task of translating the LLM's intent into a valid function call.

  3. Workflows: If agents are the workers, workflows are the nervous system that connects them. The framework uses a sophisticated graph-based system to define complex, multi-step processes. This is a major leap beyond simple, linear agent chains. With workflows, you can orchestrate multiple agents and tools, implement conditional logic ("if X, then do Y"), run tasks in parallel, and, crucially, pause for human-in-the-loop approvals.

  4. State Management: An agent without memory is just a stateless tool. The framework solves this with a robust object called the AgentThread, which encapsulates the entire history and context of a conversation or task. This provides a durable memory for the agent, which is essential for handling multi-turn conversations and long-running, complex jobs.
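To see why the strongly-typed tools from component 2 matter, here is a dependency-free sketch of the idea: parameter descriptions attached via typing.Annotated can be introspected into a machine-readable schema for the LLM. This only illustrates the principle - it uses plain strings instead of pydantic's Field, and the framework's real schema generation is richer:

```python
import inspect
from typing import Annotated, get_args, get_origin, get_type_hints

def check_system_status(
    service_name: Annotated[str, "The IT service to check, e.g. 'email' or 'vpn'."]
) -> str:
    """Checks the operational status of a known IT service."""
    return f"Status for {service_name}: Fully operational."

def tool_schema(fn):
    """Derive a JSON-schema-like tool description from a function's type hints.
    Illustrative only -- the real framework generates this for you."""
    hints = get_type_hints(fn, include_extras=True)
    params = {}
    for name, hint in hints.items():
        if name == "return":
            continue
        if get_origin(hint) is Annotated:
            base, *meta = get_args(hint)  # (str, "The IT service to check...")
            params[name] = {"type": base.__name__, "description": meta[0]}
        else:
            params[name] = {"type": hint.__name__}
    return {"name": fn.__name__, "description": inspect.getdoc(fn), "parameters": params}

schema = tool_schema(check_system_status)
print(schema["parameters"]["service_name"]["description"])
```

Because the schema is derived from real type hints rather than parsed from free text, the LLM's intent can be validated against it before the function is ever called.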
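The conditional routing that workflows (component 3) provide can be pictured as a graph executor. The toy version below - with invented node names and routing rules - is only a mental model; the framework's actual workflow API is different and far more capable (parallelism, checkpointing, human-in-the-loop pauses):

```python
# A toy graph-based workflow: nodes are callables, edges route conditionally.
# Conceptual sketch only -- not the Agent Framework's real workflow API.

def triage(state):
    state["severity"] = "high" if "outage" in state["request"] else "low"
    return state

def page_oncall(state):
    state["action"] = "paged on-call engineer"
    return state

def create_ticket(state):
    state["action"] = "created ticket"
    return state

nodes = {"triage": triage, "page_oncall": page_oncall, "create_ticket": create_ticket}

# Conditional edges: "if severity is high, then page, else ticket."
edges = {
    "triage": lambda s: "page_oncall" if s["severity"] == "high" else "create_ticket",
    "page_oncall": lambda s: None,   # terminal node
    "create_ticket": lambda s: None,
}

def run(start, state):
    node = start
    while node is not None:
        state = nodes[node](state)   # execute the current node
        node = edges[node](state)    # pick the next node based on state
    return state

result = run("triage", {"request": "vpn outage in EU region"})
print(result["action"])  # paged on-call engineer
```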
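And the AgentThread idea (component 4) boils down to a serializable record of the conversation. The Thread class below is a hypothetical stand-in, not the framework's API; it only shows why a serializable state object is what makes multi-turn memory and durable resumption possible:

```python
import json

# Conceptual stand-in for the framework's AgentThread: the durable record of a
# conversation. The real AgentThread is richer; this only illustrates the idea.
class Thread:
    def __init__(self, messages=None):
        self.messages = messages or []

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def to_json(self):
        return json.dumps({"messages": self.messages})

    @classmethod
    def from_json(cls, payload):
        return cls(json.loads(payload)["messages"])

thread = Thread()
thread.add("user", "My email is not working.")
thread.add("assistant", "Checking the email service status...")

saved = thread.to_json()            # persist between turns (or process restarts)
restored = Thread.from_json(saved)  # rehydrate with the full context
print(len(restored.messages))       # 2
```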

Built for Business: The Enterprise-Ready Features

These core components are the foundation for a set of features designed specifically for the rigors of enterprise deployment. This is where the framework tries to distinguish itself from other players.

  1. Observability: For any production system, being able to answer "Why did it do that?" is non-negotiable. The framework has built-in support for OpenTelemetry, the industry standard for tracing and logging. This allows you to trace every step an agent takes - from its initial thought process to the exact tool it called and the result it got back - making it possible to debug failures, audit decision-making, and optimize performance.

  2. Security & Governance: The framework is built with enterprise security in mind. It integrates with services like Azure AI Content Safety to filter harmful content and with Microsoft Entra ID for authentication and authorization. Furthermore, its first-class support for "human-in-the-loop" workflows is a nice governance feature we are happy to see. You can design agents that automatically pause and request human approval for sensitive actions, like authorizing a payment, deleting data, or sending a customer-facing message, ensuring that the final say always rests with a person. While AI is capable of amazing things, most of the time you want a human to be in control.

  3. Durability and Long-Running Tasks: Business processes don't always finish in seconds. They can take hours, or even days, and must survive system restarts or interruptions. The framework's workflows are designed to be durable. By using a technique called checkpointing, the state of a workflow can be saved to persistent storage and resumed later. This allows you to build agents that can manage a multi-day expense approval process or orchestrate a weekend-long data migration, reliably picking up right where they left off.

  4. Open Standards and Interoperability: To fit into a complex enterprise ecosystem, a platform must speak the same language as other systems. The framework embraces open standards, using OpenAPI to integrate with existing APIs and supporting the Model Context Protocol (MCP). MCP is an emerging standard that simplifies how AI models connect to external data sources and tools, ensuring that agents built on the framework can easily and securely access the information they need to be effective.

And last but not least: the Microsoft Agent Framework provides full .NET support, making it a first-class citizen in the Microsoft ecosystem.

Deep Dive: How Durability and Long-Running Tasks Actually Work with the Microsoft Agent Framework

The promise of "long-running durability" is one of the framework's most critical enterprise features, so it's worth understanding how it's implemented. The mechanism is best understood with a "save game" analogy. When an agent is running, its state (history, plans, etc.) is in active memory. When it needs to pause for a long-running external process, the framework takes a "snapshot" of that state, saves it to a durable location, and can then shut down the process. When the external event completes, the framework can "load the saved game," rehydrating the agent's state to continue exactly where it left off.

This process is broken down into three phases:

  • Pause and Checkpoint: When a workflow reaches a point where it must wait - for example, for a human approval or a multi-hour data job - the framework serializes the agent's entire AgentThread into a storable format like JSON. This process is explicitly referred to as "checkpointing" in the official documentation.

  • Durable Storage: This serialized state is then passed to a configurable persistence provider. For production, you would plug in a robust database like Azure Cosmos DB, where the state is safely stored outside the application's memory.

  • Resume and Rehydrate: When an external trigger is received (e.g., an API call from an approval button), a new process retrieves the saved state from the database. It deserializes the data, rehydrating the AgentThread back into a live object. The agent is now "awake" with its full memory and context, ready to proceed to the next step.

Let's consider a practical example: an expense approval bot. An employee submits a $500 expense. The agent checks company policy and determines it needs manager approval. Here, the long-running process begins. The agent's workflow sends an approval request to the manager and then hits a pause point. The framework checkpoints the AgentThread - containing the receipt data, the policy check result, and its current status of "awaiting_manager_approval" - and saves it to a database. Hours later, the manager clicks "Approve," triggering an API call. The API handler loads the saved state, rehydrates the agent, and feeds it the "approved" event. The agent wakes up, calls the payment processing tool, and confirms completion with the employee.
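The expense-approval flow above can be sketched in a few lines of plain Python, with a JSON file standing in for the durable store. This is a conceptual illustration, not the framework's checkpointing API; in production the framework serializes the AgentThread for you and you would plug in a database such as Azure Cosmos DB:

```python
import json
import os
import tempfile

def checkpoint(path, state):
    with open(path, "w") as f:
        json.dump(state, f)  # Phases 1+2: serialize and persist

def resume(path):
    with open(path) as f:
        return json.load(f)  # Phase 3: rehydrate

store = os.path.join(tempfile.mkdtemp(), "workflow_state.json")

# The agent pauses awaiting approval; its state is checkpointed and the
# process can safely shut down.
checkpoint(store, {"expense": 500,
                   "policy_check": "needs_manager",
                   "status": "awaiting_manager_approval"})

# Hours later, an approval event arrives; a fresh process rehydrates the
# state and feeds in the external event.
state = resume(store)
if state["status"] == "awaiting_manager_approval":
    state["status"] = "approved"
print(state["status"])  # approved
```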

This ability to pause, persist, and resume is what makes the Microsoft Agent Framework very intriguing in our eyes.

The Connection to Copilot 365: From Framework to Microsoft 365

The Microsoft Agent Framework isn't designed as a standalone tool. While it can be used on its own, it's actually a foundational component of the broader Microsoft AI and Copilot strategy.

It serves as Microsoft's "pro-code" engine, enabling developers to build sophisticated, enterprise-grade agents that can then be integrated, managed, and surfaced across the entire Microsoft ecosystem, bridging the gap between custom development and end-user applications.

Note: Microsoft considers Copilot Studio as their "low-code" platform and Azure AI Foundry as the underlying infrastructure tying everything together.

Here’s how that connection between Copilot 365 and the Agent Framework plays out:

  • A Pro-Code Foundation for Copilot Studio: The framework is designed to work hand-in-hand with Microsoft Copilot Studio, the company's low-code platform for building and customizing copilots. Developers can use the Agent Framework in Python or C# to build complex, stateful agents with extensive business logic and tool integrations. These agents can then be connected to and managed within Copilot Studio. This creates a clean division of labor where developers handle the complex back-end logic, while business users or IT admins use the low-code interface to configure, deploy, and monitor these agents without writing a single line of code.

Microsoft Agent Framework and Copilot Studio

Hands-On: Building an Enterprise IT Helpdesk Agent with Microsoft Agent Framework

Now that we've covered the architecture and enterprise features of the Microsoft Agent Framework, let's put it into practice and see how the framework works in real code. We'll build a simple but practical IT Helpdesk Agent that showcases state management, multi-turn conversation, and intelligent tool use.

The Goal: Our agent will triage IT support requests. It will first check for known system outages to avoid creating unnecessary tickets. If there are no outages, it will gather the necessary information from the user and create a formal support ticket.

NOTE: We're using Python in these examples, but all of them can easily be translated to C#/.NET as well. See the GitHub repository for the semantically equivalent C# code.

Prerequisites

Before we start, make sure you have the following set up:

  1. Install the necessary packages:

```shell
pip install agent-framework agent-framework-azure-ai pydantic
```
  2. Configure Azure OpenAI Environment Variables: The AzureOpenAIResponsesClient needs to know your endpoint and deployment name. Make sure you have the following set in your environment:

    • AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint
    • AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: The name of your Azure OpenAI chat model deployment
    • AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME: The name of your Azure OpenAI Responses deployment
    • AZURE_OPENAI_API_KEY: Your Azure OpenAI API key
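On Linux or macOS, you could set them like this. All values are placeholders - substitute your own resource details, and note that the deployment names shown here are only examples:

```shell
# Placeholder values -- substitute your own Azure OpenAI resource details.
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="gpt-4o"
export AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME="gpt-4o"
export AZURE_OPENAI_API_KEY="your-api-key"
```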

Step 1: Define the Tools (The Agent's "Hands")

First, we'll create the tools our agent can use. These are standard Python functions. We use typing.Annotated and pydantic.Field to provide rich descriptions for the function parameters, which is critical for helping the LLM understand how to use them correctly.

Create a file named tools.py:

```python
# tools.py
import datetime
from typing import Annotated
from pydantic import Field

def check_system_status(
    service_name: Annotated[str, Field(description="The IT service to check, e.g., 'email', 'vpn', 'sharepoint'.")]
) -> str:
    """Checks the operational status of a known IT service."""
    print(f"--- TOOL LOG: Checking status for '{service_name}' ---")
    known_outages = {"email": "Degraded performance"}
    status = known_outages.get(service_name.lower(), "Fully operational.")
    return f"Status for {service_name}: {status}"

def create_support_ticket(
    user_name: Annotated[str, Field(description="The full name of the user reporting the issue.")],
    issue_description: Annotated[str, Field(description="A detailed description of the IT issue.")],
) -> str:
    """Creates a support ticket in the helpdesk system."""
    # In a real application, this would call an API like Jira or ServiceNow
    ticket_id = f"IT-{int(datetime.datetime.now().timestamp())}"
    print(f"--- TOOL LOG: Creating ticket {ticket_id} for {user_name} with issue: '{issue_description}' ---")
    return f"Successfully created ticket {ticket_id}. A technician will be in touch shortly."
```

Step 2: Create and Run the Agent

Next, we define and interact with the agent. The key class is ChatAgent, which we configure with a client, instructions, and our list of tools. The ChatAgent instance itself maintains the conversation history, allowing for stateful, multi-turn interactions.

Create a file named main.py:

```python
# main.py
import asyncio
from agent_framework import ChatAgent
from agent_framework.azure import AzureOpenAIResponsesClient
from tools import check_system_status, create_support_ticket

# 1. Initialize the client.
# It reads the endpoint, deployment name, and API key from the environment variables set above.
chat_client = AzureOpenAIResponsesClient()

# 2. Create the agent instance, providing instructions and tools.
helpdesk_agent = ChatAgent(
    chat_client=chat_client,
    instructions="""You are an enterprise IT Helpdesk assistant.
    1. First, understand the user's problem and identify the specific IT service.
    2. Use the `check_system_status` tool to check for outages.
    3. If there is an outage, inform the user and tell them the team is aware. Do NOT create a ticket.
    4. If the system is normal, you MUST get the user's name before creating a ticket. Ask for it if you don't have it.
    5. Once you have the user's name and issue description, use the `create_support_ticket` tool.
    6. Confirm to the user that a ticket has been created and provide the ticket number.""",
    tools=[check_system_status, create_support_ticket],
)

async def run_conversation():
    print("Agent: Hello! I'm the IT Helpdesk Agent. How can I assist you today?\n")

    # First user message
    user_input_1 = "Hi, my email is not working. I can't send or receive messages."
    print(f"User: {user_input_1}")

    # Calling 'run' on the agent instance executes the turn.
    # The agent's internal state is updated with this interaction.
    agent_response_1 = await helpdesk_agent.run(user_input_1)
    print(f"Agent: {agent_response_1}\n")

    # Second user message (simulating a scenario with no outage)
    # NOTE: To test this path, modify tools.py to have no email outage.
    # The agent will ask for a name.
    # user_input_2 = "My name is Alice."
    # print(f"User: {user_input_2}")

    # # The agent remembers the context from the first message.
    # agent_response_2 = await helpdesk_agent.run(user_input_2)
    # print(f"Agent: {agent_response_2}\n")

if __name__ == "__main__":
    asyncio.run(run_conversation())
```

Step 3: Debugging with the Dev Web UI

The built-in web UI is very nice for seeing the agent's step-by-step reasoning. Let's adapt our code to launch it.

1. Install Web Components

```shell
pip install "agent-framework[web]" uvicorn
```

2. Modify main.py to Serve the Agent

Update main.py by replacing the run_conversation part with a call to the serve function.

```python
# main.py
from agent_framework import ChatAgent
from agent_framework.azure import AzureOpenAIResponsesClient
from agent_framework.devui import serve  # <-- Import serve
from tools import check_system_status, create_support_ticket

# --- Agent definition (this part remains the same) ---
chat_client = AzureOpenAIResponsesClient()
helpdesk_agent = ChatAgent(
    chat_client=chat_client,
    instructions="""You are an enterprise IT Helpdesk assistant...""",  # Instructions truncated for brevity
    tools=[check_system_status, create_support_ticket],
)
# --- End of agent definition ---

# Create the web application instance, passing the agent to it
serve(entities=[helpdesk_agent], port=8090, auto_open=True)

# The console-based run_conversation function is no longer needed
```

Upon running this script, a web UI will open in your browser at http://localhost:8090, allowing you to interact with the agent and see its internal thought process, tool calls, and responses in real time.

NOTE: This is actually really cool. The UI is well-made and provides a very nice overview of your agent's activity.

Agent Framework Dev UI

Step 4: Deploying Your Agent for Production

The DevUI is a fantastic tool for local development and debugging, but it is not designed for production use. To deploy your agent so it can be used by other applications, you need to expose it via a standard, scalable web API. We will use FastAPI, a modern and high-performance Python web framework, to create a production-ready endpoint.

1. Create the Production API Endpoint

We'll create a new file, api.py, to define our web server. This keeps our production serving logic separate from any local debugging scripts.

First, ensure you have FastAPI and the Uvicorn web server installed:

```shell
pip install fastapi uvicorn
```

Now, create api.py. This file will import the helpdesk_agent you've already configured and wrap it in an API endpoint.

```python
# api.py
from fastapi import FastAPI
from pydantic import BaseModel

# Import the agent definition components
from agent_framework import ChatAgent
from agent_framework.azure import AzureOpenAIResponsesClient
from tools import check_system_status, create_support_ticket

# --- Agent Definition ---
# This is the same agent we configured for the DevUI.
# Ensure your environment variables are set for production.
chat_client = AzureOpenAIResponsesClient()
helpdesk_agent = ChatAgent(
    chat_client=chat_client,
    instructions="""You are an enterprise IT Helpdesk assistant.
    1. First, understand the user's problem and identify the specific IT service.
    2. Use the `check_system_status` tool to check for outages.
    3. If there is an outage, inform the user and tell them the team is aware. Do NOT create a ticket.
    4. If the system is normal, you MUST get the user's name before creating a ticket. Ask for it if you don't have it.
    5. Once you have the user's name and issue description, use the `create_support_ticket` tool.
    6. Confirm to the user that a ticket has been created and provide the ticket number.""",
    tools=[check_system_status, create_support_ticket],
)
# --- End of Agent Definition ---

# Create a FastAPI app instance
app = FastAPI(
    title="IT Helpdesk Agent API",
    description="An API to interact with the Microsoft Agent Framework IT Helpdesk Agent.",
)

# Define the request model for our API endpoint
class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
async def chat_with_agent(request: ChatRequest):
    """
    Receives a user message and returns the agent's response.
    The ChatAgent instance maintains conversation state in memory.
    """
    response = await helpdesk_agent.run(request.message)
    return {"response": response}
```

Note on Statefulness: This simple API maintains conversation history for the lifetime of the server process. For a true production environment that can be scaled or restarted without losing context, you would need to integrate a persistence layer to save and load the agent's state, as discussed in the "Durability and Long-Running Tasks" section.
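To sketch what such a persistence layer could look like, here is a minimal per-session store in plain Python. The SessionStore class and handle_chat function are hypothetical; a real deployment would back the store with Redis or Cosmos DB and serialize the framework's AgentThread rather than a plain message list:

```python
import json

# Sketch of per-session state persistence for a scaled-out API.
# Hypothetical names -- not part of the Agent Framework.
class SessionStore:
    def __init__(self):
        self._db = {}  # stand-in for an external key-value store

    def load(self, session_id):
        raw = self._db.get(session_id)
        return json.loads(raw) if raw else []

    def save(self, session_id, messages):
        self._db[session_id] = json.dumps(messages)

store = SessionStore()

def handle_chat(session_id, message):
    history = store.load(session_id)          # rehydrate this session's context
    history.append({"role": "user", "content": message})
    reply = f"Received: {message}"            # the real agent call would go here
    history.append({"role": "assistant", "content": reply})
    store.save(session_id, history)           # persist before returning
    return reply

handle_chat("alice", "My email is down.")
handle_chat("alice", "My name is Alice.")
print(len(store.load("alice")))  # 4 -- both turns survived across requests
```

Because every request loads and saves through the store, any server replica (or a restarted one) can continue the conversation.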

2. Update Files for Containerization

Next, prepare the files needed to package the application into a Docker container.

  • requirements.txt

    # requirements.txt
    agent-framework
    agent-framework-azure-ai
    fastapi
    uvicorn
    pydantic
    
  • Dockerfile

    This file configures the container to run our FastAPI application with Uvicorn.

```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.11-slim

WORKDIR /app

# Copy and install requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code (api.py, tools.py)
COPY . .

# Expose the port the API will run on
EXPOSE 8000

# Run the Uvicorn server, pointing to the FastAPI app in api.py
CMD ["uvicorn", "api:app", "--host", "0.0.0.0", "--port", "8000"]
```

3. Build, Run, and Test the API

With the API and Dockerfile in place, you can now build and run your production-ready container.

  • Build the Docker image:

```shell
docker build -t helpdesk-agent-api .
```
  • Run the container:

    Remember to pass your Azure credentials as environment variables.

```shell
docker run -p 8000:8000 \
  -e AZURE_OPENAI_ENDPOINT="YOUR_ENDPOINT" \
  -e AZURE_OPENAI_API_KEY="YOUR_API_KEY" \
  -e AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="YOUR_CHAT_DEPLOYMENT" \
  -e AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME="YOUR_RESPONSES_DEPLOYMENT" \
  helpdesk-agent-api
```
  • Test with curl:

    Once the container is running, open a new terminal and test the /chat endpoint using curl.

```shell
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hi, my email is not working."}'
```

You will receive a JSON response from your agent, confirming that your production API is working correctly. This containerized agent can now be deployed to any cloud service like Azure App Service or Azure Container Apps.

Wrapping Up: The Future is Agentic and Integrated

The Microsoft Agent Framework is a powerful, production-focused toolkit designed for the next generation of enterprise AI applications. Its true strength lies in its unification of research-led innovation from projects like AutoGen with the stability, security, and structure required for real-world business processes.

By providing a robust "operating system" for agents, it solves the hard problems of state management, observability, and long-running tasks, allowing developers to focus on creating value.

Furthermore, with first-class support for both Python and .NET, we expect it to become the premier enterprise-grade agent framework for the .NET ecosystem, while simultaneously providing a robust and familiar environment for the vast community of Python developers.

The journey into building enterprise-grade agents has never been more accessible. The tools are here to move beyond simple prototypes and start creating reliable, observable, and scalable AI solutions that can be deeply integrated into your business.

We encourage you to dive deeper, experiment with the code, and start building your first agent today.

Official Resources

  • Official Documentation: The best place to start for a deep dive into the framework's architecture and capabilities.
  • GitHub Repository: Explore the source code, find more examples, and contribute to the project.
  • Introductory Blog Post: Read the original announcement from Microsoft for more context on the vision and strategy behind the framework.
