AI Agents

Building Durable AI Agents

AI agents are built on the primitive of LLM and tool-call loops, often with additional processes for data fetching, resource provisioning, or reacting to external events. Workflow DevKit allows you to designate these loops as stateful, resumable workflows, with LLM calls, tool calls, and other async operations as individual, retryable, observable steps.

This guide walks through converting a basic AI SDK agent into a durable agent using Workflow DevKit.

Why Durable Agents?

Building production-ready AI agents typically requires solving several challenges:

  • Durability: Splitting calls into async jobs, managing associated workers, queuing, and data persistence
  • Reliability: Error handling, retries, and recovery from partial failures
  • Observability: Storing messages and emitting traces, often via external observability services
  • Resumability: Repeatedly persisting and loading state from a database, storing streams in a separate service, and managing stream reconnects
  • Long-running operations and human approval loops: Wiring up queues, async workers, and connecting state to your existing API and interface

Workflow DevKit provides these capabilities out of the box. Your agent becomes a workflow, your tools become steps, and the framework handles the interplay with your existing infrastructure.

Getting Started

Start with a Next.js application using the AI SDK's Agent class:

app/api/chat/route.ts
import { createUIMessageStreamResponse, Agent, stepCountIs } from 'ai';
import { tools } from '@/ai/tools';

export async function POST(req: Request) {
  const { messages, modelId } = await req.json();

  const agent = new Agent({
    model: modelId,
    system: 'You are a helpful assistant.',
    tools: tools(),
    stopWhen: stepCountIs(20),
  });

  const stream = agent.stream({ messages });

  return createUIMessageStreamResponse({
    stream: stream.toUIMessageStream(),
  });
}

Step 1: Install Dependencies

Add the Workflow DevKit packages to your project:

npm install workflow @workflow/ai

Step 2: Create a Workflow Function

Move the agent logic into a separate workflow function:

app/api/chat/workflow.ts
import { DurableAgent } from '@workflow/ai/agent'; 
import { getWritable } from 'workflow'; 
import { stepCountIs } from 'ai';
import { tools } from '@/ai/tools';
import type { ModelMessage, UIMessageChunk } from 'ai';

export async function chatWorkflow({
  messages,
  modelId,
}: {
  messages: ModelMessage[];
  modelId: string;
}) {
  'use workflow'; 

  const writable = getWritable<UIMessageChunk>(); 

  const agent = new DurableAgent({ 
    model: modelId,
    system: 'You are a helpful assistant.',
    tools: tools(),
  });

  await agent.stream({ 
    messages,
    writable,
    stopWhen: stepCountIs(20),
  });
}

Key changes:

  • Replace Agent with DurableAgent from @workflow/ai/agent
  • Add the "use workflow" directive to mark this as a workflow function
  • Use getWritable() to get a stream for agent output
  • Pass the writable to agent.stream() instead of returning a stream directly

Step 3: Update the API Route

Replace the agent call with start() to run the workflow:

app/api/chat/route.ts
import { createUIMessageStreamResponse, convertToModelMessages } from 'ai';
import { start } from 'workflow/api'; 
import { chatWorkflow } from './workflow'; 

export async function POST(req: Request) {
  const { messages, modelId } = await req.json();
  const modelMessages = convertToModelMessages(messages);

  const run = await start(chatWorkflow, [{ messages: modelMessages, modelId }]); 

  return createUIMessageStreamResponse({
    stream: run.readable, 
  });
}

Step 4: Convert Tools to Steps

Mark tool execution functions with "use step" to make them durable. This enables automatic retries and observability:

ai/tools/search-web.ts
import { tool } from 'ai';
import { z } from 'zod';

async function executeSearch({ query }: { query: string }) {
  'use step'; 

  const response = await fetch(
    `https://api.search.com?q=${encodeURIComponent(query)}`
  );
  if (!response.ok) {
    // Throwing lets the step's automatic retries handle transient failures
    throw new Error(`Search request failed: ${response.status}`);
  }
  return response.json();
}

export const searchWeb = tool({
  description: 'Search the web for information',
  inputSchema: z.object({ query: z.string() }),
  execute: executeSearch,
});

With "use step":

  • The tool execution runs in a separate step with full Node.js access
  • Failed tool calls are automatically retried (up to 3 times by default)
  • Each tool execution appears as a discrete step in observability tools
  • Results are persisted, so replays skip already-completed tools
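Conceptually, this retry-and-replay behavior can be modeled with a small sketch. This is illustrative only, not Workflow DevKit's actual implementation; `runStep`, `maxAttempts`, and the in-memory `completed` map (standing in for durable persistence) are assumptions:

```typescript
// Illustrative model: retry a failed step up to maxAttempts times,
// and skip steps whose results were already persisted on replay.
type StepFn<T> = () => Promise<T>;

const completed = new Map<string, unknown>(); // stands in for durable storage

async function runStep<T>(
  id: string,
  fn: StepFn<T>,
  maxAttempts = 3
): Promise<T> {
  // On replay, a completed step returns its persisted result without re-running.
  if (completed.has(id)) return completed.get(id) as T;

  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const result = await fn();
      completed.set(id, result); // persist before moving to the next step
      return result;
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

A step that fails once succeeds on its second attempt, and a later replay with the same step id returns the cached result without invoking the function again.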

Step 5: Stream Progress Updates from Tools

Tools can emit progress updates to the same stream the agent uses. This allows the UI to display tool status:

ai/tools/run-command.ts
import { tool } from 'ai';
import { getWritable } from 'workflow'; 
import { exec } from 'node:child_process';
import { promisify } from 'node:util';
import { z } from 'zod';
import type { UIMessageChunk } from 'ai';

const execAsync = promisify(exec);

async function executeRunCommand(
  { command }: { command: string },
  { toolCallId }: { toolCallId: string }
) {
  'use step';

  const writable = getWritable<UIMessageChunk>(); 
  const writer = writable.getWriter(); 

  // Emit a progress update before the command starts
  await writer.write({ 
    id: toolCallId, 
    type: 'data-run-command', 
    data: { command, status: 'executing' }, 
  }); 

  // Steps run with full Node.js access, so child_process is available here
  const { stdout: result } = await execAsync(command);

  await writer.write({ 
    id: toolCallId, 
    type: 'data-run-command', 
    data: { command, status: 'done', result }, 
  }); 

  writer.releaseLock(); 

  return result;
}

export const runCommand = tool({
  description: 'Run a shell command',
  inputSchema: z.object({ command: z.string() }),
  execute: executeRunCommand,
});

The UI can handle these data chunks to display real-time progress.
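For example, the client could reduce the streamed chunks into the latest status per tool call. This helper is a hypothetical sketch; the chunk shape matches the writes above, but `latestCommandStatus` itself is an assumption, not part of Workflow DevKit or the AI SDK:

```typescript
// Illustrative client-side helper: collapse streamed data chunks into the
// most recent status for each tool call id.
type RunCommandData = {
  command: string;
  status: 'executing' | 'done';
  result?: unknown;
};

type DataChunk = { id: string; type: string; data: RunCommandData };

function latestCommandStatus(
  chunks: DataChunk[]
): Map<string, RunCommandData> {
  const byToolCall = new Map<string, RunCommandData>();
  for (const chunk of chunks) {
    if (chunk.type === 'data-run-command') {
      byToolCall.set(chunk.id, chunk.data); // later chunks overwrite earlier ones
    }
  }
  return byToolCall;
}
```

A component could call this on each render to show "executing" spinners that flip to "done" as results arrive.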

Running the Workflow

Run your development server, then open the observability dashboard to see your workflow in action:

npx workflow web

This opens a local dashboard showing:

  • All workflow runs and their status
  • Individual step executions with timing
  • Errors and retry attempts
  • Stream data flowing through the workflow

Next Steps

Now that you have a basic durable agent, explore these additional capabilities: