# Orchestrating multiple agents

Orchestration refers to the flow of agents in your app. Which agents run, in what order, and how do they decide what happens next? There are two main ways to orchestrate agents:

1. Orchestrating via LLM: using the intelligence of an LLM to plan, reason, and decide which steps to take.
2. Orchestrating via code: determining the flow of agents in your own code.

You can mix and match these patterns. Each has its own tradeoffs, described below.

## Orchestrating via LLM

An agent is an LLM equipped with instructions, tools and handoffs. This means that given an open-ended task, the LLM can autonomously plan how it will tackle the task, using tools to take actions and acquire data, and using handoffs to delegate tasks to sub-agents. For example, a research agent could be equipped with tools like:

- Web search to find information online
- File search and retrieval to search through proprietary data and connections
- Computer use to take actions on a computer
- Code execution to do data analysis
- Handoffs to specialized agents that excel at planning, report writing, and more

This pattern is great when the task is open-ended and you want to rely on the intelligence of an LLM. The most important tactics here are:

1. Invest in good prompts. Make it clear what tools are available, how to use them, and what parameters the agent must operate within.
2. Monitor your app and iterate on it. See where things go wrong, and iterate on your prompts.
3. Allow the agent to introspect and improve. For example, run it in a loop and let it critique itself, or provide error messages and let it improve.
4. Have specialized agents that excel at one task, rather than a general-purpose agent that is expected to be good at everything.
5. Invest in evals, so you can measure whether your agents are actually getting better at their tasks.

## Orchestrating via code

While orchestrating via LLM is powerful, orchestrating via code makes tasks more deterministic and predictable in terms of speed, cost and performance. Common patterns here are:

- Using structured outputs to generate well-formed data that you can inspect with your code. For example, you might ask an agent to classify the task into a few categories, and then pick the next agent based on the category.
- Chaining multiple agents by transforming the output of one into the input of the next. You can decompose a task like writing a blog post into a series of steps: do research, write an outline, write the post, critique it, and then improve it.
- Running the agent that performs the task in a while loop alongside an agent that evaluates and provides feedback, until the evaluator says the output passes certain criteria.
- Running multiple agents in parallel, e.g. via Python primitives like asyncio.gather. This is useful for speed when you have multiple tasks that don't depend on each other.

We have a number of examples in examples/agent_patterns.
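One common code-orchestration pattern is routing: ask one agent to classify the task into a category, then pick the next agent in code based on that category. A minimal sketch in plain Python, using stand-in functions rather than real SDK agents (all names here are hypothetical):

```python
# Code-driven routing: classify the task, then dispatch to a specialized
# handler. In a real app the classifier and handlers would be agent runs
# returning structured outputs; plain functions keep the flow visible.

def classify(task: str) -> str:
    # Stand-in for an agent that returns a structured category.
    return "math" if any(ch.isdigit() for ch in task) else "writing"

def math_agent(task: str) -> str:
    return f"[math agent] solving: {task}"

def writing_agent(task: str) -> str:
    return f"[writing agent] drafting: {task}"

ROUTES = {"math": math_agent, "writing": writing_agent}

def orchestrate(task: str) -> str:
    category = classify(task)
    return ROUTES[category](task)
```

The dispatch table keeps routing deterministic: the LLM only chooses a category, and your code decides what that category means.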
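Another code-orchestration pattern is chaining: when the flow is fixed, feeding one agent's output into the next reduces to function composition. A sketch with stand-in steps (the step functions are hypothetical placeholders for agent calls):

```python
# Each step stands in for an agent call; the output of one step becomes
# the input of the next, which is the chaining pattern at its simplest.

def research(topic: str) -> str:
    return f"notes on {topic}"

def outline(notes: str) -> str:
    return f"outline from {notes}"

def draft(outline_text: str) -> str:
    return f"draft based on {outline_text}"

def run_pipeline(topic: str) -> str:
    result = topic
    for step in (research, outline, draft):
        result = step(result)
    return result
```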
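A third pattern pairs a worker with an evaluator in a loop: run the task agent, score its output, and retry until it passes or a retry budget runs out. The worker and evaluator below are deterministic stand-ins, not SDK calls, so the loop's shape is the point rather than the scoring logic:

```python
# Worker/evaluator loop: retry until the evaluator accepts the output
# or the attempt budget is exhausted.

def worker(task: str, attempt: int) -> str:
    # Stand-in for the agent doing the work.
    return f"{task} (attempt {attempt})"

def evaluator(output: str) -> bool:
    # Stand-in criterion: accept only from the third attempt on.
    return "attempt 3" in output

def run_until_accepted(task: str, max_attempts: int = 5) -> str:
    for attempt in range(1, max_attempts + 1):
        output = worker(task, attempt)
        if evaluator(output):
            return output
    raise RuntimeError("evaluator never accepted the output")
```

Bounding the loop with `max_attempts` matters in practice: an evaluator that never passes would otherwise spend tokens indefinitely.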
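Finally, independent tasks can run concurrently with standard Python primitives such as `asyncio.gather`. Each coroutine below is a hypothetical stand-in for an agent run that does not depend on the others:

```python
import asyncio

# Run independent "agent" tasks concurrently with asyncio.gather.

async def run_agent(name: str, task: str) -> str:
    await asyncio.sleep(0)  # placeholder for a real async agent call
    return f"{name}: {task}"

async def main() -> list[str]:
    # gather preserves argument order in its result list.
    return await asyncio.gather(
        run_agent("translator", "translate the doc"),
        run_agent("summarizer", "summarize the doc"),
    )

results = asyncio.run(main())
```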