# OpenAI Codex CLI

Lightweight coding agent that runs in your terminal.

```shell
brew install codex
```
## Table of contents

- [Experimental technology disclaimer](#experimental-technology-disclaimer)
- [Quickstart](#quickstart)
  - [OpenAI API Users](#openai-api-users)
  - [OpenAI Plus/Pro Users](#openai-pluspro-users)
- [Why Codex?](#why-codex)
- [Security model & permissions](#security-model--permissions)
  - [Platform sandboxing details](#platform-sandboxing-details)
- [System requirements](#system-requirements)
- [CLI reference](#cli-reference)
- [Memory & project docs](#memory--project-docs)
- [Non-interactive / CI mode](#non-interactive--ci-mode)
- [Model Context Protocol (MCP)](#model-context-protocol-mcp)
- [Tracing / verbose logging](#tracing--verbose-logging)
- [Recipes](#recipes)
- [Installation](#installation)
  - [DotSlash](#dotslash)
- [Configuration](#configuration)
- [FAQ](#faq)
- [Zero data retention (ZDR) usage](#zero-data-retention-zdr-usage)
- [Codex open source fund](#codex-open-source-fund)
- [Contributing](#contributing)
  - [Development workflow](#development-workflow)
  - [Writing high-impact code changes](#writing-high-impact-code-changes)
  - [Opening a pull request](#opening-a-pull-request)
  - [Review process](#review-process)
  - [Community values](#community-values)
  - [Getting help](#getting-help)
  - [Contributor license agreement (CLA)](#contributor-license-agreement-cla)
    - [Quick fixes](#quick-fixes)
  - [Releasing `codex`](#releasing-codex)
- [Security & responsible AI](#security--responsible-ai)
- [License](#license)

## Use `--profile` to use other models
Codex also allows you to use other providers that support the OpenAI Chat Completions (or Responses) API.
To do so, you must first define custom [providers](./config.md#model_providers) in `~/.codex/config.toml`. For example, the provider for a standard Ollama setup would be defined as follows:
```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
```
The `base_url` will have `/chat/completions` appended to it to build the full URL for the request.
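If you want to sanity-check the endpoint Codex will construct, here is a quick sketch with `curl` (the model name `llama3` is just a placeholder for whatever you have pulled into Ollama):

```shell
# Exercises base_url + /chat/completions, the same URL Codex builds for this provider.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Say hello"}]}'
```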
For providers that also require an `Authorization` header of the form `Bearer SECRET`, an `env_key` can be specified; it names the environment variable whose value is used as `SECRET` when making a request:
```toml
[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"
```
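Codex reads the variable named by `env_key` from your environment when making a request, so the secret itself never needs to appear in `config.toml`. A minimal sketch (the key value is a placeholder):

```shell
# Export the variable named by env_key, then start Codex with this provider
# selected (e.g. via model_provider = "openrouter" in ~/.codex/config.toml).
export OPENROUTER_API_KEY="sk-or-..."  # placeholder, not a real key
codex
```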
Providers that speak the Responses API are also supported by adding `wire_api = "responses"` to the definition. Accessing OpenAI models via Azure is an example of such a provider, though it also requires additional `query_params` to be appended to the request URL:
```toml
[model_providers.azure]
name = "Azure"
# Make sure you set the appropriate subdomain for this URL.
base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY" # Or "OPENAI_API_KEY", whichever you use.
# Newer versions appear to support the responses API, see https://github.com/openai/codex/pull/1321
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
```
Once you have defined a provider you wish to use, you can configure it as your default provider as follows:

```toml
model_provider = "azure"
```
> [!TIP]
> If you find yourself experimenting with a variety of models and providers, then you likely want to invest in defining a _profile_ for each configuration like so:
```toml
[profiles.o3]
model_provider = "azure"
model = "o3"

[profiles.mistral]
model_provider = "ollama"
model = "mistral"
```
This way, you can specify one command-line argument (e.g., `--profile o3`, `--profile mistral`) to override multiple settings together.
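For example (the prompt text is illustrative):

```shell
# Same task, two different provider/model configurations.
codex --profile o3 "summarize recent changes in this repo"
codex --profile mistral "summarize recent changes in this repo"
```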
---

Run interactively:

```shell
codex
```

Or, run with a prompt as input (and optionally in `Full Auto` mode):

```shell
codex "explain this codebase to me"
```

```shell
codex --full-auto "create the fanciest todo-list app"
```

That's it - Codex will scaffold a file, run it inside a sandbox, install any missing dependencies, and show you the live result. Approve the changes and they'll be committed to your working directory.

---

## Why Codex?

Codex CLI is built for developers who already **live in the terminal** and want ChatGPT-level reasoning **plus** the power to actually run code, manipulate files, and iterate - all under version control. In short, it's _chat-driven development_ that understands and executes your repo.

- **Zero setup** - bring your OpenAI API key and it just works!
- **Full auto-approval, while safe + secure** by running network-disabled and directory-sandboxed
- **Multimodal** - pass in screenshots or diagrams to implement features ✨

And it's **fully open-source** so you can see and contribute to how it develops!

---

## Security model & permissions

Codex lets you decide _how much autonomy_ you want to grant the agent. The following options can be configured independently:

- [`approval_policy`](./codex-rs/config.md#approval_policy) determines when you should be prompted to approve whether Codex can execute a command
- [`sandbox`](./codex-rs/config.md#sandbox) determines the _sandbox policy_ that Codex uses to execute untrusted commands

By default, Codex runs with `approval_policy = "untrusted"` and `sandbox.mode = "read-only"`, which means that:

- The user is prompted to approve every command not in the set of "trusted" commands built into Codex (`cat`, `ls`, etc.).
- Approved commands are run outside of a sandbox, because user approval implies "trust" in this case.

Running Codex with the `--full-auto` option changes the configuration to `approval_policy = "on-failure"` and `sandbox.mode = "workspace-write"`, which means that:

- Codex does not initially ask for user approval before running an individual command.
- When it does run a command, the command is run under a sandbox in which:
  - It can read any file on the system.
  - It can only write files under the current directory (or the directory specified via `--cd`).
  - Network requests are completely disabled.
- Only if the command exits with a non-zero exit code will Codex ask the user for approval. If granted, it re-attempts the command outside of the sandbox. (A common case is when Codex cannot `npm install` a dependency because that requires network access.)

Again, these two options can be configured independently. For example, if you want Codex to perform an "exploration" where you are happy for it to read anything it wants but you never want to be prompted, you could run Codex with `approval_policy = "never"` and `sandbox.mode = "read-only"`.
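As a minimal sketch of that "explore freely, never prompt" setup in `~/.codex/config.toml` (using the option names as written above):

```toml
# Never ask for approval, but keep the filesystem sandbox read-only,
# so Codex can explore without modifying files or touching the network.
approval_policy = "never"
sandbox.mode = "read-only"
```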
### Platform sandboxing details

The mechanism Codex uses to implement the sandbox policy depends on your OS:

- **macOS 12+** uses **Apple Seatbelt** and runs commands using `sandbox-exec` with a profile (`-p`) that corresponds to the `sandbox.mode` that was specified.
- **Linux** uses a combination of Landlock/seccomp APIs to enforce the `sandbox` configuration.

Note that when running Linux in a containerized environment such as Docker, sandboxing may not work if the host/container configuration does not support the necessary Landlock/seccomp APIs. In such cases, we recommend configuring your Docker container so that it provides the sandbox guarantees you are looking for, then running `codex` with `sandbox.mode = "danger-full-access"` (or, more simply, the `--dangerously-bypass-approvals-and-sandbox` flag) inside your container.

---

## System requirements

| Requirement                 | Details                                                         |
| --------------------------- | --------------------------------------------------------------- |
| Operating systems           | macOS 12+, Ubuntu 20.04+/Debian 10+, or Windows 11 **via WSL2** |
| Git (optional, recommended) | 2.23+ for built-in PR helpers                                   |
| RAM                         | 4 GB minimum (8 GB recommended)                                 |

---

## CLI reference

| Command            | Purpose                            | Example                         |
| ------------------ | ---------------------------------- | ------------------------------- |
| `codex`            | Interactive TUI                    | `codex`                         |
| `codex "..."`      | Initial prompt for interactive TUI | `codex "fix lint errors"`       |
| `codex exec "..."` | Non-interactive "automation mode"  | `codex exec "explain utils.ts"` |

Key flags: `--model/-m`, `--ask-for-approval/-a`.

---

## Memory & project docs

You can give Codex extra instructions and guidance using `AGENTS.md` files. Codex looks for `AGENTS.md` files in the following places and merges them top-down:

1. `~/.codex/AGENTS.md` - personal global guidance
2. `AGENTS.md` at repo root - shared project notes
3. `AGENTS.md` in the current working directory - sub-folder/feature specifics

---

## Non-interactive / CI mode

Run Codex headless in pipelines. Example GitHub Action step:

```yaml
- name: Update changelog via Codex
  run: |
    npm install -g @openai/codex@native # Note: we plan to drop the need for `@native`.
    export OPENAI_API_KEY="${{ secrets.OPENAI_KEY }}"
    codex exec --full-auto "update CHANGELOG for next release"
```

## Model Context Protocol (MCP)

The Codex CLI can be configured to leverage MCP servers by defining an [`mcp_servers`](./codex-rs/config.md#mcp_servers) section in `~/.codex/config.toml`. It is intended to mirror how tools such as Claude and Cursor define `mcpServers` in their respective JSON config files, though the Codex format is slightly different since it uses TOML rather than JSON, e.g.:

```toml
# IMPORTANT: the top-level key is `mcp_servers` rather than `mcpServers`.
[mcp_servers.server-name]
command = "npx"
args = ["-y", "mcp-server"]
env = { "API_KEY" = "value" }
```

> [!TIP]
> It is somewhat experimental, but the Codex CLI can also be run as an MCP _server_ via `codex mcp`. If you launch it with an MCP client such as `npx @modelcontextprotocol/inspector codex mcp` and send it a `tools/list` request, you will see that there is only one tool, `codex`, which accepts a grab-bag of inputs, including a catch-all `config` map for anything you might want to override. Feel free to play around with it and provide feedback via GitHub issues.

## Tracing / verbose logging

Because Codex is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.

The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info`, and log messages are written to `~/.codex/log/codex-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:

```shell
tail -F ~/.codex/log/codex-tui.log
```

By comparison, the non-interactive mode (`codex exec`) defaults to `RUST_LOG=error`, and messages are printed inline, so there is no need to monitor a separate file.
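If you need more detail for a single run, you can override the default filter yourself; a sketch for the non-interactive mode:

```shell
# Raise the log level for one `codex exec` run; any env_logger-style
# filter string (e.g. "debug" or "codex_core=trace") works here.
RUST_LOG=debug codex exec "explain utils.ts"
```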
See the Rust documentation on [`RUST_LOG`](https://docs.rs/env_logger/latest/env_logger/#enabling-logging) for more information on the configuration options.

---

## Recipes

Below are a few bite-size examples you can copy-paste. Replace the text in quotes with your own task. See the [prompting guide](https://github.com/openai/codex/blob/main/codex-cli/examples/prompting_guide.md) for more tips and usage patterns.

| ✨  | What you type                                                                    | What happens                                                               |
| --- | -------------------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| 1   | `codex "Refactor the Dashboard component to React Hooks"`                        | Codex rewrites the class component, runs `npm test`, and shows the diff.   |
| 2   | `codex "Generate SQL migrations for adding a users table"`                       | Infers your ORM, creates migration files, and runs them in a sandboxed DB. |
| 3   | `codex "Write unit tests for utils/date.ts"`                                     | Generates tests, executes them, and iterates until they pass.              |
| 4   | `codex "Bulk-rename *.jpeg -> *.jpg with git mv"`                                | Safely renames files and updates imports/usages.                           |
| 5   | `codex "Explain what this regex does: ^(?=.*[A-Z]).{8,}$"`                       | Outputs a step-by-step human explanation.                                  |
| 6   | `codex "Carefully review this repo, and propose 3 high impact well-scoped PRs"`  | Suggests impactful PRs in the current codebase.                            |
| 7   | `codex "Look for vulnerabilities and create a security review report"`           | Finds and explains security bugs.                                          |

---

## Installation