Understand how Codex works · Task types · Interfaces
Editor extension
Command-line interface
Web interface
Mobile app
SDK
Codex CLI is a coding agent that runs locally in your terminal and can read, modify, and run code on your machine.
Open source · Built in Rust · Fast and efficient · Continuously improved at GitHub openai/codex
To get started, describe a task or try one of these commands:
/init - create an AGENTS.md file with instructions for Codex
/status - show current session configuration
/approvals - choose what Codex can do without approval
/model - choose what model and reasoning effort to use
/review - review any changes and find issues
Install Codex CLI with your package manager
npm i -g @openai/codex
Supports macOS and Linux · WSL recommended on Windows
Launch the interactive terminal UI to start working
codex
First launch requires authentication · Use a ChatGPT Plus/Pro/Enterprise account
Update the CLI regularly to get new features
npm i -g @openai/codex@latest
See the changelog for the latest releases
Included in the ChatGPT Plus, Pro, Business, Edu, and Enterprise plans
Or use an API key for fine-grained control · See the pricing page for details
codex [PROMPT]                   # optional initial prompt
--image, -i <path>               # attach image(s), comma-separated
--model, -m <model>              # choose a model (e.g. gpt-5-codex)
--oss                            # use a local open-source model (requires Ollama)
--profile, -p <name>             # load a config profile
--cd, -C <path>                  # set the working directory
--sandbox, -s <level>            # sandbox level:
    read-only          - read only
    workspace-write    - writable working directory
    danger-full-access - full access
--ask-for-approval, -a <mode>
    untrusted          - approve untrusted operations
    on-failure         - approve on failure
    on-request         - approve on request
    never              - never ask for approval
--full-auto                      # auto mode: workspace-write + on-failure
--yolo                           # dangerous: skips all approvals and sandboxing
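The sandbox and approval flags above also have persistent equivalents in config.toml, so you don't have to repeat them on every invocation. A minimal sketch (values are illustrative):

```toml
# ~/.codex/config.toml — persistent equivalents of --sandbox / --ask-for-approval
sandbox_mode = "workspace-write"   # same values as --sandbox
approval_policy = "on-failure"     # same values as --ask-for-approval
```

Explicit CLI flags still win over these values when both are present.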
codex                                    # launch the TUI
codex "prompt"                           # start with an initial prompt
codex --search                           # enable web search
codex -i img.png "describe"              # with image input
codex exec "task"                        # run a task (alias: codex e)
codex exec --json "task"                 # JSONL output
codex exec - < prompt.txt                # read the prompt from stdin
codex exec --skip-git-repo-check "task"  # allow non-Git directories
codex resume                             # pick a past session
codex resume --last                      # resume the most recent session
codex resume <SESSION_ID>                # resume a specific session
codex resume --last "continue"           # resume and append an instruction
Non-interactive resume:
codex exec resume <SESSION_ID> "next step"
codex exec resume --last "implement the plan"
codex login                  # ChatGPT OAuth login
codex login --with-api-key   # read an API key from stdin
codex login status           # check login status (exit 0 = logged in)
codex logout                 # log out
Example:
printenv OPENAI_API_KEY | codex login --with-api-key
codex cloud                                          # interactive task management
codex cloud exec --env <ENV_ID> "task"               # submit a task
codex cloud exec --env <ENV_ID> --attempts 3 "task"  # multiple attempts (1-4)
codex apply <TASK_ID>                                # apply a cloud task's diff (alias: codex a)
codex mcp list                                       # list configured servers
codex mcp list --json                                # JSON output
codex mcp get <name>                                 # show a server's configuration
codex mcp remove <name>                              # remove a server
STDIO transport:
codex mcp add <name> -- <command> [args...]
codex mcp add <name> --env KEY=VALUE -- <command>
HTTP transport:
codex mcp add <name> --url https://example.com/mcp
codex mcp add <name> --url <url> --bearer-token-env-var TOKEN_VAR
codex --enable rmcp_client mcp login <name> --scopes scope1,scope2
codex mcp logout <name>
Via the config file:
# ~/.codex/config.toml
[features]
rmcp_client = true
codex sandbox --full-auto -- <command>
codex sandbox -c key=value -- <command>
codex completion bash
codex completion zsh
codex completion fish
codex completion power-shell
codex completion elvish
Installation example:
# Zsh
codex completion zsh > "${fpath[1]}/_codex"
# Bash
codex completion bash >> ~/.bashrc
codex mcp-server                                  # run Codex as an MCP server
codex app-server                                  # start the local app server (for development)
codex --full-auto exec "task"
codex --cd /path --add-dir /other exec "task"
codex exec --json "task" > output.jsonl
codex exec --skip-git-repo-check "analyze files"
codex --yolo exec "dangerous task"
Main config: ~/.codex/config.toml
Session logs: ~/.codex/sessions/
codex -c key=value       # override config on the command line
codex --profile <name>   # use a profile
Exit codes:
0     - success
non-0 - failure (useful in scripts)
# Example:
if codex login status; then
  codex exec "task"
fi
1. Local work - use --full-auto, avoid --yolo
2. Multi-directory access - prefer --add-dir over danger-full-access
3. CI environments - use --json for machine-readable output
4. MCP OAuth - requires the rmcp_client feature flag
5. Isolated environments - use --yolo only inside a sandboxed VM
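For the CI practice above, `codex exec --json` emits one JSON event per line (JSONL), so standard line-oriented tools can filter the stream. A minimal sketch; the event names shown are illustrative, not the exact schema:

```shell
# Simulate two JSONL events and keep only the message events.
# In CI you would pipe `codex exec --json "task"` instead of printf.
printf '%s\n' \
  '{"type":"agent_message","text":"done"}' \
  '{"type":"token_count","total":42}' \
| grep '"type":"agent_message"'
```

A dedicated JSON tool such as jq is more robust than grep for anything beyond quick filtering.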
Config file: ~/.codex/config.toml
Full docs: Codex CLI overview, AGENTS.md
Web search: enable web_search_request = true in the config
~/.codex/config.toml   # main config file (shared by the CLI and IDE)
To open it from the IDE extension:
Settings icon → Codex Settings → Open config.toml
model = "gpt-5-codex"
codex --model gpt-5

model_provider = "ollama"
codex --config model_provider="ollama"

approval_policy = "on-request"   # untrusted | on-failure | on-request | never
codex --ask-for-approval on-request

sandbox_mode = "workspace-write" # read-only | workspace-write | danger-full-access
codex --sandbox workspace-write

model_reasoning_effort = "high"  # minimal | low | medium | high
codex --config model_reasoning_effort="high"

[shell_environment_policy]
include_only = ["PATH", "HOME"]
codex --config shell_environment_policy.include_only='["PATH","HOME"]'

model = "gpt-5-codex"
approval_policy = "on-request"
profile = "deep-review"          # default profile

[profiles.deep-review]
model = "gpt-5-pro"
model_reasoning_effort = "high"
approval_policy = "never"

[profiles.lightweight]
model = "gpt-4.1"
approval_policy = "untrusted"
codex --profile deep-review
codex --profile lightweight

Precedence (highest first):
1. Explicit CLI flags (e.g. --model)
2. Profile values
3. Root-level settings in config.toml
4. Built-in CLI defaults
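As a sketch of how this precedence plays out (values are illustrative): with the fragment below, plain `codex` uses gpt-5-codex, `codex --profile deep-review` uses gpt-5-pro, and `codex --profile deep-review --model gpt-4.1` uses gpt-4.1, because the explicit flag outranks the profile.

```toml
# ~/.codex/config.toml (illustrative)
model = "gpt-5-codex"   # root-level default

[profiles.deep-review]
model = "gpt-5-pro"     # overrides the root value when this profile is active
```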
[features]
streamable_shell = true
web_search_request = true
unified_exec = false

codex --enable web_search_request

| Feature flag | Default | Stage | Description |
|---|---|---|---|
| unified_exec | false | Experimental | Use the unified PTY-backed exec tool |
| streamable_shell | false | Experimental | Use the streamable exec-command/write-stdin tools |
| rmcp_client | false | Experimental | Enable OAuth support for HTTP MCP servers |
| apply_patch_freeform | false | Beta | Include the freeform apply_patch tool |
| view_image_tool | true | Stable | Include the view_image tool |
| web_search_request | false | Stable | Allow the model to perform web searches |
| experimental_sandbox_command_assessment | false | Experimental | Enable model-based sandbox risk assessment |
| ghost_commit | false | Experimental | Create a ghost commit each turn |
| enable_experimental_windows_sandbox | false | Experimental | Use the Windows restricted-token sandbox |
# Old (deprecated)
experimental_use_rmcp_client       → features.rmcp_client
experimental_use_exec_command_tool → features.streamable_shell
experimental_use_unified_exec_tool → features.unified_exec
include_apply_patch_tool           → features.apply_patch_freeform
tools.web_search                   → features.web_search_request
tools.view_image                   → features.view_image_tool

# New
[features]
rmcp_client = true
streamable_shell = true
unified_exec = true
apply_patch_freeform = true
web_search_request = true
view_image_tool = true
model = "gpt-5-codex"                        # model to use
model_provider = "openai"                    # provider ID
model_context_window = 200000                # context window, in tokens
model_max_output_tokens = 8192               # max output tokens
model_reasoning_effort = "medium"            # minimal | low | medium | high
model_reasoning_summary = "auto"             # auto | concise | detailed | none
model_verbosity = "medium"                   # low | medium | high (GPT-5 API)
model_supports_reasoning_summaries = false   # force sending reasoning metadata
model_reasoning_summary_format = "none"      # none | experimental
approval_policy = "untrusted"    # untrusted | on-failure | on-request | never
sandbox_mode = "workspace-write" # read-only | workspace-write | danger-full-access

[sandbox_workspace_write]
writable_roots = ["/extra/path"] # extra writable roots
network_access = true            # allow outbound network access
exclude_tmpdir_env_var = false   # exclude $TMPDIR
exclude_slash_tmp = false        # exclude /tmp
[shell_environment_policy]
inherit = "all"                  # all | core | none
ignore_default_excludes = false  # keep variables containing KEY/SECRET/TOKEN
exclude = ["CUSTOM_*"]           # exclusion patterns (glob)
include_only = ["PATH", "HOME"]  # allowlist (when set, only matches are kept)

[shell_environment_policy.set]
MY_VAR = "value"                 # explicit environment-variable overrides
STDIO servers:
[mcp_servers.myserver]
command = "node"
args = ["server.js"]
env = { KEY = "value" }
env_vars = ["EXTRA_VAR"]
cwd = "/path/to/server"
enabled = true
startup_timeout_sec = 10
tool_timeout_sec = 60
enabled_tools = ["tool1", "tool2"]  # tool allowlist
disabled_tools = ["tool3"]          # tool blocklist
HTTP servers:
[mcp_servers.httpserver]
url = "https://example.com/mcp"
bearer_token_env_var = "MCP_TOKEN"
http_headers = { "X-Custom" = "value" }
env_http_headers = { "Authorization" = "AUTH_TOKEN_VAR" }
enabled = true
startup_timeout_sec = 10
tool_timeout_sec = 60
enabled_tools = ["*"]
disabled_tools = []

[model_providers.custom]
name = "My Provider"
base_url = "https://api.example.com/v1"
env_key = "CUSTOM_API_KEY" # name of the environment variable
wire_api = "chat" # chat | responses
query_params = { "version" = "v1" }
http_headers = { "X-Custom" = "header" }
env_http_headers = { "Auth" = "TOKEN_VAR" }
request_max_retries = 4
stream_max_retries = 5
stream_idle_timeout_ms = 300000

# Project docs
project_doc_max_bytes = 1048576          # max bytes read from AGENTS.md
project_doc_fallback_filenames = [       # fallbacks when AGENTS.md is missing
  "CODEX.md", "PROJECT.md"
]

[projects."/path/to/project"]
trust_level = "trusted"                  # mark the project as trusted

# History
[history]
persistence = "save-all"                 # save-all | none
max_bytes = 10485760                     # reserved, not currently enforced

# Notifications
notify = ["notify-send", "-t", "5000"]   # notification command (receives JSON)

[tui]
notifications = true                     # enable TUI notifications
# or restrict to specific event types:
# notifications = ["task_complete", "error"]

# Display
hide_agent_reasoning = false             # hide reasoning events
show_raw_agent_reasoning = false         # show raw reasoning content

# File opener
file_opener = "vscode"                   # vscode | vscode-insiders | windsurf | cursor | none

# OpenTelemetry
[otel]
environment = "dev"                      # environment label
exporter = "none"                        # none | otlp-http | otlp-grpc
log_user_prompt = false                  # export raw user prompts

# Authentication
chatgpt_base_url = "https://custom.url"  # override the ChatGPT login URL
forced_login_method = "chatgpt"          # chatgpt | api
forced_chatgpt_workspace_id = "uuid"     # restrict to a specific workspace

# Experimental
instructions = ""                        # reserved, prefer AGENTS.md
experimental_instructions_file = "/path/to/instructions.md"  # replace built-in instructions
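The `notify` command configured above is invoked with a JSON payload describing the event as its final argument. A minimal handler sketch; the payload shape shown is illustrative:

```shell
# notify-handler.sh — Codex appends a JSON event payload as the last
# argument to the configured notify command; this stub just logs it.
handler() {
  printf 'codex event: %s\n' "$1"
}
handler '{"type":"agent-turn-complete"}'
```

A real handler would parse the payload (e.g. with jq) and forward it to a desktop notifier.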
# Basic settings
model = "gpt-5-codex"
model_provider = "openai"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
profile = "default"
# Feature flags
[features]
web_search_request = true
view_image_tool = true
streamable_shell = false
# Sandbox
[sandbox_workspace_write]
writable_roots = ["/extra/path"]
network_access = true
# Environment variables
[shell_environment_policy]
inherit = "core"
include_only = ["PATH", "HOME", "USER"]
# MCP servers
[mcp_servers.github]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
env = { GITHUB_PERSONAL_ACCESS_TOKEN = "env:GITHUB_TOKEN" }
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
# Custom provider
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
wire_api = "chat"
# Profiles
[profiles.fast]
model = "gpt-4.1"
approval_policy = "untrusted"
[profiles.thorough]
model = "gpt-5-pro"
model_reasoning_effort = "high"
approval_policy = "never"
# History and notifications
[history]
persistence = "save-all"
[tui]
notifications = true
# Misc
file_opener = "vscode"
hide_agent_reasoning = false

model = "gpt-5-codex"
approval_policy = "on-request"

[features]
web_search_request = true
codex --model gpt-5
codex --sandbox read-only
codex --ask-for-approval never
codex \
  --model gpt-5-pro \
  --config model_reasoning_effort="high" \
  --config approval_policy="never" \
  --enable web_search_request
codex --config shell_environment_policy.include_only='["PATH","HOME"]'
codex --config sandbox_workspace_write.network_access=true
1. Click the settings icon in the top-right corner of the extension
2. Choose IDE settings to view the available settings
3. Choose Keyboard shortcuts to customize shortcuts
4. Choose Codex Settings > Open config.toml to edit the config file
The IDE extension shares ~/.codex/config.toml with the CLI
IDE-specific settings are configured through the extension UI
Keyboard shortcuts can be customized through the extension UI
# Check for and migrate these deprecated settings:
experimental_use_rmcp_client       → features.rmcp_client
experimental_use_exec_command_tool → features.streamable_shell
experimental_use_unified_exec_tool → features.unified_exec
include_apply_patch_tool           → features.apply_patch_freeform
tools.web_search                   → features.web_search_request
tools.view_image                   → features.view_image_tool

[features]
rmcp_client = true
streamable_shell = true
unified_exec = true
apply_patch_freeform = true
web_search_request = true
view_image_tool = true
~/.codex/
├── config.toml     # main config file
├── sessions/       # session history
└── history.jsonl   # history log (if enabled)
Codex CLI reference - complete command-line argument reference
Codex CLI overview - installation and quick start
AGENTS.md - project-level instructions
Automated reviews on GitHub
Tag @codex review in a Pull Request
Enable automatic code review on the repository
Concepts Guide
Local · Cloud · Interfaces