Documentation Index
Fetch the complete documentation index at: https://docs.openclaw.ai/llms.txt
Use this file to discover all available pages before exploring further.
This page covers `tools.*` config keys and custom provider / base-URL setup. For agents, channels, and other top-level config keys, see the Configuration reference.
Tools
Tool profiles
`tools.profile` sets a base allowlist before `tools.allow`/`tools.deny`:
Local onboarding defaults new local configs to `tools.profile: "coding"` when unset (existing explicit profiles are preserved).

| Profile | Includes |
|---|---|
| `minimal` | `session_status` only |
| `coding` | `group:fs`, `group:runtime`, `group:web`, `group:sessions`, `group:memory`, `cron`, `image`, `image_generate`, `video_generate` |
| `messaging` | `group:messaging`, `sessions_list`, `sessions_history`, `sessions_send`, `session_status` |
| `full` | No restriction (same as unset) |
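For example, a minimal sketch of selecting a profile and then tightening it (assuming a JSON-style config file; the `tools.profile` and `tools.deny` keys are documented on this page, the specific values are illustrative):

```json
{
  "tools": {
    "profile": "coding",
    "deny": ["cron", "image_generate"]
  }
}
```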
Tool groups
| Group | Tools |
|---|---|
| `group:runtime` | `exec`, `process`, `code_execution` (`bash` is accepted as an alias for `exec`) |
| `group:fs` | `read`, `write`, `edit`, `apply_patch` |
| `group:sessions` | `sessions_list`, `sessions_history`, `sessions_send`, `sessions_spawn`, `sessions_yield`, `subagents`, `session_status` |
| `group:memory` | `memory_search`, `memory_get` |
| `group:web` | `web_search`, `x_search`, `web_fetch` |
| `group:ui` | `browser`, `canvas` |
| `group:automation` | `heartbeat_respond`, `cron`, `gateway` |
| `group:messaging` | `message` |
| `group:nodes` | `nodes` |
| `group:agents` | `agents_list`, `update_plan` |
| `group:media` | `image`, `image_generate`, `music_generate`, `video_generate`, `tts` |
| `group:openclaw` | All built-in tools (excludes provider plugins) |
tools.allow / tools.deny
Global tool allow/deny policy (deny wins). Case-insensitive, supports * wildcards. Applied even when Docker sandbox is off.
`write` and `apply_patch` are separate tool ids. `allow: ["write"]` also enables `apply_patch` for compatible models, but `deny: ["write"]` does not deny `apply_patch`. To block all file mutation, deny `group:fs` or list each mutating tool explicitly:
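For example, a hedged sketch of blocking all file mutation (JSON-style config assumed):

```json
{
  "tools": {
    "deny": ["group:fs"]
  }
}
```

Listing each mutating tool explicitly, e.g. `"deny": ["write", "edit", "apply_patch"]`, also works; remember `write` and `apply_patch` are separate ids.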
tools.byProvider
Further restrict tools for specific providers or models. Order: base profile → provider profile → allow/deny.
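A sketch of the ordering described above (only the `tools.byProvider` key is from this page; the shape of each per-provider entry, a `profile` plus `allow`/`deny`, is an assumption):

```json
{
  "tools": {
    "profile": "coding",
    "byProvider": {
      "openai": {
        "profile": "minimal",
        "allow": ["group:web"]
      }
    }
  }
}
```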
tools.toolsBySender
Restricts tools for a specific requester identity. This is defense-in-depth on top of channel access control; sender values must come from the channel adapter, not message text.
Sender keys: `channel:<channelId>:<senderId>`, `id:<senderId>`, `e164:<phone>`, `username:<handle>`, `name:<displayName>`, or `"*"`. Channel ids are canonical OpenClaw ids; aliases such as `teams` normalize to `msteams`. Legacy unprefixed keys are accepted as `id:` only. Matching order is channel+id, id, e164, username, name, then wildcard.

Per-agent `agents.list[].tools.toolsBySender` overrides the global sender match when it matches, even with an empty `{}` policy.
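A sketch using the documented sender-key formats (the per-sender value shape is assumed to mirror `tools.profile`/`tools.allow`/`tools.deny`; the phone number and handle are placeholders):

```json
{
  "tools": {
    "toolsBySender": {
      "e164:+15555550100": { "profile": "messaging" },
      "username:ops-admin": { "allow": ["group:runtime"] },
      "*": { "deny": ["group:runtime"] }
    }
  }
}
```

Matching follows the documented order: channel+id, then id, e164, username, name, and finally the wildcard.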
tools.elevated
Controls elevated exec access outside the sandbox:
- Per-agent override (`agents.list[].tools.elevated`) can only further restrict.
- `/elevated on|off|ask|full` stores state per session; inline directives apply to a single message.
- Elevated `exec` bypasses sandboxing and uses the configured escape path (`gateway` by default, or `node` when the exec target is `node`).
tools.exec
tools.loopDetection
Tool-loop safety checks are disabled by default. Set `enabled: true` to activate detection. Settings can be defined globally in `tools.loopDetection` and overridden per-agent at `agents.list[].tools.loopDetection`. The settings cover:
- Max tool-call history retained for loop analysis.
- Repeating no-progress pattern threshold for warnings.
- Higher repeating threshold for blocking critical loops.
- Hard stop threshold for any no-progress run.
- Warn on repeated same-tool/same-args calls.
- Warn/block on known poll tools (`process.poll`, `command_status`, etc.).
- Warn/block on alternating no-progress pair patterns.
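Since detection is off by default, enabling it globally is a one-key change (this sketch uses only the documented `enabled` flag; the threshold field names are not listed here, so they are omitted):

```json
{
  "tools": {
    "loopDetection": {
      "enabled": true
    }
  }
}
```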
tools.web
tools.media
Configures inbound media understanding (image/audio/video):
Media model entry fields
Provider entry (`type: "provider"` or omitted):
- `provider`: API provider id (`openai`, `anthropic`, `google`/`gemini`, `groq`, etc.)
- `model`: model id override
- `profile`/`preferredProfile`: `auth-profiles.json` profile selection

CLI entry (`type: "cli"`):
- `command`: executable to run
- `args`: templated args (supports `{{MediaPath}}`, `{{Prompt}}`, `{{MaxChars}}`, etc.; `openclaw doctor --fix` migrates deprecated `{input}` placeholders to `{{MediaPath}}`)

Shared entry fields:
- `capabilities`: optional list (`image`, `audio`, `video`). Defaults: `openai`/`anthropic`/`minimax` → image, `google` → image+audio+video, `groq` → audio.
- `prompt`, `maxChars`, `maxBytes`, `timeoutSeconds`, `language`: per-entry overrides.
- `tools.media.image.timeoutSeconds` and matching image model `timeoutSeconds` entries also apply when the agent calls the explicit `image` tool.
- Failures fall back to the next entry.

Credential resolution order: `auth-profiles.json` → env vars → `models.providers.*.apiKey`.

Async completion fields:
- `asyncCompletion.directSend`: deprecated compatibility flag. Completed async media tasks stay requester-session mediated so the agent receives the result, decides how to tell the user, and uses the `message` tool when source delivery requires it.
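A sketch of an image-understanding chain with a provider entry and a CLI fallback (the entry fields are from this page; the surrounding `tools.media.image.models` nesting and the model/command names are assumptions):

```json
{
  "tools": {
    "media": {
      "image": {
        "models": [
          { "provider": "openai", "model": "gpt-5-mini", "maxChars": 4000 },
          {
            "type": "cli",
            "command": "describe-image",
            "args": ["{{MediaPath}}", "{{Prompt}}"]
          }
        ]
      }
    }
  }
}
```

If the provider entry fails, the CLI entry is tried next.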
tools.agentToAgent
tools.sessions
Controls which sessions can be targeted by the session tools (`sessions_list`, `sessions_history`, `sessions_send`).

Default: `tree` (current session + sessions spawned by it, such as subagents).
Visibility scopes
- `self`: only the current session key.
- `tree`: current session + sessions spawned by the current session (subagents).
- `agent`: any session belonging to the current agent id (can include other users if you run per-sender sessions under the same agent id).
- `all`: any session. Cross-agent targeting still requires `tools.agentToAgent`.
- Sandbox clamp: when the current session is sandboxed and `agents.defaults.sandbox.sessionToolsVisibility = "spawned"`, visibility is forced to `tree` even if `tools.sessions.visibility = "all"`.
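For example, to widen visibility to all sessions of the current agent (JSON-style config assumed; the `tools.sessions.visibility` key appears on this page):

```json
{
  "tools": {
    "sessions": {
      "visibility": "agent"
    }
  }
}
```

The sandbox clamp can still force `tree` for sandboxed sessions regardless of this setting.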
tools.sessions_spawn
Controls inline attachment support for sessions_spawn.
Attachment notes
- Attachments are only supported for `runtime: "subagent"`. ACP runtime rejects them.
- Files are materialized into the child workspace at `.openclaw/attachments/<uuid>/` with a `.manifest.json`.
- Attachment content is automatically redacted from transcript persistence.
- Base64 inputs are validated with strict alphabet/padding checks and a pre-decode size guard.
- File permissions are `0700` for directories and `0600` for files.
- Cleanup follows the `cleanup` policy: `delete` always removes attachments; `keep` retains them only when `retainOnSessionKeep: true`.
tools.experimental
Experimental built-in tool flags. Default off unless a strict-agentic GPT-5 auto-enable rule applies.
- `planTool`: enables the structured `update_plan` tool for non-trivial multi-step work tracking.
- Default: `false` unless `agents.defaults.embeddedPi.executionContract` (or a per-agent override) is set to `"strict-agentic"` for an OpenAI or OpenAI Codex GPT-5-family run. Set `true` to force the tool on outside that scope, or `false` to keep it off even for strict-agentic GPT-5 runs.
- When enabled, the system prompt also adds usage guidance so the model only uses it for substantial work and keeps at most one step `in_progress`.
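Forcing the plan tool on outside the strict-agentic GPT-5 scope is a single flag (JSON-style config assumed):

```json
{
  "tools": {
    "experimental": {
      "planTool": true
    }
  }
}
```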
agents.defaults.subagents
- `model`: default model for spawned sub-agents. If omitted, sub-agents inherit the caller’s model.
- `allowAgents`: default allowlist of target agent ids for `sessions_spawn` when the requester agent does not set its own `subagents.allowAgents` (`["*"]` = any; default: same agent only).
- `runTimeoutSeconds`: default timeout (seconds) for `sessions_spawn` when the tool call omits `runTimeoutSeconds`. `0` means no timeout.
- `announceTimeoutMs`: per-call timeout (milliseconds) for gateway agent announce delivery attempts. Default: `120000`. Transient retries can make the total announce wait longer than one configured timeout.
- Per-subagent tool policy: `tools.subagents.tools.allow` / `tools.subagents.tools.deny`.
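A sketch combining the defaults above (the model ref is a placeholder; the field names are from this page):

```json
{
  "agents": {
    "defaults": {
      "subagents": {
        "model": "anthropic/claude-sonnet-4",
        "allowAgents": ["*"],
        "runTimeoutSeconds": 0,
        "announceTimeoutMs": 120000
      }
    }
  }
}
```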
Custom providers and base URLs
OpenClaw uses the built-in model catalog. Add custom providers via `models.providers` in config or `~/.openclaw/agents/<agentId>/agent/models.json`.
Auth and merge precedence
- Use `authHeader: true` + `headers` for custom auth needs.
- Override agent config root with `OPENCLAW_AGENT_DIR` (or `PI_CODING_AGENT_DIR`, a legacy environment variable alias).
- Merge precedence for matching provider IDs:
  - Non-empty agent `models.json` `baseUrl` values win.
  - Non-empty agent `apiKey` values win only when that provider is not SecretRef-managed in the current config/auth-profile context.
  - SecretRef-managed provider `apiKey` values are refreshed from source markers (`ENV_VAR_NAME` for env refs, `secretref-managed` for file/exec refs) instead of persisting resolved secrets.
  - SecretRef-managed provider header values are refreshed from source markers (`secretref-env:ENV_VAR_NAME` for env refs, `secretref-managed` for file/exec refs).
  - Empty or missing agent `apiKey`/`baseUrl` fall back to `models.providers` in config.
  - Matching model `contextWindow`/`maxTokens` use the higher value between explicit config and implicit catalog values.
  - Matching model `contextTokens` preserves an explicit runtime cap when present; use it to limit effective context without changing native model metadata.
- Use `models.mode: "replace"` when you want config to fully rewrite `models.json`.
- Marker persistence is source-authoritative: markers are written from the active source config snapshot (pre-resolution), not from resolved runtime secret values.
Provider field details
Top-level catalog
- `models.mode`: provider catalog behavior (`merge` or `replace`).
- `models.providers`: custom provider map keyed by provider id.
- Safe edits: use `openclaw config set models.providers.<id> '<json>' --strict-json --merge` or `openclaw config set models.providers.<id>.models '<json-array>' --strict-json --merge` for additive updates. `config set` refuses destructive replacements unless you pass `--replace`.
Provider connection and auth
- `models.providers.*.api`: request adapter (`openai-completions`, `openai-responses`, `anthropic-messages`, `google-generative-ai`, etc.). For self-hosted `/v1/chat/completions` backends such as MLX, vLLM, SGLang, and most OpenAI-compatible local servers, use `openai-completions`. A custom provider with `baseUrl` but no `api` defaults to `openai-completions`; set `openai-responses` only when the backend supports `/v1/responses`.
- `models.providers.*.apiKey`: provider credential (prefer SecretRef/env substitution).
- `models.providers.*.auth`: auth strategy (`api-key`, `token`, `oauth`, `aws-sdk`).
- `models.providers.*.contextWindow`: default native context window for models under this provider when the model entry does not set `contextWindow`.
- `models.providers.*.contextTokens`: default effective runtime context cap for models under this provider when the model entry does not set `contextTokens`.
- `models.providers.*.maxTokens`: default output-token cap for models under this provider when the model entry does not set `maxTokens`.
- `models.providers.*.timeoutSeconds`: optional per-provider model HTTP request timeout in seconds, including connect, headers, body, and total request abort handling.
- `models.providers.*.injectNumCtxForOpenAICompat`: for Ollama + `openai-completions`, inject `options.num_ctx` into requests (default: `true`).
- `models.providers.*.authHeader`: force credential transport in the `Authorization` header when required.
- `models.providers.*.baseUrl`: upstream API base URL.
- `models.providers.*.headers`: extra static headers for proxy/tenant routing.
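A sketch for a self-hosted vLLM backend using the connection fields above (the provider id, port, and `${...}` env-substitution syntax are assumptions; prefer SecretRef/env substitution over literal keys):

```json
{
  "models": {
    "providers": {
      "my-vllm": {
        "api": "openai-completions",
        "baseUrl": "http://127.0.0.1:8000/v1",
        "apiKey": "${VLLM_API_KEY}",
        "contextWindow": 131072,
        "maxTokens": 8192
      }
    }
  }
}
```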
Request transport overrides
- `models.providers.*.request`: transport overrides for model-provider HTTP requests.
- `request.headers`: extra headers (merged with provider defaults). Values accept SecretRef.
- `request.auth`: auth strategy override. Modes: `"provider-default"` (use provider’s built-in auth), `"authorization-bearer"` (with `token`), `"header"` (with `headerName`, `value`, optional `prefix`).
- `request.proxy`: HTTP proxy override. Modes: `"env-proxy"` (use `HTTP_PROXY`/`HTTPS_PROXY` env vars), `"explicit-proxy"` (with `url`). Both modes accept an optional `tls` sub-object.
- `request.tls`: TLS override for direct connections. Fields: `ca`, `cert`, `key`, `passphrase` (all accept SecretRef), `serverName`, `insecureSkipVerify`.
- `request.allowPrivateNetwork`: when `true`, allow HTTPS to `baseUrl` when DNS resolves to private, CGNAT, or similar ranges, via the provider HTTP fetch guard (operator opt-in for trusted self-hosted OpenAI-compatible endpoints). Loopback model-provider stream URLs such as `localhost`, `127.0.0.1`, and `[::1]` are allowed automatically unless this is explicitly set to `false`; LAN, tailnet, and private DNS hosts still require opt-in. WebSocket uses the same `request` for headers/TLS but not that fetch SSRF gate. Default `false`.
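A sketch of transport overrides for a proxied endpoint on a private network (the exact sub-key spelling for the auth/proxy mode selectors is an assumption inferred from the modes listed above; the hostname and env var are placeholders):

```json
{
  "models": {
    "providers": {
      "my-proxy": {
        "baseUrl": "https://llm-gw.internal.example/v1",
        "request": {
          "auth": {
            "mode": "header",
            "headerName": "X-Api-Key",
            "value": "${GW_API_KEY}"
          },
          "proxy": { "mode": "env-proxy" },
          "allowPrivateNetwork": true
        }
      }
    }
  }
}
```

`allowPrivateNetwork: true` is the operator opt-in needed here because the host resolves to a private range.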
Model catalog entries
- `models.providers.*.models`: explicit provider model catalog entries.
- `models.providers.*.models.*.input`: model input modalities. Use `["text"]` for text-only models and `["text", "image"]` for native image/vision models. Image attachments are only injected into agent turns when the selected model is marked image-capable.
- `models.providers.*.models.*.contextWindow`: native model context window metadata. This overrides provider-level `contextWindow` for that model.
- `models.providers.*.models.*.contextTokens`: optional runtime context cap. This overrides provider-level `contextTokens`; use it when you want a smaller effective context budget than the model’s native `contextWindow`. `openclaw models list` shows both values when they differ.
- `models.providers.*.models.*.compat.supportsDeveloperRole`: optional compatibility hint. For `api: "openai-completions"` with a non-empty non-native `baseUrl` (host not `api.openai.com`), OpenClaw forces this to `false` at runtime. Empty/omitted `baseUrl` keeps default OpenAI behavior.
- `models.providers.*.models.*.compat.requiresStringContent`: optional compatibility hint for string-only OpenAI-compatible chat endpoints. When `true`, OpenClaw flattens pure text `messages[].content` arrays into plain strings before sending the request.
- `models.providers.*.models.*.compat.strictMessageKeys`: optional compatibility hint for strict OpenAI-compatible chat endpoints. When `true`, OpenClaw strips outgoing Chat Completions message objects to `role` and `content` before sending the request.
- `models.providers.*.models.*.compat.thinkingFormat`: optional thinking payload hint. Use `"qwen"` for top-level `enable_thinking`, or `"qwen-chat-template"` for `chat_template_kwargs.enable_thinking` on Qwen-family OpenAI-compatible servers that support request-level chat-template kwargs, such as vLLM.
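A sketch of a catalog entry for a Qwen-family model behind a vLLM server (the `id` field name and the model name are assumptions; `models` is written as a JSON array, matching the `--strict-json` safe-edit command shape):

```json
{
  "models": {
    "providers": {
      "my-qwen": {
        "api": "openai-completions",
        "baseUrl": "http://127.0.0.1:8000/v1",
        "models": [
          {
            "id": "qwen3-32b",
            "input": ["text"],
            "contextWindow": 131072,
            "contextTokens": 32768,
            "compat": { "thinkingFormat": "qwen-chat-template" }
          }
        ]
      }
    }
  }
}
```

Here `contextTokens` caps the effective runtime budget below the native `contextWindow` without changing model metadata.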
Amazon Bedrock discovery
- `plugins.entries.amazon-bedrock.config.discovery`: Bedrock auto-discovery settings root.
- `plugins.entries.amazon-bedrock.config.discovery.enabled`: turn implicit discovery on/off.
- `plugins.entries.amazon-bedrock.config.discovery.region`: AWS region for discovery.
- `plugins.entries.amazon-bedrock.config.discovery.providerFilter`: optional provider-id filter for targeted discovery.
- `plugins.entries.amazon-bedrock.config.discovery.refreshInterval`: polling interval for discovery refresh.
- `plugins.entries.amazon-bedrock.config.discovery.defaultContextWindow`: fallback context window for discovered models.
- `plugins.entries.amazon-bedrock.config.discovery.defaultMaxTokens`: fallback max output tokens for discovered models.
Use `--custom-image-input` to force image-capable metadata or `--custom-text-input` to force text-only metadata.
Provider examples
Cerebras (GLM 4.7 / GPT OSS)
The bundled `cerebras` provider plugin can configure this via `openclaw onboard --auth-choice cerebras-api-key`. Use explicit provider config only when overriding defaults. Use `cerebras/zai-glm-4.7` for Cerebras; `zai/glm-4.7` for Z.AI direct.
Kimi Coding
Shortcut: `openclaw onboard --auth-choice kimi-code-api-key`.
Local models (LM Studio)
See Local Models. TL;DR: run a large local model via LM Studio Responses API on serious hardware; keep hosted models merged for fallback.
MiniMax M2.7 (direct)
Set `MINIMAX_API_KEY`. Shortcuts: `openclaw onboard --auth-choice minimax-global-api` or `openclaw onboard --auth-choice minimax-cn-api`. The model catalog defaults to M2.7 only. On the Anthropic-compatible streaming path, OpenClaw disables MiniMax thinking by default unless you explicitly set thinking yourself. `/fast on` or `params.fastMode: true` rewrites `MiniMax-M2.7` to `MiniMax-M2.7-highspeed`.
Moonshot AI (Kimi)
Set `baseUrl: "https://api.moonshot.cn/v1"` or use `openclaw onboard --auth-choice moonshot-api-key-cn`. Native Moonshot endpoints advertise streaming usage compatibility on the shared `openai-completions` transport, and OpenClaw keys that off endpoint capabilities rather than the built-in provider id alone.
OpenCode
Set `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`). Use `opencode/...` refs for the Zen catalog or `opencode-go/...` refs for the Go catalog. Shortcut: `openclaw onboard --auth-choice opencode-zen` or `openclaw onboard --auth-choice opencode-go`.
Synthetic (Anthropic-compatible)
Omit `/v1` from the base URL (the Anthropic client appends it). Shortcut: `openclaw onboard --auth-choice synthetic-api-key`.
Z.AI (GLM-4.7)
Set `ZAI_API_KEY`. `z.ai/*` and `z-ai/*` are accepted aliases. Shortcut: `openclaw onboard --auth-choice zai-api-key`.
- General endpoint: `https://api.z.ai/api/paas/v4`
- Coding endpoint (default): `https://api.z.ai/api/coding/paas/v4`
- For the general endpoint, define a custom provider with the base URL override.
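For the general endpoint, a sketch of that base-URL override (the provider id, the `openai-completions` adapter choice, and the `${...}` substitution syntax are assumptions):

```json
{
  "models": {
    "providers": {
      "zai-general": {
        "api": "openai-completions",
        "baseUrl": "https://api.z.ai/api/paas/v4",
        "apiKey": "${ZAI_API_KEY}"
      }
    }
  }
}
```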
Related
- Configuration — agents
- Configuration — channels
- Configuration reference — other top-level keys
- Tools and plugins