OpenClaw builds a custom system prompt for every agent run. The prompt is OpenClaw-owned: it does not use the pi-coding-agent default prompt, and OpenClaw assembles and injects it on every run. Prompt assembly has three layers:
- `buildAgentSystemPrompt` renders the prompt from explicit inputs. It should stay a pure renderer and should not read global config directly.
- `resolveAgentSystemPromptConfig` resolves config-backed prompt knobs such as owner display, TTS hints, model aliases, memory citation mode, and sub-agent delegation mode for a specific agent.
- Runtime adapters (embedded, CLI, command/export previews, compaction) gather live facts such as tools, sandbox state, channel capabilities, context files, and provider prompt contributions, then call the configured prompt facade.
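The split can be pictured with the following sketch. Only the two function names come from this doc; every shape and field here is a hypothetical simplification, not the real OpenClaw signatures.

```typescript
// Hypothetical, simplified shapes -- not the real OpenClaw signatures.
interface PromptConfig {
  ownerDisplay?: string;
  delegationMode?: "suggest" | "prefer";
}

interface RuntimeFacts {
  tools: string[];
  sandboxed: boolean;
}

// Layer 2 (sketch): resolve config-backed prompt knobs for one agent.
function resolveAgentSystemPromptConfig(
  agentOverrides: Partial<PromptConfig> = {},
): PromptConfig {
  // The real resolver reads agents.defaults plus agents.list[] overrides.
  return { delegationMode: "suggest", ...agentOverrides };
}

// Layer 1 (sketch): a pure renderer -- explicit inputs only, no global-config reads.
function buildAgentSystemPrompt(cfg: PromptConfig, facts: RuntimeFacts): string {
  const sections: string[] = [
    `## Tooling\nAvailable tools: ${facts.tools.join(", ")}`,
  ];
  if (facts.sandboxed) sections.push("## Sandbox\nRuntime is sandboxed.");
  if (cfg.delegationMode === "prefer") {
    sections.push("## Sub-Agent Delegation\nPrefer sessions_spawn for involved work.");
  }
  return sections.join("\n\n");
}

// Layer 3 (sketch): a runtime adapter gathers live facts, then calls the renderer.
function embeddedAdapterRun(): string {
  const cfg = resolveAgentSystemPromptConfig();
  const facts: RuntimeFacts = { tools: ["read", "exec"], sandboxed: true };
  return buildAgentSystemPrompt(cfg, facts);
}
```

The point of the layering is testability: the renderer can be unit-tested with fixed inputs, while only the thin adapters touch live runtime state.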
Provider prompt overlays can:

- replace a small set of named core sections (`interaction_style`, `tool_call_style`, `execution_bias`)
- inject a stable prefix above the prompt cache boundary
- inject a dynamic suffix below the prompt cache boundary

Use `before_prompt_build` prompt mutation only for compatibility or truly global prompt changes, not normal provider behavior.
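A minimal sketch of that overlay contract, assuming an `Overlay` shape and `applyOverlay` helper that are illustrative rather than the real API:

```typescript
// Illustrative overlay shape: replace named core sections, add prefix/suffix.
type CoreSection = "interaction_style" | "tool_call_style" | "execution_bias";

interface Overlay {
  replaceSections?: Partial<Record<CoreSection, string>>;
  stablePrefix?: string; // rendered above the prompt cache boundary
  dynamicSuffix?: string; // rendered below the prompt cache boundary
}

function applyOverlay(core: Map<CoreSection, string>, overlay: Overlay): string {
  const merged = new Map(core);
  for (const [name, body] of Object.entries(overlay.replaceSections ?? {})) {
    // Only the named core sections may be replaced; nothing new is added here.
    if (body !== undefined && merged.has(name as CoreSection)) {
      merged.set(name as CoreSection, body);
    }
  }
  const middle = [...merged.values()].join("\n\n");
  return [overlay.stablePrefix, middle, overlay.dynamicSuffix]
    .filter((part): part is string => Boolean(part))
    .join("\n\n");
}
```

Keeping the prefix stable and pushing anything dynamic into the suffix is what preserves the prompt cache across turns.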
The OpenAI GPT-5 family overlay keeps the core execution rule small and adds
model-specific guidance for persona latching, concise output, tool discipline,
parallel lookup, deliverable coverage, verification, missing context, and
terminal-tool hygiene.
## Structure
The prompt is intentionally compact and uses fixed sections:

- Tooling: structured-tool source-of-truth reminder plus runtime tool-use guidance.
- Execution Bias: compact follow-through guidance: act in-turn on actionable requests, continue until done or blocked, recover from weak tool results, check mutable state live, and verify before finalizing.
- Safety: short guardrail reminder to avoid power-seeking behavior or bypassing oversight.
- Skills (when available): tells the model how to load skill instructions on demand.
- OpenClaw Control: tells the model to prefer the `gateway` tool for config/restart work and to avoid inventing CLI commands.
- OpenClaw Self-Update: how to inspect config safely with `config.schema.lookup`, patch config with `config.patch`, replace the full config with `config.apply`, and run `update.run` only on explicit user request. The owner-only `gateway` tool also refuses to rewrite `tools.exec.ask`/`tools.exec.security`, including legacy `tools.bash.*` aliases that normalize to those protected exec paths.
- Workspace: working directory (`agents.defaults.workspace`).
- Documentation: local path to OpenClaw docs/source and when to read them.
- Workspace Files (injected): indicates bootstrap files are included below.
- Sandbox (when enabled): indicates sandboxed runtime, sandbox paths, and whether elevated exec is available.
- Current Date & Time: time zone only (cache-stable; the live clock comes from `session_status`).
- Assistant Output Directives: compact attachment, voice-note, and reply tag syntax.
- Heartbeats: heartbeat prompt and ack behavior, when heartbeats are enabled for the default agent.
- Runtime: host, OS, node, model, repo root (when detected), thinking level (one line).
- Reasoning: current visibility level plus the `/reasoning` toggle hint.
For background and scheduled work, Tooling adds follow-up guidance:

- use cron for future follow-up (check back later, reminders, recurring work) instead of `exec` sleep loops, `yieldMs` delay tricks, or repeated `process` polling
- use `exec`/`process` only for commands that start now and continue running in the background
- when automatic completion wake is enabled, start the command once and rely on the push-based wake path when it emits output or fails
- use `process` for logs, status, input, or intervention when you need to inspect a running command
- if the task is larger, prefer `sessions_spawn`; sub-agent completion is push-based and auto-announces back to the requester
- do not poll `subagents list`/`sessions_list` in a loop just to wait for completion
`agents.defaults.subagents.delegationMode` can strengthen this guidance. The default `suggest` mode keeps the baseline nudge. `prefer` adds a dedicated Sub-Agent Delegation section telling the main agent to act as a responsive coordinator and push anything more involved than a direct reply through `sessions_spawn`. This is prompt-only; tool policy still controls whether `sessions_spawn` is available.
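The prompt-only effect of the two modes can be sketched as follows; the helper and section text are illustrative, not the real implementation:

```typescript
// Sketch: delegationMode only changes prompt text, never tool availability.
type DelegationMode = "suggest" | "prefer";

function delegationSections(mode: DelegationMode): string[] {
  if (mode === "prefer") {
    // "prefer" adds a dedicated section; tool policy still gates sessions_spawn.
    return [
      "## Sub-Agent Delegation\nAct as a responsive coordinator; push anything " +
        "more involved than a direct reply through sessions_spawn.",
    ];
  }
  return []; // "suggest" keeps only the baseline nudge in Tooling.
}
```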
When the experimental `update_plan` tool is enabled, Tooling also tells the model to use it only for non-trivial multi-step work, keep exactly one `in_progress` step, and avoid repeating the whole plan after each update.
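The "exactly one `in_progress` step" rule is a simple invariant. A sketch, assuming a hypothetical `PlanStep` shape rather than the tool's real schema:

```typescript
// Hypothetical plan-step shape; the real update_plan schema may differ.
type StepStatus = "pending" | "in_progress" | "completed";
interface PlanStep { step: string; status: StepStatus }

// A valid plan update keeps exactly one step in_progress at a time.
function isValidPlanUpdate(steps: PlanStep[]): boolean {
  return steps.filter((s) => s.status === "in_progress").length === 1;
}
```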
Safety guardrails in the system prompt are advisory. They guide model behavior but do not enforce policy. Use tool policy, exec approvals, sandboxing, and channel allowlists for hard enforcement; operators can disable these by design.
On channels with native approval cards/buttons, the runtime prompt now tells the agent to rely on that native approval UI first. It should only include a manual `/approve` command when the tool result says chat approvals are unavailable or manual approval is the only path.
## Prompt modes
OpenClaw can render smaller system prompts for sub-agents. The runtime sets a `promptMode` for each run (not a user-facing config):

- `full` (default): includes all sections above.
- `minimal`: used for sub-agents; omits Memory Recall, OpenClaw Self-Update, Model Aliases, User Identity, Assistant Output Directives, Messaging, Silent Replies, and Heartbeats. Tooling, Safety, Skills (when supplied), Workspace, Sandbox, Current Date & Time (when known), Runtime, and injected context stay available.
- `none`: returns only the base identity line.
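The mode-based filtering can be sketched as a plain section filter; the section names follow this doc, while the helper itself is illustrative:

```typescript
// Sketch: section filtering by promptMode (helper is illustrative).
type PromptMode = "full" | "minimal" | "none";

const MINIMAL_OMITS = new Set([
  "Memory Recall", "OpenClaw Self-Update", "Model Aliases", "User Identity",
  "Assistant Output Directives", "Messaging", "Silent Replies", "Heartbeats",
]);

function sectionsForMode(mode: PromptMode, all: string[]): string[] {
  if (mode === "none") return []; // only the base identity line remains
  if (mode === "minimal") return all.filter((name) => !MINIMAL_OMITS.has(name));
  return all; // full
}
```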
With `promptMode=minimal`, extra injected prompts are labeled Subagent Context instead of Group Chat Context.
For channel auto-reply runs, OpenClaw can omit the generic Silent Replies
section when the direct/group chat context already includes the resolved
conversation-specific NO_REPLY behavior. This avoids repeating token mechanics
in both the global system prompt and channel context.
## Prompt snapshots
OpenClaw keeps committed prompt snapshots for the Codex runtime happy path under `test/fixtures/agents/prompt-snapshots/codex-runtime-happy-path/`. They render
selected app-server thread/turn params plus a reconstructed model-bound prompt
layer stack for Telegram direct, Discord group, and heartbeat turns. That stack
includes a pinned Codex gpt-5.5 model prompt fixture generated from Codex’s
model catalog/cache shape, the Codex happy-path permission developer text,
OpenClaw developer instructions, turn-scoped collaboration-mode instructions
when OpenClaw provides them, user turn input, and references to the dynamic tool
specs.
Refresh the pinned Codex model prompt fixture with `pnpm prompt:snapshots:sync-codex-model`. By default, the script looks for Codex's runtime cache at `$CODEX_HOME/models_cache.json`, then `~/.codex/models_cache.json`, and only then falls back to the maintainer Codex checkout convention at `~/code/codex/codex-rs/models-manager/models.json`. If none of those sources exist, the command exits without changing the committed fixture. Pass `--catalog <path>` to refresh from a specific `models_cache.json` or `models.json` file.
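The documented lookup order amounts to a first-existing-path search. A sketch of that order (the `resolveCodexCatalog` helper and its injectable `exists` predicate are illustrative, not the script's real internals):

```typescript
import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Sketch of the documented catalog lookup order; internals of the real script differ.
function resolveCodexCatalog(
  env: Record<string, string | undefined>,
  exists: (p: string) => boolean = existsSync,
): string | undefined {
  const candidates = [
    env.CODEX_HOME ? join(env.CODEX_HOME, "models_cache.json") : undefined,
    join(homedir(), ".codex", "models_cache.json"),
    join(homedir(), "code", "codex", "codex-rs", "models-manager", "models.json"),
  ].filter((p): p is string => p !== undefined);
  // undefined => exit without changing the committed fixture
  return candidates.find((p) => exists(p));
}
```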
These snapshots are still not a byte-for-byte raw OpenAI request capture. Codex can add runtime-owned workspace context such as `AGENTS.md`, environment context, memories, app/plugin instructions, and built-in Default collaboration-mode instructions inside the Codex runtime after OpenClaw sends thread and turn params.
Regenerate them with `pnpm prompt:snapshots:gen` and verify drift with `pnpm prompt:snapshots:check`. CI runs the drift check in the additional boundary shard so prompt changes and snapshot updates stay attached to the same PR.
## Workspace bootstrap injection
Bootstrap files are trimmed and appended under Project Context so the model sees identity and profile context without needing explicit reads:

- `AGENTS.md`
- `SOUL.md`
- `TOOLS.md`
- `IDENTITY.md`
- `USER.md`
- `HEARTBEAT.md`
- `BOOTSTRAP.md` (only on brand-new workspaces)
- `MEMORY.md` (when present)
`HEARTBEAT.md` is omitted on normal runs when heartbeats are disabled for the default agent or `agents.defaults.heartbeat.includeSystemPromptSection` is false. Keep injected files concise, especially `MEMORY.md`. `MEMORY.md` is intended to stay a curated long-term summary; detailed daily notes belong in `memory/*.md` where `memory_search` and `memory_get` can retrieve them on demand. Oversized `MEMORY.md` files increase prompt usage and can be partially injected because of the bootstrap file limits below.
When a session runs on the native Codex harness, Codex loads `AGENTS.md` through its own project-doc discovery. OpenClaw still resolves the remaining bootstrap files and forwards them as Codex config instructions, so `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`, `BOOTSTRAP.md`, and `MEMORY.md` keep the same workspace-context role without duplicating `AGENTS.md`.
`memory/*.md` daily files are not part of the normal bootstrap Project Context. On ordinary turns they are accessed on demand via the `memory_search` and `memory_get` tools, so they do not count against the context window unless the model explicitly reads them. Bare `/new` and `/reset` turns are the exception: the runtime can prepend recent daily memory as a one-shot startup-context block for that first turn.

Each injected bootstrap file is capped by `agents.defaults.bootstrapMaxChars` (default: 12000). Total injected bootstrap content across files is capped by `agents.defaults.bootstrapTotalMaxChars` (default: 60000). Missing files inject a short missing-file marker. When truncation occurs, OpenClaw can inject a concise system-prompt warning notice; control this with `agents.defaults.bootstrapPromptTruncationWarning` (`off`, `once`, `always`; default: `once`). Detailed raw/injected counts stay in diagnostics such as `/context`, `/status`, `doctor`, and logs.
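How the per-file and total caps interact can be sketched as a running-budget loop; the config key names come from this doc, while the `injectBootstrap` helper and its shapes are illustrative:

```typescript
// Sketch: per-file cap (bootstrapMaxChars) plus running total cap
// (bootstrapTotalMaxChars) applied while injecting files in order.
interface BootstrapFile { name: string; content: string }
interface Injected { name: string; content: string; truncated: boolean }

function injectBootstrap(
  files: BootstrapFile[],
  maxChars = 12_000,      // agents.defaults.bootstrapMaxChars
  totalMaxChars = 60_000, // agents.defaults.bootstrapTotalMaxChars
): Injected[] {
  const out: Injected[] = [];
  let used = 0;
  for (const f of files) {
    // Each file gets at most the per-file cap, shrunk by what's already used.
    const budget = Math.min(maxChars, totalMaxChars - used);
    if (budget <= 0) {
      out.push({ name: f.name, content: "", truncated: true });
      continue;
    }
    const content = f.content.slice(0, budget);
    out.push({ name: f.name, content, truncated: content.length < f.content.length });
    used += content.length;
  }
  return out;
}
```

Because later files draw from whatever budget remains, an oversized early file (such as a bloated `MEMORY.md`) can squeeze out files injected after it.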
For memory files, truncation is not data loss: the file remains intact on disk, but the model only sees the shortened injected copy until it reads or searches memory directly. If `MEMORY.md` is repeatedly truncated, distill it into a shorter durable summary and move detailed history into `memory/*.md`, or intentionally raise the bootstrap limits.
Sub-agent sessions only inject `AGENTS.md` and `TOOLS.md` (other bootstrap files are filtered out to keep the sub-agent context small).
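That filter is small enough to state directly; a sketch, with the allowlist taken from this doc and the helper name invented for illustration:

```typescript
// Sketch: sub-agent runs keep only these bootstrap files.
const SUBAGENT_BOOTSTRAP = new Set(["AGENTS.md", "TOOLS.md"]);

function filterBootstrapForSubagent(files: string[]): string[] {
  return files.filter((f) => SUBAGENT_BOOTSTRAP.has(f));
}
```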
Internal hooks can intercept this step via `agent:bootstrap` to mutate or replace the injected bootstrap files (for example, swapping `SOUL.md` for an alternate persona). If you want to make the agent sound less generic, start with the SOUL.md Personality Guide.
To inspect how much each injected file contributes (raw vs injected, truncation, plus tool schema overhead), use `/context list` or `/context detail`. See Context.
## Time handling
The system prompt includes a dedicated Current Date & Time section when the user timezone is known. To keep the prompt cache-stable, it now only includes the time zone (no dynamic clock or time format). Use `session_status` when the agent needs the current time; the status card includes a timestamp line. The same tool can optionally set a per-session model override (`model=default` clears it).
Configure with:

- `agents.defaults.userTimezone`
- `agents.defaults.timeFormat` (`auto` | `12` | `24`)
## Skills
When eligible skills exist, OpenClaw injects a compact available-skills list (`formatSkillsForPrompt`) that includes the file path for each skill. The prompt instructs the model to use `read` to load the `SKILL.md` at the listed location (workspace, managed, or bundled). If no skills are eligible, the Skills section is omitted.
Eligibility includes skill metadata gates, runtime environment/config checks, and the effective agent skill allowlist when `agents.defaults.skills` or `agents.list[].skills` is configured.
Plugin-bundled skills are eligible only when their owning plugin is enabled.
This lets tool plugins expose deeper operating guides without embedding all of
that guidance directly in every tool description.
The injected skills list is size-limited:

- Global default: `skills.limits.maxSkillsPromptChars`
- Per-agent override: `agents.list[].skillsLimits.maxSkillsPromptChars`

Broader context limits (`agents.defaults.contextLimits.*`, `agents.list[].contextLimits.*`) apply to other injected content, such as `memory_get` results, live tool results, and post-compaction `AGENTS.md` refreshes.
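A character-budgeted list renderer might look like the following sketch; the real `formatSkillsForPrompt` may differ in shape and behavior, and the line format here is invented:

```typescript
// Sketch: render an available-skills list under a prompt-chars budget
// (skills.limits.maxSkillsPromptChars). Shapes and line format are illustrative.
interface Skill { name: string; path: string }

function formatSkillsForPrompt(skills: Skill[], maxChars: number): string {
  const lines: string[] = [];
  let used = 0;
  for (const s of skills) {
    const line = `- ${s.name}: read ${s.path}`;
    if (used + line.length + 1 > maxChars) break; // drop skills past the budget
    lines.push(line);
    used += line.length + 1; // +1 for the joining newline
  }
  return lines.join("\n");
}
```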
## Documentation
The system prompt includes a Documentation section. When local docs are available, it points to the local OpenClaw docs directory (`docs/` in a Git checkout or the bundled npm package docs). If local docs are unavailable, it falls back to https://docs.openclaw.ai.
The same section also includes the OpenClaw source location. Git checkouts expose the local source root so the agent can inspect code directly. Package installs include the GitHub source URL and tell the agent to review source there whenever the docs are incomplete or stale. The prompt also notes the public docs mirror, community Discord, and ClawHub (https://clawhub.ai) for skills discovery. It tells the model to consult docs first for OpenClaw behavior, commands, configuration, or architecture, and to run `openclaw status` itself when possible (asking the user only when it lacks access).

For configuration specifically, it points agents to the gateway tool action `config.schema.lookup` for exact field-level docs and constraints, then to `docs/gateway/configuration.md` and `docs/gateway/configuration-reference.md` for broader guidance.