An agent harness is the low-level executor for one prepared OpenClaw agent turn. It is not a model provider, not a channel, and not a tool registry. For the user-facing mental model, see Agent runtimes. Use this surface only for bundled or trusted native plugins. The contract is still experimental because the parameter types intentionally mirror the current embedded runner.
When to use a harness
Register an agent harness when a model family has its own native session runtime and the normal OpenClaw provider transport is the wrong abstraction. Examples:
- a native coding-agent server that owns threads and compaction
- a local CLI or daemon that must stream native plan/reasoning/tool events
- a model runtime that needs its own resume id in addition to the OpenClaw session transcript
What core still owns
Before a harness is selected, OpenClaw has already resolved:
- provider and model
- runtime auth state
- thinking level and context budget
- the OpenClaw transcript/session file
- workspace, sandbox, and tool policy
- channel reply callbacks and streaming callbacks
- model fallback and live model switching policy
The prepared attempt also carries params.runtimePlan, an OpenClaw-owned policy bundle for runtime decisions that must stay shared across PI and native harnesses (a usage sketch follows the list):
- runtimePlan.tools.normalize(...) and runtimePlan.tools.logDiagnostics(...) for provider-aware tool schema policy
- runtimePlan.transcript.resolvePolicy(...) for transcript sanitization and tool-call repair policy
- runtimePlan.delivery.isSilentPayload(...) for shared NO_REPLY and media delivery suppression
- runtimePlan.outcome.classifyRunResult(...) for model fallback classification
- runtimePlan.observability for resolved provider/model/harness metadata
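A minimal usage sketch, assuming a harness run step that receives this bundle; the helper names are the ones listed above, but every argument and return shape below is an assumption:

```ts
// Sketch only: the runtimePlan helper names come from the list above; the
// argument and return shapes shown here are assumptions, not published types.
async function runPreparedTurn(params: { runtimePlan: any; tools: unknown[] }) {
  const { runtimePlan } = params;

  // Provider-aware tool schema policy before handing tools to the native runtime.
  const tools = runtimePlan.tools.normalize(params.tools);
  runtimePlan.tools.logDiagnostics(tools);

  // Shared transcript sanitization and tool-call repair policy.
  const transcriptPolicy = runtimePlan.transcript.resolvePolicy();

  // ... drive the native session with `tools` and `transcriptPolicy` ...
  const nativeResult = { text: "NO_REPLY" }; // placeholder for the native run result

  // Shared NO_REPLY / media suppression and model-fallback classification.
  if (!runtimePlan.delivery.isSilentPayload(nativeResult.text)) {
    // deliver visible output through the normal reply callbacks
  }
  return runtimePlan.outcome.classifyRunResult(nativeResult);
}
```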
Register a harness
Import: openclaw/plugin-sdk/agent-harness
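A hedged registration sketch. The import path is the documented one; the registerAgentHarness method on the plugin API and the harness object shape are assumptions layered on the supports(...) and reset(...) hooks described elsewhere on this page:

```ts
// Sketch only: registration method name and harness shape are assumptions.
import * as agentHarness from "openclaw/plugin-sdk/agent-harness"; // documented import path

export function register(api: any) {
  api.registerAgentHarness({
    id: "my-native-runtime",
    // auto-mode selection: claim or decline the resolved provider/model pair.
    supports: ({ provider, model }: { provider: string; model: string }) =>
      provider === "my-provider",
    // Execute one prepared OpenClaw agent turn through the native runtime.
    run: async (params: any) => {
      /* drive the native session here, consulting params.runtimePlan as above */
    },
    // Clear native thread/resume state when the owning OpenClaw session resets.
    reset: async (sessionId: string) => {},
  });
}
```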
Selection policy
OpenClaw chooses a harness after provider/model resolution:
- Model-scoped runtime policy wins.
- Provider-scoped runtime policy comes next.
- auto asks registered harnesses if they support the resolved provider/model.
- If no registered harness matches, OpenClaw uses PI unless PI fallback is disabled.
In auto mode, PI fallback is
only used when no registered plugin harness supports the resolved
provider/model. Once a plugin harness has claimed a run, OpenClaw does not
replay that same turn through PI because that can change auth/runtime semantics
or duplicate side effects.
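As a reading aid, the selection order above can be sketched as pseudocode; this is illustrative only and not the real selector (type and field names are invented):

```ts
// Illustrative ordering only, not the actual implementation.
type Route = {
  provider: string;
  model: string;
  modelRuntime?: string;    // model-scoped runtime policy (wins)
  providerRuntime?: string; // provider-scoped runtime policy (next)
};
type Candidate = { id: string; supports(route: Route): boolean };

function selectHarness(route: Route, plugins: Candidate[], piFallbackEnabled: boolean) {
  const pinned = route.modelRuntime ?? route.providerRuntime;
  if (pinned && pinned !== "auto") {
    return { id: pinned, reason: "explicit-pin" }; // explicit pins fail closed if unavailable
  }
  // auto: ask registered plugin harnesses whether they support the route.
  const claimed = plugins.find((p) => p.supports(route));
  if (claimed) return { id: claimed.id, reason: "plugin-claimed" }; // never replayed through PI
  // Only fall back to PI when nothing claimed the route and fallback is enabled.
  return piFallbackEnabled ? { id: "pi", reason: "pi-fallback" } : undefined;
}
```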
Whole-session and whole-agent runtime pins are ignored by selection. That
includes stale session agentHarnessId values, agents.defaults.agentRuntime,
agents.list[].agentRuntime, and OPENCLAW_AGENT_RUNTIME. /status shows the
effective runtime selected from the provider/model route.
If the selected harness is surprising, enable agents/harness debug logging and
inspect the gateway’s structured agent harness selected record. It includes
the selected harness id, selection reason, runtime/fallback policy, and, in
auto mode, each plugin candidate’s support result.
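The exact field names of that record are not spelled out here; an illustrative shape consistent with the description above might look like:

```ts
// Illustrative only: real field names may differ.
interface AgentHarnessSelectedRecord {
  harnessId: string;        // selected harness id, e.g. "codex" or "pi"
  reason: string;           // selection reason
  runtimePolicy: string;    // resolved provider/model runtime policy
  piFallback: boolean;      // whether PI fallback was permitted for this route
  candidates?: Array<{ id: string; supported: boolean }>; // auto-mode support probes
}
```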
The bundled Codex plugin registers codex as its harness id. Core treats that
as an ordinary plugin harness id; Codex-specific aliases belong in the plugin
or operator config, not in the shared runtime selector.
Provider plus harness pairing
Most harnesses should also register a provider. The provider makes model refs, auth status, model metadata, and /model selection visible to the rest of
OpenClaw. The harness then claims that provider in supports(...).
The bundled Codex plugin follows this pattern:
- preferred user model refs: openai/gpt-5.5
- compatibility refs: legacy codex/gpt-* refs remain accepted, but new configs should not use them as normal provider/model refs
- harness id: codex
- auth: synthetic provider availability, because the Codex harness owns the native Codex login/session
- app-server request: OpenClaw sends the bare model id to Codex and lets the harness talk to the native app-server protocol
openai/gpt-* agent refs on the official
OpenAI provider select the Codex harness by default. Older codex/gpt-* refs
still select the Codex provider and harness for compatibility.
For operator setup, model prefix examples, and Codex-only configs, see
Codex Harness.
OpenClaw requires Codex app-server 0.125.0 or newer. The Codex plugin checks
the app-server initialize handshake and blocks older or unversioned servers so
OpenClaw only runs against the protocol surface it has been tested with. The
0.125.0 floor includes the native MCP hook payload support that landed in
Codex 0.124.0, while pinning OpenClaw to the newer tested stable line.
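A sketch of that kind of floor check, assuming the initialize handshake reports a semver-style version string (the field name and wiring are not specified here):

```ts
// Sketch only: enforce the 0.125.0 floor against a "major.minor.patch" string.
// Unversioned servers are rejected, matching the behavior described above.
const MIN_APP_SERVER_VERSION = "0.125.0";

function meetsVersionFloor(reported: string | undefined, floor = MIN_APP_SERVER_VERSION): boolean {
  if (!reported) return false;
  const parse = (v: string) => v.split(".").map((part) => Number.parseInt(part, 10) || 0);
  const [have, want] = [parse(reported), parse(floor)];
  for (let i = 0; i < 3; i++) {
    if ((have[i] ?? 0) !== (want[i] ?? 0)) return (have[i] ?? 0) > (want[i] ?? 0);
  }
  return true; // exactly at the floor
}
```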
Tool-result middleware
Bundled plugins can attach runtime-neutral tool-result middleware through api.registerAgentToolResultMiddleware(...) when their manifest declares the
targeted runtime ids in contracts.agentToolResultMiddleware. This trusted
seam is for async tool-result transforms that must run before PI or Codex feeds
tool output back into the model.
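A hedged sketch of the registration. The API method and the contracts.agentToolResultMiddleware manifest key are the ones named above; the runtime ids and the middleware signature are assumptions:

```ts
// plugin manifest (sketch): contracts.agentToolResultMiddleware: ["pi", "codex"]

export function register(api: any) {
  // Async transform that runs before PI or Codex feeds tool output back into
  // the model. The (toolResult, context) signature is an assumption.
  api.registerAgentToolResultMiddleware(async (toolResult: any, context: any) => {
    // e.g. redact secrets, truncate oversized output, attach annotations
    return toolResult;
  });
}
```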
Legacy bundled plugins can still use
api.registerCodexAppServerExtensionFactory(...) for Codex app-server-only
middleware, but new result transforms should use the runtime-neutral API.
The Pi-only api.registerEmbeddedExtensionFactory(...) hook has been removed;
Pi tool-result transforms must use runtime-neutral middleware.
Terminal outcome classification
Native harnesses that own their own protocol projection can use classifyAgentHarnessTerminalOutcome(...) from
openclaw/plugin-sdk/agent-harness-runtime when a completed turn produced no
visible assistant text. The helper returns empty, reasoning-only, or
planning-only so OpenClaw’s fallback policy can decide whether to retry on a
different model. It intentionally leaves prompt errors, in-flight turns, and
intentional silent replies such as NO_REPLY unclassified.
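A minimal usage sketch; the helper name, import path, and the empty / reasoning-only / planning-only buckets come from above, while the argument shape is an assumption:

```ts
import { classifyAgentHarnessTerminalOutcome } from "openclaw/plugin-sdk/agent-harness-runtime";

// Sketch only: called after a completed native turn produced no visible
// assistant text, so core's fallback policy can decide whether to retry.
function classifySilentTurn(nativeRun: { reasoning?: string; planSteps?: unknown[] }) {
  return classifyAgentHarnessTerminalOutcome({
    assistantText: "",              // no visible assistant output this turn
    reasoningText: nativeRun.reasoning,
    planEvents: nativeRun.planSteps,
  });
}
```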
Native Codex harness mode
The bundled codex harness is the native Codex mode for embedded OpenClaw
agent turns. Enable the bundled codex plugin first, and include codex in
plugins.allow if your config uses a restrictive allowlist. Native app-server
configs should use openai/gpt-*; OpenAI agent turns select the Codex harness
by default. Legacy openai-codex/* routes should be repaired with
openclaw doctor --fix, and legacy codex/* model refs remain compatibility
aliases for the native harness.
When this mode runs, Codex owns the native thread id, resume behavior,
compaction, and app-server execution. OpenClaw still owns the chat channel,
visible transcript mirror, tool policy, approvals, media delivery, and session
selection. Use a provider/model agentRuntime.id: "codex" pin when you need to ensure
that only the Codex app-server path can claim the run. Explicit plugin runtimes
fail closed; Codex app-server selection failures and runtime failures are not
retried through PI.
Runtime strictness
By default, OpenClaw uses the auto provider/model runtime policy: registered
plugin harnesses can claim a provider/model pair, and PI handles the turn when
none match. OpenAI agent refs on the official OpenAI provider default to Codex.
Use an explicit provider/model plugin runtime such as
agentRuntime.id: "codex" when missing harness selection should fail instead
of routing through PI. Selected plugin harness failures always fail hard. This
does not block an explicit provider/model agentRuntime.id: "pi".
For Codex-only embedded runs, pin the provider/model route with agentRuntime.id: "codex" so selection failures fail closed instead of routing through PI.
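This page does not show the full config schema; a minimal sketch, assuming a JSON-style config and using the keys discussed above (the models nesting is illustrative):

```jsonc
{
  // Keep the bundled Codex plugin allowed under a restrictive allowlist.
  "plugins": { "allow": ["codex"] },

  // Provider/model-scoped runtime pin (placement illustrative): fail closed
  // instead of routing through PI when the Codex harness cannot claim the run.
  "models": {
    "openai/gpt-5.5": {
      "agentRuntime": { "id": "codex" }
    }
  }
}
```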
Native sessions and transcript mirror
A harness may keep a native session id, thread id, or daemon-side resume token. Keep that binding explicitly associated with the OpenClaw session, and keep mirroring user-visible assistant/tool output into the OpenClaw transcript. The OpenClaw transcript remains the compatibility layer for:
- channel-visible session history
- transcript search and indexing
- switching back to the built-in PI harness on a later turn
- generic /new, /reset, and session deletion behavior
Implement reset(...) so OpenClaw can
clear it when the owning OpenClaw session is reset.
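A sketch of that binding, assuming an in-memory map keyed by the OpenClaw session id (storage and hook wiring are illustrative):

```ts
// Sketch only: bind the native resume/thread id to the OpenClaw session so a
// later turn can resume, and drop it when the owning session is reset.
const nativeThreads = new Map<string, string>(); // OpenClaw session id -> native thread id

async function runTurn(
  sessionId: string,
  startOrResume: (threadId?: string) => Promise<{ threadId: string; text: string }>,
) {
  const result = await startOrResume(nativeThreads.get(sessionId));
  nativeThreads.set(sessionId, result.threadId);
  // Keep mirroring user-visible assistant/tool output into the OpenClaw transcript here.
  return result;
}

// reset(...) hook: invoked when the owning OpenClaw session is reset.
async function reset(sessionId: string) {
  nativeThreads.delete(sessionId);
}
```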
Tool and media results
Core constructs the OpenClaw tool list and passes it into the prepared attempt. When a harness executes a dynamic tool call, return the tool result back through the harness result shape instead of sending channel media yourself. This keeps text, image, video, music, TTS, approval, and messaging-tool outputs on the same delivery path as PI-backed runs.
Current limitations
- The public import path is generic, but some attempt/result type aliases still carry Pi names for compatibility.
- Third-party harness installation is experimental. Prefer provider plugins until you need a native session runtime.
- Harness switching is supported across turns. Do not switch harnesses in the middle of a turn after native tools, approvals, assistant text, or message sends have started.