OpenRouter provides a unified API that routes requests to many models behind a single endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by switching the base URL.

## Documentation Index
Fetch the complete documentation index at: https://docs.openclaw.ai/llms.txt
Use this file to discover all available pages before exploring further.
## Getting started
### Get your API key
Create an API key at openrouter.ai/keys.
### Config example
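A minimal sketch of the provider block; everything beyond the `models.providers.openrouter.apiKey` path named on this page (including the JSON5 layout) is an assumption:

```json5
{
  models: {
    providers: {
      openrouter: {
        // If omitted, OpenClaw falls back to the OPENROUTER_API_KEY
        // environment variable.
        apiKey: "sk-or-...",
      },
    },
  },
}
```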
## Model references
Model refs follow the pattern `openrouter/<provider>/<model>`. For the full list of available providers and models, see /concepts/model-providers.

| Model ref | Notes |
|---|---|
| `openrouter/auto` | OpenRouter automatic routing |
| `openrouter/moonshotai/kimi-k2.6` | Kimi K2.6 via MoonshotAI |
| `openrouter/moonshotai/kimi-k2.5` | Kimi K2.5 via MoonshotAI |
## Image generation
OpenRouter can also back the `image_generate` tool. Use an OpenRouter image model under `agents.defaults.imageGenerationModel`. The model must advertise `modalities: ["image", "text"]`. Gemini image models receive supported `aspectRatio` and `resolution` hints through OpenRouter’s `image_config`. Use `agents.defaults.imageGenerationModel.timeoutMs` for slower OpenRouter image models; the `image_generate` tool’s per-call `timeoutMs` parameter still wins.
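For example, a hedged sketch of an image-model override; the `model` key name and the specific image-capable ref are assumptions, while `imageGenerationModel.timeoutMs` comes from this page:

```json5
{
  agents: {
    defaults: {
      imageGenerationModel: {
        // Hypothetical ref: any OpenRouter model that advertises
        // modalities: ["image", "text"] should qualify.
        model: "openrouter/google/gemini-2.5-flash-image",
        // Optional: raise the timeout for slower OpenRouter image models.
        timeoutMs: 120000,
      },
    },
  },
}
```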
## Video generation
OpenRouter can also back the `video_generate` tool through its asynchronous `/videos` API. Use an OpenRouter video model under `agents.defaults.videoGenerationModel`. OpenClaw submits the generation job, polls the returned `polling_url`, and downloads the completed video from OpenRouter’s `unsigned_urls` or the documented job content endpoint.

Reference images are sent as first/last frame images by default; images tagged with `reference_image` are sent as OpenRouter input references. The bundled `google/veo-3.1-fast` default advertises the currently supported 4/6/8 second durations, 720P/1080P resolutions, and 16:9/9:16 aspect ratios. Video-to-video is not registered for OpenRouter because the upstream video generation API currently accepts only text and image references.
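The setup above can be sketched as config; the `model` key name is an assumption, and the ref mirrors the bundled `google/veo-3.1-fast` default mentioned above:

```json5
{
  agents: {
    defaults: {
      videoGenerationModel: {
        // Mirrors the bundled default; advertised limits are
        // 4/6/8 s durations, 720P/1080P, and 16:9 or 9:16.
        model: "openrouter/google/veo-3.1-fast",
      },
    },
  },
}
```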
## Text-to-speech
OpenRouter can also be used as a TTS provider through its OpenAI-compatible `/audio/speech` endpoint. If `messages.tts.providers.openrouter.apiKey` is omitted, TTS reuses `models.providers.openrouter.apiKey`, then the `OPENROUTER_API_KEY` environment variable.
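A sketch of the TTS provider block, assuming the same JSON5 layout as the rest of the config; only the `messages.tts.providers.openrouter.apiKey` path is taken from this page:

```json5
{
  messages: {
    tts: {
      providers: {
        openrouter: {
          // Optional: omit to reuse models.providers.openrouter.apiKey,
          // then the OPENROUTER_API_KEY environment variable.
          apiKey: "sk-or-...",
        },
      },
    },
  },
}
```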
## Speech-to-text (inbound audio)
OpenRouter can transcribe inbound voice/audio attachments through the shared `tools.media.audio` path using its STT endpoint (`/audio/transcriptions`). This applies to any channel plugin that forwards inbound voice/audio into media understanding preflight. Audio is sent as `input_audio` content (the OpenRouter STT contract), not as multipart OpenAI form uploads.
## Authentication and headers
OpenRouter authenticates with a Bearer token carrying your API key. On real OpenRouter requests (`https://openrouter.ai/api/v1`), OpenClaw also adds OpenRouter’s documented app-attribution headers:
| Header | Value |
|---|---|
| `HTTP-Referer` | `https://openclaw.ai` |
| `X-OpenRouter-Title` | `OpenClaw` |
| `X-OpenRouter-Categories` | `cli-agent,cloud-agent,programming-app,creative-writing,writing-assistant,general-chat,personal-agent` |
## Advanced configuration
### Response caching
OpenRouter response caching is opt-in. Enable it per OpenRouter model with model params. OpenClaw sends `X-OpenRouter-Cache: true` and, when configured, `X-OpenRouter-Cache-TTL`. `responseCacheClear: true` forces a refresh for the current request and stores the replacement response. Snake_case aliases (`response_cache`, `response_cache_ttl_seconds`, and `response_cache_clear`) are also accepted.

This is separate from provider prompt caching and from OpenRouter’s Anthropic `cache_control` markers. It is only applied on verified openrouter.ai routes, not custom proxy base URLs.
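The opt-in flags above can be sketched as model params; the camelCase `responseCache`/`responseCacheTtlSeconds` names are inferred from the documented snake_case aliases, and the exact nesting of `params` under a model entry is an assumption:

```json5
{
  params: {
    responseCache: true,           // sends X-OpenRouter-Cache: true
    responseCacheTtlSeconds: 300,  // sends X-OpenRouter-Cache-TTL: 300
    // responseCacheClear: true,   // force-refresh the cached response once
  },
}
```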
### Anthropic cache markers
On verified OpenRouter routes, Anthropic model refs keep the OpenRouter-specific Anthropic `cache_control` markers that OpenClaw uses for better prompt-cache reuse on system/developer prompt blocks.
### Anthropic reasoning prefill
On verified OpenRouter routes, Anthropic model refs with reasoning enabled
drop trailing assistant prefill turns before the request reaches OpenRouter,
matching Anthropic’s requirement that reasoning conversations end with a user
turn.
### Thinking / reasoning injection
On supported non-`auto` routes, OpenClaw maps the selected thinking level to OpenRouter proxy reasoning payloads. Unsupported model hints and `openrouter/auto` skip that reasoning injection. Hunter Alpha also skips proxy reasoning for stale configured model refs because OpenRouter could return final-answer text in reasoning fields for that retired route.
### DeepSeek V4 reasoning replay
On verified OpenRouter routes, `openrouter/deepseek/deepseek-v4-flash` and `openrouter/deepseek/deepseek-v4-pro` fill missing `reasoning_content` on replayed assistant turns so thinking/tool conversations keep DeepSeek V4’s required follow-up shape. OpenClaw sends OpenRouter-supported `reasoning_effort` values for these routes; `xhigh` is the highest advertised level, and stale `max` overrides are mapped to `xhigh`.
### OpenAI-only request shaping
OpenRouter still runs through the proxy-style OpenAI-compatible path, so native OpenAI-only request shaping such as `serviceTier`, Responses `store`, OpenAI reasoning-compat payloads, and prompt-cache hints is not forwarded.
### Gemini-backed routes
Gemini-backed OpenRouter refs stay on the proxy-Gemini path: OpenClaw keeps
Gemini thought-signature sanitation there, but does not enable native Gemini
replay validation or bootstrap rewrites.
### Provider routing metadata
If you pass OpenRouter provider routing under model params, OpenClaw forwards
it as OpenRouter routing metadata before the shared stream wrappers run.
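For illustration, a hedged sketch of routing metadata under model params; the `provider` object shape follows OpenRouter’s documented provider preferences, and the nesting under `params` is an assumption:

```json5
{
  params: {
    // Forwarded to OpenRouter as routing metadata before the shared
    // stream wrappers run.
    provider: {
      order: ["moonshotai"],
      allow_fallbacks: true,
    },
  },
}
```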
## Related
- **Model selection**: choosing providers, model refs, and failover behavior.
- **Configuration reference**: full config reference for agents, models, and providers.