
Fireworks exposes open-weight and routed models through an OpenAI-compatible API. OpenClaw includes a bundled Fireworks provider plugin that ships with two pre-cataloged Kimi models and accepts any Fireworks model or router id at runtime.
• Provider id: fireworks (alias: fireworks-ai)
• Plugin: bundled, enabledByDefault: true
• Auth env var: FIREWORKS_API_KEY
• Onboarding flag: --auth-choice fireworks-api-key
• Direct CLI flag: --fireworks-api-key <key>
• API: OpenAI-compatible (openai-completions)
• Base URL: https://api.fireworks.ai/inference/v1
• Default model: fireworks/accounts/fireworks/routers/kimi-k2p5-turbo
• Default alias: Kimi K2.5 Turbo
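
If you manage credentials outside of onboarding, exporting the documented env var is enough for an interactive session. A minimal sketch; the key value is a placeholder:

export FIREWORKS_API_KEY="<your-fireworks-key>"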

Getting started

1. Set the Fireworks API key

openclaw onboard --auth-choice fireworks-api-key
Onboarding stores the key against the fireworks provider in your auth profiles and sets the Kimi K2.5 Turbo (Fire Pass) router as the default model.
2. Verify the model is available

openclaw models list --provider fireworks
The list should include Kimi K2.6 and Kimi K2.5 Turbo (Fire Pass). If FIREWORKS_API_KEY is unresolved, openclaw models status --json reports the missing credential under auth.unusableProfiles.
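
For scripting, you can pull that field directly. A minimal sketch, assuming jq is installed; only the auth.unusableProfiles path comes from the documented output above:

openclaw models status --json | jq '.auth.unusableProfiles'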

Non-interactive setup

For scripted or CI installs, pass everything on the command line:
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice fireworks-api-key \
  --fireworks-api-key "$FIREWORKS_API_KEY" \
  --skip-health \
  --accept-risk
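
In CI it can help to fail fast when the secret was never injected, rather than letting onboarding error later. A minimal guard script, assuming only the flags shown above:

#!/usr/bin/env sh
set -eu

# Abort early if the secret is missing from the job environment.
if [ -z "${FIREWORKS_API_KEY:-}" ]; then
  echo "FIREWORKS_API_KEY is not set" >&2
  exit 1
fi

openclaw onboard --non-interactive \
  --mode local \
  --auth-choice fireworks-api-key \
  --fireworks-api-key "$FIREWORKS_API_KEY" \
  --skip-health \
  --accept-risk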

Built-in catalog

• fireworks/accounts/fireworks/models/kimi-k2p6: Kimi K2.6, input text + image, context 262,144 tokens, max output 262,144 tokens, thinking forced off
• fireworks/accounts/fireworks/routers/kimi-k2p5-turbo: Kimi K2.5 Turbo (Fire Pass), input text + image, context 256,000 tokens, max output 256,000 tokens, thinking forced off (default)
OpenClaw pins all Fireworks Kimi models to thinking: off because Fireworks rejects Kimi thinking parameters in production. Routing the same model through Moonshot directly preserves Kimi reasoning output. See thinking modes for switching between providers.

Custom Fireworks model ids

OpenClaw accepts any Fireworks model or router id at runtime. Use the exact id shown by Fireworks and prefix it with fireworks/. Dynamic resolution clones the Fire Pass template (text + image input, OpenAI-compatible API, default cost zero) and disables thinking automatically when the id matches the Kimi pattern.
{
  agents: {
    defaults: {
      model: {
        primary: "fireworks/accounts/fireworks/models/<your-model-id>",
      },
    },
  },
}
Every Fireworks model ref in OpenClaw starts with fireworks/ followed by the exact id or router path from the Fireworks platform. For example:
  • Router model: fireworks/accounts/fireworks/routers/kimi-k2p5-turbo
  • Direct model: fireworks/accounts/fireworks/models/<model-name>
OpenClaw strips the fireworks/ prefix when constructing the API request and sends the remaining path to the Fireworks endpoint as the OpenAI-compatible model field.
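
You can reproduce the resulting request with curl against the documented base URL. A sketch using the default router ref after prefix stripping; the message body is illustrative:

curl https://api.fireworks.ai/inference/v1/chat/completions \
  -H "Authorization: Bearer $FIREWORKS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "accounts/fireworks/routers/kimi-k2p5-turbo",
    "messages": [{"role": "user", "content": "Hello"}]
  }'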
Fireworks returns a 400 for Kimi K2.6 if the request carries reasoning_* parameters, even though Kimi supports thinking through Moonshot’s own API. The bundled policy (extensions/fireworks/thinking-policy.ts) advertises only the off thinking level for Kimi model ids, so manual /think switches and provider-policy surfaces stay aligned with the runtime contract. To use Kimi reasoning end-to-end, configure the Moonshot provider and route the same model through it.
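
As a sketch of that Moonshot setup, the config mirrors the Fireworks example above. The moonshot/ prefix is assumed to follow the same provider-prefix pattern, and the model id is a placeholder; check openclaw models list --provider moonshot for the exact ref:

{
  agents: {
    defaults: {
      model: {
        // Placeholder: substitute the exact Kimi ref reported for the Moonshot provider.
        primary: "moonshot/<kimi-model-id>",
      },
    },
  },
}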
If the Gateway runs as a managed service (launchd, systemd, Docker), the Fireworks key must be visible to that process — not just to your interactive shell.
A key exported only in ~/.profile will not reach a launchd or systemd daemon unless that environment is explicitly imported into the service. Set the key in ~/.openclaw/.env or via env.shellEnv so the Gateway process can read it.
On macOS, openclaw gateway install already wires ~/.openclaw/.env into the LaunchAgent environment file. Re-run install (or openclaw doctor --fix) after rotating the key.
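
A minimal ~/.openclaw/.env, assuming standard dotenv KEY=value syntax (the key value is a placeholder):

FIREWORKS_API_KEY=<your-fireworks-key>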

Model providers

Choosing providers, model refs, and failover behavior.

Thinking modes

/think levels, provider policies, and routing reasoning-capable models.

Moonshot

Run Kimi with native thinking output through Moonshot’s own API.

Troubleshooting

General troubleshooting and FAQ.