Fireworks exposes open-weight and routed models through an OpenAI-compatible API. OpenClaw includes a bundled Fireworks provider plugin that ships with two pre-cataloged Kimi models and accepts any Fireworks model or router id at runtime.
| Property | Value |
|---|---|
| Provider id | fireworks (alias: fireworks-ai) |
| Plugin | bundled, enabledByDefault: true |
| Auth env var | FIREWORKS_API_KEY |
| Onboarding flag | --auth-choice fireworks-api-key |
| Direct CLI flag | --fireworks-api-key <key> |
| API | OpenAI-compatible (openai-completions) |
| Base URL | https://api.fireworks.ai/inference/v1 |
| Default model | fireworks/accounts/fireworks/routers/kimi-k2p5-turbo |
| Default alias | Kimi K2.5 Turbo |
Getting started
Set the Fireworks API key
Export FIREWORKS_API_KEY, or supply the key during onboarding with --auth-choice fireworks-api-key. This registers the fireworks provider in your auth profiles and sets the Fire Pass Kimi K2.5 Turbo router as the default model.
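For example, a minimal sketch; the key value is a placeholder, and the second line assumes ~/.openclaw/.env uses plain KEY=VALUE lines:

```bash
# Make the key available to the current shell (placeholder value)
export FIREWORKS_API_KEY="fw-xxxxxxxxxxxxxxxx"

# Or persist it in the env file the managed Gateway reads (see the daemon section below)
echo 'FIREWORKS_API_KEY=fw-xxxxxxxxxxxxxxxx' >> ~/.openclaw/.env
```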
Non-interactive setup

For scripted or CI installs, pass everything on the command line:
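A sketch of such an install; openclaw onboard is an assumed entry point (substitute your actual onboarding command), while the two flags are the ones documented in the table above:

```bash
# Non-interactive onboarding sketch: `openclaw onboard` is an assumed command name;
# --auth-choice and --fireworks-api-key are the documented flags.
openclaw onboard \
  --auth-choice fireworks-api-key \
  --fireworks-api-key "$FIREWORKS_API_KEY"
```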
Built-in catalog

| Model ref | Name | Input | Context | Max output | Thinking |
|---|---|---|---|---|---|
| fireworks/accounts/fireworks/models/kimi-k2p6 | Kimi K2.6 | text + image | 262,144 | 262,144 | Forced off |
| fireworks/accounts/fireworks/routers/kimi-k2p5-turbo | Kimi K2.5 Turbo (Fire Pass) | text + image | 256,000 | 256,000 | Forced off (default) |
OpenClaw pins all Fireworks Kimi models to thinking: off because Fireworks rejects Kimi thinking parameters in production. Routing the same model through Moonshot directly preserves Kimi reasoning output. See thinking modes for switching between providers.

Custom Fireworks model ids
OpenClaw accepts any Fireworks model or router id at runtime. Use the exact id shown by Fireworks and prefix it with fireworks/. Dynamic resolution clones the Fire Pass template (text + image input, OpenAI-compatible API, default cost zero) and disables thinking automatically when the id matches the Kimi pattern.
How model id prefixing works
Every Fireworks model ref in OpenClaw starts with fireworks/ followed by the exact id or router path from the Fireworks platform. For example:

- Router model: fireworks/accounts/fireworks/routers/kimi-k2p5-turbo
- Direct model: fireworks/accounts/fireworks/models/<model-name>
OpenClaw strips the fireworks/ prefix when constructing the API request and sends the remaining path to the Fireworks endpoint as the OpenAI-compatible model field.
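As an illustration, the request OpenClaw ends up sending for the default router looks roughly like this OpenAI-compatible call (the message body is a stand-in):

```bash
# Roughly what reaches Fireworks for the default router: the fireworks/ prefix is gone
# and the rest of the path is passed as the OpenAI-compatible `model` field.
curl https://api.fireworks.ai/inference/v1/chat/completions \
  -H "Authorization: Bearer $FIREWORKS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "accounts/fireworks/routers/kimi-k2p5-turbo",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```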
Why thinking is forced off for Kimi
Fireworks K2.6 returns a 400 if the request carries reasoning_* parameters, even though Kimi supports thinking through Moonshot’s own API. The bundled policy (extensions/fireworks/thinking-policy.ts) advertises only the off thinking level for Kimi model ids, so manual /think switches and provider-policy surfaces stay aligned with the runtime contract.

To use Kimi reasoning end-to-end, configure the Moonshot provider and route the same model through it.
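For illustration, a request shaped like the following is the kind Fireworks rejects for Kimi K2.6; reasoning_effort here stands in for any reasoning_* field, and the exact field name is an assumption:

```bash
# Sketch of a request Fireworks would reject (HTTP 400) for Kimi K2.6 because it
# carries a reasoning_* parameter; the field name here is illustrative only.
curl https://api.fireworks.ai/inference/v1/chat/completions \
  -H "Authorization: Bearer $FIREWORKS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "accounts/fireworks/models/kimi-k2p6",
        "reasoning_effort": "high",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```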
Environment availability for the daemon
If the Gateway runs as a managed service (launchd, systemd, Docker), the Fireworks key must be visible to that process — not just to your interactive shell.

On macOS, openclaw gateway install already wires ~/.openclaw/.env into the LaunchAgent environment file. Re-run install (or openclaw doctor --fix) after rotating the key.
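On Linux or in containers, the same idea applies; a sketch with placeholder image and unit names:

```bash
# Docker: pass the key into the Gateway container (image name is a placeholder)
docker run -e FIREWORKS_API_KEY="$FIREWORKS_API_KEY" openclaw/gateway

# systemd: add the key to the service environment (unit name is a placeholder)
sudo systemctl edit openclaw-gateway
#   [Service]
#   Environment=FIREWORKS_API_KEY=fw-xxxxxxxxxxxxxxxx
sudo systemctl restart openclaw-gateway
```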
Related

Model providers
Choosing providers, model refs, and failover behavior.
Thinking modes
/think levels, provider policies, and routing reasoning-capable models.

Moonshot
Run Kimi with native thinking output through Moonshot’s own API.
Troubleshooting
General troubleshooting and FAQ.