Cerebras provides high-speed, OpenAI-compatible inference on its custom hardware. OpenClaw includes a bundled Cerebras provider plugin with a static four-model catalog.
| Property | Value |
|---|---|
| Provider id | cerebras |
| Plugin | bundled, enabledByDefault: true |
| Auth env var | CEREBRAS_API_KEY |
| Onboarding flag | --auth-choice cerebras-api-key |
| Direct CLI flag | --cerebras-api-key <key> |
| API | OpenAI-compatible (openai-completions) |
| Base URL | https://api.cerebras.ai/v1 |
| Default model | cerebras/zai-glm-4.7 |
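Because the endpoint is OpenAI-compatible, any OpenAI-style client can target the base URL above. The stdlib sketch below builds (but does not send) a standard chat-completions request; note that the provider-side model id shown here assumes the upstream API drops the `cerebras/` ref prefix:

```python
import json
import os
import urllib.request

# Base URL from the table above.
BASE_URL = "https://api.cerebras.ai/v1"

def build_request(prompt: str, model: str = "zai-glm-4.7") -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,  # assumed upstream id, without the "cerebras/" ref prefix
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('CEREBRAS_API_KEY', '')}",
        },
        method="POST",
    )

req = build_request("Hello from OpenClaw")
print(req.full_url)  # https://api.cerebras.ai/v1/chat/completions
```

Sending the request requires a valid `CEREBRAS_API_KEY` in the environment; OpenClaw's Gateway does the equivalent of this for you once the key is configured.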
## Getting started
### Get an API key
Create an API key in the Cerebras Cloud Console.
### Non-interactive setup
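A sketch of scripted setup using the flags from the table above. The `openclaw onboard` command name is an assumption; the flags themselves come from this page:

```shell
# Export the key so the Gateway process can pick it up
export CEREBRAS_API_KEY="<key>"

# Onboard non-interactively (command name assumed; flags from the table above)
openclaw onboard --auth-choice cerebras-api-key

# Or pass the key directly on the CLI
openclaw onboard --cerebras-api-key "$CEREBRAS_API_KEY"
```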
## Built-in catalog
OpenClaw ships a static Cerebras catalog that mirrors the public OpenAI-compatible endpoint. All four models share a 128k context window and an 8,192-token output cap.

| Model ref | Name | Reasoning | Notes |
|---|---|---|---|
| cerebras/zai-glm-4.7 | Z.ai GLM 4.7 | yes | Default model; preview reasoning model |
| cerebras/gpt-oss-120b | GPT OSS 120B | yes | Production reasoning model |
| cerebras/qwen-3-235b-a22b-instruct-2507 | Qwen 3 235B Instruct | no | Preview non-reasoning model |
| cerebras/llama3.1-8b | Llama 3.1 8B | no | Production speed-focused model |
## Manual config
Because the plugin is bundled, you usually only need the API key. Add an explicit `models.providers.cerebras` entry when you want to override model metadata or run in `mode: "merge"` against the static catalog:
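A minimal sketch of such an override. The `mode: "merge"` key comes from this page; the field names inside the provider block are assumptions for illustration:

```jsonc
{
  "models": {
    "providers": {
      "cerebras": {
        // Merge these overrides into the bundled static catalog
        "mode": "merge",
        "models": {
          // Hypothetical metadata override for the default model
          "cerebras/zai-glm-4.7": { "maxOutputTokens": 8192 }
        }
      }
    }
  }
}
```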
If the Gateway runs as a daemon (launchd, systemd, Docker), make sure `CEREBRAS_API_KEY` is available to that process, for example in `~/.openclaw/.env` or through `env.shellEnv`. A key that lives only in `~/.profile` will not reach a managed service unless the environment is imported separately.

## Related
- **Model providers**: Choosing providers, model refs, and failover behavior.
- **Thinking modes**: Reasoning effort levels for the two reasoning-capable Cerebras models.
- **Configuration reference**: Agent defaults and model configuration.
- **Models FAQ**: Auth profiles, switching models, and resolving “no profile” errors.