Cerebras provides high-speed, OpenAI-compatible inference on custom hardware. OpenClaw includes a bundled Cerebras provider plugin with a static four-model catalog.
Property          Value
Provider id       cerebras
Plugin            bundled, enabledByDefault: true
Auth env var      CEREBRAS_API_KEY
Onboarding flag   --auth-choice cerebras-api-key
Direct CLI flag   --cerebras-api-key <key>
API               OpenAI-compatible (openai-completions)
Base URL          https://api.cerebras.ai/v1
Default model     cerebras/zai-glm-4.7
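
Because the provider speaks the standard OpenAI chat-completions protocol, a request body can be sketched directly against the base URL above. A minimal payload builder, as an illustration only (the model id is the catalog id without the cerebras/ provider prefix; the field names are the standard OpenAI chat-completions schema, not an OpenClaw API):

```python
# Sketch of an OpenAI-compatible chat request for the endpoint above.
# POST https://api.cerebras.ai/v1/chat/completions
def build_chat_request(prompt: str, model: str = "zai-glm-4.7") -> dict:
    return {
        "model": model,  # catalog id without the cerebras/ prefix
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 8192,  # output cap shared by the bundled models
    }

payload = build_chat_request("ping")
```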

Getting started

1. Get an API key

   Create an API key in the Cerebras Cloud Console.

2. Run onboarding

   openclaw onboard --auth-choice cerebras-api-key

3. Verify models are available

   openclaw models list --provider cerebras

   The list should include all four bundled models. If CEREBRAS_API_KEY is unresolved, openclaw models status --json reports the missing credential under auth.unusableProfiles.
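
The auth.unusableProfiles field can also be checked programmatically. A sketch, assuming a payload shape like the one below (only the auth.unusableProfiles path comes from the docs; the surrounding structure of the `openclaw models status --json` output is a guess):

```python
import json

# Hypothetical `openclaw models status --json` output; only the
# auth.unusableProfiles path is documented, the rest is assumed.
raw = (
    '{"auth": {"unusableProfiles": '
    '[{"profile": "cerebras:default", "reason": "missing CEREBRAS_API_KEY"}]}}'
)

status = json.loads(raw)
missing = [p["profile"] for p in status["auth"].get("unusableProfiles", [])]
if missing:
    print("unusable auth profiles:", ", ".join(missing))
```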

Non-interactive setup

openclaw onboard --non-interactive \
  --mode local \
  --auth-choice cerebras-api-key \
  --cerebras-api-key "$CEREBRAS_API_KEY"
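
In CI it can help to fail fast when the key is absent before invoking the onboarding command above. A small guard sketch (generic shell; it does not run openclaw itself, and the function name is illustrative):

```shell
# Guard sketch: confirm CEREBRAS_API_KEY is non-empty before onboarding.
check_cerebras_key() {
  if [ -z "${CEREBRAS_API_KEY:-}" ]; then
    echo "missing CEREBRAS_API_KEY" >&2
    return 1
  fi
  echo "ok"
}

# Example: run the check with a placeholder key set in a subshell.
( export CEREBRAS_API_KEY="csk-example"; check_cerebras_key )
```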

Built-in catalog

OpenClaw ships a static Cerebras catalog that mirrors the public OpenAI-compatible endpoint. All four models share a 128k context and 8,192 max-output tokens.
Model ref                                 Name                  Reasoning  Notes
cerebras/zai-glm-4.7                      Z.ai GLM 4.7          yes        Default model; preview reasoning model
cerebras/gpt-oss-120b                     GPT OSS 120B          yes        Production reasoning model
cerebras/qwen-3-235b-a22b-instruct-2507   Qwen 3 235B Instruct  no         Preview non-reasoning model
cerebras/llama3.1-8b                      Llama 3.1 8B          no         Production speed-focused model

Cerebras marks zai-glm-4.7 and qwen-3-235b-a22b-instruct-2507 as preview models; llama3.1-8b and qwen-3-235b-a22b-instruct-2507 are also documented for deprecation on May 27, 2026. Check Cerebras’ supported-models page before relying on them for production workloads.
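
Given the shared 128k context and 8,192 max-output tokens, the prompt budget works out the same for all four models. A quick arithmetic sketch (assuming 128k means 131,072 tokens):

```python
# Shared limits from the catalog above (128k assumed to be 131072 tokens).
CONTEXT_WINDOW = 131072
MAX_OUTPUT_TOKENS = 8192

# Tokens left for the prompt once the full output budget is reserved.
max_prompt_tokens = CONTEXT_WINDOW - MAX_OUTPUT_TOKENS
print(max_prompt_tokens)
```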

Manual config

The bundled plugin usually means you only need the API key. Use explicit models.providers.cerebras config when you want to override model metadata or run in mode: "merge" against the static catalog:
{
  env: { CEREBRAS_API_KEY: "csk-..." },
  agents: {
    defaults: {
      model: { primary: "cerebras/zai-glm-4.7" },
    },
  },
  models: {
    mode: "merge",
    providers: {
      cerebras: {
        baseUrl: "https://api.cerebras.ai/v1",
        apiKey: "${CEREBRAS_API_KEY}",
        api: "openai-completions",
        models: [
          { id: "zai-glm-4.7", name: "Z.ai GLM 4.7" },
          { id: "gpt-oss-120b", name: "GPT OSS 120B" },
        ],
      },
    },
  },
}
If the Gateway runs as a daemon (launchd, systemd, Docker), make sure CEREBRAS_API_KEY is available to that process — for example in ~/.openclaw/.env or through env.shellEnv. A key sitting only in ~/.profile will not help a managed service unless the env is imported separately.
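
For a systemd-managed Gateway, one way to import the key is an EnvironmentFile drop-in. An illustrative user-unit override (the unit name and file path are assumptions; adapt them to your setup):

```ini
# ~/.config/systemd/user/openclaw-gateway.service.d/override.conf
# Hypothetical unit name; %h expands to the user's home directory.
[Service]
EnvironmentFile=%h/.openclaw/.env
```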

Model providers

Choosing providers, model refs, and failover behavior.

Thinking modes

Reasoning effort levels for the two reasoning-capable Cerebras models.

Configuration reference

Agent defaults and model configuration.

Models FAQ

Auth profiles, switching models, and resolving “no profile” errors.