

Groq provides ultra-fast inference on open-weight models (Llama, Gemma, Kimi, Qwen, GPT OSS, and more) using custom LPU hardware. OpenClaw includes a bundled Groq plugin that registers both an OpenAI-compatible chat provider and an audio media-understanding provider.
| Property | Value |
| --- | --- |
| Provider id | groq |
| Plugin | bundled, enabledByDefault: true |
| Auth env var | GROQ_API_KEY |
| Onboarding flag | --auth-choice groq-api-key |
| API | OpenAI-compatible (openai-completions) |
| Base URL | https://api.groq.com/openai/v1 |
| Audio transcription | whisper-large-v3-turbo (default) |
| Suggested chat default | groq/llama-3.3-70b-versatile |

Getting started

1. Get an API key

   Create an API key at console.groq.com/keys.

2. Set the API key

   openclaw onboard --auth-choice groq-api-key

3. Set a default model

   {
     agents: {
       defaults: {
         model: { primary: "groq/llama-3.3-70b-versatile" },
       },
     },
   }

4. Verify the catalog is reachable

   openclaw models list --provider groq

Config file example

{
  env: { GROQ_API_KEY: "gsk_..." },
  agents: {
    defaults: {
      model: { primary: "groq/llama-3.3-70b-versatile" },
    },
  },
}

Built-in catalog

OpenClaw ships a manifest-backed Groq catalog with both reasoning and non-reasoning entries. Run openclaw models list --provider groq to see the bundled rows for your installed version, or check console.groq.com/docs/models for Groq’s authoritative list.
| Model ref | Name | Reasoning | Input | Context |
| --- | --- | --- | --- | --- |
| groq/llama-3.3-70b-versatile | Llama 3.3 70B Versatile | no | text | 131,072 |
| groq/llama-3.1-8b-instant | Llama 3.1 8B Instant | no | text | 131,072 |
| groq/meta-llama/llama-4-maverick-17b-128e-instruct | Llama 4 Maverick 17B | no | text + image | 131,072 |
| groq/meta-llama/llama-4-scout-17b-16e-instruct | Llama 4 Scout 17B | no | text + image | 131,072 |
| groq/llama3-70b-8192 | Llama 3 70B | no | text | 8,192 |
| groq/llama3-8b-8192 | Llama 3 8B | no | text | 8,192 |
| groq/gemma2-9b-it | Gemma 2 9B | no | text | 8,192 |
| groq/mistral-saba-24b | Mistral Saba 24B | no | text | 32,768 |
| groq/moonshotai/kimi-k2-instruct | Kimi K2 Instruct | no | text | 131,072 |
| groq/moonshotai/kimi-k2-instruct-0905 | Kimi K2 Instruct 0905 | no | text | 262,144 |
| groq/openai/gpt-oss-120b | GPT OSS 120B | yes | text | 131,072 |
| groq/openai/gpt-oss-20b | GPT OSS 20B | yes | text | 131,072 |
| groq/openai/gpt-oss-safeguard-20b | Safety GPT OSS 20B | yes | text | 131,072 |
| groq/qwen-qwq-32b | Qwen QwQ 32B | yes | text | 131,072 |
| groq/qwen/qwen3-32b | Qwen3 32B | yes | text | 131,072 |
| groq/deepseek-r1-distill-llama-70b | DeepSeek R1 Distill Llama 70B | yes | text | 131,072 |
| groq/groq/compound | Compound | yes | text | 131,072 |
| groq/groq/compound-mini | Compound Mini | yes | text | 131,072 |
The catalog evolves with each OpenClaw release, so newly added or deprecated models may not appear until you upgrade.

Reasoning models

OpenClaw maps its shared /think levels to Groq’s model-specific reasoning_effort values:
  • For qwen/qwen3-32b, disabled thinking sends none and enabled thinking sends default.
  • For Groq GPT OSS reasoning models (openai/gpt-oss-*), OpenClaw sends low, medium, or high based on /think level. Disabled thinking omits reasoning_effort because those models do not support a disabled value.
  • DeepSeek R1 Distill, Qwen QwQ, and Compound use Groq’s native reasoning surface; /think controls visibility but the model always reasons.
See Thinking modes for the shared /think levels and how OpenClaw translates them per provider.

Audio transcription

Groq’s bundled plugin also registers an audio media-understanding provider so voice messages can be transcribed through the shared tools.media.audio surface.
| Property | Value |
| --- | --- |
| Shared config path | tools.media.audio |
| Default base URL | https://api.groq.com/openai/v1 |
| Default model | whisper-large-v3-turbo |
| Auto priority | 20 |
| API endpoint | OpenAI-compatible /audio/transcriptions |
To make Groq the default audio backend:
{
  tools: {
    media: {
      audio: {
        models: [{ provider: "groq" }],
      },
    },
  },
}
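
For reference, a direct call to the endpoint can be sketched as follows. Only the base URL, path, and default model come from the table above; build_transcription_request is a hypothetical helper, not part of OpenClaw:

```python
import os

GROQ_BASE_URL = "https://api.groq.com/openai/v1"

def build_transcription_request(audio_path: str,
                                model: str = "whisper-large-v3-turbo"):
    """Assemble URL, auth header, and multipart fields for a transcription call."""
    return {
        "url": f"{GROQ_BASE_URL}/audio/transcriptions",
        "headers": {"Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', '')}"},
        # Sent as multipart/form-data: the model name plus the audio file.
        "fields": {"model": model, "file": audio_path},
    }

req = build_transcription_request("voice-note.ogg")
print(req["url"])
```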
If the Gateway runs as a managed service (launchd, systemd, Docker), GROQ_API_KEY must be visible to that process — not just to your interactive shell.
A key sitting only in ~/.profile will not help a launchd or systemd daemon unless that environment is imported there too. Set the key in ~/.openclaw/.env or via env.shellEnv to make it readable from the gateway process.
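
A quick preflight, run in the same environment as the gateway process, can confirm the key is visible. The gsk_ prefix check is an assumption based on the key format shown in the config example above:

```python
import os

def groq_key_visible() -> bool:
    """True if GROQ_API_KEY is set and looks like a Groq key (gsk_ prefix assumed)."""
    key = os.environ.get("GROQ_API_KEY", "")
    return key.startswith("gsk_")

print(groq_key_visible())
```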
OpenClaw accepts any Groq model id at runtime. Use the exact id shown by Groq and prefix it with groq/. The bundled catalog covers the common cases; uncatalogued ids fall through to the default OpenAI-compatible template.
{
  agents: {
    defaults: {
      model: { primary: "groq/<your-model-id>" },
    },
  },
}
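
Note that Groq model ids can themselves contain slashes (e.g. openai/gpt-oss-120b), so only the first slash separates the provider prefix from the model id. An illustrative sketch, not OpenClaw code:

```python
def split_model_ref(ref: str) -> tuple[str, str]:
    """Split a ref like groq/openai/gpt-oss-120b into (provider, model id)."""
    provider, model_id = ref.split("/", 1)  # split on the first slash only
    return provider, model_id

print(split_model_ref("groq/openai/gpt-oss-120b"))
```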

Model providers

Choosing providers, model refs, and failover behavior.

Thinking modes

Reasoning effort levels and provider-policy interaction.

Configuration reference

Full config schema including provider and audio settings.

Groq Console

Groq dashboard, API docs, and pricing.