Groq provides ultra-fast inference on open-weight models (Llama, Gemma, Kimi, Qwen, GPT OSS, and more) using custom LPU hardware. OpenClaw includes a bundled Groq plugin that registers both an OpenAI-compatible chat provider and an audio media-understanding provider.
| Property | Value |
|---|---|
| Provider id | groq |
| Plugin | bundled, enabledByDefault: true |
| Auth env var | GROQ_API_KEY |
| Onboarding flag | --auth-choice groq-api-key |
| API | OpenAI-compatible (openai-completions) |
| Base URL | https://api.groq.com/openai/v1 |
| Audio transcription | whisper-large-v3-turbo (default) |
| Suggested chat default | groq/llama-3.3-70b-versatile |
## Getting started
### Get an API key
Create an API key at console.groq.com/keys.
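Then expose the key to your shell before launching OpenClaw (the value below is a placeholder, not a real key):

```shell
# Make the key visible to OpenClaw; add this line to your shell profile to persist it.
export GROQ_API_KEY="gsk_your_key_here"
```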
### Config file example
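A minimal sketch is shown below. The key names are illustrative assumptions, not the authoritative schema (see the Configuration reference for that); the model ref and environment variable come from the tables on this page.

```json
{
  "models": {
    "default": "groq/llama-3.3-70b-versatile"
  },
  "providers": {
    "groq": {
      "apiKey": "${GROQ_API_KEY}"
    }
  }
}
```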
## Built-in catalog
OpenClaw ships a manifest-backed Groq catalog with both reasoning and non-reasoning entries. Run `openclaw models list --provider groq` to see the bundled rows for your installed version, or check console.groq.com/docs/models for Groq’s authoritative list.
| Model ref | Name | Reasoning | Input | Context |
|---|---|---|---|---|
| groq/llama-3.3-70b-versatile | Llama 3.3 70B Versatile | no | text | 131,072 |
| groq/llama-3.1-8b-instant | Llama 3.1 8B Instant | no | text | 131,072 |
| groq/meta-llama/llama-4-maverick-17b-128e-instruct | Llama 4 Maverick 17B | no | text + image | 131,072 |
| groq/meta-llama/llama-4-scout-17b-16e-instruct | Llama 4 Scout 17B | no | text + image | 131,072 |
| groq/llama3-70b-8192 | Llama 3 70B | no | text | 8,192 |
| groq/llama3-8b-8192 | Llama 3 8B | no | text | 8,192 |
| groq/gemma2-9b-it | Gemma 2 9B | no | text | 8,192 |
| groq/mistral-saba-24b | Mistral Saba 24B | no | text | 32,768 |
| groq/moonshotai/kimi-k2-instruct | Kimi K2 Instruct | no | text | 131,072 |
| groq/moonshotai/kimi-k2-instruct-0905 | Kimi K2 Instruct 0905 | no | text | 262,144 |
| groq/openai/gpt-oss-120b | GPT OSS 120B | yes | text | 131,072 |
| groq/openai/gpt-oss-20b | GPT OSS 20B | yes | text | 131,072 |
| groq/openai/gpt-oss-safeguard-20b | Safety GPT OSS 20B | yes | text | 131,072 |
| groq/qwen-qwq-32b | Qwen QwQ 32B | yes | text | 131,072 |
| groq/qwen/qwen3-32b | Qwen3 32B | yes | text | 131,072 |
| groq/deepseek-r1-distill-llama-70b | DeepSeek R1 Distill Llama 70B | yes | text | 131,072 |
| groq/groq/compound | Compound | yes | text | 131,072 |
| groq/groq/compound-mini | Compound Mini | yes | text | 131,072 |
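When a request is sent to Groq's OpenAI-compatible endpoint, the body carries Groq's raw model id, i.e. the segment after the `groq/` provider prefix. The payload below is an illustrative sketch of that shape, not OpenClaw's actual request template:

```python
import json

# Standard OpenAI-style chat body POSTed to
# https://api.groq.com/openai/v1/chat/completions.
# Note: "llama-3.3-70b-versatile", not "groq/llama-3.3-70b-versatile".
body = {
    "model": "llama-3.3-70b-versatile",
    "messages": [{"role": "user", "content": "Hello from OpenClaw"}],
}
payload = json.dumps(body)
```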
## Reasoning models
OpenClaw maps its shared `/think` levels to Groq’s model-specific `reasoning_effort` values:

- For `qwen/qwen3-32b`, disabled thinking sends `none` and enabled thinking sends `default`.
- For Groq GPT OSS reasoning models (`openai/gpt-oss-*`), OpenClaw sends `low`, `medium`, or `high` based on the `/think` level. Disabled thinking omits `reasoning_effort` because those models do not support a disabled value.
- DeepSeek R1 Distill, Qwen QwQ, and Compound use Groq’s native reasoning surface; `/think` controls visibility but the model always reasons.

See Thinking modes for `/think` levels and how OpenClaw translates them per provider.
## Audio transcription
Groq’s bundled plugin also registers an audio media-understanding provider so voice messages can be transcribed through the shared `tools.media.audio` surface.
| Property | Value |
|---|---|
| Shared config path | tools.media.audio |
| Default base URL | https://api.groq.com/openai/v1 |
| Default model | whisper-large-v3-turbo |
| Auto priority | 20 |
| API endpoint | OpenAI-compatible /audio/transcriptions |
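The request follows the standard OpenAI audio-transcription shape: a multipart POST with a `file` part and a `model` form field. The helper below only assembles the request pieces for illustration (nothing is sent, and the helper itself is not part of OpenClaw's API):

```python
BASE_URL = "https://api.groq.com/openai/v1"

def build_transcription_request(audio_path: str, model: str = "whisper-large-v3-turbo"):
    """Assemble the parts of an OpenAI-compatible transcription request."""
    return {
        "url": f"{BASE_URL}/audio/transcriptions",
        "headers": {"Authorization": "Bearer $GROQ_API_KEY"},  # placeholder key
        "files": {"file": audio_path},  # multipart audio file part
        "data": {"model": model},       # form field selecting the model
    }
```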
## Environment availability for the daemon
If the Gateway runs as a managed service (launchd, systemd, Docker), `GROQ_API_KEY` must be visible to that process, not just to your interactive shell.
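Under systemd, for example, a drop-in can pass the variable to the service. The unit name and file path below are illustrative; `Environment=` is the standard systemd directive:

```ini
# /etc/systemd/system/openclaw.service.d/override.conf (unit name illustrative)
[Service]
Environment=GROQ_API_KEY=gsk_your_key_here
```

Run `systemctl daemon-reload` and restart the service after adding the drop-in.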
## Custom Groq model ids
OpenClaw accepts any Groq model id at runtime. Use the exact id shown by Groq and prefix it with `groq/`. The bundled catalog covers the common cases; uncatalogued ids fall through to the default OpenAI-compatible template.
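Because Groq model ids can themselves contain slashes (e.g. `groq/openai/gpt-oss-120b`), only the first path segment is the provider. A hypothetical helper, not OpenClaw's API, makes the split explicit:

```python
def split_model_ref(ref: str) -> tuple[str, str]:
    """Split an OpenClaw model ref into (provider_id, raw_model_id)."""
    provider, _, model_id = ref.partition("/")
    return provider, model_id
```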
## Related

- Model providers: choosing providers, model refs, and failover behavior.
- Thinking modes: reasoning effort levels and provider-policy interaction.
- Configuration reference: full config schema including provider and audio settings.
- Groq Console: Groq dashboard, API docs, and pricing.