OpenClaw includes a bundled Mistral plugin that registers four contracts: chat completions, media understanding (Voxtral batch transcription), realtime STT for Voice Call (Voxtral Realtime), and memory embeddings (mistral-embed).
| Property | Value |
|---|---|
| Provider id | mistral |
| Plugin | bundled, enabledByDefault: true |
| Auth env var | MISTRAL_API_KEY |
| Onboarding flag | --auth-choice mistral-api-key |
| Direct CLI flag | --mistral-api-key <key> |
| API | OpenAI-compatible (openai-completions) |
| Base URL | https://api.mistral.ai/v1 |
| Default model | mistral/mistral-large-latest |
| Embedding model | mistral-embed |
| Voxtral batch | voxtral-mini-latest (audio transcription) |
| Voxtral realtime | voxtral-mini-transcribe-realtime-2602 |
## Getting started

### Get your API key

Create an API key in the Mistral Console.
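As a quick sketch (the flag names come from the properties table above; the exact `openclaw` invocation may differ in your install), you can supply the key via the environment or directly on the CLI:

```shell
# Make the key available via the environment (MISTRAL_API_KEY is the documented var).
export MISTRAL_API_KEY="sk-your-key-here"

# During onboarding, pick Mistral with:  --auth-choice mistral-api-key
# Or pass the key directly on any run:   --mistral-api-key <key>
echo "Mistral key configured: ${MISTRAL_API_KEY:+yes}"
```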
## Built-in LLM catalog
Mistral Medium 3.5 is the current blended Medium model in the bundled catalog: 128B dense weights, text and image input, 256K context, function calling, structured output, coding, and adjustable reasoning through the Chat Completions API. Use mistral/mistral-medium-3-5 when you want Mistral's newer unified agentic/coding model instead of the default mistral/mistral-large-latest.
OpenClaw currently ships this bundled Mistral catalog:
| Model ref | Input | Context | Max output | Notes |
|---|---|---|---|---|
| mistral/mistral-large-latest | text, image | 262,144 | 16,384 | Default model |
| mistral/mistral-medium-2508 | text, image | 262,144 | 8,192 | Mistral Medium 3.1 |
| mistral/mistral-medium-3-5 | text, image | 262,144 | 8,192 | Mistral Medium 3.5; adjustable reasoning |
| mistral/mistral-small-latest | text, image | 128,000 | 16,384 | Mistral Small 4; adjustable reasoning via API reasoning_effort |
| mistral/pixtral-large-latest | text, image | 128,000 | 32,768 | Pixtral |
| mistral/codestral-latest | text | 256,000 | 4,096 | Coding |
| mistral/devstral-medium-latest | text | 262,144 | 32,768 | Devstral 2 |
| mistral/magistral-small | text | 128,000 | 40,000 | Reasoning-enabled |
## Audio transcription (Voxtral)

Use Voxtral for batch audio transcription through the media understanding pipeline.

## Voice Call streaming STT

The bundled mistral plugin registers Voxtral Realtime as a Voice Call streaming STT provider.
| Setting | Config path | Default |
|---|---|---|
| API key | plugins.entries.voice-call.config.streaming.providers.mistral.apiKey | Falls back to MISTRAL_API_KEY |
| Model | ...mistral.model | voxtral-mini-transcribe-realtime-2602 |
| Encoding | ...mistral.encoding | pcm_mulaw |
| Sample rate | ...mistral.sampleRate | 8000 |
| Target delay | ...mistral.targetStreamingDelayMs | 800 |
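Pulling the settings above together, a config fragment for the streaming STT provider might look like this (a sketch: only the config paths in the table are documented, and the surrounding JSON file layout is assumed):

```json
{
  "plugins": {
    "entries": {
      "voice-call": {
        "config": {
          "streaming": {
            "providers": {
              "mistral": {
                "apiKey": "sk-your-key-here",
                "model": "voxtral-mini-transcribe-realtime-2602",
                "encoding": "pcm_mulaw",
                "sampleRate": 8000,
                "targetStreamingDelayMs": 800
              }
            }
          }
        }
      }
    }
  }
}
```

Omitting apiKey falls back to MISTRAL_API_KEY; every other value shown here is the documented default, so you only need to set the fields you change.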
OpenClaw defaults Mistral realtime STT to pcm_mulaw at 8 kHz so Voice Call can forward Twilio media frames directly. Use encoding: "pcm_s16le" and a matching sampleRate only if your upstream stream is already raw PCM.

## Advanced configuration

### Adjustable reasoning
mistral/mistral-small-latest (Mistral Small 4) and mistral/mistral-medium-3-5 support adjustable reasoning on the Chat Completions API via reasoning_effort (none minimizes extra thinking in the output; high surfaces full thinking traces before the final answer). Mistral recommends reasoning_effort="high" for Medium 3.5 agentic and code use cases.

OpenClaw maps the session thinking level to Mistral's API:

| OpenClaw thinking level | Mistral reasoning_effort |
|---|---|
| off / minimal | none |
| low / medium / high / xhigh / adaptive / max | high |
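Because the provider speaks the standard OpenAI-compatible chat-completions shape, the mapped value ends up as a reasoning_effort field in the request body. A sketch of the upstream payload for a high-effort Medium 3.5 call (illustrative only; it assumes the mistral/ provider prefix is stripped from the model id before the request is sent):

```json
{
  "model": "mistral-medium-3-5",
  "reasoning_effort": "high",
  "messages": [
    { "role": "user", "content": "Refactor this function to be tail-recursive." }
  ]
}
```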
Other bundled Mistral catalog models do not use this parameter. Keep using magistral-* models when you want Mistral's native reasoning-first behavior.

### Memory embeddings

Mistral can serve memory embeddings via /v1/embeddings (default model: mistral-embed).
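An embeddings call POSTs to /v1/embeddings on the same base URL and follows the standard OpenAI-compatible shape; a sketch of the request body (the endpoint and default model are documented above, the input text is placeholder):

```json
{
  "model": "mistral-embed",
  "input": ["Text to embed for the memory index"]
}
```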
### Auth and base URL

- Mistral auth uses MISTRAL_API_KEY (Bearer header).
- Provider base URL defaults to https://api.mistral.ai/v1 and accepts the standard OpenAI-compatible chat-completions request shape.
- Onboarding default model is mistral/mistral-large-latest.
- Override the base URL under models.providers.mistral.baseUrl only when Mistral explicitly publishes a regional endpoint you need.
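Such an override might look like the following fragment (a sketch: the baseUrl config path is documented above; the URL shown is the standard default, to be replaced with whatever regional endpoint Mistral publishes):

```json
{
  "models": {
    "providers": {
      "mistral": {
        "baseUrl": "https://api.mistral.ai/v1"
      }
    }
  }
}
```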
## Related

- Model selection: choosing providers, model refs, and failover behavior.
- Media understanding: audio transcription setup and provider selection.