

OpenClaw includes a bundled Mistral plugin that registers four contracts: chat completions, media understanding (Voxtral batch transcription), realtime STT for Voice Call (Voxtral Realtime), and memory embeddings (mistral-embed).
| Property | Value |
| --- | --- |
| Provider id | `mistral` |
| Plugin | bundled, `enabledByDefault: true` |
| Auth env var | `MISTRAL_API_KEY` |
| Onboarding flag | `--auth-choice mistral-api-key` |
| Direct CLI flag | `--mistral-api-key <key>` |
| API | OpenAI-compatible (`openai-completions`) |
| Base URL | `https://api.mistral.ai/v1` |
| Default model | `mistral/mistral-large-latest` |
| Embedding model | `mistral-embed` |
| Voxtral batch | `voxtral-mini-latest` (audio transcription) |
| Voxtral realtime | `voxtral-mini-transcribe-realtime-2602` |

Getting started

1. **Get your API key**

   Create an API key in the Mistral Console.

2. **Run onboarding**

   ```shell
   openclaw onboard --auth-choice mistral-api-key
   ```

   Or pass the key directly:

   ```shell
   openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
   ```

3. **Set a default model**

   ```json5
   {
     env: { MISTRAL_API_KEY: "sk-..." },
     agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
   }
   ```

4. **Verify the model is available**

   ```shell
   openclaw models list --provider mistral
   ```

Built-in LLM catalog

Mistral Medium 3.5 is the current blended Medium model in the bundled catalog: 128B dense weights, text and image input, 256K context, function calling, structured output, coding, and adjustable reasoning through the Chat Completions API. Use `mistral/mistral-medium-3-5` when you want Mistral's newer unified agentic/coding model instead of the default `mistral/mistral-large-latest`.

OpenClaw currently ships this bundled Mistral catalog:
| Model ref | Input | Context | Max output | Notes |
| --- | --- | --- | --- | --- |
| `mistral/mistral-large-latest` | text, image | 262,144 | 16,384 | Default model |
| `mistral/mistral-medium-2508` | text, image | 262,144 | 8,192 | Mistral Medium 3.1 |
| `mistral/mistral-medium-3-5` | text, image | 262,144 | 8,192 | Mistral Medium 3.5; adjustable reasoning |
| `mistral/mistral-small-latest` | text, image | 128,000 | 16,384 | Mistral Small 4; adjustable reasoning via API `reasoning_effort` |
| `mistral/pixtral-large-latest` | text, image | 128,000 | 32,768 | Pixtral |
| `mistral/codestral-latest` | text | 256,000 | 4,096 | Coding |
| `mistral/devstral-medium-latest` | text | 262,144 | 32,768 | Devstral 2 |
| `mistral/magistral-small` | text | 128,000 | 40,000 | Reasoning-enabled |
After onboarding, smoke-test Medium 3.5 without starting the Gateway:

```shell
openclaw infer model run --local \
  --model mistral/mistral-medium-3-5 \
  --prompt "Reply with exactly: mistral-ok" \
  --json
```

To inspect the bundled catalog entries before changing config:

```shell
openclaw models list --all --provider mistral --plain
```

Audio transcription (Voxtral)

Use Voxtral for batch audio transcription through the media understanding pipeline.
```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}
```

The media transcription path uses `/v1/audio/transcriptions`. The default audio model for Mistral is `voxtral-mini-latest`.
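To exercise the same endpoint outside OpenClaw, here is a minimal curl sketch. The multipart field names (`model`, `file`) and the sample filename are assumptions based on the OpenAI-compatible transcription request shape, not taken from OpenClaw itself:

```shell
# Hedged sketch: batch-transcribe a local file against the endpoint above.
# Assumes MISTRAL_API_KEY is exported and recording.mp3 exists locally.
curl -s https://api.mistral.ai/v1/audio/transcriptions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -F model=voxtral-mini-latest \
  -F file=@recording.mp3
```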

Voice Call streaming STT

The bundled mistral plugin registers Voxtral Realtime as a Voice Call streaming STT provider.
| Setting | Config path | Default |
| --- | --- | --- |
| API key | `plugins.entries.voice-call.config.streaming.providers.mistral.apiKey` | Falls back to `MISTRAL_API_KEY` |
| Model | `...mistral.model` | `voxtral-mini-transcribe-realtime-2602` |
| Encoding | `...mistral.encoding` | `pcm_mulaw` |
| Sample rate | `...mistral.sampleRate` | `8000` |
| Target delay | `...mistral.targetStreamingDelayMs` | `800` |
```json5
{
  plugins: {
    entries: {
      "voice-call": {
        config: {
          streaming: {
            enabled: true,
            provider: "mistral",
            providers: {
              mistral: {
                apiKey: "${MISTRAL_API_KEY}",
                targetStreamingDelayMs: 800,
              },
            },
          },
        },
      },
    },
  },
}
```
OpenClaw defaults Mistral realtime STT to pcm_mulaw at 8 kHz so Voice Call can forward Twilio media frames directly. Use encoding: "pcm_s16le" and a matching sampleRate only if your upstream stream is already raw PCM.
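If your upstream stream is already raw PCM, an override sketch could look like the following, reusing the config paths from the table above; the 16 kHz sample rate is an assumed example, not a documented default:

```json5
{
  plugins: {
    entries: {
      "voice-call": {
        config: {
          streaming: {
            providers: {
              mistral: {
                // Assumption: upstream delivers raw 16-bit little-endian PCM at 16 kHz.
                encoding: "pcm_s16le",
                sampleRate: 16000,
              },
            },
          },
        },
      },
    },
  },
}
```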

Advanced configuration

`mistral/mistral-small-latest` (Mistral Small 4) and `mistral/mistral-medium-3-5` support adjustable reasoning on the Chat Completions API via `reasoning_effort` (`none` minimizes extra thinking in the output; `high` surfaces full thinking traces before the final answer). Mistral recommends `reasoning_effort="high"` for Medium 3.5 agentic and code use cases.

OpenClaw maps the session thinking level to Mistral's API:

| OpenClaw thinking level | Mistral `reasoning_effort` |
| --- | --- |
| off / minimal | `none` |
| low / medium / high / xhigh / adaptive / max | `high` |
Do not combine Medium 3.5 reasoning mode with `temperature: 0`. The Mistral HTTP API rejects `reasoning_effort="high"` plus `temperature: 0` with a 400 response. Leave temperature unset so Mistral uses its default, or follow the Medium 3.5 recommended settings and use `temperature: 0.7` for high reasoning. For deterministic direct answers, turn thinking off/minimal so OpenClaw sends `reasoning_effort: "none"` before you lower temperature.
Example model-scoped config for Medium 3.5 reasoning:
```json5
{
  agents: {
    defaults: {
      model: { primary: "mistral/mistral-medium-3-5" },
      models: {
        "mistral/mistral-medium-3-5": {
          params: { thinking: "high" },
        },
      },
    },
  },
}
```
Other bundled Mistral catalog models do not use this parameter. Keep using magistral-* models when you want Mistral’s native reasoning-first behavior.
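Following the temperature guidance above, a sketch that pairs high reasoning with the recommended `temperature: 0.7` could look like this; whether `temperature` is accepted alongside `thinking` in model-scoped `params` is an assumption, as the docs above only show `thinking` there:

```json5
{
  agents: {
    defaults: {
      models: {
        "mistral/mistral-medium-3-5": {
          // Assumption: temperature is a sibling of thinking in params.
          params: { thinking: "high", temperature: 0.7 },
        },
      },
    },
  },
}
```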
Mistral can serve memory embeddings via `/v1/embeddings` (default model: `mistral-embed`).

```json5
{
  memorySearch: { provider: "mistral" },
}
```
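To sanity-check the embeddings contract directly against the same base URL, a curl sketch follows; the JSON body shape (`model`, `input`) is assumed from the OpenAI-compatible embeddings API rather than stated in these docs:

```shell
# Hedged sketch: request one embedding from /v1/embeddings.
# Assumes MISTRAL_API_KEY is exported.
curl -s https://api.mistral.ai/v1/embeddings \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral-embed", "input": ["hello world"]}'
```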
- Mistral auth uses `MISTRAL_API_KEY` (Bearer header).
- Provider base URL defaults to `https://api.mistral.ai/v1` and accepts the standard OpenAI-compatible chat-completions request shape.
- Onboarding default model is `mistral/mistral-large-latest`.
- Override the base URL under `models.providers.mistral.baseUrl` only when Mistral explicitly publishes a regional endpoint you need.
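For that last point, an override sketch under the documented path could look like this; the regional URL shown is a hypothetical placeholder, not a real Mistral endpoint:

```json5
{
  models: {
    providers: {
      mistral: {
        // Hypothetical regional endpoint -- replace with one Mistral actually publishes.
        baseUrl: "https://api.eu.mistral.ai/v1",
      },
    },
  },
}
```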

Model selection

Choosing providers, model refs, and failover behavior.

Media understanding

Audio transcription setup and provider selection.