The builtin engine is the default memory backend. It stores your memory index in a per-agent SQLite database and needs no extra dependencies to get started.
What it provides
- Keyword search via FTS5 full-text indexing (BM25 scoring).
- Vector search via embeddings from any supported provider.
- Hybrid search that combines both for best results.
- CJK support via trigram tokenization for Chinese, Japanese, and Korean.
- sqlite-vec acceleration for in-database vector queries (optional).
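The keyword side of this can be sketched with SQLite's own FTS5 module. This is an illustrative example, not OpenClaw's actual schema; the table and column names are made up, and it assumes your SQLite build includes FTS5 (standard CPython builds do):

```python
import sqlite3

# Minimal FTS5 keyword index with BM25 ranking.
# Table and column names are illustrative, not OpenClaw's schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE chunks USING fts5(body)")
con.executemany(
    "INSERT INTO chunks (body) VALUES (?)",
    [("the agent stores its memory index in sqlite",),
     ("vector search requires an embedding provider",)],
)
# bm25() yields a rank where more negative means a better match,
# so ascending order puts the best match first.
rows = con.execute(
    "SELECT body FROM chunks WHERE chunks MATCH 'memory' "
    "ORDER BY bm25(chunks)"
).fetchall()
```

For CJK text, FTS5's trigram tokenizer (SQLite 3.34+) can be selected in the virtual-table definition, which is the kind of mechanism the trigram support above refers to.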
Getting started
If you have an API key for OpenAI, Gemini, Voyage, Mistral, or DeepInfra, the builtin engine auto-detects it and enables vector search. No config is needed. To set a provider explicitly, set memorySearch.provider. For local embeddings, install the node-llama-cpp runtime package next to OpenClaw, then point memorySearch.local.modelPath
at a GGUF file.
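As a rough sketch, the relevant config might look like the following. The memorySearch.provider and memorySearch.local.modelPath keys come from this page; the exact file location and surrounding structure of your OpenClaw config may differ:

```json5
{
  // Sketch only: key names are from this page, nesting may differ.
  "memorySearch": {
    "provider": "openai",          // or gemini, voyage, mistral, deepinfra, ollama, local
    "local": {
      "modelPath": "/path/to/model.gguf"   // hypothetical path to a local GGUF model
    }
  }
}
```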
Supported embedding providers
| Provider | ID | Auto-detected | Notes |
|---|---|---|---|
| OpenAI | openai | Yes | Default: text-embedding-3-small |
| Gemini | gemini | Yes | Supports multimodal (image + audio) |
| Voyage | voyage | Yes | |
| Mistral | mistral | Yes | |
| DeepInfra | deepinfra | Yes | Default: BAAI/bge-m3 |
| Ollama | ollama | No | Local, set explicitly |
| Local | local | Yes (first) | Optional node-llama-cpp runtime |
Auto-detection picks the first available provider; set memorySearch.provider to override.
How indexing works
OpenClaw indexes MEMORY.md and memory/*.md into chunks (~400 tokens with
80-token overlap) and stores them in a per-agent SQLite database.
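The windowing scheme above can be sketched as follows. This is a simplified illustration (it slides a fixed window over a pre-tokenized list), not OpenClaw's actual chunker:

```python
def chunk(tokens, size=400, overlap=80):
    """Split a token list into fixed-size windows that overlap,
    mirroring the ~400-token / 80-token scheme described above."""
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # the last window reached the end of the input
    return chunks

words = ["tok%d" % i for i in range(1000)]
parts = chunk(words)
```

With 1000 tokens this yields three windows, and the tail of each window repeats as the head of the next, which is what lets a search hit carry enough surrounding context.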
- Index location: ~/.openclaw/memory/<agentId>.sqlite
- Storage maintenance: SQLite WAL sidecars are bounded with periodic and shutdown checkpoints.
- File watching: changes to memory files trigger a debounced reindex (1.5s).
- Auto-reindex: when the embedding provider, model, or chunking config changes, the entire index is rebuilt automatically.
- Reindex on demand: openclaw memory index --force
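The debounced reindex above can be pictured with a small timer-based debouncer. This is a generic sketch of the pattern, not OpenClaw's watcher code:

```python
import threading
import time

class Debouncer:
    """Coalesce a burst of events into one callback, in the spirit
    of the 1.5s file-watch debounce described above."""
    def __init__(self, delay, fn):
        self.delay = delay
        self.fn = fn
        self._timer = None
        self._lock = threading.Lock()

    def trigger(self):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # restart the countdown on each event
            self._timer = threading.Timer(self.delay, self.fn)
            self._timer.start()

calls = []
d = Debouncer(0.05, lambda: calls.append(1))
for _ in range(10):      # a burst of file-change events...
    d.trigger()
time.sleep(0.2)          # ...results in a single reindex call
```

The point of the pattern is that a rapid series of saves to a memory file triggers one reindex, not ten.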
You can also index Markdown files outside the workspace with
memorySearch.extraPaths; see the configuration reference.
When to use
The builtin engine is the right choice for most users:
- Works out of the box with no extra dependencies.
- Handles keyword and vector search well.
- Supports all embedding providers.
- Hybrid search combines the best of both retrieval approaches.
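One common way to merge a keyword ranking with a vector ranking is reciprocal rank fusion; the page does not specify which fusion the builtin engine uses, so this is only a sketch of the general idea:

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion: documents high in either ranking
    accumulate a larger score. One common fusion technique; not
    necessarily the one the builtin engine implements."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["a", "b", "c"]   # best-first keyword results
vector_hits = ["b", "c", "a"]    # best-first vector results
fused = rrf([keyword_hits, vector_hits])
```

Here "b" wins because it ranks highly in both lists, which is exactly the behavior hybrid search is after.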
Troubleshooting
Memory search disabled? Check openclaw memory status. If no provider is
detected, set one explicitly or add an API key.
Local provider not detected? Confirm the local model path exists, or set
memorySearch.provider to the local provider id explicitly.
If the provider is set to auto, local embeddings are considered first only
when memorySearch.local.modelPath points to an existing local file.
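The selection order described above can be sketched as follows. The environment variable names here are assumptions for illustration, not confirmed OpenClaw behavior:

```python
import os

def pick_provider(local_model_path=None):
    """Sketch of auto-detection: local wins only when its modelPath
    points at an existing file, otherwise the first provider with an
    API key is chosen. Env var names are assumptions."""
    if local_model_path and os.path.isfile(local_model_path):
        return "local"
    candidates = [
        ("openai", "OPENAI_API_KEY"),
        ("gemini", "GEMINI_API_KEY"),
        ("voyage", "VOYAGE_API_KEY"),
        ("mistral", "MISTRAL_API_KEY"),
        ("deepinfra", "DEEPINFRA_API_KEY"),
    ]
    for provider, env_var in candidates:
        if os.environ.get(env_var):
            return provider
    return None  # memory search stays disabled
```

This also explains the troubleshooting advice: a missing GGUF file silently drops you through to the API-key providers.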
Stale results? Run openclaw memory index --force to rebuild. The watcher
may miss changes in rare edge cases.
sqlite-vec not loading? OpenClaw falls back to in-process cosine similarity
automatically. openclaw memory status --deep reports the local vector store
separately from the embedding provider: Vector store: unavailable points at
sqlite-vec loading, while Embeddings: unavailable points at provider, auth,
or model readiness. Check the logs for the specific load error.
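The cosine-similarity fallback amounts to comparing the query embedding against stored embeddings directly in process. A minimal sketch (not OpenClaw's code; the vectors are toy values):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.9, 0.0]
docs = {
    "doc1": [0.1, 0.9, 0.0],   # near-duplicate of the query
    "doc2": [0.9, 0.1, 0.0],   # points in a different direction
}
best = max(docs, key=lambda name: cosine(query, docs[name]))
```

This is slower than sqlite-vec's in-database queries for large indexes, but it returns the same nearest-neighbor answers, which is why the fallback is transparent.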