GitHub Copilot is GitHub’s AI coding assistant. It provides access to Copilot models according to your GitHub account and plan. OpenClaw can use Copilot as a model provider in two different ways.
Two ways to use Copilot in OpenClaw
- Built-in provider (github-copilot)
- Copilot Proxy plugin (copilot-proxy)
The built-in provider uses the native device-login flow to obtain a GitHub token, then exchanges it for Copilot API tokens when OpenClaw runs. This is the default and simplest path because it does not require VS Code.
Optional flags
| Flag | Description |
|---|---|
| --yes | Skip the confirmation prompt |
| --set-default | Also apply the provider’s recommended default model |
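For example, the device login (command name as given later on this page) might be run with both optional flags:

```shell
# Log in without a confirmation prompt and apply the recommended default model
openclaw models auth login-github-copilot --yes --set-default
```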
Non-interactive onboarding
If you already have a GitHub OAuth access token for Copilot, import it during headless setup with openclaw onboard --non-interactive. You can set the auth choice explicitly with --auth-choice; alternatively, passing --github-copilot-token infers the GitHub Copilot provider auth choice. If the flag is omitted, onboarding falls back to COPILOT_GITHUB_TOKEN, then GH_TOKEN, then GITHUB_TOKEN. Use --secret-input-mode ref with COPILOT_GITHUB_TOKEN set to store an env-backed tokenRef instead of a plaintext token in auth-profiles.json.
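A sketch of the two headless variants described above (the token value is a placeholder; flags are as documented on this page):

```shell
# Import an existing Copilot OAuth token directly (infers the auth choice)
openclaw onboard --non-interactive --github-copilot-token "gho_exampletoken"

# Store an env-backed tokenRef instead of plaintext in auth-profiles.json
COPILOT_GITHUB_TOKEN="gho_exampletoken" \
  openclaw onboard --non-interactive --secret-input-mode ref
```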
Interactive TTY required
The device-login flow requires an interactive TTY. Run it directly in a
terminal, not in a non-interactive script or CI pipeline.
Model availability depends on your plan
Copilot model availability depends on your GitHub plan. If a model is
rejected, try another ID (for example
github-copilot/gpt-4.1).

Live catalog refresh from the Copilot API
Once the device-login (or env-var) auth path has resolved a GitHub token,
OpenClaw refreshes the model catalog on demand from
${baseUrl}/models
(the same endpoint VS Code Copilot uses) so the runtime tracks
per-account entitlement and accurate context windows without manifest
churn. Newly published Copilot models become visible without an OpenClaw
upgrade, and context windows reflect the real per-model limits
(e.g. 400k for the gpt-5.x series, 1M for the internal
claude-opus-*-1m variants).

The bundled static catalog stays as the visible fallback when discovery is disabled, the user has no GitHub auth profile, the token exchange fails, or the /models HTTPS call errors. To opt out and rely entirely on the static manifest catalog (offline / air-gapped scenarios), disable live catalog discovery in your OpenClaw configuration.
Transport selection
Claude model IDs use the Anthropic Messages transport automatically. GPT,
o-series, and Gemini models keep the OpenAI Responses transport. OpenClaw
selects the correct transport based on the model ref.
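The routing above can be sketched as follows. This is an illustrative helper, not OpenClaw's actual implementation; the transport names are descriptive labels:

```python
def select_transport(model_ref: str) -> str:
    """Pick a wire transport from a model ref like 'github-copilot/gpt-4.1'."""
    model_id = model_ref.split("/", 1)[-1]
    # Claude model IDs use the Anthropic Messages transport; GPT, o-series,
    # and Gemini models keep the OpenAI Responses transport.
    if model_id.startswith("claude"):
        return "anthropic-messages"
    return "openai-responses"
```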
Request compatibility
OpenClaw sends Copilot IDE-style request headers on Copilot transports,
including built-in compaction, tool-result, and image follow-up turns. It
does not enable provider-level Responses continuation for Copilot unless
that behavior has been verified against Copilot’s API.
Environment variable resolution order
OpenClaw resolves Copilot auth from environment variables in the following
priority order:

| Priority | Variable | Notes |
|---|---|---|
| 1 | COPILOT_GITHUB_TOKEN | Highest priority, Copilot-specific |
| 2 | GH_TOKEN | GitHub CLI token (fallback) |
| 3 | GITHUB_TOKEN | Standard GitHub token (lowest) |

When multiple variables are set, OpenClaw uses the highest-priority one.
The device-login flow (openclaw models auth login-github-copilot) stores
its token in the auth profile store and takes precedence over all environment
variables.

Token storage
The login stores a GitHub token in the auth profile store and exchanges it
for a Copilot API token when OpenClaw runs. You do not need to manage the
token manually.
Memory search embeddings
GitHub Copilot can also serve as an embedding provider for memory search. If you have a Copilot subscription and have logged in, OpenClaw can use it for embeddings without a separate API key.

Auto-detection

When memorySearch.provider is "auto" (the default), GitHub Copilot is tried
at priority 15, after local embeddings but before OpenAI and other paid
providers. If a GitHub token is available, OpenClaw discovers available
embedding models from the Copilot API and picks the best one automatically.
Explicit config
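To select Copilot explicitly instead of relying on auto-detection, set the provider in your memory search configuration. The memorySearch.provider key is taken from this page; the surrounding file shape and the "github-copilot" value (matching the built-in provider ID above) are assumptions, so check your actual config schema:

```json
{
  "memorySearch": {
    "provider": "github-copilot"
  }
}
```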
How it works

1. OpenClaw resolves your GitHub token (from env vars or auth profile).
2. Exchanges it for a short-lived Copilot API token.
3. Queries the Copilot /models endpoint to discover available embedding models.
4. Picks the best model (prefers text-embedding-3-small).
5. Sends embedding requests to the Copilot /embeddings endpoint.
Related

- Model selection: choosing providers, model refs, and failover behavior.
- OAuth and auth: auth details and credential reuse rules.