
GitHub Copilot is GitHub’s AI coding assistant. The Copilot models available to you depend on your GitHub account and plan. OpenClaw can use Copilot as a model provider in two different ways.

Two ways to use Copilot in OpenClaw

Use the native device-login flow to obtain a GitHub token, then exchange it for Copilot API tokens when OpenClaw runs. This is the default and simplest path because it does not require VS Code.

1. Run the login command:

openclaw models auth login-github-copilot

You will be prompted to visit a URL and enter a one-time code. Keep the terminal open until the flow completes.
2. Set a default model:

openclaw models set github-copilot/claude-opus-4.7
Or in config:
{
  agents: {
    defaults: { model: { primary: "github-copilot/claude-opus-4.7" } },
  },
}

Optional flags

Flag           Description
--yes          Skip the confirmation prompt
--set-default  Also apply the provider’s recommended default model
# Skip confirmation
openclaw models auth login-github-copilot --yes

# Login and set the default model in one step
openclaw models auth login --provider github-copilot --method device --set-default

Non-interactive onboarding

If you already have a GitHub OAuth access token for Copilot, import it during headless setup with openclaw onboard --non-interactive:
openclaw onboard --non-interactive --accept-risk \
  --auth-choice github-copilot \
  --github-copilot-token "$COPILOT_GITHUB_TOKEN" \
  --skip-channels --skip-health
You can also omit --auth-choice; passing --github-copilot-token implies the GitHub Copilot provider auth choice. If --github-copilot-token itself is omitted, onboarding falls back to the COPILOT_GITHUB_TOKEN, GH_TOKEN, and GITHUB_TOKEN environment variables, in that order. To store an env-backed tokenRef instead of a plaintext token in auth-profiles.json, pass --secret-input-mode ref with COPILOT_GITHUB_TOKEN set.
The device-login flow requires an interactive TTY. Run it directly in a terminal, not in a non-interactive script or CI pipeline.
Copilot model availability depends on your GitHub plan. If a model is rejected, try another ID (for example github-copilot/gpt-4.1).
Once the device-login (or env-var) auth path has resolved a GitHub token, OpenClaw refreshes the model catalog on demand from ${baseUrl}/models (the same endpoint VS Code Copilot uses), so the runtime tracks per-account entitlement and accurate context windows without manifest churn. Newly published Copilot models become visible without an OpenClaw upgrade, and context windows reflect the real per-model limits (for example, 400k for the gpt-5.x series and 1M for the internal claude-opus-*-1m variants).

The bundled static catalog remains the visible fallback when discovery is disabled, the user has no GitHub auth profile, the token exchange fails, or the /models HTTPS call errors. To opt out and rely entirely on the static manifest catalog (offline / air-gapped scenarios):
{
  plugins: {
    entries: {
      "github-copilot": {
        config: { discovery: { enabled: false } },
      },
    },
  },
}
Claude model IDs use the Anthropic Messages transport automatically. GPT, o-series, and Gemini models keep the OpenAI Responses transport. OpenClaw selects the correct transport based on the model ref.
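The routing rule above amounts to a dispatch on the model ref. A minimal illustrative sketch, not OpenClaw’s actual implementation (the transport names used here are descriptive labels):

```shell
# Illustrative transport dispatch by model ref (not OpenClaw's real code):
# Claude IDs use the Anthropic Messages transport; everything else keeps
# the OpenAI Responses transport.
transport_for() {
  case "$1" in
    github-copilot/claude-*) echo "anthropic-messages" ;;
    *)                       echo "openai-responses"  ;;
  esac
}

transport_for "github-copilot/claude-opus-4.7"   # anthropic-messages
transport_for "github-copilot/gpt-4.1"           # openai-responses
```

Because the dispatch keys on the model ref alone, no per-request configuration is needed to pick a transport.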
OpenClaw sends Copilot IDE-style request headers on Copilot transports, including on built-in compaction, tool-result, and image follow-up turns. It does not enable provider-level Responses continuation for Copilot unless that behavior has been verified against Copilot’s API.
OpenClaw resolves Copilot auth from environment variables in the following priority order:
Priority  Variable              Notes
1         COPILOT_GITHUB_TOKEN  Highest priority, Copilot-specific
2         GH_TOKEN              GitHub CLI token (fallback)
3         GITHUB_TOKEN          Standard GitHub token (lowest)
When multiple variables are set, OpenClaw uses the highest-priority one. The device-login flow (openclaw models auth login-github-copilot) stores its token in the auth profile store and takes precedence over all environment variables.
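The precedence above is a first-non-empty lookup. A minimal sketch of that order (illustrative only, with placeholder token values; not OpenClaw’s actual resolution code):

```shell
# First-non-empty lookup over the documented precedence order.
resolve_copilot_token() {
  if   [ -n "${COPILOT_GITHUB_TOKEN:-}" ]; then echo "$COPILOT_GITHUB_TOKEN"   # 1. Copilot-specific
  elif [ -n "${GH_TOKEN:-}" ];            then echo "$GH_TOKEN"                # 2. GitHub CLI token
  else                                         echo "${GITHUB_TOKEN:-}"        # 3. standard GitHub token
  fi
}

unset COPILOT_GITHUB_TOKEN
GH_TOKEN="gh-cli-token"
GITHUB_TOKEN="generic-token"
resolve_copilot_token   # prints gh-cli-token: GH_TOKEN outranks GITHUB_TOKEN
```

Remember that a token stored by the device-login flow sits above all three of these variables.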
The login stores a GitHub token in the auth profile store and exchanges it for a Copilot API token when OpenClaw runs. You do not need to manage the token manually.
The device-login command requires an interactive TTY. Use non-interactive onboarding when you need headless setup.

Memory search embeddings

GitHub Copilot can also serve as an embedding provider for memory search. If you have a Copilot subscription and have logged in, OpenClaw can use it for embeddings without a separate API key.

Auto-detection

When memorySearch.provider is "auto" (the default), GitHub Copilot is tried at priority 15 — after local embeddings but before OpenAI and other paid providers. If a GitHub token is available, OpenClaw discovers available embedding models from the Copilot API and picks the best one automatically.
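No configuration is required for this default, but written out explicitly it would look like the following sketch, which mirrors the config shape used elsewhere on this page:

```
{
  agents: {
    defaults: {
      memorySearch: { provider: "auto" },
    },
  },
}
```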

Explicit config

{
  agents: {
    defaults: {
      memorySearch: {
        provider: "github-copilot",
        // Optional: override the auto-discovered model
        model: "text-embedding-3-small",
      },
    },
  },
}

How it works

  1. OpenClaw resolves your GitHub token (from env vars or auth profile).
  2. Exchanges it for a short-lived Copilot API token.
  3. Queries the Copilot /models endpoint to discover available embedding models.
  4. Picks the best model (prefers text-embedding-3-small).
  5. Sends embedding requests to the Copilot /embeddings endpoint.
Model availability depends on your GitHub plan. If no embedding models are available, OpenClaw skips Copilot and tries the next provider.
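Step 4’s pick can be sketched as a simple preference scan over the discovered model IDs. This is illustrative only; the real selection logic may weigh more factors:

```shell
# Prefer text-embedding-3-small; otherwise fall back to the first
# discovered model ID (empty when nothing was discovered).
pick_embedding_model() {
  for model in "$@"; do
    if [ "$model" = "text-embedding-3-small" ]; then
      echo "$model"
      return 0
    fi
  done
  echo "${1:-}"
}

pick_embedding_model "text-embedding-ada-002" "text-embedding-3-small"
# -> text-embedding-3-small
```

An empty result here corresponds to the skip-and-try-next-provider behavior described above.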

Model selection

Choosing providers, model refs, and failover behavior.

OAuth and auth

Auth details and credential reuse rules.