
Model providers

This page covers LLM/model providers (not chat channels like WhatsApp/Telegram). For model selection rules, see /concepts/models.

Quick rules

API key rotation

Built-in providers (pi-ai catalog)

CoderClaw ships with the pi‑ai catalog. These providers require no models.providers config; just set auth + pick a model.
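
For example, a minimal sketch of the env-key route (the env block mirrors the Kimi Coding example later on; OPENAI_API_KEY as the expected variable name is an assumption, check your provider's auth docs):

{
  env: { OPENAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "openai/gpt-5.1-codex" } } },
}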

OpenAI

{
  agents: { defaults: { model: { primary: "openai/gpt-5.1-codex" } } },
}

Anthropic

{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}

OpenAI Code (Codex)

{
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } },
}

OpenCode Zen

{
  agents: { defaults: { model: { primary: "opencode/claude-opus-4-6" } } },
}

Google Gemini (API key)
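
A hedged sketch, assuming the catalog slug is google and a model such as gemini-2.5-pro is listed (run coderclaw models list to confirm the exact ref):

{
  agents: { defaults: { model: { primary: "google/gemini-2.5-pro" } } },
}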

Google Vertex, Antigravity, and Gemini CLI

Z.AI (GLM)
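
Likewise a sketch; the zai slug and glm-4.6 model id are assumptions, verify with coderclaw models list:

{
  agents: { defaults: { model: { primary: "zai/glm-4.6" } } },
}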

Vercel AI Gateway

Other built-in providers

Providers via models.providers (custom/base URL)

Use models.providers (or models.json) to add custom providers or OpenAI/Anthropic‑compatible proxies.

Moonshot AI (Kimi)

Moonshot uses OpenAI-compatible endpoints, so configure it as a custom provider. Kimi K2 model refs take the moonshot/<model-id> form, as in the config below:

{
  agents: {
    defaults: { model: { primary: "moonshot/kimi-k2.5" } },
  },
  models: {
    mode: "merge",
    providers: {
      moonshot: {
        baseUrl: "https://api.moonshot.ai/v1",
        apiKey: "${MOONSHOT_API_KEY}",
        api: "openai-completions",
        models: [{ id: "kimi-k2.5", name: "Kimi K2.5" }],
      },
    },
  },
}

Kimi Coding

Kimi Coding uses Moonshot AI’s Anthropic-compatible endpoint:

{
  env: { KIMI_API_KEY: "sk-..." },
  agents: {
    defaults: { model: { primary: "kimi-coding/k2p5" } },
  },
}

Qwen OAuth (free tier)

Qwen provides OAuth access to Qwen Coder + Vision via a device-code flow. Enable the bundled plugin, then log in:

coderclaw plugins enable qwen-portal-auth
coderclaw models auth login --provider qwen-portal --set-default

For the available model refs, setup details, and notes, see /providers/qwen.

Synthetic

Synthetic provides Anthropic-compatible models behind the synthetic provider:

{
  agents: {
    defaults: { model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.1" } },
  },
  models: {
    mode: "merge",
    providers: {
      synthetic: {
        baseUrl: "https://api.synthetic.new/anthropic",
        apiKey: "${SYNTHETIC_API_KEY}",
        api: "anthropic-messages",
        models: [{ id: "hf:MiniMaxAI/MiniMax-M2.1", name: "MiniMax M2.1" }],
      },
    },
  },
}

MiniMax

MiniMax is configured via models.providers because it uses custom endpoints.
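
A hedged sketch of what that looks like, modeled on the Synthetic example above; the base URL, API flavor, and model id below are assumptions:

{
  agents: {
    defaults: { model: { primary: "minimax/MiniMax-M2.1" } },
  },
  models: {
    mode: "merge",
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic",
        apiKey: "${MINIMAX_API_KEY}",
        api: "anthropic-messages",
        models: [{ id: "MiniMax-M2.1", name: "MiniMax M2.1" }],
      },
    },
  },
}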

See /providers/minimax for setup details, model options, and config snippets.

Ollama

Ollama is a local LLM runtime that provides an OpenAI-compatible API:

# Install Ollama, then pull a model:
ollama pull llama3.3

{
  agents: {
    defaults: { model: { primary: "ollama/llama3.3" } },
  },
}

Ollama is automatically detected when running locally at http://127.0.0.1:11434/v1. See /providers/ollama for model recommendations and custom configuration.
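
To confirm the endpoint is up before relying on auto-detection, you can hit Ollama's OpenAI-compatible model listing (a quick sanity check, not a CoderClaw command):

curl http://127.0.0.1:11434/v1/models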

vLLM

vLLM is a local (or self-hosted) OpenAI-compatible server.

To opt in to local auto-discovery, set VLLM_API_KEY (any value works if your server doesn’t enforce auth):

export VLLM_API_KEY="vllm-local"
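
To see which model IDs your server exposes, query its OpenAI-compatible /v1/models route (this assumes vLLM's default port 8000; adjust host and port to your deployment):

curl http://localhost:8000/v1/models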

Then set a model (replace with one of the IDs returned by /v1/models):

{
  agents: {
    defaults: { model: { primary: "vllm/your-model-id" } },
  },
}

See /providers/vllm for details.

Local proxies (LM Studio, vLLM, LiteLLM, etc.)

Example (OpenAI‑compatible):

{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.1-gs32" },
      models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } },
    },
  },
  models: {
    providers: {
      lmstudio: {
        baseUrl: "http://localhost:1234/v1",
        apiKey: "LMSTUDIO_KEY",
        api: "openai-completions",
        models: [
          {
            id: "minimax-m2.1-gs32",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}

CLI examples

coderclaw onboard --auth-choice opencode-zen
coderclaw models set opencode/claude-opus-4-6
coderclaw models list

See also: /gateway/configuration for full configuration examples.