vLLM

vLLM can serve open-source (and some custom) models via an OpenAI-compatible HTTP API. CoderClaw can connect to vLLM using the openai-completions API.

CoderClaw can also auto-discover available models from vLLM when you opt in by setting VLLM_API_KEY (any value works if your server doesn’t enforce auth) and you do not define an explicit models.providers.vllm entry.

Quick start

  1. Start vLLM with an OpenAI-compatible server.

Your base URL should expose /v1 endpoints (e.g. /v1/models, /v1/chat/completions). vLLM’s OpenAI-compatible server listens on port 8000 by default, so the base URL is typically http://127.0.0.1:8000/v1. See the example launch command after this list.

  2. Opt in (any value works if no auth is configured):
export VLLM_API_KEY="vllm-local"
  3. Select a model (replace with one of your vLLM model IDs):
{
  agents: {
    defaults: {
      model: { primary: "vllm/your-model-id" },
    },
  },
}
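
To start vLLM for step 1, a minimal launch command looks like the following sketch (the model name is illustrative, and the vllm serve entry point assumes a recent vLLM release):

# Serve an OpenAI-compatible API; vLLM listens on port 8000 by default.
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000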

Model discovery (implicit provider)

When VLLM_API_KEY is set (or an auth profile exists) and you do not define models.providers.vllm, CoderClaw queries the vLLM /v1/models endpoint (e.g. http://127.0.0.1:8000/v1/models) and converts the returned IDs into model entries.
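
For reference, the /v1/models response follows the OpenAI list format; each returned ID (the one below is illustrative) is what you reference as vllm/<id> in CoderClaw:

curl -s http://127.0.0.1:8000/v1/models
# Example response shape (trimmed):
# { "object": "list", "data": [ { "id": "Qwen/Qwen2.5-7B-Instruct", "object": "model" } ] }
# That entry would be referenced as "vllm/Qwen/Qwen2.5-7B-Instruct".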

If you set models.providers.vllm explicitly, auto-discovery is skipped and you must define models manually.

Explicit configuration (manual models)

Use explicit configuration when you want to skip auto-discovery and define the models yourself, including metadata such as context window and max tokens:

{
  models: {
    providers: {
      vllm: {
        baseUrl: "http://127.0.0.1:8000/v1",
        apiKey: "${VLLM_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "your-model-id",
            name: "Local vLLM Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
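
The id value should match a model ID your vLLM server actually reports. Assuming jq is installed, you can list them like this:

# Print the model IDs vLLM exposes; copy one into the id field above.
curl -s http://127.0.0.1:8000/v1/models | jq -r '.data[].id'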

Troubleshooting

Check that the server is reachable and that it lists the model IDs you expect:
curl http://127.0.0.1:8000/v1/models
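
If the models list looks right but CoderClaw requests still fail, sending a chat completion directly to the OpenAI-compatible endpoint helps separate server problems from configuration problems (the model ID and prompt below are illustrative):

curl -s http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${VLLM_API_KEY}" \
  -d '{
    "model": "your-model-id",
    "messages": [{"role": "user", "content": "Say hello"}],
    "max_tokens": 32
  }'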