vLLM can serve open-source (and some custom) models via an OpenAI-compatible HTTP API. CoderClaw can connect to vLLM using the openai-completions API.
CoderClaw can also auto-discover available models from vLLM when you opt in with VLLM_API_KEY (any value works if your server doesn’t enforce auth) and you do not define an explicit models.providers.vllm entry.
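For reference, such a server is typically started with vLLM's OpenAI-compatible entrypoint. A minimal sketch (the model name is only an example, the --api-key flag is optional, and your vLLM version's flags may differ):

# example model id; replace with whatever you want vLLM to serve
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000 --api-key vllm-local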
Your base URL should expose /v1 endpoints (e.g. /v1/models, /v1/chat/completions). vLLM commonly runs on:
http://127.0.0.1:8000/v1

Export VLLM_API_KEY to opt in to auto-discovery:

export VLLM_API_KEY="vllm-local"

Then set the model CoderClaw should use by default:
{
  agents: {
    defaults: {
      model: { primary: "vllm/your-model-id" },
    },
  },
}
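Before wiring this up, it can help to confirm that the server answers chat requests for that model id directly. A minimal sketch using curl (adjust host, port, and model id to your deployment; the Authorization header is harmless if your server does not enforce auth):

# replace your-model-id with the id reported by /v1/models
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $VLLM_API_KEY" \
  -d '{"model": "your-model-id", "messages": [{"role": "user", "content": "Say hello"}]}'

If this returns a completion, the same model id should work as vllm/your-model-id in CoderClaw.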
When VLLM_API_KEY is set (or an auth profile exists) and you do not define models.providers.vllm, CoderClaw will query:
GET http://127.0.0.1:8000/v1/models

…and convert the returned IDs into model entries.
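The response follows the OpenAI-style model list; it typically looks roughly like this (fields trimmed, values illustrative):

{
  "object": "list",
  "data": [
    { "id": "your-model-id", "object": "model", "owned_by": "vllm" }
  ]
}

Each id in data becomes a model you can reference as vllm/<id>.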
If you set models.providers.vllm explicitly, auto-discovery is skipped and you must define models manually.
Use explicit config when you need to pin specific contextWindow/maxTokens values (or other per-model details) yourself:

{
  models: {
    providers: {
      vllm: {
        baseUrl: "http://127.0.0.1:8000/v1",
        apiKey: "${VLLM_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "your-model-id",
            name: "Local vLLM Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
To verify that the endpoint is reachable, list the available models:

curl http://127.0.0.1:8000/v1/models

If the request fails or returns no models, set a VLLM_API_KEY that matches your server configuration, or configure the provider explicitly under models.providers.vllm.
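If your server enforces authentication (for example, it was started with vLLM's --api-key option), the same check needs a bearer token. Assuming the key is exported as VLLM_API_KEY:

# send the key as a bearer token, matching what CoderClaw does
curl http://127.0.0.1:8000/v1/models \
  -H "Authorization: Bearer $VLLM_API_KEY"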