Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. CoderClaw integrates with Ollama’s native API (/api/chat), supporting streaming and tool calling, and can auto-discover tool-capable models when you opt in with OLLAMA_API_KEY (or an auth profile) and do not define an explicit models.providers.ollama entry.
Install Ollama: https://ollama.ai
Pull a model:
ollama pull gpt-oss:20b
# or
ollama pull llama3.3
# or
ollama pull qwen2.5-coder:32b
# or
ollama pull deepseek-r1:32b
# Set environment variable
export OLLAMA_API_KEY="ollama-local"
# Or configure in your config file
coderclaw config set models.providers.ollama.apiKey "ollama-local"
{
agents: {
defaults: {
model: { primary: "ollama/gpt-oss:20b" },
},
},
}
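If you prefer the CLI, the same default can likely be set with config set, assuming nested keys work the same way as in the apiKey example above:
coderclaw config set agents.defaults.model.primary "ollama/gpt-oss:20b"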
When you set OLLAMA_API_KEY (or an auth profile) and do not define models.providers.ollama, CoderClaw discovers models from the local Ollama instance at http://127.0.0.1:11434:
Auto-discovery works as follows:
- Models are listed via /api/tags and inspected via /api/show.
- Only models that report the tools capability are included.
- reasoning is enabled when the model reports thinking.
- contextWindow is taken from model_info["<arch>.context_length"] when available.
- maxTokens is set to 10× the context window.
- All costs are set to 0.
This avoids manual model entries while keeping the catalog aligned with Ollama’s capabilities.
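To see what auto-discovery will pick up for a given model, you can query /api/show directly. A quick sketch, assuming jq is installed and a recent Ollama version that includes a capabilities array; the context-length key depends on the model architecture:
# Capabilities should include "tools" (and "thinking" for reasoning models)
curl -s http://127.0.0.1:11434/api/show -d '{"model": "gpt-oss:20b"}' | jq '.capabilities'
# Context length, e.g. "llama.context_length" for Llama-architecture models
curl -s http://127.0.0.1:11434/api/show -d '{"model": "llama3.3"}' | jq '.model_info | with_entries(select(.key | endswith("context_length")))'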
To see what models are available:
ollama list
coderclaw models list
To add a new model, simply pull it with Ollama:
ollama pull mistral
The new model will be automatically discovered and available to use.
If you set models.providers.ollama explicitly, auto-discovery is skipped and you must define models manually (see below).
The simplest way to enable Ollama is via environment variable:
export OLLAMA_API_KEY="ollama-local"
Use explicit config when Ollama runs on a different host or port, when you want to pin or override model entries (for example contextWindow and maxTokens), or when you need the OpenAI-compatible endpoint:
{
models: {
providers: {
ollama: {
baseUrl: "http://ollama-host:11434",
apiKey: "ollama-local",
api: "ollama",
models: [
{
id: "gpt-oss:20b",
name: "GPT-OSS 20B",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 8192,
maxTokens: 8192 * 10
}
]
}
}
}
}
If OLLAMA_API_KEY is set, you can omit apiKey in the provider entry and CoderClaw will fill it for availability checks.
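For example, a provider entry without apiKey might look like this when OLLAMA_API_KEY is exported (a sketch based on the explicit config above; model entries elided):
{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama-host:11434",
        api: "ollama",
        models: [...]
      }
    }
  }
}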
If Ollama is running on a different host or port (explicit config disables auto-discovery, so define models manually):
{
models: {
providers: {
ollama: {
apiKey: "ollama-local",
baseUrl: "http://ollama-host:11434",
},
},
},
}
Once configured, all your Ollama models are available:
{
agents: {
defaults: {
model: {
primary: "ollama/gpt-oss:20b",
fallbacks: ["ollama/llama3.3", "ollama/qwen2.5-coder:32b"],
},
},
},
}
CoderClaw marks models as reasoning-capable when Ollama reports thinking in /api/show:
ollama pull deepseek-r1:32b
Ollama is free and runs locally, so all model costs are set to $0.
CoderClaw’s Ollama integration uses the native Ollama API (/api/chat) by default, which fully supports streaming and tool calling simultaneously. No special configuration is needed.
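For reference, this is what a native /api/chat request with streaming and a tool looks like when sent to Ollama directly; the get_weather tool is a made-up example, not something CoderClaw defines:
curl http://127.0.0.1:11434/api/chat -d '{
  "model": "gpt-oss:20b",
  "stream": true,
  "messages": [{ "role": "user", "content": "What is the weather in Berlin?" }],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
      }
    }
  }]
}'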
If you need to use the OpenAI-compatible endpoint instead (e.g., behind a proxy that only supports OpenAI format), set api: "openai-completions" explicitly:
{
models: {
providers: {
ollama: {
baseUrl: "http://ollama-host:11434/v1",
api: "openai-completions",
apiKey: "ollama-local",
models: [...]
}
}
}
}
Note: The OpenAI-compatible endpoint may not support streaming + tool calling simultaneously. You may need to disable streaming with params: { streaming: false } in model config.
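A model entry with streaming disabled might then look like this; a sketch that mirrors the explicit config above, with params: { streaming: false } placed in the model entry as the note describes:
{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://ollama-host:11434/v1",
        api: "openai-completions",
        apiKey: "ollama-local",
        models: [
          {
            id: "gpt-oss:20b",
            name: "GPT-OSS 20B",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 8192,
            maxTokens: 8192 * 10,
            params: { streaming: false }
          }
        ]
      }
    }
  }
}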
For auto-discovered models, CoderClaw uses the context window reported by Ollama when available, otherwise it defaults to 8192. You can override contextWindow and maxTokens in explicit provider config.
Make sure Ollama is running, that OLLAMA_API_KEY (or an auth profile) is set, and that you have not defined an explicit models.providers.ollama entry:
ollama serve
And that the API is accessible:
curl http://localhost:11434/api/tags
CoderClaw only auto-discovers models that report tool support. If your model isn’t listed, either pull a model that supports tool calling or define it manually under models.providers.ollama. To add models:
ollama list # See what's installed
ollama pull gpt-oss:20b # Pull a tool-capable model
ollama pull llama3.3 # Or another model
Check that Ollama is running on the correct port:
# Check if Ollama is running
ps aux | grep ollama
# Or restart Ollama
ollama serve
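If Ollama is bound to a non-default address, make sure the baseUrl in your provider config points at it. Ollama itself reads OLLAMA_HOST; the host and port below are placeholders for your setup:
# Serve Ollama on an explicit address
OLLAMA_HOST=0.0.0.0:11434 ollama serve
# Verify from the machine running CoderClaw
curl http://ollama-host:11434/api/tags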