MiniMax is an AI company that builds the M2/M2.1 model family. The current coding-focused release is MiniMax M2.1 (December 23, 2025), built for real-world complex tasks.
Source: MiniMax M2.1 release note
MiniMax highlights a number of improvements in M2.1; see the release note for the full list.
Best for: quick setup with the MiniMax Coding Plan via OAuth; no API key required.
Enable the bundled OAuth plugin and authenticate:
coderclaw plugins enable minimax-portal-auth   # skip if already enabled
coderclaw gateway restart                      # only needed if the gateway is already running
coderclaw onboard --auth-choice minimax-portal
You will be prompted to select an endpoint:
- api.minimax.io
- api.minimaxi.com

See the MiniMax OAuth plugin README for details.
Best for: hosted MiniMax with Anthropic-compatible API.
Configure via CLI with coderclaw configure, or add the provider to coderclaw.json directly:

{
env: { MINIMAX_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
models: {
mode: "merge",
providers: {
minimax: {
baseUrl: "https://api.minimax.io/anthropic",
apiKey: "${MINIMAX_API_KEY}",
api: "anthropic-messages",
models: [
{
id: "MiniMax-M2.1",
name: "MiniMax M2.1",
reasoning: false,
input: ["text"],
cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
contextWindow: 200000,
maxTokens: 8192,
},
],
},
},
},
}
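The same provider can also be driven with OpenAI-compatible payloads (the optional mode mentioned in the configuration reference). A sketch, assuming only baseUrl and api change and the model list stays identical to the Anthropic-compatible example above:

```json5
{
  models: {
    mode: "merge",
    providers: {
      minimax: {
        // OpenAI-compatible variant: /v1 base URL and openai-completions API.
        baseUrl: "https://api.minimax.io/v1",
        apiKey: "${MINIMAX_API_KEY}",
        api: "openai-completions",
        // models: [ ... ] — same entries as in the Anthropic-compatible example.
      },
    },
  },
}
```

The Anthropic-compatible endpoint is the recommended default; use this variant only if your tooling expects OpenAI-style payloads.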
Best for: keeping Opus 4.6 as the primary model and failing over to MiniMax M2.1.
{
env: { MINIMAX_API_KEY: "sk-..." },
agents: {
defaults: {
models: {
"anthropic/claude-opus-4-6": { alias: "opus" },
"minimax/MiniMax-M2.1": { alias: "minimax" },
},
model: {
primary: "anthropic/claude-opus-4-6",
fallbacks: ["minimax/MiniMax-M2.1"],
},
},
},
}
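More than one fallback can be listed. A sketch chaining the lightning variant as a second fallback, assuming fallbacks are tried in order and that minimax/MiniMax-M2.1-lightning (the id noted in the troubleshooting section) is also defined under models.providers.minimax:

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-opus-4-6",
        // Assumed semantics: tried left to right when the primary is unavailable.
        fallbacks: ["minimax/MiniMax-M2.1", "minimax/MiniMax-M2.1-lightning"],
      },
    },
  },
}
```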
Best for: local inference with LM Studio. We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a desktop/server) using LM Studio’s local server.
Configure manually via coderclaw.json:
{
agents: {
defaults: {
model: { primary: "lmstudio/minimax-m2.1-gs32" },
models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } },
},
},
models: {
mode: "merge",
providers: {
lmstudio: {
baseUrl: "http://127.0.0.1:1234/v1",
apiKey: "lmstudio",
api: "openai-responses",
models: [
{
id: "minimax-m2.1-gs32",
name: "MiniMax M2.1 GS32",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 196608,
maxTokens: 8192,
},
],
},
},
},
}
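If LM Studio runs on a different machine than coderclaw, only the baseUrl should need to change. A sketch, with the LAN address as a placeholder:

```json5
{
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        // Placeholder LAN address of the machine running LM Studio's server;
        // replace with your host. Port 1234 is LM Studio's default.
        baseUrl: "http://192.168.1.50:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
      },
    },
  },
}
```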
Use the interactive config wizard to set up MiniMax without editing JSON:

coderclaw configure

Key settings:

- models.providers.minimax.baseUrl: prefer https://api.minimax.io/anthropic (Anthropic-compatible); https://api.minimax.io/v1 is optional for OpenAI-compatible payloads.
- models.providers.minimax.api: prefer anthropic-messages; openai-completions is optional for OpenAI-compatible payloads.
- models.providers.minimax.apiKey: your MiniMax API key (MINIMAX_API_KEY).
- models.providers.minimax.models: define id, name, reasoning, contextWindow, maxTokens, and cost.
- agents.defaults.models: alias the models you want in the allowlist.
- models.mode: keep merge to add MiniMax alongside the built-ins.

Notes:

- Reference models as minimax/<model>.
- The coding-plan endpoint https://api.minimaxi.com/v1/api/openplatform/coding_plan/ remains available (requires a coding plan key).
- Override pricing in models.json if you need exact cost tracking.
- Use coderclaw models list and coderclaw models set minimax/MiniMax-M2.1 to switch.

If coderclaw reports the MiniMax model or provider as unknown: this usually means the MiniMax provider isn't configured (no provider entry and no MiniMax auth profile/env key found). A fix for this detection is in 2026.1.12 (unreleased at the time of writing). Fix by:
- upgrading to a build that includes the fix (e.g. from main), then restarting the gateway, or
- running coderclaw configure and selecting MiniMax M2.1, or
- adding the models.providers.minimax block manually, or
- setting MINIMAX_API_KEY (or a MiniMax auth profile) so the provider can be injected.

Model ids are case-sensitive; use them exactly as defined:

minimax/MiniMax-M2.1
minimax/MiniMax-M2.1-lightning

Then recheck with:
coderclaw models list
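If the provider still isn't detected, a quick shell preflight can rule out a missing key. A minimal sketch, checking that the variable referenced by the config is exported in the shell that starts the gateway:

```shell
# Preflight sketch: confirm MINIMAX_API_KEY (the variable the config
# substitutes via ${MINIMAX_API_KEY}) is visible to the gateway process.
if [ -n "${MINIMAX_API_KEY:-}" ]; then
  echo "MINIMAX_API_KEY: set"
else
  echo "MINIMAX_API_KEY: missing (export it, then restart the gateway)"
fi
```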