Many people assume Kimi Code CLI can only use Kimi models. In fact, it natively supports OpenAI, Claude, Gemini, and more. This guide walks you through the configuration step by step.
Kimi Code CLI is Moonshot AI's open-source AI coding assistant. The "Kimi" in the name leads many people to assume it can only call the Kimi API.

That is not the case.
As the source code shows, Kimi CLI abstracts the LLM layer through the kosong library and natively supports several provider types:

- `kimi` (the default Moonshot endpoint)
- `openai_legacy` (any OpenAI-compatible API: OpenAI, OpenRouter, Ollama, vLLM, ...)
- `anthropic`
- `gemini`
Kimi K2.5 is strong, but in the following scenarios you may want a different model:
| Scenario | Recommended alternative |
|---|---|
| Need Claude's coding ability | Claude 3.5 Sonnet |
| Want certain GPT-4 features | OpenAI GPT-4 |
| Limited budget, third-party proxy | OpenRouter, OneAPI |
| Fully offline, local inference | Ollama, vLLM, llama.cpp |
| Need Gemini's long context | Gemini 1.5 Pro (2M tokens) |
Edit `~/.kimi/config.toml`:
```toml
# Note: default_model is a top-level key, so in TOML it must come
# before any [table] header -- otherwise it becomes a key of that table
default_model = "gpt4"

[providers.openai]
type = "openai_legacy"
base_url = "https://api.openai.com/v1"
api_key = "sk-your-openai-key"

[models.gpt4]
provider = "openai"
model = "gpt-4o"
max_context_size = 128000
capabilities = ["thinking", "image_in"]
```
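With that saved, a quick sanity check (a sketch; the key above is a placeholder you'd replace with a real one):

```bash
kimi        # launches with default_model = "gpt4"
# inside the session, run /model to confirm it reports
# gpt-4o served through the "openai" provider
```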
Via OpenRouter (or another OpenAI-compatible proxy):

```toml
default_model = "claude_via_openrouter"

[providers.openrouter]
type = "openai_legacy"
base_url = "https://openrouter.ai/api/v1"
api_key = "sk-or-your-openrouter-key"

[models.claude_via_openrouter]
provider = "openrouter"
model = "anthropic/claude-3.5-sonnet"
max_context_size = 200000
```
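Before wiring the key into kimi, you can sanity-check it against OpenRouter's standard model-listing endpoint (placeholder key; `head` just truncates the output):

```bash
curl -s https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer sk-or-your-openrouter-key" | head
```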
For local models via Ollama:

```toml
default_model = "local_llama"

[providers.ollama]
type = "openai_legacy"
base_url = "http://localhost:11434/v1"
api_key = "ollama"  # Ollama doesn't need a real key, but the field must not be empty

[models.local_llama]
provider = "ollama"
model = "codellama:13b"
max_context_size = 16000
```
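For this to work, the model has to be pulled and the Ollama server running. A minimal sketch, assuming a standard local Ollama install:

```bash
ollama pull codellama:13b                  # fetch the model referenced above
ollama list                                # confirm it is available locally
curl -s http://localhost:11434/v1/models   # the OpenAI-compatible endpoint kimi will call
```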
If you prefer to call Anthropic directly rather than through a proxy, use the native `anthropic` provider:

```toml
default_model = "claude"

[providers.anthropic]
type = "anthropic"
base_url = "https://api.anthropic.com"
api_key = "sk-ant-your-claude-key"

[models.claude]
provider = "anthropic"
model = "claude-3-5-sonnet-20241022"
max_context_size = 200000
capabilities = ["thinking", "image_in"]
```
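To rule out key problems independently of kimi, you can call Anthropic's Messages API directly (placeholder key):

```bash
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: sk-ant-your-claude-key" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-5-sonnet-20241022", "max_tokens": 32, "messages": [{"role": "user", "content": "ping"}]}'
```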
And the native `gemini` provider:

```toml
default_model = "gemini_pro"

[providers.gemini]
type = "gemini"
base_url = "https://generativelanguage.googleapis.com"
api_key = "your-gemini-key"

[models.gemini_pro]
provider = "gemini"
model = "gemini-1.5-pro"
max_context_size = 2000000  # 2 million tokens!
capabilities = ["thinking", "image_in", "video_in"]
```
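Likewise, the Gemini key can be verified by listing available models against Google's Generative Language API (placeholder key):

```bash
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=your-gemini-key" | head
```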
Important: environment variables only take effect for providers of `type = "kimi"` or `type = "openai_legacy"`, and the variable prefix must match the provider type.
```bash
export OPENAI_BASE_URL="https://api.openai.com/v1"
export OPENAI_API_KEY="sk-your-key"
```
But there is a gotcha: if you have not defined a provider with `type = "openai_legacy"` in the config file, these environment variables do nothing. The provider Kimi CLI creates by default has `type = "kimi"`, which only reads environment variables with the `KIMI_` prefix.
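Putting the two together, a minimal working combination might look like this; it assumes the `openai` provider from the earlier snippet is already defined in `~/.kimi/config.toml`:

```bash
# The OPENAI_* prefix matches the provider with type = "openai_legacy"
export OPENAI_BASE_URL="https://api.openai.com/v1"
export OPENAI_API_KEY="sk-your-key"
kimi   # the variables should now be picked up for the "openai" provider
```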
```bash
# If you're using the Kimi provider but want to point it at a different endpoint
export KIMI_BASE_URL="https://your-openai-compatible-endpoint.com/v1"
export KIMI_API_KEY="your-key"
export KIMI_MODEL_NAME="gpt-4"
```
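For example, since Ollama exposes an OpenAI-compatible endpoint at `/v1`, the same trick can point the default Kimi provider at a local server. A sketch, assuming Ollama is running on its default port:

```bash
export KIMI_BASE_URL="http://localhost:11434/v1"   # Ollama's OpenAI-compatible endpoint
export KIMI_API_KEY="ollama"                       # any non-empty value works for Ollama
export KIMI_MODEL_NAME="codellama:13b"
kimi
```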
If the model or provider can't be found, check:

- whether `default_model` in `~/.kimi/config.toml` points to a model that actually exists
- whether that model's `provider` points to a provider that actually exists
- the provider type:

```toml
[providers.something]
type = "openai_legacy"  # must be this, not "kimi"
```
If a local endpoint (e.g., Ollama) isn't responding, check:

- that `ollama list` shows the model you configured
- that the `base_url` includes the `/v1` suffix

To confirm which model is actually in use, watch the logs after starting kimi, or run:

```
/model
```

which displays the current model info.
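A few plain `grep` one-liners also catch most config mistakes before you even launch (paths as used throughout this guide):

```bash
grep -n "^default_model" ~/.kimi/config.toml   # should appear before any [table] header
grep -n "^\[providers\." ~/.kimi/config.toml   # every model's provider must exist here
grep -n "^\[models\." ~/.kimi/config.toml      # default_model must name one of these
```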
You can define multiple models in the config and switch between them at launch:
```toml
default_model = "gpt4"  # default to GPT-4

[providers.openai]
type = "openai_legacy"
base_url = "https://api.openai.com/v1"
api_key = "sk-xxx"

[providers.anthropic]
type = "anthropic"
base_url = "https://api.anthropic.com"
api_key = "sk-ant-xxx"

[models.gpt4]
provider = "openai"
model = "gpt-4o"
max_context_size = 128000

[models.claude]
provider = "anthropic"
model = "claude-3-5-sonnet-20241022"
max_context_size = 200000
```
Switch at launch:
```bash
# Use Claude
kimi --model claude

# Use GPT-4
kimi --model gpt4
```
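If you switch often, ordinary shell aliases keep it short (nothing kimi-specific here; `kc` and `kg` are arbitrary names):

```bash
alias kc='kimi --model claude'
alias kg='kimi --model gpt4'
```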
Kimi Code CLI's architecture is open by design; don't let the name mislead you. With a few lines of configuration, you can:

- define any provider in `config.toml` (specifying `type`, `base_url`, `api_key`)
- register models on top of those providers and select one via `default_model` or `kimi --model`

Hopefully this guide helps you unlock the full potential of Kimi CLI!