Connecting Third-Party Models to OpenClaw


OpenClaw supports Alibaba Cloud Bailian's Coding Plan out of the box. To connect a model plan from another vendor, modify the configuration file as described in this topic. Six integration methods are covered as examples: MiniMax Token Plan, Zhipu GLM Coding Plan, Tencent Cloud Token Plan, Tencent Cloud Coding Plan, DeepSeek V4, and Volcengine Agent Plan.

Integration Steps by Vendor

MiniMax Token Plan

  1. Log in to the Simple Application Server console, find the target server card, click Remote Connection, and then click Log In Now in the Workbench one-click connection section.

  2. Make sure your account has an active MiniMax Token Plan subscription.

  3. Run openclaw configure in the terminal and make the following selections in turn:

    Where will the Gateway run?     → Local (this machine)
    Select sections to configure    → Model
    Model/auth provider             → MiniMax
    Auth method                     → MiniMax CN — API Key (minimaxi.com)   # overseas users: choose MiniMax Global
    Enter API Key                   → paste your Token Plan API Key
    Default model                   → keep the default (MiniMax-M2.7)
    Select sections to configure    → Continue

Zhipu GLM Coding Plan

  1. Log in to the Simple Application Server console, find the target server card, click Remote Connection, and then click Log In Now in the Workbench one-click connection section.

  2. Make sure your account has an active GLM Coding Plan subscription.

  3. Run openclaw configure in the terminal and make the following selections in turn:

    Where will the Gateway run?                                  → Local (this machine)
    Select sections to configure                                 → Model
    Model/auth provider                                          → Z.AI
    Z.AI auth method                                             → Coding-Plan-CN
    Enter API Key                                                → paste your Zhipu API Key
    Select model                                                 → zai/glm-5.1
    Select sections to configure                                 → Continue

Tencent Cloud Token Plan

  1. Log in to the Tencent Cloud large model service platform to view your API Key.

After obtaining the API Key, modify the configuration file in either of the following ways:

WebUI (versions earlier than 2026.4.14 only)

  1. Open the OpenClaw WebUI, choose Configuration > Raw in the left-side menu, and change the models.providers settings in openclaw.json (replace USER_API_KEY with the API Key obtained in the previous step):

    "models": {
      "mode": "merge",
      "providers": {
        "tencent-token-plan": {
          "baseUrl": "https://api.lkeap.cloud.tencent.com/plan/v3",
          "apiKey": "USER_API_KEY",
          "api": "openai-completions",
          "models": [
            {
              "id": "tc-code-latest",
              "name": "Auto",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 196608,
              "maxTokens": 32768
            },
            {
              "id": "hunyuan-2.0-instruct",
              "name": "Tencent HY 2.0 Instruct",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 128000,
              "maxTokens": 16000
            },
            {
              "id": "hunyuan-2.0-thinking",
              "name": "Tencent HY 2.0 Think",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 128000,
              "maxTokens": 32000
            },
            {
              "id": "hunyuan-t1",
              "name": "Hunyuan-T1",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 64000,
              "maxTokens": 32000
            },
            {
              "id": "hunyuan-turbos",
              "name": "hunyuan-turbos",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 32000,
              "maxTokens": 16000
            },
            {
              "id": "minimax-m2.5",
              "name": "MiniMax-M2.5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 196608,
              "maxTokens": 32768
            },
            {
              "id": "kimi-k2.5",
              "name": "Kimi-K2.5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 262144,
              "maxTokens": 32768
            },
            {
              "id": "glm-5",
              "name": "GLM-5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 202752,
              "maxTokens": 16384
            }
          ]
        }
      }
    }

    Then modify agents.defaults in openclaw.json to register the model list (primary defaults to glm-5 in this example and can be changed to any model defined under models):

    "agents": {
      "defaults": {
        "model": {
          "primary": "tencent-token-plan/glm-5"
        },
        "models": {
          "tencent-token-plan/tc-code-latest": {},
          "tencent-token-plan/hunyuan-2.0-instruct": {},
          "tencent-token-plan/hunyuan-2.0-thinking": {},
          "tencent-token-plan/hunyuan-t1": {},
          "tencent-token-plan/hunyuan-turbos": {},
          "tencent-token-plan/minimax-m2.5": {},
          "tencent-token-plan/kimi-k2.5": {},
          "tencent-token-plan/glm-5": {}
        }
      }
    }

    After making the changes, click Save on the page to save the configuration, then click Update to apply it.
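The Raw editor gives no structural feedback, so a mismatch between a provider prefix used in agents.defaults and the provider name in models.providers fails silently. A small self-contained Python check (a sketch; the config fragment here is trimmed to two models) can confirm that every reference resolves:

```python
# A quick consistency check (sketch) before applying the configuration:
# every model registered under agents.defaults must resolve to an entry
# under models.providers. The fragment below is a trimmed copy of the
# configuration shown above.
config = {
    "models": {
        "providers": {
            "tencent-token-plan": {
                "models": [{"id": "glm-5"}, {"id": "kimi-k2.5"}],
            }
        }
    },
    "agents": {
        "defaults": {
            "model": {"primary": "tencent-token-plan/glm-5"},
            "models": {
                "tencent-token-plan/glm-5": {},
                "tencent-token-plan/kimi-k2.5": {},
            },
        }
    },
}

def unknown_model_refs(cfg: dict) -> list:
    """Return agent model references with no matching provider model."""
    known = {
        f"{provider}/{m['id']}"
        for provider, pdata in cfg["models"]["providers"].items()
        for m in pdata.get("models", [])
    }
    refs = set(cfg["agents"]["defaults"]["models"])
    refs.add(cfg["agents"]["defaults"]["model"]["primary"])
    return sorted(refs - known)

print(unknown_model_refs(config))  # [] means every reference resolves
```

Any entries it prints are references with no matching provider model, typically caused by a typo in the provider name or model id.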

Modifying the Configuration File from the Terminal

  1. Log in to the terminal (password-free): log in to the Simple Application Server console, find the target server card, click Remote Connection, and then click Log In Now in the Workbench one-click connection section.

  2. Modify the configuration file: run the following command in the terminal to edit it.

    vim ~/.openclaw/openclaw.json

    Change the following settings under models.providers (replace USER_API_KEY with the API Key obtained in the previous step):

    "models": {
      "mode": "merge",
      "providers": {
        "tencent-token-plan": {
          "baseUrl": "https://api.lkeap.cloud.tencent.com/plan/v3",
          "apiKey": "USER_API_KEY",
          "api": "openai-completions",
          "models": [
            {
              "id": "tc-code-latest",
              "name": "Auto",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 196608,
              "maxTokens": 32768
            },
            {
              "id": "hunyuan-2.0-instruct",
              "name": "Tencent HY 2.0 Instruct",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 128000,
              "maxTokens": 16000
            },
            {
              "id": "hunyuan-2.0-thinking",
              "name": "Tencent HY 2.0 Think",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 128000,
              "maxTokens": 32000
            },
            {
              "id": "hunyuan-t1",
              "name": "Hunyuan-T1",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 64000,
              "maxTokens": 32000
            },
            {
              "id": "hunyuan-turbos",
              "name": "hunyuan-turbos",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 32000,
              "maxTokens": 16000
            },
            {
              "id": "minimax-m2.5",
              "name": "MiniMax-M2.5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 196608,
              "maxTokens": 32768
            },
            {
              "id": "kimi-k2.5",
              "name": "Kimi-K2.5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 262144,
              "maxTokens": 32768
            },
            {
              "id": "glm-5",
              "name": "GLM-5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 202752,
              "maxTokens": 16384
            }
          ]
        }
      }
    }

    Then modify agents.defaults in openclaw.json to register the model list (primary defaults to glm-5 in this example and can be changed to any model defined under models):

    "agents": {
      "defaults": {
        "model": {
          "primary": "tencent-token-plan/glm-5"
        },
        "models": {
          "tencent-token-plan/tc-code-latest": {},
          "tencent-token-plan/hunyuan-2.0-instruct": {},
          "tencent-token-plan/hunyuan-2.0-thinking": {},
          "tencent-token-plan/hunyuan-t1": {},
          "tencent-token-plan/hunyuan-turbos": {},
          "tencent-token-plan/minimax-m2.5": {},
          "tencent-token-plan/kimi-k2.5": {},
          "tencent-token-plan/glm-5": {}
        }
      }
    }
  3. Restart the service to apply the configuration: after saving the file, run the following command in the terminal to restart OpenClaw.

    openclaw gateway restart
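For repeat setups, the same edits can be scripted rather than made by hand in vim. The Python sketch below merges a provider into a config dict without touching unrelated keys; the helper name add_provider is illustrative, and PROVIDER is trimmed to a single model for brevity (the real list is the one shown above).

```python
import json
from pathlib import Path

# Sketch: apply the edits above programmatically. USER_API_KEY is a
# placeholder to replace; the model list is trimmed to one entry.
CONFIG_PATH = Path.home() / ".openclaw" / "openclaw.json"

PROVIDER_NAME = "tencent-token-plan"
PROVIDER = {
    "baseUrl": "https://api.lkeap.cloud.tencent.com/plan/v3",
    "apiKey": "USER_API_KEY",
    "api": "openai-completions",
    "models": [
        {"id": "glm-5", "name": "GLM-5", "reasoning": False,
         "input": ["text"],
         "cost": {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0},
         "contextWindow": 202752, "maxTokens": 16384},
    ],
}

def add_provider(cfg: dict, name: str, provider: dict, primary_id: str) -> dict:
    """Register a provider and its models under agents.defaults, in place."""
    cfg.setdefault("models", {"mode": "merge"}).setdefault("providers", {})[name] = provider
    defaults = cfg.setdefault("agents", {}).setdefault("defaults", {})
    defaults.setdefault("model", {})["primary"] = f"{name}/{primary_id}"
    registered = defaults.setdefault("models", {})
    for m in provider["models"]:
        registered[f"{name}/{m['id']}"] = {}
    return cfg

cfg = add_provider({}, PROVIDER_NAME, PROVIDER, "glm-5")
print(json.dumps(cfg["agents"]["defaults"]["model"], indent=2))
# To apply for real: load CONFIG_PATH, call add_provider on the parsed
# dict, write it back, then run `openclaw gateway restart`.
```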

Tencent Cloud Coding Plan

  1. Log in to the Tencent Cloud large model service platform to view your API Key.

After obtaining the API Key, modify the configuration file in either of the following ways:

WebUI (versions earlier than 2026.4.14 only)

  1. Open the OpenClaw WebUI, choose Configuration > Raw in the left-side menu, and change the models.providers settings in openclaw.json (replace USER_API_KEY with the API Key obtained in the previous step):

    "models": {
      "mode": "merge",
      "providers": {
        "tencent-coding-plan": {
          "baseUrl": "https://api.lkeap.cloud.tencent.com/coding/v3",
          "apiKey": "USER_API_KEY",
          "api": "openai-completions",
          "models": [
            {
              "id": "tc-code-latest",
              "name": "Auto",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 196608,
              "maxTokens": 32768
            },
            {
              "id": "hunyuan-2.0-instruct",
              "name": "Tencent HY 2.0 Instruct",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 128000,
              "maxTokens": 16000
            },
            {
              "id": "hunyuan-2.0-thinking",
              "name": "Tencent HY 2.0 Think",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 128000,
              "maxTokens": 32000
            },
            {
              "id": "hunyuan-t1",
              "name": "Hunyuan-T1",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 64000,
              "maxTokens": 32000
            },
            {
              "id": "hunyuan-turbos",
              "name": "hunyuan-turbos",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 32000,
              "maxTokens": 16000
            },
            {
              "id": "minimax-m2.5",
              "name": "MiniMax-M2.5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 196608,
              "maxTokens": 32768
            },
            {
              "id": "kimi-k2.5",
              "name": "Kimi-K2.5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 262144,
              "maxTokens": 32768
            },
            {
              "id": "glm-5",
              "name": "GLM-5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 202752,
              "maxTokens": 16384
            }
          ]
        }
      }
    }

    Then modify agents.defaults in openclaw.json to register the model list (primary defaults to glm-5 in this example and can be changed to any model defined under models):

    "agents": {
      "defaults": {
        "model": {
          "primary": "tencent-coding-plan/glm-5"
        },
        "models": {
          "tencent-coding-plan/tc-code-latest": {},
          "tencent-coding-plan/hunyuan-2.0-instruct": {},
          "tencent-coding-plan/hunyuan-2.0-thinking": {},
          "tencent-coding-plan/hunyuan-t1": {},
          "tencent-coding-plan/hunyuan-turbos": {},
          "tencent-coding-plan/minimax-m2.5": {},
          "tencent-coding-plan/kimi-k2.5": {},
          "tencent-coding-plan/glm-5": {}
        }
      }
    }

    After making the changes, click Save on the page to save the configuration, then click Update to apply it.

Modifying the Configuration File from the Terminal

  1. Log in to the terminal (password-free): log in to the Simple Application Server console, find the target server card, click Remote Connection, and then click Log In Now in the Workbench one-click connection section.

  2. Modify the configuration file: run the following command in the terminal to edit it.

    vim ~/.openclaw/openclaw.json

    Change the following settings under models.providers (replace USER_API_KEY with the API Key obtained in the previous step):

    "models": {
      "mode": "merge",
      "providers": {
        "tencent-coding-plan": {
          "baseUrl": "https://api.lkeap.cloud.tencent.com/coding/v3",
          "apiKey": "USER_API_KEY",
          "api": "openai-completions",
          "models": [
            {
              "id": "tc-code-latest",
              "name": "Auto",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 196608,
              "maxTokens": 32768
            },
            {
              "id": "hunyuan-2.0-instruct",
              "name": "Tencent HY 2.0 Instruct",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 128000,
              "maxTokens": 16000
            },
            {
              "id": "hunyuan-2.0-thinking",
              "name": "Tencent HY 2.0 Think",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 128000,
              "maxTokens": 32000
            },
            {
              "id": "hunyuan-t1",
              "name": "Hunyuan-T1",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 64000,
              "maxTokens": 32000
            },
            {
              "id": "hunyuan-turbos",
              "name": "hunyuan-turbos",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 32000,
              "maxTokens": 16000
            },
            {
              "id": "minimax-m2.5",
              "name": "MiniMax-M2.5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 196608,
              "maxTokens": 32768
            },
            {
              "id": "kimi-k2.5",
              "name": "Kimi-K2.5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 262144,
              "maxTokens": 32768
            },
            {
              "id": "glm-5",
              "name": "GLM-5",
              "reasoning": false,
              "input": ["text"],
              "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
              "contextWindow": 202752,
              "maxTokens": 16384
            }
          ]
        }
      }
    }

    Then modify agents.defaults in openclaw.json to register the model list (primary defaults to glm-5 in this example and can be changed to any model defined under models):

    "agents": {
      "defaults": {
        "model": {
          "primary": "tencent-coding-plan/glm-5"
        },
        "models": {
          "tencent-coding-plan/tc-code-latest": {},
          "tencent-coding-plan/hunyuan-2.0-instruct": {},
          "tencent-coding-plan/hunyuan-2.0-thinking": {},
          "tencent-coding-plan/hunyuan-t1": {},
          "tencent-coding-plan/hunyuan-turbos": {},
          "tencent-coding-plan/minimax-m2.5": {},
          "tencent-coding-plan/kimi-k2.5": {},
          "tencent-coding-plan/glm-5": {}
        }
      }
    }
  3. Restart the service to apply the configuration: after saving the file, run the following command in the terminal to restart OpenClaw.

    openclaw gateway restart
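Hand-edited JSON breaks easily (a stray trailing comma is enough), so it is worth validating the file before restarting. A minimal check using only the Python standard library:

```python
import json

# Validate a JSON string and report where parsing fails (sketch).
def check_json(text: str) -> str:
    try:
        json.loads(text)
        return "ok"
    except json.JSONDecodeError as e:
        return f"line {e.lineno}, column {e.colno}: {e.msg}"

print(check_json('{"models": {"mode": "merge"}}'))   # ok
print(check_json('{"models": {"mode": "merge",}}'))  # trailing comma is rejected
```

For the real file, `python3 -m json.tool ~/.openclaw/openclaw.json` is a one-line equivalent: it prints the parsed JSON on success and a parse error otherwise.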

DeepSeek V4

  1. Log in to the terminal (password-free): log in to the Simple Application Server console, find the target server card, click Remote Connection, and then click Log In Now in the Workbench one-click connection section.

  2. Go to the DeepSeek Open Platform to obtain an API Key.

  3. Modify the configuration file: run the following command in the terminal to edit it.

    vim ~/.openclaw/openclaw.json

    Add the following settings under models.providers, then save and exit (replace YOUR_API_KEY with the API Key obtained in the previous step):

    "models": {
      "mode": "merge",
      "providers": {
        "deepseek": {
          "baseUrl": "https://api.deepseek.com/v1",
          "apiKey": "YOUR_API_KEY",
          "api": "openai-completions",
          "models": [
            {"id": "deepseek-v4-flash", "name": "DeepSeek V4 Flash"},
            {"id": "deepseek-v4-pro", "name": "DeepSeek V4 Pro"}
          ]
        }
      }
    }

    Then modify agents.defaults in openclaw.json to register the model list, and save and exit (primary defaults to deepseek-v4-flash in this example and can be changed to any model defined under models):

    "agents": {
      "defaults": {
        "model": {
          "primary": "deepseek/deepseek-v4-flash"
        },
        "models": {
          "deepseek/deepseek-v4-flash": {},
          "deepseek/deepseek-v4-pro": {}
        }
      }
    }
  4. Restart the service to apply the configuration: after saving the file, run the following command in the terminal to restart OpenClaw.

    openclaw gateway restart
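Because the provider is declared as openai-completions, the key can be smoke-tested with a single request to the OpenAI-compatible /chat/completions endpoint. The sketch below only builds the request (the payload shape follows the OpenAI convention, which this provider type assumes; actually sending it consumes quota, so that step is left commented):

```python
import json
import urllib.request

# Build an OpenAI-style chat completion request (sketch).
# YOUR_API_KEY is a placeholder, as in the configuration above.
def build_request(base_url: str, api_key: str, model: str) -> urllib.request.Request:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("https://api.deepseek.com/v1", "YOUR_API_KEY", "deepseek-v4-flash")
print(req.full_url)  # https://api.deepseek.com/v1/chat/completions
# To actually send it: urllib.request.urlopen(req) and inspect the JSON body.
```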

Volcengine Agent Plan

  1. Log in to the terminal (password-free): log in to the Simple Application Server console, find the target server card, click Remote Connection, and then click Log In Now in the Workbench one-click connection section.

  2. Make sure your account has an active Ark Agent Plan subscription, then obtain an API Key from the Volcengine console.

  3. Modify the configuration file: run the following command in the terminal to edit it.

    vim ~/.openclaw/openclaw.json

    Add the following settings under models.providers (replace ARK_API_KEY with the API Key obtained in the previous step):

    Warning

    Do not use https://ark.cn-beijing.volces.com/api/v3 as the Base URL: requests to that address do not draw on the Agent Plan quota and instead incur additional charges.

    "models": {
      "mode": "merge",
      "providers": {
        "volcengine-agent-plan": {
          "baseUrl": "https://ark.cn-beijing.volces.com/api/plan/v3",
          "apiKey": "ARK_API_KEY",
          "api": "openai-completions",
          "models": [
            {
              "id": "ark-code-latest",
              "name": "ark-code-latest",
              "contextWindow": 256000,
              "maxTokens": 32000,
              "input": ["text", "image"]
            },
            {
              "id": "doubao-seed-2.0-code",
              "name": "doubao-seed-2.0-code",
              "contextWindow": 256000,
              "maxTokens": 128000,
              "input": ["text", "image"]
            },
            {
              "id": "doubao-seed-2.0-pro",
              "name": "doubao-seed-2.0-pro",
              "contextWindow": 256000,
              "maxTokens": 128000,
              "input": ["text", "image"]
            },
            {
              "id": "doubao-seed-2.0-lite",
              "name": "doubao-seed-2.0-lite",
              "contextWindow": 256000,
              "maxTokens": 128000,
              "input": ["text", "image"]
            },
            {
              "id": "doubao-seed-2.0-mini",
              "name": "doubao-seed-2.0-mini",
              "contextWindow": 256000,
              "maxTokens": 128000,
              "input": ["text", "image"]
            },
            {
              "id": "glm-5.1",
              "name": "glm-5.1",
              "contextWindow": 200000,
              "maxTokens": 128000,
              "input": ["text"]
            },
            {
              "id": "deepseek-v3.2",
              "name": "deepseek-v3.2",
              "contextWindow": 128000,
              "maxTokens": 32000,
              "input": ["text"]
            },
            {
              "id": "minimax-2.7",
              "name": "minimax-2.7",
              "contextWindow": 200000,
              "maxTokens": 128000,
              "input": ["text"]
            },
            {
              "id": "kimi-k2.6",
              "name": "kimi-k2.6",
              "contextWindow": 256000,
              "maxTokens": 32000,
              "input": ["text", "image"]
            }
          ]
        }
      }
    }

    Then modify agents.defaults in openclaw.json to register the model list (primary is set to glm-5.1 in this example and can be changed to any model defined under models, such as ark-code-latest):

    "agents": {
      "defaults": {
        "model": {
          "primary": "volcengine-agent-plan/glm-5.1"
        },
        "models": {
          "volcengine-agent-plan/ark-code-latest": {},
          "volcengine-agent-plan/doubao-seed-2.0-code": {},
          "volcengine-agent-plan/doubao-seed-2.0-pro": {},
          "volcengine-agent-plan/doubao-seed-2.0-lite": {},
          "volcengine-agent-plan/doubao-seed-2.0-mini": {},
          "volcengine-agent-plan/glm-5.1": {},
          "volcengine-agent-plan/deepseek-v3.2": {},
          "volcengine-agent-plan/minimax-2.7": {},
          "volcengine-agent-plan/kimi-k2.6": {}
        }
      }
    }
  4. Restart the service to apply the configuration: after saving the file, run the following command in the terminal to restart OpenClaw.

    openclaw gateway restart
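The Base URL warning above is easy to trip over, because the billed endpoint differs from the quota endpoint by a single path segment. A small guard (a sketch; the path convention is taken directly from the warning) can be run before restarting the gateway:

```python
# Guard against the mispriced endpoint: per the warning above, only the
# .../api/plan/v3 address draws on the Agent Plan quota.
def uses_plan_endpoint(base_url: str) -> bool:
    return "/api/plan/" in base_url

assert uses_plan_endpoint("https://ark.cn-beijing.volces.com/api/plan/v3")
assert not uses_plan_endpoint("https://ark.cn-beijing.volces.com/api/v3")
print("baseUrl points at the Agent Plan endpoint")
```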