OpenAI-Compatible Interface
DashScope provides an OpenAI-compatible way to call its models. If you previously called OpenAI services through the OpenAI SDK, the langchain_openai SDK, or HTTP requests, you only need to adjust parameters such as the API key, endpoint, and model within your existing framework to switch to the DashScope model service.
Information required for OpenAI compatibility
Endpoint
An endpoint is the specific network address or access point of a service; through this address you can access the functionality or data that the service provides. In web services and APIs, an endpoint usually corresponds to the URL of a specific operation or resource. When you use the OpenAI-compatible interface to access the DashScope model service, you need to configure the endpoint.
When you call through the OpenAI SDK or the langchain_openai SDK, configure the following endpoint:
https://dashscope.aliyuncs.com/compatible-mode/v1
When you call through HTTP requests, configure the following endpoint:
POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
DashScope API key
You need to activate the DashScope model service and obtain an API key. For details, see Activate DashScope and create an API key.
Supported models
The Qwen (Tongyi Qianwen) models currently supported by the OpenAI-compatible interface are listed in the following table.
| Model category | Model name |
| --- | --- |
| Qwen (Tongyi Qianwen) | qwen-turbo, qwen-plus, qwen-max, qwen-max-0403, qwen-max-0107, qwen-max-longcontext, qwen-max-0428 |
| Qwen open-source series | qwen1.5-110b-chat, qwen1.5-72b-chat, qwen1.5-32b-chat, qwen1.5-14b-chat, qwen1.5-7b-chat, qwen1.5-1.8b-chat, qwen1.5-0.5b-chat, codeqwen1.5-7b-chat, qwen-72b-chat, qwen-14b-chat, qwen-7b-chat, qwen-1.8b-longcontext-chat, qwen-1.8b-chat |
Calling through the OpenAI SDK
Prerequisites
Make sure that a Python environment is installed on your computer.
Install the latest version of the OpenAI SDK.
# If the following command fails, replace pip with pip3
pip install -U openai
You have activated the DashScope model service and obtained an API key. See Activate DashScope and create an API key.
We recommend storing the API key in an environment variable to reduce the risk of leaking it. For details, see Configure the API key through environment variables. You can also hard-code the API key in your code, but the risk of leakage is higher.
Choose the model you want to use: see Supported models.
Usage
You can refer to the following non-streaming and streaming examples to access the DashScope qwen-turbo model with the OpenAI SDK.
Non-streaming call example
from openai import OpenAI
import os

def get_response():
    client = OpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),  # If you have not set the environment variable, replace this with your API key
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # Fill in the DashScope endpoint
    )
    completion = client.chat.completions.create(
        model="qwen-turbo",  # Set this to the model you want to use
        messages=[{'role': 'system', 'content': 'You are a helpful assistant.'},
                  {'role': 'user', 'content': '你是谁'}]
    )
    print(completion.model_dump_json())

if __name__ == '__main__':
    get_response()
Running the code produces the following result:
{
    "id": "chatcmpl-xxx",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "logprobs": null,
            "message": {
                "content": "我是来自阿里云的大规模语言模型,我叫通义千问。",
                "role": "assistant",
                "function_call": null,
                "tool_calls": null
            }
        }
    ],
    "created": 1715239858,
    "model": "qwen-turbo",
    "object": "chat.completion",
    "system_fingerprint": "",
    "usage": {
        "completion_tokens": 16,
        "prompt_tokens": 21,
        "total_tokens": 37
    }
}
Streaming call example
from openai import OpenAI
import os

def get_response():
    client = OpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),  # If you have not set the environment variable, replace this with your API key
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # Fill in the DashScope endpoint
    )
    completion = client.chat.completions.create(
        model="qwen-turbo",  # Set this to the model you want to use
        messages=[{'role': 'system', 'content': 'You are a helpful assistant.'},
                  {'role': 'user', 'content': '你是谁?'}],
        stream=True
    )
    for chunk in completion:
        print(chunk.model_dump_json())

if __name__ == '__main__':
    get_response()
Running the code produces the following result:
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"","function_call":null,"role":"assistant","tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1715934114,"model":"qwen-turbo","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"我是","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1715934114,"model":"qwen-turbo","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"来自","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1715934114,"model":"qwen-turbo","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"阿里","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1715934114,"model":"qwen-turbo","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"云的大规模语言模型","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1715934114,"model":"qwen-turbo","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":",我叫通义千问。","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1715934114,"model":"qwen-turbo","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"","function_call":null,"role":null,"tool_calls":null},"finish_reason":"stop","index":0,"logprobs":null}],"created":1715934114,"model":"qwen-turbo","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}
Input parameters
The input parameters are aligned with the OpenAI API parameters. The currently supported parameters are as follows:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | - | Specifies the model to call. For the available options, see Supported models. |
| messages | array | - | The conversation history between the user and the model. Each element of the array takes the form {"role": role, "content": content}. |
| top_p (optional) | float | - | The probability threshold for nucleus sampling during generation. For example, with a value of 0.8, only the smallest set of most likely tokens whose probabilities add up to at least 0.8 is kept as the candidate set. The value range is (0, 1.0); larger values make the output more random, smaller values make it more deterministic. |
| temperature (optional) | float | - | Controls the randomness and diversity of the model's replies. Specifically, the temperature value controls how much the probability distribution over candidate tokens is smoothed. A higher temperature flattens the distribution, so more low-probability tokens are selected and the output is more diverse; a lower temperature sharpens the distribution, so high-probability tokens are more likely to be selected and the output is more deterministic. Value range: [0, 2); a value of 0 is not recommended because it is meaningless. |
| max_tokens (optional) | integer | - | The maximum number of tokens the model may generate. The upper limit differs by model and is generally no more than 2000. |
| stream (optional) | boolean | False | Controls whether streaming output is used. In stream mode the interface returns a generator that must be iterated to obtain the results; each iteration yields the incremental sequence generated so far. |
| stop (optional) | string or array | None | Gives precise control over generation: output stops automatically as soon as it is about to contain the specified string or token_id. stop can be a string or an array. |
| tools (optional) | array | None | The set of tools available for the model to call; in one function call flow the model selects one of them. Each tool follows the standard OpenAI tool format, i.e. an object with "type": "function" and a "function" object containing name, description, and parameters. In a function call flow, the tools parameter must be set both in the turn that initiates the function call and in the turn that submits the tool function's execution result back to the model. Currently supported models are qwen-turbo, qwen-plus, qwen-max, and qwen-max-longcontext. Note: tools cannot currently be used together with stream=True. |
| stream_options (optional) | object | None | Controls whether the number of consumed tokens is reported in streaming output. This parameter takes effect only when stream is True. To count tokens in streaming mode, set it to {"include_usage": True}. |
| enable_search (optional, passed through extra_body) | boolean | False | Controls whether the model may reference internet search results when generating text. False (default): do not use search results; True: the model may reference search results. Configure it as extra_body={"enable_search": True}. |
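As a concrete illustration of the parameters above, the following sketch passes temperature, top_p, max_tokens, and stream_options through the OpenAI SDK, and enable_search through extra_body. The specific values (temperature=0.7 and so on) are only examples, not recommendations.

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
completion = client.chat.completions.create(
    model="qwen-turbo",
    messages=[{'role': 'user', 'content': '你是谁?'}],
    temperature=0.7,                            # example value: moderate randomness
    top_p=0.8,                                  # example value: nucleus sampling threshold
    max_tokens=512,                             # example value: cap on generated tokens
    stream=True,
    stream_options={"include_usage": True},     # report token usage in the final chunk
    extra_body={"enable_search": True},         # DashScope-specific switch passed via extra_body
)
for chunk in completion:
    # Print the incremental text; chunks without choices (e.g. the usage-only chunk) are skipped.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")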
Response parameters
| Parameter | Type | Description | Notes |
| --- | --- | --- | --- |
| id | string | A system-generated ID that identifies this call. | |
| model | string | The name of the model used for this call. | |
| system_fingerprint | string | The configuration version the model ran with. Not supported yet; an empty string "" is returned. | |
| choices | array | Details of the content generated by the model. | |
| choices[i].finish_reason | string | One of three cases: null, returned while generation is still in progress (streaming output); "stop", the model output ended naturally or a stop condition specified in the input was triggered; "length", generation stopped because the output exceeded the length limit. | |
| choices[i].message | object | The message output by the model. | |
| choices[i].message.role | string | The model's role, fixed to assistant. | |
| choices[i].message.content | string | The text generated by the model. | |
| choices[i].index | integer | The sequence number of the generated result, 0 by default. | |
| created | integer | The timestamp (in seconds) of the generated result. | |
| usage | object | Metering information: the number of tokens consumed by this request. | |
| usage.prompt_tokens | integer | The length of the user input after conversion to tokens. | You can estimate token counts with a local tokenizer; see Counting tokens with a local tokenizer. |
| usage.completion_tokens | integer | The length of the model's reply after conversion to tokens. | |
| usage.total_tokens | integer | The sum of usage.prompt_tokens and usage.completion_tokens. | |
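In practice you usually read these fields directly from the response object returned by the SDK instead of dumping the whole JSON. A minimal sketch, assuming completion is the object returned by client.chat.completions.create in the non-streaming example above:

print(completion.id)                          # system-generated ID of this call
print(completion.model)                       # model name, e.g. "qwen-turbo"
print(completion.choices[0].finish_reason)    # "stop" when generation finished normally
print(completion.choices[0].message.content)  # the generated text
print(completion.usage.prompt_tokens)         # tokens in the input
print(completion.usage.completion_tokens)     # tokens in the reply
print(completion.usage.total_tokens)          # sum of the two values above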
Calling through the langchain_openai SDK
Prerequisites
Make sure that a Python environment is installed on your computer.
Install the langchain_openai SDK by running the following command.
# If the following command fails, replace pip with pip3
pip install -U langchain_openai
You have activated the DashScope model service and obtained an API key. See Activate DashScope and create an API key.
We recommend storing the API key in an environment variable to reduce the risk of leaking it. For details, see Configure the API key through environment variables. You can also hard-code the API key in your code, but the risk of leakage is higher.
Choose the model you want to use: see Supported models.
Usage
You can refer to the following non-streaming and streaming examples to access the DashScope qwen-turbo model with the langchain_openai SDK.
Non-streaming output
Non-streaming output is implemented with the invoke method. See the following sample code:
from langchain_openai import ChatOpenAI
import os

def get_response():
    llm = ChatOpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),  # If you have not set the environment variable, replace this with your API key
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # Fill in the DashScope endpoint
        model="qwen-turbo"
    )
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "你是谁?"}
    ]
    response = llm.invoke(messages)
    print(response.json(ensure_ascii=False))

if __name__ == "__main__":
    get_response()
Running the code produces the following result:
{
    "content": "我是来自阿里云的大规模语言模型,我叫通义千问。",
    "additional_kwargs": {},
    "response_metadata": {
        "token_usage": {
            "completion_tokens": 16,
            "prompt_tokens": 22,
            "total_tokens": 38
        },
        "model_name": "qwen-turbo",
        "system_fingerprint": "",
        "finish_reason": "stop",
        "logprobs": null
    },
    "type": "ai",
    "name": null,
    "id": "run-xxx",
    "example": false,
    "tool_calls": [],
    "invalid_tool_calls": []
}
Streaming output
Streaming output is implemented with the stream method; you do not need to set the stream parameter.
from langchain_openai import ChatOpenAI
import os

def get_response():
    llm = ChatOpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
        model="qwen-turbo"
    )
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "你是谁?"},
    ]
    response = llm.stream(messages)
    for chunk in response:
        print(chunk.json(ensure_ascii=False))

if __name__ == "__main__":
    get_response()
Running the code produces the following result:
{"content": "", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-58c183ab-68c8-4007-8cdd-37725ad54266", "example": false, "tool_calls": [], "invalid_tool_calls": [], "tool_call_chunks": []}
{"content": "我是", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-58c183ab-68c8-4007-8cdd-37725ad54266", "example": false, "tool_calls": [], "invalid_tool_calls": [], "tool_call_chunks": []}
{"content": "来自", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-58c183ab-68c8-4007-8cdd-37725ad54266", "example": false, "tool_calls": [], "invalid_tool_calls": [], "tool_call_chunks": []}
{"content": "阿里", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-58c183ab-68c8-4007-8cdd-37725ad54266", "example": false, "tool_calls": [], "invalid_tool_calls": [], "tool_call_chunks": []}
{"content": "云的大规模语言模型", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-58c183ab-68c8-4007-8cdd-37725ad54266", "example": false, "tool_calls": [], "invalid_tool_calls": [], "tool_call_chunks": []}
{"content": ",我叫通义千问。", "additional_kwargs": {}, "response_metadata": {}, "type": "AIMessageChunk", "name": null, "id": "run-58c183ab-68c8-4007-8cdd-37725ad54266", "example": false, "tool_calls": [], "invalid_tool_calls": [], "tool_call_chunks": []}
{"content": "", "additional_kwargs": {}, "response_metadata": {"finish_reason": "stop"}, "type": "AIMessageChunk", "name": null, "id": "run-58c183ab-68c8-4007-8cdd-37725ad54266", "example": false, "tool_calls": [], "invalid_tool_calls": [], "tool_call_chunks": []}
For input parameter configuration, see Input parameters; the corresponding parameters are defined on the ChatOpenAI object.
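For example, common generation parameters such as temperature and max_tokens can be set directly on the ChatOpenAI constructor, and additional OpenAI-compatible fields can be forwarded through model_kwargs. This is only a minimal sketch; the exact set of supported constructor arguments depends on your langchain_openai version.

from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model="qwen-turbo",
    temperature=0.7,                  # example value
    max_tokens=512,                   # example value
    model_kwargs={"top_p": 0.8},      # extra fields forwarded into the request body
)
response = llm.invoke([{"role": "user", "content": "你是谁?"}])
print(response.content)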
Calling through the HTTP interface
You can call the DashScope service through the HTTP interface and receive responses with the same structure as those returned when calling the OpenAI service over HTTP.
Prerequisites
You have activated the DashScope model service and obtained an API key. See Activate DashScope and create an API key.
We recommend storing the API key in an environment variable to reduce the risk of leaking it. For details, see Configure the API key through environment variables. You can also hard-code the API key in your code, but the risk of leakage is higher.
Submitting an API call
POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
Request examples
The following examples show scripts that call the API with the curl command.
If you have not configured the API key as an environment variable, replace $DASHSCOPE_API_KEY with your API key.
Non-streaming output
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
    "model": "qwen-turbo",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "你是谁?"
        }
    ]
}'
Running the command produces the following result:
{
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "我是来自阿里云的大规模语言模型,我叫通义千问。"
            },
            "finish_reason": "stop",
            "index": 0,
            "logprobs": null
        }
    ],
    "object": "chat.completion",
    "usage": {
        "prompt_tokens": 11,
        "completion_tokens": 16,
        "total_tokens": 27
    },
    "created": 1715252778,
    "system_fingerprint": "",
    "model": "qwen-turbo",
    "id": "chatcmpl-xxx"
}
Streaming output
To use streaming output, set the stream parameter to true in the request body.
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
    "model": "qwen-turbo",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "你是谁?"
        }
    ],
    "stream": true
}'
Running the command produces the following result:
data: {"choices":[{"delta":{"content":"","role":"assistant"},"index":0,"logprobs":null,"finish_reason":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-turbo","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}
data: {"choices":[{"finish_reason":null,"delta":{"content":"我是"},"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-turbo","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}
data: {"choices":[{"delta":{"content":"来自"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-turbo","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}
data: {"choices":[{"delta":{"content":"阿里"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-turbo","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}
data: {"choices":[{"delta":{"content":"云的大规模语言模型"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-turbo","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}
data: {"choices":[{"delta":{"content":",我叫通义千问。"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-turbo","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}
data: {"choices":[{"delta":{"content":""},"finish_reason":"stop","index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1715931028,"system_fingerprint":null,"model":"qwen-turbo","id":"chatcmpl-3bb05cf5cd819fbca5f0b8d67a025022"}
data: [DONE]
For details about the input parameters, see Input parameters.
Error response example
When a request fails, the code and message fields in the output indicate the cause of the error.
{
    "error": {
        "message": "Incorrect API key provided. ",
        "type": "invalid_request_error",
        "param": null,
        "code": "invalid_api_key"
    }
}
Status codes
| Error code | Description |
| --- | --- |
| 401 - Incorrect API key provided | The API key is incorrect. |
| 429 - Rate limit reached for requests | A limit such as QPS or QPM has been exceeded. |
| 429 - You exceeded your current quota, please check your plan and billing details | The quota has been exceeded or the account has an overdue balance. |
| 500 - The server had an error while processing your request | An error occurred on the server side. |
| 503 - The engine is currently overloaded, please try again later | The server is overloaded; you can retry later. |
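The 429 and 503 errors above are usually transient, so a simple backoff-and-retry wrapper around the OpenAI SDK can help. The following is a minimal sketch rather than an official retry policy; note that the openai SDK also has built-in retries that you can tune through the client's max_retries setting.

import os
import time
import openai
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

def chat_with_retry(messages, max_attempts=3):
    # Retry on rate limits (429) and transient server errors (500/503) with exponential backoff.
    for attempt in range(max_attempts):
        try:
            return client.chat.completions.create(model="qwen-turbo", messages=messages)
        except (openai.RateLimitError, openai.InternalServerError, openai.APIConnectionError):
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ...
        except openai.AuthenticationError:
            # 401: the API key is wrong; retrying will not help.
            raise

completion = chat_with_retry([{"role": "user", "content": "你是谁?"}])
print(completion.choices[0].message.content)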