Conversations with a large model (both input and output) are limited by the model's context length. For long-context scenarios within 128K tokens, Qwen-Plus or Qwen-Turbo can meet the need; for ultra-long contexts beyond 128K, the Qwen-Long model is recommended. Qwen-Long offers a context length of up to 10,000,000 tokens (roughly 15 million Chinese characters), supports uploading documents and answering questions based on them, and is inexpensive to use: 1 CNY covers about 2,000,000 tokens (roughly 3 million characters).
Example scenarios
Qwen-Long can quickly analyze code, web pages, papers, reports, contracts, books, specification manuals, technical documents, and more. Example scenarios:
Supported models
Model | Context length (tokens) | Max input (tokens) | Max output (tokens) | Input price (per 1K tokens) | Output price (per 1K tokens) | Free quota
qwen-long | 10,000,000 | 10,000,000 | 6,000 | CNY 0.0005 | CNY 0.002 | 1,000,000 tokens, valid for 30 days after activating Bailian (百炼); for accounts that activated Bailian after 00:00 on September 19, 2024, the free quota is valid for 180 days
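Given the unit prices above, the approximate cost of a request can be worked out directly. This is a rough sketch based on the listed per-1K-token prices; actual billing follows the official pricing page:

```python
# Rough cost estimate for one qwen-long request, using the unit prices listed above:
# CNY 0.0005 per 1,000 input tokens, CNY 0.002 per 1,000 output tokens.
INPUT_PRICE_PER_1K = 0.0005   # CNY
OUTPUT_PRICE_PER_1K = 0.002   # CNY

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the approximate cost in CNY for one request."""
    return (prompt_tokens / 1000 * INPUT_PRICE_PER_1K
            + completion_tokens / 1000 * OUTPUT_PRICE_PER_1K)

# Example: a request that used 5,395 prompt tokens and 93 completion tokens
print(f"{estimate_cost(5395, 93):.6f} CNY")
```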
On the Qwen-Long model experience page, you can upload documents and ask questions online.
Using the API
Prerequisites
The OpenAI Python SDK has been installed.
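If the SDK is not yet installed, it can typically be installed with pip (assuming Python 3.8 or later):

```shell
pip install -U openai
```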
Choosing a document upload method
When choosing how to upload documents, consider the following factors:
Upload via file-id
Recommended: suitable for documents that need to be referenced and managed frequently. It reduces text-input errors and is simple to use.
Only plain-text file types are supported, including txt, docx, pdf, epub, mobi, md, etc. Images and scanned documents (content that is essentially an image) are not currently supported. Each file is limited to 150 MB; up to 10,000 files may be uploaded, with a total size of no more than 100 GB.
Upload as plain text
Suitable scenarios: small or temporary documents. If the document is short and does not need long-term storage, choose this method. Because of the API request-body size limit, if your text exceeds 1M tokens, pass it via file-id instead (see "Passing document information via file-id").
Upload as a JSON string
Suitable scenarios: passing complex data structures. If your document contains multi-level information, a JSON string preserves the data's integrity.
Choose the upload method that best fits your needs and document characteristics. We recommend file-id upload first for the best experience.
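The guidance above can be condensed into a small helper. The function name and boolean flags are purely illustrative; only the 1M-token request-body limit comes from this document:

```python
def choose_upload_method(token_count: int, reused_often: bool, structured: bool) -> str:
    """Pick a document upload method following the guidance above.

    token_count  -- approximate token length of the document
    reused_often -- whether the document will be referenced repeatedly
    structured   -- whether the document carries multi-level structured fields
    """
    if token_count > 1_000_000 or reused_often:
        return "file-id"          # recommended; required above 1M tokens
    if structured:
        return "json-string"      # preserves multi-level information
    return "plain-text"           # small or temporary content

print(choose_upload_method(2_000_000, False, False))  # prints "file-id"
```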
Passing document information via file-id
You can upload a document through the OpenAI-compatible interface and place the returned file-id in a System Message so that the model references the document when replying.
Simple example
The Qwen-Long model can reply based on the documents you upload. This example uses 百炼系列手机产品介绍.docx as the sample file.
Upload the file to the Bailian platform through the OpenAI-compatible interface to obtain a file-id. For detailed parameters and invocation of the upload interface, see the API documentation page.
Python
import os
from pathlib import Path
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
file_object = client.files.create(file=Path("百炼系列手机产品介绍.docx"), purpose="file-extract")
print(file_object.id)
curl
curl --location --request POST 'https://dashscope.aliyuncs.com/compatible-mode/v1/files' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--form 'file=@"百炼系列手机产品介绍.docx"' \
--form 'purpose="file-extract"'
Running the code above returns the file-id of the uploaded file. Pass the file-id in a System Message, and put your question in the User Message. When providing document information via a system message, we recommend also including a normal role-play system message, such as the default "You are a helpful assistant." The role setting affects how the document is processed, so state your role explicitly in the messages.
Python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'system', 'content': 'fileid://file-fe-xxx'},
        {'role': 'user', 'content': '这篇文章讲了什么?'}
    ],
    stream=True,
    stream_options={"include_usage": True}
)
for chunk in completion:
    print(chunk.model_dump())
curl
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-long",
"messages": [
{"role": "system","content": "You are a helpful assistant."},
{"role": "system","content": "fileid://file-fe-xxx"},
{"role": "user","content": "这篇文章讲了什么?"}
],
"stream": true,
"stream_options": {
"include_usage": true
}
}'
With the stream and stream_options parameters configured, the Qwen-Long model streams its reply and reports token usage via the usage field of the final returned object. All code samples in this document use streaming output to show the model's output process clearly and intuitively. For a non-streaming example, see the non-streaming output example.
Python
{"id":"chatcmpl-565151e8-7b41-9a78-ae88-472edbad8c47","choices":[{"delta":{"content":"","function_call":null,"role":"assistant","tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1726023099,"model":"qwen-long","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-565151e8-7b41-9a78-ae88-472edbad8c47","choices":[{"delta":{"content":"这篇文章","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1726023099,"model":"qwen-long","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-565151e8-7b41-9a78-ae88-472edbad8c47","choices":[{"delta":{"content":"介绍了","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1726023099,"model":"qwen-long","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-565151e8-7b41-9a78-ae88-472edbad8c47","choices":[{"delta":{"content":"百","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1726023099,"model":"qwen-long","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
......
{"id":"chatcmpl-565151e8-7b41-9a78-ae88-472edbad8c47","choices":[{"delta":{"content":"满足不同的使用需求","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1726023099,"model":"qwen-long","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-565151e8-7b41-9a78-ae88-472edbad8c47","choices":[{"delta":{"content":"。","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1726023099,"model":"qwen-long","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-565151e8-7b41-9a78-ae88-472edbad8c47","choices":[{"delta":{"content":"","function_call":null,"role":null,"tool_calls":null},"finish_reason":"stop","index":0,"logprobs":null}],"created":1726023099,"model":"qwen-long","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-565151e8-7b41-9a78-ae88-472edbad8c47","choices":[],"created":1726023099,"model":"qwen-long","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":{"completion_tokens":93,"prompt_tokens":5395,"total_tokens":5488}}
curl
data: {"choices":[{"delta":{"content":"","role":"assistant"},"index":0,"logprobs":null,"finish_reason":null}],"object":"chat.completion.chunk","usage":null,"created":1728649489,"system_fingerprint":null,"model":"qwen-long","id":"chatcmpl-e2434284-140a-9e3a-8ca5-f81e65e98d01"}
data: {"choices":[{"finish_reason":null,"delta":{"content":"这篇文章"},"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1728649489,"system_fingerprint":null,"model":"qwen-long","id":"chatcmpl-e2434284-140a-9e3a-8ca5-f81e65e98d01"}
data: {"choices":[{"delta":{"content":"是"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1728649489,"system_fingerprint":null,"model":"qwen-long","id":"chatcmpl-e2434284-140a-9e3a-8ca5-f81e65e98d01"}
data: {"choices":[{"delta":{"content":"关于"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1728649489,"system_fingerprint":null,"model":"qwen-long","id":"chatcmpl-e2434284-140a-9e3a-8ca5-f81e65e98d01"}
.....
data: {"choices":[{"delta":{"content":"描述了每款"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1728649489,"system_fingerprint":null,"model":"qwen-long","id":"chatcmpl-e2434284-140a-9e3a-8ca5-f81e65e98d01"}
data: {"choices":[{"delta":{"content":"手机的主要特点和"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1728649489,"system_fingerprint":null,"model":"qwen-long","id":"chatcmpl-e2434284-140a-9e3a-8ca5-f81e65e98d01"}
data: {"choices":[{"delta":{"content":"规格,并提供了参考"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1728649489,"system_fingerprint":null,"model":"qwen-long","id":"chatcmpl-e2434284-140a-9e3a-8ca5-f81e65e98d01"}
data: {"choices":[{"delta":{"content":"售价信息。"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1728649489,"system_fingerprint":null,"model":"qwen-long","id":"chatcmpl-e2434284-140a-9e3a-8ca5-f81e65e98d01"}
data: {"choices":[{"finish_reason":"stop","delta":{"content":""},"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1728649489,"system_fingerprint":null,"model":"qwen-long","id":"chatcmpl-e2434284-140a-9e3a-8ca5-f81e65e98d01"}
data: {"choices":[],"object":"chat.completion.chunk","usage":{"prompt_tokens":5395,"completion_tokens":71,"total_tokens":5466},"created":1728649489,"system_fingerprint":null,"model":"qwen-long","id":"chatcmpl-e2434284-140a-9e3a-8ca5-f81e65e98d01"}
data: [DONE]
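On the client side, the streamed chunks can be reassembled into the full reply plus the final usage record. Below is a minimal sketch that operates on the dict form produced by chunk.model_dump(); it is driven here by inline sample data rather than a live API call:

```python
def collect_stream(chunks):
    """Join the delta contents of streamed chunks and pick up the final usage field."""
    text_parts = []
    usage = None
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            content = choice.get("delta", {}).get("content")
            if content:
                text_parts.append(content)
        if chunk.get("usage"):  # only the last chunk carries usage
            usage = chunk["usage"]
    return "".join(text_parts), usage

# Simulated chunks mirroring the sample streaming output shown above
sample = [
    {"choices": [{"delta": {"content": "这篇文章", "role": "assistant"}}], "usage": None},
    {"choices": [{"delta": {"content": "介绍了百炼系列手机。"}}], "usage": None},
    {"choices": [], "usage": {"prompt_tokens": 5395, "completion_tokens": 93, "total_tokens": 5488}},
]
text, usage = collect_stream(sample)
print(text)                   # prints "这篇文章介绍了百炼系列手机。"
print(usage["total_tokens"])  # prints 5488
```

With a real streaming response, the same function works on `[chunk.model_dump() for chunk in completion]`.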
Besides passing a single file-id, you can pass multiple file-ids to provide several documents to the model, or append file-ids during the conversation so that the model can reference new document information.
Passing multiple documents
You can pass multiple file-ids in a single System Message to process several documents in one request. See the sample code for usage.
Sample code
Python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        # Replace 'file-fe-xxx1' and 'file-fe-xxx2' with the file-ids used in your own conversation.
        {'role': 'system', 'content': "fileid://file-fe-xxx1,fileid://file-fe-xxx2"},
        {'role': 'user', 'content': '这几篇文章讲了什么?'}
    ],
    stream=True,
    stream_options={"include_usage": True}
)
for chunk in completion:
    print(chunk.model_dump())
curl
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-long",
"messages": [
{"role": "system","content": "You are a helpful assistant."},
{"role": "system","content": "fileid://file-fe-xxx1"},
{"role": "system","content": "fileid://file-fe-xxx2"},
{"role": "user","content": "这两篇文章讲了什么?"}
],
"stream": true,
"stream_options": {
"include_usage": true
}
}'
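When the file-ids are held in a list, the comma-separated content of that System Message can be built programmatically. A small sketch; the helper name is illustrative, while the fileid:// prefix and comma separator follow the format used above:

```python
def build_fileid_message(file_ids):
    """Build a system message that references several uploaded files at once."""
    content = ",".join(f"fileid://{fid}" for fid in file_ids)
    return {"role": "system", "content": content}

msg = build_fileid_message(["file-fe-xxx1", "file-fe-xxx2"])
print(msg["content"])  # prints "fileid://file-fe-xxx1,fileid://file-fe-xxx2"
```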
Appending documents
During your interaction with the model, you may need to supply new document information. You can do this by appending a new System Message containing the file-id to the messages array.
Python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # If you have not set the environment variable, replace this with your API key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # DashScope base_url
)
# Initialize the messages list
messages = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    # Replace 'file-fe-xxx1' with the file-id used in your own conversation.
    {'role': 'system', 'content': 'fileid://file-fe-xxx1'},
    {'role': 'user', 'content': '这篇文章讲了什么?'}
]
# First-round response
completion_1 = client.chat.completions.create(
    model="qwen-long",
    messages=messages,
    stream=False
)
# Print the first-round response
# To stream the first round, set stream=True, concatenate the streamed pieces,
# and pass the concatenated string as the content of assistant_message
print(f"First-round response: {completion_1.choices[0].message.model_dump()}")
# Build assistant_message
assistant_message = {
    "role": "assistant",
    "content": completion_1.choices[0].message.content
}
# Append assistant_message to messages
messages.append(assistant_message)
# Add the file-id of the appended document to messages
# Replace 'file-fe-xxx2' with the file-id used in your own conversation.
system_message = {'role': 'system', 'content': 'fileid://file-fe-xxx2'}
messages.append(system_message)
# Add the user question
messages.append({'role': 'user', 'content': '这两篇文章讨论的方法有什么异同点?'})
# Response after appending the document
completion_2 = client.chat.completions.create(
    model="qwen-long",
    messages=messages,
    stream=True,
    stream_options={
        "include_usage": True
    }
)
# Stream-print the response after appending the document
print("Response after appending the document:")
for chunk in completion_2:
    print(chunk.model_dump())
curl
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-long",
"messages": [
{"role": "system","content": "You are a helpful assistant."},
{"role": "system","content": "fileid://file-fe-xxx1"},
{"role": "user","content": "这篇文章讲了什么?"},
{"role": "system","content": "fileid://file-fe-xxx2"},
{"role": "user","content": "这两篇文章讨论的方法有什么异同点?"}
],
"stream": true,
"stream_options": {
"include_usage": true
}
}'
Passing information as plain text
Besides passing document information via file-id, you can pass it directly as a string.
Because of the API request-body size limit, if your text exceeds 1M tokens, pass it via file-id instead; see "Passing document information via file-id".
Simple example
You can put the document content directly into a System Message.
Python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # DashScope endpoint
)
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'system', 'content': '百炼手机产品介绍 百炼X1 ——————畅享极致视界:搭载6.7英寸1440 x 3200像素超清屏幕...'},
        {'role': 'user', 'content': '文章讲了什么?'}
    ],
    stream=True,
    stream_options={"include_usage": True}
)
for chunk in completion:
    print(chunk.model_dump())
curl
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-long",
"messages": [
{"role": "system","content": "You are a helpful assistant."},
{"role": "system","content": "百炼X1 —— 畅享极致视界:搭载6.7英寸1440 x 3200像素超清屏幕,搭配120Hz刷新率,..."},
{"role": "user","content": "这篇文章讲了什么?"}
],
"stream": true,
"stream_options": {
"include_usage": true
}
}'
Passing multiple documents
When you need to pass multiple documents in one round of conversation, put each document's content in a separate System Message.
Python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'system', 'content': '百炼X1————畅享极致视界:搭载6.7英寸1440 x 3200像素超清屏幕,搭配120Hz刷新率...'},
        {'role': 'system', 'content': '星尘S9 Pro —— 创新视觉盛宴:突破性6.9英寸1440 x 3088像素屏下摄像头设计...'},
        {'role': 'user', 'content': '这两篇文章讨论的产品有什么异同点?'}
    ],
    stream=True,
    stream_options={"include_usage": True}
)
for chunk in completion:
    print(chunk.model_dump())
curl
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-long",
"messages": [
{"role": "system","content": "You are a helpful assistant."},
{"role": "system","content": "百炼X1 —— 畅享极致视界:搭载6.7英寸1440 x 3200像素超清屏幕,搭配120Hz刷新率..."},
{"role": "system","content": "星尘S9 Pro —— 创新视觉盛宴:突破性6.9英寸1440 x 3088像素屏下摄像头设计..."},
{"role": "user","content": "这两篇文章讨论的产品有什么异同点?"}
],
"stream": true,
"stream_options": {
"include_usage": true
}
}'
Appending documents
During your interaction with the model, you may need to supply new document information. You can do this by appending a new System Message containing the document content to the messages array.
Python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # If you have not set the environment variable, replace this with your API key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # DashScope base_url
)
# Initialize the messages list
messages = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'system', 'content': '百炼X1 —— 畅享极致视界:搭载6.7英寸1440 x 3200像素超清屏幕,搭配120Hz刷新率...'},
    {'role': 'user', 'content': '这篇文章讲了什么?'}
]
# First-round response
completion_1 = client.chat.completions.create(
    model="qwen-long",
    messages=messages,
    stream=False
)
# Print the first-round response
# To stream the first round, set stream=True, concatenate the streamed pieces,
# and pass the concatenated string as the content of assistant_message
print(f"First-round response: {completion_1.choices[0].message.model_dump()}")
# Build assistant_message
assistant_message = {
    "role": "assistant",
    "content": completion_1.choices[0].message.content
}
# Append assistant_message to messages
messages.append(assistant_message)
# Add the appended document's content to messages
system_message = {
    'role': 'system',
    'content': '星尘S9 Pro —— 创新视觉盛宴:突破性6.9英寸1440 x 3088像素屏下摄像头设计,带来无界视觉享受...'
}
messages.append(system_message)
# Add the user question
messages.append({
    'role': 'user',
    'content': '这两篇文章讨论的产品有什么异同点?'
})
# Response after appending the document
completion_2 = client.chat.completions.create(
    model="qwen-long",
    messages=messages,
    stream=True,
    stream_options={
        "include_usage": True
    }
)
# Stream-print the response after appending the document
print("Response after appending the document:")
for chunk in completion_2:
    print(chunk.model_dump())
curl
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-long",
"messages": [
{"role": "system","content": "You are a helpful assistant."},
{"role": "system","content": "百炼X1 —— 畅享极致视界:搭载6.7英寸1440 x 3200像素超清屏幕,搭配120Hz刷新率..."},
{"role": "user","content": "这篇文章讲了什么?"},
{"role": "system","content": "星尘S9 Pro —— 创新视觉盛宴:突破性6.9英寸1440 x 3088像素屏下摄像头设计,带来无界视觉享受..."},
{"role": "user","content": "这两篇文章讨论的产品有什么异同点"}
],
"stream": true,
"stream_options": {
"include_usage": true
}
}'
Passing document information via a JSON string
You can pass a document's content, type, name, and title as a JSON string, so that the model can reference this information in the current round of conversation.
The JSON-format document information must be organized as document content (content), document type (file_type), document name (filename), and document title (title). Convert the structured document information into a JSON string first, then put it into the System Message.
Simple example
Python
import os
import json
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # Replace with your real DashScope API key, or (recommended) keep reading it from the environment variable
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # DashScope endpoint
)
file_info = {
    # Full text omitted; shown only to illustrate the format
    'content': '百炼X1 —— 畅享极致视界:搭载6.7英寸1440 x 3200像素超清屏幕,搭配120Hz刷新率...',
    'file_type': 'docx',
    'filename': '百炼系列手机产品介绍',
    'title': '百炼手机产品介绍'
}
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        # Convert the JSON object to a string with json.dumps
        {'role': 'system', 'content': json.dumps(file_info, ensure_ascii=False)},
        {'role': 'user', 'content': '文章讲了什么?'}
    ],
    stream=True
)
for chunk in completion:
    print(chunk.model_dump())
curl
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-long",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{
"role": "system",
"content": "{\"content\": \"百炼X1 搭载6.7英寸1440 x 3200像素超清屏幕...\n\", \"file_type\": \"docx\", \"filename\": \"百炼系列手机产品介绍\", \"title\": \"百炼手机产品介绍\"}"
},
{"role": "user", "content": "文章讲了什么?"}
],
"stream": true,
"stream_options": {
"include_usage": true
}
}'
Passing multiple documents
Python
import os
import json
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
file_info_1 = {
    'content': '百炼X1————畅享极致视界:搭载6.7英寸1440 x 3200像素超清屏幕,搭配120Hz刷新率...',
    'file_type': 'pdf',
    'filename': 'test_case_1',
    'title': 'test_case_1'
}
file_info_2 = {
    'content': '星尘S9 Pro —— 创新视觉盛宴:突破性6.9英寸1440 x 3088像素屏下摄像头设计:...',
    'file_type': 'pdf',
    'filename': 'test_case_2',
    'title': 'test_case_2'
}
# The first request waits for document parsing to finish, so the first response may take longer
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'system', 'content': json.dumps(file_info_1, ensure_ascii=False)},
        {'role': 'system', 'content': json.dumps(file_info_2, ensure_ascii=False)},
        {'role': 'user', 'content': '这两篇文章讨论的产品有什么异同点?'},
    ],
    stream=True,
    stream_options={"include_usage": True}
)
for chunk in completion:
    print(chunk.model_dump())
curl
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-long",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{
"role": "system",
"content": "{\"content\": \"百炼X1 搭载6.7英寸1440 x 3200像素超清屏幕...\n\", \"file_type\": \"pdf\", \"filename\": \"test_case_1\", \"title\": \"test_case_1\"}"
},
{
"role": "system",
"content": "{\"content\": \"星尘S9 Pro —— 创新视觉盛宴:突破性6.9英寸1440 x 3088像素...\n\", \"file_type\": \"pdf\", \"filename\": \"test_case_2\", \"title\": \"test_case_2\"}"
},
{"role": "user", "content": "这两篇文章讨论的产品有什么异同点?"}
],
"stream": true,
"stream_options": {
"include_usage": true
}
}'
Appending documents
During your interaction with the model, you may need to supply new document information. You can do this by appending a new System Message containing the JSON-formatted document content to the messages array.
Python
import os
import json
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # If you have not set the environment variable, replace this with your API key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # DashScope base_url
)
file_info_1 = {
    'content': '星尘S9 Pro —— 创新视觉盛宴:突破性6.9英寸1440 x 3088像素屏下摄像头设计,带来无界视觉享受。',
    'file_type': 'pdf',
    'filename': 'test_case_1',
    'title': 'test_case_1'
}
file_info_2 = {
    'content': '百炼X1 —— 畅享极致视界:搭载6.7英寸1440 x 3200像素超清屏幕,搭配120Hz刷新率,流畅视觉体验跃然眼前。',
    'file_type': 'pdf',
    'filename': 'test_case_2',
    'title': 'test_case_2'
}
# Initialize the messages list
messages = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'system', 'content': json.dumps(file_info_1, ensure_ascii=False)},
    {'role': 'user', 'content': '这篇文章讲了什么?'},
]
# First-round response
completion_1 = client.chat.completions.create(
    model="qwen-long",
    messages=messages,
    stream=False
)
# Print the first-round response
# To stream the first round, set stream=True, concatenate the streamed pieces,
# and pass the concatenated string as the content of assistant_message
print(f"First-round response: {completion_1.choices[0].message.model_dump()}")
# Build assistant_message
assistant_message = {
    "role": "assistant",
    "content": completion_1.choices[0].message.content
}
# Append assistant_message to messages
messages.append(assistant_message)
# Add the appended document's information to messages
system_message = {
    'role': 'system',
    'content': json.dumps(file_info_2, ensure_ascii=False)
}
messages.append(system_message)
# Add the user question
messages.append({
    'role': 'user',
    'content': '这两篇文章讨论的产品有什么异同点?'
})
# Response after appending the document
completion_2 = client.chat.completions.create(
    model="qwen-long",
    messages=messages,
    stream=True,
    stream_options={
        "include_usage": True
    }
)
# Stream-print the response after appending the document
print("Response after appending the document:")
for chunk in completion_2:
    print(chunk.model_dump())
curl
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-long",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{
"role": "system",
"content": "{\"content\": \"百炼X1 搭载6.7英寸1440 x 3200像素超清屏幕...\n\", \"file_type\": \"pdf\", \"filename\": \"test_case_1\", \"title\": \"test_case_1\"}"
},
{"role": "user", "content": "这篇文章讲了什么?"},
{
"role": "system",
"content": "{\"content\": \"星尘S9 Pro —— 创新视觉盛宴:突破性6.9英寸1440 x 3088像素...\n\", \"file_type\": \"pdf\", \"filename\": \"test_case_2\", \"title\": \"test_case_2\"}"
},
{"role": "user", "content": "这两篇文章讨论的产品有什么异同点?"}
],
"stream": true,
"stream_options": {
"include_usage": true
}
}'
Limits
File upload limits: common text formats are supported (txt, doc, docx, pdf, epub, mobi, md). A single file is limited to 150 MB; at most 10,000 files may be stored, with a total size of up to 100 GB. For more on uploading files, see Upload files.
Input limits:
When passing plain text directly, the maximum input for content is 9,000 tokens.
When passing document information via file-id and using the returned file-id in a system message, the maximum input for content extends to 10,000,000 tokens.
Output limit: the maximum output is 6,000 tokens.
Free quota: the 1,000,000-token free quota is valid only within 180 days of activating Bailian. Usage beyond the free quota is billed at the input/output prices above.
Rate limits: for the model's throttling rules, see Rate limits.
FAQ
How do I resolve errors encountered during API calls?
Check the detailed error status code and refer to the status code reference.
Is the DashScope SDK still compatible?
Yes. Model calls through the DashScope SDK remain compatible. However, file upload and file-id retrieval are currently supported only through the OpenAI SDK; a file-id obtained this way can also be used when calling the model through the DashScope SDK.
Does Qwen-Long support specifying model behavior via a System Message?
Yes. Qwen-Long still supports specifying model behavior through a standard System Message; see "Using the API" above for details.
How do I organize document information in JSON format?
See "Passing document information via a JSON string" above. When building messages, to avoid formatting issues, organize the JSON document information as document content (content), document type (file_type), document name (filename), and document title (title).
Does Qwen-Long support streaming replies?
Yes. With the stream parameter and the include_usage option of the stream_options dictionary configured, the Qwen-Long model streams its reply and reports token usage via the usage field of the final returned object.
API reference
For the input and output parameters of the Qwen-Long model, see the Tongyi Qianwen API reference.