The QVQ model has strong visual reasoning capabilities: it first outputs its reasoning process and then the answer. QVQ models can currently be called only with streaming output.
The usage described in this topic does not apply to qvq-72b-preview. If you use the qvq-72b-preview model, see Visual understanding - Getting started.
Supported models
Qwen QVQ is a visual reasoning model that supports visual input and chain-of-thought output, and shows stronger performance in mathematics, programming, visual analysis, creative writing, and general tasks.
For concurrency limits, see Rate limiting.
Getting started
Prerequisites: you have obtained an API key and configured the API key as an environment variable. If you call the API through the OpenAI SDK or DashScope SDK, you also need to install the latest SDK (the DashScope Java SDK must be version 2.19.0 or later).
Because QVQ models produce a long reasoning process, they currently support only streaming output. The DashScope API does not support setting the incremental_output parameter to false.
The following sample code passes an image URL for visual reasoning.
See the usage notes for the constraints on input images; to pass a local image, see Using local files.
Sample code
from openai import OpenAI
import os
# Initialize the OpenAI client
client = OpenAI(
    # If the environment variable is not configured, replace with your Model Studio API key: api_key="sk-xxx"
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
)
reasoning_content = ""  # Accumulates the full reasoning process
answer_content = ""     # Accumulates the full response
is_answering = False    # Whether the reasoning phase has ended and the response has started
# Create the chat completion request
completion = client.chat.completions.create(
    model="qvq-max",  # qvq-max is used here as an example; substitute another model name as needed
messages=[
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}],
},
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"
},
},
{"type": "text", "text": "这道题怎么解答?"},
],
},
],
stream=True,
    # Uncomment the following to return token usage in the final chunk
# stream_options={
# "include_usage": True
# }
)
print("\n" + "=" * 20 + "思考过程" + "=" * 20 + "\n")
for chunk in completion:
# 如果chunk.choices为空,则打印usage
if not chunk.choices:
print("\nUsage:")
print(chunk.usage)
else:
delta = chunk.choices[0].delta
# 打印思考过程
if hasattr(delta, 'reasoning_content') and delta.reasoning_content != None:
print(delta.reasoning_content, end='', flush=True)
reasoning_content += delta.reasoning_content
else:
# 开始回复
if delta.content != "" and is_answering is False:
print("\n" + "=" * 20 + "完整回复" + "=" * 20 + "\n")
is_answering = True
# 打印回复过程
print(delta.content, end='', flush=True)
answer_content += delta.content
# print("=" * 20 + "完整思考过程" + "=" * 20 + "\n")
# print(reasoning_content)
# print("=" * 20 + "完整回复" + "=" * 20 + "\n")
# print(answer_content)
Sample code
import OpenAI from "openai";
import process from 'process';
// Initialize the OpenAI client
const openai = new OpenAI({
    apiKey: process.env.DASHSCOPE_API_KEY, // Read from environment variable
baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1'
});
let reasoningContent = '';
let answerContent = '';
let isAnswering = false;
let messages = [
{
role: "system",
content: [{"type": "text", "text": "You are a helpful assistant."}]},
{
role: "user",
content: [
{ type: "image_url", image_url: { "url": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg" } },
{ type: "text", text: "解答这道题" },
]
}]
async function main() {
try {
const stream = await openai.chat.completions.create({
model: 'qvq-max',
messages: messages,
stream: true
});
        console.log('\n' + '='.repeat(20) + 'Reasoning' + '='.repeat(20) + '\n');
for await (const chunk of stream) {
if (!chunk.choices?.length) {
console.log('\nUsage:');
console.log(chunk.usage);
continue;
}
const delta = chunk.choices[0].delta;
            // Handle the reasoning process
if (delta.reasoning_content) {
process.stdout.write(delta.reasoning_content);
reasoningContent += delta.reasoning_content;
}
            // Handle the formal response
            else if (delta.content) {
                if (!isAnswering) {
                    console.log('\n' + '='.repeat(20) + 'Full response' + '='.repeat(20) + '\n');
isAnswering = true;
}
process.stdout.write(delta.content);
answerContent += delta.content;
}
}
} catch (error) {
console.error('Error:', error);
}
}
main();
Sample code
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
"model": "qvq-max",
"messages": [
{
"role": "system",
"content": [{"type":"text","text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"
}
},
{
"type": "text",
"text": "请解答这道题"
}
]
}
],
"stream":true,
"stream_options":{"include_usage":true}
}'
Sample code
import os
from dashscope import MultiModalConversation
messages = [
{
"role": "system",
"content": [{"text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{"image": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"},
{"text": "解答这道题?"}
]
}
]
response = MultiModalConversation.call(
    # If the environment variable is not configured, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qvq-max",  # qvq-max is used here as an example; substitute another model name as needed
messages=messages,
stream=True,
)
# Accumulates the full reasoning process
reasoning_content = ""
# Accumulates the full response
answer_content = ""
# Whether the reasoning phase has ended and the response has started
is_answering = False
print("=" * 20 + "Reasoning" + "=" * 20)
for chunk in response:
    # Skip chunks where both the reasoning and the response are empty
    message = chunk.output.choices[0].message
    reasoning_content_chunk = message.get("reasoning_content", None)
    if (chunk.output.choices[0].message.content == [] and
            reasoning_content_chunk == ""):
        pass
    else:
        # Current chunk is part of the reasoning process
        if reasoning_content_chunk is not None and chunk.output.choices[0].message.content == []:
            print(chunk.output.choices[0].message.reasoning_content, end="")
            reasoning_content += chunk.output.choices[0].message.reasoning_content
        # Current chunk is part of the response
        elif chunk.output.choices[0].message.content != []:
            if not is_answering:
                print("\n" + "=" * 20 + "Full response" + "=" * 20)
                is_answering = True
            print(chunk.output.choices[0].message.content[0]["text"], end="")
            answer_content += chunk.output.choices[0].message.content[0]["text"]
# To print the full reasoning process and the full response, uncomment the following lines
# print("=" * 20 + "Full reasoning" + "=" * 20 + "\n")
# print(f"{reasoning_content}")
# print("=" * 20 + "Full response" + "=" * 20 + "\n")
# print(f"{answer_content}")
Sample code
// DashScope SDK version >= 2.19.0
import java.util.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import io.reactivex.Flowable;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.exception.InputRequiredException;
import java.lang.System;
import com.alibaba.dashscope.utils.Constants;
public class Main {
private static final Logger logger = LoggerFactory.getLogger(Main.class);
private static StringBuilder reasoningContent = new StringBuilder();
private static StringBuilder finalContent = new StringBuilder();
private static boolean isFirstPrint = true;
private static void handleGenerationResult(MultiModalConversationResult message) {
String re = message.getOutput().getChoices().get(0).getMessage().getReasoningContent();
        String reasoning = Objects.isNull(re) ? "" : re; // default value
List<Map<String, Object>> content = message.getOutput().getChoices().get(0).getMessage().getContent();
if (!reasoning.isEmpty()) {
reasoningContent.append(reasoning);
if (isFirstPrint) {
System.out.println("====================思考过程====================");
isFirstPrint = false;
}
System.out.print(reasoning);
}
if (Objects.nonNull(content) && !content.isEmpty()) {
Object text = content.get(0).get("text");
finalContent.append(content.get(0).get("text"));
if (!isFirstPrint) {
System.out.println("\n====================完整回复====================");
isFirstPrint = true;
}
System.out.print(text);
}
}
public static MultiModalConversationParam buildMultiModalConversationParam(MultiModalMessage Msg) {
return MultiModalConversationParam.builder()
                // If the environment variable is not configured, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                // qvq-max is used here as an example; substitute another model name as needed
                .model("qvq-max")
.messages(Arrays.asList(Msg))
.incrementalOutput(true)
.build();
}
public static void streamCallWithMessage(MultiModalConversation conv, MultiModalMessage Msg)
throws NoApiKeyException, ApiException, InputRequiredException, UploadFileException {
MultiModalConversationParam param = buildMultiModalConversationParam(Msg);
Flowable<MultiModalConversationResult> result = conv.streamCall(param);
result.blockingForEach(message -> {
handleGenerationResult(message);
});
}
public static void main(String[] args) {
try {
MultiModalConversation conv = new MultiModalConversation();
MultiModalMessage userMsg = MultiModalMessage.builder()
.role(Role.USER.getValue())
.content(Arrays.asList(Collections.singletonMap("image", "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"),
Collections.singletonMap("text", "请解答这道题")))
.build();
streamCallWithMessage(conv, userMsg);
            // Print the final results
            // if (reasoningContent.length() > 0) {
            //     System.out.println("\n====================Full response====================");
            //     System.out.println(finalContent.toString());
            // }
} catch (ApiException | NoApiKeyException | UploadFileException | InputRequiredException e) {
logger.error("An exception occurred: {}", e.getMessage());
}
System.exit(0);
}
}
Sample code
curl -X POST https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-H 'X-DashScope-SSE: enable' \
-d '{
"model": "qvq-max",
"input":{
"messages":[
{
"role": "system",
"content": [
{"text": "You are a helpful assistant."}
]
},
{
"role": "user",
"content": [
{"image": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"},
{"text": "请解答这道题"}
]
}
]
}
}'
Multi-turn conversation
The QVQ API does not record your conversation history by default. The multi-turn conversation feature gives the model "memory" for scenarios that need continuous exchange, such as follow-up questions and information gathering. When you use QVQ, you receive a reasoning_content field (the reasoning process) and a content field (the response). Append only the content field to the context as {'role': 'assistant', 'content': the concatenated streamed content}; there is no need to add the reasoning_content field.
You can use multi-turn conversation through the OpenAI SDK or an OpenAI-compatible HTTP call.
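The snippet below is a minimal sketch of this context-assembly step in the OpenAI-compatible format; answer_content is assumed to hold the concatenated streamed content from the previous turn. Complete runnable examples follow.

# Append only the answer, not the reasoning, to the conversation history
messages.append({"role": "assistant", "content": answer_content})
# Add the next user turn, then call the model again with the same messages list
messages.append({"role": "user", "content": [{"type": "text", "text": "a follow-up question"}]})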
Sample code
from openai import OpenAI
import os
# Initialize the OpenAI client
client = OpenAI(
    # If the environment variable is not configured, replace with your Model Studio API key: api_key="sk-xxx"
api_key = os.getenv("DASHSCOPE_API_KEY"),
base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
)
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"
},
},
{"type": "text", "text": "请解答这道题"},
],
}
]
conversation_idx = 1
while True:
reasoning_content = "" # 定义完整思考过程
answer_content = "" # 定义完整回复
is_answering = False # 判断是否结束思考过程并开始回复
print("="*20+f"第{conversation_idx}轮对话"+"="*20)
conversation_idx += 1
# 创建聊天完成请求
completion = client.chat.completions.create(
model="qvq-max", # 此处以 qvq-max 为例,可按需更换模型名称
messages=messages,
stream=True
)
print("\n" + "=" * 20 + "思考过程" + "=" * 20 + "\n")
for chunk in completion:
# 如果chunk.choices为空,则打印usage
if not chunk.choices:
print("\nUsage:")
print(chunk.usage)
else:
delta = chunk.choices[0].delta
# 打印思考过程
if hasattr(delta, 'reasoning_content') and delta.reasoning_content != None:
print(delta.reasoning_content, end='', flush=True)
reasoning_content += delta.reasoning_content
else:
# 开始回复
if delta.content != "" and is_answering is False:
print("\n" + "=" * 20 + "完整回复" + "=" * 20 + "\n")
is_answering = True
# 打印回复过程
print(delta.content, end='', flush=True)
answer_content += delta.content
messages.append({"role": "assistant", "content": answer_content})
messages.append({
"role": "user",
"content": [
{
"type": "text",
"text": input("\n请输入你的消息:")
}
]
})
print("\n")
# print("=" * 20 + "完整思考过程" + "=" * 20 + "\n")
# print(reasoning_content)
# print("=" * 20 + "完整回复" + "=" * 20 + "\n")
# print(answer_content)
Sample code
import OpenAI from "openai";
import process from 'process';
import readline from 'readline/promises';
// Initialize the readline interface
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
// Initialize the OpenAI client
const openai = new OpenAI({
    apiKey: process.env.DASHSCOPE_API_KEY, // Read from environment variable
baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1'
});
let reasoningContent = '';
let answerContent = '';
let isAnswering = false;
let messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"
},
},
{"type": "text", "text": "请解答这道题"},
],
}
];
let conversationIdx = 1;
async function main() {
    while (true) {
        // Reset the per-turn state
        let reasoningContent = '';
        let answerContent = '';
        let isAnswering = false;
        console.log("=".repeat(20) + `Turn ${conversationIdx}` + "=".repeat(20));
        conversationIdx++;
try {
const stream = await openai.chat.completions.create({
model: 'qvq-max',
messages: messages,
stream: true
});
console.log("\n" + "=".repeat(20) + "思考过程" + "=".repeat(20) + "\n");
for await (const chunk of stream) {
if (!chunk.choices?.length) {
console.log('\nUsage:');
console.log(chunk.usage);
continue;
}
const delta = chunk.choices[0].delta;
                // Handle the reasoning process
if (delta.reasoning_content) {
process.stdout.write(delta.reasoning_content);
reasoningContent += delta.reasoning_content;
}
                // Handle the formal response
                if (delta.content) {
                    if (!isAnswering) {
                        console.log('\n' + "=".repeat(20) + "Full response" + "=".repeat(20) + "\n");
isAnswering = true;
}
process.stdout.write(delta.content);
answerContent += delta.content;
}
}
            // Append the full response to the message history
            messages.push({ role: 'assistant', content: answerContent });
            const userInput = await rl.question("Enter your message: ");
            messages.push({"role": "user", "content": userInput});
            console.log("\n");
} catch (error) {
console.error('Error:', error);
}
}
}
// Start the program
main().catch(console.error);
Sample code
curl -X POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
"model": "qvq-max",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"
}
},
{
"type": "text",
"text": "请解答这道题"
}
]
},
{
"role": "assistant",
"content": [
{
"type": "text",
"text": "长方体:面积为52,体积为24;正方形:面积为54,体积为27"
}
]
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "三角形的面积公式是什么"
}
]
}
],
"stream":true,
"stream_options":{"include_usage":true}
}'
You can use multi-turn conversation through the DashScope SDK or HTTP.
Sample code
import os
from dashscope import MultiModalConversation
messages = [
{
"role": "system",
"content": [{"text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{"image": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"},
{"text": "请解答这道题"}
]
}
]
conversation_idx = 1
while True:
print("=" * 20 + f"第{conversation_idx}轮对话" + "=" * 20)
conversation_idx += 1
response = MultiModalConversation.call(
        # If the environment variable is not configured, replace the following line with your Model Studio API key: api_key="sk-xxx",
        api_key=os.getenv('DASHSCOPE_API_KEY'),
        model="qvq-max",  # qvq-max is used here as an example; substitute another model name as needed
messages=messages,
stream=True,
)
    # Accumulates the full reasoning process
    reasoning_content = ""
    # Accumulates the full response
    answer_content = ""
    # Whether the reasoning phase has ended and the response has started
    is_answering = False
    print("=" * 20 + "Reasoning" + "=" * 20)
    for chunk in response:
        # Skip chunks where both the reasoning and the response are empty
        message = chunk.output.choices[0].message
        reasoning_content_chunk = message.get("reasoning_content", None)
        if (chunk.output.choices[0].message.content == [] and
                reasoning_content_chunk == ""):
            pass
        else:
            # Current chunk is part of the reasoning process
            if reasoning_content_chunk is not None and chunk.output.choices[0].message.content == []:
                print(chunk.output.choices[0].message.reasoning_content, end="")
                reasoning_content += chunk.output.choices[0].message.reasoning_content
            # Current chunk is part of the response
            elif chunk.output.choices[0].message.content != []:
                if not is_answering:
                    print("\n" + "=" * 20 + "Full response" + "=" * 20)
                    is_answering = True
                print(chunk.output.choices[0].message.content[0]["text"], end="")
                answer_content += chunk.output.choices[0].message.content[0]["text"]
    # Append the assistant answer using DashScope's list-of-dicts content format
    messages.append({"role": "assistant", "content": [{"text": answer_content}]})
    messages.append({
        "role": "user",
        "content": [
            {"text": input("\nEnter your message: ")}
        ]
    })
    print("\n")
    # To print the full reasoning process and the full response, uncomment the following lines
    # print("=" * 20 + "Full reasoning" + "=" * 20 + "\n")
    # print(f"{reasoning_content}")
    # print("=" * 20 + "Full response" + "=" * 20 + "\n")
    # print(f"{answer_content}")
Sample code
// DashScope SDK version >= 2.19.0
import java.util.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import io.reactivex.Flowable;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.exception.InputRequiredException;
import java.lang.System;
public class Main {
private static final Logger logger = LoggerFactory.getLogger(Main.class);
private static StringBuilder reasoningContent = new StringBuilder();
private static StringBuilder finalContent = new StringBuilder();
private static boolean isFirstPrint = true;
private static void handleGenerationResult(MultiModalConversationResult message) {
String re = message.getOutput().getChoices().get(0).getMessage().getReasoningContent();
        String reasoning = Objects.isNull(re) ? "" : re; // default value
List<Map<String, Object>> content = message.getOutput().getChoices().get(0).getMessage().getContent();
if (!reasoning.isEmpty()) {
reasoningContent.append(reasoning);
if (isFirstPrint) {
System.out.println("====================思考过程====================");
isFirstPrint = false;
}
System.out.print(reasoning);
}
if (Objects.nonNull(content) && !content.isEmpty()) {
Object text = content.get(0).get("text");
finalContent.append(content.get(0).get("text"));
if (!isFirstPrint) {
System.out.println("\n====================完整回复====================");
isFirstPrint = true;
}
System.out.print(text);
}
}
public static MultiModalConversationParam buildMultiModalConversationParam(List<MultiModalMessage> Msg) {
return MultiModalConversationParam.builder()
                // If the environment variable is not configured, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                // qvq-max is used here as an example; substitute another model name as needed
                .model("qvq-max")
.messages(Msg)
.incrementalOutput(true)
.build();
}
public static void streamCallWithMessage(MultiModalConversation conv, List<MultiModalMessage> Msg)
throws NoApiKeyException, ApiException, InputRequiredException, UploadFileException {
MultiModalConversationParam param = buildMultiModalConversationParam(Msg);
Flowable<MultiModalConversationResult> result = conv.streamCall(param);
result.blockingForEach(message -> {
handleGenerationResult(message);
});
}
public static void main(String[] args) {
try {
MultiModalConversation conv = new MultiModalConversation();
MultiModalMessage userMsg1 = MultiModalMessage.builder()
.role(Role.USER.getValue())
.content(Arrays.asList(Collections.singletonMap("image", "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"),
Collections.singletonMap("text", "请解答这道题")))
.build();
MultiModalMessage AssistantMsg = MultiModalMessage.builder()
.role(Role.ASSISTANT.getValue())
.content(Arrays.asList(Collections.singletonMap("text", "长方体:面积为52,体积为24;正方形:面积为54,体积为27")))
.build();
MultiModalMessage userMsg2 = MultiModalMessage.builder()
.role(Role.USER.getValue())
.content(Arrays.asList(Collections.singletonMap("text", "三角形面积公式是什么?")))
.build();
List<MultiModalMessage> Msg = Arrays.asList(userMsg1,AssistantMsg,userMsg2);
streamCallWithMessage(conv, Msg);
            // Print the final results
            // if (reasoningContent.length() > 0) {
            //     System.out.println("\n====================Full response====================");
            //     System.out.println(finalContent.toString());
            // }
} catch (ApiException | NoApiKeyException | UploadFileException | InputRequiredException e) {
logger.error("An exception occurred: {}", e.getMessage());
}
System.exit(0);
}
}
Sample code
curl -X POST https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-H 'X-DashScope-SSE: enable' \
-d '{
"model": "qvq-max",
"input":{
"messages":[
{
"role": "system",
"content": [{"text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{"image": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"},
{"text": "请解答这道题"}
]
},
{
"role": "assistant",
"content": [
{"text": "长方体:面积为52,体积为24;正方形:面积为54,体积为27"}
]
},
{
"role": "user",
"content": [
{"text": "三角形的面积公式是什么"}
]
}
]
}
}'
Multi-image input
QVQ models can accept multiple images in a single request, and the model answers based on all of the images passed in. Images can be passed as URLs or local files, or a combination of both; the sample code below passes image URLs.
The total number of tokens in the input images must be below the model's maximum input. See the limits on the number of images to calculate the maximum number of images you can pass; a rough estimate is sketched below.
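As a rough illustration of that calculation, the sketch below divides the input budget by a per-image token cost. All three numbers are placeholders, not official values; take the real limits from the model specification and the image token formula in the documentation.

# Hedged estimate of how many images fit in one request (placeholder values)
max_input_tokens = 100_000   # assumption: check your model's actual maximum input
tokens_per_image = 1_250     # assumption: depends on each image's resolution
text_budget = 500            # tokens reserved for the text part of the prompt
max_images = (max_input_tokens - text_budget) // tokens_per_image
print(f"At most ~{max_images} images per request")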
import os
from openai import OpenAI
client = OpenAI(
api_key=os.getenv("DASHSCOPE_API_KEY"),
base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
reasoning_content = "" # 定义完整思考过程
answer_content = "" # 定义完整回复
is_answering = False # 判断是否结束思考过程并开始回复
completion = client.chat.completions.create(
model="qvq-max",
messages=[
{"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]},
{"role": "user", "content": [
        # First image URL; to pass a local file, replace the url value with the image's Base64 encoding
{"type": "image_url", "image_url": {
"url": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"}, },
        # Second image URL; to pass a local file, replace the url value with the image's Base64 encoding
{"type": "image_url",
"image_url": {"url": "https://img.alicdn.com/imgextra/i1/O1CN01ukECva1cisjyK6ZDK_!!6000000003635-0-tps-1500-1734.jpg"}, },
{"type": "text", "text": "解答第一张图像中的题目,然后再解读第二张图的文章。"},
],
}
],
stream=True,
    # Uncomment the following to return token usage in the final chunk
# stream_options={
# "include_usage": True
# }
)
print("\n" + "=" * 20 + "思考过程" + "=" * 20 + "\n")
for chunk in completion:
# 如果chunk.choices为空,则打印usage
if not chunk.choices:
print("\nUsage:")
print(chunk.usage)
else:
delta = chunk.choices[0].delta
# 打印思考过程
if hasattr(delta, 'reasoning_content') and delta.reasoning_content != None:
print(delta.reasoning_content, end='', flush=True)
reasoning_content += delta.reasoning_content
else:
# 开始回复
if delta.content != "" and is_answering is False:
print("\n" + "=" * 20 + "完整回复" + "=" * 20 + "\n")
is_answering = True
# 打印回复过程
print(delta.content, end='', flush=True)
answer_content += delta.content
# print("=" * 20 + "完整思考过程" + "=" * 20 + "\n")
# print(reasoning_content)
# print("=" * 20 + "完整回复" + "=" * 20 + "\n")
# print(answer_content)
import OpenAI from "openai";
import process from 'process';
// Initialize the OpenAI client
const openai = new OpenAI({
    apiKey: process.env.DASHSCOPE_API_KEY, // Read from environment variable
baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1'
});
let reasoningContent = '';
let answerContent = '';
let isAnswering = false;
let messages = [
{role: "system",content:[{ type: "text", text: "You are a helpful assistant." }]},
{role: "user",content: [
        // First image URL; to pass a local file, replace the url value with the image's Base64 encoding
{type: "image_url",image_url: {"url": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"}},
        // Second image URL; to pass a local file, replace the url value with the image's Base64 encoding
{type: "image_url",image_url: {"url": "https://img.alicdn.com/imgextra/i1/O1CN01ukECva1cisjyK6ZDK_!!6000000003635-0-tps-1500-1734.jpg"}},
{type: "text", text: "解答第一张图像中的题目,然后再解读第二张图的文章。" },
]}]
async function main() {
try {
const stream = await openai.chat.completions.create({
model: 'qvq-max',
messages: messages,
stream: true
});
        console.log('\n' + '='.repeat(20) + 'Reasoning' + '='.repeat(20) + '\n');
for await (const chunk of stream) {
if (!chunk.choices?.length) {
console.log('\nUsage:');
console.log(chunk.usage);
continue;
}
const delta = chunk.choices[0].delta;
            // Handle the reasoning process
if (delta.reasoning_content) {
process.stdout.write(delta.reasoning_content);
reasoningContent += delta.reasoning_content;
}
            // Handle the formal response
            else if (delta.content) {
                if (!isAnswering) {
                    console.log('\n' + '='.repeat(20) + 'Full response' + '='.repeat(20) + '\n');
isAnswering = true;
}
process.stdout.write(delta.content);
answerContent += delta.content;
}
}
} catch (error) {
console.error('Error:', error);
}
}
main();
curl -X POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
"model": "qvq-max",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"
}
},
{
"type": "image_url",
"image_url": {
"url": "https://img.alicdn.com/imgextra/i1/O1CN01ukECva1cisjyK6ZDK_!!6000000003635-0-tps-1500-1734.jpg"
}
},
{
"type": "text",
"text": "解答第一张图像中的题目,然后再解读第二张图的文章。"
}
]
}
],
"stream":true,
"stream_options":{"include_usage":true}
}'
import os
import dashscope
from dashscope import MultiModalConversation
messages = [
{
"role": "system",
"content": [{"text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
            # First image URL; to pass a local file, replace the value with a file:// path (see Using local files)
{"image": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"},
            # Second image URL; to pass a local file, replace the value with a file:// path (see Using local files)
{"image": "https://img.alicdn.com/imgextra/i1/O1CN01ukECva1cisjyK6ZDK_!!6000000003635-0-tps-1500-1734.jpg"},
{"text": "解答第一张图像中的题目,然后再解读第二张图的文章。"}
]
}
]
response = MultiModalConversation.call(
    # If the environment variable is not configured, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qvq-max",  # qvq-max is used here as an example; substitute another model name as needed
messages=messages,
stream=True,
)
# Accumulates the full reasoning process
reasoning_content = ""
# Accumulates the full response
answer_content = ""
# Whether the reasoning phase has ended and the response has started
is_answering = False
print("=" * 20 + "Reasoning" + "=" * 20)
for chunk in response:
    # Skip chunks where both the reasoning and the response are empty
    message = chunk.output.choices[0].message
    reasoning_content_chunk = message.get("reasoning_content", None)
    if (chunk.output.choices[0].message.content == [] and
            reasoning_content_chunk == ""):
        pass
    else:
        # Current chunk is part of the reasoning process
        if reasoning_content_chunk is not None and chunk.output.choices[0].message.content == []:
            print(chunk.output.choices[0].message.reasoning_content, end="")
            reasoning_content += chunk.output.choices[0].message.reasoning_content
        # Current chunk is part of the response
        elif chunk.output.choices[0].message.content != []:
            if not is_answering:
                print("\n" + "=" * 20 + "Full response" + "=" * 20)
                is_answering = True
            print(chunk.output.choices[0].message.content[0]["text"], end="")
            answer_content += chunk.output.choices[0].message.content[0]["text"]
# To print the full reasoning process and the full response, uncomment the following lines
# print("=" * 20 + "Full reasoning" + "=" * 20 + "\n")
# print(f"{reasoning_content}")
# print("=" * 20 + "Full response" + "=" * 20 + "\n")
# print(f"{answer_content}")
// DashScope SDK version >= 2.19.0
import java.util.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import io.reactivex.Flowable;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.exception.InputRequiredException;
import java.lang.System;
import com.alibaba.dashscope.utils.Constants;
public class Main {
private static final Logger logger = LoggerFactory.getLogger(Main.class);
private static StringBuilder reasoningContent = new StringBuilder();
private static StringBuilder finalContent = new StringBuilder();
private static boolean isFirstPrint = true;
private static void handleGenerationResult(MultiModalConversationResult message) {
String re = message.getOutput().getChoices().get(0).getMessage().getReasoningContent();
        String reasoning = Objects.isNull(re) ? "" : re; // default value
List<Map<String, Object>> content = message.getOutput().getChoices().get(0).getMessage().getContent();
if (!reasoning.isEmpty()) {
reasoningContent.append(reasoning);
if (isFirstPrint) {
System.out.println("====================思考过程====================");
isFirstPrint = false;
}
System.out.print(reasoning);
}
if (Objects.nonNull(content) && !content.isEmpty()) {
Object text = content.get(0).get("text");
finalContent.append(text);
if (!isFirstPrint) {
System.out.println("\n====================完整回复====================");
isFirstPrint = true;
}
System.out.print(text);
}
}
public static MultiModalConversationParam buildMultiModalConversationParam(MultiModalMessage Msg) {
return MultiModalConversationParam.builder()
                // If the environment variable is not configured, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                // qvq-max is used here as an example; substitute another model name as needed
                .model("qvq-max")
.messages(Arrays.asList(Msg))
.incrementalOutput(true)
.build();
}
public static void streamCallWithMessage(MultiModalConversation conv, MultiModalMessage Msg)
throws NoApiKeyException, ApiException, InputRequiredException, UploadFileException {
MultiModalConversationParam param = buildMultiModalConversationParam(Msg);
Flowable<MultiModalConversationResult> result = conv.streamCall(param);
result.blockingForEach(message -> {
handleGenerationResult(message);
});
}
public static void main(String[] args) {
try {
MultiModalConversation conv = new MultiModalConversation();
MultiModalMessage userMessage = MultiModalMessage.builder().role(Role.USER.getValue())
.content(Arrays.asList(
                        // First image URL
                        Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241022/emyrja/dog_and_girl.jpeg"),
                        // To use a local image, uncomment the following line
                        // new HashMap<String, Object>(){{put("image", filePath);}},
                        // Second image URL
                        Collections.singletonMap("image", "https://dashscope.oss-cn-beijing.aliyuncs.com/images/tiger.png"),
                        // Third image URL
                        Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/hbygyo/rabbit.jpg"),
                        Collections.singletonMap("text", "What do these images depict?"))).build();
streamCallWithMessage(conv, userMessage);
            // Print the final results
            // if (reasoningContent.length() > 0) {
            //     System.out.println("\n====================Full response====================");
            //     System.out.println(finalContent.toString());
            // }
} catch (ApiException | NoApiKeyException | UploadFileException | InputRequiredException e) {
logger.error("An exception occurred: {}", e.getMessage());
}
System.exit(0);
}
}
curl --location 'https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
-H 'X-DashScope-SSE: enable' \
--data '{
"model": "qvq-max",
"input":{
"messages":[
{
"role": "system",
"content": [{"text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{"image": "https://img.alicdn.com/imgextra/i1/O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"},
{"image": "https://img.alicdn.com/imgextra/i1/O1CN01ukECva1cisjyK6ZDK_!!6000000003635-0-tps-1500-1734.jpg"},
{"text": "解答第一张图像中的题目,然后再解读第二张图的文章。"}
]
}
]
}
}'
Video understanding
A video can be passed in either as a list of images or as a video file. Only streaming output is currently supported.
Image list format
Pass in at least 4 and at most 512 images. If you are starting from a video file, you can sample frames yourself, as shown in the sketch below.
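One hedged way to produce such a list is to sample evenly spaced frames locally and then host or Base64-encode the results. The sketch below assumes opencv-python is installed; the frame count and output directory are illustrative choices, not API requirements.

import os
import cv2  # assumption: pip install opencv-python

def sample_frames(video_path, out_dir, n_frames=8):
    # Save n_frames evenly spaced JPEG frames from video_path into out_dir
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    paths = []
    for i in range(n_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // n_frames)
        ok, frame = cap.read()
        if not ok:
            break
        path = os.path.join(out_dir, f"frame_{i}.jpg")
        cv2.imwrite(path, frame)
        paths.append(path)
    cap.release()
    return paths  # host these (or Base64-encode them) and pass them in the "video" list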
import os
from openai import OpenAI
client = OpenAI(
    # If the environment variable is not configured, replace the following line with your Model Studio API key: api_key="sk-xxx",
api_key=os.getenv("DASHSCOPE_API_KEY"),
base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
reasoning_content = "" # 定义完整思考过程
answer_content = "" # 定义完整回复
is_answering = False # 判断是否结束思考过程并开始回复
completion = client.chat.completions.create(
model="qvq-max",
messages=[{"role": "user","content": [
        # When passing a list of images, the "type" parameter in the user message is "video"
{"type": "video","video": [
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/xzsgiz/football1.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/tdescd/football2.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/zefdja/football3.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/aedbqh/football4.jpg"]},
{"type": "text","text": "描述这个视频的具体过程?"},
]}],
stream=True,
    # Uncomment the following to return token usage in the final chunk
# stream_options={
# "include_usage": True
# }
)
print("\n" + "=" * 20 + "思考过程" + "=" * 20 + "\n")
for chunk in completion:
# 如果chunk.choices为空,则打印usage
if not chunk.choices:
print("\nUsage:")
print(chunk.usage)
else:
delta = chunk.choices[0].delta
# 打印思考过程
if hasattr(delta, 'reasoning_content') and delta.reasoning_content != None:
print(delta.reasoning_content, end='', flush=True)
reasoning_content += delta.reasoning_content
else:
# 开始回复
if delta.content != "" and is_answering is False:
print("\n" + "=" * 20 + "完整回复" + "=" * 20 + "\n")
is_answering = True
# 打印回复过程
print(delta.content, end='', flush=True)
answer_content += delta.content
# print("=" * 20 + "完整思考过程" + "=" * 20 + "\n")
# print(reasoning_content)
# print("=" * 20 + "完整回复" + "=" * 20 + "\n")
# print(answer_content)
import OpenAI from "openai";
import process from 'process';
// Initialize the OpenAI client
const openai = new OpenAI({
    apiKey: process.env.DASHSCOPE_API_KEY, // Read from environment variable
baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1'
});
let reasoningContent = '';
let answerContent = '';
let isAnswering = false;
let messages = [{
role: "user",
content: [
{
            // When passing a list of images, the "type" parameter in the user message is "video"
type: "video",
video: [
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/xzsgiz/football1.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/tdescd/football2.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/zefdja/football3.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/aedbqh/football4.jpg"
]
},
{
type: "text",
text: "描述这个视频的具体过程"
}
]
}]
async function main() {
try {
const stream = await openai.chat.completions.create({
model: 'qvq-max',
messages: messages,
stream: true
});
        console.log('\n' + '='.repeat(20) + 'Reasoning' + '='.repeat(20) + '\n');
for await (const chunk of stream) {
if (!chunk.choices?.length) {
console.log('\nUsage:');
console.log(chunk.usage);
continue;
}
const delta = chunk.choices[0].delta;
            // Handle the reasoning process
if (delta.reasoning_content) {
process.stdout.write(delta.reasoning_content);
reasoningContent += delta.reasoning_content;
}
            // Handle the formal response
            else if (delta.content) {
                if (!isAnswering) {
                    console.log('\n' + '='.repeat(20) + 'Full response' + '='.repeat(20) + '\n');
isAnswering = true;
}
process.stdout.write(delta.content);
answerContent += delta.content;
}
}
} catch (error) {
console.error('Error:', error);
}
}
main();
curl -X POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
"model": "qvq-max",
"messages": [{"role": "user",
"content": [{"type": "video",
"video": ["https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/xzsgiz/football1.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/tdescd/football2.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/zefdja/football3.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/aedbqh/football4.jpg"]},
{"type": "text",
"text": "描述这个视频的具体过程"}]}]
"stream":true,
"stream_options":{"include_usage":true}
}'
import os
import dashscope
messages = [{"role": "user",
"content": [
{"video":["https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/xzsgiz/football1.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/tdescd/football2.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/zefdja/football3.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/aedbqh/football4.jpg"],
},
{"text": "描述这个视频的具体过程"}]}]
response = dashscope.MultiModalConversation.call(
    # If the environment variable is not configured, replace the following line with your Model Studio API key: api_key="sk-xxx",
api_key=os.getenv("DASHSCOPE_API_KEY"),
model='qvq-max',
messages=messages,
stream=True
)
# Accumulates the full reasoning process
reasoning_content = ""
# Accumulates the full response
answer_content = ""
# Whether the reasoning phase has ended and the response has started
is_answering = False
print("=" * 20 + "Reasoning" + "=" * 20)
for chunk in response:
    # Skip chunks where both the reasoning and the response are empty
    message = chunk.output.choices[0].message
    reasoning_content_chunk = message.get("reasoning_content", None)
    if (chunk.output.choices[0].message.content == [] and
            reasoning_content_chunk == ""):
        pass
    else:
        # Current chunk is part of the reasoning process
        if reasoning_content_chunk is not None and chunk.output.choices[0].message.content == []:
            print(chunk.output.choices[0].message.reasoning_content, end="")
            reasoning_content += chunk.output.choices[0].message.reasoning_content
        # Current chunk is part of the response
        elif chunk.output.choices[0].message.content != []:
            if not is_answering:
                print("\n" + "=" * 20 + "Full response" + "=" * 20)
                is_answering = True
            print(chunk.output.choices[0].message.content[0]["text"], end="")
            answer_content += chunk.output.choices[0].message.content[0]["text"]
# To print the full reasoning process and the full response, uncomment the following lines
# print("=" * 20 + "Full reasoning" + "=" * 20 + "\n")
# print(f"{reasoning_content}")
# print("=" * 20 + "Full response" + "=" * 20 + "\n")
# print(f"{answer_content}")
// DashScope SDK version >= 2.19.0
import java.util.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import io.reactivex.Flowable;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.exception.InputRequiredException;
import java.lang.System;
import com.alibaba.dashscope.utils.Constants;
public class Main {
private static final Logger logger = LoggerFactory.getLogger(Main.class);
private static StringBuilder reasoningContent = new StringBuilder();
private static StringBuilder finalContent = new StringBuilder();
private static boolean isFirstPrint = true;
private static void handleGenerationResult(MultiModalConversationResult message) {
String re = message.getOutput().getChoices().get(0).getMessage().getReasoningContent();
        String reasoning = Objects.isNull(re) ? "" : re; // default value
List<Map<String, Object>> content = message.getOutput().getChoices().get(0).getMessage().getContent();
if (!reasoning.isEmpty()) {
reasoningContent.append(reasoning);
if (isFirstPrint) {
System.out.println("====================思考过程====================");
isFirstPrint = false;
}
System.out.print(reasoning);
}
if (Objects.nonNull(content) && !content.isEmpty()) {
Object text = content.get(0).get("text");
finalContent.append(text);
if (!isFirstPrint) {
System.out.println("\n====================完整回复====================");
isFirstPrint = true;
}
System.out.print(text);
}
}
public static MultiModalConversationParam buildMultiModalConversationParam(MultiModalMessage Msg) {
return MultiModalConversationParam.builder()
                // If the environment variable is not configured, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                // qvq-max is used here as an example; substitute another model name as needed
.model("qvq-max")
.messages(Arrays.asList(Msg))
.incrementalOutput(true)
.build();
}
public static void streamCallWithMessage(MultiModalConversation conv, MultiModalMessage Msg)
throws NoApiKeyException, ApiException, InputRequiredException, UploadFileException {
MultiModalConversationParam param = buildMultiModalConversationParam(Msg);
Flowable<MultiModalConversationResult> result = conv.streamCall(param);
result.blockingForEach(message -> {
handleGenerationResult(message);
});
}
public static void main(String[] args) {
try {
MultiModalConversation conv = new MultiModalConversation();
Map<String, Object> params = Map.of(
"video", Arrays.asList("https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/xzsgiz/football1.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/tdescd/football2.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/zefdja/football3.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/aedbqh/football4.jpg")
);
MultiModalMessage userMessage = MultiModalMessage.builder()
.role(Role.USER.getValue())
.content(Arrays.asList(
params,
Collections.singletonMap("text", "描述这个视频的具体过程")))
.build();
streamCallWithMessage(conv, userMessage);
            // Print the final results
            // if (reasoningContent.length() > 0) {
            //     System.out.println("\n====================Full response====================");
            //     System.out.println(finalContent.toString());
            // }
} catch (ApiException | NoApiKeyException | UploadFileException | InputRequiredException e) {
logger.error("An exception occurred: {}", e.getMessage());
}
System.exit(0);
}}
curl -X POST https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-H 'X-DashScope-SSE: enable' \
-d '{
"model": "qvq-max",
"input": {
"messages": [
{
"role": "user",
"content": [
{
"video": [
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/xzsgiz/football1.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/tdescd/football2.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/zefdja/football3.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/aedbqh/football4.jpg"
]
},
{
"text": "描述这个视频的具体过程"
}
]
}
]
}
}'
Video file format
Video file size: no more than 1 GB.
Video file formats: MP4, AVI, MKV, MOV, FLV, WMV, and more.
Video duration: from 2 seconds to 10 minutes.
Video dimensions: no restriction, but video files are resized to roughly 600,000 total pixels; larger videos do not produce better understanding.
Understanding of a video file's audio track is not yet supported.
A pre-upload check against these limits is sketched below.
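A hedged pre-upload check against these limits, using the standard ffprobe tool from FFmpeg (an external dependency assumed by this sketch, not something the API requires):

import os
import subprocess

def check_video(path):
    # File size limit: 1 GB
    size_gb = os.path.getsize(path) / (1024 ** 3)
    assert size_gb <= 1, f"video is {size_gb:.2f} GB; the limit is 1 GB"
    # Duration limit: 2 seconds to 10 minutes, read via ffprobe
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True)
    duration = float(out.stdout.strip())
    assert 2 <= duration <= 600, f"duration {duration:.1f}s is outside 2s-10min"

check_video("video.mp4")  # hypothetical local file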
import os
from openai import OpenAI
client = OpenAI(
    # If the environment variable is not configured, replace the following line with your Model Studio API key: api_key="sk-xxx",
api_key=os.getenv("DASHSCOPE_API_KEY"),
base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
reasoning_content = "" # 定义完整思考过程
answer_content = "" # 定义完整回复
is_answering = False # 判断是否结束思考过程并开始回复
completion = client.chat.completions.create(
model="qvq-max",
messages=[{"role": "user","content": [
        # When passing a video file, the "type" parameter in the user message is "video_url"
{"type": "video_url","video_url":{"url": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250328/eepdcq/phase_change_480p.mov"} },
{"type": "text","text": "这是视频的开头部分,请分析并猜测该视频在讲解什么知识。"},
]}],
stream=True,
    # Uncomment the following to return token usage in the final chunk
# stream_options={
# "include_usage": True
# }
)
print("\n" + "=" * 20 + "思考过程" + "=" * 20 + "\n")
for chunk in completion:
# 如果chunk.choices为空,则打印usage
if not chunk.choices:
print("\nUsage:")
print(chunk.usage)
else:
delta = chunk.choices[0].delta
# 打印思考过程
if hasattr(delta, 'reasoning_content') and delta.reasoning_content != None:
print(delta.reasoning_content, end='', flush=True)
reasoning_content += delta.reasoning_content
else:
# 开始回复
if delta.content != "" and is_answering is False:
print("\n" + "=" * 20 + "完整回复" + "=" * 20 + "\n")
is_answering = True
# 打印回复过程
print(delta.content, end='', flush=True)
answer_content += delta.content
# print("=" * 20 + "完整思考过程" + "=" * 20 + "\n")
# print(reasoning_content)
# print("=" * 20 + "完整回复" + "=" * 20 + "\n")
# print(answer_content)
import OpenAI from "openai";
import process from 'process';
// Initialize the OpenAI client
const openai = new OpenAI({
    apiKey: process.env.DASHSCOPE_API_KEY, // Read from environment variable
baseURL: 'https://dashscope.aliyuncs.com/compatible-mode/v1'
});
let reasoningContent = '';
let answerContent = '';
let isAnswering = false;
let messages = [
{
role: "system",
content: [{"type": "text", "text": "You are a helpful assistant."}]},
{
role: "user",
content: [
{ type: "video_url", video_url: { "url": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250328/eepdcq/phase_change_480p.mov" } },
{ type: "text", text: "这是视频的开头部分,请分析并猜测该视频在讲解什么知识。" },
]
}]
async function main() {
try {
const stream = await openai.chat.completions.create({
model: 'qvq-max',
messages: messages,
stream: true
});
        console.log('\n' + '='.repeat(20) + 'Reasoning' + '='.repeat(20) + '\n');
for await (const chunk of stream) {
if (!chunk.choices?.length) {
console.log('\nUsage:');
console.log(chunk.usage);
continue;
}
const delta = chunk.choices[0].delta;
            // Handle the reasoning process
if (delta.reasoning_content) {
process.stdout.write(delta.reasoning_content);
reasoningContent += delta.reasoning_content;
}
            // Handle the formal response
            else if (delta.content) {
                if (!isAnswering) {
                    console.log('\n' + '='.repeat(20) + 'Full response' + '='.repeat(20) + '\n');
isAnswering = true;
}
process.stdout.write(delta.content);
answerContent += delta.content;
}
}
} catch (error) {
console.error('Error:', error);
}
}
main();
curl --location 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
"model": "qvq-max",
"messages": [
{
"role": "system",
"content": [{"type":"text","text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{
"type": "video_url",
"video_url": {
"url": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250328/eepdcq/phase_change_480p.mov"
}
},
{
"type": "text",
"text": "这是视频的开头部分,请分析并猜测该视频在讲解什么知识。"
}
]
}
],
"stream":true,
"stream_options":{"include_usage":true}
}'
import os
from dashscope import MultiModalConversation
messages = [
{
"role": "system",
"content": [{"text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{"video": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250328/eepdcq/phase_change_480p.mov"},
{"text": "这是视频的开头部分,请分析并猜测该视频在讲解什么知识。"}
]
}
]
response = MultiModalConversation.call(
    # If the environment variable is not configured, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qvq-max",  # qvq-max is used here as an example; substitute another model name as needed
messages=messages,
stream=True,
)
# Accumulates the full reasoning process
reasoning_content = ""
# Accumulates the full response
answer_content = ""
# Whether the reasoning phase has ended and the response has started
is_answering = False
print("=" * 20 + "Reasoning" + "=" * 20)
for chunk in response:
    # Skip chunks where both the reasoning and the response are empty
    message = chunk.output.choices[0].message
    reasoning_content_chunk = message.get("reasoning_content", None)
    if (chunk.output.choices[0].message.content == [] and
            reasoning_content_chunk == ""):
        pass
    else:
        # Current chunk is part of the reasoning process
        if reasoning_content_chunk is not None and chunk.output.choices[0].message.content == []:
            print(chunk.output.choices[0].message.reasoning_content, end="")
            reasoning_content += chunk.output.choices[0].message.reasoning_content
        # Current chunk is part of the response
        elif chunk.output.choices[0].message.content != []:
            if not is_answering:
                print("\n" + "=" * 20 + "Full response" + "=" * 20)
                is_answering = True
            print(chunk.output.choices[0].message.content[0]["text"], end="")
            answer_content += chunk.output.choices[0].message.content[0]["text"]
# To print the full reasoning process and the full response, uncomment the following lines
# print("=" * 20 + "Full reasoning" + "=" * 20 + "\n")
# print(f"{reasoning_content}")
# print("=" * 20 + "Full response" + "=" * 20 + "\n")
# print(f"{answer_content}")
// DashScope SDK version >= 2.19.0
import java.util.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import io.reactivex.Flowable;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.exception.InputRequiredException;
import java.lang.System;
import com.alibaba.dashscope.utils.Constants;
public class Main {
private static final Logger logger = LoggerFactory.getLogger(Main.class);
private static StringBuilder reasoningContent = new StringBuilder();
private static StringBuilder finalContent = new StringBuilder();
private static boolean isFirstPrint = true;
private static void handleGenerationResult(MultiModalConversationResult message) {
String re = message.getOutput().getChoices().get(0).getMessage().getReasoningContent();
        String reasoning = Objects.isNull(re) ? "" : re; // default value
List<Map<String, Object>> content = message.getOutput().getChoices().get(0).getMessage().getContent();
if (!reasoning.isEmpty()) {
reasoningContent.append(reasoning);
if (isFirstPrint) {
System.out.println("====================思考过程====================");
isFirstPrint = false;
}
System.out.print(reasoning);
}
if (Objects.nonNull(content) && !content.isEmpty()) {
Object text = content.get(0).get("text");
finalContent.append(content.get(0).get("text"));
if (!isFirstPrint) {
System.out.println("\n====================完整回复====================");
isFirstPrint = true;
}
System.out.print(text);
}
}
public static MultiModalConversationParam buildMultiModalConversationParam(MultiModalMessage Msg) {
return MultiModalConversationParam.builder()
                // If the environment variable is not configured, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                // qvq-max is used here as an example; substitute another model name as needed
.model("qvq-max")
.messages(Arrays.asList(Msg))
.incrementalOutput(true)
.build();
}
public static void streamCallWithMessage(MultiModalConversation conv, MultiModalMessage Msg)
throws NoApiKeyException, ApiException, InputRequiredException, UploadFileException {
MultiModalConversationParam param = buildMultiModalConversationParam(Msg);
Flowable<MultiModalConversationResult> result = conv.streamCall(param);
result.blockingForEach(message -> {
handleGenerationResult(message);
});
}
public static void main(String[] args) {
try {
MultiModalConversation conv = new MultiModalConversation();
MultiModalMessage userMsg = MultiModalMessage.builder()
.role(Role.USER.getValue())
.content(Arrays.asList(Collections.singletonMap("video", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250328/eepdcq/phase_change_480p.mov"),
Collections.singletonMap("text", "这是视频的开头部分,请分析并猜测该视频在讲解什么知识。")))
.build();
streamCallWithMessage(conv, userMsg);
            // Print the final results
            // if (reasoningContent.length() > 0) {
            //     System.out.println("\n====================Full response====================");
            //     System.out.println(finalContent.toString());
            // }
} catch (ApiException | NoApiKeyException | UploadFileException | InputRequiredException e) {
logger.error("An exception occurred: {}", e.getMessage());
}
System.exit(0);
}}
curl -X POST https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-H 'X-DashScope-SSE: enable' \
-d '{
"model": "qvq-max",
"input":{
"messages":[
{
"role": "system",
"content": [
{"text": "You are a helpful assistant."}
]
},
{
"role": "user",
"content": [
{"video": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250328/eepdcq/phase_change_480p.mov"},
{"text": "这是视频的开头部分,请分析并猜测该视频在讲解什么知识。"}
]
}
]
}
}'
Using local files (Base64 encoding or local path)
The following sample code passes local image or video files. With the OpenAI SDK or HTTP, encode the local file as Base64 before passing it; with the DashScope SDK, pass the local file path directly.
Images: see the usage notes for the constraints on input images; to pass an image URL, see Getting started.
from openai import OpenAI
import os
import base64
# Base64 encoding helper
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

# Replace xxx/test.jpg with the absolute path of your local image
base64_image = encode_image("xxx/test.jpg")
# Initialize the OpenAI client
client = OpenAI(
    # If the environment variable is not configured, replace with your Model Studio API key: api_key="sk-xxx"
api_key = os.getenv("DASHSCOPE_API_KEY"),
base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
)
reasoning_content = "" # 定义完整思考过程
answer_content = "" # 定义完整回复
is_answering = False # 判断是否结束思考过程并开始回复
# 创建聊天完成请求
completion = client.chat.completions.create(
model="qvq-max", # 此处以 qvq-max 为例,可按需更换模型名称
messages=[
{
"role": "system",
"content": [{"type":"text","text": "You are a helpful assistant."}]},
{
"role": "user",
"content": [
{
"type": "image_url",
                    # Note: when passing Base64 data, the image format (image/{format}) must match the Content-Type in the list of supported images. "f" is Python's string formatting.
                    # PNG image:  f"data:image/png;base64,{base64_image}"
                    # JPEG image: f"data:image/jpeg;base64,{base64_image}"
                    # WEBP image: f"data:image/webp;base64,{base64_image}"
"image_url": {"url": f"data:image/png;base64,{base64_image}"},
},
{"type": "text", "text": "这道题怎么解答?"},
],
}
],
stream=True,
    # Uncomment the following to return token usage in the final chunk
# stream_options={
# "include_usage": True
# }
)
print("\n" + "=" * 20 + "思考过程" + "=" * 20 + "\n")
for chunk in completion:
# 如果chunk.choices为空,则打印usage
if not chunk.choices:
print("\nUsage:")
print(chunk.usage)
else:
delta = chunk.choices[0].delta
# 打印思考过程
if hasattr(delta, 'reasoning_content') and delta.reasoning_content != None:
print(delta.reasoning_content, end='', flush=True)
reasoning_content += delta.reasoning_content
else:
# 开始回复
if delta.content != "" and is_answering is False:
print("\n" + "=" * 20 + "完整回复" + "=" * 20 + "\n")
is_answering = True
# 打印回复过程
print(delta.content, end='', flush=True)
answer_content += delta.content
# print("=" * 20 + "完整思考过程" + "=" * 20 + "\n")
# print(reasoning_content)
# print("=" * 20 + "完整回复" + "=" * 20 + "\n")
# print(answer_content)
import OpenAI from "openai";
import { readFileSync } from 'fs';
const openai = new OpenAI(
{
        // If the environment variable is not configured, replace the following line with your Model Studio API key: apiKey: "sk-xxx"
apiKey: process.env.DASHSCOPE_API_KEY,
baseURL: "https://dashscope.aliyuncs.com/compatible-mode/v1"
}
);
const encodeImage = (imagePath) => {
const imageFile = readFileSync(imagePath);
return imageFile.toString('base64');
};
// Replace xxx/test.jpg with the absolute path of your local image
const base64Image = encodeImage("xxx/test.jpg")
let reasoningContent = '';
let answerContent = '';
let isAnswering = false;
let messages = [
{
role: "system",
content: [{"type": "text", "text": "You are a helpful assistant."}]},
{
role: "user",
content: [
{ type: "image_url", image_url: {"url": `data:image/png;base64,${base64Image}`} },
{ type: "text", text: "请解答这道题" },
]
}]
async function main() {
try {
const stream = await openai.chat.completions.create({
model: 'qvq-max',
messages: messages,
stream: true
});
        console.log('\n' + '='.repeat(20) + 'Reasoning' + '='.repeat(20) + '\n');
for await (const chunk of stream) {
if (!chunk.choices?.length) {
console.log('\nUsage:');
console.log(chunk.usage);
continue;
}
const delta = chunk.choices[0].delta;
            // Handle the reasoning process
if (delta.reasoning_content) {
process.stdout.write(delta.reasoning_content);
reasoningContent += delta.reasoning_content;
}
            // Handle the formal response
            else if (delta.content) {
                if (!isAnswering) {
                    console.log('\n' + '='.repeat(20) + 'Full response' + '='.repeat(20) + '\n');
isAnswering = true;
}
process.stdout.write(delta.content);
answerContent += delta.content;
}
}
} catch (error) {
console.error('Error:', error);
}
}
main();
When you use the DashScope SDK with local image files, pass a file path. Refer to the table below to construct the path for your SDK and operating system; a helper sketch follows the table.
| OS | SDK | File path to pass | Example |
| --- | --- | --- | --- |
| Linux or macOS | Python SDK | file://{absolute path} | file:///home/images/test.png |
| Linux or macOS | Java SDK | file://{absolute path} | file:///home/images/test.png |
| Windows | Python SDK | file://{absolute path} | file://D:/images/test.png |
| Windows | Java SDK | file:///{absolute path} | file:///D:/images/test.png |
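As a convenience, Python's standard library can build these URIs, as in the hedged sketch below. Note that Path.as_uri() always produces the triple-slash form (file:///...), so for the Python SDK on Windows compare the result against the table row above before using it.

from pathlib import Path

# Build a file URI from a local path; replace xxx/test.jpg with your image's path
image_uri = Path("xxx/test.jpg").resolve().as_uri()
print(image_uri)  # e.g. file:///home/images/test.jpg on Linux/macOS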
import os
from dashscope import MultiModalConversation
# Replace xxx/test.jpg with the absolute path of your local image
local_path = "xxx/test.jpg"
image_path = f"file://{local_path}"
messages = [{"role": "system",
"content": [{"text": "You are a helpful assistant."}]},
{'role':'user',
'content': [{'image': image_path},
                         {'text': 'Please solve this problem'}]}]
response = MultiModalConversation.call(
    # If the environment variable is not configured, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qvq-max",  # qvq-max is used here as an example; substitute another model name as needed
messages=messages,
stream=True,
)
# Accumulates the full reasoning process
reasoning_content = ""
# Accumulates the full response
answer_content = ""
# Whether the reasoning phase has ended and the response has started
is_answering = False
print("=" * 20 + "Reasoning" + "=" * 20)
for chunk in response:
    # Skip chunks where both the reasoning and the response are empty
    message = chunk.output.choices[0].message
    reasoning_content_chunk = message.get("reasoning_content", None)
    if (chunk.output.choices[0].message.content == [] and
            reasoning_content_chunk == ""):
        pass
    else:
        # Current chunk is part of the reasoning process
        if reasoning_content_chunk is not None and chunk.output.choices[0].message.content == []:
            print(chunk.output.choices[0].message.reasoning_content, end="")
            reasoning_content += chunk.output.choices[0].message.reasoning_content
        # Current chunk is part of the response
        elif chunk.output.choices[0].message.content != []:
            if not is_answering:
                print("\n" + "=" * 20 + "Full response" + "=" * 20)
                is_answering = True
            print(chunk.output.choices[0].message.content[0]["text"], end="")
            answer_content += chunk.output.choices[0].message.content[0]["text"]
# To print the full reasoning process and the full response, uncomment the following lines
# print("=" * 20 + "Full reasoning" + "=" * 20 + "\n")
# print(f"{reasoning_content}")
# print("=" * 20 + "Full response" + "=" * 20 + "\n")
# print(f"{answer_content}")
// Requires DashScope Java SDK version >= 2.19.0
import java.util.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import io.reactivex.Flowable;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.exception.InputRequiredException;

public class Main {
    private static final Logger logger = LoggerFactory.getLogger(Main.class);
    private static StringBuilder reasoningContent = new StringBuilder();
    private static StringBuilder finalContent = new StringBuilder();
    private static boolean isFirstPrint = true;

    private static void handleGenerationResult(MultiModalConversationResult message) {
        String re = message.getOutput().getChoices().get(0).getMessage().getReasoningContent();
        String reasoning = Objects.isNull(re) ? "" : re; // default to empty string
        List<Map<String, Object>> content = message.getOutput().getChoices().get(0).getMessage().getContent();
        if (!reasoning.isEmpty()) {
            reasoningContent.append(reasoning);
            if (isFirstPrint) {
                System.out.println("====================Reasoning====================");
                isFirstPrint = false;
            }
            System.out.print(reasoning);
        }
        if (Objects.nonNull(content) && !content.isEmpty()) {
            Object text = content.get(0).get("text");
            finalContent.append(text);
            if (!isFirstPrint) {
                System.out.println("\n====================Full response====================");
                isFirstPrint = true;
            }
            System.out.print(text);
        }
    }

    public static MultiModalConversationParam buildMultiModalConversationParam(MultiModalMessage msg) {
        return MultiModalConversationParam.builder()
                // If the environment variable is not set, replace the line below with your Model Studio API key: .apiKey("sk-xxx")
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                // qvq-max is used here as an example; substitute another model name as needed
                .model("qvq-max")
                .messages(Arrays.asList(msg))
                .incrementalOutput(true)
                .build();
    }

    public static void streamCallWithMessage(MultiModalConversation conv, MultiModalMessage msg)
            throws NoApiKeyException, ApiException, InputRequiredException, UploadFileException {
        MultiModalConversationParam param = buildMultiModalConversationParam(msg);
        Flowable<MultiModalConversationResult> result = conv.streamCall(param);
        result.blockingForEach(message -> handleGenerationResult(message));
    }

    public static void main(String[] args) {
        try {
            // Replace xxx/test.png with the absolute path to your local image
            String localPath = "xxx/test.png";
            String filePath = "file://" + localPath;
            MultiModalConversation conv = new MultiModalConversation();
            MultiModalMessage userMessage = MultiModalMessage.builder().role(Role.USER.getValue())
                    .content(Arrays.asList(
                            new HashMap<String, Object>() {{ put("image", filePath); }},
                            new HashMap<String, Object>() {{ put("text", "Please solve this problem"); }}))
                    .build();
            streamCallWithMessage(conv, userMessage);
            // Print the final aggregated results
            // if (reasoningContent.length() > 0) {
            //     System.out.println("\n====================Full response====================");
            //     System.out.println(finalContent.toString());
            // }
        } catch (ApiException | NoApiKeyException | UploadFileException | InputRequiredException e) {
            logger.error("An exception occurred: {}", e.getMessage());
        }
        System.exit(0);
    }
}
A local video file must be no larger than 100 MB and no longer than 10 minutes.
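Because an oversized video fails only after it has been read and encoded, it can be worth validating the file before Base64-encoding it. A minimal sketch based on the limit above (the helper name is ours; duration is not checked here, since that requires a media library):

import os

MAX_VIDEO_BYTES = 100 * 1024 * 1024  # 100 MB limit from the note above

def check_video_size(video_path: str) -> None:
    """Raise early if a local video exceeds the documented size limit."""
    size = os.path.getsize(video_path)
    if size > MAX_VIDEO_BYTES:
        raise ValueError(
            f"{video_path} is {size / (1024 * 1024):.1f} MB, over the 100 MB limit")

check_video_size("xxx/test.mp4")  # replace with your local video path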
from openai import OpenAI
import os
import base64

# Encode a local video as Base64
def encode_video(video_path):
    with open(video_path, "rb") as video_file:
        return base64.b64encode(video_file.read()).decode("utf-8")

# Replace xxx/test.mp4 with the absolute path to your local video
base64_video = encode_video("xxx/test.mp4")

# Initialize the OpenAI client
client = OpenAI(
    # If the environment variable is not set, replace the line below with your Model Studio API key: api_key="sk-xxx"
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
)

reasoning_content = ""   # full reasoning process
answer_content = ""      # full response
is_answering = False     # whether the reasoning phase has ended and the response has started

# Create the chat completion request
completion = client.chat.completions.create(
    model="qvq-max",  # qvq-max is used here as an example; substitute another model name as needed
    messages=[
        {
            "role": "system",
            "content": [{"type": "text", "text": "You are a helpful assistant."}],
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "video_url",
                    # When passing Base64 video, the video/{format} prefix must match
                    # the actual video format; video/mp4 is used here for an .mp4 file.
                    "video_url": {"url": f"data:video/mp4;base64,{base64_video}"},
                },
                {"type": "text", "text": "What is this video about?"},
            ],
        }
    ],
    stream=True,
    # Uncomment the following to return token usage in the last chunk
    # stream_options={
    #     "include_usage": True
    # }
)

print("\n" + "=" * 20 + "Reasoning" + "=" * 20 + "\n")
for chunk in completion:
    # If chunk.choices is empty, print the usage
    if not chunk.choices:
        print("\nUsage:")
        print(chunk.usage)
    else:
        delta = chunk.choices[0].delta
        # Print the reasoning process
        if hasattr(delta, 'reasoning_content') and delta.reasoning_content is not None:
            print(delta.reasoning_content, end='', flush=True)
            reasoning_content += delta.reasoning_content
        else:
            # Start of the response
            if delta.content != "" and not is_answering:
                print("\n" + "=" * 20 + "Full response" + "=" * 20 + "\n")
                is_answering = True
            # Print the response
            print(delta.content, end='', flush=True)
            answer_content += delta.content

# print("=" * 20 + "Full reasoning" + "=" * 20 + "\n")
# print(reasoning_content)
# print("=" * 20 + "Full response" + "=" * 20 + "\n")
# print(answer_content)
import OpenAI from "openai";
import { readFileSync } from 'fs';

const openai = new OpenAI({
    // If the environment variable is not set, replace the line below with your Model Studio API key: apiKey: "sk-xxx"
    apiKey: process.env.DASHSCOPE_API_KEY,
    baseURL: "https://dashscope.aliyuncs.com/compatible-mode/v1"
});

// Read a local video and encode it as Base64
const encodeVideo = (videoPath) => {
    const videoFile = readFileSync(videoPath);
    return videoFile.toString('base64');
};

// Replace xxx/test.mp4 with the absolute path to your local video
const base64Video = encodeVideo("xxx/test.mp4");

let reasoningContent = '';
let answerContent = '';
let isAnswering = false;

let messages = [
    {
        role: "system",
        content: [{ type: "text", text: "You are a helpful assistant." }]
    },
    {
        role: "user",
        content: [
            // The video/{format} prefix must match the actual video format; video/mp4 is used here for an .mp4 file
            { type: "video_url", video_url: { url: `data:video/mp4;base64,${base64Video}` } },
            { type: "text", text: "What is this video about?" },
        ]
    }
];

async function main() {
    try {
        const stream = await openai.chat.completions.create({
            model: 'qvq-max',
            messages: messages,
            stream: true
        });

        console.log('\n' + '='.repeat(20) + 'Reasoning' + '='.repeat(20) + '\n');

        for await (const chunk of stream) {
            if (!chunk.choices?.length) {
                console.log('\nUsage:');
                console.log(chunk.usage);
                continue;
            }

            const delta = chunk.choices[0].delta;
            // Handle the reasoning process
            if (delta.reasoning_content) {
                process.stdout.write(delta.reasoning_content);
                reasoningContent += delta.reasoning_content;
            }
            // Handle the final response
            else if (delta.content) {
                if (!isAnswering) {
                    console.log('\n' + '='.repeat(20) + 'Full response' + '='.repeat(20) + '\n');
                    isAnswering = true;
                }
                process.stdout.write(delta.content);
                answerContent += delta.content;
            }
        }
    } catch (error) {
        console.error('Error:', error);
    }
}

main();
When you pass a local video file with the DashScope SDK, you must provide a file path. Build the path according to your operating system and SDK, as shown in the table below.

| System | SDK | File path to pass | Example |
| --- | --- | --- | --- |
| Linux or macOS | Python SDK | file://{absolute path to file} | file:///home/images/test.mp4 |
| Linux or macOS | Java SDK | file://{absolute path to file} | file:///home/images/test.mp4 |
| Windows | Python SDK | file://{absolute path to file} | file://D:/images/test.mp4 |
| Windows | Java SDK | file:///{absolute path to file} | file:///D:/images/test.mp4 |
import os
from dashscope import MultiModalConversation

# Replace xxx/test.mp4 with the absolute path to your local video
local_path = "xxx/test.mp4"
video_path = f"file://{local_path}"
messages = [
    {"role": "system",
     "content": [{"text": "You are a helpful assistant."}]},
    {"role": "user",
     "content": [{"video": video_path},
                 {"text": "What is this video about?"}]}
]
response = MultiModalConversation.call(
    # If the environment variable is not set, replace the line below with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qvq-max",  # qvq-max is used here as an example; substitute another model name as needed
    messages=messages,
    stream=True,
)

reasoning_content = ""   # full reasoning process
answer_content = ""      # full response
is_answering = False     # whether the reasoning phase has ended and the response has started

print("=" * 20 + "Reasoning" + "=" * 20)
for chunk in response:
    message = chunk.output.choices[0].message
    reasoning_content_chunk = message.get("reasoning_content", None)
    # Skip chunks where both the reasoning and the response are empty
    if message.content == [] and reasoning_content_chunk == "":
        pass
    else:
        # Current chunk belongs to the reasoning process
        if reasoning_content_chunk is not None and message.content == []:
            print(message.reasoning_content, end="")
            reasoning_content += message.reasoning_content
        # Current chunk belongs to the response
        elif message.content != []:
            if not is_answering:
                print("\n" + "=" * 20 + "Full response" + "=" * 20)
                is_answering = True
            print(message.content[0]["text"], end="")
            answer_content += message.content[0]["text"]

# To print the full reasoning process and the full response, uncomment the lines below
# print("=" * 20 + "Full reasoning" + "=" * 20 + "\n")
# print(reasoning_content)
# print("=" * 20 + "Full response" + "=" * 20 + "\n")
# print(answer_content)
// Requires DashScope Java SDK version >= 2.19.0
import java.util.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import io.reactivex.Flowable;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.exception.InputRequiredException;

public class Main {
    private static final Logger logger = LoggerFactory.getLogger(Main.class);
    private static StringBuilder reasoningContent = new StringBuilder();
    private static StringBuilder finalContent = new StringBuilder();
    private static boolean isFirstPrint = true;

    private static void handleGenerationResult(MultiModalConversationResult message) {
        String re = message.getOutput().getChoices().get(0).getMessage().getReasoningContent();
        String reasoning = Objects.isNull(re) ? "" : re; // default to empty string
        List<Map<String, Object>> content = message.getOutput().getChoices().get(0).getMessage().getContent();
        if (!reasoning.isEmpty()) {
            reasoningContent.append(reasoning);
            if (isFirstPrint) {
                System.out.println("====================Reasoning====================");
                isFirstPrint = false;
            }
            System.out.print(reasoning);
        }
        if (Objects.nonNull(content) && !content.isEmpty()) {
            Object text = content.get(0).get("text");
            finalContent.append(text);
            if (!isFirstPrint) {
                System.out.println("\n====================Full response====================");
                isFirstPrint = true;
            }
            System.out.print(text);
        }
    }

    public static MultiModalConversationParam buildMultiModalConversationParam(MultiModalMessage msg) {
        return MultiModalConversationParam.builder()
                // If the environment variable is not set, replace the line below with your Model Studio API key: .apiKey("sk-xxx")
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                // qvq-max is used here as an example; substitute another model name as needed
                .model("qvq-max")
                .messages(Arrays.asList(msg))
                .incrementalOutput(true)
                .build();
    }

    public static void streamCallWithMessage(MultiModalConversation conv, MultiModalMessage msg)
            throws NoApiKeyException, ApiException, InputRequiredException, UploadFileException {
        MultiModalConversationParam param = buildMultiModalConversationParam(msg);
        Flowable<MultiModalConversationResult> result = conv.streamCall(param);
        result.blockingForEach(message -> handleGenerationResult(message));
    }

    public static void main(String[] args) {
        try {
            // Replace xxx/test.mp4 with the absolute path to your local video
            String localPath = "xxx/test.mp4";
            String filePath = "file://" + localPath;
            MultiModalConversation conv = new MultiModalConversation();
            MultiModalMessage userMessage = MultiModalMessage.builder().role(Role.USER.getValue())
                    .content(Arrays.asList(
                            new HashMap<String, Object>() {{ put("video", filePath); }},
                            new HashMap<String, Object>() {{ put("text", "What is this video about?"); }}))
                    .build();
            streamCallWithMessage(conv, userMessage);
            // Print the final aggregated results
            // if (reasoningContent.length() > 0) {
            //     System.out.println("\n====================Full response====================");
            //     System.out.println(finalContent.toString());
            // }
        } catch (ApiException | NoApiKeyException | UploadFileException | InputRequiredException e) {
            logger.error("An exception occurred: {}", e.getMessage());
        }
        System.exit(0);
    }
}
import os
from openai import OpenAI
import base64

# Encode a local image as Base64
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

base64_image1 = encode_image("football1.jpg")
base64_image2 = encode_image("football2.jpg")
base64_image3 = encode_image("football3.jpg")
base64_image4 = encode_image("football4.jpg")

client = OpenAI(
    # If the environment variable is not set, replace the line below with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

reasoning_content = ""   # full reasoning process
answer_content = ""      # full response
is_answering = False     # whether the reasoning phase has ended and the response has started

completion = client.chat.completions.create(
    model="qvq-max",
    messages=[{"role": "user", "content": [
        # When passing a list of images, the "type" field of the user message is "video".
        # The image/{format} prefix must match the actual image format; image/jpeg is used here for .jpg files.
        {"type": "video", "video": [
            f"data:image/jpeg;base64,{base64_image1}",
            f"data:image/jpeg;base64,{base64_image2}",
            f"data:image/jpeg;base64,{base64_image3}",
            f"data:image/jpeg;base64,{base64_image4}"]},
        {"type": "text", "text": "Describe the specific process shown in this video."},
    ]}],
    stream=True,
    # Uncomment the following to return token usage in the last chunk
    # stream_options={
    #     "include_usage": True
    # }
)

print("\n" + "=" * 20 + "Reasoning" + "=" * 20 + "\n")
for chunk in completion:
    # If chunk.choices is empty, print the usage
    if not chunk.choices:
        print("\nUsage:")
        print(chunk.usage)
    else:
        delta = chunk.choices[0].delta
        # Print the reasoning process
        if hasattr(delta, 'reasoning_content') and delta.reasoning_content is not None:
            print(delta.reasoning_content, end='', flush=True)
            reasoning_content += delta.reasoning_content
        else:
            # Start of the response
            if delta.content != "" and not is_answering:
                print("\n" + "=" * 20 + "Full response" + "=" * 20 + "\n")
                is_answering = True
            # Print the response
            print(delta.content, end='', flush=True)
            answer_content += delta.content

# print("=" * 20 + "Full reasoning" + "=" * 20 + "\n")
# print(reasoning_content)
# print("=" * 20 + "Full response" + "=" * 20 + "\n")
# print(answer_content)
import OpenAI from "openai";
import { readFileSync } from 'fs';

const openai = new OpenAI({
    // If the environment variable is not set, replace the line below with your Model Studio API key: apiKey: "sk-xxx"
    apiKey: process.env.DASHSCOPE_API_KEY,
    baseURL: "https://dashscope.aliyuncs.com/compatible-mode/v1"
});

// Read a local image and encode it as Base64
const encodeImage = (imagePath) => {
    const imageFile = readFileSync(imagePath);
    return imageFile.toString('base64');
};

const base64Image1 = encodeImage("football1.jpg");
const base64Image2 = encodeImage("football2.jpg");
const base64Image3 = encodeImage("football3.jpg");
const base64Image4 = encodeImage("football4.jpg");

let reasoningContent = '';
let answerContent = '';
let isAnswering = false;

let messages = [{
    role: "user",
    content: [
        {
            // When passing a list of images, the "type" field of the user message is "video".
            // The image/{format} prefix must match the actual image format; image/jpeg is used here for .jpg files.
            type: "video",
            video: [
                `data:image/jpeg;base64,${base64Image1}`,
                `data:image/jpeg;base64,${base64Image2}`,
                `data:image/jpeg;base64,${base64Image3}`,
                `data:image/jpeg;base64,${base64Image4}`
            ]
        },
        {
            type: "text",
            text: "Describe the specific process shown in this video"
        }
    ]
}];

async function main() {
    try {
        const stream = await openai.chat.completions.create({
            model: 'qvq-max',
            messages: messages,
            stream: true
        });

        console.log('\n' + '='.repeat(20) + 'Reasoning' + '='.repeat(20) + '\n');

        for await (const chunk of stream) {
            if (!chunk.choices?.length) {
                console.log('\nUsage:');
                console.log(chunk.usage);
                continue;
            }

            const delta = chunk.choices[0].delta;
            // Handle the reasoning process
            if (delta.reasoning_content) {
                process.stdout.write(delta.reasoning_content);
                reasoningContent += delta.reasoning_content;
            }
            // Handle the final response
            else if (delta.content) {
                if (!isAnswering) {
                    console.log('\n' + '='.repeat(20) + 'Full response' + '='.repeat(20) + '\n');
                    isAnswering = true;
                }
                process.stdout.write(delta.content);
                answerContent += delta.content;
            }
        }
    } catch (error) {
        console.error('Error:', error);
    }
}

main();
import os
import dashscope

# Replace these with the absolute paths to your local images (file:// URLs require absolute paths)
local_path1 = "football1.jpg"
local_path2 = "football2.jpg"
local_path3 = "football3.jpg"
local_path4 = "football4.jpg"
image_path1 = f"file://{local_path1}"
image_path2 = f"file://{local_path2}"
image_path3 = f"file://{local_path3}"
image_path4 = f"file://{local_path4}"

messages = [
    {"role": "user",
     "content": [
         {"video": [image_path1, image_path2, image_path3, image_path4]},
         {"text": "Describe the specific process shown in this video"}]}
]
response = dashscope.MultiModalConversation.call(
    # If the environment variable is not set, replace the line below with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    model='qvq-max',
    messages=messages,
    stream=True
)

reasoning_content = ""   # full reasoning process
answer_content = ""      # full response
is_answering = False     # whether the reasoning phase has ended and the response has started

print("=" * 20 + "Reasoning" + "=" * 20)
for chunk in response:
    message = chunk.output.choices[0].message
    reasoning_content_chunk = message.get("reasoning_content", None)
    # Skip chunks where both the reasoning and the response are empty
    if message.content == [] and reasoning_content_chunk == "":
        pass
    else:
        # Current chunk belongs to the reasoning process
        if reasoning_content_chunk is not None and message.content == []:
            print(message.reasoning_content, end="")
            reasoning_content += message.reasoning_content
        # Current chunk belongs to the response
        elif message.content != []:
            if not is_answering:
                print("\n" + "=" * 20 + "Full response" + "=" * 20)
                is_answering = True
            print(message.content[0]["text"], end="")
            answer_content += message.content[0]["text"]

# To print the full reasoning process and the full response, uncomment the lines below
# print("=" * 20 + "Full reasoning" + "=" * 20 + "\n")
# print(reasoning_content)
# print("=" * 20 + "Full response" + "=" * 20 + "\n")
# print(answer_content)
// Requires DashScope Java SDK version >= 2.19.0
import java.util.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import io.reactivex.Flowable;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.exception.InputRequiredException;

public class Main {
    private static final Logger logger = LoggerFactory.getLogger(Main.class);
    private static StringBuilder reasoningContent = new StringBuilder();
    private static StringBuilder finalContent = new StringBuilder();
    private static boolean isFirstPrint = true;

    private static void handleGenerationResult(MultiModalConversationResult message) {
        String re = message.getOutput().getChoices().get(0).getMessage().getReasoningContent();
        String reasoning = Objects.isNull(re) ? "" : re; // default to empty string
        List<Map<String, Object>> content = message.getOutput().getChoices().get(0).getMessage().getContent();
        if (!reasoning.isEmpty()) {
            reasoningContent.append(reasoning);
            if (isFirstPrint) {
                System.out.println("====================Reasoning====================");
                isFirstPrint = false;
            }
            System.out.print(reasoning);
        }
        if (Objects.nonNull(content) && !content.isEmpty()) {
            Object text = content.get(0).get("text");
            finalContent.append(text);
            if (!isFirstPrint) {
                System.out.println("\n====================Full response====================");
                isFirstPrint = true;
            }
            System.out.print(text);
        }
    }

    public static MultiModalConversationParam buildMultiModalConversationParam(MultiModalMessage msg) {
        return MultiModalConversationParam.builder()
                // If the environment variable is not set, replace the line below with your Model Studio API key: .apiKey("sk-xxx")
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                // qvq-max is used here as an example; substitute another model name as needed
                .model("qvq-max")
                .messages(Arrays.asList(msg))
                .incrementalOutput(true)
                .build();
    }

    public static void streamCallWithMessage(MultiModalConversation conv, MultiModalMessage msg)
            throws NoApiKeyException, ApiException, InputRequiredException, UploadFileException {
        MultiModalConversationParam param = buildMultiModalConversationParam(msg);
        Flowable<MultiModalConversationResult> result = conv.streamCall(param);
        result.blockingForEach(message -> handleGenerationResult(message));
    }

    public static void main(String[] args) {
        try {
            MultiModalConversation conv = new MultiModalConversation();
            // Replace xxx/footballN.jpg with the absolute paths to your local images
            String localPath1 = "xxx/football1.jpg";
            String localPath2 = "xxx/football2.jpg";
            String localPath3 = "xxx/football3.jpg";
            String localPath4 = "xxx/football4.jpg";
            String filePath1 = "file://" + localPath1;
            String filePath2 = "file://" + localPath2;
            String filePath3 = "file://" + localPath3;
            String filePath4 = "file://" + localPath4;
            MultiModalMessage userMessage = MultiModalMessage.builder()
                    .role(Role.USER.getValue())
                    .content(Arrays.asList(
                            Collections.singletonMap("video", Arrays.asList(
                                    filePath1, filePath2, filePath3, filePath4)),
                            Collections.singletonMap("text", "Describe the specific process shown in this video")))
                    .build();
            streamCallWithMessage(conv, userMessage);
            // Print the final aggregated results
            // if (reasoningContent.length() > 0) {
            //     System.out.println("\n====================Full response====================");
            //     System.out.println(finalContent.toString());
            // }
        } catch (ApiException | NoApiKeyException | UploadFileException | InputRequiredException e) {
            logger.error("An exception occurred: {}", e.getMessage());
        }
        System.exit(0);
    }
}
Usage notes
Supported images
The image formats supported by the model are listed in the table below. Note that when you pass a local image with the OpenAI SDK, you must set image/{format} in the code to the Content Type value that matches the actual image format.
| Image format | File extension | Content Type |
| --- | --- | --- |
| BMP | .bmp | image/bmp |
| DIB | .dib | image/bmp |
| ICNS | .icns | image/icns |
| ICO | .ico | image/x-icon |
| JPEG | .jfif, .jpe, .jpeg, .jpg | image/jpeg |
| JPEG2000 | .j2c, .j2k, .jp2, .jpc, .jpf, .jpx | image/jp2 |
| PNG | .apng, .png | image/png |
| SGI | .bw, .rgb, .rgba, .sgi | image/sgi |
| TIFF | .tif, .tiff | image/tiff |
| WEBP | .webp | image/webp |
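Rather than hand-picking the prefix, you can derive the data-URL prefix from the file extension using the table above. A minimal sketch (the mapping covers only part of the table; the helper name is ours):

import base64
from pathlib import Path

# Content Type by extension, per the table above (subset shown)
CONTENT_TYPES = {
    ".bmp": "image/bmp", ".png": "image/png", ".apng": "image/png",
    ".jpg": "image/jpeg", ".jpeg": "image/jpeg", ".jfif": "image/jpeg",
    ".webp": "image/webp", ".tif": "image/tiff", ".tiff": "image/tiff",
}

def to_data_url(image_path: str) -> str:
    """Base64-encode an image and prepend the matching data-URL prefix."""
    suffix = Path(image_path).suffix.lower()
    content_type = CONTENT_TYPES[suffix]  # KeyError means an unsupported format
    data = base64.b64encode(Path(image_path).read_bytes()).decode("utf-8")
    return f"data:{content_type};base64,{data}"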
Image size limits
A single image file must not exceed 10 MB.
The width and height of an image must both be greater than 10 pixels, and the aspect ratio must not exceed 200:1 or 1:200.
There is no limit on the total pixel count of a single image, because the model scales the image during preprocessing before understanding it. Oversized images do not improve understanding. The recommendation is as follows:
For the qvq-max, qvq-max-latest, and qvq-max-2025-03-25 models, a single input image should contain no more than 805952 pixels.
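A quick pre-flight check against these limits might look like the following sketch (it assumes Pillow is installed; the 805952-pixel figure is the recommendation above, not a hard limit):

import os
from PIL import Image  # pip install pillow

MAX_FILE_BYTES = 10 * 1024 * 1024  # 10 MB file-size limit
RECOMMENDED_PIXELS = 805952        # recommended ceiling for qvq-max

def check_image(path: str) -> None:
    if os.path.getsize(path) > MAX_FILE_BYTES:
        raise ValueError("image file exceeds 10 MB")
    with Image.open(path) as im:
        w, h = im.size
    if w <= 10 or h <= 10:
        raise ValueError("width and height must both be greater than 10 pixels")
    if w / h > 200 or h / w > 200:
        raise ValueError("aspect ratio must not exceed 200:1 or 1:200")
    if w * h > RECOMMENDED_PIXELS:
        print(f"note: {w}x{h} exceeds the recommended {RECOMMENDED_PIXELS} pixels; "
              "the API will downscale it")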
Image count limits
For multi-image input, the number of images is limited by the model's combined image-and-text token ceiling (that is, its maximum input): the total token count of all images must stay below the model's maximum input.
For example: qvq-max has a maximum input of 98,304 tokens, and each image defaults to a 1,280-token cap; with the DashScope API you can set the vl_high_resolution_images parameter to raise the cap to 16,384. If every input image is 1280 × 1280 pixels, the limits work out as follows (the arithmetic is sketched after the table):

| Per-image token cap | Resized width × height | Tokens per image | Max number of images |
| --- | --- | --- | --- |
| 1280 (default) | 980 × 980 | 1227 | 80 |
| 16384 | 1288 × 1288 | 2118 | 46 |
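The last column is simply the total input budget divided by the per-image token count. A quick check of the table's arithmetic, assuming 28 × 28-pixel patches at one token each plus two extra tokens per image (an assumption on our part, but consistent with the numbers above):

MAX_INPUT_TOKENS = 98304  # qvq-max maximum input

def image_tokens(width: int, height: int) -> int:
    # One token per 28x28 patch, plus 2 extra tokens per image (assumed).
    return (width // 28) * (height // 28) + 2

for w, h in [(980, 980), (1288, 1288)]:
    tokens = image_tokens(w, h)
    print(f"{w}x{h}: {tokens} tokens per image, "
          f"up to {MAX_INPUT_TOKENS // tokens} images")
# 980x980: 1227 tokens per image, up to 80 images
# 1288x1288: 2118 tokens per image, up to 46 images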
API reference
For the input and output parameters of the Qwen QVQ models, see Text Generation - Qwen.
Error codes
If a model call fails and returns an error message, see Error messages to resolve the issue.