Quick start

Qwen-VL (Tongyi Qianwen VL)

Note

Supported domain / task: aigc

Qwen-VL, Tongyi Qianwen's open-source vision-understanding model, received a major update on December 1, 2023. The update not only substantially improves core capabilities such as general OCR, visual reasoning, and Chinese text understanding, but also enables the model to handle images of various resolutions and formats, and even to solve problems straight from a picture.

The upgraded Qwen-VL models (qwen-vl-plus/qwen-vl-max) have several key features:

‒ Greatly enhanced processing of text in images, making the model a handy productivity assistant that extracts, organizes, and summarizes textual information with ease.

‒ A wider range of supported resolutions: images of any resolution or aspect ratio can be processed, and both large and long images are read clearly.

‒ Stronger visual reasoning and decision-making, well suited to building visual agents and further extending what large-model agents can do.

‒ Upgraded ability to solve problems from images: snap a photo of an exercise and send it to Qwen-VL, and the model walks you through the solution step by step.

Quick start

Prerequisites

The examples below assume that you have already obtained a DashScope API-KEY and installed the DashScope SDK for Python or Java.

Sample code

The following examples show how to call the Qwen-VL API to respond to a user instruction.

Note

You must replace YOUR_DASHSCOPE_API_KEY in the examples with your own API-KEY before the code will run.

Set the API-KEY

export DASHSCOPE_API_KEY=YOUR_DASHSCOPE_API_KEY
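
If exporting an environment variable is inconvenient, the DashScope Python SDK can also take the key directly in code. The snippet below is a minimal sketch; it assumes the SDK's dashscope.api_key attribute as an alternative way to supply the key.

import dashscope

# Assumption: assigning dashscope.api_key configures the key for subsequent calls,
# as an alternative to the DASHSCOPE_API_KEY environment variable.
dashscope.api_key = "YOUR_DASHSCOPE_API_KEY"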
Python example

from http import HTTPStatus
import dashscope


def simple_multimodal_conversation_call():
    """Simple single round multimodal conversation call.
    """
    messages = [
        {
            "role": "user",
            "content": [
                {"image": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"},
                {"text": "这是什么?"}
            ]
        }
    ]
    response = dashscope.MultiModalConversation.call(model='qwen-vl-plus',
                                                     messages=messages)
    # A status_code of HTTPStatus.OK indicates success; otherwise the request
    # failed, and the error code and message are available in response.code
    # and response.message.
    if response.status_code == HTTPStatus.OK:
        print(response)
    else:
        print(response.code)  # The error code.
        print(response.message)  # The error message.


if __name__ == '__main__':
    simple_multimodal_conversation_call()
Java example

// Copyright (c) Alibaba, Inc. and its affiliates.

import java.util.Arrays;
import java.util.Collections;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;

public class Main {
    public static void simpleMultiModalConversationCall()
            throws ApiException, NoApiKeyException, UploadFileException {
        MultiModalConversation conv = new MultiModalConversation();
        MultiModalMessage userMessage = MultiModalMessage.builder()
                .role(Role.USER.getValue())
                .content(Arrays.asList(
                        Collections.singletonMap("image", "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"),
                        Collections.singletonMap("text", "这是什么?")))  // "What is this?"
                .build();
        MultiModalConversationParam param = MultiModalConversationParam.builder()
                .model(MultiModalConversation.Models.QWEN_VL_PLUS)
                .message(userMessage)
                .build();
        MultiModalConversationResult result = conv.call(param);
        System.out.println(result);
    }

    public static void main(String[] args) {
        try {
            simpleMultiModalConversationCall();
        } catch (ApiException | NoApiKeyException | UploadFileException e) {
            System.out.println(e.getMessage());
        }
        System.exit(0);
    }
}

A successful call of the Python example returns a result like the following. The assistant's reply in content is in Chinese and translates to "The picture shows a woman playing with a dog on a beach next to the sea, apparently at the seaside."

{
    "status_code": 200,
    "request_id": "cd828016-bcf5-94c7-82ed-5b45bf06886c",
    "code": "",
    "message": "",
    "output": {
        "text": null,
        "finish_reason": null,
        "choices": [
            {
                "finish_reason": null,
                "message": {
                    "role": "assistant",
                    "content": [
                        {
                            "text": "图中是一名女子在沙滩上和狗玩耍,旁边有海水,应该是在海边。"
                        }
                    ]
                }
            }
        ]
    },
    "usage": {
        "input_tokens": 1276,
        "output_tokens": 19
    }
}
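
The answer text sits at output.choices[0].message.content[0].text in this structure. The snippet below is a minimal sketch of pulling it out of the Python response; it assumes the response object supports dict-style indexing matching the JSON shown above, which this page does not spell out.

from http import HTTPStatus

import dashscope

# Same request as in the Python example above.
messages = [
    {
        "role": "user",
        "content": [
            {"image": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"},
            {"text": "这是什么?"}  # "What is this?"
        ]
    }
]
response = dashscope.MultiModalConversation.call(model='qwen-vl-plus',
                                                 messages=messages)

if response.status_code == HTTPStatus.OK:
    # Walk the structure shown in the sample result:
    # output -> choices[0] -> message -> content[0] -> text
    # Assumption: dict-style access mirrors the printed JSON representation.
    answer = response["output"]["choices"][0]["message"]["content"][0]["text"]
    print(answer)
else:
    print(response.code, response.message)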

Learn more

For detailed documentation on calling the Qwen-VL API, see the API details page.