文本生成

文本生成是一种人工智能技术,它基于深度学习算法,根据给定的提示信息创作出有逻辑、连贯的文本内容。

文本生成所需的输入(提示或Prompt)可以是简单的关键词、一句话概述或是更复杂的指令和上下文信息。文本生成模型通过分析大量现有数据,学习语言模式,广泛应用于以下领域:

  • 内容创作:自动生成新闻报道、商品介绍、短视频脚本等。

  • 客户服务:在聊天机器人中应用,提供24小时客服支持,解答常见问题。

  • 文本翻译:快速准确地将文本从一种语言翻译成另一种语言。

  • 摘要生成:为长篇文章、报告、客户邮件自动生成摘要。

  • 法律文档编写:自动化生成合同模板、法律意见书的基础框架。

文本生成示例:根据需求提取客户邮件中的关键信息

Prompt

请从这封客户反馈邮件中提取以下信息:1.客户信息 2.软件版本 3.错误代码 4.问题描述 5.操作系统 6.其他相关细节
------------
邮件正文:
主题:【报障求助】智能百炼助手启动故障
尊敬的百炼客服和技术支持团队,
我是来自通义科技的张伟,我的用户ID是12345。我正在使用的智能百炼助手软件版本为V3.1.8,在此向您报告一个严重影响工作进度的技术问题,望尽快协助解决。
自昨日下午起,我在启动智能百炼助手时遇到严重障碍,软件启动过程卡在初始化界面,错误码为ERR-2007,并弹出错误提示“数据库连接失败”,导致所有功能模块无法正常使用。这一情况已经持续至今,严重影响了我司的办公效率。
我目前使用的服务器操作系统是Windows 10版本1909,针对64位架构进行优化。在遇到问题后,我已经采取了若干初步故障排除措施,其中包括完全关闭并重新启动系统以试图清除可能的临时软件冲突或系统挂起状态,以及彻底卸载并重新安装相关软件,旨在消除可能存在的软件损坏或配置错误问题。然而,尽管进行了这些常规解决尝试,问题仍然持续存在,未见任何改善。
为了便于更深入地诊断问题,我已将详细的错误截图和系统日志文件作为附件一并提供。这些材料应当能精确展示错误发生时的软件状态、异常详情以及任何相关的错误代码,为快速定位并解决当前遇到的技术障碍提供了关键信息。
期待您的尽快回复!

模型输出

1.客户信息:通义科技张伟(ID12345)
2.软件版本:智能百炼助手V3.1.8
3.错误代码:ERR-2007
4.问题描述:智能百炼助手在启动过程中卡在初始化界面,出现“数据库连接失败”的错误提示,错误码为ERR-2007,导致所有功能模块无法使用,问题自昨日下午开始持续存在。
5.操作系统:Win10 1909 64位
6.其他相关细节:已尝试的解决措施包括重启系统和重装软件,但问题未解决。附件包含错误截图与日志文件。
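
下面给出一段示意代码,演示如何通过OpenAI兼容接口把上述提示词发送给通义千问模型完成信息提取(仅为示意:调用方式详见下文“开始使用”,假设您已配置DASHSCOPE_API_KEY环境变量,email_body为占位的邮件正文):

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

# email_body为待提取的邮件正文,此处仅作占位示意
email_body = "主题:【报障求助】智能百炼助手启动故障……(完整邮件正文)"
prompt = ("请从这封客户反馈邮件中提取以下信息:1.客户信息 2.软件版本 3.错误代码 "
          "4.问题描述 5.操作系统 6.其他相关细节\n------------\n邮件正文:\n" + email_body)

completion = client.chat.completions.create(
    model="qwen-plus",
    messages=[{"role": "user", "content": prompt}],
)
print(completion.choices[0].message.content)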

更多示例可以参考文本生成样例。

文本生成模型

百炼大模型服务平台支持通义千问商业版、通义千问开源版与一些知名第三方模型,详细的模型列表请参考文本生成模型列表。

通义千问商业版

说明

通义千问VL的体验需要您在百炼的模型广场开通模型服务。

通义千问商业版模型对比(免费额度有效期均为百炼开通后30天内):

  • 通义千问-Max
    特点:适合复杂任务,推理能力最强
    上下文长度:8k/30k Token(30k为-longcontext版本)
    输出长度:2k Token
    每千Token成本:0.04元(输入)、0.12元(输出)
    免费额度:100万Token

  • 通义千问-Plus
    特点:性能均衡,介于Max和Turbo之间
    上下文长度:约131k Token
    输出长度:8k Token
    每千Token成本:0.004元(输入)、0.012元(输出)
    免费额度:100万Token

  • 通义千问-Turbo
    特点:适合简单任务,速度快、成本低
    上下文长度:8k Token
    输出长度:1.5k Token
    每千Token成本:0.002元(输入)、0.006元(输出)
    免费额度:100万Token

  • Qwen-Long
    特点:支持长达千万字文档,且成本极低
    上下文长度:10000k Token
    输出长度:2k Token
    每千Token成本:0.0005元(输入)、0.002元(输出)
    免费额度:100万Token

  • 通义千问VL
    特点:可输入超百万像素和任意长宽比的图像
    上下文长度:32k Token(含图片)
    输出长度:2k Token
    每千Token成本:0.008元(输入)、0.02元(输出)
    免费额度:100万Token
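
结合上表的单价与返回结果中的Token用量,可以估算单次调用的费用。下面是一段示意代码(以通义千问-Turbo的价格为例,未考虑免费额度抵扣,usage数值为假设的示例值):

# 按“每千Token成本”估算一次调用费用(价格取自上表中的通义千问-Turbo)
input_price_per_1k = 0.002    # 元/千Token(输入)
output_price_per_1k = 0.006   # 元/千Token(输出)

# prompt_tokens与completion_tokens可从API返回结果的usage字段中读取
prompt_tokens, completion_tokens = 22, 17
cost = (prompt_tokens / 1000) * input_price_per_1k + (completion_tokens / 1000) * output_price_per_1k
print(f"本次调用约花费{cost:.6f}元")  # 约0.000146元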

通义千问开源版

说明

通义千问2-开源版、通义千问1-开源版的体验需要您在百炼的模型广场开通模型服务。

通义千问开源系列模型对比:

  • 通义千问2-开源版
    上下文长度:30k ~ 130k Token
    输出长度:6k Token
    参数规模:0.5B ~ 72B(B:Billion)

  • 通义千问1.5-开源版
    上下文长度:8k ~ 32k Token
    输出长度:2k ~ 8k Token
    参数规模:0.5B ~ 110B

  • 通义千问1-开源版
    上下文长度:8k ~ 32k Token
    输出长度:1.5k ~ 2k Token
    参数规模:1.8B ~ 72B

第三方模型

百炼提供了众多知名的第三方大语言模型,包括 Llama、百川、月之暗面、零一万物、ChatGLM 等。

完整模型列表请参考文本生成-第三方模型。

模型选型建议

  • 如果您暂时不确定哪个模型最适合,建议尝试使用通义千问-Max,因为它是目前阿里推出的最强大模型,能应对复杂业务场景。如果是简单任务场景,则可以尝试使用通义千问-Turbo,成本更低且响应速度更快。通义千问-Plus在效果、速度、成本方面相对均衡,介于 Max 和 Turbo 之间。三个模型都兼容OpenAI 调用方式,相关细节请参考如何通过OpenAI接口调用通义千问模型。

  • 如果您有明确的业务诉求,也可以选择更适合该场景的模型。例如:需要处理超长文档时可以选择Qwen-Long,需要输入图像等视觉信息时可以选择通义千问VL。

  • 您也可以结合具体任务充分体验和评测,对比模型表现后再做决定:

    • 使用模型体验功能,快速、直观地比较模型表现:选择多个文本生成模型,基于相同的输入对其能力进行横向比较。

    • 使用模型评测功能,系统地评估模型表现:通过不同的评测方法,帮助您全面衡量模型在具体任务上的效果。

开始使用

文本生成模型将接收的信息作为提示(Prompt),并返回一个根据提示信息生成的输出。百炼支持 OpenAI SDK、DashScope SDK、HTTP 接入方式。

本文以调用通义千问模型为例,介绍如何使用文本生成模型。OpenAI 兼容详细信息请参见OpenAI接口兼容,模型调用的完整参数列表参见通义千问 API 输入与输出参数。

消息类型

您通过API与大模型进行交互时的输入和输出也被称为消息(Message)。每条消息都属于一个角色(Role),角色包括系统(System)、用户(User)和助手(Assistant)。

  • 系统消息(System Message,也称为System Prompt):用于告知模型要扮演的角色或行为。例如,您可以让模型扮演一个严谨的科学家等。默认值是“You are a helpful assistant”。您也可以将此类指令放在用户消息中,但放在系统消息中会更有效。

  • 用户消息(User Message):您输入给模型的文本。

  • 助手消息(Assistant Message):模型的回复。您也可以预先填写助手消息,作为后续助手消息的示例。
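
下面的示意片段展示了一个同时包含三种角色的messages数组(其中的对话内容仅为示例):

# 一个包含系统、用户、助手三种角色的messages示例
# 系统消息用于设定模型行为,用户消息与助手消息按对话顺序排列
messages = [
    {"role": "system", "content": "你是一位严谨的科学家,回答问题时请给出依据。"},
    {"role": "user", "content": "你好"},
    # 预先填写的助手消息,可以作为模型后续回复风格的示例
    {"role": "assistant", "content": "你好,请问有什么科学问题需要我解答?"},
    {"role": "user", "content": "为什么天空是蓝色的?"},
]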

单轮对话

OpenAI兼容

您可以通过OpenAI SDK或OpenAI兼容的HTTP方式调用通义千问模型,体验单轮对话的功能。

Python

示例代码

from openai import OpenAI
import os

def get_response():
    client = OpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"), # 如果您没有配置环境变量,请在此处用您的API Key进行替换
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # 填写DashScope服务的base_url
    )
    completion = client.chat.completions.create(
        model="qwen-turbo",
        messages=[
            {'role': 'system', 'content': 'You are a helpful assistant.'},
            {'role': 'user', 'content': '你是谁?'}]
        )
    print(completion.model_dump_json())

if __name__ == '__main__':
    get_response()

返回结果

{
  "id": "chatcmpl-ee338a7c-b5b3-9139-a726-b7b749d6b49d",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "message": {
        "content": "我是阿里云开发的一款超大规模语言模型,我叫通义千问。",
        "refusal": null,
        "role": "assistant",
        "function_call": null,
        "tool_calls": null
      }
    }
  ],
  "created": 1725005215,
  "model": "qwen-turbo",
  "object": "chat.completion",
  "service_tier": null,
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 17,
    "prompt_tokens": 22,
    "total_tokens": 39
  }
}
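
上面的示例打印了完整的JSON结果。如果只需要获取回复文本,可以在get_response函数中把print(completion.model_dump_json())替换为下面的示意写法:

# 回复文本位于choices[0].message.content中
print(completion.choices[0].message.content)
# 输出示例:我是阿里云开发的一款超大规模语言模型,我叫通义千问。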

curl

示例代码

curl -X POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
    "model": "qwen-turbo",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user", 
            "content": "你是谁?"
        }
    ]
}'

返回结果

{
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "我是阿里云开发的一款超大规模语言模型,我叫通义千问。"
            },
            "finish_reason": "stop",
            "index": 0,
            "logprobs": null
        }
    ],
    "object": "chat.completion",
    "usage": {
        "prompt_tokens": 22,
        "completion_tokens": 17,
        "total_tokens": 39
    },
    "created": 1726127645,
    "system_fingerprint": null,
    "model": "qwen-turbo",
    "id": "chatcmpl-81951b98-28b8-9659-ab07-cd30d25600e7"
}

DashScope

您可以通过DashScope SDK或HTTP方式调用通义千问模型,体验单轮对话的功能。

Python

示例代码

from http import HTTPStatus
# 建议dashscope SDK 的版本 >= 1.14.0
from dashscope import Generation


def call_with_messages():
    messages = [{'role': 'system', 'content': 'You are a helpful assistant.'},
                {'role': 'user', 'content': '你是谁?'}]
    response = Generation.call(model="qwen-turbo",
                               messages=messages,
                               # 将输出设置为"message"格式
                               result_format='message')
    if response.status_code == HTTPStatus.OK:
        print(response)
    else:
        print('Request id: %s, Status code: %s, error code: %s, error message: %s' % (
            response.request_id, response.status_code,
            response.code, response.message
        ))


if __name__ == '__main__':
    call_with_messages()

返回结果

{
  "status_code": 200,
  "request_id": "902fee3b-f7f0-9a8c-96a1-6b4ea25af114",
  "code": "",
  "message": "",
  "output": {
    "text": null,
    "finish_reason": null,
    "choices": [
      {
        "finish_reason": "stop",
        "message": {
          "role": "assistant",
          "content": "我是阿里云开发的一款超大规模语言模型,我叫通义千问。"
        }
      }
    ]
  },
  "usage": {
    "input_tokens": 22,
    "output_tokens": 17,
    "total_tokens": 39
  }
}
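
如果只需要获取回复文本,可以在上面的call_with_messages函数中将print(response)替换为下面的示意写法:

# 回复文本位于output.choices[0].message.content中
print(response.output.choices[0].message.content)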

Java

示例代码

// Copyright (c) Alibaba, Inc. and its affiliates.
// 建议dashscope SDK的版本 >= 2.12.0
import java.util.Arrays;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.utils.JsonUtils;

public class Main {
    public static GenerationResult callWithMessage() throws ApiException, NoApiKeyException, InputRequiredException {
        Generation gen = new Generation();
        Message systemMsg = Message.builder()
                .role(Role.SYSTEM.getValue())
                .content("You are a helpful assistant.")
                .build();
        Message userMsg = Message.builder()
                .role(Role.USER.getValue())
                .content("你是谁?")
                .build();
        GenerationParam param = GenerationParam.builder()
                .model("qwen-turbo")
                .messages(Arrays.asList(systemMsg, userMsg))
                .resultFormat(GenerationParam.ResultFormat.MESSAGE)
                .build();
        return gen.call(param);
    }
    public static void main(String[] args) {
        try {
            GenerationResult result = callWithMessage();
            System.out.println(JsonUtils.toJson(result));
        } catch (ApiException | NoApiKeyException | InputRequiredException e) {
            // 使用日志框架记录异常信息
            System.err.println("An error occurred while calling the generation service: " + e.getMessage());
        }
        System.exit(0);
    }
}

返回结果

{
  "requestId": "86dd52a9-23ec-9804-8f82-85f4c7fd5114",
  "usage": {
    "input_tokens": 22,
    "output_tokens": 17,
    "total_tokens": 39
  },
  "output": {
    "choices": [
      {
        "finish_reason": "stop",
        "message": {
          "role": "assistant",
          "content": "我是阿里云开发的一款超大规模语言模型,我叫通义千问。"
        }
      }
    ]
  }
}

curl

示例代码

curl -X POST https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
    "model": "qwen-turbo",
    "input":{
        "messages":[      
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "你是谁?"
            }
        ]
    },
    "parameters": {
        "result_format": "message"
    }
}'

返回结果

{
    "output": {
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": "我是阿里云开发的一款超大规模语言模型,我叫通义千问。"
                }
            }
        ]
    },
    "usage": {
        "total_tokens": 39,
        "output_tokens": 17,
        "input_tokens": 22
    },
    "request_id": "0d74b97a-5b87-9a22-8cc3-00cb8b088756"
}

多轮对话

相比于单轮对话,多轮对话可以让大模型参考历史对话信息,更符合日常交流的场景。实现多轮对话的关键在于维护一个存放历史对话信息的数组,并将更新后的列表作为大模型的输入,从而使大模型可以参考历史对话信息进行回复。您可以将每一轮的对话历史添加到messages列表中,实现多轮对话的功能。多轮对话示例:

(图:多轮对话示例示意图)

OpenAI兼容

您可以通过OpenAI SDK或OpenAI兼容的HTTP方式调用通义千问模型,体验多轮对话的功能。

Python

import os
from openai import OpenAI


def get_response(messages):
    client = OpenAI(
        # 如果您没有配置环境变量,请在此处用您的API Key进行替换
        api_key=os.getenv("DASHSCOPE_API_KEY"),
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    )
    completion = client.chat.completions.create(model="qwen-plus", messages=messages)
    return completion


messages = [
    {
        "role": "system",
        "content": """你是一名百炼手机商店的店员,你负责给用户推荐手机。手机有两个参数:屏幕尺寸(包括6.1英寸、6.5英寸、6.7英寸)、分辨率(包括2K、4K)。
        你一次只能向用户提问一个参数。如果用户提供的信息不全,你需要反问他,让他提供没有提供的参数。如果参数收集完成,你要说:我已了解您的购买意向,请稍等。""",
    }
]
assistant_output = "欢迎光临百炼手机商店,您需要购买什么尺寸的手机呢?"
print(f"模型输出:{assistant_output}\n")
while "我已了解您的购买意向" not in assistant_output:
    user_input = input("请输入:")
    # 将用户问题信息添加到messages列表中
    messages.append({"role": "user", "content": user_input})
    assistant_output = get_response(messages).choices[0].message.content
    # 将大模型的回复信息添加到messages列表中
    messages.append({"role": "assistant", "content": assistant_output})
    print(f"模型输出:{assistant_output}")
    print("\n")

curl

curl -X POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
    "model": "qwen-max",
    "messages":[      
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "你好"
        },
        {
            "role": "assistant",
            "content": "你好啊,我是通义千问。"
        },
        {
            "role": "user",
            "content": "你有哪些技能?"
        }
    ]
}'

DashScope

您可以通过DashScope SDK或HTTP方式调用通义千问模型,体验多轮对话的功能。

Python

from dashscope import Generation


def get_response(messages):
    response = Generation.call(
        model="qwen-plus",
        messages=messages,
        # 将输出设置为"message"格式
        result_format="message",
    )
    return response


messages = [
    {
        "role": "system",
        "content": """你是一名百炼手机商店的店员,你负责给用户推荐手机。手机有两个参数:屏幕尺寸(包括6.1英寸、6.5英寸、6.7英寸)、分辨率(包括2K、4K)。
        你一次只能向用户提问一个参数。如果用户提供的信息不全,你需要反问他,让他提供没有提供的参数。如果参数收集完成,你要说:我已了解您的购买意向,请稍等。""",
    }
]

assistant_output = "欢迎光临百炼手机商店,您需要购买什么尺寸的手机呢?"
print(f"模型输出:{assistant_output}\n")
while "我已了解您的购买意向" not in assistant_output:
    user_input = input("请输入:")
    # 将用户问题信息添加到messages列表中
    messages.append({"role": "user", "content": user_input})
    assistant_output = get_response(messages).output.choices[0].message.content
    # 将大模型的回复信息添加到messages列表中
    messages.append({"role": "assistant", "content": assistant_output})
    print(f"模型输出:{assistant_output}")
    print("\n")

Java

import java.util.ArrayList;
import java.util.List;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import java.util.Scanner;

public class Main {

    public static GenerationParam createGenerationParam(List<Message> messages) {
        return GenerationParam.builder()
                .model("qwen-max")
                .messages(messages)
                .resultFormat(GenerationParam.ResultFormat.MESSAGE)
                .build();
    }

    public static GenerationResult callGenerationWithMessages(GenerationParam param) throws ApiException, NoApiKeyException, InputRequiredException {
        Generation gen = new Generation();
        return gen.call(param);
    }

    public static void main(String[] args) {
        try {
            List<Message> messages = new ArrayList<>();

            messages.add(createMessage(Role.SYSTEM, "You are a helpful assistant."));
            for (int i = 0; i < 3;i++) {
                Scanner scanner = new Scanner(System.in);
                System.out.print("请输入:");
                String userInput = scanner.nextLine();
                if ("exit".equalsIgnoreCase(userInput)) {
                    break;
                }
                messages.add(createMessage(Role.USER, userInput));
                GenerationParam param = createGenerationParam(messages);
                GenerationResult result = callGenerationWithMessages(param);
                System.out.println("模型输出:"+result.getOutput().getChoices().get(0).getMessage().getContent());
                messages.add(result.getOutput().getChoices().get(0).getMessage());
            }
        } catch (ApiException | NoApiKeyException | InputRequiredException e) {
            e.printStackTrace();
        }
        System.exit(0);
    }

    private static Message createMessage(Role role, String content) {
        return Message.builder().role(role.getValue()).content(content).build();
    }
}

curl

curl -X POST https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
    "model": "qwen-max",
    "input":{
        "messages":[      
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "你好"
            },
            {
                "role": "assistant",
                "content": "你好啊,我是通义千问。"
            },
            {
                "role": "user",
                "content": "你有哪些技能?"
            }
        ]
    }
}'

流式输出

大模型收到输入后并不是一次性生成最终结果,而是逐步地生成中间结果,最终结果由中间结果拼接而成。使用非流式输出方式需要等待模型生成结束后再将生成的中间结果拼接后返回,而流式输出可以实时地将中间结果返回,您可以在模型进行输出的同时进行阅读,减少等待模型回复的时间。

OpenAI兼容

您可以通过OpenAI SDK或OpenAI兼容的HTTP方式调用通义千问模型,体验流式输出的功能。

Python

import os
from openai import OpenAI

def get_response():
    client = OpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"),
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    )
    completion = client.chat.completions.create(
        model="qwen-max",
        messages=[{'role': 'system', 'content': 'You are a helpful assistant.'},
                  {'role': 'user', 'content': '你是谁?'}],
        stream=True,
        # 可选,配置以后会在流式输出的最后一行展示Token使用信息
        stream_options={"include_usage": True}
        )
    for chunk in completion:
        print(chunk.model_dump_json())

if __name__ == '__main__':
    get_response()

返回结果

{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"","function_call":null,"role":"assistant","tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1721099636,"model":"qwen-max","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"我是","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1721099636,"model":"qwen-max","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"通","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1721099636,"model":"qwen-max","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"义","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1721099636,"model":"qwen-max","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"千问,由阿里","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1721099636,"model":"qwen-max","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"云开发的AI助手。我被","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1721099636,"model":"qwen-max","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"设计用来回答各种问题、提供信息","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1721099636,"model":"qwen-max","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"和与用户进行对话。有什么我可以","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1721099636,"model":"qwen-max","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[{"delta":{"content":"帮助你的吗?","function_call":null,"role":null,"tool_calls":null},"finish_reason":"stop","index":0,"logprobs":null}],"created":1721099636,"model":"qwen-max","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-xxx","choices":[],"created":1721099636,"model":"qwen-max","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":{"completion_tokens":36,"prompt_tokens":22,"total_tokens":58}}

curl

curl -X POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
    "model": "qwen-max",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user", 
            "content": "你是谁?"
        }
    ],
    "stream":true,
    "stream_options":{
        "include_usage":true
    }
}'

返回结果

data: {"choices":[{"delta":{"content":"","role":"assistant"},"index":0,"logprobs":null,"finish_reason":null}],"object":"chat.completion.chunk","usage":null,"created":1726132850,"system_fingerprint":null,"model":"qwen-max","id":"chatcmpl-428b414f-fdd4-94c6-b179-8f576ad653a8"}

data: {"choices":[{"finish_reason":null,"delta":{"content":"我是"},"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1726132850,"system_fingerprint":null,"model":"qwen-max","id":"chatcmpl-428b414f-fdd4-94c6-b179-8f576ad653a8"}

data: {"choices":[{"delta":{"content":"来自"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1726132850,"system_fingerprint":null,"model":"qwen-max","id":"chatcmpl-428b414f-fdd4-94c6-b179-8f576ad653a8"}

data: {"choices":[{"delta":{"content":"阿里"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1726132850,"system_fingerprint":null,"model":"qwen-max","id":"chatcmpl-428b414f-fdd4-94c6-b179-8f576ad653a8"}

data: {"choices":[{"delta":{"content":"云的超大规模语言"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1726132850,"system_fingerprint":null,"model":"qwen-max","id":"chatcmpl-428b414f-fdd4-94c6-b179-8f576ad653a8"}

data: {"choices":[{"delta":{"content":"模型,我叫通义千问"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1726132850,"system_fingerprint":null,"model":"qwen-max","id":"chatcmpl-428b414f-fdd4-94c6-b179-8f576ad653a8"}

data: {"choices":[{"delta":{"content":"。"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1726132850,"system_fingerprint":null,"model":"qwen-max","id":"chatcmpl-428b414f-fdd4-94c6-b179-8f576ad653a8"}

data: {"choices":[{"finish_reason":"stop","delta":{"content":""},"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1726132850,"system_fingerprint":null,"model":"qwen-max","id":"chatcmpl-428b414f-fdd4-94c6-b179-8f576ad653a8"}

data: {"choices":[],"object":"chat.completion.chunk","usage":{"prompt_tokens":22,"completion_tokens":17,"total_tokens":39},"created":1726132850,"system_fingerprint":null,"model":"qwen-max","id":"chatcmpl-428b414f-fdd4-94c6-b179-8f576ad653a8"}

data: [DONE]

DashScope

您可以通过DashScope SDK或HTTP方式调用通义千问模型,体验流式输出的功能。

Python

from http import HTTPStatus
from dashscope import Generation


def call_with_stream():
    messages = [
        {'role':'system','content':'you are a helpful assistant'},
        {'role': 'user','content': '你是谁?'}
        ]
    responses = Generation.call(
        model="qwen-max",
        messages=messages,
        # 设置输出为'message'格式
        result_format='message',
        # 设置输出方式为流式输出
        stream=True,
        # 增量式流式输出
        incremental_output=True
        )
    full_content = ""
    for response in responses:
        if response.status_code == HTTPStatus.OK:
            print(response)
            full_content += response.output.choices[0].message.content
        else:
            print('Request id: %s, Status code: %s, error code: %s, error message: %s' % (
                response.request_id, response.status_code,
                response.code, response.message
            ))
    print(f"Full content:{full_content}")

if __name__ == '__main__':
    call_with_stream()

返回结果

{"status_code": 200, "request_id": "xxx", "code": "", "message": "", "output": {"text": null, "finish_reason": null, "choices": [{"finish_reason": "null", "message": {"role": "assistant", "content": "我是"}}]}, "usage": {"input_tokens": 21, "output_tokens": 1, "total_tokens": 22}}
{"status_code": 200, "request_id": "xxx", "code": "", "message": "", "output": {"text": null, "finish_reason": null, "choices": [{"finish_reason": "null", "message": {"role": "assistant", "content": "通"}}]}, "usage": {"input_tokens": 21, "output_tokens": 2, "total_tokens": 23}}
{"status_code": 200, "request_id": "xxx", "code": "", "message": "", "output": {"text": null, "finish_reason": null, "choices": [{"finish_reason": "null", "message": {"role": "assistant", "content": "义"}}]}, "usage": {"input_tokens": 21, "output_tokens": 3, "total_tokens": 24}}
{"status_code": 200, "request_id": "xxx", "code": "", "message": "", "output": {"text": null, "finish_reason": null, "choices": [{"finish_reason": "null", "message": {"role": "assistant", "content": "千问,由阿里"}}]}, "usage": {"input_tokens": 21, "output_tokens": 8, "total_tokens": 29}}
{"status_code": 200, "request_id": "xxx", "code": "", "message": "", "output": {"text": null, "finish_reason": null, "choices": [{"finish_reason": "null", "message": {"role": "assistant", "content": "云开发的AI助手。我被"}}]}, "usage": {"input_tokens": 21, "output_tokens": 16, "total_tokens": 37}}
{"status_code": 200, "request_id": "xxx", "code": "", "message": "", "output": {"text": null, "finish_reason": null, "choices": [{"finish_reason": "null", "message": {"role": "assistant", "content": "设计用来回答各种问题、提供信息"}}]}, "usage": {"input_tokens": 21, "output_tokens": 24, "total_tokens": 45}}
{"status_code": 200, "request_id": "xxx", "code": "", "message": "", "output": {"text": null, "finish_reason": null, "choices": [{"finish_reason": "null", "message": {"role": "assistant", "content": "和与用户进行对话。有什么我可以"}}]}, "usage": {"input_tokens": 21, "output_tokens": 32, "total_tokens": 53}}
{"status_code": 200, "request_id": "xxx", "code": "", "message": "", "output": {"text": null, "finish_reason": null, "choices": [{"finish_reason": "stop", "message": {"role": "assistant", "content": "帮助你的吗?"}}]}, "usage": {"input_tokens": 21, "output_tokens": 36, "total_tokens": 57}}

Java

// Copyright (c) Alibaba, Inc. and its affiliates.

import java.util.Arrays;
import java.util.concurrent.Semaphore;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.ResultCallback;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.utils.JsonUtils;
import io.reactivex.Flowable;

public class Main {

    private static final Logger logger = LoggerFactory.getLogger(Main.class);

    private static void handleGenerationResult(GenerationResult message, StringBuilder fullContent) {
        fullContent.append(message.getOutput().getChoices().get(0).getMessage().getContent());
        logger.info("Received message: {}", JsonUtils.toJson(message));
    }

    public static void streamCallWithMessage(Generation gen, Message userMsg)
            throws NoApiKeyException, ApiException, InputRequiredException {
        GenerationParam param = buildGenerationParam(userMsg);
        Flowable<GenerationResult> result = gen.streamCall(param);
        StringBuilder fullContent = new StringBuilder();

        result.blockingForEach(message -> handleGenerationResult(message, fullContent));

        logger.info("Full content: \n{}", fullContent.toString());
    }

    public static void streamCallWithCallback(Generation gen, Message userMsg)
            throws NoApiKeyException, ApiException, InputRequiredException, InterruptedException {
        GenerationParam param = buildGenerationParam(userMsg);
        Semaphore semaphore = new Semaphore(0);
        StringBuilder fullContent = new StringBuilder();

        gen.streamCall(param, new ResultCallback<GenerationResult>() {
            @Override
            public void onEvent(GenerationResult message) {
                handleGenerationResult(message, fullContent);
            }

            @Override
            public void onError(Exception err) {
                logger.error("Exception occurred: {}", err.getMessage());
                semaphore.release();
            }

            @Override
            public void onComplete() {
                logger.info("Completed");
                semaphore.release();
            }
        });

        semaphore.acquire();
        logger.info("Full content: \n{}", fullContent.toString());
    }

    private static GenerationParam buildGenerationParam(Message userMsg) {
        return GenerationParam.builder()
                .model("qwen-max")
                .messages(Arrays.asList(userMsg))
                .resultFormat(GenerationParam.ResultFormat.MESSAGE)
                .incrementalOutput(true)
                .build();
    }

    public static void main(String[] args) {
        try {
            Generation gen = new Generation();
            Message userMsg = Message.builder().role(Role.USER.getValue()).content("如何做西红柿炖牛腩?").build();

            streamCallWithMessage(gen, userMsg);
            streamCallWithCallback(gen, userMsg);
        } catch (ApiException | NoApiKeyException | InputRequiredException | InterruptedException e) {
            logger.error("An exception occurred: {}", e.getMessage());
        }
    }
}

返回结果

{"requestId":"xxx","usage":{"input_tokens":11,"output_tokens":1,"total_tokens":12},"output":{"choices":[{"finish_reason":"null","message":{"role":"assistant","content":"我是"}}]}}
{"requestId":"xxx","usage":{"input_tokens":11,"output_tokens":2,"total_tokens":13},"output":{"choices":[{"finish_reason":"null","message":{"role":"assistant","content":"通"}}]}}
{"requestId":"xxx","usage":{"input_tokens":11,"output_tokens":3,"total_tokens":14},"output":{"choices":[{"finish_reason":"null","message":{"role":"assistant","content":"义"}}]}}
{"requestId":"xxx","usage":{"input_tokens":11,"output_tokens":8,"total_tokens":19},"output":{"choices":[{"finish_reason":"null","message":{"role":"assistant","content":"千问,由阿里"}}]}}
{"requestId":"xxx","usage":{"input_tokens":11,"output_tokens":16,"total_tokens":27},"output":{"choices":[{"finish_reason":"null","message":{"role":"assistant","content":"云开发的AI助手。我被"}}]}}
{"requestId":"xxx","usage":{"input_tokens":11,"output_tokens":24,"total_tokens":35},"output":{"choices":[{"finish_reason":"null","message":{"role":"assistant","content":"设计用来回答各种问题、提供信息"}}]}}
{"requestId":"xxx","usage":{"input_tokens":11,"output_tokens":32,"total_tokens":43},"output":{"choices":[{"finish_reason":"null","message":{"role":"assistant","content":"和与用户进行对话。有什么我可以"}}]}}
{"requestId":"xxx","usage":{"input_tokens":11,"output_tokens":36,"total_tokens":47},"output":{"choices":[{"finish_reason":"stop","message":{"role":"assistant","content":"帮助你的吗?"}}]}}

curl

curl -X POST https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-H "X-DashScope-SSE: enable" \
-d '{
    "model": "qwen-max",
    "input":{
        "messages":[      
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "你是谁?"
            }
        ]
    },
    "parameters": {
        "result_format": "message",
        "incremental_output":true
    }
}'

返回结果

id:1
event:result
:HTTP_STATUS/200
data:{"output":{"choices":[{"message":{"content":"我是","role":"assistant"},"finish_reason":"null"}]},"usage":{"total_tokens":23,"input_tokens":22,"output_tokens":1},"request_id":"xxx"}

id:2
event:result
:HTTP_STATUS/200
data:{"output":{"choices":[{"message":{"content":"通","role":"assistant"},"finish_reason":"null"}]},"usage":{"total_tokens":24,"input_tokens":22,"output_tokens":2},"request_id":"xxx"}

id:3
event:result
:HTTP_STATUS/200
data:{"output":{"choices":[{"message":{"content":"义","role":"assistant"},"finish_reason":"null"}]},"usage":{"total_tokens":25,"input_tokens":22,"output_tokens":3},"request_id":"xxx"}

id:4
event:result
:HTTP_STATUS/200
data:{"output":{"choices":[{"message":{"content":"千问,由阿里","role":"assistant"},"finish_reason":"null"}]},"usage":{"total_tokens":30,"input_tokens":22,"output_tokens":8},"request_id":"xxx"}

id:5
event:result
:HTTP_STATUS/200
data:{"output":{"choices":[{"message":{"content":"云开发的AI助手。我被","role":"assistant"},"finish_reason":"null"}]},"usage":{"total_tokens":38,"input_tokens":22,"output_tokens":16},"request_id":"xxx"}

id:6
event:result
:HTTP_STATUS/200
data:{"output":{"choices":[{"message":{"content":"设计用来回答各种问题、提供信息","role":"assistant"},"finish_reason":"null"}]},"usage":{"total_tokens":46,"input_tokens":22,"output_tokens":24},"request_id":"xxx"}

id:7
event:result
:HTTP_STATUS/200
data:{"output":{"choices":[{"message":{"content":"和与用户进行对话。有什么我可以","role":"assistant"},"finish_reason":"null"}]},"usage":{"total_tokens":54,"input_tokens":22,"output_tokens":32},"request_id":"xxx"}

id:8
event:result
:HTTP_STATUS/200
data:{"output":{"choices":[{"message":{"content":"帮助你的吗?","role":"assistant"},"finish_reason":"stop"}]},"usage":{"total_tokens":58,"input_tokens":22,"output_tokens":36},"request_id":"xxx"}

Function Call(调用外部工具)

大模型在面对实时性问题、私域知识型问题或数学计算等问题时可能效果不佳。您可以使用function call功能,通过调用外部工具来提升模型的输出效果。您可以在调用大模型时,通过tools参数传入工具的名称、描述、入参等信息。Function Call的工作流程示意图如下所示:

(图:Function Call工作流程示意图)

Function Call的使用涉及到参数解析功能,因此对大模型的响应质量要求较高,推荐您优先使用qwen-plus或qwen-max模型。

说明

Function Call信息暂时不支持增量输出。
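
工具执行完成后,需要把结果作为一条角色为tool的消息追加到messages中,再发起下一轮模型调用。下面是该消息结构的示意(字段取值仅为示例,完整流程请参考下方各调用方式的示例代码):

# 第一轮调用返回tool_calls后,先执行对应的工具函数,
# 再把工具结果以tool角色追加到messages中,发起第二轮调用生成最终回复
tool_message = {
    "role": "tool",
    "name": "get_current_weather",    # 与tool_calls中返回的函数名对应
    "content": "杭州市今天是晴天。"     # 工具函数的实际执行结果
}
messages.append(tool_message)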

OpenAI兼容

您可以通过OpenAI SDK或OpenAI兼容的HTTP方式调用通义千问模型,体验Function Call的功能。

Python

示例代码

from openai import OpenAI
from datetime import datetime
import json
import os

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"), # 如果您没有配置环境变量,请在此处用您的API Key进行替换
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # 填写DashScope SDK的base_url
)

# 定义工具列表,模型在选择使用哪个工具时会参考工具的name和description
tools = [
    # 工具1 获取当前时刻的时间
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "当你想知道现在的时间时非常有用。",
            # 因为获取当前时间无需输入参数,因此parameters为空字典
            "parameters": {}
        }
    },  
    # 工具2 获取指定城市的天气
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "当你想查询指定城市的天气时非常有用。",
            "parameters": {  
                "type": "object",
                "properties": {
                    # 查询天气时需要提供位置,因此参数设置为location
                    "location": {
                        "type": "string",
                        "description": "城市或县区,比如北京市、杭州市、余杭区等。"
                    }
                },
                # required声明必填参数,应位于parameters之内
                "required": [
                    "location"
                ]
            }
        }
    }
]

# 模拟天气查询工具。返回结果示例:“北京今天是雨天。”
def get_current_weather(location):
    return f"{location}今天是雨天。 "

# 查询当前时间的工具。返回结果示例:“当前时间:2024-04-15 17:15:18。“
def get_current_time():
    # 获取当前日期和时间
    current_datetime = datetime.now()
    # 格式化当前日期和时间
    formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
    # 返回格式化后的当前时间
    return f"当前时间:{formatted_time}。"

# 封装模型响应函数
def get_response(messages):
    completion = client.chat.completions.create(
        model="qwen-max",
        messages=messages,
        tools=tools
        )
    return completion.model_dump()

def call_with_messages():
    print('\n')
    messages = [
            {
                "content": input('请输入:'),  # 提问示例:"现在几点了?" "一个小时后几点" "北京天气如何?"
                "role": "user"
            }
    ]
    print("-"*60)
    # 模型的第一轮调用
    i = 1
    first_response = get_response(messages)
    assistant_output = first_response['choices'][0]['message']
    print(f"\n第{i}轮大模型输出信息:{first_response}\n")
    if  assistant_output['content'] is None:
        assistant_output['content'] = ""
    messages.append(assistant_output)
    # 如果不需要调用工具,则直接返回最终答案
    if assistant_output['tool_calls'] is None:  # 如果模型判断无需调用工具,则将assistant的回复直接打印出来,无需进行模型的第二轮调用
        print(f"无需调用工具,我可以直接回复:{assistant_output['content']}")
        return
    # 如果需要调用工具,则进行模型的多轮调用,直到模型判断无需调用工具
    while assistant_output['tool_calls'] is not None:
        # 如果判断需要调用查询天气工具,则运行查询天气工具
        if assistant_output['tool_calls'][0]['function']['name'] == 'get_current_weather':
            tool_info = {"name": "get_current_weather", "role":"tool"}
            # 提取位置参数信息
            location = json.loads(assistant_output['tool_calls'][0]['function']['arguments'])['location']
            tool_info['content'] = get_current_weather(location)
        # 如果判断需要调用查询时间工具,则运行查询时间工具
        elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_time':
            tool_info = {"name": "get_current_time", "role":"tool"}
            tool_info['content'] = get_current_time()
        print(f"工具输出信息:{tool_info['content']}\n")
        print("-"*60)
        messages.append(tool_info)
        assistant_output = get_response(messages)['choices'][0]['message']
        if  assistant_output['content'] is None:
            assistant_output['content'] = ""
        messages.append(assistant_output)
        i += 1
        print(f"第{i}轮大模型输出信息:{assistant_output}\n")
    print(f"最终答案:{assistant_output['content']}")

if __name__ == '__main__':
    call_with_messages()

返回结果

当输入“几点了?”时,程序会进行如下输出:

(动图:程序运行过程演示)

以下是发起Function Call流程(模型的第一轮调用)时模型的返回信息。当输入“杭州天气”时,模型会返回tool_calls参数;当输入“你好”时,模型判断无需调用工具,模型不会返回tool_calls参数。

输入:杭州天气

{
    'id': 'chatcmpl-e2f045fd-2604-9cdb-bb61-37c805ecd15a',
    'choices': [
        {
            'finish_reason': 'tool_calls',
            'index': 0,
            'logprobs': None,
            'message': {
                'content': '',
                'role': 'assistant',
                'function_call': None,
                'tool_calls': [
                    {
                        'id': 'call_7a33ebc99d5342969f4868',
                        'function': {
                            'arguments': '{
                                "location": "杭州市"
                            }',
                            'name': 'get_current_weather'
                        },
                        'type': 'function',
                        'index': 0
                    }
                ]
            }
        }
    ],
    'created': 1726049697,
    'model': 'qwen-max',
    'object': 'chat.completion',
    'service_tier': None,
    'system_fingerprint': None,
    'usage': {
        'completion_tokens': 18,
        'prompt_tokens': 217,
        'total_tokens': 235
    }
}

输入:你好

{
    'id': 'chatcmpl-5d890637-9211-9bda-b184-961acf3be38d',
    'choices': [
        {
            'finish_reason': 'stop',
            'index': 0,
            'logprobs': None,
            'message': {
                'content': '你好!有什么可以帮助你的吗?',
                'role': 'assistant',
                'function_call': None,
                'tool_calls': None
            }
        }
    ],
    'created': 1726049765,
    'model': 'qwen-max',
    'object': 'chat.completion',
    'service_tier': None,
    'system_fingerprint': None,
    'usage': {
        'completion_tokens': 7,
        'prompt_tokens': 216,
        'total_tokens': 223
    }
}

HTTP

示例代码

import requests
import os
from datetime import datetime
import json

# 定义工具列表,模型在选择使用哪个工具时会参考工具的name和description
tools = [
    # 工具1 获取当前时刻的时间
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "当你想知道现在的时间时非常有用。",
            "parameters": {}  # 因为获取当前时间无需输入参数,因此parameters为空字典
        }
    },  
    # 工具2 获取指定城市的天气
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "当你想查询指定城市的天气时非常有用。",
            "parameters": {  # 查询天气时需要提供位置,因此参数设置为location
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "城市或县区,比如北京市、杭州市、余杭区等。"
                    }
                },
                # required声明必填参数,应位于parameters之内
                "required": [
                    "location"
                ]
            }
        }
    }
]

# 模拟天气查询工具。返回结果示例:“北京今天是晴天。”
def get_current_weather(location):
    return f"{location}今天是晴天。 "

# 查询当前时间的工具。返回结果示例:“当前时间:2024-04-15 17:15:18。“
def get_current_time():
    # 获取当前日期和时间
    current_datetime = datetime.now()
    # 格式化当前日期和时间
    formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
    # 返回格式化后的当前时间
    return f"当前时间:{formatted_time}。"

def get_response(messages):
    api_key = os.getenv("DASHSCOPE_API_KEY")
    url = 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions'
    headers = {'Content-Type': 'application/json',
            'Authorization':f'Bearer {api_key}'}
    body = {
        'model': 'qwen-max',
        "messages": messages,
        "tools":tools
    }

    response = requests.post(url, headers=headers, json=body)
    return response.json()


def call_with_messages():
    messages = [
            {
                "content": input('请输入:'),  # 提问示例:"现在几点了?" "一个小时后几点" "北京天气如何?"
                "role": "user"
            }
    ]
    
    # 模型的第一轮调用
    first_response = get_response(messages)
    print(f"\n第一轮调用结果:{first_response}")
    assistant_output = first_response['choices'][0]['message']
    if  assistant_output['content'] is None:
        assistant_output['content'] = ""
    messages.append(assistant_output)
    if 'tool_calls' not in assistant_output:  # 如果模型判断无需调用工具,则将assistant的回复直接打印出来,无需进行模型的第二轮调用
        print(f"最终答案:{assistant_output['content']}")
        return
    # 如果模型选择的工具是get_current_weather
    elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_weather':
        tool_info = {"name": "get_current_weather", "role":"tool"}
        location = json.loads(assistant_output['tool_calls'][0]['function']['arguments'])['location']
        tool_info['content'] = get_current_weather(location)
    # 如果模型选择的工具是get_current_time
    elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_time':
        tool_info = {"name": "get_current_time", "role":"tool"}
        tool_info['content'] = get_current_time()
    print(f"工具输出信息:{tool_info['content']}")
    messages.append(tool_info)

    # 模型的第二轮调用,对工具的输出进行总结
    second_response = get_response(messages)
    print(f"第二轮调用结果:{second_response}")
    print(f"最终答案:{second_response['choices'][0]['message']['content']}")

if __name__ == '__main__':
    call_with_messages()

返回结果

当输入“杭州天气”时,程序会进行如下输出:

(动图:程序运行过程演示)

以下是发起Function Call流程(模型的第一轮调用)时模型的返回信息。当输入“杭州天气”时,模型会返回tool_calls参数;当输入“你好”时,模型判断无需调用工具,模型不会返回tool_calls参数。

输入:杭州天气

{
    'choices': [
        {
            'message': {
                'content': '',
                'role': 'assistant',
                'tool_calls': [
                    {
                        'function': {
                            'name': 'get_current_weather',
                            'arguments': '{
                                "location": "杭州市"
                            }'
                        },
                        'index': 0,
                        'id': 'call_416cd81b8e7641edb654c4',
                        'type': 'function'
                    }
                ]
            },
            'finish_reason': 'tool_calls',
            'index': 0,
            'logprobs': None
        }
    ],
    'object': 'chat.completion',
    'usage': {
        'prompt_tokens': 217,
        'completion_tokens': 18,
        'total_tokens': 235
    },
    'created': 1726050222,
    'system_fingerprint': None,
    'model': 'qwen-max',
    'id': 'chatcmpl-61e30855-ee69-93ab-98d5-4194c51a9980'
}

输入:你好

{
    'choices': [
        {
            'message': {
                'content': '你好!有什么可以帮助你的吗?',
                'role': 'assistant'
            },
            'finish_reason': 'stop',
            'index': 0,
            'logprobs': None
        }
    ],
    'object': 'chat.completion',
    'usage': {
        'prompt_tokens': 216,
        'completion_tokens': 7,
        'total_tokens': 223
    },
    'created': 1726050238,
    'system_fingerprint': None,
    'model': 'qwen-max',
    'id': 'chatcmpl-2f2f86d1-bc4e-9494-baca-aac5b0555091'
}

DashScope

您可以通过DashScope SDK或HTTP方式调用通义千问模型,体验Function Call的功能。

Python

示例代码

from dashscope import Generation
from datetime import datetime
import random
import json


# 定义工具列表,模型在选择使用哪个工具时会参考工具的name和description
tools = [
    # 工具1 获取当前时刻的时间
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "当你想知道现在的时间时非常有用。",
            "parameters": {}  # 因为获取当前时间无需输入参数,因此parameters为空字典
        }
    },  
    # 工具2 获取指定城市的天气
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "当你想查询指定城市的天气时非常有用。",
            "parameters": {  
                # 查询天气时需要提供位置,因此参数设置为location
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "城市或县区,比如北京市、杭州市、余杭区等。"
                    }
                },
                # required声明必填参数,应位于parameters之内
                "required": [
                    "location"
                ]
            }
        }
    }
]

# 模拟天气查询工具。返回结果示例:“北京今天是晴天。”
def get_current_weather(location):
    return f"{location}今天是晴天。 "

# 查询当前时间的工具。返回结果示例:“当前时间:2024-04-15 17:15:18。“
def get_current_time():
    # 获取当前日期和时间
    current_datetime = datetime.now()
    # 格式化当前日期和时间
    formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
    # 返回格式化后的当前时间
    return f"当前时间:{formatted_time}。"

# 封装模型响应函数
def get_response(messages):
    response = Generation.call(
        model='qwen-max',
        messages=messages,
        tools=tools,
        seed=random.randint(1, 10000),  # 设置随机数种子seed,如果没有设置,则随机数种子默认为1234
        result_format='message'  # 将输出设置为message形式
    )
    return response

def call_with_messages():
    print('\n')
    messages = [
            {
                "content": input('请输入:'),  # 提问示例:"现在几点了?" "一个小时后几点" "北京天气如何?"
                "role": "user"
            }
    ]
    
    # 模型的第一轮调用
    first_response = get_response(messages)
    assistant_output = first_response.output.choices[0].message
    print(f"\n大模型第一轮输出信息:{first_response}\n")
    messages.append(assistant_output)
    if 'tool_calls' not in assistant_output:  # 如果模型判断无需调用工具,则将assistant的回复直接打印出来,无需进行模型的第二轮调用
        print(f"最终答案:{assistant_output.content}")
        return
    # 如果模型选择的工具是get_current_weather
    elif assistant_output.tool_calls[0]['function']['name'] == 'get_current_weather':
        tool_info = {"name": "get_current_weather", "role":"tool"}
        location = json.loads(assistant_output.tool_calls[0]['function']['arguments'])['location']
        tool_info['content'] = get_current_weather(location)
    # 如果模型选择的工具是get_current_time
    elif assistant_output.tool_calls[0]['function']['name'] == 'get_current_time':
        tool_info = {"name": "get_current_time", "role":"tool"}
        tool_info['content'] = get_current_time()
    print(f"工具输出信息:{tool_info['content']}\n")
    messages.append(tool_info)

    # 模型的第二轮调用,对工具的输出进行总结
    second_response = get_response(messages)
    print(f"大模型第二轮输出信息:{second_response}\n")
    print(f"最终答案:{second_response.output.choices[0].message['content']}")

if __name__ == '__main__':
    call_with_messages()
    

返回结果

通过运行以上代码,您可以输入问题,得到在工具辅助条件下模型的输出结果。使用过程示例如下图所示:

(动图:工具调用的使用过程演示)

以下是发起Function Call流程(模型的第一轮调用)时模型的返回信息。当输入“杭州天气”时,模型会返回tool_calls参数;当输入“你好”时,模型判断无需调用工具,模型不会返回tool_calls参数。

输入:杭州天气

{
  "status_code": 200,
  "request_id": "33cf0a53-ea38-9f47-8fce-b93b55d86573",
  "code": "",
  "message": "",
  "output": {
    "text": null,
    "finish_reason": null,
    "choices": [
      {
        "finish_reason": "tool_calls",
        "message": {
          "role": "assistant",
          "content": "",
          "tool_calls": [
            {
              "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\": \"杭州市\"}"
              },
              "index": 0,
              "id": "call_9f62f52f3a834a8194f634",
              "type": "function"
            }
          ]
        }
      }
    ]
  },
  "usage": {
    "input_tokens": 217,
    "output_tokens": 18,
    "total_tokens": 235
  }
}

输入:你好

{
  "status_code": 200,
  "request_id": "4818ce03-e7c9-96de-a7bc-781649d98465",
  "code": "",
  "message": "",
  "output": {
    "text": null,
    "finish_reason": null,
    "choices": [
      {
        "finish_reason": "stop",
        "message": {
          "role": "assistant",
          "content": "你好!有什么可以帮助你的吗?"
        }
      }
    ]
  },
  "usage": {
    "input_tokens": 216,
    "output_tokens": 7,
    "total_tokens": 223
  }
}

Java

示例代码

// Copyright (c) Alibaba, Inc. and its affiliates.
// version >= 2.12.0
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import com.alibaba.dashscope.aigc.conversation.ConversationParam.ResultFormat;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationOutput.Choice;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.tools.FunctionDefinition;
import com.alibaba.dashscope.tools.ToolCallBase;
import com.alibaba.dashscope.tools.ToolCallFunction;
import com.alibaba.dashscope.tools.ToolFunction;
import com.alibaba.dashscope.utils.JsonUtils;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.github.victools.jsonschema.generator.Option;
import com.github.victools.jsonschema.generator.OptionPreset;
import com.github.victools.jsonschema.generator.SchemaGenerator;
import com.github.victools.jsonschema.generator.SchemaGeneratorConfig;
import com.github.victools.jsonschema.generator.SchemaGeneratorConfigBuilder;
import com.github.victools.jsonschema.generator.SchemaVersion;

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Scanner;

public class Main {

    public class GetWhetherTool {
        private String location;

        public GetWhetherTool(String location) {
            this.location = location;
        }

        public String call() {
            return location+"今天是晴天";
        }
    }

    public class GetTimeTool {

        public GetTimeTool() {
        }

        public String call() {
            LocalDateTime now = LocalDateTime.now();
            DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
            String currentTime = "当前时间:" + now.format(formatter) + "。";
            return currentTime;
        }
    }

    public static void SelectTool()
            throws NoApiKeyException, ApiException, InputRequiredException {

        SchemaGeneratorConfigBuilder configBuilder =
                new SchemaGeneratorConfigBuilder(SchemaVersion.DRAFT_2020_12, OptionPreset.PLAIN_JSON);
        SchemaGeneratorConfig config = configBuilder.with(Option.EXTRA_OPEN_API_FORMAT_VALUES)
                .without(Option.FLATTENED_ENUMS_FROM_TOSTRING).build();
        SchemaGenerator generator = new SchemaGenerator(config);


        ObjectNode jsonSchema_whether = generator.generateSchema(GetWhetherTool.class);
        ObjectNode jsonSchema_time = generator.generateSchema(GetTimeTool.class);


        FunctionDefinition fd_whether = FunctionDefinition.builder().name("get_current_whether").description("获取指定地区的天气")
                .parameters(JsonUtils.parseString(jsonSchema_whether.toString()).getAsJsonObject()).build();

        FunctionDefinition fd_time = FunctionDefinition.builder().name("get_current_time").description("获取当前时刻的时间")
                .parameters(JsonUtils.parseString(jsonSchema_time.toString()).getAsJsonObject()).build();

        Message systemMsg = Message.builder().role(Role.SYSTEM.getValue())
                .content("You are a helpful assistant. When asked a question, use tools wherever possible.")
                .build();

        Scanner scanner = new Scanner(System.in);
        System.out.print("\n请输入:");
        String userInput = scanner.nextLine();
        Message userMsg =
                Message.builder().role(Role.USER.getValue()).content(userInput).build();

        List<Message> messages = new ArrayList<>();
        messages.addAll(Arrays.asList(systemMsg, userMsg));

        GenerationParam param = GenerationParam.builder().model("qwen-max")
                .messages(messages).resultFormat(ResultFormat.MESSAGE)
                .tools(Arrays.asList(ToolFunction.builder().function(fd_whether).build(),ToolFunction.builder().function(fd_time).build())).build();
        // 大模型的第一轮调用
        Generation gen = new Generation();
        GenerationResult result = gen.call(param);

        System.out.println("\n大模型第一轮输出信息:"+JsonUtils.toJson(result));

        for (Choice choice : result.getOutput().getChoices()) {
            messages.add(choice.getMessage());
            // 如果需要调用工具
            if (result.getOutput().getChoices().get(0).getMessage().getToolCalls() != null) {
                for (ToolCallBase toolCall : result.getOutput().getChoices().get(0).getMessage()
                        .getToolCalls()) {
                    if (toolCall.getType().equals("function")) {
                        // 获取工具函数名称和入参
                        String functionName = ((ToolCallFunction) toolCall).getFunction().getName();
                        String functionArgument = ((ToolCallFunction) toolCall).getFunction().getArguments();
                        // 大模型判断调用天气查询工具的情况
                        if (functionName.equals("get_current_whether")) {
                            GetWhetherTool GetWhetherFunction =
                                    JsonUtils.fromJson(functionArgument, GetWhetherTool.class);
                            String whether = GetWhetherFunction.call();
                            Message toolResultMessage = Message.builder().role("tool")
                                    .content(String.valueOf(whether)).toolCallId(toolCall.getId()).build();
                            messages.add(toolResultMessage);
                            System.out.println("\n工具输出信息:"+whether);
                        }
                        // 大模型判断调用时间查询工具的情况
                        else if (functionName.equals("get_current_time")) {
                            GetTimeTool GetTimeFunction =
                                    JsonUtils.fromJson(functionArgument, GetTimeTool.class);
                            String time = GetTimeFunction.call();
                            Message toolResultMessage = Message.builder().role("tool")
                                    .content(String.valueOf(time)).toolCallId(toolCall.getId()).build();
                            messages.add(toolResultMessage);
                            System.out.println("\n工具输出信息:"+time);
                        }
                    }
                }
            }
            // 如果无需调用工具,直接输出大模型的回复
            else {
                System.out.println("\n最终答案:"+result.getOutput().getChoices().get(0).getMessage().getContent());
                return;
            }
        }
        // 大模型的第二轮调用 包含工具输出信息
        param.setMessages(messages);
        result = gen.call(param);
        System.out.println("\n大模型第二轮输出信息:"+JsonUtils.toJson(result));
        System.out.println(("\n最终答案:"+result.getOutput().getChoices().get(0).getMessage().getContent()));
    }


    public static void main(String[] args) {
        try {
            SelectTool();
        } catch (ApiException | NoApiKeyException | InputRequiredException e) {
            System.out.println(String.format("Exception %s", e.getMessage()));
        }
        System.exit(0);
    }
}

返回结果

通过运行以上代码,您可以输入问题,得到在工具辅助条件下模型的输出结果。使用过程示例如下图所示:

(动图:工具调用的使用过程演示)

以下是发起Function Call流程(模型的第一轮调用)时模型的返回信息。当输入“杭州天气”时,模型会返回tool_calls参数;当输入“你好”时,模型判断无需调用工具,模型不会返回tool_calls参数。

输入:杭州天气

{
    "requestId": "e2faa5cf-1707-973b-b216-36aa4ef52afc",
    "usage": {
        "input_tokens": 254,
        "output_tokens": 19,
        "total_tokens": 273
    },
    "output": {
        "choices": [
            {
                "finish_reason": "tool_calls",
                "message": {
                    "role": "assistant",
                    "content": "",
                    "tool_calls": [
                        {
                            "type": "function",
                            "id": "",
                            "function": {
                                "name": "get_current_whether",
                                "arguments": "{\"location\": \"杭州\"}"
                            }
                        }
                    ]
                }
            }
        ]
    }
}

输入:你好

{
    "requestId": "f6ca3828-3b5f-99bf-8bae-90b4aa88923f",
    "usage": {
        "input_tokens": 253,
        "output_tokens": 7,
        "total_tokens": 260
    },
    "output": {
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": "你好!有什么可以帮助你的吗?"
                }
            }
        ]
    }
}

HTTP

Python

示例代码

import requests
import os
from datetime import datetime
import json

# 定义工具列表,模型在选择使用哪个工具时会参考工具的name和description
tools = [
    # 工具1 获取当前时刻的时间
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "当你想知道现在的时间时非常有用。",
            "parameters": {}  # 因为获取当前时间无需输入参数,因此parameters为空字典
        }
    },  
    # 工具2 获取指定城市的天气
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "当你想查询指定城市的天气时非常有用。",
            "parameters": {  # 查询天气时需要提供位置,因此参数设置为location
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "城市或县区,比如北京市、杭州市、余杭区等。"
                    }
                }
            },
            "required": [
                "location"
            ]
        }
    }
]

# 模拟天气查询工具。返回结果示例:“北京今天是晴天。”
def get_current_weather(location):
    return f"{location}今天是晴天。 "

# 查询当前时间的工具。返回结果示例:“当前时间:2024-04-15 17:15:18。“
def get_current_time():
    # 获取当前日期和时间
    current_datetime = datetime.now()
    # 格式化当前日期和时间
    formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
    # 返回格式化后的当前时间
    return f"当前时间:{formatted_time}。"

def get_response(messages):
    api_key = os.getenv("DASHSCOPE_API_KEY")
    url = 'https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation'
    headers = {'Content-Type': 'application/json',
            'Authorization':f'Bearer {api_key}'}
    body = {
        'model': 'qwen-max',
        "input": {
            "messages": messages
        },
        "parameters": {
            "result_format": "message",
            "tools": tools
        }
    }

    response = requests.post(url, headers=headers, json=body)
    return response.json()

def call_with_messages():
    messages = [
            {
                "content": input('请输入:'),  # 提问示例:"现在几点了?" "一个小时后几点" "北京天气如何?"
                "role": "user"
            }
    ]
    
    # 模型的第一轮调用
    first_response = get_response(messages)
    print(f"\n第一轮调用结果:{first_response}")
    assistant_output = first_response['output']['choices'][0]['message']
    messages.append(assistant_output)
    if 'tool_calls' not in assistant_output:  # 如果模型判断无需调用工具,则将assistant的回复直接打印出来,无需进行模型的第二轮调用
        print(f"最终答案:{assistant_output['content']}")
        return
    # 如果模型选择的工具是get_current_weather
    elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_weather':
        tool_info = {"name": "get_current_weather", "role":"tool"}
        location = json.loads(assistant_output['tool_calls'][0]['function']['arguments'])['location']
        tool_info['content'] = get_current_weather(location)
    # 如果模型选择的工具是get_current_time
    elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_time':
        tool_info = {"name": "get_current_time", "role":"tool"}
        tool_info['content'] = get_current_time()
    print(f"工具输出信息:{tool_info['content']}")
    messages.append(tool_info)

    # 模型的第二轮调用,对工具的输出进行总结
    second_response = get_response(messages)
    print(f"第二轮调用结果:{second_response}")
    print(f"最终答案:{second_response['output']['choices'][0]['message']['content']}")

if __name__ == '__main__':
    call_with_messages()

Java

示例代码

import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import org.json.JSONArray;
import org.json.JSONObject;

public class Main {
    public static void main(String[] args) throws Exception {
        // 用户输入问题
        Scanner scanner = new Scanner(System.in);
        System.out.println("请输入:");
        String userInput = scanner.nextLine();
        // 初始化messages
        JSONArray messages = new JSONArray();
        // 定义系统信息system_message
        JSONObject systemMessage = new JSONObject();
        systemMessage.put("role","system");
        systemMessage.put("content","You are a helpful assistant.");
        // 根据用户的输入构造user_message
        JSONObject userMessage = new JSONObject();
        userMessage.put("role","user");
        userMessage.put("content",userInput);
        // 将system_message和user_message依次添加到messages中
        messages.put(systemMessage);
        messages.put(userMessage);
        // 进行模型的第一轮调用,并打印出结果
        JSONObject responseJson = getResponse(messages);
        System.out.println("第一轮调用结果:"+responseJson);
        // 获取助手信息assistant_message
        JSONObject assistantMessage = responseJson.getJSONObject("output").getJSONArray("choices").getJSONObject(0).getJSONObject("message");
        // 初始化工具信息tool_message
        JSONObject toolMessage = new JSONObject();

        // 如果assistant_message没有tool_calls参数,则直接打印出assistant_message中的响应信息并返回
        if (! assistantMessage.has("tool_calls")){
            System.out.println("最终答案:"+assistantMessage.get("content"));
            return;
        }
        // 如果assistant_message有tool_calls参数,说明模型判断需要调用工具
        else {
            // 将assistant_message添加到messages中
            messages.put(assistantMessage);
            // 如果模型判断需要调用get_current_weather函数
            if (assistantMessage.getJSONArray("tool_calls").getJSONObject(0).getJSONObject("function").getString("name").equals("get_current_weather")) {
                // 获取参数arguments信息,并提取出location参数
                JSONObject argumentsJson = new JSONObject(assistantMessage.getJSONArray("tool_calls").getJSONObject(0).getJSONObject("function").getString("arguments"));
                String location = argumentsJson.getString("location");
                // 运行工具函数,得到工具的输出,并打印
                String toolOutput = getCurrentWeather(location);
                System.out.println("工具输出信息:"+toolOutput);
                // 构造tool_message信息
                toolMessage.put("name","get_current_weather");
                toolMessage.put("role","tool");
                toolMessage.put("content",toolOutput);
            }
            // 如果模型判断需要调用get_current_time函数
            if (assistantMessage.getJSONArray("tool_calls").getJSONObject(0).getJSONObject("function").getString("name").equals("get_current_time")) {
                // 运行工具函数,得到工具的输出,并打印
                String toolOutput = getCurrentTime();
                System.out.println("工具输出信息:"+toolOutput);
                // 构造tool_message信息
                toolMessage.put("name","get_current_time");
                toolMessage.put("role","tool");
                toolMessage.put("content",toolOutput);
            }
        }
        // 将tool_message添加到messages中
        messages.put(toolMessage);
        // 进行模型的第二轮调用,并打印出结果
        JSONObject secondResponse = getResponse(messages);
        System.out.println("第二轮调用结果:"+secondResponse);
        System.out.println("最终答案:"+secondResponse.getJSONObject("output").getJSONArray("choices").getJSONObject(0).getJSONObject("message").getString("content"));
    }
    // 定义获取天气的函数
    public static String getCurrentWeather(String location) {
        return location+"今天是晴天";
    }
    // 定义获取当前时间的函数
    public static String getCurrentTime() {
        LocalDateTime now = LocalDateTime.now();
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        String currentTime = "当前时间:" + now.format(formatter) + "。";
        return currentTime;
    }
    // 封装模型响应函数,输入:messages,输出:json格式化后的http响应
    public static JSONObject getResponse(JSONArray messages) throws Exception{
        // 初始化工具库
        JSONArray tools = new JSONArray();
        // 定义工具1:获取当前时间
        String jsonStringTime = "{\"type\": \"function\", \"function\": {\"name\": \"get_current_time\", \"description\": \"当你想知道现在的时间时非常有用。\", \"parameters\": {}}}";
        JSONObject getCurrentTimeJson = new JSONObject(jsonStringTime);
        // 定义工具2:获取指定地区天气
        String jsonStringWeather = "{\"type\": \"function\", \"function\": {\"name\": \"get_current_weather\", \"description\": \"当你想查询指定城市的天气时非常有用。\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"城市或县区,比如北京市、杭州市、余杭区等。\"}}, \"required\": [\"location\"]}}}";
        JSONObject getCurrentWeatherJson = new JSONObject(jsonStringWeather);
        // 将两个工具添加到工具库中
        tools.put(getCurrentTimeJson);
        tools.put(getCurrentWeatherJson);
        String toolsString = tools.toString();
        // 接口调用URL
        String urlStr = "https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation";
        // 通过环境变量获取DASHSCOPE_API_KEY
        String apiKey = System.getenv("DASHSCOPE_API_KEY");

        URL url = new URL(urlStr);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        // 定义请求头信息
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setRequestProperty("Authorization", "Bearer " + apiKey);
        connection.setDoOutput(true);
        // 定义请求体信息
        String jsonInputString = String.format("{\"model\": \"qwen-max\", \"input\": {\"messages\":%s}, \"parameters\": {\"result_format\": \"message\",\"tools\":%s}}",messages.toString(),toolsString);

        // 获取http响应response
        try (DataOutputStream wr = new DataOutputStream(connection.getOutputStream())) {
            wr.write(jsonInputString.getBytes(StandardCharsets.UTF_8));
            wr.flush();
        }
        StringBuilder response = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                response.append(inputLine);
            }
        }
        connection.disconnect();
        // 返回json格式化后的response
        return new JSONObject(response.toString());
    }
}

返回结果

当输入“杭州天气”时,程序会依次打印第一轮调用结果、工具输出信息、第二轮调用结果和最终答案。

以下是发起Function Call流程(模型的第一轮调用)时模型的返回信息。当输入“杭州天气”时,模型会返回tool_calls参数;当输入“你好”时,模型判断无需调用工具,模型不会返回tool_calls参数。

输入:杭州天气

{
    'output': {
        'choices': [
            {
                'finish_reason': 'tool_calls',
                'message': {
                    'role': 'assistant',
                    'tool_calls': [
                        {
                            'function': {
                                'name': 'get_current_weather',
                                'arguments': '{
                                    "location": "杭州市"
                                }'
                            },
                            'index': 0,
                            'id': 'call_240d6341de4c484384849d',
                            'type': 'function'
                        }
                    ],
                    'content': ''
                }
            }
        ]
    },
    'usage': {
        'total_tokens': 235,
        'output_tokens': 18,
        'input_tokens': 217
    },
    'request_id': '235ed6a4-b6c0-9df0-aa0f-3c6dce89f3bd'
}

输入:你好

{
    'output': {
        'choices': [
            {
                'finish_reason': 'stop',
                'message': {
                    'role': 'assistant',
                    'content': '你好!有什么可以帮助你的吗?'
                }
            }
        ]
    },
    'usage': {
        'total_tokens': 223,
        'output_tokens': 7,
        'input_tokens': 216
    },
    'request_id': '42c42853-3caf-9815-96e8-9c950f4c26a0'
}

结构化输出

如果您的业务需要输出结构化数据,可以通过OpenAI兼容的方式调用qwen-plus模型,确保生成的字符串符合标准的JSON格式。调用时,将response_format参数设置为{"type": "json_object"},并通过系统消息或用户消息指引模型输出JSON格式即可。

结构化输出功能暂时只支持qwen-plus模型。

Python

示例代码

from openai import OpenAI
import os

def get_response():
    client = OpenAI(
        api_key=os.getenv("DASHSCOPE_API_KEY"), # 如果您没有配置环境变量,请在此处用您的API Key进行替换
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # 填写DashScope服务的base_url
    )
    completion = client.chat.completions.create(
        model="qwen-plus",
        messages=[
            {'role': 'system', 'content': 'You are a helpful assistant.'},
            {'role': 'user', 'content': '请用json格式输出一个学生的信息,姓名是张三,学号是12345678'}],
        response_format={
            "type": "json_object"
        }
        )
    print(completion.model_dump_json())


if __name__ == '__main__':
    get_response()

返回结果

{
  "id": "chatcmpl-433c9186-8dae-9213-9093-49bd049706ae",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "message": {
        "content": "{\n  \"姓名\": \"张三\",\n  \"学号\": \"12345678\"\n}",
        "refusal": null,
        "role": "assistant",
        "function_call": null,
        "tool_calls": null
      }
    }
  ],
  "created": 1726110506,
  "model": "qwen-plus",
  "object": "chat.completion",
  "service_tier": null,
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 25,
    "prompt_tokens": 47,
    "total_tokens": 72
  }
}

curl

示例请求

curl -X POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
    "model": "qwen-plus",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user", 
            "content": "请用json格式输出一个学生的信息,姓名是张三,学号是12345678"
        }
    ],
    "response_format": {
        "type": "json_object"
    }
}'

返回结果

{
  "choices": [
    {
      "message": {
        "content": "{\"姓名\": \"张三\", \"学号\": \"12345678\"}",
        "role": "assistant"
      },
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null
    }
  ],
  "object": "chat.completion",
  "usage": {
    "prompt_tokens": 46,
    "completion_tokens": 21,
    "total_tokens": 67
  },
  "created": 1726110523,
  "system_fingerprint": null,
  "model": "qwen-plus",
  "id": "chatcmpl-f208fb06-9ef2-994e-af5e-8234b9e31d94"
}

您可以使用加载JSON字符串的工具(如Python中的json.loads()),将content字符串解析为JSON对象,以便后续业务逻辑进行数据提取和处理。
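
例如,以上文返回结果中的content字符串为例,可以按如下方式解析(仅为示意,字段名“姓名”“学号”以模型实际输出为准):

import json

# 以上文返回结果中的content字符串为例
content = "{\n  \"姓名\": \"张三\",\n  \"学号\": \"12345678\"\n}"
# 将JSON字符串解析为Python字典,便于后续业务逻辑取值
student = json.loads(content)
print(student["姓名"], student["学号"])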

异步调用

您可以通过asyncio接口实现并发调用,提高程序的运行效率。示例代码如下:

OpenAI SDK

示例代码

import os
import asyncio
from openai import AsyncOpenAI
import platform

# 创建异步客户端实例
client = AsyncOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
)

# 定义异步任务列表
async def task(question):
    print(f"Sending question: {question}")
    response = await client.chat.completions.create(
        messages=[
            {"role": "user", "content": question}
        ],
        model="qwen-max",
    )
    print(f"Received answer: {response.choices[0].message.content}")

# 主异步函数
async def main():
    questions = ["你是谁?", "你会什么?", "天气怎么样?"]
    tasks = [task(q) for q in questions]
    await asyncio.gather(*tasks)

if __name__ == '__main__':
    # 设置事件循环策略
    if platform.system() == 'Windows':
        asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
    # 运行主协程
    asyncio.run(main(), debug=False)
    

DashScope SDK

示例代码

您的DashScope Python SDK版本需要不低于1.19.0。

import asyncio
import platform
from dashscope.aigc.generation import AioGeneration

# 定义异步任务列表
async def task(question):
    print(f"Sending question: {question}")
    response = await AioGeneration.call("qwen-turbo",
                                        prompt=question)
    print(f"Received answer: {response.output.text}")

# 主异步函数
async def main():
    questions = ["你是谁?", "你会什么?", "天气怎么样?"]
    tasks = [task(q) for q in questions]
    await asyncio.gather(*tasks)

if __name__ == '__main__':
    # 设置事件循环策略
    if platform.system() == 'Windows':
        asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
    # 运行主协程
    asyncio.run(main(), debug=False)
    

了解更多

提示(Prompt)工程

提示(Prompt)是输入给大语言模型的文本信息,用于明确地告诉模型想要解决的问题或完成的任务,也是模型理解需求并生成相关、准确内容的基础。通过精心设计和优化Prompt,向模型“明确”任务目的,使模型输出的结果更符合预期,这一过程被称为“提示工程(Prompt Engineering)”。提示工程通常包括以下关键步骤:

(图:提示工程的关键步骤)

如果您对提示工程感兴趣,请前往Prompt最佳实践了解如何构建有效的Prompt来提升模型表现。

也可以浏览百炼服务平台的 Prompt工程页面,从而快速了解如何利用模板来快速生成所需的文本内容。
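
下面给出一个简单的示意(基于上文OpenAI兼容的调用方式,其中的提示内容仅为演示用的假设示例):对同一类任务,提示中给出的目标、受众、篇幅等约束越明确,输出通常越符合预期。

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

# 模糊的提示:只给出主题,模型需要自行猜测风格与篇幅
vague_prompt = "写一段商品介绍。"
# 优化后的提示:明确商品信息、目标人群、篇幅与语气等约束
refined_prompt = "请为一款续航12小时的无线降噪耳机写一段不超过80字的商品介绍,面向通勤人群,语气简洁专业。"

for prompt in (vague_prompt, refined_prompt):
    completion = client.chat.completions.create(
        model="qwen-plus",
        messages=[{"role": "user", "content": prompt}],
    )
    print(completion.choices[0].message.content)
    print("-" * 20)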

工具调用

大语言模型虽然在许多领域已经有广泛运用,但仍然在某些具体任务上表现不佳,比如无法获取最新信息、存在幻觉倾向、不能进行精确计算等。

为了解决这些问题,‌模型需要借助外部工具来辅助其功能。‌工具调用(‌Function Calling)‌便指的是,在必要时,模型会调用相应的外部函数或API,帮助模型获得更准确、‌更实时的信息,‌提高模型表现和实用性。‌

例如,当模型能够调用计算器工具时,便能借助工具获得复杂计算的正确结果。

示例输入:123,456 * 5,678 =(正确答案:700,983,168)

模型输出:(示意截图)

工具调用后输出:(示意截图)
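
作为示意,下面给出一个假设的“计算器”工具定义与本地实现(工具名calculator、参数expression均为演示用的假设;定义格式与调用流程同上文的天气、时间工具一致):

# 计算器工具定义,可加入上文的tools列表供模型选择
calculator_tool = {
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "当你需要进行精确的数学计算时非常有用。",
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "需要计算的数学表达式,例如 123456 * 5678"
                }
            },
            "required": ["expression"]
        }
    }
}

# 模拟的本地计算器实现:模型返回tool_calls后,由业务代码解析参数并调用
def calculator(expression: str) -> str:
    # 仅作演示;生产环境请使用安全的表达式求值方式,避免对不可信输入直接eval
    return f"{expression} = {eval(expression)}"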

百炼支持在构建大模型应用中添加各种各样的工具(插件)调用,包括但不限于:Python代码解释器、计算器、图片生成、夸克搜索等,您可以在构建大模型应用时,添加合适的插件。

关于插件的详细介绍,请参考插件概述

多模态能力

多模态能力是指模型能够处理和结合多种不同类型的数据模态(如文本、图像、音频、视频等)进行信息的理解、处理和生成的能力。这种能力使得模型能够更全面地理解和生成内容,增强上下文理解,提高模型表现。

当前百炼支持的多模态模型有:

  • 通义千问VL(文+图->文):具有图像理解能力的通义千问模型,能完成OCR、视觉推理、文本理解等任务,支持超百万像素分辨率和任意长宽比规格的图像(调用示意见下文代码)。

  • Paraformer、SenseVoice 语音识别(音->文):识别并转写音频中的语音内容,支持中文(含粤语等多种方言)、英文、日语、韩语等。
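
下面是一个调用通义千问VL的简单示意(采用OpenAI兼容的多模态消息格式;模型名qwen-vl-plus与图片URL均为假设,请以实际支持的模型列表和您自己的图片地址为准):

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

completion = client.chat.completions.create(
    model="qwen-vl-plus",  # 假设使用的VL模型名
    messages=[{
        "role": "user",
        "content": [
            # 假设的图片地址,请替换为可访问的真实URL
            {"type": "image_url", "image_url": {"url": "https://example.com/demo.jpg"}},
            {"type": "text", "text": "这张图片里有什么?"}
        ]
    }],
)
print(completion.choices[0].message.content)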

常见问题

  1. 文本生成模型的后缀,比如:-chat、-instruct 等,具体是什么意思,会影响我的模型表现吗?

    这些后缀表示模型经过了微调和强化学习,并针对特定场景做了专门优化,因此会影响模型在不同任务上的表现。您需要根据业务场景选择合适的模型。

    • -chat 表示模型专为处理人机交互而设计,‌善于理解上下文和生成连贯且情境相关的响应。‌适用于对话型任务,‌如聊天机器人、‌虚拟助手或客户支持场景,‌善于提供自然、‌流畅且符合对话习惯的回复。‌

    • -instruct 表示模型能够理解和执行复杂的自然语言指令,拥有强大的工具调用能力,适用于执行具体指令,如回答问题、生成文本、翻译等任务。

  2. 通义千问模型的后缀,比如:-0428、-0206,是什么意思?

    -0428 这类数字后缀表示该模型为4月28日的快照版本。

  3. 通义千问模型的快照版本和最新版本有什么区别?

    通义千问模型的快照版本与最新版本在输入输出规格、使用费用、免费额度等方面均没有区别。

  4. 什么时候应该选择通义千问模型的快照版本?

    通义千问系列模型(不带数字后缀的版本)会不定期更新升级。

    如果希望您的应用在上线后表现稳定、不受模型更新影响,可以优先选择快照版本(当前快照版本仅能通过API调用使用,例如进行应用创建等;指定方式可参考下文示例)。

    推荐您在通义千问系列模型发布新版本后,对比评测当前您使用的快照版本和新版本的表现差异,并根据评测结果灵活切换模型版本,以保证您应用的最佳表现。
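
下面是一个在调用时指定模型版本的示意(其中qwen-max-0428为假设的快照版本名,请以模型列表中实际提供的版本为准):

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

# 不带数字后缀:始终指向最新版本,表现可能随模型升级而变化
latest = client.chat.completions.create(
    model="qwen-max",
    messages=[{"role": "user", "content": "你是谁?"}],
)

# 带数字后缀(假设的快照版本名):表现稳定,不随后续升级变化
snapshot = client.chat.completions.create(
    model="qwen-max-0428",
    messages=[{"role": "user", "content": "你是谁?"}],
)

print(latest.choices[0].message.content)
print(snapshot.choices[0].message.content)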