Real-time speech synthesis - CosyVoice/Sambert

Speech synthesis, also known as text-to-speech (TTS), is the technology that converts text into natural-sounding speech. Built on machine-learning algorithms, it learns prosody, intonation, and pronunciation rules from large amounts of speech data, so that it can generate lifelike speech from any text input.

Core capabilities

  • Generates high-fidelity speech in real time, with natural-sounding output in Chinese, English, and other languages

  • Provides voice cloning to quickly create personalized voices

  • Supports streaming input and output for low-latency, real-time interaction

  • Adjustable speech rate, pitch, volume, and bit rate for fine-grained control over the output (see the sketch below)

  • Compatible with mainstream audio formats, with output sample rates up to 48 kHz
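
As a concrete illustration of the adjustable parameters, the following sketch sets the speech rate, pitch, volume, and output format when constructing the synthesizer with the DashScope Python SDK. This is only a minimal sketch: the parameter names (speech_rate, pitch_rate, volume) and the format constant are assumptions and should be verified against the API reference.

# Minimal sketch: tuning rate, pitch, volume, and format (assumed parameter names; verify against the API reference)
from dashscope.audio.tts_v2 import SpeechSynthesizer, AudioFormat

synthesizer = SpeechSynthesizer(
    model="cosyvoice-v2",
    voice="longxiaochun_v2",
    format=AudioFormat.WAV_48000HZ_MONO_16BIT,  # assumed 48 kHz WAV format constant
    volume=50,          # assumed range 0-100
    speech_rate=1.1,    # assumed: >1 speaks faster, <1 slower
    pitch_rate=1.0,     # assumed: 1.0 keeps the default pitch
)
audio = synthesizer.call("今天天气怎么样?")
with open("output.wav", "wb") as f:
    f.write(audio)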

Scope

  • Supported regions: only the China (Beijing) region is supported; an API Key for the "China (Beijing)" region is required

  • Supported models:

    • CosyVoice: cosyvoice-v3-plus, cosyvoice-v3, cosyvoice-v2, cosyvoice-v1

    • Sambert: sambert-zhinan-v1, sambert-zhiqi-v1, sambert-zhichu-v1, sambert-zhide-v1, sambert-zhijia-v1, sambert-zhiru-v1, sambert-zhiqian-v1, sambert-zhixiang-v1, sambert-zhiwei-v1, sambert-zhihao-v1, sambert-zhijing-v1, sambert-zhiming-v1, sambert-zhimo-v1, sambert-zhina-v1, sambert-zhishu-v1, sambert-zhistella-v1, sambert-zhiting-v1, sambert-zhixiao-v1, sambert-zhiya-v1, sambert-zhiye-v1, sambert-zhiying-v1, sambert-zhiyuan-v1, sambert-zhiyue-v1, sambert-zhigui-v1, sambert-zhishuo-v1, sambert-zhimiao-emo-v1, sambert-zhimao-v1, sambert-zhilun-v1, sambert-zhifei-v1, sambert-zhida-v1, sambert-camila-v1, sambert-perla-v1, sambert-indah-v1, sambert-clara-v1, sambert-hanna-v1, sambert-beth-v1, sambert-betty-v1, sambert-cally-v1, sambert-cindy-v1, sambert-eva-v1, sambert-donna-v1, sambert-brian-v1, sambert-waan-v1; see the Sambert model list for details

  • Supported voices:

Model selection

| Scenario | Recommended model | Rationale | Notes |
| --- | --- | --- | --- |
| Brand voice customization / personalized voice-cloning services | cosyvoice-v3-plus | Strongest voice cloning and 48 kHz high-quality output; high quality plus voice cloning builds a humanlike brand voice (a voice-cloning sketch follows this table) | Higher cost (CNY 2 / 10,000 characters), so best reserved for core scenarios; does not support emotion settings |
| Intelligent customer service / voice assistants | cosyvoice-v3 | Lowest cost (CNY 0.4 / 10,000 characters); supports streaming interaction and emotional expression; fast response; highly cost-effective | No SSML support, so complex broadcast logic is limited |
| Mobile and embedded speech synthesis | CosyVoice (all versions) | Full SDK coverage, good resource optimization, strong streaming support, controllable latency | cosyvoice-v2 supports SSML |
| Dialect broadcasting systems | cosyvoice-v2 | Supports dialects such as Northeastern Mandarin, Cantonese, and Hokkien, suitable for regional content broadcasting | Higher cost (CNY 2 / 10,000 characters); does not support emotion settings |
| Educational applications (including formula reading) | cosyvoice-v2 | Supports LaTeX formula-to-speech, suitable for math, physics, and chemistry lessons | Higher cost (CNY 2 / 10,000 characters); does not support emotion settings |
| Structured voice broadcasts (news / announcements) | cosyvoice-v2 | SSML control over speed, pauses, pronunciation, and more makes broadcasts sound more professional | Requires additional development of SSML generation logic |
| Precise speech-text alignment (subtitle generation, lesson playback, dictation training) | cosyvoice-v2 / Sambert | Supports timestamp output, so the synthesized speech can be synchronized with the source text | Timestamps must be explicitly enabled (off by default); cosyvoice-v2 does not support emotion settings; Sambert does not support streaming input |
| Multilingual products for overseas markets | Sambert | Supports 10+ languages and high concurrency (20 RPS) | Does not support streaming input; more expensive than cosyvoice-v3 |

For more information, see the model feature comparison below.
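
The voice-cloning scenario above works in two steps: first enroll a short reference recording to obtain a voice ID, then pass that voice ID as the voice parameter during synthesis. The sketch below outlines this flow with the DashScope Python SDK's voice enrollment interface; the recording URL is a placeholder, and the method names should be checked against the voice-cloning documentation.

# Sketch: enroll a reference recording, then synthesize with the cloned voice
# (placeholder URL; verify the VoiceEnrollmentService usage against the voice-cloning docs)
from dashscope.audio.tts_v2 import VoiceEnrollmentService, SpeechSynthesizer

service = VoiceEnrollmentService()
# Create a cloned voice from a publicly accessible reference recording (placeholder URL)
voice_id = service.create_voice(
    target_model="cosyvoice-v2",
    prefix="demo",
    url="https://example.com/reference.wav",
)
print("cloned voice id:", voice_id)

# Synthesize with the cloned voice by passing the returned voice ID as the voice parameter
synthesizer = SpeechSynthesizer(model="cosyvoice-v2", voice=voice_id)
audio = synthesizer.call("今天天气怎么样?")
with open("cloned_voice.mp3", "wb") as f:
    f.write(audio)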

Online trial

Currently, only the following speech synthesis models support online trial:

  • cosyvoice-v1

  • cosyvoice-v2

  • cosyvoice-v3

cosyvoice-v3-plus and Sambert do not yet support online trial. To use them, call the API.

Quick start

The following is sample code for calling the API. For more code samples covering common scenarios, see GitHub.

You must have obtained an API Key and configured the API Key as an environment variable. If you call the service through an SDK, you must also install the DashScope SDK.

CosyVoice

Save the synthesized audio to a file

Python

# coding=utf-8

import dashscope
from dashscope.audio.tts_v2 import *

# If the API Key is not configured as an environment variable, uncomment the next line and replace your-api-key with your own API Key
# dashscope.api_key = "your-api-key"

# Model
model = "cosyvoice-v2"
# Voice
voice = "longxiaochun_v2"

# Instantiate SpeechSynthesizer and pass request parameters such as the model and voice to the constructor
synthesizer = SpeechSynthesizer(model=model, voice=voice)
# Send the text to be synthesized and get the binary audio
audio = synthesizer.call("今天天气怎么样?")
# The WebSocket connection is established when the first text is sent, so the first-packet delay includes the connection setup time
print('[Metric] requestId: {}, first-packet delay: {} ms'.format(
    synthesizer.get_last_request_id(),
    synthesizer.get_first_package_delay()))

# Save the audio to a local file
with open('output.mp3', 'wb') as f:
    f.write(audio)

Java

import com.alibaba.dashscope.audio.ttsv2.SpeechSynthesisParam;
import com.alibaba.dashscope.audio.ttsv2.SpeechSynthesizer;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class Main {
    // Model
    private static String model = "cosyvoice-v2";
    // Voice
    private static String voice = "longxiaochun_v2";

    public static void streamAudioDataToSpeaker() {
        // Request parameters
        SpeechSynthesisParam param =
                SpeechSynthesisParam.builder()
                        // If the API Key is not configured as an environment variable, uncomment the next line and replace your-api-key with your own API Key
                        // .apiKey("your-api-key")
                        .model(model) // Model
                        .voice(voice) // Voice
                        .build();

        // Synchronous mode: disable the callback (the second parameter is null)
        SpeechSynthesizer synthesizer = new SpeechSynthesizer(param, null);
        ByteBuffer audio = null;
        try {
            // Block until the audio is returned
            audio = synthesizer.call("今天天气怎么样?");
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            // Close the WebSocket connection when the task finishes
            synthesizer.getDuplexApi().close(1000, "bye");
        }
        if (audio != null) {
            // Save the audio data to the local file "output.mp3"
            File file = new File("output.mp3");
            // The WebSocket connection is established when the first text is sent, so the first-packet delay includes the connection setup time
            System.out.println(
                    "[Metric] requestId: "
                            + synthesizer.getLastRequestId()
                            + ", first-packet delay (ms): "
                            + synthesizer.getFirstPackageDelay());
            try (FileOutputStream fos = new FileOutputStream(file)) {
                fos.write(audio.array());
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

    public static void main(String[] args) {
        streamAudioDataToSpeaker();
        System.exit(0);
    }
}

Convert LLM-generated text to speech in real time and play it through the speaker

The following code shows how to play, through a local device, the text returned in real time by the Tongyi Qianwen large language model (qwen-turbo).

Python

Before running the Python example, install the third-party audio playback library with pip.

# coding=utf-8
# Installation instructions for pyaudio:
# APPLE Mac OS X
#   brew install portaudio
#   pip install pyaudio
# Debian/Ubuntu
#   sudo apt-get install python-pyaudio python3-pyaudio
#   or
#   pip install pyaudio
# CentOS
#   sudo yum install -y portaudio portaudio-devel && pip install pyaudio
# Microsoft Windows
#   python -m pip install pyaudio

import pyaudio
import dashscope
from dashscope.audio.tts_v2 import *


from http import HTTPStatus
from dashscope import Generation

# If the API Key is not configured as an environment variable, uncomment the next line and replace apiKey with your own API Key
# dashscope.api_key = "apiKey"
model = "cosyvoice-v2"
voice = "longxiaochun_v2"


class Callback(ResultCallback):
    _player = None
    _stream = None

    def on_open(self):
        print("websocket is open.")
        self._player = pyaudio.PyAudio()
        self._stream = self._player.open(
            format=pyaudio.paInt16, channels=1, rate=22050, output=True
        )

    def on_complete(self):
        print("speech synthesis task complete successfully.")

    def on_error(self, message: str):
        print(f"speech synthesis task failed, {message}")

    def on_close(self):
        print("websocket is closed.")
        # stop player
        self._stream.stop_stream()
        self._stream.close()
        self._player.terminate()

    def on_event(self, message):
        print(f"recv speech synthsis message {message}")

    def on_data(self, data: bytes) -> None:
        print("audio result length:", len(data))
        self._stream.write(data)


def synthesizer_with_llm():
    callback = Callback()
    synthesizer = SpeechSynthesizer(
        model=model,
        voice=voice,
        format=AudioFormat.PCM_22050HZ_MONO_16BIT,
        callback=callback,
    )

    messages = [{"role": "user", "content": "请介绍一下你自己"}]
    responses = Generation.call(
        model="qwen-turbo",
        messages=messages,
        result_format="message",  # set result format as 'message'
        stream=True,  # enable stream output
        incremental_output=True,  # enable incremental output 
    )
    for response in responses:
        if response.status_code == HTTPStatus.OK:
            print(response.output.choices[0]["message"]["content"], end="")
            synthesizer.streaming_call(response.output.choices[0]["message"]["content"])
        else:
            print(
                "Request id: %s, Status code: %s, error code: %s, error message: %s"
                % (
                    response.request_id,
                    response.status_code,
                    response.code,
                    response.message,
                )
            )
    synthesizer.streaming_complete()
    print('requestId: ', synthesizer.get_last_request_id())


if __name__ == "__main__":
    synthesizer_with_llm()

Java

import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.audio.tts.SpeechSynthesisResult;
import com.alibaba.dashscope.audio.ttsv2.SpeechSynthesisAudioFormat;
import com.alibaba.dashscope.audio.ttsv2.SpeechSynthesisParam;
import com.alibaba.dashscope.audio.ttsv2.SpeechSynthesizer;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.ResultCallback;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import io.reactivex.Flowable;
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import javax.sound.sampled.*;

public class Main {
    private static String model = "cosyvoice-v2";
    private static String voice = "longxiaochun_v2";
    public static void process() throws NoApiKeyException, InputRequiredException {
        // Playback thread
        class PlaybackRunnable implements Runnable {
            // Set the audio format according to your actual device, the synthesized
            // audio parameters, and your platform. Here it is set to 22050 Hz, 16-bit,
            // mono. Choose another sample rate and format based on the model's sample
            // rate and device compatibility.
            private AudioFormat af = new AudioFormat(22050, 16, 1, true, false);
            private DataLine.Info info = new DataLine.Info(SourceDataLine.class, af);
            private SourceDataLine targetSource = null;
            private AtomicBoolean runFlag = new AtomicBoolean(true);
            private ConcurrentLinkedQueue<ByteBuffer> queue =
                    new ConcurrentLinkedQueue<>();

            // Prepare the player
            public void prepare() throws LineUnavailableException {
                targetSource = (SourceDataLine) AudioSystem.getLine(info);
                targetSource.open(af, 4096);
                targetSource.start();
            }

            public void put(ByteBuffer buffer) {
                queue.add(buffer);
            }

            // Stop playback
            public void stop() {
                runFlag.set(false);
            }

            @Override
            public void run() {
                if (targetSource == null) {
                    return;
                }

                while (runFlag.get()) {
                    if (queue.isEmpty()) {
                        try {
                            Thread.sleep(100);
                        } catch (InterruptedException e) {
                        }
                        continue;
                    }

                    ByteBuffer buffer = queue.poll();
                    if (buffer == null) {
                        continue;
                    }

                    byte[] data = buffer.array();
                    targetSource.write(data, 0, data.length);
                }

                // Play all remaining cache
                if (!queue.isEmpty()) {
                    ByteBuffer buffer = null;
                    while ((buffer = queue.poll()) != null) {
                        byte[] data = buffer.array();
                        targetSource.write(data, 0, data.length);
                    }
                }
                // Release the player
                targetSource.drain();
                targetSource.stop();
                targetSource.close();
            }
        }

        // Create a subclass inheriting from ResultCallback<SpeechSynthesisResult>
        // to implement the callback interface
        class ReactCallback extends ResultCallback<SpeechSynthesisResult> {
            private PlaybackRunnable playbackRunnable = null;

            public ReactCallback(PlaybackRunnable playbackRunnable) {
                this.playbackRunnable = playbackRunnable;
            }

            // Callback when the service side returns the streaming synthesis result
            @Override
            public void onEvent(SpeechSynthesisResult result) {
                // Get the binary data of the streaming result via getAudioFrame
                if (result.getAudioFrame() != null) {
                    // Stream the data to the player
                    playbackRunnable.put(result.getAudioFrame());
                }
            }

            // Callback when the service side completes the synthesis
            @Override
            public void onComplete() {
                // Notify the playback thread to end
                playbackRunnable.stop();
            }

            // Callback when an error occurs
            @Override
            public void onError(Exception e) {
                // Tell the playback thread to end
                System.out.println(e);
                playbackRunnable.stop();
            }
        }

        PlaybackRunnable playbackRunnable = new PlaybackRunnable();
        try {
            playbackRunnable.prepare();
        } catch (LineUnavailableException e) {
            throw new RuntimeException(e);
        }
        Thread playbackThread = new Thread(playbackRunnable);
        // Start the playback thread
        playbackThread.start();
        /*******  Call the Generative AI Model to get streaming text *******/
        // Prepare for the LLM call
        Generation gen = new Generation();
        Message userMsg = Message.builder()
                .role(Role.USER.getValue())
                .content("请介绍一下你自己")
                .build();
        GenerationParam genParam =
                GenerationParam.builder()
                        // If the API Key is not configured as an environment variable, uncomment the next line and replace apikey with your own API Key
                        // .apiKey("apikey")
                        .model("qwen-turbo")
                        .messages(Arrays.asList(userMsg))
                        .resultFormat(GenerationParam.ResultFormat.MESSAGE)
                        .topP(0.8)
                        .incrementalOutput(true)
                        .build();
        // Prepare the speech synthesis task
        SpeechSynthesisParam param =
                SpeechSynthesisParam.builder()
                        // If the API Key is not configured as an environment variable, uncomment the next line and replace apikey with your own API Key
                        // .apiKey("apikey")
                        .model(model)
                        .voice(voice)
                        .format(SpeechSynthesisAudioFormat
                                .PCM_22050HZ_MONO_16BIT)
                        .build();
        SpeechSynthesizer synthesizer =
                new SpeechSynthesizer(param, new ReactCallback(playbackRunnable));
        Flowable<GenerationResult> result = gen.streamCall(genParam);
        result.blockingForEach(message -> {
            String text =
                    message.getOutput().getChoices().get(0).getMessage().getContent();
            System.out.println("LLM output:" + text);
            synthesizer.streamingCall(text);
        });
        synthesizer.streamingComplete();
        System.out.print("requestId: " + synthesizer.getLastRequestId());
        try {
            // Wait for the playback thread to finish playing all
            playbackThread.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws NoApiKeyException, InputRequiredException {
        process();
        System.exit(0);
    }
}

Sambert

Save the synthesized audio to a file

import dashscope
from dashscope.audio.tts import SpeechSynthesizer

# If the API Key is not configured as an environment variable, uncomment the next line and replace apiKey with your own API Key
# dashscope.api_key = "apiKey"
result = SpeechSynthesizer.call(model='sambert-zhichu-v1',
                                # When the language of the text changes, make sure the model matches it. Different models support different languages; see the "Language" column in the Sambert voice list.
                                text='今天天气怎么样',
                                sample_rate=48000,
                                format='wav')
print('requestId: ', result.get_response()['request_id'])
if result.get_audio_data() is not None:
    with open('output.wav', 'wb') as f:
        f.write(result.get_audio_data())
print(' get response: %s' % (result.get_response()))

import com.alibaba.dashscope.audio.tts.SpeechSynthesizer;
import com.alibaba.dashscope.audio.tts.SpeechSynthesisParam;
import com.alibaba.dashscope.audio.tts.SpeechSynthesisResult;
import com.alibaba.dashscope.audio.tts.SpeechSynthesisAudioFormat;
import com.alibaba.dashscope.common.ResultCallback;
import com.alibaba.dashscope.common.Status;

import java.io.*;
import java.nio.ByteBuffer;

public class Main {

    public static void SyncAudioDataToFile() {
        SpeechSynthesizer synthesizer = new SpeechSynthesizer();
        SpeechSynthesisParam param = SpeechSynthesisParam.builder()
          // If the API Key is not configured as an environment variable, uncomment the next line and replace apikey with your own API Key
          // .apiKey(apikey)
          .model("sambert-zhichu-v1")
          // When the language of the text changes, make sure the model matches it. Different models support different languages; see the "Language" column in the Sambert voice list.
          .text("今天天气怎么样")
          .sampleRate(48000)
          .format(SpeechSynthesisAudioFormat.WAV)
          .build();

        File file = new File("output.wav");
        // Call the call method with param to get the synthesized audio
        ByteBuffer audio = synthesizer.call(param);
        System.out.println("requestId: " + synthesizer.getLastRequestId());
        try (FileOutputStream fos = new FileOutputStream(file)) {
            fos.write(audio.array());
            System.out.println("synthesis done!");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        SyncAudioDataToFile();
        System.exit(0);
    }
}

Play the synthesized audio through the speaker

After synthesis, the audio returned in real time is played through a local device.

Before running the Python example, install the third-party audio playback library with pip.

# coding=utf-8
#
# Installation instructions for pyaudio:
# APPLE Mac OS X
#   brew install portaudio
#   pip install pyaudio
# Debian/Ubuntu
#   sudo apt-get install python-pyaudio python3-pyaudio
#   or
#   pip install pyaudio
# CentOS
#   sudo yum install -y portaudio portaudio-devel && pip install pyaudio
# Microsoft Windows
#   python -m pip install pyaudio

import dashscope
import sys
import pyaudio
from dashscope.api_entities.dashscope_response import SpeechSynthesisResponse
from dashscope.audio.tts import ResultCallback, SpeechSynthesizer, SpeechSynthesisResult

# If the API Key is not configured as an environment variable, uncomment the next line and replace apiKey with your own API Key
# dashscope.api_key = "apiKey"

class Callback(ResultCallback):
    _player = None
    _stream = None

    def on_open(self):
        print('Speech synthesizer is opened.')
        self._player = pyaudio.PyAudio()
        self._stream = self._player.open(
            format=pyaudio.paInt16,
            channels=1,
            rate=48000,
            output=True)

    def on_complete(self):
        print('Speech synthesizer is completed.')

    def on_error(self, response: SpeechSynthesisResponse):
        print('Speech synthesizer failed, response is %s' % (str(response)))

    def on_close(self):
        print('Speech synthesizer is closed.')
        self._stream.stop_stream()
        self._stream.close()
        self._player.terminate()

    def on_event(self, result: SpeechSynthesisResult):
        if result.get_audio_frame() is not None:
            print('audio result length:', sys.getsizeof(result.get_audio_frame()))
            self._stream.write(result.get_audio_frame())

        if result.get_timestamp() is not None:
            print('timestamp result:', str(result.get_timestamp()))

callback = Callback()
result = SpeechSynthesizer.call(model='sambert-zhichu-v1',
                       text='今天天气怎么样',
                       sample_rate=48000,
                       format='pcm',
                       callback=callback)
print('requestId: ', result.get_response()['request_id'])
import com.alibaba.dashscope.audio.tts.SpeechSynthesizer;
import com.alibaba.dashscope.audio.tts.SpeechSynthesisAudioFormat;
import com.alibaba.dashscope.audio.tts.SpeechSynthesisParam;
import com.alibaba.dashscope.audio.tts.SpeechSynthesisResult;
import com.alibaba.dashscope.common.ResultCallback;

import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

import javax.sound.sampled.*;

public class Main {

    public static void StreamAudioDataToSpeaker() {
        CountDownLatch latch = new CountDownLatch(1);
        SpeechSynthesizer synthesizer = new SpeechSynthesizer();
        SpeechSynthesisParam param =
                SpeechSynthesisParam.builder()
                        // If the API Key is not configured as an environment variable, uncomment the next line and replace apikey with your own API Key
                        // .apiKey("apikey")
                        .text("今天天气怎么样")
                        .model("sambert-zhichu-v1")
                        .sampleRate(48000)
                        .format(SpeechSynthesisAudioFormat.PCM) // Use PCM or MP3 for streaming synthesis
                        .build();

        // Playback thread
        class PlaybackRunnable implements Runnable {
            // Set the audio format according to your own device, the synthesized audio parameters, and your platform.
            // 48 kHz, 16-bit, mono is used here; choose another sample rate and format based on the model's sample rate and device compatibility.
            private AudioFormat af = new AudioFormat(48000, 16, 1, true, false);
            private DataLine.Info info = new DataLine.Info(SourceDataLine.class, af);
            private SourceDataLine targetSource = null;
            private AtomicBoolean runFlag = new AtomicBoolean(true);
            private ConcurrentLinkedQueue<ByteBuffer> queue = new ConcurrentLinkedQueue<>();

            // Prepare the player
            public void prepare() throws LineUnavailableException {
                targetSource = (SourceDataLine) AudioSystem.getLine(info);
                targetSource.open(af, 4096);
                targetSource.start();
            }

            public void put(ByteBuffer buffer) {
                queue.add(buffer);
            }

            // Stop playback
            public void stop() {
                runFlag.set(false);
            }

            @Override
            public void run() {
                if (targetSource == null) {
                    return;
                }

                while (runFlag.get()) {
                    if (queue.isEmpty()) {
                        try {
                            Thread.sleep(100);
                        } catch (InterruptedException e) {

                        }
                        continue;
                    }

                    ByteBuffer buffer = queue.poll();
                    if (buffer == null) {
                        continue;
                    }

                    byte[] data = buffer.array();
                    targetSource.write(data, 0, data.length);
                }

                // Play whatever remains in the queue
                if (!queue.isEmpty()) {
                    ByteBuffer buffer = null;
                    while ((buffer = queue.poll()) != null) {
                        byte[] data = buffer.array();
                        targetSource.write(data, 0, data.length);
                    }
                }

                // Release the player
                targetSource.drain();
                targetSource.stop();
                targetSource.close();
            }
        }

        // Create a subclass of ResultCallback<SpeechSynthesisResult> to implement the callback interface
        class ReactCallback extends ResultCallback<SpeechSynthesisResult> {
            private PlaybackRunnable playbackRunnable = null;

            public ReactCallback(PlaybackRunnable playbackRunnable) {
                this.playbackRunnable = playbackRunnable;
            }

            // Called when the service returns a streaming synthesis result
            @Override
            public void onEvent(SpeechSynthesisResult result) {
                // Get the binary data of the streaming result via getAudioFrame
                if (result.getAudioFrame() != null) {
                    // Push the data to the player as a stream
                    playbackRunnable.put(result.getAudioFrame());
                }
            }

            // Called when the service finishes synthesis
            @Override
            public void onComplete() {
                // Tell the playback thread to stop
                playbackRunnable.stop();
                latch.countDown();
            }

            // Called when an error occurs
            @Override
            public void onError(Exception e) {
                // Tell the playback thread to stop
                System.out.println(e);
                playbackRunnable.stop();
                latch.countDown();
            }
        }

        PlaybackRunnable playbackRunnable = new PlaybackRunnable();
        try {
            playbackRunnable.prepare();
        } catch (LineUnavailableException e) {
            throw new RuntimeException(e);
        }
        Thread playbackThread = new Thread(playbackRunnable);
        // Start the playback thread
        playbackThread.start();
        // A call with a callback does not block the current thread
        synthesizer.call(param, new ReactCallback(playbackRunnable));
        System.out.println("requestId: " + synthesizer.getLastRequestId());
        // Wait for synthesis to complete
        try {
            latch.await();
            // Wait for the playback thread to finish playing everything
            playbackThread.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        StreamAudioDataToSpeaker();
        System.exit(0);
    }
}

API reference

Model application listing and filing

See Application compliance filing.

Model feature comparison

| Feature | cosyvoice-v3-plus | cosyvoice-v3 | cosyvoice-v2 | cosyvoice-v1 | Sambert |
| --- | --- | --- | --- | --- | --- |
| Supported languages | Chinese, English | Chinese, English | Varies by voice: Chinese (Mandarin, Northeastern Mandarin, Shaanxi dialect, Hokkien, Cantonese), English (British, American), Korean, Japanese | Varies by voice: Chinese (Mandarin, Northeastern Mandarin), English | Varies by voice: Chinese, English, American English, Italian, Spanish, Indonesian, French, German, Thai |
| Audio formats | pcm, wav, mp3, opus | pcm, wav, mp3, opus | pcm, wav, mp3, opus | pcm, wav, mp3, opus | pcm, wav, mp3 |
| Sample rates | 8 kHz, 16 kHz, 22.05 kHz, 24 kHz, 44.1 kHz, 48 kHz | 8 kHz, 16 kHz, 22.05 kHz, 24 kHz, 44.1 kHz, 48 kHz | 8 kHz, 16 kHz, 22.05 kHz, 24 kHz, 44.1 kHz, 48 kHz | 8 kHz, 16 kHz, 22.05 kHz, 24 kHz, 44.1 kHz, 48 kHz | 16 kHz, 48 kHz |
| Voice cloning | Supported | Supported | Supported | Supported | Not supported |
| SSML | Not supported | Not supported | Supported | Not supported | Supported |
| LaTeX | Supported | Supported | Supported | Not supported | Not supported |
| Volume adjustment | Supported | Supported | Supported | Supported | Supported |
| Speech-rate adjustment | Supported | Supported | Supported | Supported | Supported |
| Pitch adjustment | Supported | Supported | Supported | Supported | Supported |
| Bit-rate adjustment | Supported | Supported | Supported | Supported | Not supported |
| Timestamps (see the sketch below) | Not supported | Not supported | Supported (off by default, can be enabled) | Not supported | Supported (off by default, can be enabled) |
| Emotion settings | Supported | Supported | Not supported | Not supported | Not supported |
| Streaming input | Supported | Supported | Supported | Supported | Not supported |
| Streaming output | Supported | Supported | Supported | Supported | Supported |
| Rate limit (RPS) | 3 | 3 | 3 | 3 | 20 |
| Access methods | Java/Python/Android/iOS SDK, WebSocket API | Java/Python/Android/iOS SDK, WebSocket API | Java/Python/Android/iOS SDK, WebSocket API | Java/Python/Android/iOS SDK, WebSocket API | Java/Python/Android/iOS SDK, WebSocket API |
| Price | CNY 2 / 10,000 characters | CNY 0.4 / 10,000 characters | CNY 2 / 10,000 characters | CNY 2 / 10,000 characters | CNY 1 / 10,000 characters |
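
Timestamps are off by default and must be enabled explicitly in the request. The sketch below shows one way to do this for Sambert with the DashScope Python SDK; the word_timestamp_enabled and phoneme_timestamp_enabled parameter names are assumptions to be verified against the Sambert API reference, and in streaming mode timestamps arrive through the callback's get_timestamp(), as in the speaker-playback example above.

# Sketch: request word/phoneme timestamps from Sambert (assumed parameter names; verify against the API reference)
from dashscope.audio.tts import SpeechSynthesizer

result = SpeechSynthesizer.call(
    model='sambert-zhichu-v1',
    text='今天天气怎么样',
    sample_rate=48000,
    format='wav',
    word_timestamp_enabled=True,      # assumed switch for word-level timestamps
    phoneme_timestamp_enabled=True,   # assumed switch for phoneme-level timestamps
)
# With timestamps enabled, alignment details are returned in the response payload
print(result.get_response())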

FAQ

Q: What should I do when the synthesized speech mispronounces a word? How do I control the pronunciation of polyphonic characters?

  • Replace the polyphonic character with another character that has the desired pronunciation; this is a quick fix.

  • Use SSML markup to control pronunciation: Sambert and cosyvoice-v2 support SSML (see the sketch below).
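
To illustrate the SSML approach, the sketch below passes SSML markup instead of plain text to a model that supports it. The <speak> wrapper and the <phoneme> pinyin override are shown for illustration only; the exact tag set and attribute syntax accepted by Sambert and cosyvoice-v2 must be checked against the SSML reference.

# Sketch: force the pinyin reading of a polyphonic character with SSML
# (illustrative tags only; check the SSML reference for the syntax each model accepts)
from dashscope.audio.tts_v2 import SpeechSynthesizer

ssml_text = (
    '<speak>'
    '这个字读作<phoneme alphabet="py" ph="chong2">重</phoneme>。'
    '</speak>'
)

synthesizer = SpeechSynthesizer(model="cosyvoice-v2", voice="longxiaochun_v2")
audio = synthesizer.call(ssml_text)
with open("ssml_output.mp3", "wb") as f:
    f.write(audio)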