Lightweight Fine-Tuning and Inference with ChatGLM-6B

Last updated: 2024-04-17 10:01:25


Tutorial Overview

In this tutorial, you will learn how to fine-tune and run inference with the ChatGLM-6B language model in Alibaba Cloud Data Science Workshop (PAI-DSW).

ChatGLM-6B is an open-source bilingual (Chinese-English) conversational language model based on the General Language Model (GLM) architecture, with 6.2 billion parameters. Thanks to model quantization, you can deploy it locally on consumer-grade GPUs; at the INT4 quantization level it requires as little as 6 GB of GPU memory. ChatGLM-6B uses technology similar to ChatGPT and is optimized for Chinese question answering and dialogue. Trained on about 1T tokens of Chinese and English text, and further optimized with supervised fine-tuning, feedback bootstrapping, and reinforcement learning from human feedback, this 6.2-billion-parameter model can generate answers that align well with human preferences.
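The memory figures above can be sanity-checked with a back-of-the-envelope calculation. The sketch below is an illustrative estimate, not an official sizing guide: it computes the storage needed for the weights alone of a 6.2-billion-parameter model at several precisions. Runtime overhead (activations, KV cache, CUDA context) comes on top of the raw weights, which is why INT4 needs about 6 GB rather than the roughly 3 GB the weights themselves occupy.

```python
# Rough GPU memory needed just to hold the weights of a
# 6.2B-parameter model at different quantization levels.
# Illustrative estimate only; runtime overhead comes on top.
PARAMS = 6.2e9  # ChatGLM-6B parameter count

def weight_gib(bits_per_param: int) -> float:
    """Weight storage in GiB at the given precision."""
    return PARAMS * bits_per_param / 8 / 1024**3

for name, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: ~{weight_gib(bits):.1f} GiB")
```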

The figure below shows example WebUI inference results from the fine-tuned ChatGLM-6B model.

What You Will Learn

  • How to train and run inference on models in DSW.

  • How to fine-tune and run inference based on the open-source ChatGLM language model.

Difficulty

Manual configuration

Time Required

45 minutes

Alibaba Cloud Products Used

Data Science Workshop (PAI-DSW)

Alibaba Cloud provides free PAI-DSW resources for trial in the China (Beijing), China (Shanghai), China (Hangzhou), and China (Shenzhen) regions. You can apply for a trial in whichever region you need; this tutorial uses China (Hangzhou) as an example.

[Important]: The free PAI-DSW resource package applies only to the PAI-DSW product used in this tutorial. If, after claiming the PAI-DSW resource package, you use PAI-DSW together with other PAI products or features (such as PAI-DLC or PAI-EAS), only the fees incurred by PAI-DSW are deducted from the resource package; fees incurred by the other products cannot be deducted and will generate a corresponding bill.

Cost

CNY 0

Prepare the Environment and Resources

Estimated time: 5 minutes

Before you start the tutorial, prepare the environment and resources as follows:

[Important]: The free PAI-DSW resource package applies only to the PAI-DSW product used in this tutorial. If, after claiming the PAI-DSW resource package, you use PAI-DSW together with other PAI products or features (such as PAI-DLC or PAI-EAS), only the fees incurred by PAI-DSW are deducted from the resource package; fees incurred by the other products cannot be deducted and will generate a corresponding bill.

  1. Go to the Alibaba Cloud Free Trial page. Click the Log In/Sign Up button in the upper-right corner and follow the prompts to log in (if you already have an Alibaba Cloud account), register (if you do not), or complete real-name verification (individual or enterprise, as required by the trial product).

  2. After logging in, choose AI & Machine Learning > Machine Learning Platform under the product categories, and click Try Now on the Data Science Workshop (PAI-DSW) card.

    [Note]: If you have previously applied for the free PAI trial resource package, the page will show it as already in trial. In that case, click the Trialing button to go directly to the PAI console.

  3. In the Data Science Workshop (PAI-DSW) panel, select the service agreement checkbox and click Try Now to open the free activation page.

    [Important] The following situations may incur additional fees:

    • You use a billable resource type other than the free resource types:

      You applied for the free PAI-DSW resource package, but the DSW instance you created uses a resource type that is not provided by the Alibaba Cloud free trial. The resource types currently available for free trial are ecs.gn6v-c8g1.2xlarge, ecs.g6.xlarge, and ecs.gn7i-c8g1.2xlarge.

    • The free resource package you applied for does not match the product you use:

      • You created a DSW instance, but the free resource package you applied for is for DLC or EAS. Fees incurred by DSW cannot be deducted from that package and will generate a pay-as-you-go bill.

      • You applied for the DSW free resource package, but the product you use is DLC or EAS. Fees incurred by DLC or EAS cannot be deducted from the DSW free resource package and will generate a pay-as-you-go bill.

    • The free quota is used up or the trial period has ended:

      After claiming the free resource package, use it within the free quota and the validity period. If you continue to use compute resources after the free quota is exhausted or the trial period ends, a pay-as-you-go bill is generated.

      To check the remaining free quota and the expiration time, go to the Resource Instance Management page, as shown in the figure below.

  4. Activate Machine Learning Platform for AI (PAI) and create a default workspace. The key parameters are described below; for details, see Activate PAI and create a default workspace.

    • Region: this tutorial uses China (Hangzhou). You can also choose China (Beijing), China (Shanghai), or China (Shenzhen) as needed.

    • Click Activate for Free and Create Default Workspace, then configure the order details on the activation page that appears. Key points:

      • This tutorial does not require any other products; clear the checkboxes for the other products in the combined activation section.

      • In the service role authorization section, click Authorize, complete the authorization for PAI as prompted, then return to the activation page, refresh it, and continue with the activation.

  5. After activation succeeds, click Go to PAI Console and create a DSW instance in the default workspace. The key parameters are listed below; keep the default values for the other parameters. For details, see Create a DSW instance.

    [Note]: Creating a DSW instance takes time that depends on current resource availability, typically about 15 minutes. If resources in your region are insufficient, switch to another region that supports the free trial, apply for the trial there, and create the DSW instance.

    • Region and zone: This tutorial uses China (Hangzhou).

    • Instance name: You can customize the instance name; this tutorial uses ChatGLM_test.

    • Resource quota: Select the public resources (pay-as-you-go) GPU specification ecs.gn6v-c8g1.2xlarge.

      [Note]: The Alibaba Cloud free trial provides the following resource types:

      • ecs.gn7i-c8g1.2xlarge (can run this tutorial)

      • ecs.g6.xlarge

      • ecs.gn6v-c8g1.2xlarge (can run this tutorial)

    • Image: Select the official image pytorch-develop:1.12-gpu-py39-cu113-ubuntu20.04.

Open the Tutorial File in DSW

Estimated time: 5 minutes

  1. Go to the PAI-DSW development environment.

    1. Log on to the PAI console.

    2. In the upper-left corner of the page, select the region where your DSW instance resides.

    3. In the left-side navigation pane, click Workspaces, then click the name of the default workspace to enter it.

    4. In the left-side navigation pane, choose Model Development and Training > Data Science Workshop (DSW).

    5. In the Actions column of the instance you want to open, click Open to enter the PAI-DSW development environment.

  2. On the Launcher page of the Notebook tab, click DSW Gallery under Tool in the Quick Start area to open the DSW Gallery page.

  3. On the DSW Gallery page, search for the tutorial "Lightweight fine-tuning and inference with the ChatGLM model" and click Open in DSW on the tutorial card.

    The resources and files required by this tutorial are then automatically downloaded to your DSW instance, and the tutorial file opens automatically once the download completes.

Run the Tutorial File

Estimated time: 25 minutes

In the opened tutorial file chatglm_6b.ipynb, you can read the tutorial text directly and run the command for each step in place. After a step's command finishes successfully, run the next step's command in order.

The tutorial includes the following steps; the output of each step is shown below.

  1. Download the model files and dataset.

    1. Download the ChatGLM-6B model files. Because the model files are large, you can obtain a suitable download URL first to speed up the download. The download takes about 15 minutes.

      Sample output:

      --2023-09-09 15:59:52--  https://atp-modelzoo.oss-cn-hangzhou-internal.aliyuncs.com/release/tutorials/chatGLM/ChatGLM-6B-main.tar.gz
      Resolving atp-modelzoo.oss-cn-hangzhou-internal.aliyuncs.com (atp-modelzoo.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.44, 100.118.28.50, 100.118.28.49, ...
      Connecting to atp-modelzoo.oss-cn-hangzhou-internal.aliyuncs.com (atp-modelzoo.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.44|:443... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 12665436993 (12G) [application/gzip]
      Saving to: ‘ChatGLM-6B-main.tar.gz’
      
      ChatGLM-6B-main.tar 100%[===================>]  11.79G  15.9MB/s    in 10m 0s  
      
      2023-09-09 16:09:52 (20.1 MB/s) - ‘ChatGLM-6B-main.tar.gz’ saved [12665436993/12665436993]
      
      ChatGLM-6B-main/
      ChatGLM-6B-main/README_en.md
      ChatGLM-6B-main/resources/
      ChatGLM-6B-main/resources/web-demo.png
      ChatGLM-6B-main/resources/cli-demo.png
      ChatGLM-6B-main/resources/web-demo.gif
      ChatGLM-6B-main/web_demo2.py
      ChatGLM-6B-main/README.md
      ChatGLM-6B-main/.github/
      ChatGLM-6B-main/.github/ISSUE_TEMPLATE/
      ChatGLM-6B-main/.github/ISSUE_TEMPLATE/feature_request.yml
      ChatGLM-6B-main/.github/ISSUE_TEMPLATE/config.yml
      ChatGLM-6B-main/.github/ISSUE_TEMPLATE/bug_report.yaml
      ChatGLM-6B-main/cli_demo.py
      ChatGLM-6B-main/api.py
      ChatGLM-6B-main/limitations/
      ChatGLM-6B-main/limitations/self-confusion_tencent.jpg
      ChatGLM-6B-main/limitations/factual_error.png
      ChatGLM-6B-main/limitations/self-confusion_openai.jpg
      ChatGLM-6B-main/limitations/self-confusion_google.jpg
      ChatGLM-6B-main/limitations/math_error.png
      ChatGLM-6B-main/MODEL_LICENSE
      ChatGLM-6B-main/requirements.txt
      ChatGLM-6B-main/examples/
      ChatGLM-6B-main/examples/tour-guide.png
      ChatGLM-6B-main/examples/self-introduction.png
      ChatGLM-6B-main/examples/information-extraction.png
      ChatGLM-6B-main/examples/role-play.png
      ChatGLM-6B-main/examples/blog-outline.png
      ChatGLM-6B-main/examples/email-writing-2.png
      ChatGLM-6B-main/examples/ad-writing-2.png
      ChatGLM-6B-main/examples/sport.png
      ChatGLM-6B-main/examples/comments-writing.png
      ChatGLM-6B-main/examples/email-writing-1.png
      ChatGLM-6B-main/ptuning/
      ChatGLM-6B-main/ptuning/arguments.py
      ChatGLM-6B-main/ptuning/README.md
      ChatGLM-6B-main/ptuning/main.py
      ChatGLM-6B-main/ptuning/train.sh
      ChatGLM-6B-main/ptuning/trainer_seq2seq.py
      ChatGLM-6B-main/ptuning/chatglm-6b/
      ChatGLM-6B-main/ptuning/chatglm-6b/pytorch_model-00001-of-00008.bin
      ChatGLM-6B-main/ptuning/chatglm-6b/tokenization_chatglm.py
      ChatGLM-6B-main/ptuning/chatglm-6b/gitattributes.txt
      ChatGLM-6B-main/ptuning/chatglm-6b/README.md
      ChatGLM-6B-main/ptuning/chatglm-6b/ice_text.model
      ChatGLM-6B-main/ptuning/chatglm-6b/pytorch_model-00005-of-00008.bin
      ChatGLM-6B-main/ptuning/chatglm-6b/pytorch_model-00006-of-00008.bin
      ChatGLM-6B-main/ptuning/chatglm-6b/tokenizer_config.json
      ChatGLM-6B-main/ptuning/chatglm-6b/pytorch_model-00003-of-00008.bin
      ChatGLM-6B-main/ptuning/chatglm-6b/config.json
      ChatGLM-6B-main/ptuning/chatglm-6b/quantization.py
      ChatGLM-6B-main/ptuning/chatglm-6b/configuration_chatglm.py
      ChatGLM-6B-main/ptuning/chatglm-6b/pytorch_model-00007-of-00008.bin
      ChatGLM-6B-main/ptuning/chatglm-6b/pytorch_model-00008-of-00008.bin
      ChatGLM-6B-main/ptuning/chatglm-6b/pytorch_model-00002-of-00008.bin
      ChatGLM-6B-main/ptuning/chatglm-6b/pytorch_model.bin.index.json
      ChatGLM-6B-main/ptuning/chatglm-6b/pytorch_model-00004-of-00008.bin
      ChatGLM-6B-main/ptuning/chatglm-6b/LICENSE.txt
      ChatGLM-6B-main/ptuning/chatglm-6b/modeling_chatglm.py
      ChatGLM-6B-main/ptuning/chatglm-6b/MODEL_LICENSE.txt
      ChatGLM-6B-main/ptuning/evaluate.sh
      ChatGLM-6B-main/LICENSE
      ChatGLM-6B-main/web_demo.py
    2. Install the libraries required for model training. You can ignore the ERROR and WARNING messages in the output.

      Sample output:

      Looking in indexes: https://mirrors.cloud.aliyuncs.com/pypi/simple
      Collecting protobuf<3.20.1,>=3.19.5
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/7f/d9/6b9e97c6498a29c5e99badce383a8711c4f0ff586a464851b3f8b06cc66d/protobuf-3.20.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.0 MB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.0/1.0 MB 19.9 MB/s eta 0:00:00a 0:00:01
      Collecting transformers==4.27.1
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/6d/9b/2f536f9e73390209e0b27b74691355dac494b7ec8154f3012fdc6debbae7/transformers-4.27.1-py3-none-any.whl (6.7 MB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.7/6.7 MB 50.2 MB/s eta 0:00:0000:0100:01
      Collecting icetk
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/bf/8a/731927e0901273815b779e6ce0e081a95ecf78835ff80be30830505ae06c/icetk-0.0.7-py3-none-any.whl (16 kB)
      Collecting cpm_kernels
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/af/84/1831ce6ffa87b8fd4d9673c3595d0fc4e6631c0691eb43f406d3bf89b951/cpm_kernels-1.0.11-py3-none-any.whl (416 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 416.6/416.6 kB 70.9 MB/s eta 0:00:00
      Requirement already satisfied: torch>=1.10 in /home/pai/lib/python3.9/site-packages (from -r requirements.txt (line 5)) (1.12.1+cu113)
      Collecting gradio
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/fd/ef/0ba6ceb0239a690a7a87dee2c5457be4759a6135fdecde7b83c1ecdec212/gradio-3.43.2-py3-none-any.whl (20.1 MB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 20.1/20.1 MB 72.8 MB/s eta 0:00:0000:0100:01
      Requirement already satisfied: tqdm>=4.27 in /home/pai/lib/python3.9/site-packages (from transformers==4.27.1->-r requirements.txt (line 2)) (4.65.0)
      Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /home/pai/lib/python3.9/site-packages (from transformers==4.27.1->-r requirements.txt (line 2)) (0.13.3)
      Requirement already satisfied: numpy>=1.17 in /home/pai/lib/python3.9/site-packages (from transformers==4.27.1->-r requirements.txt (line 2)) (1.23.5)
      Requirement already satisfied: huggingface-hub<1.0,>=0.11.0 in /home/pai/lib/python3.9/site-packages (from transformers==4.27.1->-r requirements.txt (line 2)) (0.13.4)
      Requirement already satisfied: requests in /home/pai/lib/python3.9/site-packages (from transformers==4.27.1->-r requirements.txt (line 2)) (2.28.1)
      Requirement already satisfied: pyyaml>=5.1 in /home/pai/lib/python3.9/site-packages (from transformers==4.27.1->-r requirements.txt (line 2)) (6.0)
      Requirement already satisfied: filelock in /home/pai/lib/python3.9/site-packages (from transformers==4.27.1->-r requirements.txt (line 2)) (3.11.0)
      Requirement already satisfied: regex!=2019.12.17 in /home/pai/lib/python3.9/site-packages (from transformers==4.27.1->-r requirements.txt (line 2)) (2023.3.23)
      Requirement already satisfied: packaging>=20.0 in /home/pai/lib/python3.9/site-packages (from transformers==4.27.1->-r requirements.txt (line 2)) (23.0)
      Collecting sentencepiece
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/6b/22/4157918b2112d47014fb1e79b0dd6d5a141b8d1b049bae695d405150ebaf/sentencepiece-0.1.99-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 35.6 MB/s eta 0:00:0000:01
      Collecting icetk
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/ca/eb/db3b8d7e891a959bd53641019f7b7e0ece6bfe9d89a6316d011bb6e0afd2/icetk-0.0.6-py3-none-any.whl (15 kB)
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/1e/21/4dc97f0ffc0b833dd3bf667e214e65f98fc361af56a82b34039383a9e05c/icetk-0.0.5-py3-none-any.whl (15 kB)
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/18/c6/1fe059fff5d532122b5a93be15b23bc9eedb5e0eda24d51ae9e389584f17/icetk-0.0.4-py3-none-any.whl (15 kB)
      Requirement already satisfied: torchvision in /home/pai/lib/python3.9/site-packages (from icetk->-r requirements.txt (line 3)) (0.13.1+cu113)
      Requirement already satisfied: typing-extensions in /home/pai/lib/python3.9/site-packages (from torch>=1.10->-r requirements.txt (line 5)) (4.5.0)
      Collecting huggingface-hub<1.0,>=0.11.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/7f/c4/adcbe9a696c135578cabcbdd7331332daad4d49b7c43688bc2d36b3a47d2/huggingface_hub-0.16.4-py3-none-any.whl (268 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 268.8/268.8 kB 41.5 MB/s eta 0:00:00
      Collecting fastapi
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/76/e5/ca411b260caa4e72f9ac5482f331fe74fd4eb5b97aa74d1d2806ccf07e2c/fastapi-0.103.1-py3-none-any.whl (66 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 66.2/66.2 kB 19.3 MB/s eta 0:00:00
      Requirement already satisfied: jinja2<4.0 in /home/pai/lib/python3.9/site-packages (from gradio->-r requirements.txt (line 6)) (3.1.2)
      Requirement already satisfied: matplotlib~=3.0 in /home/pai/lib/python3.9/site-packages (from gradio->-r requirements.txt (line 6)) (3.5.2)
      Collecting python-multipart
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/b4/ff/b1e11d8bffb5e0e1b6d27f402eeedbeb9be6df2cdbc09356a1ae49806dbf/python_multipart-0.0.6-py3-none-any.whl (45 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.7/45.7 kB 16.7 MB/s eta 0:00:00
      Collecting httpx
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/ec/91/e41f64f03d2a13aee7e8c819d82ee3aa7cdc484d18c0ae859742597d5aa0/httpx-0.24.1-py3-none-any.whl (75 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.4/75.4 kB 20.8 MB/s eta 0:00:00
      Collecting pydub
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/a6/53/d78dc063216e62fc55f6b2eebb447f6a4b0a59f55c8406376f76bf959b08/pydub-0.25.1-py2.py3-none-any.whl (32 kB)
      Collecting uvicorn>=0.14.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/79/96/b0882a1c3f7ef3dd86879e041212ae5b62b4bd352320889231cc735a8e8f/uvicorn-0.23.2-py3-none-any.whl (59 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 59.5/59.5 kB 24.1 MB/s eta 0:00:00
      Requirement already satisfied: pandas<3.0,>=1.0 in /home/pai/lib/python3.9/site-packages (from gradio->-r requirements.txt (line 6)) (2.0.0)
      Collecting pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,<3.0.0,>=1.7.4
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/82/06/fafdc75e48b248eff364b4249af4bcc6952225e8f20e8205820afc66e88e/pydantic-2.3.0-py3-none-any.whl (374 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 374.5/374.5 kB 29.4 MB/s eta 0:00:00
      Collecting ffmpy
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/9a/06/49b275a312eb207e2a2718a7414dedfded05088437352b67aaa9a355f948/ffmpy-0.3.1.tar.gz (5.5 kB)
        Preparing metadata (setup.py) ... done
      Collecting gradio-client==0.5.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/fe/85/ec0323f39192c4bee04e8e06e64213aff816b9d1b61c3c8367e75b1c7e10/gradio_client-0.5.0-py3-none-any.whl (298 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 298.2/298.2 kB 24.9 MB/s eta 0:00:00
      Collecting orjson~=3.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/0e/31/49db18d0728852eea1f633e92cf189acf819508da9ff1b30c99baf401c85/orjson-3.9.7-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (138 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 138.6/138.6 kB 19.9 MB/s eta 0:00:00
      Collecting semantic-version~=2.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/6a/23/8146aad7d88f4fcb3a6218f41a60f6c2d4e3a72de72da1825dc7c8f7877c/semantic_version-2.10.0-py2.py3-none-any.whl (15 kB)
      Requirement already satisfied: pillow<11.0,>=8.0 in /home/pai/lib/python3.9/site-packages (from gradio->-r requirements.txt (line 6)) (9.5.0)
      Requirement already satisfied: importlib-resources<7.0,>=1.3 in /home/pai/lib/python3.9/site-packages (from gradio->-r requirements.txt (line 6)) (5.12.0)
      Requirement already satisfied: websockets<12.0,>=10.0 in /home/pai/lib/python3.9/site-packages (from gradio->-r requirements.txt (line 6)) (11.0.1)
      Requirement already satisfied: markupsafe~=2.0 in /home/pai/lib/python3.9/site-packages (from gradio->-r requirements.txt (line 6)) (2.1.2)
      Collecting aiofiles<24.0,>=22.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/c5/19/5af6804c4cc0fed83f47bff6e413a98a36618e7d40185cd36e69737f3b0e/aiofiles-23.2.1-py3-none-any.whl (15 kB)
      Collecting altair<6.0,>=4.2.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/f2/b4/02a0221bd1da91f6e6acdf0525528db24b4b326a670a9048da474dfe0667/altair-5.1.1-py3-none-any.whl (520 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 520.6/520.6 kB 40.1 MB/s eta 0:00:00
      Collecting fsspec
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/3a/9f/b40e8e5be886143379000af5fc0c675352d59e82fd869d24bf784161dc77/fsspec-2023.9.0-py3-none-any.whl (173 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 173.2/173.2 kB 30.8 MB/s eta 0:00:00
      Requirement already satisfied: toolz in /home/pai/lib/python3.9/site-packages (from altair<6.0,>=4.2.0->gradio->-r requirements.txt (line 6)) (0.12.0)
      Requirement already satisfied: jsonschema>=3.0 in /home/pai/lib/python3.9/site-packages (from altair<6.0,>=4.2.0->gradio->-r requirements.txt (line 6)) (4.17.3)
      Requirement already satisfied: zipp>=3.1.0 in /home/pai/lib/python3.9/site-packages (from importlib-resources<7.0,>=1.3->gradio->-r requirements.txt (line 6)) (3.15.0)
      Requirement already satisfied: fonttools>=4.22.0 in /home/pai/lib/python3.9/site-packages (from matplotlib~=3.0->gradio->-r requirements.txt (line 6)) (4.39.3)
      Requirement already satisfied: pyparsing>=2.2.1 in /home/pai/lib/python3.9/site-packages (from matplotlib~=3.0->gradio->-r requirements.txt (line 6)) (3.0.9)
      Requirement already satisfied: kiwisolver>=1.0.1 in /home/pai/lib/python3.9/site-packages (from matplotlib~=3.0->gradio->-r requirements.txt (line 6)) (1.4.4)
      Requirement already satisfied: python-dateutil>=2.7 in /home/pai/lib/python3.9/site-packages (from matplotlib~=3.0->gradio->-r requirements.txt (line 6)) (2.8.2)
      Requirement already satisfied: cycler>=0.10 in /home/pai/lib/python3.9/site-packages (from matplotlib~=3.0->gradio->-r requirements.txt (line 6)) (0.11.0)
      Requirement already satisfied: pytz>=2020.1 in /home/pai/lib/python3.9/site-packages (from pandas<3.0,>=1.0->gradio->-r requirements.txt (line 6)) (2023.3)
      Requirement already satisfied: tzdata>=2022.1 in /home/pai/lib/python3.9/site-packages (from pandas<3.0,>=1.0->gradio->-r requirements.txt (line 6)) (2023.3)
      Collecting annotated-types>=0.4.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/d8/f0/a2ee543a96cc624c35a9086f39b1ed2aa403c6d355dfe47a11ee5c64a164/annotated_types-0.5.0-py3-none-any.whl (11 kB)
      Collecting pydantic-core==2.6.3
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/18/54/6d64dff3e49e7faf4f5b989b49e46dd8b592d1e3f3db2113f4aaf1defdd3/pydantic_core-2.6.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.9 MB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.9/1.9 MB 51.3 MB/s eta 0:00:0000:01
      Collecting typing-extensions
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/ec/6b/63cc3df74987c36fe26157ee12e09e8f9db4de771e0f3404263117e75b95/typing_extensions-4.7.1-py3-none-any.whl (33 kB)
      Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/pai/lib/python3.9/site-packages (from requests->transformers==4.27.1->-r requirements.txt (line 2)) (1.26.13)
      Requirement already satisfied: certifi>=2017.4.17 in /home/pai/lib/python3.9/site-packages (from requests->transformers==4.27.1->-r requirements.txt (line 2)) (2022.12.7)
      Requirement already satisfied: idna<4,>=2.5 in /home/pai/lib/python3.9/site-packages (from requests->transformers==4.27.1->-r requirements.txt (line 2)) (3.4)
      Requirement already satisfied: charset-normalizer<3,>=2 in /home/pai/lib/python3.9/site-packages (from requests->transformers==4.27.1->-r requirements.txt (line 2)) (2.0.4)
      Collecting h11>=0.8
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/95/04/ff642e65ad6b90db43e668d70ffb6736436c7ce41fcc549f4e9472234127/h11-0.14.0-py3-none-any.whl (58 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.3/58.3 kB 23.6 MB/s eta 0:00:00
      Collecting click>=7.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/00/2e/d53fa4befbf2cfa713304affc7ca780ce4fc1fd8710527771b58311a3229/click-8.1.7-py3-none-any.whl (97 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.9/97.9 kB 29.4 MB/s eta 0:00:00
      Collecting starlette<0.28.0,>=0.27.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/58/f8/e2cca22387965584a409795913b774235752be4176d276714e15e1a58884/starlette-0.27.0-py3-none-any.whl (66 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 67.0/67.0 kB 25.1 MB/s eta 0:00:00
      Collecting anyio<4.0.0,>=3.7.1
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/19/24/44299477fe7dcc9cb58d0a57d5a7588d6af2ff403fdd2d47a246c91a3246/anyio-3.7.1-py3-none-any.whl (80 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 80.9/80.9 kB 28.2 MB/s eta 0:00:00
      Requirement already satisfied: sniffio in /home/pai/lib/python3.9/site-packages (from httpx->gradio->-r requirements.txt (line 6)) (1.3.0)
      Collecting httpcore<0.18.0,>=0.15.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/94/2c/2bde7ff8dd2064395555220cbf7cba79991172bf5315a07eb3ac7688d9f1/httpcore-0.17.3-py3-none-any.whl (74 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 74.5/74.5 kB 27.5 MB/s eta 0:00:00
      Collecting exceptiongroup
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/ad/83/b71e58666f156a39fb29417e4c8ca4bc7400c0dd4ed9e8842ab54dc8c344/exceptiongroup-1.1.3-py3-none-any.whl (14 kB)
      Requirement already satisfied: attrs>=17.4.0 in /home/pai/lib/python3.9/site-packages (from jsonschema>=3.0->altair<6.0,>=4.2.0->gradio->-r requirements.txt (line 6)) (22.2.0)
      Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /home/pai/lib/python3.9/site-packages (from jsonschema>=3.0->altair<6.0,>=4.2.0->gradio->-r requirements.txt (line 6)) (0.19.3)
      Requirement already satisfied: six>=1.5 in /home/pai/lib/python3.9/site-packages (from python-dateutil>=2.7->matplotlib~=3.0->gradio->-r requirements.txt (line 6)) (1.16.0)
      Building wheels for collected packages: ffmpy
        Building wheel for ffmpy (setup.py) ... done
        Created wheel for ffmpy: filename=ffmpy-0.3.1-py3-none-any.whl size=5580 sha256=982021c43cfb5ed5b4225b04604b1ab87cf0b0ffabdbf80575a89becb6269c8e
        Stored in directory: /root/.cache/pip/wheels/c4/b0/12/e419f8857084329206be2beacadf848e4f63f8b17bb650c082
      Successfully built ffmpy
      Installing collected packages: sentencepiece, pydub, ffmpy, cpm_kernels, typing-extensions, semantic-version, python-multipart, protobuf, orjson, h11, fsspec, exceptiongroup, click, annotated-types, aiofiles, uvicorn, pydantic-core, huggingface-hub, anyio, transformers, starlette, pydantic, httpcore, altair, icetk, httpx, fastapi, gradio-client, gradio
        Attempting uninstall: typing-extensions
          Found existing installation: typing_extensions 4.5.0
          Uninstalling typing_extensions-4.5.0:
            Successfully uninstalled typing_extensions-4.5.0
        Attempting uninstall: protobuf
          Found existing installation: protobuf 3.20.3
          Uninstalling protobuf-3.20.3:
            Successfully uninstalled protobuf-3.20.3
        Attempting uninstall: huggingface-hub
          Found existing installation: huggingface-hub 0.13.4
          Uninstalling huggingface-hub-0.13.4:
            Successfully uninstalled huggingface-hub-0.13.4
        Attempting uninstall: anyio
          Found existing installation: anyio 3.6.2
          Uninstalling anyio-3.6.2:
            Successfully uninstalled anyio-3.6.2
        Attempting uninstall: transformers
          Found existing installation: transformers 4.27.4
          Uninstalling transformers-4.27.4:
            Successfully uninstalled transformers-4.27.4
      ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
      googleapis-common-protos 1.59.0 requires protobuf!=3.20.0,!=3.20.1,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.19.5, but you have protobuf 3.20.0 which is incompatible.
      Successfully installed aiofiles-23.2.1 altair-5.1.1 annotated-types-0.5.0 anyio-3.7.1 click-8.1.7 cpm_kernels-1.0.11 exceptiongroup-1.1.3 fastapi-0.103.1 ffmpy-0.3.1 fsspec-2023.9.0 gradio-3.43.2 gradio-client-0.5.0 h11-0.14.0 httpcore-0.17.3 httpx-0.24.1 huggingface-hub-0.16.4 icetk-0.0.4 orjson-3.9.7 protobuf-3.20.0 pydantic-2.3.0 pydantic-core-2.6.3 pydub-0.25.1 python-multipart-0.0.6 semantic-version-2.10.0 sentencepiece-0.1.99 starlette-0.27.0 transformers-4.27.1 typing-extensions-4.7.1 uvicorn-0.23.2
      WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
      Looking in indexes: https://mirrors.cloud.aliyuncs.com/pypi/simple
      Collecting rouge_chinese
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/03/0f/394cf877be7b903881020ef7217f7dc644dad158d52a9353fcab22e3464d/rouge_chinese-1.0.3-py3-none-any.whl (21 kB)
      Collecting nltk
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/a6/0a/0d20d2c0f16be91b9fa32a77b76c60f9baf6eba419e5ef5deca17af9c582/nltk-3.8.1-py3-none-any.whl (1.5 MB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 16.5 MB/s eta 0:00:00a 0:00:01
      Collecting jieba
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/c6/cb/18eeb235f833b726522d7ebed54f2278ce28ba9438e3135ab0278d9792a2/jieba-0.42.1.tar.gz (19.2 MB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 19.2/19.2 MB 71.5 MB/s eta 0:00:0000:0100:01
        Preparing metadata (setup.py) ... done
      Collecting datasets
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/09/7e/fd4d6441a541dba61d0acb3c1fd5df53214c2e9033854e837a99dd9e0793/datasets-2.14.5-py3-none-any.whl (519 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 519.6/519.6 kB 34.9 MB/s eta 0:00:00
      Requirement already satisfied: six in /home/pai/lib/python3.9/site-packages (from rouge_chinese) (1.16.0)
      Requirement already satisfied: click in /home/pai/lib/python3.9/site-packages (from nltk) (8.1.7)
      Requirement already satisfied: tqdm in /home/pai/lib/python3.9/site-packages (from nltk) (4.65.0)
      Requirement already satisfied: joblib in /home/pai/lib/python3.9/site-packages (from nltk) (1.2.0)
      Requirement already satisfied: regex>=2021.8.3 in /home/pai/lib/python3.9/site-packages (from nltk) (2023.3.23)
      Requirement already satisfied: numpy>=1.17 in /home/pai/lib/python3.9/site-packages (from datasets) (1.23.5)
      Collecting multiprocess
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/c6/c9/820b5ab056f4ada76fbe05bd481a948f287957d6cbfd59e2dd2618b408c1/multiprocess-0.70.15-py39-none-any.whl (133 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.3/133.3 kB 39.4 MB/s eta 0:00:00
      Requirement already satisfied: huggingface-hub<1.0.0,>=0.14.0 in /home/pai/lib/python3.9/site-packages (from datasets) (0.16.4)
      Requirement already satisfied: aiohttp in /home/pai/lib/python3.9/site-packages (from datasets) (3.8.4)
      Requirement already satisfied: packaging in /home/pai/lib/python3.9/site-packages (from datasets) (23.0)
      Collecting pyarrow>=8.0.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/49/db/0a40d2a5b2382c77536479894ce2900e5f4c40251681a72d397ba6430f8d/pyarrow-13.0.0-cp39-cp39-manylinux_2_28_x86_64.whl (40.1 MB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 40.1/40.1 MB 44.6 MB/s eta 0:00:0000:0100:01
      Requirement already satisfied: pyyaml>=5.1 in /home/pai/lib/python3.9/site-packages (from datasets) (6.0)
      Collecting xxhash
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/45/63/40da996350689cf29db7f8819aafa74c9d36feca4f0e4393d220c619a1dc/xxhash-3.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (193 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 193.8/193.8 kB 52.1 MB/s eta 0:00:00
      Collecting fsspec[http]<2023.9.0,>=2023.1.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/e3/bd/4c0a4619494188a9db5d77e2100ab7d544a42e76b2447869d8e124e981d8/fsspec-2023.6.0-py3-none-any.whl (163 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 163.8/163.8 kB 43.9 MB/s eta 0:00:00
      Requirement already satisfied: requests>=2.19.0 in /home/pai/lib/python3.9/site-packages (from datasets) (2.28.1)
      Requirement already satisfied: pandas in /home/pai/lib/python3.9/site-packages (from datasets) (2.0.0)
      Requirement already satisfied: dill<0.3.8,>=0.3.0 in /home/pai/lib/python3.9/site-packages (from datasets) (0.3.6)
      Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /home/pai/lib/python3.9/site-packages (from aiohttp->datasets) (2.0.4)
      Requirement already satisfied: multidict<7.0,>=4.5 in /home/pai/lib/python3.9/site-packages (from aiohttp->datasets) (6.0.4)
      Requirement already satisfied: yarl<2.0,>=1.0 in /home/pai/lib/python3.9/site-packages (from aiohttp->datasets) (1.8.2)
      Requirement already satisfied: aiosignal>=1.1.2 in /home/pai/lib/python3.9/site-packages (from aiohttp->datasets) (1.3.1)
      Requirement already satisfied: attrs>=17.3.0 in /home/pai/lib/python3.9/site-packages (from aiohttp->datasets) (22.2.0)
      Requirement already satisfied: frozenlist>=1.1.1 in /home/pai/lib/python3.9/site-packages (from aiohttp->datasets) (1.3.3)
      Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /home/pai/lib/python3.9/site-packages (from aiohttp->datasets) (4.0.2)
      Requirement already satisfied: typing-extensions>=3.7.x.x in /home/pai/lib/python3.9/site-packages (from huggingface-hub<1.0.0,>=0.14.0->datasets) (4.7.1)
      Requirement already satisfied: filelock in /home/pai/lib/python3.9/site-packages (from huggingface-hub<1.0.0,>=0.14.0->datasets) (3.11.0)
      Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/pai/lib/python3.9/site-packages (from requests>=2.19.0->datasets) (1.26.13)
      Requirement already satisfied: idna<4,>=2.5 in /home/pai/lib/python3.9/site-packages (from requests>=2.19.0->datasets) (3.4)
      Requirement already satisfied: certifi>=2017.4.17 in /home/pai/lib/python3.9/site-packages (from requests>=2.19.0->datasets) (2022.12.7)
      Collecting dill<0.3.8,>=0.3.0
        Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/f5/3a/74a29b11cf2cdfcd6ba89c0cecd70b37cd1ba7b77978ce611eb7a146a832/dill-0.3.7-py3-none-any.whl (115 kB)
           ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 115.3/115.3 kB 36.8 MB/s eta 0:00:00
      Requirement already satisfied: tzdata>=2022.1 in /home/pai/lib/python3.9/site-packages (from pandas->datasets) (2023.3)
      Requirement already satisfied: pytz>=2020.1 in /home/pai/lib/python3.9/site-packages (from pandas->datasets) (2023.3)
      Requirement already satisfied: python-dateutil>=2.8.2 in /home/pai/lib/python3.9/site-packages (from pandas->datasets) (2.8.2)
      Building wheels for collected packages: jieba
        Building wheel for jieba (setup.py) ... done
        Created wheel for jieba: filename=jieba-0.42.1-py3-none-any.whl size=19314458 sha256=03df29893dbb2b1450b47c6fcd575107854d0f3b85c68a9153aeaea3f13abcc6
        Stored in directory: /root/.cache/pip/wheels/f2/57/b8/6282b0f100b6e2e068b839954be062c44c9580ff8bca2d6437
      Successfully built jieba
      Installing collected packages: jieba, xxhash, rouge_chinese, pyarrow, nltk, fsspec, dill, multiprocess, datasets
        Attempting uninstall: fsspec
          Found existing installation: fsspec 2023.9.0
          Uninstalling fsspec-2023.9.0:
            Successfully uninstalled fsspec-2023.9.0
        Attempting uninstall: dill
          Found existing installation: dill 0.3.6
          Uninstalling dill-0.3.6:
            Successfully uninstalled dill-0.3.6
      Successfully installed datasets-2.14.5 dill-0.3.7 fsspec-2023.6.0 jieba-0.42.1 multiprocess-0.70.15 nltk-3.8.1 pyarrow-13.0.0 rouge_chinese-1.0.3 xxhash-3.3.0
      WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
    3. Download the AdvertiseGen_Simple data provided by PAI. The AdvertiseGen_Simple data will later serve as the training and test datasets for model training.

      Click here to view the results

      --2023-09-09 16:38:52--  https://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/tutorials/chatGLM/AdvertiseGen_Simple.zip
      Resolving atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com (atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com)... 47.101.xx.xx
      Connecting to atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com (atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com)|47.101.xx.xx|:443... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 20198 (20K) [application/zip]
      Saving to: ‘AdvertiseGen_Simple.zip’
      
      AdvertiseGen_Simple 100%[===================>]  19.72K  --.-KB/s    in 0.006s  
      
      2023-09-09 16:38:52 (3.38 MB/s) - ‘AdvertiseGen_Simple.zip’ saved [20198/20198]
      
      Archive:  AdvertiseGen_Simple.zip
         creating: AdvertiseGen_Simple/
        inflating: __MACOSX/._AdvertiseGen_Simple  
        inflating: AdvertiseGen_Simple/dev.json  
        inflating: __MACOSX/AdvertiseGen_Simple/._dev.json  
        inflating: AdvertiseGen_Simple/train.json  
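
      For reference, the extracted files (train.json, dev.json) follow the AdvertiseGen convention of one JSON object per line, with a "content" field holding the attribute prompt and a "summary" field holding the target ad copy. The sketch below is a minimal stdlib illustration of that record format; the sample strings are hypothetical and the field names are assumed from the public AdvertiseGen dataset rather than shown in the output above.

      ```python
      import json

      # Hypothetical AdvertiseGen-style record: "content" is the attribute
      # prompt, "summary" is the target ad copy the model learns to generate.
      sample = {
          "content": "类型#裤*版型#宽松*风格#性感",
          "summary": "宽松的阔腿裤这两年真的吸粉不少",
      }

      # train.json stores one such JSON object per line; round-trip one record.
      line = json.dumps(sample, ensure_ascii=False)
      record = json.loads(line)
      print(record["content"])
      ```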
  2. Fine-tune the model.

    Click here to view the results

    09/09/2023 16:39:05 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
    09/09/2023 16:39:05 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
    _n_gpu=1,
    adafactor=False,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    auto_find_batch_size=False,
    bf16=False,
    bf16_full_eval=False,
    data_seed=None,
    dataloader_drop_last=False,
    dataloader_num_workers=0,
    dataloader_pin_memory=True,
    ddp_bucket_cap_mb=None,
    ddp_find_unused_parameters=None,
    ddp_timeout=1800,
    debug=[],
    deepspeed=None,
    disable_tqdm=False,
    do_eval=False,
    do_predict=False,
    do_train=True,
    eval_accumulation_steps=None,
    eval_delay=0,
    eval_steps=None,
    evaluation_strategy=no,
    fp16=False,
    fp16_backend=auto,
    fp16_full_eval=False,
    fp16_opt_level=O1,
    fsdp=[],
    fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
    fsdp_min_num_params=0,
    fsdp_transformer_layer_cls_to_wrap=None,
    full_determinism=False,
    generation_max_length=None,
    generation_num_beams=None,
    gradient_accumulation_steps=16,
    gradient_checkpointing=False,
    greater_is_better=None,
    group_by_length=False,
    half_precision_backend=auto,
    hub_model_id=None,
    hub_private_repo=False,
    hub_strategy=every_save,
    hub_token=<HUB_TOKEN>,
    ignore_data_skip=False,
    include_inputs_for_metrics=False,
    jit_mode_eval=False,
    label_names=None,
    label_smoothing_factor=0.0,
    learning_rate=0.01,
    length_column_name=length,
    load_best_model_at_end=False,
    local_rank=-1,
    log_level=passive,
    log_level_replica=warning,
    log_on_each_node=True,
    logging_dir=output/adgen-chatglm-6b-pt-8-1e-2/runs/Sep09_16-39-05_dsw-101262-5c8f7bb5b4-vm5lw,
    logging_first_step=False,
    logging_nan_inf_filter=True,
    logging_steps=10,
    logging_strategy=steps,
    lr_scheduler_type=linear,
    max_grad_norm=1.0,
    max_steps=-1,
    metric_for_best_model=None,
    mp_parameters=,
    no_cuda=False,
    num_train_epochs=1.0,
    optim=adamw_hf,
    optim_args=None,
    output_dir=output/adgen-chatglm-6b-pt-8-1e-2,
    overwrite_output_dir=True,
    past_index=-1,
    per_device_eval_batch_size=1,
    per_device_train_batch_size=1,
    predict_with_generate=True,
    prediction_loss_only=False,
    push_to_hub=False,
    push_to_hub_model_id=None,
    push_to_hub_organization=None,
    push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
    ray_scope=last,
    remove_unused_columns=True,
    report_to=['tensorboard'],
    resume_from_checkpoint=None,
    run_name=output/adgen-chatglm-6b-pt-8-1e-2,
    save_on_each_node=False,
    save_steps=6,
    save_strategy=steps,
    save_total_limit=None,
    seed=42,
    sharded_ddp=[],
    skip_memory_metrics=True,
    sortish_sampler=False,
    tf32=None,
    torch_compile=False,
    torch_compile_backend=None,
    torch_compile_mode=None,
    torchdynamo=None,
    tpu_metrics_debug=False,
    tpu_num_cores=None,
    use_ipex=False,
    use_legacy_prediction_loop=False,
    use_mps_device=False,
    warmup_ratio=0.0,
    warmup_steps=0,
    weight_decay=0.0,
    xpu_backend=None,
    )
    /home/pai/lib/python3.9/site-packages/datasets/load.py:2089: FutureWarning: 'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.
    You can remove this warning by passing 'token=None' instead.
      warnings.warn(
    Downloading data files: 100%|██████████████████| 2/2 [00:00<00:00, 17225.07it/s]
    Extracting data files: 100%|████████████████████| 2/2 [00:00<00:00, 1954.48it/s]
    Generating train split: 100 examples [00:00, 36957.48 examples/s]
    Generating validation split: 10 examples [00:00, 8224.13 examples/s]
    [INFO|configuration_utils.py:666] 2023-09-09 16:39:06,723 >> loading configuration file chatglm-6b/config.json
    [WARNING|configuration_auto.py:905] 2023-09-09 16:39:06,723 >> Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
    [INFO|configuration_utils.py:666] 2023-09-09 16:39:06,766 >> loading configuration file chatglm-6b/config.json
    [INFO|configuration_utils.py:720] 2023-09-09 16:39:06,768 >> Model config ChatGLMConfig {
      "_name_or_path": "chatglm-6b",
      "architectures": [
        "ChatGLMModel"
      ],
      "auto_map": {
        "AutoConfig": "configuration_chatglm.ChatGLMConfig",
        "AutoModel": "modeling_chatglm.ChatGLMForConditionalGeneration",
        "AutoModelForSeq2SeqLM": "modeling_chatglm.ChatGLMForConditionalGeneration"
      },
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "hidden_size": 4096,
      "inner_hidden_size": 16384,
      "layernorm_epsilon": 1e-05,
      "max_sequence_length": 2048,
      "model_type": "chatglm",
      "num_attention_heads": 32,
      "num_layers": 28,
      "pad_token_id": 20003,
      "position_encoding_2d": true,
      "pre_seq_len": null,
      "prefix_projection": false,
      "quantization_bit": 0,
      "torch_dtype": "float16",
      "transformers_version": "4.27.1",
      "use_cache": true,
      "vocab_size": 150528
    }
    
    [WARNING|tokenization_auto.py:652] 2023-09-09 16:39:06,768 >> Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
    [INFO|tokenization_utils_base.py:1800] 2023-09-09 16:39:07,506 >> loading file ice_text.model
    [INFO|tokenization_utils_base.py:1800] 2023-09-09 16:39:07,507 >> loading file added_tokens.json
    [INFO|tokenization_utils_base.py:1800] 2023-09-09 16:39:07,507 >> loading file special_tokens_map.json
    [INFO|tokenization_utils_base.py:1800] 2023-09-09 16:39:07,507 >> loading file tokenizer_config.json
    [WARNING|auto_factory.py:456] 2023-09-09 16:39:09,108 >> Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
    [INFO|modeling_utils.py:2400] 2023-09-09 16:39:09,201 >> loading weights file chatglm-6b/pytorch_model.bin.index.json
    [INFO|configuration_utils.py:575] 2023-09-09 16:39:09,202 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
    Loading checkpoint shards: 100%|██████████████████| 8/8 [00:13<00:00,  1.66s/it]
    [INFO|modeling_utils.py:3032] 2023-09-09 16:39:22,673 >> All model checkpoint weights were used when initializing ChatGLMForConditionalGeneration.
    
    [WARNING|modeling_utils.py:3034] 2023-09-09 16:39:22,673 >> Some weights of ChatGLMForConditionalGeneration were not initialized from the model checkpoint at chatglm-6b and are newly initialized: ['transformer.prefix_encoder.embedding.weight']
    You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
    [INFO|modeling_utils.py:2690] 2023-09-09 16:39:22,719 >> Generation config file not found, using a generation config created from the model config.
    Quantized to 4 bit
    /home/pai/lib/python3.9/site-packages/dill/_dill.py:412: PicklingWarning: Cannot locate reference to <class 'google.protobuf.pyext._message.CMessage'>.
      StockPickler.save(self, obj, save_persistent_id)
    /home/pai/lib/python3.9/site-packages/dill/_dill.py:412: PicklingWarning: Cannot pickle <class 'google.protobuf.pyext._message.CMessage'>: google.protobuf.pyext._message.CMessage has recursive self-references that trigger a RecursionError.
      StockPickler.save(self, obj, save_persistent_id)
    Parameter 'function'=<function main.<locals>.preprocess_function_train at 0x7f78988ef430> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
    09/09/2023 16:41:50 - WARNING - datasets.fingerprint - Parameter 'function'=<function main.<locals>.preprocess_function_train at 0x7f78988ef430> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
    Running tokenizer on train dataset: 100%|█| 100/100 [00:00<00:00, 1514.19 exampl
    input_ids [20005, 85421, 20061, 87329, 20032, 118339, 20061, 92043, 20032, 85347, 20061, 90872, 20032, 89768, 20061, 88944, 20032, 87329, 84103, 20061, 116914, 150001, 150004, 20005, 107052, 116914, 101471, 84562, 85759, 84493, 84988, 20006, 85840, 85388, 94531, 83825, 95786, 84009, 83823, 85626, 83882, 84619, 85388, 20006, 84480, 85604, 105646, 130945, 20010, 84089, 85966, 107052, 87329, 85544, 20006, 91964, 90533, 84417, 83862, 109978, 83991, 83823, 97284, 108473, 84219, 83848, 132012, 20006, 91231, 85099, 91252, 86800, 105768, 84566, 84338, 120323, 95469, 83823, 137317, 84218, 84257, 84051, 94197, 20006, 83893, 150005, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003]
    inputs 类型#裤*版型#宽松*风格#性感*图案#线条*裤型#阔腿裤 宽松的阔腿裤这两年真的吸粉不少,明星时尚达人的心头爱。毕竟好穿时尚,谁都能穿出腿长2米的效果宽松的裤腿,当然是遮肉小能手啊。上身随性自然不拘束,面料亲肤舒适贴身体验感棒棒哒。系带部分增加设计看点,还
    label_ids [-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 150004, 20005, 107052, 116914, 101471, 84562, 85759, 84493, 84988, 20006, 85840, 85388, 94531, 83825, 95786, 84009, 83823, 85626, 83882, 84619, 85388, 20006, 84480, 85604, 105646, 130945, 20010, 84089, 85966, 107052, 87329, 85544, 20006, 91964, 90533, 84417, 83862, 109978, 83991, 83823, 97284, 108473, 84219, 83848, 132012, 20006, 91231, 85099, 91252, 86800, 105768, 84566, 84338, 120323, 95469, 83823, 137317, 84218, 84257, 84051, 94197, 20006, 83893, 150005, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003, 20003]
    labels 宽松的阔腿裤这两年真的吸粉不少,明星时尚达人的心头爱。毕竟好穿时尚,谁都能穿出腿长2米的效果宽松的裤腿,当然是遮肉小能手啊。上身随性自然不拘束,面料亲肤舒适贴身体验感棒棒哒。系带部分增加设计看点,还
    /home/pai/lib/python3.9/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
      warnings.warn(
    [INFO|trainer.py:1740] 2023-09-09 16:41:51,803 >> ***** Running training *****
    [INFO|trainer.py:1741] 2023-09-09 16:41:51,803 >>   Num examples = 100
    [INFO|trainer.py:1742] 2023-09-09 16:41:51,803 >>   Num Epochs = 1
    [INFO|trainer.py:1743] 2023-09-09 16:41:51,803 >>   Instantaneous batch size per device = 1
    [INFO|trainer.py:1744] 2023-09-09 16:41:51,803 >>   Total train batch size (w. parallel, distributed & accumulation) = 16
    [INFO|trainer.py:1745] 2023-09-09 16:41:51,803 >>   Gradient Accumulation steps = 16
    [INFO|trainer.py:1746] 2023-09-09 16:41:51,803 >>   Total optimization steps = 6
    [INFO|trainer.py:1747] 2023-09-09 16:41:51,804 >>   Number of trainable parameters = 1835008
      0%|                                                     | 0/6 [00:00<?, ?it/s]09/09/2023 16:41:51 - WARNING - transformers_modules.chatglm-6b.modeling_chatglm - `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
    100%|█████████████████████████████████████████████| 6/6 [00:29<00:00,  4.66s/it][INFO|trainer.py:2814] 2023-09-09 16:42:21,246 >> Saving model checkpoint to output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6
    [INFO|configuration_utils.py:457] 2023-09-09 16:42:21,250 >> Configuration saved in output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6/config.json
    [INFO|configuration_utils.py:362] 2023-09-09 16:42:21,252 >> Configuration saved in output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6/generation_config.json
    [INFO|modeling_utils.py:1762] 2023-09-09 16:42:32,230 >> Model weights saved in output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6/pytorch_model.bin
    [INFO|tokenization_utils_base.py:2163] 2023-09-09 16:42:32,701 >> tokenizer config file saved in output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6/tokenizer_config.json
    [INFO|tokenization_utils_base.py:2170] 2023-09-09 16:42:32,701 >> Special tokens file saved in output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6/special_tokens_map.json
    [INFO|trainer.py:2012] 2023-09-09 16:42:32,727 >> 
    
    Training completed. Do not forget to share your model on huggingface.co/models =)
    
    
    {'train_runtime': 40.9236, 'train_samples_per_second': 2.444, 'train_steps_per_second': 0.147, 'train_loss': 7.525146484375, 'epoch': 0.96}
    100%|█████████████████████████████████████████████| 6/6 [00:40<00:00,  6.82s/it]
    ***** train metrics *****
      epoch                    =       0.96
      train_loss               =     7.5251
      train_runtime            = 0:00:40.92
      train_samples            =        100
      train_samples_per_second =      2.444
      train_steps_per_second   =      0.147
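
    The numbers in the training log above are internally consistent, which gives a quick way to sanity-check your own run: the effective batch size is per_device_train_batch_size × gradient_accumulation_steps, the number of optimization steps is the example count divided by that batch size (rounded down), and the final epoch fraction follows from those two. A small arithmetic check:

    ```python
    # Values taken directly from the training log above.
    num_examples = 100      # "Num examples = 100"
    per_device_batch = 1    # per_device_train_batch_size=1
    grad_accum = 16         # gradient_accumulation_steps=16

    # "Total train batch size (w. parallel, distributed & accumulation) = 16"
    effective_batch = per_device_batch * grad_accum

    # "Total optimization steps = 6" (the trailing partial batch is dropped)
    total_steps = num_examples // effective_batch

    # "epoch = 0.96": fraction of the dataset consumed by those 6 steps
    epoch_fraction = total_steps * effective_batch / num_examples

    print(effective_batch, total_steps, epoch_fraction)  # 16 6 0.96
    ```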
  3. Run model inference.

    Click here to view the results

    09/09/2023 16:42:45 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
    09/09/2023 16:42:45 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
    _n_gpu=1,
    adafactor=False,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    auto_find_batch_size=False,
    bf16=False,
    bf16_full_eval=False,
    data_seed=None,
    dataloader_drop_last=False,
    dataloader_num_workers=0,
    dataloader_pin_memory=True,
    ddp_bucket_cap_mb=None,
    ddp_find_unused_parameters=None,
    ddp_timeout=1800,
    debug=[],
    deepspeed=None,
    disable_tqdm=False,
    do_eval=False,
    do_predict=True,
    do_train=False,
    eval_accumulation_steps=None,
    eval_delay=0,
    eval_steps=None,
    evaluation_strategy=no,
    fp16=False,
    fp16_backend=auto,
    fp16_full_eval=False,
    fp16_opt_level=O1,
    fsdp=[],
    fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
    fsdp_min_num_params=0,
    fsdp_transformer_layer_cls_to_wrap=None,
    full_determinism=False,
    generation_max_length=None,
    generation_num_beams=None,
    gradient_accumulation_steps=1,
    gradient_checkpointing=False,
    greater_is_better=None,
    group_by_length=False,
    half_precision_backend=auto,
    hub_model_id=None,
    hub_private_repo=False,
    hub_strategy=every_save,
    hub_token=<HUB_TOKEN>,
    ignore_data_skip=False,
    include_inputs_for_metrics=False,
    jit_mode_eval=False,
    label_names=None,
    label_smoothing_factor=0.0,
    learning_rate=5e-05,
    length_column_name=length,
    load_best_model_at_end=False,
    local_rank=-1,
    log_level=passive,
    log_level_replica=warning,
    log_on_each_node=True,
    logging_dir=./output/adgen-chatglm-6b-pt-8-1e-2/runs/Sep09_16-42-45_dsw-101262-5c8f7bb5b4-vm5lw,
    logging_first_step=False,
    logging_nan_inf_filter=True,
    logging_steps=500,
    logging_strategy=steps,
    lr_scheduler_type=linear,
    max_grad_norm=1.0,
    max_steps=-1,
    metric_for_best_model=None,
    mp_parameters=,
    no_cuda=False,
    num_train_epochs=3.0,
    optim=adamw_hf,
    optim_args=None,
    output_dir=./output/adgen-chatglm-6b-pt-8-1e-2,
    overwrite_output_dir=True,
    past_index=-1,
    per_device_eval_batch_size=1,
    per_device_train_batch_size=8,
    predict_with_generate=True,
    prediction_loss_only=False,
    push_to_hub=False,
    push_to_hub_model_id=None,
    push_to_hub_organization=None,
    push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
    ray_scope=last,
    remove_unused_columns=True,
    report_to=['tensorboard'],
    resume_from_checkpoint=None,
    run_name=./output/adgen-chatglm-6b-pt-8-1e-2,
    save_on_each_node=False,
    save_steps=500,
    save_strategy=steps,
    save_total_limit=None,
    seed=42,
    sharded_ddp=[],
    skip_memory_metrics=True,
    sortish_sampler=False,
    tf32=None,
    torch_compile=False,
    torch_compile_backend=None,
    torch_compile_mode=None,
    torchdynamo=None,
    tpu_metrics_debug=False,
    tpu_num_cores=None,
    use_ipex=False,
    use_legacy_prediction_loop=False,
    use_mps_device=False,
    warmup_ratio=0.0,
    warmup_steps=0,
    weight_decay=0.0,
    xpu_backend=None,
    )
    /home/pai/lib/python3.9/site-packages/datasets/load.py:2089: FutureWarning: 'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.
    You can remove this warning by passing 'token=None' instead.
      warnings.warn(
    Downloading data files: 100%|██████████████████| 2/2 [00:00<00:00, 17189.77it/s]
    Extracting data files: 100%|████████████████████| 2/2 [00:00<00:00, 1850.56it/s]
    Generating validation split: 10 examples [00:00, 4133.13 examples/s]
    Generating test split: 10 examples [00:00, 7375.25 examples/s]
    [INFO|configuration_utils.py:666] 2023-09-09 16:42:47,054 >> loading configuration file ./output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6/config.json
    [WARNING|configuration_auto.py:905] 2023-09-09 16:42:47,054 >> Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
    [INFO|configuration_utils.py:666] 2023-09-09 16:42:47,100 >> loading configuration file ./output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6/config.json
    [INFO|configuration_utils.py:720] 2023-09-09 16:42:47,101 >> Model config ChatGLMConfig {
      "_name_or_path": "./output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6",
      "architectures": [
        "ChatGLMForConditionalGeneration"
      ],
      "auto_map": {
        "AutoConfig": "configuration_chatglm.ChatGLMConfig",
        "AutoModel": "modeling_chatglm.ChatGLMForConditionalGeneration",
        "AutoModelForSeq2SeqLM": "modeling_chatglm.ChatGLMForConditionalGeneration"
      },
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "hidden_size": 4096,
      "inner_hidden_size": 16384,
      "layernorm_epsilon": 1e-05,
      "max_sequence_length": 2048,
      "model_type": "chatglm",
      "num_attention_heads": 32,
      "num_layers": 28,
      "pad_token_id": 20003,
      "position_encoding_2d": true,
      "pre_seq_len": 8,
      "prefix_projection": false,
      "quantization_bit": 4,
      "torch_dtype": "float16",
      "transformers_version": "4.27.1",
      "use_cache": true,
      "vocab_size": 150528
    }
    
    [WARNING|tokenization_auto.py:652] 2023-09-09 16:42:47,101 >> Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
    [INFO|tokenization_utils_base.py:1800] 2023-09-09 16:42:47,395 >> loading file ice_text.model
    [INFO|tokenization_utils_base.py:1800] 2023-09-09 16:42:47,395 >> loading file added_tokens.json
    [INFO|tokenization_utils_base.py:1800] 2023-09-09 16:42:47,395 >> loading file special_tokens_map.json
    [INFO|tokenization_utils_base.py:1800] 2023-09-09 16:42:47,395 >> loading file tokenizer_config.json
    [WARNING|auto_factory.py:456] 2023-09-09 16:42:49,012 >> Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
    [INFO|modeling_utils.py:2400] 2023-09-09 16:42:49,097 >> loading weights file ./output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6/pytorch_model.bin
    [INFO|configuration_utils.py:575] 2023-09-09 16:42:51,824 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
    [INFO|modeling_utils.py:3032] 2023-09-09 16:45:17,067 >> All model checkpoint weights were used when initializing ChatGLMForConditionalGeneration.
    
    [INFO|modeling_utils.py:3040] 2023-09-09 16:45:17,067 >> All the weights of ChatGLMForConditionalGeneration were initialized from the model checkpoint at ./output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6.
    If your task is similar to the task the model of the checkpoint was trained on, you can already use ChatGLMForConditionalGeneration for predictions without further training.
    [INFO|configuration_utils.py:535] 2023-09-09 16:45:17,113 >> loading configuration file ./output/adgen-chatglm-6b-pt-8-1e-2/checkpoint-6/generation_config.json
    [INFO|configuration_utils.py:575] 2023-09-09 16:45:17,114 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
    Quantized to 4 bit
    /home/pai/lib/python3.9/site-packages/dill/_dill.py:412: PicklingWarning: Cannot locate reference to <class 'google.protobuf.pyext._message.CMessage'>.
      StockPickler.save(self, obj, save_persistent_id)
    /home/pai/lib/python3.9/site-packages/dill/_dill.py:412: PicklingWarning: Cannot pickle <class 'google.protobuf.pyext._message.CMessage'>: google.protobuf.pyext._message.CMessage has recursive self-references that trigger a RecursionError.
      StockPickler.save(self, obj, save_persistent_id)
    Parameter 'function'=<function main.<locals>.preprocess_function_eval at 0x7f29b8fc6790> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
    09/09/2023 16:45:17 - WARNING - datasets.fingerprint - Parameter 'function'=<function main.<locals>.preprocess_function_eval at 0x7f29b8fc6790> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
    Running tokenizer on prediction dataset: 100%|█| 10/10 [00:00<00:00, 605.32 exam
    input_ids [20005, 85421, 20061, 95898, 20032, 88554, 20061, 97257, 84555, 20032, 85107, 20061, 86268, 20032, 85347, 20061, 91689, 20032, 89768, 20061, 105428, 20032, 85173, 93942, 20061, 90984, 20032, 85173, 90936, 20061, 84703, 85509, 150001, 150004]
    inputs 类型#上衣*材质#牛仔布*颜色#白色*风格#简约*图案#刺绣*衣样式#外套*衣款式#破洞
    label_ids [20005, 91689, 86561, 87061, 97257, 90984, 20006, 92194, 85173, 84290, 84622, 101549, 83823, 85173, 84290, 103343, 83832, 83912, 85209, 84703, 85509, 84051, 20006, 89418, 98598, 107019, 20006, 84257, 91319, 86069, 94197, 83823, 85173, 92265, 84880, 84131, 83832, 93416, 105428, 86261, 20006, 85594, 107834, 20006, 93412, 125145, 85388, 83823, 150001, 150004]
    labels 简约而不简单的牛仔外套,白色的衣身十分百搭。衣身多处有做旧破洞设计,打破单调乏味,增加一丝造型看点。衣身后背处有趣味刺绣装饰,丰富层次感,彰显别样时尚。
    09/09/2023 16:45:18 - INFO - __main__ - *** Predict ***
    [INFO|trainer.py:3068] 2023-09-09 16:45:18,278 >> ***** Running Prediction *****
    [INFO|trainer.py:3070] 2023-09-09 16:45:18,278 >>   Num examples = 10
    [INFO|trainer.py:3073] 2023-09-09 16:45:18,278 >>   Batch size = 1
    [INFO|configuration_utils.py:575] 2023-09-09 16:45:18,284 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
      0%|                                                    | 0/10 [00:00<?, ?it/s][INFO|configuration_utils.py:575] 2023-09-09 16:45:22,578 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
     20%|████████▊                                   | 2/10 [00:03<00:14,  1.80s/it][INFO|configuration_utils.py:575] 2023-09-09 16:45:26,172 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
     30%|█████████████▏                              | 3/10 [00:05<00:12,  1.80s/it][INFO|configuration_utils.py:575] 2023-09-09 16:45:27,975 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
     40%|█████████████████▌                          | 4/10 [00:20<00:41,  6.87s/it][INFO|configuration_utils.py:575] 2023-09-09 16:45:43,368 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
     50%|██████████████████████                      | 5/10 [00:25<00:30,  6.13s/it][INFO|configuration_utils.py:575] 2023-09-09 16:45:48,094 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
     60%|██████████████████████████▍                 | 6/10 [00:28<00:20,  5.20s/it][INFO|configuration_utils.py:575] 2023-09-09 16:45:51,445 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
     70%|██████████████████████████████▊             | 7/10 [00:32<00:14,  4.68s/it][INFO|configuration_utils.py:575] 2023-09-09 16:45:55,030 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
     80%|███████████████████████████████████▏        | 8/10 [00:36<00:08,  4.33s/it][INFO|configuration_utils.py:575] 2023-09-09 16:45:58,605 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
     90%|███████████████████████████████████████▌    | 9/10 [00:38<00:03,  3.74s/it][INFO|configuration_utils.py:575] 2023-09-09 16:46:01,029 >> Generate config GenerationConfig {
      "_from_model_config": true,
      "bos_token_id": 150004,
      "eos_token_id": 150005,
      "pad_token_id": 20003,
      "transformers_version": "4.27.1"
    }
    
    100%|███████████████████████████████████████████| 10/10 [00:39<00:00,  2.93s/it]Building prefix dict from the default dictionary ...
    09/09/2023 16:46:02 - DEBUG - jieba - Building prefix dict from the default dictionary ...
    Dumping model to file cache /tmp/jieba.cache
    09/09/2023 16:46:02 - DEBUG - jieba - Dumping model to file cache /tmp/jieba.cache
    Loading model cost 0.743 seconds.
    09/09/2023 16:46:02 - DEBUG - jieba - Loading model cost 0.743 seconds.
    Prefix dict has been built successfully.
    09/09/2023 16:46:02 - DEBUG - jieba - Prefix dict has been built successfully.
    /home/pai/lib/python3.9/site-packages/nltk/translate/bleu_score.py:552: UserWarning: 
    The hypothesis contains 0 counts of 3-gram overlaps.
    Therefore the BLEU score evaluates to 0, independently of
    how many N-gram overlaps of lower order it contains.
    Consider using lower n-gram order or use SmoothingFunction()
      warnings.warn(_msg)
    /home/pai/lib/python3.9/site-packages/nltk/translate/bleu_score.py:552: UserWarning: 
    The hypothesis contains 0 counts of 4-gram overlaps.
    Therefore the BLEU score evaluates to 0, independently of
    how many N-gram overlaps of lower order it contains.
    Consider using lower n-gram order or use SmoothingFunction()
      warnings.warn(_msg)
    100%|███████████████████████████████████████████| 10/10 [00:40<00:00,  4.04s/it]
    ***** predict metrics *****
      predict_bleu-4             =     0.6288
      predict_rouge-1            =    22.1595
      predict_rouge-2            =     1.2756
      predict_rouge-l            =    12.8913
      predict_runtime            = 0:00:44.65
      predict_samples            =         10
      predict_samples_per_second =      0.224
      predict_steps_per_second   =      0.224
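
    The two nltk UserWarning messages above are expected here: after only 6 optimization steps on 100 examples, the generated texts share few long n-grams with the references, and unsmoothed BLEU collapses toward 0 whenever there is no 3-gram or 4-gram overlap at all. The toy sketch below illustrates the effect with hypothetical word lists standing in for jieba-segmented text (the repo's actual metric code uses jieba, nltk, and rouge_chinese, not this function):

    ```python
    from collections import Counter

    def ngram_overlap(hyp, ref, n):
        """Count n-grams shared between a hypothesis and a reference."""
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        return sum((hyp_ngrams & ref_ngrams).values())

    # Hypothetical word lists standing in for jieba-segmented Chinese text.
    hyp = ["宽松", "的", "裤子", "很", "时尚"]
    ref = ["宽松", "的", "阔腿裤", "非常", "百搭"]

    print(ngram_overlap(hyp, ref, 2))  # one shared 2-gram -> 1
    print(ngram_overlap(hyp, ref, 4))  # no shared 4-grams -> 0, so BLEU-4 ≈ 0
    ```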
  4. Try out the model.

    1. Check whether the contents of ./ChatGLM-6B-main/web_demo.py meet the following requirements; if not, modify the file accordingly. No changes are needed for this tutorial, so keep the defaults.

    2. Launch the WebUI.

      Click here to view the results

      Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
      Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
      Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
      Loading checkpoint shards: 100%|██████████████████| 8/8 [00:23<00:00,  2.96s/it]
      /mnt/workspace/demos/chatglm_6b/ChatGLM-6B-main/web_demo.py:37: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
        txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter", lines=11).style(
      Running on local URL:  http://127.0.0.1:7860
      
      Could not create share link. Missing file: /home/pai/lib/python3.9/site-packages/gradio/frpc_linux_amd64_v0.2. 
      
      Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps: 
      
      1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
      2. Rename the downloaded file to: frpc_linux_amd64_v0.2
      3. Move the file to this location: /home/pai/lib/python3.9/site-packages/gradio
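      The share-link warning above is harmless for this tutorial, since only the local URL is needed. If you do want a public Gradio share link, the three steps from the message can be run as shell commands (URL and paths copied verbatim from the log; the chmod is an added assumption to make the binary executable):

      ```shell
      wget https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
      mv frpc_linux_amd64 /home/pai/lib/python3.9/site-packages/gradio/frpc_linux_amd64_v0.2
      chmod +x /home/pai/lib/python3.9/site-packages/gradio/frpc_linux_amd64_v0.2
      ```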

  5. In the returned output, click the URL link (http://127.0.0.1:7860) to open the WebUI page. You can then run model inference on this page.

    【Note】: Because http://127.0.0.1:7860 is an internal address, the WebUI page can only be opened by clicking the link from inside the current DSW instance; it cannot be accessed directly from an external browser.

Finish

5

After completing the steps above, you have fine-tuned the ChatGLM model and deployed the WebUI. You can now validate the model by running inference on the WebUI page.

In the Enter text and press enter box, type your input, for example: Which feature modules does Alibaba Cloud interactive modeling (PAI-DSW) support?

Click Generate to produce the inference result shown in the figure.

Clean up and next steps

5

Clean up

  • If you no longer need the DSW instance, stop it as follows.

    1. Log on to the PAI console.

    2. In the top-left corner of the page, select the region of your DSW instance.

    3. In the left navigation pane, click Workspaces. On the Workspaces page, click the name of the default workspace to enter it.

    4. In the workspace's left navigation pane, choose Model Development and Training > Interactive Modeling (DSW) to open the Interactive Modeling (DSW) page.

    5. Click Stop in the Actions column of the target instance. Once the instance has stopped, it no longer consumes resources.

  • After claiming the free resource package, use it within the free quota and the trial validity period. If you continue to use compute resources after the free quota is exhausted or the trial period ends, a pay-as-you-go bill is generated.

    Go to the Resource Instance Management page to check the free quota usage and expiration time, as shown below.

    image

  • If you want to keep using the DSW instance, be sure to top up your Alibaba Cloud account at least one hour before the trial expires. A DSW instance that is not renewed by the expiration time is automatically stopped due to an overdue balance.

Next steps

During the trial validity period, you can continue to use the DSW instance for model training and inference validation.

Summary

Key knowledge points

Question 1: Which DSW feature did this tutorial use to complete ChatGLM training and inference? (single choice)

  • Notebook

  • Terminal

  • WebIDE

The correct answer is Notebook.

Further reading