AI Image Restoration

This solution integrates high-quality open-source algorithms for image restoration, denoising, and colorization, and uses Stable Diffusion WebUI for interactive image restoration. You can adjust parameters and combine different processing methods as needed to achieve the best restoration results. This topic describes how to perform interactive image restoration in Alibaba Cloud DSW.

Prepare the environment and resources

  • Create a workspace. For more information, see Create a workspace.

  • Create a DSW instance with the following key parameters. For more information, see Create and manage DSW instances.

    • Instance type: ecs.gn7i-c8g1.2xlarge

    • Image: in the official images, select stable-diffusion-webui-env:pytorch1.13-gpu-py310-cu117-ubuntu22.04

Step 1: Open the tutorial file in DSW

  1. Go to the PAI-DSW development environment.

    1. Log on to the PAI console.

    2. In the upper-left corner of the page, select the region where the DSW instance resides.

    3. In the left-side navigation pane, click Workspaces. On the Workspaces page, click the name of the default workspace to enter it.

    4. In the left-side navigation pane, choose Model Development and Training > Interactive Modeling (DSW).

    5. Find the instance that you want to open and click Open in its Actions column to enter the PAI-DSW development environment.

  2. On the Launcher page of the Notebook tab, click DSW Gallery under Tool in the Quick Start section to open the DSW Gallery page.

  3. On the DSW Gallery page, search for the AI重燃亚运经典 tutorial and click Open in DSW on the tutorial card.

    Clicking it automatically downloads the resources and files required by this tutorial to the DSW instance, and the tutorial file opens automatically once the download completes.

Step 2: Run the tutorial file

In the opened tutorial file image_restoration.ipynb, you can read the tutorial text and run the command for each step directly in the file. After one step's command finishes successfully, run the next step's command in order. The steps in this tutorial and the execution result of each step are as follows.

  1. Import the photos to be restored. Run the commands in the data preparation section in order to download the provided old Asian Games photos and extract them to the input folder.

    1. Install the tools.

      Click here to view the execution result

      Get:1 http://mirrors.cloud.aliyuncs.com/ubuntu jammy InRelease [270 kB]
      Get:2 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-updates InRelease [119 kB]
      Get:3 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-backports InRelease [109 kB]
      Get:4 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-security InRelease [110 kB]
      Get:5 http://mirrors.cloud.aliyuncs.com/ubuntu jammy/multiverse Sources [361 kB]
      Get:6 http://mirrors.cloud.aliyuncs.com/ubuntu jammy/main Sources [1668 kB]
      Get:7 http://mirrors.cloud.aliyuncs.com/ubuntu jammy/restricted Sources [28.2 kB]
      Get:8 http://mirrors.cloud.aliyuncs.com/ubuntu jammy/universe Sources [22.0 MB]
      Get:9 http://mirrors.cloud.aliyuncs.com/ubuntu jammy/universe amd64 Packages [17.5 MB]
      Get:10 http://mirrors.cloud.aliyuncs.com/ubuntu jammy/multiverse amd64 Packages [266 kB]
      Get:11 http://mirrors.cloud.aliyuncs.com/ubuntu jammy/restricted amd64 Packages [164 kB]
      Get:12 http://mirrors.cloud.aliyuncs.com/ubuntu jammy/main amd64 Packages [1792 kB]
      Get:13 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-updates/multiverse Sources [21.0 kB]
      Get:14 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-updates/universe Sources [347 kB]
      Get:15 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-updates/main Sources [531 kB]
      Get:16 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-updates/restricted Sources [56.3 kB]
      Get:17 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-updates/restricted amd64 Packages [1015 kB]
      Get:18 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-updates/main amd64 Packages [1185 kB]
      Get:19 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-updates/universe amd64 Packages [1251 kB]
      Get:20 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-updates/multiverse amd64 Packages [49.8 kB]
      Get:21 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-backports/main Sources [9392 B]
      Get:22 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-backports/universe Sources [10.5 kB]
      Get:23 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-backports/main amd64 Packages [50.3 kB]
      Get:24 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-backports/universe amd64 Packages [28.1 kB]
      Get:25 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-security/restricted Sources [53.4 kB]
      Get:26 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-security/universe Sources [202 kB]
      Get:27 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-security/multiverse Sources [11.3 kB]
      Get:28 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-security/main Sources [270 kB]
      Get:29 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-security/main amd64 Packages [915 kB]
      Get:30 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-security/multiverse amd64 Packages [44.0 kB]
      Get:31 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-security/restricted amd64 Packages [995 kB]
      Get:32 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-security/universe amd64 Packages [990 kB]
      Fetched 52.4 MB in 2s (26.0 MB/s)                        
      Reading package lists... Done
      Reading package lists... Done
      Building dependency tree... Done
      Reading state information... Done
      Suggested packages:
        zip
      The following NEW packages will be installed:
        unzip
      0 upgraded, 1 newly installed, 0 to remove and 99 not upgraded.
      Need to get 174 kB of archives.
      After this operation, 385 kB of additional disk space will be used.
      Get:1 http://mirrors.cloud.aliyuncs.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.1 [174 kB]
      Fetched 174 kB in 0s (7160 kB/s)
      debconf: delaying package configuration, since apt-utils is not installed
      Selecting previously unselected package unzip.
      (Reading database ... 20089 files and directories currently installed.)
      Preparing to unpack .../unzip_6.0-26ubuntu3.1_amd64.deb ...
      Unpacking unzip (6.0-26ubuntu3.1) ...
      Setting up unzip (6.0-26ubuntu3.1) ...
    2. Use the internal-network download URL to speed up the download.
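As a sketch of what the internal download URL means here (the bucket URL and region `cn-hangzhou` below are taken from the log output in the next step; the helper name is my own, not part of the notebook), the internal OSS endpoint is the public one with an `-internal` suffix inserted into the region segment:

```python
def to_internal_oss_url(url: str, region: str) -> str:
    """Rewrite a public OSS endpoint to its VPC-internal counterpart.

    Inside Alibaba Cloud, oss-<region>-internal.aliyuncs.com serves the
    same objects as oss-<region>.aliyuncs.com, but over the internal
    network, which is faster and incurs no public traffic fees.
    """
    return url.replace(f"oss-{region}.", f"oss-{region}-internal.")


public_url = ("http://pai-vision-data-hz2.oss-cn-hangzhou.aliyuncs.com"
              "/aigc-data/restoration/img/input.zip")
print(to_internal_oss_url(public_url, "cn-hangzhou"))
# → http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/img/input.zip
```

Note that the internal endpoint is only reachable from instances running in the same region, which is why the notebook detects the region first.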

    3. Download the image data and extract it to the input directory.

      Click here to view the execution result

      http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/img/input.zip
      cn-hangzhou
      --2023-09-04 11:20:55--  http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/img/input.zip
      Resolving pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.49, 100.118.28.45, 100.118.28.44, ...
      Connecting to pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.49|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 1657458 (1.6M) [application/zip]
      Saving to: ‘input.zip’
      
      input.zip           100%[===================>]   1.58M  --.-KB/s    in 0.09s   
      
      2023-09-04 11:20:56 (16.8 MB/s) - ‘input.zip’ saved [1657458/1657458]
      
      Archive:  input.zip
         creating: input/
        inflating: input/54.jpg            
        inflating: input/20.jpg            
        inflating: input/10.jpg            
        inflating: input/50.jpg            
        inflating: input/2.jpg             
        inflating: input/40.jpg            
        inflating: input/4.png             
        inflating: input/70.jpg            
        inflating: input/34.jpg            
        inflating: input/64.jpg            
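The wget-and-unzip step above can also be sketched with the Python standard library (a sketch using the tutorial's file names; `fetch_and_extract` is a hypothetical helper, not part of the notebook):

```python
import urllib.request
import zipfile
from pathlib import Path


def fetch_and_extract(url: str, archive: str = "input.zip",
                      dest: str = ".") -> list[str]:
    """Download a zip archive (unless already present) and extract it.

    Mirrors the wget + unzip commands: the archive lands next to the
    notebook and its contents go into dest, creating input/ here.
    """
    if not Path(archive).exists():
        urllib.request.urlretrieve(url, archive)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
        return zf.namelist()
```

Calling it with the internal URL from the log would recreate the input/ folder listed above.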
  2. For the given images, you can use either of the following two methods, or combine them, to perform the old-photo restoration task.

    Restore images based on source code

    In the Gallery, PAI integrates many open-source algorithms and pretrained models from related fields for convenient one-click use. You can also further optimize or develop your own old-photo restoration algorithm in the Notebook. Based on how the models process images, PAI divides the old-photo restoration task into roughly the following steps:

    1. Image denoising: remove noise, blur, and other degradations from the images. The following two algorithms are supported; you can choose either one to process the images.

      Restormer

      1. Download the code and pretrained files. After the download and extraction complete, you can view the algorithm's source code in the ./Restormer folder.

        Click here to view the execution result

        http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/restormer.zip
        cn-hangzhou
        --2023-09-04 11:30:30--  http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/restormer.zip
        Resolving pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.49, 100.118.28.45, 100.118.28.44, ...
        Connecting to pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.49|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 2859106485 (2.7G) [application/zip]
        Saving to: ‘restormer.zip’
        
        restormer.zip       100%[===================>]   2.66G  14.2MB/s    in 3m 18s  
        
        2023-09-04 11:33:48 (13.8 MB/s) - ‘restormer.zip’ saved [2859106485/2859106485]
        
        Archive:  restormer.zip
           creating: Restormer/
           creating: Restormer/.ipynb_checkpoints/
          inflating: Restormer/.ipynb_checkpoints/demo-checkpoint.py  
          inflating: Restormer/setup.cfg     
           creating: Restormer/Denoising/
          inflating: Restormer/Denoising/test_real_denoising_dnd.py  
           creating: Restormer/Denoising/pretrained_models/
          inflating: Restormer/Denoising/pretrained_models/gaussian_gray_denoising_sigma25.pth  
          inflating: Restormer/Denoising/pretrained_models/real_denoising.pth  
          inflating: Restormer/Denoising/pretrained_models/gaussian_color_denoising_blind.pth  
          inflating: Restormer/Denoising/pretrained_models/gaussian_gray_denoising_sigma15.pth  
          inflating: Restormer/Denoising/pretrained_models/gaussian_color_denoising_sigma50.pth  
          inflating: Restormer/Denoising/pretrained_models/gaussian_color_denoising_sigma25.pth  
          inflating: Restormer/Denoising/pretrained_models/gaussian_color_denoising_sigma15.pth  
          inflating: Restormer/Denoising/pretrained_models/gaussian_gray_denoising_blind.pth  
          inflating: Restormer/Denoising/pretrained_models/gaussian_gray_denoising_sigma50.pth  
          inflating: Restormer/Denoising/test_gaussian_color_denoising.py  
          inflating: Restormer/Denoising/evaluate_gaussian_gray_denoising.py  
          inflating: Restormer/Denoising/test_real_denoising_sidd.py  
           creating: Restormer/Denoising/Datasets/
          inflating: Restormer/Denoising/Datasets/README.md  
          inflating: Restormer/Denoising/generate_patches_sidd.py  
          inflating: Restormer/Denoising/generate_patches_dfwb.py  
           creating: Restormer/Denoising/Options/
          inflating: Restormer/Denoising/Options/GaussianColorDenoising_RestormerSigma50.yml  
          inflating: Restormer/Denoising/Options/GaussianGrayDenoising_Restormer.yml  
          inflating: Restormer/Denoising/Options/GaussianColorDenoising_Restormer.yml  
          inflating: Restormer/Denoising/Options/GaussianColorDenoising_RestormerSigma15.yml  
          inflating: Restormer/Denoising/Options/GaussianGrayDenoising_RestormerSigma25.yml  
          inflating: Restormer/Denoising/Options/GaussianGrayDenoising_RestormerSigma15.yml  
          inflating: Restormer/Denoising/Options/GaussianGrayDenoising_RestormerSigma50.yml  
          inflating: Restormer/Denoising/Options/GaussianColorDenoising_RestormerSigma25.yml  
          inflating: Restormer/Denoising/Options/RealDenoising_Restormer.yml  
          inflating: Restormer/Denoising/evaluate_gaussian_color_denoising.py  
          inflating: Restormer/Denoising/README.md  
          inflating: Restormer/Denoising/test_gaussian_gray_denoising.py  
          inflating: Restormer/Denoising/evaluate_sidd.m  
          inflating: Restormer/Denoising/download_data.py  
          inflating: Restormer/Denoising/utils.py  
         extracting: Restormer/hat.zip       
           creating: Restormer/Deraining/
           creating: Restormer/Deraining/pretrained_models/
          inflating: Restormer/Deraining/pretrained_models/deraining.pth  
          inflating: Restormer/Deraining/pretrained_models/README.md  
          inflating: Restormer/Deraining/test.py  
           creating: Restormer/Deraining/Datasets/
          inflating: Restormer/Deraining/Datasets/README.md  
           creating: Restormer/Deraining/Options/
          inflating: Restormer/Deraining/Options/Deraining_Restormer.yml  
          inflating: Restormer/Deraining/README.md  
          inflating: Restormer/Deraining/evaluate_PSNR_SSIM.m  
          inflating: Restormer/Deraining/download_data.py  
          inflating: Restormer/Deraining/utils.py  
           creating: Restormer/demo/
           creating: Restormer/demo/restored/
           creating: Restormer/demo/restored/Single_Image_Defocus_Deblurring/
          inflating: Restormer/demo/restored/Single_Image_Defocus_Deblurring/couple.png  
          inflating: Restormer/demo/restored/Single_Image_Defocus_Deblurring/engagement.png  
          inflating: Restormer/demo/restored/Single_Image_Defocus_Deblurring/portrait.png  
           creating: Restormer/demo/degraded/
          inflating: Restormer/demo/degraded/couple.jpg  
          inflating: Restormer/demo/degraded/portrait.jpg  
          inflating: Restormer/demo/degraded/engagement.jpg  
          inflating: Restormer/.gitignore    
          inflating: Restormer/LICENSE.md    
          inflating: Restormer/demo.py       
         extracting: Restormer/Resformer_pretrain.zip  
           creating: Restormer/Motion_Deblurring/
          inflating: Restormer/Motion_Deblurring/evaluate_gopro_hide.m  
           creating: Restormer/Motion_Deblurring/pretrained_models/
          inflating: Restormer/Motion_Deblurring/pretrained_models/motion_deblurring.pth  
          inflating: Restormer/Motion_Deblurring/pretrained_models/README.md  
          inflating: Restormer/Motion_Deblurring/test.py  
           creating: Restormer/Motion_Deblurring/Datasets/
          inflating: Restormer/Motion_Deblurring/Datasets/README.md  
           creating: Restormer/Motion_Deblurring/Options/
          inflating: Restormer/Motion_Deblurring/Options/Deblurring_Restormer.yml  
          inflating: Restormer/Motion_Deblurring/README.md  
          inflating: Restormer/Motion_Deblurring/evaluate_realblur.py  
          inflating: Restormer/Motion_Deblurring/generate_patches_gopro.py  
          inflating: Restormer/Motion_Deblurring/download_data.py  
          inflating: Restormer/Motion_Deblurring/utils.py  
           creating: Restormer/basicsr/
           creating: Restormer/basicsr/models/
          inflating: Restormer/basicsr/models/base_model.py  
           creating: Restormer/basicsr/models/archs/
          inflating: Restormer/basicsr/models/archs/restormer_arch.py  
          inflating: Restormer/basicsr/models/archs/arch_util.py  
          inflating: Restormer/basicsr/models/archs/__init__.py  
          inflating: Restormer/basicsr/models/lr_scheduler.py  
           creating: Restormer/basicsr/models/losses/
          inflating: Restormer/basicsr/models/losses/loss_util.py  
          inflating: Restormer/basicsr/models/losses/__init__.py  
          inflating: Restormer/basicsr/models/losses/losses.py  
          inflating: Restormer/basicsr/models/__init__.py  
          inflating: Restormer/basicsr/models/image_restoration_model.py  
          inflating: Restormer/basicsr/train.py  
          inflating: Restormer/basicsr/version.py  
          inflating: Restormer/basicsr/test.py  
           creating: Restormer/basicsr/utils/
          inflating: Restormer/basicsr/utils/bundle_submissions.py  
          inflating: Restormer/basicsr/utils/file_client.py  
          inflating: Restormer/basicsr/utils/face_util.py  
          inflating: Restormer/basicsr/utils/create_lmdb.py  
          inflating: Restormer/basicsr/utils/logger.py  
          inflating: Restormer/basicsr/utils/options.py  
          inflating: Restormer/basicsr/utils/img_util.py  
          inflating: Restormer/basicsr/utils/matlab_functions.py  
          inflating: Restormer/basicsr/utils/download_util.py  
          inflating: Restormer/basicsr/utils/__init__.py  
          inflating: Restormer/basicsr/utils/misc.py  
          inflating: Restormer/basicsr/utils/dist_util.py  
          inflating: Restormer/basicsr/utils/lmdb_util.py  
          inflating: Restormer/basicsr/utils/flow_util.py  
           creating: Restormer/basicsr/metrics/
          inflating: Restormer/basicsr/metrics/fid.py  
          inflating: Restormer/basicsr/metrics/metric_util.py  
          inflating: Restormer/basicsr/metrics/niqe_pris_params.npz  
          inflating: Restormer/basicsr/metrics/psnr_ssim.py  
          inflating: Restormer/basicsr/metrics/niqe.py  
          inflating: Restormer/basicsr/metrics/__init__.py  
           creating: Restormer/basicsr/data/
           creating: Restormer/basicsr/data/meta_info/
          inflating: Restormer/basicsr/data/meta_info/meta_info_Vimeo90K_test_medium_GT.txt  
          inflating: Restormer/basicsr/data/meta_info/meta_info_REDS4_test_GT.txt  
          inflating: Restormer/basicsr/data/meta_info/meta_info_Vimeo90K_test_GT.txt  
          inflating: Restormer/basicsr/data/meta_info/meta_info_DIV2K800sub_GT.txt  
          inflating: Restormer/basicsr/data/meta_info/meta_info_REDS_GT.txt  
          inflating: Restormer/basicsr/data/meta_info/meta_info_Vimeo90K_test_slow_GT.txt  
          inflating: Restormer/basicsr/data/meta_info/meta_info_REDSofficial4_test_GT.txt  
          inflating: Restormer/basicsr/data/meta_info/meta_info_Vimeo90K_test_fast_GT.txt  
          inflating: Restormer/basicsr/data/meta_info/meta_info_Vimeo90K_train_GT.txt  
          inflating: Restormer/basicsr/data/meta_info/meta_info_REDSval_official_test_GT.txt  
          inflating: Restormer/basicsr/data/paired_image_dataset.py  
          inflating: Restormer/basicsr/data/data_util.py  
          inflating: Restormer/basicsr/data/reds_dataset.py  
          inflating: Restormer/basicsr/data/video_test_dataset.py  
          inflating: Restormer/basicsr/data/single_image_dataset.py  
          inflating: Restormer/basicsr/data/vimeo90k_dataset.py  
          inflating: Restormer/basicsr/data/__init__.py  
          inflating: Restormer/basicsr/data/prefetch_dataloader.py  
          inflating: Restormer/basicsr/data/ffhq_dataset.py  
          inflating: Restormer/basicsr/data/transforms.py  
          inflating: Restormer/basicsr/data/data_sampler.py  
           creating: Restormer/pretrained_models_demotion/
          inflating: Restormer/pretrained_models_demotion/motion_deblurring.pth  
           creating: Restormer/.git/
           creating: Restormer/.git/logs/
           creating: Restormer/.git/logs/refs/
           creating: Restormer/.git/logs/refs/remotes/
           creating: Restormer/.git/logs/refs/remotes/origin/
          inflating: Restormer/.git/logs/refs/remotes/origin/HEAD  
           creating: Restormer/.git/logs/refs/heads/
          inflating: Restormer/.git/logs/refs/heads/main  
          inflating: Restormer/.git/logs/HEAD  
          inflating: Restormer/.git/config   
           creating: Restormer/.git/refs/
           creating: Restormer/.git/refs/remotes/
           creating: Restormer/.git/refs/remotes/origin/
         extracting: Restormer/.git/refs/remotes/origin/HEAD  
           creating: Restormer/.git/refs/heads/
         extracting: Restormer/.git/refs/heads/main  
           creating: Restormer/.git/refs/tags/
         extracting: Restormer/.git/HEAD     
           creating: Restormer/.git/hooks/
          inflating: Restormer/.git/hooks/push-to-checkout.sample  
          inflating: Restormer/.git/hooks/commit-msg.sample  
          inflating: Restormer/.git/hooks/applypatch-msg.sample  
          inflating: Restormer/.git/hooks/pre-receive.sample  
          inflating: Restormer/.git/hooks/pre-push.sample  
          inflating: Restormer/.git/hooks/fsmonitor-watchman.sample  
          inflating: Restormer/.git/hooks/post-update.sample  
          inflating: Restormer/.git/hooks/update.sample  
          inflating: Restormer/.git/hooks/pre-merge-commit.sample  
          inflating: Restormer/.git/hooks/pre-commit.sample  
          inflating: Restormer/.git/hooks/pre-applypatch.sample  
          inflating: Restormer/.git/hooks/prepare-commit-msg.sample  
          inflating: Restormer/.git/hooks/pre-rebase.sample  
           creating: Restormer/.git/info/
          inflating: Restormer/.git/info/exclude  
           creating: Restormer/.git/objects/
           creating: Restormer/.git/objects/pack/
          inflating: Restormer/.git/objects/pack/pack-c5acb539fe9f4f374066c96759a798aee30d3def.pack  
          inflating: Restormer/.git/objects/pack/pack-c5acb539fe9f4f374066c96759a798aee30d3def.idx  
           creating: Restormer/.git/objects/info/
          inflating: Restormer/.git/packed-refs  
          inflating: Restormer/.git/index    
          inflating: Restormer/.git/description  
           creating: Restormer/pretrained_models_defocus_deblur/
          inflating: Restormer/pretrained_models_defocus_deblur/single_image_defocus_deblurring.pth  
          inflating: Restormer/pretrained_models_defocus_deblur/dual_pixel_defocus_deblurring.pth  
           creating: Restormer/Defocus_Deblurring/
           creating: Restormer/Defocus_Deblurring/pretrained_models/
          inflating: Restormer/Defocus_Deblurring/pretrained_models/single_image_defocus_deblurring.pth  
          inflating: Restormer/Defocus_Deblurring/pretrained_models/README.md  
          inflating: Restormer/Defocus_Deblurring/generate_patches_dpdd.py  
           creating: Restormer/Defocus_Deblurring/Datasets/
          inflating: Restormer/Defocus_Deblurring/Datasets/README.md  
           creating: Restormer/Defocus_Deblurring/Options/
          inflating: Restormer/Defocus_Deblurring/Options/DefocusDeblur_DualPixel_16bit_Restormer.yml  
          inflating: Restormer/Defocus_Deblurring/Options/DefocusDeblur_Single_8bit_Restormer.yml  
          inflating: Restormer/Defocus_Deblurring/README.md  
          inflating: Restormer/Defocus_Deblurring/test_single_image_defocus_deblur.py  
          inflating: Restormer/Defocus_Deblurring/download_data.py  
          inflating: Restormer/Defocus_Deblurring/utils.py  
          inflating: Restormer/Defocus_Deblurring/test_dual_pixel_defocus_deblur.py  
          inflating: Restormer/README.md     
         extracting: Restormer/VERSION       
          inflating: Restormer/setup.py      
           creating: Restormer/pretrained_models_derain/
          inflating: Restormer/pretrained_models_derain/deraining.pth  
          inflating: Restormer/train.sh      
          inflating: Restormer/INSTALL.md    
           creating: Restormer/__MACOSX/
           creating: Restormer/__MACOSX/pretrained_models_demotion/
          inflating: Restormer/__MACOSX/pretrained_models_demotion/._motion_deblurring.pth  
           creating: Restormer/__MACOSX/pretrained_models_defocus_deblur/
          inflating: Restormer/__MACOSX/pretrained_models_defocus_deblur/._single_image_defocus_deblurring.pth  
          inflating: Restormer/__MACOSX/pretrained_models_defocus_deblur/._dual_pixel_defocus_deblurring.pth  
          inflating: Restormer/__MACOSX/._pretrained_models_denoise  
          inflating: Restormer/__MACOSX/._pretrained_models_defocus_deblur  
           creating: Restormer/__MACOSX/pretrained_models_derain/
          inflating: Restormer/__MACOSX/pretrained_models_derain/._deraining.pth  
          inflating: Restormer/__MACOSX/._pretrained_models_derain  
          inflating: Restormer/__MACOSX/._pretrained_models_demotion  
           creating: Restormer/__MACOSX/pretrained_models_denoise/
          inflating: Restormer/__MACOSX/pretrained_models_denoise/._gaussian_gray_denoising_sigma25.pth  
          inflating: Restormer/__MACOSX/pretrained_models_denoise/._gaussian_gray_denoising_sigma50.pth  
          inflating: Restormer/__MACOSX/pretrained_models_denoise/._gaussian_color_denoising_sigma50.pth  
          inflating: Restormer/__MACOSX/pretrained_models_denoise/._gaussian_color_denoising_sigma25.pth  
          inflating: Restormer/__MACOSX/pretrained_models_denoise/._gaussian_gray_denoising_sigma15.pth  
          inflating: Restormer/__MACOSX/pretrained_models_denoise/._gaussian_gray_denoising_blind.pth  
          inflating: Restormer/__MACOSX/pretrained_models_denoise/._gaussian_color_denoising_blind.pth  
          inflating: Restormer/__MACOSX/pretrained_models_denoise/._real_denoising.pth  
          inflating: Restormer/__MACOSX/pretrained_models_denoise/._gaussian_color_denoising_sigma15.pth  
      2. Install additional packages.

        Click here to view the execution result

        Looking in indexes: https://mirrors.cloud.aliyuncs.com/pypi/simple
        Collecting natsort
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/ef/82/7a9d0550484a62c6da82858ee9419f3dd1ccc9aa1c26a1e43da3ecd20b0d/natsort-8.4.0-py3-none-any.whl (38 kB)
        Installing collected packages: natsort
        Successfully installed natsort-8.4.0
        WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
        
        [notice] A new release of pip is available: 23.0.1 -> 23.2.1
        [notice] To update, run: python3 -m pip install --upgrade pip
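The natsort package installed here provides natural ordering of numbered filenames (2.jpg before 10.jpg), which the restoration scripts presumably rely on when iterating over images; plain lexicographic sorting gets this wrong. The same idea in a stdlib-only sketch, using filenames from the input folder above:

```python
import re


def natural_key(name: str):
    """Split digit runs out of the name so numeric parts compare as ints."""
    return [int(p) if p.isdigit() else p for p in re.split(r"(\d+)", name)]


files = ["10.jpg", "2.jpg", "54.jpg", "4.png", "20.jpg"]
print(sorted(files))                   # ['10.jpg', '2.jpg', '20.jpg', '4.png', '54.jpg']
print(sorted(files, key=natural_key))  # ['2.jpg', '4.png', '10.jpg', '20.jpg', '54.jpg']
```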
      3. Run the appropriate inference tasks as needed, including motion deblurring, defocus deblurring, and deraining. You can configure the related parameters by referring to the Notebook. By specifying the input and output folders, you can run the corresponding algorithm to perform the image restoration task. After the command succeeds, you can view the restored images in ./results/{task_name}.

        Click here to view the execution result

        ==> Running Motion_Deblurring with weights /mnt/workspace/Restormer/Motion_Deblurring/pretrained_models/motion_deblurring.pth
         
        100%|███████████████████████████████████████████| 10/10 [00:28<00:00,  2.81s/it]
        
        Restored images are saved at results/Motion_Deblurring
        
         ==> Running Single_Image_Defocus_Deblurring with weights /mnt/workspace/Restormer/Defocus_Deblurring/pretrained_models/single_image_defocus_deblurring.pth
         
        100%|███████████████████████████████████████████| 10/10 [00:26<00:00,  2.64s/it]
        
        Restored images are saved at results/Single_Image_Defocus_Deblurring
        
         ==> Running Deraining with weights /mnt/workspace/Restormer/Deraining/pretrained_models/deraining.pth
         
        100%|███████████████████████████████████████████| 10/10 [00:26<00:00,  2.65s/it]
        
        Restored images are saved at results/Deraining
        
         ==> Running Real_Denoising with weights /mnt/workspace/Restormer/Denoising/pretrained_models/real_denoising.pth
         
        100%|███████████████████████████████████████████| 10/10 [00:24<00:00,  2.48s/it]
        
        Restored images are saved at results/Real_Denoising
        
         ==> Running Gaussian_Gray_Denoising with weights /mnt/workspace/Restormer/Denoising/pretrained_models/gaussian_gray_denoising_blind.pth
         
        100%|███████████████████████████████████████████| 10/10 [00:24<00:00,  2.45s/it]
        
        Restored images are saved at results/Gaussian_Gray_Denoising
        
         ==> Running Gaussian_Color_Denoising with weights /mnt/workspace/Restormer/Denoising/pretrained_models/gaussian_color_denoising_blind.pth
         
        100%|███████████████████████████████████████████| 10/10 [00:24<00:00,  2.50s/it]
        
        Restored images are saved at results/Gaussian_Color_Denoising
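The six invocations in the log above differ only in the task name, so they can be scripted in one loop. A sketch, assuming Restormer's demo.py accepts --task, --input_dir, and --result_dir flags as in the upstream repository (the input path is illustrative):

```python
# Restoration tasks matching the log output above.
TASKS = [
    "Motion_Deblurring",
    "Single_Image_Defocus_Deblurring",
    "Deraining",
    "Real_Denoising",
    "Gaussian_Gray_Denoising",
    "Gaussian_Color_Denoising",
]


def demo_command(task: str, input_dir: str = "../input") -> list[str]:
    """Build the command line for one restoration task.

    Results land in results/<task>, matching the tutorial's
    ./results/{task_name} layout.
    """
    return [
        "python", "demo.py",
        "--task", task,
        "--input_dir", input_dir,
        "--result_dir", f"results/{task}",
    ]


for task in TASKS:
    print(" ".join(demo_command(task)))
    # To actually run each task from inside ./Restormer:
    # subprocess.run(demo_command(task), cwd="Restormer", check=True)
```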

      NAFNet

      1. Download the code and pretrained files (based on ModelScope). After the download and extraction complete, you can view the algorithm's source code in the corresponding folder.

        Click here to view the execution result

        Looking in indexes: https://mirrors.cloud.aliyuncs.com/pypi/simple
        Collecting modelscope
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/2b/17/53845f398e340217ecb2169033f547f640db3663ffb0fb5a6218c5f3bfec/modelscope-1.8.4-py3-none-any.whl (4.9 MB)
             ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 42.9 MB/s eta 0:00:0000:0100:01
        Collecting gast>=0.2.2
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/fa/39/5aae571e5a5f4de9c3445dae08a530498e5c53b0e74410eeeb0991c79047/gast-0.5.4-py3-none-any.whl (19 kB)
        Requirement already satisfied: urllib3>=1.26 in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.26.15)
        Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from modelscope) (6.0)
        Requirement already satisfied: tqdm>=4.64.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (4.65.0)
        Collecting sortedcontainers>=1.5.9
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/32/46/9cb0e58b2deb7f82b84065f37f3bffeb12413f947f9388e4cac22c4621ce/sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
        Requirement already satisfied: Pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (9.4.0)
        Requirement already satisfied: pyarrow!=9.0.0,>=6.0.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (11.0.0)
        Collecting oss2
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/4a/e7/08b90651a435acde68c537eebff970011422f61c465f6d1c88c4b3af6774/oss2-2.18.1.tar.gz (274 kB)
             ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 274.3/274.3 kB 73.3 MB/s eta 0:00:00
          Preparing metadata (setup.py) ... done
        Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from modelscope) (59.6.0)
        Requirement already satisfied: yapf in /usr/local/lib/python3.10/dist-packages (from modelscope) (0.32.0)
        Requirement already satisfied: attrs in /usr/local/lib/python3.10/dist-packages (from modelscope) (22.2.0)
        Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.23.3)
        Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.8.2)
        Collecting simplejson>=3.3.0
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/b8/00/9720ea26c0da200a39b89e25721970f50f4b80bb2ab6de0199324f93c4ca/simplejson-3.19.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (137 kB)
             ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 137.9/137.9 kB 48.0 MB/s eta 0:00:00
        Requirement already satisfied: datasets<=2.13.0,>=2.8.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.11.0)
        Requirement already satisfied: einops in /usr/local/lib/python3.10/dist-packages (from modelscope) (0.4.1)
        Requirement already satisfied: addict in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.4.0)
        Requirement already satisfied: requests>=2.25 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.25.1)
        Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.5.3)
        Requirement already satisfied: filelock>=3.3.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (3.10.7)
        Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.10.1)
        Requirement already satisfied: huggingface-hub<1.0.0,>=0.11.0 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.13.3)
        Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (23.0)
        Requirement already satisfied: multiprocess in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.70.14)
        Requirement already satisfied: xxhash in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (3.2.0)
        Requirement already satisfied: dill<0.3.7,>=0.3.0 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.3.6)
        Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (3.8.4)
        Requirement already satisfied: fsspec[http]>=2021.11.1 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (2023.3.0)
        Requirement already satisfied: responses<0.19 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.18.0)
        Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.1->modelscope) (1.16.0)
        Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests>=2.25->modelscope) (2022.12.7)
        Requirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.10/dist-packages (from requests>=2.25->modelscope) (4.0.0)
        Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.25->modelscope) (2.10)
        Collecting aliyun-python-sdk-core>=2.13.12
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/55/5a/6eec6c6e78817e5ca2afee661f2bbb33dbcfa2ce09a2980b52223323bd2e/aliyun-python-sdk-core-2.13.36.tar.gz (440 kB)
             ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 440.5/440.5 kB 26.0 MB/s eta 0:00:00
          Preparing metadata (setup.py) ... done
        Collecting aliyun-python-sdk-kms>=2.4.1
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/13/90/02d05478df643ceac0021bd3db4f19b42dd06c2b73e082569d0d340de70c/aliyun_python_sdk_kms-2.16.1-py2.py3-none-any.whl (70 kB)
             ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.8/70.8 kB 28.1 MB/s eta 0:00:00
        Collecting crcmod>=1.7
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/6b/b0/e595ce2a2527e169c3bcd6c33d2473c1918e0b7f6826a043ca1245dd4e5b/crcmod-1.7.tar.gz (89 kB)
             ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 89.7/89.7 kB 31.2 MB/s eta 0:00:00
          Preparing metadata (setup.py) ... done
        Requirement already satisfied: pycryptodome>=3.4.7 in /usr/local/lib/python3.10/dist-packages (from oss2->modelscope) (3.17)
        Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->modelscope) (2023.3)
        Requirement already satisfied: cryptography>=2.6.0 in /usr/local/lib/python3.10/dist-packages (from aliyun-python-sdk-core>=2.13.12->oss2->modelscope) (40.0.1)
        Collecting jmespath<1.0.0,>=0.9.3
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/07/cb/5f001272b6faeb23c1c9e0acc04d48eaaf5c862c17709d20e3469c6e0139/jmespath-0.10.0-py2.py3-none-any.whl (24 kB)
        Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (1.3.3)
        Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (1.8.2)
        Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (3.1.0)
        Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (6.0.4)
        Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (4.0.2)
        Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (1.3.1)
        Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0.0,>=0.11.0->datasets<=2.13.0,>=2.8.0->modelscope) (4.5.0)
        Requirement already satisfied: cffi>=1.12 in /usr/local/lib/python3.10/dist-packages (from cryptography>=2.6.0->aliyun-python-sdk-core>=2.13.12->oss2->modelscope) (1.15.1)
        Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.12->cryptography>=2.6.0->aliyun-python-sdk-core>=2.13.12->oss2->modelscope) (2.21)
        Building wheels for collected packages: oss2, aliyun-python-sdk-core, crcmod
          Building wheel for oss2 (setup.py) ... done
          Created wheel for oss2: filename=oss2-2.18.1-py3-none-any.whl size=115202 sha256=f9143856d3968608c696d2556f1f64953f89a31f7931d695e5f09ffe96a10fe4
          Stored in directory: /root/.cache/pip/wheels/2b/e2/2d/a55f3aabf369a023a14d1fc570dd3a3824cdd2223f0f73e902
          Building wheel for aliyun-python-sdk-core (setup.py) ... done
          Created wheel for aliyun-python-sdk-core: filename=aliyun_python_sdk_core-2.13.36-py3-none-any.whl size=533196 sha256=5ec4012574fc4cb6e83ed02aebc59fe9747c1a536e346e07e7eed5073e3a3be9
          Stored in directory: /root/.cache/pip/wheels/0b/4f/1c/459b3309c6370bdaa926bb358dbc1b42aa0e0a26c0476ac401
          Building wheel for crcmod (setup.py) ... done
          Created wheel for crcmod: filename=crcmod-1.7-cp310-cp310-linux_x86_64.whl size=31428 sha256=5078b5859e75ffe6d90a36bac0f4db76801b4db5a04e0f5a5523912ee5e65447
          Stored in directory: /root/.cache/pip/wheels/3d/1b/33/88c98fcba7f84f7d448c44de21caf3e98deaebae12cb5104ba
        Successfully built oss2 aliyun-python-sdk-core crcmod
        Installing collected packages: sortedcontainers, crcmod, simplejson, jmespath, gast, aliyun-python-sdk-core, aliyun-python-sdk-kms, oss2, modelscope
        Successfully installed aliyun-python-sdk-core-2.13.36 aliyun-python-sdk-kms-2.16.1 crcmod-1.7 gast-0.5.4 jmespath-0.10.0 modelscope-1.8.4 oss2-2.18.1 simplejson-3.19.1 sortedcontainers-2.4.0
        WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
        
        [notice] A new release of pip is available: 23.0.1 -> 23.2.1
        [notice] To update, run: python3 -m pip install --upgrade pip
        http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/nafnet.zip
        cn-hangzhou
        --2023-09-04 11:45:09--  http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/nafnet.zip
        Resolving pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.49, 100.118.28.44, 100.118.28.45, ...
        Connecting to pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.49|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 943903968 (900M) [application/zip]
        Saving to: ‘nafnet.zip’
        
        nafnet.zip          100%[===================>] 900.18M  10.7MB/s    in 84s     
        
        2023-09-04 11:46:33 (10.7 MB/s) - ‘nafnet.zip’ saved [943903968/943903968]
        
        Archive:  nafnet.zip
           creating: NAFNet/
           creating: NAFNet/pretrain_model/
           creating: NAFNet/pretrain_model/cv_nafnet_image-deblur_reds/
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_reds/configuration.json  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_reds/README.md  
           creating: NAFNet/pretrain_model/cv_nafnet_image-deblur_reds/data/
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_reds/data/deblur.gif  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_reds/data/nafnet_arch.png  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_reds/data/blurry.jpg  
         extracting: NAFNet/pretrain_model/cv_nafnet_image-deblur_reds/.mdl  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_reds/.msc  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_reds/pytorch_model.pt  
           creating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/configuration.json  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/README.md  
           creating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/
           creating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/
           creating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0003_001_S6_00100_00060_3200_H/
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0003_001_S6_00100_00060_3200_H/0003_GT_SRGB_010.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0003_001_S6_00100_00060_3200_H/0003_NOISY_SRGB_010.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0003_001_S6_00100_00060_3200_H/0003_GT_SRGB_011.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0003_001_S6_00100_00060_3200_H/0003_NOISY_SRGB_011.PNG  
           creating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0001_001_S6_00100_00060_3200_L/
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0001_001_S6_00100_00060_3200_L/0001_NOISY_SRGB_010.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0001_001_S6_00100_00060_3200_L/0001_GT_SRGB_010.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0001_001_S6_00100_00060_3200_L/0001_GT_SRGB_011.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0001_001_S6_00100_00060_3200_L/0001_NOISY_SRGB_011.PNG  
           creating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0002_001_S6_00100_00020_3200_N/
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0002_001_S6_00100_00020_3200_N/0002_GT_SRGB_011.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0002_001_S6_00100_00020_3200_N/0002_NOISY_SRGB_010.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0002_001_S6_00100_00020_3200_N/0002_NOISY_SRGB_011.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0002_001_S6_00100_00020_3200_N/0002_GT_SRGB_010.PNG  
           creating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0004_001_S6_00100_00060_4400_L/
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0004_001_S6_00100_00060_4400_L/0004_NOISY_SRGB_010.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0004_001_S6_00100_00060_4400_L/0004_GT_SRGB_010.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0004_001_S6_00100_00060_4400_L/0004_GT_SRGB_011.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/SIDD_example/0004_001_S6_00100_00060_4400_L/0004_NOISY_SRGB_011.PNG  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/noisy-demo-1.png  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/nafnet_arch.png  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/noisy-demo-0.png  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/data/denoise.gif  
         extracting: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/.mdl  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/.msc  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/pytorch_model.pt  
           creating: NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro/
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro/configuration.json  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro/README.md  
           creating: NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro/data/
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro/data/deblur.gif  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro/data/nafnet_arch.png  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro/data/blurry.jpg  
         extracting: NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro/.mdl  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro/.msc  
          inflating: NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro/pytorch_model.pt  
          inflating: NAFNet/demo.py          
      2. Run whichever inference tasks you need: deblurring, denoising, or motion deblurring. After each command completes successfully, you can view the restored images in ./results/{task_name}.

        Click here to view the run results

        2023-09-04 11:47:14,618 - modelscope - INFO - PyTorch version 1.13.1+cu117 Found.
        2023-09-04 11:47:14,619 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
        2023-09-04 11:47:14,619 - modelscope - INFO - No valid ast index found from /root/.cache/modelscope/ast_indexer, generating ast index from prebuilt!
        2023-09-04 11:47:14,695 - modelscope - INFO - Loading done! Current index file version is 1.8.4, with md5 80fa9349fc3e7b04fcfad511918062b1 and a total number of 902 components indexed
        /mnt/workspace/NAFNet
        2023-09-04 11:47:15,604 - modelscope - INFO - initiate model from /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro
        2023-09-04 11:47:15,604 - modelscope - INFO - initiate model from location /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro.
        2023-09-04 11:47:15,605 - modelscope - INFO - initialize model from /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro
        2023-09-04 11:47:16,267 - modelscope - INFO - Loading NAFNet model from /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-deblur_gopro/pytorch_model.pt, with param key: [params].
        2023-09-04 11:47:16,368 - modelscope - INFO - load model done.
        2023-09-04 11:47:16,400 - modelscope - INFO - load image denoise model done
        Total Image:  10
        2023-09-04 11:47:17,517 - modelscope - WARNING - task image-deblurring input definition is missing
        2023-09-04 11:47:18,893 - modelscope - WARNING - task image-deblurring output keys are missing
        0/10 saved at results/nafnet_deblur/64.jpg
        1/10 saved at results/nafnet_deblur/2.jpg
        2/10 saved at results/nafnet_deblur/34.jpg
        3/10 saved at results/nafnet_deblur/54.jpg
        4/10 saved at results/nafnet_deblur/40.jpg
        5/10 saved at results/nafnet_deblur/10.jpg
        6/10 saved at results/nafnet_deblur/70.jpg
        7/10 saved at results/nafnet_deblur/4.png
        8/10 saved at results/nafnet_deblur/20.jpg
        9/10 saved at results/nafnet_deblur/50.jpg
        2023-09-04 11:47:23,689 - modelscope - INFO - PyTorch version 1.13.1+cu117 Found.
        2023-09-04 11:47:23,690 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
        2023-09-04 11:47:23,755 - modelscope - INFO - Loading done! Current index file version is 1.8.4, with md5 80fa9349fc3e7b04fcfad511918062b1 and a total number of 902 components indexed
        /mnt/workspace/NAFNet
        2023-09-04 11:47:24,316 - modelscope - INFO - initiate model from /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd
        2023-09-04 11:47:24,316 - modelscope - INFO - initiate model from location /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd.
        2023-09-04 11:47:24,317 - modelscope - INFO - initialize model from /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd
        2023-09-04 11:47:24,627 - modelscope - INFO - Loading NAFNet model from /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-denoise_sidd/pytorch_model.pt, with param key: [params].
        2023-09-04 11:47:24,685 - modelscope - INFO - load model done.
        2023-09-04 11:47:24,707 - modelscope - INFO - load image denoise model done
        Total Image:  10
        0/10 saved at results/nafnet_denoise/64.jpg
        1/10 saved at results/nafnet_denoise/2.jpg
        2/10 saved at results/nafnet_denoise/34.jpg
        3/10 saved at results/nafnet_denoise/54.jpg
        4/10 saved at results/nafnet_denoise/40.jpg
        5/10 saved at results/nafnet_denoise/10.jpg
        6/10 saved at results/nafnet_denoise/70.jpg
        7/10 saved at results/nafnet_denoise/4.png
        8/10 saved at results/nafnet_denoise/20.jpg
        9/10 saved at results/nafnet_denoise/50.jpg
        2023-09-04 11:47:30,906 - modelscope - INFO - PyTorch version 1.13.1+cu117 Found.
        2023-09-04 11:47:30,907 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
        2023-09-04 11:47:30,969 - modelscope - INFO - Loading done! Current index file version is 1.8.4, with md5 80fa9349fc3e7b04fcfad511918062b1 and a total number of 902 components indexed
        /mnt/workspace/NAFNet
        2023-09-04 11:47:31,522 - modelscope - INFO - initiate model from /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-deblur_reds
        2023-09-04 11:47:31,522 - modelscope - INFO - initiate model from location /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-deblur_reds.
        2023-09-04 11:47:31,523 - modelscope - INFO - initialize model from /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-deblur_reds
        2023-09-04 11:47:32,187 - modelscope - INFO - Loading NAFNet model from /mnt/workspace/NAFNet/pretrain_model/cv_nafnet_image-deblur_reds/pytorch_model.pt, with param key: [params].
        2023-09-04 11:47:32,296 - modelscope - INFO - load model done.
        2023-09-04 11:47:32,320 - modelscope - INFO - load image denoise model done
        Total Image:  10
        2023-09-04 11:47:33,402 - modelscope - WARNING - task image-deblurring input definition is missing
        2023-09-04 11:47:34,753 - modelscope - WARNING - task image-deblurring output keys are missing
        0/10 saved at results/nafnet_de_motion_blur/64.jpg
        1/10 saved at results/nafnet_de_motion_blur/2.jpg
        2/10 saved at results/nafnet_de_motion_blur/34.jpg
        3/10 saved at results/nafnet_de_motion_blur/54.jpg
        4/10 saved at results/nafnet_de_motion_blur/40.jpg
        5/10 saved at results/nafnet_de_motion_blur/10.jpg
        6/10 saved at results/nafnet_de_motion_blur/70.jpg
        7/10 saved at results/nafnet_de_motion_blur/4.png
        8/10 saved at results/nafnet_de_motion_blur/20.jpg
        9/10 saved at results/nafnet_de_motion_blur/50.jpg
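The per-image save lines above follow a simple results/{task_name}/{original_file_name} layout, with task_name values such as nafnet_deblur, nafnet_denoise, and nafnet_de_motion_blur. As an illustration only (a hypothetical stdlib helper, not part of the tutorial's demo.py), the mapping from input images to output paths could be sketched as:

```python
from pathlib import Path

def build_output_paths(input_dir, task_name, exts=(".jpg", ".png")):
    """Mirror every image in input_dir into results/{task_name}/, keeping
    the original file name (e.g. input/64.jpg -> results/nafnet_deblur/64.jpg).
    Non-image files are skipped."""
    results_dir = Path("results") / task_name
    return [(img, results_dir / img.name)
            for img in sorted(Path(input_dir).iterdir())
            if img.suffix.lower() in exts]
```

The actual demo.py in the NAFNet folder handles model loading and saving itself; this sketch only shows the folder convention visible in the logs.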
    2. Image super-resolution, which increases image resolution and sharpness. Several algorithms are supported; you can pick any one of them to process the images.

      RealESRGAN

      1. Download the code and pretrained files. After the download and extraction complete, you can browse the algorithm's source code in the ./Real-ESRGAN folder.

        Click here to view the run results

        http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/realesrgan.zip
        cn-hangzhou
        --2023-09-05 01:29:05--  http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/realesrgan.zip
        Resolving pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.50, 100.118.28.49, 100.118.28.44, ...
        Connecting to pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.50|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 151719684 (145M) [application/zip]
        Saving to: ‘realesrgan.zip’
        
        realesrgan.zip      100%[===================>] 144.69M  11.3MB/s    in 13s     
        
        2023-09-05 01:29:17 (11.5 MB/s) - ‘realesrgan.zip’ saved [151719684/151719684]
        
        Archive:  realesrgan.zip
           creating: Real-ESRGAN/
           creating: Real-ESRGAN/.ipynb_checkpoints/
          inflating: Real-ESRGAN/.ipynb_checkpoints/inference_realesrgan-checkpoint.py  
          inflating: Real-ESRGAN/setup.cfg   
           creating: Real-ESRGAN/tests/
          inflating: Real-ESRGAN/tests/test_discriminator_arch.py  
          inflating: Real-ESRGAN/tests/test_model.py  
          inflating: Real-ESRGAN/tests/test_utils.py  
           creating: Real-ESRGAN/tests/data/
          inflating: Real-ESRGAN/tests/data/meta_info_pair.txt  
          inflating: Real-ESRGAN/tests/data/test_realesrnet_model.yml  
          inflating: Real-ESRGAN/tests/data/test_realesrgan_dataset.yml  
           creating: Real-ESRGAN/tests/data/gt.lmdb/
          inflating: Real-ESRGAN/tests/data/gt.lmdb/meta_info.txt  
          inflating: Real-ESRGAN/tests/data/gt.lmdb/lock.mdb  
          inflating: Real-ESRGAN/tests/data/gt.lmdb/data.mdb  
          inflating: Real-ESRGAN/tests/data/test_realesrgan_model.yml  
          inflating: Real-ESRGAN/tests/data/meta_info_gt.txt  
          inflating: Real-ESRGAN/tests/data/test_realesrgan_paired_dataset.yml  
           creating: Real-ESRGAN/tests/data/gt/
          inflating: Real-ESRGAN/tests/data/gt/comic.png  
          inflating: Real-ESRGAN/tests/data/gt/baboon.png  
           creating: Real-ESRGAN/tests/data/lq.lmdb/
          inflating: Real-ESRGAN/tests/data/lq.lmdb/meta_info.txt  
          inflating: Real-ESRGAN/tests/data/lq.lmdb/lock.mdb  
          inflating: Real-ESRGAN/tests/data/lq.lmdb/data.mdb  
           creating: Real-ESRGAN/tests/data/lq/
         extracting: Real-ESRGAN/tests/data/lq/comic.png  
          inflating: Real-ESRGAN/tests/data/lq/baboon.png  
          inflating: Real-ESRGAN/tests/test_dataset.py  
          inflating: Real-ESRGAN/requirements.txt  
           creating: Real-ESRGAN/assets/
          inflating: Real-ESRGAN/assets/teaser.jpg  
          inflating: Real-ESRGAN/assets/realesrgan_logo_ai.png  
          inflating: Real-ESRGAN/assets/realesrgan_logo_gv.png  
          inflating: Real-ESRGAN/assets/realesrgan_logo.png  
          inflating: Real-ESRGAN/assets/realesrgan_logo_gi.png  
          inflating: Real-ESRGAN/assets/teaser-text.png  
          inflating: Real-ESRGAN/assets/realesrgan_logo_av.png  
           creating: Real-ESRGAN/pretrained_model/
          inflating: Real-ESRGAN/.gitignore  
           creating: Real-ESRGAN/.vscode/
          inflating: Real-ESRGAN/.vscode/settings.json  
          inflating: Real-ESRGAN/demo.py     
           creating: Real-ESRGAN/experiments/
           creating: Real-ESRGAN/experiments/pretrained_models/
         extracting: Real-ESRGAN/experiments/pretrained_models/README.md  
          inflating: Real-ESRGAN/cog.yaml    
          inflating: Real-ESRGAN/LICENSE     
           creating: Real-ESRGAN/inputs/
          inflating: Real-ESRGAN/inputs/wolf_gray.jpg  
          inflating: Real-ESRGAN/inputs/0030.jpg  
           creating: Real-ESRGAN/inputs/video/
          inflating: Real-ESRGAN/inputs/video/onepiece_demo.mp4  
          inflating: Real-ESRGAN/inputs/0014.jpg  
          inflating: Real-ESRGAN/inputs/OST_009.png  
          inflating: Real-ESRGAN/inputs/tree_alpha_16bit.png  
          inflating: Real-ESRGAN/inputs/children-alpha.png  
          inflating: Real-ESRGAN/inputs/00003.png  
          inflating: Real-ESRGAN/inputs/00017_gray.png  
          inflating: Real-ESRGAN/inputs/ADE_val_00000114.jpg  
          inflating: Real-ESRGAN/MANIFEST.in  
          inflating: Real-ESRGAN/.pre-commit-config.yaml  
           creating: Real-ESRGAN/.git/
           creating: Real-ESRGAN/.git/logs/
           creating: Real-ESRGAN/.git/logs/refs/
           creating: Real-ESRGAN/.git/logs/refs/remotes/
           creating: Real-ESRGAN/.git/logs/refs/remotes/origin/
          inflating: Real-ESRGAN/.git/logs/refs/remotes/origin/HEAD  
           creating: Real-ESRGAN/.git/logs/refs/heads/
          inflating: Real-ESRGAN/.git/logs/refs/heads/master  
          inflating: Real-ESRGAN/.git/logs/HEAD  
          inflating: Real-ESRGAN/.git/config  
           creating: Real-ESRGAN/.git/refs/
           creating: Real-ESRGAN/.git/refs/remotes/
           creating: Real-ESRGAN/.git/refs/remotes/origin/
         extracting: Real-ESRGAN/.git/refs/remotes/origin/HEAD  
           creating: Real-ESRGAN/.git/refs/heads/
         extracting: Real-ESRGAN/.git/refs/heads/master  
           creating: Real-ESRGAN/.git/refs/tags/
         extracting: Real-ESRGAN/.git/HEAD   
           creating: Real-ESRGAN/.git/hooks/
          inflating: Real-ESRGAN/.git/hooks/push-to-checkout.sample  
          inflating: Real-ESRGAN/.git/hooks/commit-msg.sample  
          inflating: Real-ESRGAN/.git/hooks/applypatch-msg.sample  
          inflating: Real-ESRGAN/.git/hooks/pre-receive.sample  
          inflating: Real-ESRGAN/.git/hooks/pre-push.sample  
          inflating: Real-ESRGAN/.git/hooks/fsmonitor-watchman.sample  
          inflating: Real-ESRGAN/.git/hooks/post-update.sample  
          inflating: Real-ESRGAN/.git/hooks/update.sample  
          inflating: Real-ESRGAN/.git/hooks/pre-merge-commit.sample  
          inflating: Real-ESRGAN/.git/hooks/pre-commit.sample  
          inflating: Real-ESRGAN/.git/hooks/pre-applypatch.sample  
          inflating: Real-ESRGAN/.git/hooks/prepare-commit-msg.sample  
          inflating: Real-ESRGAN/.git/hooks/pre-rebase.sample  
           creating: Real-ESRGAN/.git/info/
          inflating: Real-ESRGAN/.git/info/exclude  
           creating: Real-ESRGAN/.git/objects/
           creating: Real-ESRGAN/.git/objects/pack/
          inflating: Real-ESRGAN/.git/objects/pack/pack-4d4a54ddcab2e146413d01c2262bbf138200efd1.pack  
          inflating: Real-ESRGAN/.git/objects/pack/pack-4d4a54ddcab2e146413d01c2262bbf138200efd1.idx  
           creating: Real-ESRGAN/.git/objects/info/
          inflating: Real-ESRGAN/.git/packed-refs  
          inflating: Real-ESRGAN/.git/index  
          inflating: Real-ESRGAN/.git/description  
          inflating: Real-ESRGAN/inference_realesrgan_video.py  
          inflating: Real-ESRGAN/CODE_OF_CONDUCT.md  
          inflating: Real-ESRGAN/README.md   
           creating: Real-ESRGAN/realesrgan/
           creating: Real-ESRGAN/realesrgan/.ipynb_checkpoints/
          inflating: Real-ESRGAN/realesrgan/.ipynb_checkpoints/__init__-checkpoint.py  
           creating: Real-ESRGAN/realesrgan/models/
          inflating: Real-ESRGAN/realesrgan/models/realesrnet_model.py  
           creating: Real-ESRGAN/realesrgan/models/__pycache__/
          inflating: Real-ESRGAN/realesrgan/models/__pycache__/__init__.cpython-310.pyc  
          inflating: Real-ESRGAN/realesrgan/models/__pycache__/realesrgan_model.cpython-310.pyc  
          inflating: Real-ESRGAN/realesrgan/models/__pycache__/realesrnet_model.cpython-310.pyc  
          inflating: Real-ESRGAN/realesrgan/models/realesrgan_model.py  
          inflating: Real-ESRGAN/realesrgan/models/__init__.py  
          inflating: Real-ESRGAN/realesrgan/train.py  
           creating: Real-ESRGAN/realesrgan/__pycache__/
          inflating: Real-ESRGAN/realesrgan/__pycache__/__init__.cpython-310.pyc  
          inflating: Real-ESRGAN/realesrgan/__pycache__/utils.cpython-310.pyc  
           creating: Real-ESRGAN/realesrgan/archs/
          inflating: Real-ESRGAN/realesrgan/archs/discriminator_arch.py  
           creating: Real-ESRGAN/realesrgan/archs/__pycache__/
          inflating: Real-ESRGAN/realesrgan/archs/__pycache__/__init__.cpython-310.pyc  
          inflating: Real-ESRGAN/realesrgan/archs/__pycache__/srvgg_arch.cpython-310.pyc  
          inflating: Real-ESRGAN/realesrgan/archs/__pycache__/discriminator_arch.cpython-310.pyc  
          inflating: Real-ESRGAN/realesrgan/archs/__init__.py  
          inflating: Real-ESRGAN/realesrgan/archs/srvgg_arch.py  
           creating: Real-ESRGAN/realesrgan/data/
           creating: Real-ESRGAN/realesrgan/data/__pycache__/
          inflating: Real-ESRGAN/realesrgan/data/__pycache__/realesrgan_paired_dataset.cpython-310.pyc  
          inflating: Real-ESRGAN/realesrgan/data/__pycache__/__init__.cpython-310.pyc  
          inflating: Real-ESRGAN/realesrgan/data/__pycache__/realesrgan_dataset.cpython-310.pyc  
          inflating: Real-ESRGAN/realesrgan/data/realesrgan_dataset.py  
          inflating: Real-ESRGAN/realesrgan/data/realesrgan_paired_dataset.py  
          inflating: Real-ESRGAN/realesrgan/data/__init__.py  
          inflating: Real-ESRGAN/realesrgan/__init__.py  
          inflating: Real-ESRGAN/realesrgan/utils.py  
          inflating: Real-ESRGAN/README_CN.md  
         extracting: Real-ESRGAN/VERSION     
          inflating: Real-ESRGAN/setup.py    
           creating: Real-ESRGAN/.github/
           creating: Real-ESRGAN/.github/workflows/
          inflating: Real-ESRGAN/.github/workflows/no-response.yml  
          inflating: Real-ESRGAN/.github/workflows/release.yml  
          inflating: Real-ESRGAN/.github/workflows/pylint.yml  
          inflating: Real-ESRGAN/.github/workflows/publish-pip.yml  
           creating: Real-ESRGAN/docs/
          inflating: Real-ESRGAN/docs/anime_comparisons.md  
          inflating: Real-ESRGAN/docs/FAQ.md  
          inflating: Real-ESRGAN/docs/ncnn_conversion.md  
          inflating: Real-ESRGAN/docs/Training_CN.md  
          inflating: Real-ESRGAN/docs/CONTRIBUTING.md  
          inflating: Real-ESRGAN/docs/anime_video_model.md  
          inflating: Real-ESRGAN/docs/feedback.md  
          inflating: Real-ESRGAN/docs/anime_model.md  
          inflating: Real-ESRGAN/docs/Training.md  
          inflating: Real-ESRGAN/docs/model_zoo.md  
          inflating: Real-ESRGAN/docs/anime_comparisons_CN.md  
          inflating: Real-ESRGAN/cog_predict.py  
           creating: Real-ESRGAN/scripts/
          inflating: Real-ESRGAN/scripts/pytorch2onnx.py  
          inflating: Real-ESRGAN/scripts/extract_subimages.py  
          inflating: Real-ESRGAN/scripts/generate_meta_info.py  
          inflating: Real-ESRGAN/scripts/generate_multiscale_DF2K.py  
          inflating: Real-ESRGAN/scripts/generate_meta_info_pairdata.py  
           creating: Real-ESRGAN/weights/
          inflating: Real-ESRGAN/weights/RealESRNet_x4plus.pth  
          inflating: Real-ESRGAN/weights/README.md  
          inflating: Real-ESRGAN/weights/RealESRGAN_x4plus.pth  
          inflating: Real-ESRGAN/weights/RealESRGAN_x4plus_anime_6B.pth  
           creating: Real-ESRGAN/options/
          inflating: Real-ESRGAN/options/train_realesrnet_x2plus.yml  
          inflating: Real-ESRGAN/options/finetune_realesrgan_x4plus_pairdata.yml  
          inflating: Real-ESRGAN/options/train_realesrnet_x4plus.yml  
          inflating: Real-ESRGAN/options/train_realesrgan_x2plus.yml  
          inflating: Real-ESRGAN/options/train_realesrgan_x4plus.yml  
          inflating: Real-ESRGAN/options/finetune_realesrgan_x4plus.yml  
      2. Run whichever inference task you need. After the command completes successfully, you can view the restored images in ./results/{task_name}.

        Click here to view the run results

        Testing 0 10
        	Tile 1/1
        Testing 1 2
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 2 20
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 3 34
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 4 4
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 5 40
        	Tile 1/1
        Testing 6 50
        	Tile 1/1
        Testing 7 54
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 8 64
        	Tile 1/1
        Testing 9 70
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 0 10
        	Tile 1/1
        Testing 1 2
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 2 20
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 3 34
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 4 4
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 5 40
        	Tile 1/1
        Testing 6 50
        	Tile 1/1
        Testing 7 54
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 8 64
        	Tile 1/1
        Testing 9 70
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 0 10
        	Tile 1/1
        Testing 1 2
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 2 20
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 3 34
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 4 4
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 5 40
        	Tile 1/1
        Testing 6 50
        	Tile 1/1
        Testing 7 54
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        Testing 8 64
        	Tile 1/1
        Testing 9 70
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
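        The `Tile m/n` lines in the log above come from tiled inference: an image larger than the model's processing window is split into tiles that are restored one at a time and stitched back together, which keeps GPU memory usage bounded. A minimal sketch of how such a tile grid can be computed (the `tile_grid` helper, the 512-pixel tile size, and the omission of overlap padding are illustrative simplifications, not the repository's actual implementation):

        ```python
        import math

        def tile_grid(width, height, tile_size):
            """Split an image into a grid of tiles no larger than tile_size per side.

            Returns the (x0, y0, x1, y1) box of each tile. Real implementations
            also pad each tile by a small overlap and blend the seams; that is
            omitted here for brevity.
            """
            nx = math.ceil(width / tile_size)   # tiles per row
            ny = math.ceil(height / tile_size)  # tiles per column
            boxes = []
            for j in range(ny):
                for i in range(nx):
                    x0, y0 = i * tile_size, j * tile_size
                    # Edge tiles are clipped to the image boundary.
                    boxes.append((x0, y0,
                                  min(x0 + tile_size, width),
                                  min(y0 + tile_size, height)))
            return boxes

        # An image that fits in one tile is processed whole ("Tile 1/1"); one up
        # to twice the tile size per side becomes a 2x2 grid ("Tile 1/4 .. 4/4").
        print(len(tile_grid(400, 300, 512)))   # 1
        print(len(tile_grid(800, 600, 512)))   # 4
        ```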

      SwinIR

      1. Download the code and pretrained models. After downloading and extracting, you can view the algorithm's source code in the ./SwinIR folder.

        Click here to view the run results

        http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/swinir.zip
        cn-hangzhou
        --2023-09-05 01:34:02--  http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/swinir.zip
        Resolving pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.44, 100.118.28.49, 100.118.28.45, ...
        Connecting to pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.44|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 580693761 (554M) [application/zip]
        Saving to: ‘swinir.zip’
        
        swinir.zip          100%[===================>] 553.79M  11.4MB/s    in 51s     
        
        2023-09-05 01:34:54 (10.8 MB/s) - ‘swinir.zip’ saved [580693761/580693761]
        
        Archive:  swinir.zip
           creating: SwinIR/
           creating: SwinIR/.ipynb_checkpoints/
          inflating: SwinIR/.ipynb_checkpoints/main_test_swinir-checkpoint.py  
          inflating: SwinIR/.ipynb_checkpoints/predict-checkpoint.py  
           creating: SwinIR/models/
           creating: SwinIR/models/__pycache__/
          inflating: SwinIR/models/__pycache__/network_swinir.cpython-310.pyc  
          inflating: SwinIR/models/network_swinir.py  
           creating: SwinIR/pretrained_model/
          inflating: SwinIR/pretrained_model/004_grayDN_DFWB_s128w8_SwinIR-M_noise50.pth  
          inflating: SwinIR/pretrained_model/006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg10.pth  
          inflating: SwinIR/pretrained_model/004_grayDN_DFWB_s128w8_SwinIR-M_noise25.pth  
          inflating: SwinIR/pretrained_model/005_colorDN_DFWB_s128w8_SwinIR-M_noise50.pth  
          inflating: SwinIR/pretrained_model/001_classicalSR_DF2K_s64w8_SwinIR-M_x4.pth  
          inflating: SwinIR/pretrained_model/006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg40.pth  
          inflating: SwinIR/pretrained_model/006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg20.pth  
          inflating: SwinIR/pretrained_model/006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg30.pth  
          inflating: SwinIR/pretrained_model/005_colorDN_DFWB_s128w8_SwinIR-M_noise15.pth  
          inflating: SwinIR/pretrained_model/004_grayDN_DFWB_s128w8_SwinIR-M_noise15.pth  
          inflating: SwinIR/pretrained_model/003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth  
          inflating: SwinIR/pretrained_model/005_colorDN_DFWB_s128w8_SwinIR-M_noise25.pth  
          inflating: SwinIR/demo.py          
          inflating: SwinIR/cog.yaml         
          inflating: SwinIR/LICENSE          
          inflating: SwinIR/download-weights.sh  
           creating: SwinIR/utils/
           creating: SwinIR/utils/__pycache__/
          inflating: SwinIR/utils/__pycache__/util_calculate_psnr_ssim.cpython-310.pyc  
          inflating: SwinIR/utils/util_calculate_psnr_ssim.py  
           creating: SwinIR/.git/
           creating: SwinIR/.git/logs/
           creating: SwinIR/.git/logs/refs/
           creating: SwinIR/.git/logs/refs/remotes/
           creating: SwinIR/.git/logs/refs/remotes/origin/
          inflating: SwinIR/.git/logs/refs/remotes/origin/HEAD  
           creating: SwinIR/.git/logs/refs/heads/
          inflating: SwinIR/.git/logs/refs/heads/main  
          inflating: SwinIR/.git/logs/HEAD   
          inflating: SwinIR/.git/config      
           creating: SwinIR/.git/refs/
           creating: SwinIR/.git/refs/remotes/
           creating: SwinIR/.git/refs/remotes/origin/
         extracting: SwinIR/.git/refs/remotes/origin/HEAD  
           creating: SwinIR/.git/refs/heads/
         extracting: SwinIR/.git/refs/heads/main  
           creating: SwinIR/.git/refs/tags/
         extracting: SwinIR/.git/HEAD        
           creating: SwinIR/.git/hooks/
          inflating: SwinIR/.git/hooks/push-to-checkout.sample  
          inflating: SwinIR/.git/hooks/commit-msg.sample  
          inflating: SwinIR/.git/hooks/applypatch-msg.sample  
          inflating: SwinIR/.git/hooks/pre-receive.sample  
          inflating: SwinIR/.git/hooks/pre-push.sample  
          inflating: SwinIR/.git/hooks/fsmonitor-watchman.sample  
          inflating: SwinIR/.git/hooks/post-update.sample  
          inflating: SwinIR/.git/hooks/update.sample  
          inflating: SwinIR/.git/hooks/pre-merge-commit.sample  
          inflating: SwinIR/.git/hooks/pre-commit.sample  
          inflating: SwinIR/.git/hooks/pre-applypatch.sample  
          inflating: SwinIR/.git/hooks/prepare-commit-msg.sample  
          inflating: SwinIR/.git/hooks/pre-rebase.sample  
           creating: SwinIR/.git/info/
          inflating: SwinIR/.git/info/exclude  
           creating: SwinIR/.git/objects/
           creating: SwinIR/.git/objects/pack/
          inflating: SwinIR/.git/objects/pack/pack-04d185ace674ba8b24be5a9b5fb3304d4c6a4e74.pack  
          inflating: SwinIR/.git/objects/pack/pack-04d185ace674ba8b24be5a9b5fb3304d4c6a4e74.idx  
           creating: SwinIR/.git/objects/info/
          inflating: SwinIR/.git/packed-refs  
          inflating: SwinIR/.git/index       
          inflating: SwinIR/.git/description  
          inflating: SwinIR/README.md        
           creating: SwinIR/testsets/
           creating: SwinIR/testsets/McMaster/
          inflating: SwinIR/testsets/McMaster/17.tif  
          inflating: SwinIR/testsets/McMaster/9.tif  
          inflating: SwinIR/testsets/McMaster/3.tif  
          inflating: SwinIR/testsets/McMaster/5.tif  
          inflating: SwinIR/testsets/McMaster/16.tif  
          inflating: SwinIR/testsets/McMaster/4.tif  
          inflating: SwinIR/testsets/McMaster/12.tif  
          inflating: SwinIR/testsets/McMaster/6.tif  
          inflating: SwinIR/testsets/McMaster/14.tif  
          inflating: SwinIR/testsets/McMaster/7.tif  
          inflating: SwinIR/testsets/McMaster/8.tif  
          inflating: SwinIR/testsets/McMaster/13.tif  
          inflating: SwinIR/testsets/McMaster/1.tif  
          inflating: SwinIR/testsets/McMaster/18.tif  
          inflating: SwinIR/testsets/McMaster/11.tif  
          inflating: SwinIR/testsets/McMaster/2.tif  
          inflating: SwinIR/testsets/McMaster/15.tif  
          inflating: SwinIR/testsets/McMaster/10.tif  
           creating: SwinIR/testsets/classic5/
          inflating: SwinIR/testsets/classic5/lena.bmp  
          inflating: SwinIR/testsets/classic5/baboon.bmp  
          inflating: SwinIR/testsets/classic5/barbara.bmp  
          inflating: SwinIR/testsets/classic5/boats.bmp  
          inflating: SwinIR/testsets/classic5/peppers.bmp  
           creating: SwinIR/testsets/Set12/
          inflating: SwinIR/testsets/Set12/06.png  
          inflating: SwinIR/testsets/Set12/04.png  
          inflating: SwinIR/testsets/Set12/03.png  
          inflating: SwinIR/testsets/Set12/02.png  
          inflating: SwinIR/testsets/Set12/11.png  
          inflating: SwinIR/testsets/Set12/07.png  
          inflating: SwinIR/testsets/Set12/09.png  
          inflating: SwinIR/testsets/Set12/10.png  
          inflating: SwinIR/testsets/Set12/12.png  
          inflating: SwinIR/testsets/Set12/05.png  
          inflating: SwinIR/testsets/Set12/08.png  
          inflating: SwinIR/testsets/Set12/01.png  
           creating: SwinIR/testsets/RealSRSet+5images/
          inflating: SwinIR/testsets/RealSRSet+5images/building.png  
          inflating: SwinIR/testsets/RealSRSet+5images/0030.jpg  
          inflating: SwinIR/testsets/RealSRSet+5images/painting.png  
          inflating: SwinIR/testsets/RealSRSet+5images/comic1.png  
          inflating: SwinIR/testsets/RealSRSet+5images/0014.jpg  
          inflating: SwinIR/testsets/RealSRSet+5images/OST_009.png  
          inflating: SwinIR/testsets/RealSRSet+5images/comic2.png  
          inflating: SwinIR/testsets/RealSRSet+5images/pattern.png  
          inflating: SwinIR/testsets/RealSRSet+5images/foreman.png  
          inflating: SwinIR/testsets/RealSRSet+5images/oldphoto3.png  
          inflating: SwinIR/testsets/RealSRSet+5images/oldphoto6.png  
          inflating: SwinIR/testsets/RealSRSet+5images/dog.png  
          inflating: SwinIR/testsets/RealSRSet+5images/Lincoln.png  
          inflating: SwinIR/testsets/RealSRSet+5images/oldphoto2.png  
          inflating: SwinIR/testsets/RealSRSet+5images/00003.png  
          inflating: SwinIR/testsets/RealSRSet+5images/frog.png  
          inflating: SwinIR/testsets/RealSRSet+5images/butterfly.png  
          inflating: SwinIR/testsets/RealSRSet+5images/ppt3.png  
          inflating: SwinIR/testsets/RealSRSet+5images/butterfly2.png  
          inflating: SwinIR/testsets/RealSRSet+5images/computer.png  
         extracting: SwinIR/testsets/RealSRSet+5images/chip.png  
          inflating: SwinIR/testsets/RealSRSet+5images/comic3.png  
          inflating: SwinIR/testsets/RealSRSet+5images/dped_crop00061.png  
          inflating: SwinIR/testsets/RealSRSet+5images/tiger.png  
          inflating: SwinIR/testsets/RealSRSet+5images/ADE_val_00000114.jpg  
           creating: SwinIR/testsets/Set5/
           creating: SwinIR/testsets/Set5/HR/
          inflating: SwinIR/testsets/Set5/HR/woman.png  
          inflating: SwinIR/testsets/Set5/HR/butterfly.png  
          inflating: SwinIR/testsets/Set5/HR/head.png  
          inflating: SwinIR/testsets/Set5/HR/baby.png  
          inflating: SwinIR/testsets/Set5/HR/bird.png  
           creating: SwinIR/testsets/Set5/LR_bicubic/
           creating: SwinIR/testsets/Set5/LR_bicubic/X8/
         extracting: SwinIR/testsets/Set5/LR_bicubic/X8/womanx8.png  
         extracting: SwinIR/testsets/Set5/LR_bicubic/X8/headx8.png  
         extracting: SwinIR/testsets/Set5/LR_bicubic/X8/birdx8.png  
         extracting: SwinIR/testsets/Set5/LR_bicubic/X8/babyx8.png  
         extracting: SwinIR/testsets/Set5/LR_bicubic/X8/butterflyx8.png  
           creating: SwinIR/testsets/Set5/LR_bicubic/X4/
         extracting: SwinIR/testsets/Set5/LR_bicubic/X4/birdx4.png  
         extracting: SwinIR/testsets/Set5/LR_bicubic/X4/butterflyx4.png  
         extracting: SwinIR/testsets/Set5/LR_bicubic/X4/babyx4.png  
         extracting: SwinIR/testsets/Set5/LR_bicubic/X4/womanx4.png  
         extracting: SwinIR/testsets/Set5/LR_bicubic/X4/headx4.png  
           creating: SwinIR/testsets/Set5/LR_bicubic/X3/
         extracting: SwinIR/testsets/Set5/LR_bicubic/X3/butterflyx3.png  
          inflating: SwinIR/testsets/Set5/LR_bicubic/X3/babyx3.png  
         extracting: SwinIR/testsets/Set5/LR_bicubic/X3/womanx3.png  
         extracting: SwinIR/testsets/Set5/LR_bicubic/X3/birdx3.png  
         extracting: SwinIR/testsets/Set5/LR_bicubic/X3/headx3.png  
           creating: SwinIR/testsets/Set5/LR_bicubic/X2/
          inflating: SwinIR/testsets/Set5/LR_bicubic/X2/womanx2.png  
          inflating: SwinIR/testsets/Set5/LR_bicubic/X2/headx2.png  
          inflating: SwinIR/testsets/Set5/LR_bicubic/X2/butterflyx2.png  
          inflating: SwinIR/testsets/Set5/LR_bicubic/X2/birdx2.png  
          inflating: SwinIR/testsets/Set5/LR_bicubic/X2/babyx2.png  
           creating: SwinIR/figs/
          inflating: SwinIR/figs/ETH_SwinIR-L.png  
          inflating: SwinIR/figs/OST_009_crop_realESRGAN.png  
          inflating: SwinIR/figs/ETH_realESRGAN.jpg  
          inflating: SwinIR/figs/SwinIR_archi.png  
          inflating: SwinIR/figs/OST_009_crop_LR.png  
          inflating: SwinIR/figs/jepg_compress_artfact_reduction.png  
          inflating: SwinIR/figs/classic_image_sr.png  
          inflating: SwinIR/figs/ETH_SwinIR.png  
          inflating: SwinIR/figs/OST_009_crop_SwinIR.png  
          inflating: SwinIR/figs/ETH_BSRGAN.png  
          inflating: SwinIR/figs/color_image_denoising.png  
          inflating: SwinIR/figs/classic_image_sr_visual.png  
          inflating: SwinIR/figs/lightweight_image_sr.png  
          inflating: SwinIR/figs/gray_image_denoising.png  
          inflating: SwinIR/figs/OST_009_crop_SwinIR-L.png  
          inflating: SwinIR/figs/OST_009_crop_BSRGAN.png  
          inflating: SwinIR/figs/real_world_image_sr.png  
          inflating: SwinIR/figs/ETH_LR.png  
           creating: SwinIR/model_zoo/
          inflating: SwinIR/model_zoo/README.md  
          inflating: SwinIR/predict.py       
      2. Run the appropriate inference tasks as needed. After the commands complete successfully, you can view the restored images in ./results/{task_name}.

        Click here to view the run results

        loading model from /mnt/workspace/SwinIR/pretrained_model/001_classicalSR_DF2K_s64w8_SwinIR-M_x4.pth
        /usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
          return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
        results//swinir_classical_sr_x4
        Testing 0 10                  
        Testing 1 2                   
        Testing 2 20                  
        Testing 3 34                  
        Testing 4 4                   
        Testing 5 40                  
        Testing 6 50                  
        Testing 7 54                  
        Testing 8 64                  
        Testing 9 70                  
        loading model from /mnt/workspace/SwinIR/pretrained_model/003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth
        /usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
          return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
        results//swinir_real_sr_x4
        Testing 0 10                  
        Testing 1 2                   
        Testing 2 20                  
        Testing 3 34                  
        Testing 4 4                   
        Testing 5 40                  
        Testing 6 50                  
        Testing 7 54                  
        Testing 8 64                  
        Testing 9 70                  
        loading model from /mnt/workspace/SwinIR/pretrained_model/004_grayDN_DFWB_s128w8_SwinIR-M_noise15.pth
        /usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
          return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
        results//swinir_gray_dn_noise15
        Testing 0 10                   - PSNR: 33.61 dB; SSIM: 0.9556; PSNRB: 0.00 dB;PSNR_Y: 0.00 dB; SSIM_Y: 0.0000; PSNRB_Y: 0.00 dB.
        Testing 1 2                    - PSNR: 33.54 dB; SSIM: 0.9048; PSNRB: 0.00 dB;PSNR_Y: 0.00 dB; SSIM_Y: 0.0000; PSNRB_Y: 0.00 dB.
        Testing 2 20                   - PSNR: 32.79 dB; SSIM: 0.9033; PSNRB: 0.00 dB;PSNR_Y: 0.00 dB; SSIM_Y: 0.0000; PSNRB_Y: 0.00 dB.
        Testing 3 34                   - PSNR: 32.80 dB; SSIM: 0.8973; PSNRB: 0.00 dB;PSNR_Y: 0.00 dB; SSIM_Y: 0.0000; PSNRB_Y: 0.00 dB.
        Testing 4 4                    - PSNR: 39.13 dB; SSIM: 0.9357; PSNRB: 0.00 dB;PSNR_Y: 0.00 dB; SSIM_Y: 0.0000; PSNRB_Y: 0.00 dB.
        Testing 5 40                   - PSNR: 31.12 dB; SSIM: 0.9500; PSNRB: 0.00 dB;PSNR_Y: 0.00 dB; SSIM_Y: 0.0000; PSNRB_Y: 0.00 dB.
        Testing 6 50                   - PSNR: 31.57 dB; SSIM: 0.9437; PSNRB: 0.00 dB;PSNR_Y: 0.00 dB; SSIM_Y: 0.0000; PSNRB_Y: 0.00 dB.
        Testing 7 54                   - PSNR: 36.47 dB; SSIM: 0.9115; PSNRB: 0.00 dB;PSNR_Y: 0.00 dB; SSIM_Y: 0.0000; PSNRB_Y: 0.00 dB.
        Testing 8 64                   - PSNR: 35.19 dB; SSIM: 0.9507; PSNRB: 0.00 dB;PSNR_Y: 0.00 dB; SSIM_Y: 0.0000; PSNRB_Y: 0.00 dB.
        Testing 9 70                   - PSNR: 31.88 dB; SSIM: 0.8856; PSNRB: 0.00 dB;PSNR_Y: 0.00 dB; SSIM_Y: 0.0000; PSNRB_Y: 0.00 dB.
        
        results//swinir_gray_dn_noise15 
        -- Average PSNR/SSIM(RGB): 33.81 dB; 0.9238
        loading model from /mnt/workspace/SwinIR/pretrained_model/005_colorDN_DFWB_s128w8_SwinIR-M_noise15.pth
        /usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
          return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
        results//swinir_color_dn_noise15
        Testing 0 10                   - PSNR: 36.38 dB; SSIM: 0.9705; PSNRB: 0.00 dB;PSNR_Y: 37.74 dB; SSIM_Y: 0.9749; PSNRB_Y: 0.00 dB.
        Testing 1 2                    - PSNR: 35.28 dB; SSIM: 0.9327; PSNRB: 0.00 dB;PSNR_Y: 37.29 dB; SSIM_Y: 0.9463; PSNRB_Y: 0.00 dB.
        Testing 2 20                   - PSNR: 35.06 dB; SSIM: 0.9315; PSNRB: 0.00 dB;PSNR_Y: 36.52 dB; SSIM_Y: 0.9432; PSNRB_Y: 0.00 dB.
        Testing 3 34                   - PSNR: 35.23 dB; SSIM: 0.9307; PSNRB: 0.00 dB;PSNR_Y: 36.58 dB; SSIM_Y: 0.9418; PSNRB_Y: 0.00 dB.
        Testing 4 4                    - PSNR: 39.07 dB; SSIM: 0.9320; PSNRB: 0.00 dB;PSNR_Y: 41.84 dB; SSIM_Y: 0.9568; PSNRB_Y: 0.00 dB.
        Testing 5 40                   - PSNR: 34.48 dB; SSIM: 0.9716; PSNRB: 0.00 dB;PSNR_Y: 35.83 dB; SSIM_Y: 0.9751; PSNRB_Y: 0.00 dB.
        Testing 6 50                   - PSNR: 34.92 dB; SSIM: 0.9648; PSNRB: 0.00 dB;PSNR_Y: 36.27 dB; SSIM_Y: 0.9702; PSNRB_Y: 0.00 dB.
        Testing 7 54                   - PSNR: 38.24 dB; SSIM: 0.9331; PSNRB: 0.00 dB;PSNR_Y: 39.60 dB; SSIM_Y: 0.9463; PSNRB_Y: 0.00 dB.
        Testing 8 64                   - PSNR: 37.77 dB; SSIM: 0.9678; PSNRB: 0.00 dB;PSNR_Y: 39.14 dB; SSIM_Y: 0.9733; PSNRB_Y: 0.00 dB.
        Testing 9 70                   - PSNR: 34.51 dB; SSIM: 0.9226; PSNRB: 0.00 dB;PSNR_Y: 35.85 dB; SSIM_Y: 0.9349; PSNRB_Y: 0.00 dB.
        
        results//swinir_color_dn_noise15 
        -- Average PSNR/SSIM(RGB): 36.10 dB; 0.9457
        -- Average PSNR_Y/SSIM_Y: 37.67 dB; 0.9563
        loading model from /mnt/workspace/SwinIR/pretrained_model/006_colorCAR_DFWB_s126w7_SwinIR-M_jpeg10.pth
        /usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
          return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
        results//swinir_color_jpeg_car_jpeg10
        Testing 0 10                  
        Testing 1 2                   
        Testing 2 20                  
        Testing 3 34                  
        Testing 4 4                   
        Testing 5 40                  
        Testing 6 50                  
        Testing 7 54                  
        Testing 8 64                  
        Testing 9 70                  
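        The PSNR figures in the log above (for example `PSNR: 33.61 dB`) measure how close a restored image is to its reference: for 8-bit images, PSNR = 10·log10(255² / MSE), so higher values mean a smaller pixel-wise error. A minimal sketch of the computation (the `psnr` helper and the synthetic test images are illustrative, not SwinIR's actual evaluation code, which also handles border cropping and Y-channel metrics):

        ```python
        import numpy as np

        def psnr(img1, img2, max_val=255.0):
            """Peak signal-to-noise ratio between two 8-bit-range images, in dB."""
            mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
            if mse == 0:
                return float("inf")  # identical images
            return 10.0 * np.log10(max_val ** 2 / mse)

        # Two flat images differing by a constant offset of 10 per pixel give
        # MSE = 100, so PSNR = 10 * log10(255^2 / 100) ≈ 28.13 dB.
        a = np.full((64, 64), 100, dtype=np.uint8)
        b = np.full((64, 64), 110, dtype=np.uint8)
        print(round(psnr(a, b), 2))  # 28.13
        ```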

      HAT

      1. Download the code and pretrained models. After downloading and extracting, you can view the algorithm's source code in the ./HAT folder.

        Click here to view the run results

        http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/hat.zip
        cn-hangzhou
        --2023-09-05 01:47:39--  http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/hat.zip
        Resolving pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.49, 100.118.28.45, 100.118.28.44, ...
        Connecting to pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.49|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 788324861 (752M) [application/zip]
        Saving to: ‘hat.zip’
        
        hat.zip             100%[===================>] 751.80M  12.1MB/s    in 68s     
        
        2023-09-05 01:48:47 (11.1 MB/s) - ‘hat.zip’ saved [788324861/788324861]
        
        Archive:  hat.zip
           creating: HAT/
           creating: HAT/.ipynb_checkpoints/
          inflating: HAT/.ipynb_checkpoints/predict-checkpoint.py  
          inflating: HAT/.ipynb_checkpoints/README-checkpoint.md  
          inflating: HAT/setup.cfg           
         extracting: HAT/requirements.txt    
           creating: HAT/pretrained_model/
          inflating: HAT/pretrained_model/HAT-L_SRx4_ImageNet-pretrain.pth  
          inflating: HAT/pretrained_model/HAT-L_SRx2_ImageNet-pretrain.pth  
          inflating: HAT/pretrained_model/Real_HAT_GAN_sharper.pth  
          inflating: HAT/pretrained_model/Real_HAT_GAN_SRx4.pth  
          inflating: HAT/.gitignore          
           creating: HAT/datasets/
          inflating: HAT/datasets/README.md  
           creating: HAT/results/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075007/
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075007/test_HAT_GAN_Real_SRx4_20230811_074626.log  
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075007/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075007/visualization/custom/
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075007/visualization/custom/2_HAT_GAN_Real_SRx4.png  
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075650/
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075650/test_HAT_GAN_Real_SRx4_20230811_075049.log  
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075650/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075650/visualization/custom/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075650/visualization/custom/.ipynb_checkpoints/
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075650/visualization/custom/.ipynb_checkpoints/IMG_1452_HAT_GAN_Real_SRx4-checkpoint.png  
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075650/visualization/custom/.ipynb_checkpoints/2_HAT_GAN_Real_SRx4-checkpoint.png  
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075650/visualization/custom/IMG_1452_HAT_GAN_Real_SRx4.png  
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075650/visualization/custom/2_HAT_GAN_Real_SRx4.png  
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081135/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081135/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_074626/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_074626/visualization/
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_074626/test_HAT_GAN_Real_SRx4_20230811_074612.log  
           creating: HAT/results/HAT_GAN_Real_SRx4/
           creating: HAT/results/HAT_GAN_Real_SRx4/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4/visualization/custom/
          inflating: HAT/results/HAT_GAN_Real_SRx4/visualization/custom/IMG_1452_HAT_GAN_Real_SRx4.png  
          inflating: HAT/results/HAT_GAN_Real_SRx4/visualization/custom/2_HAT_GAN_Real_SRx4.png  
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081052/
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081052/test_HAT_GAN_Real_SRx4_20230811_075714.log  
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081052/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_074612/
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_074612/test_HAT_GAN_Real_SRx4_20230811_074331.log  
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_074612/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081900/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081900/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081900/visualization/custom/
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081900/visualization/custom/IMG_1452_HAT_GAN_Real_SRx4.png  
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081900/visualization/custom/2_HAT_GAN_Real_SRx4.png  
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_082002/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_082002/visualization/
         extracting: HAT/results/README.md   
           creating: HAT/results/HAT_SRx4_ImageNet-pretrain/
          inflating: HAT/results/HAT_SRx4_ImageNet-pretrain/test_HAT_SRx4_ImageNet-pretrain_20230811_074251.log  
           creating: HAT/results/HAT_SRx4_ImageNet-pretrain/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_082148/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_082148/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_082240/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_082240/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081942/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081942/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075049/
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075049/test_HAT_GAN_Real_SRx4_20230811_075007.log  
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075049/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075049/visualization/custom/
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075049/visualization/custom/2_HAT_GAN_Real_SRx4.png  
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081209/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_081209/visualization/
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075714/
          inflating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075714/test_HAT_GAN_Real_SRx4_20230811_075650.log  
           creating: HAT/results/HAT_GAN_Real_SRx4_archived_20230811_075714/visualization/
           creating: HAT/experiments/
           creating: HAT/experiments/pretrained_models/
         extracting: HAT/experiments/pretrained_models/README.md  
          inflating: HAT/cog.yaml            
          inflating: HAT/LICENSE             
           creating: HAT/hat/
           creating: HAT/hat/.ipynb_checkpoints/
          inflating: HAT/hat/.ipynb_checkpoints/__init__-checkpoint.py  
          inflating: HAT/hat/.ipynb_checkpoints/test-checkpoint.py  
           creating: HAT/hat/models/
           creating: HAT/hat/models/.ipynb_checkpoints/
          inflating: HAT/hat/models/.ipynb_checkpoints/__init__-checkpoint.py  
          inflating: HAT/hat/models/.ipynb_checkpoints/hat_model-checkpoint.py  
          inflating: HAT/hat/models/realhatgan_model.py  
           creating: HAT/hat/models/__pycache__/
          inflating: HAT/hat/models/__pycache__/__init__.cpython-310.pyc  
          inflating: HAT/hat/models/__pycache__/hat_model.cpython-310.pyc  
          inflating: HAT/hat/models/__pycache__/realhatmse_model.cpython-310.pyc  
          inflating: HAT/hat/models/__pycache__/realhatgan_model.cpython-310.pyc  
          inflating: HAT/hat/models/hat_model.py  
          inflating: HAT/hat/models/realhatmse_model.py  
          inflating: HAT/hat/models/__init__.py  
          inflating: HAT/hat/train.py        
           creating: HAT/hat/results/
           creating: HAT/hat/results/HAT_GAN_Real_SRx4/
          inflating: HAT/hat/results/HAT_GAN_Real_SRx4/test_HAT_GAN_Real_SRx4_20230811_075405.log  
           creating: HAT/hat/results/HAT_GAN_Real_SRx4/visualization/
          inflating: HAT/hat/test.py         
           creating: HAT/hat/archs/
           creating: HAT/hat/archs/.ipynb_checkpoints/
          inflating: HAT/hat/archs/.ipynb_checkpoints/__init__-checkpoint.py  
           creating: HAT/hat/archs/__pycache__/
          inflating: HAT/hat/archs/__pycache__/__init__.cpython-310.pyc  
          inflating: HAT/hat/archs/__pycache__/hat_arch.cpython-310.pyc  
          inflating: HAT/hat/archs/hat_arch.py  
          inflating: HAT/hat/archs/__init__.py  
           creating: HAT/hat/data/
           creating: HAT/hat/data/meta_info/
          inflating: HAT/hat/data/meta_info/meta_info_DF2Ksub_GT.txt  
           creating: HAT/hat/data/.ipynb_checkpoints/
          inflating: HAT/hat/data/.ipynb_checkpoints/__init__-checkpoint.py  
          inflating: HAT/hat/data/.ipynb_checkpoints/imagenet_paired_dataset-checkpoint.py  
           creating: HAT/hat/data/__pycache__/
          inflating: HAT/hat/data/__pycache__/__init__.cpython-310.pyc  
          inflating: HAT/hat/data/__pycache__/imagenet_paired_dataset.cpython-310.pyc  
          inflating: HAT/hat/data/__pycache__/realesrgan_dataset.cpython-310.pyc  
          inflating: HAT/hat/data/realesrgan_dataset.py  
          inflating: HAT/hat/data/imagenet_paired_dataset.py  
          inflating: HAT/hat/data/__init__.py  
          inflating: HAT/hat/__init__.py     
           creating: HAT/.git/
           creating: HAT/.git/logs/
           creating: HAT/.git/logs/refs/
           creating: HAT/.git/logs/refs/remotes/
           creating: HAT/.git/logs/refs/remotes/origin/
          inflating: HAT/.git/logs/refs/remotes/origin/HEAD  
           creating: HAT/.git/logs/refs/heads/
          inflating: HAT/.git/logs/refs/heads/main  
          inflating: HAT/.git/logs/HEAD      
          inflating: HAT/.git/config         
           creating: HAT/.git/refs/
           creating: HAT/.git/refs/remotes/
           creating: HAT/.git/refs/remotes/origin/
         extracting: HAT/.git/refs/remotes/origin/HEAD  
           creating: HAT/.git/refs/heads/
         extracting: HAT/.git/refs/heads/main  
           creating: HAT/.git/refs/tags/
         extracting: HAT/.git/HEAD           
           creating: HAT/.git/hooks/
          inflating: HAT/.git/hooks/push-to-checkout.sample  
          inflating: HAT/.git/hooks/commit-msg.sample  
          inflating: HAT/.git/hooks/applypatch-msg.sample  
          inflating: HAT/.git/hooks/pre-receive.sample  
          inflating: HAT/.git/hooks/pre-push.sample  
          inflating: HAT/.git/hooks/fsmonitor-watchman.sample  
          inflating: HAT/.git/hooks/post-update.sample  
          inflating: HAT/.git/hooks/update.sample  
          inflating: HAT/.git/hooks/pre-merge-commit.sample  
          inflating: HAT/.git/hooks/pre-commit.sample  
          inflating: HAT/.git/hooks/pre-applypatch.sample  
          inflating: HAT/.git/hooks/prepare-commit-msg.sample  
          inflating: HAT/.git/hooks/pre-rebase.sample  
           creating: HAT/.git/info/
          inflating: HAT/.git/info/exclude   
           creating: HAT/.git/objects/
           creating: HAT/.git/objects/pack/
          inflating: HAT/.git/objects/pack/pack-bf02a359cd3a677a5831135490add5f47b36243f.idx  
          inflating: HAT/.git/objects/pack/pack-bf02a359cd3a677a5831135490add5f47b36243f.pack  
           creating: HAT/.git/objects/info/
          inflating: HAT/.git/packed-refs    
          inflating: HAT/.git/index          
          inflating: HAT/.git/description    
          inflating: HAT/README.md           
           creating: HAT/figures/
           creating: HAT/figures/.ipynb_checkpoints/
          inflating: HAT/figures/.ipynb_checkpoints/Comparison-checkpoint.png  
          inflating: HAT/figures/Comparison.png  
          inflating: HAT/figures/Performance_comparison.png  
          inflating: HAT/figures/Visual_Results.png  
         extracting: HAT/VERSION             
          inflating: HAT/setup.py            
          inflating: HAT/predict.py          
           creating: HAT/options/
           creating: HAT/options/train/
          inflating: HAT/options/train/train_HAT-L_SRx4_finetune_from_ImageNet_pretrain.yml  
          inflating: HAT/options/train/train_HAT-L_SRx2_ImageNet_from_scratch.yml  
          inflating: HAT/options/train/train_HAT-L_SRx2_finetune_from_ImageNet_pretrain.yml  
          inflating: HAT/options/train/train_Real_HAT_GAN_SRx4_finetune_from_mse_model.yml  
          inflating: HAT/options/train/train_HAT_SRx3_ImageNet_from_scratch.yml  
          inflating: HAT/options/train/train_HAT_SRx4_finetune_from_ImageNet_pretrain.yml  
          inflating: HAT/options/train/train_HAT_SRx2_finetune_from_ImageNet_pretrain.yml  
          inflating: HAT/options/train/train_HAT-L_SRx3_finetune_from_ImageNet_pretrain.yml  
          inflating: HAT/options/train/train_HAT_SRx4_finetune_from_SRx2.yml  
          inflating: HAT/options/train/train_HAT-S_SRx3_from_scratch.yml  
          inflating: HAT/options/train/train_HAT-S_SRx2_from_scratch.yml  
          inflating: HAT/options/train/train_HAT_SRx2_from_scratch.yml  
          inflating: HAT/options/train/train_HAT_SRx3_from_scratch.yml  
          inflating: HAT/options/train/train_HAT-L_SRx3_ImageNet_from_scratch.yml  
          inflating: HAT/options/train/train_Real_HAT_SRx4_mse_model.yml  
          inflating: HAT/options/train/train_HAT-L_SRx4_ImageNet_from_scratch.yml  
          inflating: HAT/options/train/train_HAT_SRx4_ImageNet_from_scratch.yml  
          inflating: HAT/options/train/train_HAT-S_SRx4_finetune_from_SRx2.yml  
          inflating: HAT/options/train/train_HAT_SRx2_ImageNet_from_scratch.yml  
          inflating: HAT/options/train/train_HAT_SRx3_finetune_from_ImageNet_pretrain.yml  
           creating: HAT/options/test/
          inflating: HAT/options/test/HAT_SRx4_ImageNet-pretrain.yml  
           creating: HAT/options/test/.ipynb_checkpoints/
          inflating: HAT/options/test/.ipynb_checkpoints/HAT_SRx2_ImageNet-pretrain-checkpoint.yml  
          inflating: HAT/options/test/.ipynb_checkpoints/HAT-L_SRx4_ImageNet-pretrain-checkpoint.yml  
          inflating: HAT/options/test/.ipynb_checkpoints/HAT-L_SRx2_ImageNet-pretrain-checkpoint.yml  
          inflating: HAT/options/test/.ipynb_checkpoints/HAT_GAN_Real_SRx4-checkpoint.yml  
          inflating: HAT/options/test/HAT_SRx2_ImageNet-pretrain.yml  
          inflating: HAT/options/test/HAT_GAN_Real_SRx4.yml  
          inflating: HAT/options/test/HAT-S_SRx2.yml  
          inflating: HAT/options/test/HAT_SRx4_ImageNet-LR.yml  
          inflating: HAT/options/test/HAT-L_SRx4_ImageNet-pretrain.yml  
          inflating: HAT/options/test/HAT-S_SRx3.yml  
          inflating: HAT/options/test/HAT_tile_example.yml  
          inflating: HAT/options/test/HAT_SRx3.yml  
          inflating: HAT/options/test/HAT_SRx3_ImageNet-pretrain.yml  
          inflating: HAT/options/test/HAT-S_SRx4.yml  
          inflating: HAT/options/test/HAT-L_SRx3_ImageNet-pretrain.yml  
          inflating: HAT/options/test/HAT_SRx4.yml  
          inflating: HAT/options/test/HAT-L_SRx2_ImageNet-pretrain.yml  
          inflating: HAT/options/test/HAT_SRx2.yml  

      2. 根据需要选择并运行合适的推理任务。命令执行成功后,您可以在./results/{task_name}目录中查看修复后的图像结果。

        单击此处查看运行结果

        /mnt/workspace/HAT
        /usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
          return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        	Tile 1/16
        	Tile 2/16
        	Tile 3/16
        	Tile 4/16
        	Tile 5/16
        	Tile 6/16
        	Tile 7/16
        	Tile 8/16
        	Tile 9/16
        	Tile 10/16
        	Tile 11/16
        	Tile 12/16
        	Tile 13/16
        	Tile 14/16
        	Tile 15/16
        	Tile 16/16
        	Tile 1/16
        	Tile 2/16
        	Tile 3/16
        	Tile 4/16
        	Tile 5/16
        	Tile 6/16
        	Tile 7/16
        	Tile 8/16
        	Tile 9/16
        	Tile 10/16
        	Tile 11/16
        	Tile 12/16
        	Tile 13/16
        	Tile 14/16
        	Tile 15/16
        	Tile 16/16
        	Tile 1/16
        	Tile 2/16
        	Tile 3/16
        	Tile 4/16
        	Tile 5/16
        	Tile 6/16
        	Tile 7/16
        	Tile 8/16
        	Tile 9/16
        	Tile 10/16
        	Tile 11/16
        	Tile 12/16
        	Tile 13/16
        	Tile 14/16
        	Tile 15/16
        	Tile 16/16
        	Tile 1/12
        	Tile 2/12
        	Tile 3/12
        	Tile 4/12
        	Tile 5/12
        	Tile 6/12
        	Tile 7/12
        	Tile 8/12
        	Tile 9/12
        	Tile 10/12
        	Tile 11/12
        	Tile 12/12
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        	Tile 1/16
        	Tile 2/16
        	Tile 3/16
        	Tile 4/16
        	Tile 5/16
        	Tile 6/16
        	Tile 7/16
        	Tile 8/16
        	Tile 9/16
        	Tile 10/16
        	Tile 11/16
        	Tile 12/16
        	Tile 13/16
        	Tile 14/16
        	Tile 15/16
        	Tile 16/16
        	Tile 1/4
        	Tile 2/4
        	Tile 3/4
        	Tile 4/4
        	Tile 1/16
        	Tile 2/16
        	Tile 3/16
        	Tile 4/16
        	Tile 5/16
        	Tile 6/16
        	Tile 7/16
        	Tile 8/16
        	Tile 9/16
        	Tile 10/16
        	Tile 11/16
        	Tile 12/16
        	Tile 13/16
        	Tile 14/16
        	Tile 15/16
        	Tile 16/16
        (其余几个推理任务的输出与上文相同,此处从略)
        
        
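日志中的Tile 1/4、Tile 1/16等表示推理时将大图切分成小块(tile)逐块处理,以降低显存占用,分块数量由图像尺寸和tile大小决定。下面是一个示意性的分块数量估算(假设按tile_size不重叠切分;实际HAT推理还会在tile之间加入tile_pad重叠区域,此处为简化版本):

```python
import math

def num_tiles(height, width, tile_size):
    """估算将 height x width 的图像按 tile_size 切分后的分块数量。

    示意计算:对高、宽分别向上取整后相乘;
    实际 HAT 的 tile 推理还会考虑 tile_pad 重叠,此处从简。
    """
    return math.ceil(height / tile_size) * math.ceil(width / tile_size)

# 以 tile_size=512 为例:
print(num_tiles(1024, 1024, 512))  # 4,对应日志中的 Tile x/4
print(num_tiles(2048, 2048, 512))  # 16,对应日志中的 Tile x/16
print(num_tiles(1536, 2048, 512))  # 12,对应日志中的 Tile x/12
```

因此,分辨率越高的输入照片会被切分成越多的tile,推理耗时也相应增加;如果显存充足,可以在options/test下对应的YAML配置中调大tile尺寸以减少分块数。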
    3. 面部增强,即检测并修复老照片中的人脸。

      1. 下载CodeFormer的代码和预训练模型。下载并解压完成后,您可以在./CodeFormer文件夹中查看该算法的源代码。

        单击此处查看运行结果

        http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/codeformer.zip
        cn-hangzhou
        --2023-09-05 02:21:39--  http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/codeformer.zip
        Resolving pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.44, 100.118.28.50, 100.118.28.49, ...
        Connecting to pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.44|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 627352702 (598M) [application/zip]
        Saving to: ‘codeformer.zip’
        
        codeformer.zip      100%[===================>] 598.29M  12.1MB/s    in 55s     
        
        2023-09-05 02:22:34 (10.9 MB/s) - ‘codeformer.zip’ saved [627352702/627352702]
        
        Archive:  codeformer.zip
           creating: CodeFormer/
          inflating: CodeFormer/requirements.txt  
           creating: CodeFormer/assets/
          inflating: CodeFormer/assets/network.jpg  
          inflating: CodeFormer/assets/inpainting_result1.png  
          inflating: CodeFormer/assets/inpainting_result2.png  
          inflating: CodeFormer/assets/imgsli_2.jpg  
          inflating: CodeFormer/assets/imgsli_1.jpg  
          inflating: CodeFormer/assets/restoration_result2.png  
          inflating: CodeFormer/assets/restoration_result4.png  
          inflating: CodeFormer/assets/color_enhancement_result1.png  
          inflating: CodeFormer/assets/imgsli_3.jpg  
          inflating: CodeFormer/assets/CodeFormer_logo.png  
          inflating: CodeFormer/assets/color_enhancement_result2.png  
          inflating: CodeFormer/assets/restoration_result1.png  
          inflating: CodeFormer/assets/restoration_result3.png  
          inflating: CodeFormer/.gitignore   
          inflating: CodeFormer/demo.py      
          inflating: CodeFormer/LICENSE      
           creating: CodeFormer/basicsr/
           creating: CodeFormer/basicsr/.ipynb_checkpoints/
          inflating: CodeFormer/basicsr/.ipynb_checkpoints/__init__-checkpoint.py  
           creating: CodeFormer/basicsr/models/
          inflating: CodeFormer/basicsr/models/codeformer_joint_model.py  
           creating: CodeFormer/basicsr/models/__pycache__/
          inflating: CodeFormer/basicsr/models/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/basicsr/models/__pycache__/codeformer_joint_model.cpython-310.pyc  
          inflating: CodeFormer/basicsr/models/__pycache__/codeformer_model.cpython-310.pyc  
          inflating: CodeFormer/basicsr/models/__pycache__/sr_model.cpython-310.pyc  
          inflating: CodeFormer/basicsr/models/__pycache__/vqgan_model.cpython-310.pyc  
          inflating: CodeFormer/basicsr/models/__pycache__/base_model.cpython-310.pyc  
          inflating: CodeFormer/basicsr/models/__pycache__/codeformer_idx_model.cpython-310.pyc  
          inflating: CodeFormer/basicsr/models/__pycache__/lr_scheduler.cpython-310.pyc  
          inflating: CodeFormer/basicsr/models/codeformer_model.py  
          inflating: CodeFormer/basicsr/models/sr_model.py  
          inflating: CodeFormer/basicsr/models/codeformer_idx_model.py  
          inflating: CodeFormer/basicsr/models/base_model.py  
          inflating: CodeFormer/basicsr/models/lr_scheduler.py  
          inflating: CodeFormer/basicsr/models/__init__.py  
          inflating: CodeFormer/basicsr/models/vqgan_model.py  
          inflating: CodeFormer/basicsr/train.py  
           creating: CodeFormer/basicsr/__pycache__/
          inflating: CodeFormer/basicsr/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/basicsr/__pycache__/train.cpython-310.pyc  
           creating: CodeFormer/basicsr/archs/
           creating: CodeFormer/basicsr/archs/__pycache__/
          inflating: CodeFormer/basicsr/archs/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/basicsr/archs/__pycache__/rrdbnet_arch.cpython-310.pyc  
          inflating: CodeFormer/basicsr/archs/__pycache__/vgg_arch.cpython-310.pyc  
          inflating: CodeFormer/basicsr/archs/__pycache__/vqgan_arch.cpython-310.pyc  
          inflating: CodeFormer/basicsr/archs/__pycache__/codeformer_arch.cpython-310.pyc  
          inflating: CodeFormer/basicsr/archs/__pycache__/arcface_arch.cpython-310.pyc  
          inflating: CodeFormer/basicsr/archs/__pycache__/arch_util.cpython-310.pyc  
          inflating: CodeFormer/basicsr/archs/arcface_arch.py  
          inflating: CodeFormer/basicsr/archs/vgg_arch.py  
          inflating: CodeFormer/basicsr/archs/arch_util.py  
          inflating: CodeFormer/basicsr/archs/vqgan_arch.py  
          inflating: CodeFormer/basicsr/archs/__init__.py  
          inflating: CodeFormer/basicsr/archs/codeformer_arch.py  
          inflating: CodeFormer/basicsr/archs/rrdbnet_arch.py  
           creating: CodeFormer/basicsr/ops/
           creating: CodeFormer/basicsr/ops/__pycache__/
          inflating: CodeFormer/basicsr/ops/__pycache__/__init__.cpython-310.pyc  
           creating: CodeFormer/basicsr/ops/fused_act/
          inflating: CodeFormer/basicsr/ops/fused_act/fused_act.py  
           creating: CodeFormer/basicsr/ops/fused_act/src/
          inflating: CodeFormer/basicsr/ops/fused_act/src/fused_bias_act_kernel.cu  
          inflating: CodeFormer/basicsr/ops/fused_act/src/fused_bias_act.cpp  
          inflating: CodeFormer/basicsr/ops/fused_act/__init__.py  
           creating: CodeFormer/basicsr/ops/dcn/
           creating: CodeFormer/basicsr/ops/dcn/__pycache__/
          inflating: CodeFormer/basicsr/ops/dcn/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/basicsr/ops/dcn/__pycache__/deform_conv.cpython-310.pyc  
          inflating: CodeFormer/basicsr/ops/dcn/deform_conv.py  
           creating: CodeFormer/basicsr/ops/dcn/src/
          inflating: CodeFormer/basicsr/ops/dcn/src/deform_conv_cuda_kernel.cu  
          inflating: CodeFormer/basicsr/ops/dcn/src/deform_conv_cuda.cpp  
          inflating: CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp  
          inflating: CodeFormer/basicsr/ops/dcn/__init__.py  
           creating: CodeFormer/basicsr/ops/upfirdn2d/
          inflating: CodeFormer/basicsr/ops/upfirdn2d/upfirdn2d.py  
           creating: CodeFormer/basicsr/ops/upfirdn2d/src/
          inflating: CodeFormer/basicsr/ops/upfirdn2d/src/upfirdn2d_kernel.cu  
          inflating: CodeFormer/basicsr/ops/upfirdn2d/src/upfirdn2d.cpp  
          inflating: CodeFormer/basicsr/ops/upfirdn2d/__init__.py  
         extracting: CodeFormer/basicsr/ops/__init__.py  
           creating: CodeFormer/basicsr/utils/
           creating: CodeFormer/basicsr/utils/__pycache__/
          inflating: CodeFormer/basicsr/utils/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/basicsr/utils/__pycache__/dist_util.cpython-310.pyc  
          inflating: CodeFormer/basicsr/utils/__pycache__/img_util.cpython-310.pyc  
          inflating: CodeFormer/basicsr/utils/__pycache__/download_util.cpython-310.pyc  
          inflating: CodeFormer/basicsr/utils/__pycache__/logger.cpython-310.pyc  
          inflating: CodeFormer/basicsr/utils/__pycache__/registry.cpython-310.pyc  
          inflating: CodeFormer/basicsr/utils/__pycache__/options.cpython-310.pyc  
          inflating: CodeFormer/basicsr/utils/__pycache__/file_client.cpython-310.pyc  
          inflating: CodeFormer/basicsr/utils/__pycache__/misc.cpython-310.pyc  
          inflating: CodeFormer/basicsr/utils/__pycache__/matlab_functions.cpython-310.pyc  
          inflating: CodeFormer/basicsr/utils/__pycache__/realesrgan_utils.cpython-310.pyc  
          inflating: CodeFormer/basicsr/utils/file_client.py  
          inflating: CodeFormer/basicsr/utils/logger.py  
          inflating: CodeFormer/basicsr/utils/options.py  
          inflating: CodeFormer/basicsr/utils/video_util.py  
          inflating: CodeFormer/basicsr/utils/img_util.py  
          inflating: CodeFormer/basicsr/utils/matlab_functions.py  
          inflating: CodeFormer/basicsr/utils/download_util.py  
          inflating: CodeFormer/basicsr/utils/__init__.py  
          inflating: CodeFormer/basicsr/utils/realesrgan_utils.py  
          inflating: CodeFormer/basicsr/utils/misc.py  
          inflating: CodeFormer/basicsr/utils/dist_util.py  
          inflating: CodeFormer/basicsr/utils/lmdb_util.py  
          inflating: CodeFormer/basicsr/utils/registry.py  
           creating: CodeFormer/basicsr/metrics/
           creating: CodeFormer/basicsr/metrics/__pycache__/
          inflating: CodeFormer/basicsr/metrics/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/basicsr/metrics/__pycache__/psnr_ssim.cpython-310.pyc  
          inflating: CodeFormer/basicsr/metrics/__pycache__/metric_util.cpython-310.pyc  
          inflating: CodeFormer/basicsr/metrics/metric_util.py  
          inflating: CodeFormer/basicsr/metrics/psnr_ssim.py  
          inflating: CodeFormer/basicsr/metrics/__init__.py  
         extracting: CodeFormer/basicsr/VERSION  
          inflating: CodeFormer/basicsr/setup.py  
           creating: CodeFormer/basicsr/data/
          inflating: CodeFormer/basicsr/data/paired_image_dataset.py  
          inflating: CodeFormer/basicsr/data/gaussian_kernels.py  
          inflating: CodeFormer/basicsr/data/data_util.py  
           creating: CodeFormer/basicsr/data/__pycache__/
          inflating: CodeFormer/basicsr/data/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/basicsr/data/__pycache__/prefetch_dataloader.cpython-310.pyc  
          inflating: CodeFormer/basicsr/data/__pycache__/gaussian_kernels.cpython-310.pyc  
          inflating: CodeFormer/basicsr/data/__pycache__/ffhq_blind_dataset.cpython-310.pyc  
          inflating: CodeFormer/basicsr/data/__pycache__/paired_image_dataset.cpython-310.pyc  
          inflating: CodeFormer/basicsr/data/__pycache__/data_util.cpython-310.pyc  
          inflating: CodeFormer/basicsr/data/__pycache__/ffhq_blind_joint_dataset.cpython-310.pyc  
          inflating: CodeFormer/basicsr/data/__pycache__/transforms.cpython-310.pyc  
          inflating: CodeFormer/basicsr/data/__pycache__/data_sampler.cpython-310.pyc  
          inflating: CodeFormer/basicsr/data/__init__.py  
          inflating: CodeFormer/basicsr/data/ffhq_blind_dataset.py  
          inflating: CodeFormer/basicsr/data/ffhq_blind_joint_dataset.py  
          inflating: CodeFormer/basicsr/data/prefetch_dataloader.py  
          inflating: CodeFormer/basicsr/data/transforms.py  
          inflating: CodeFormer/basicsr/data/data_sampler.py  
           creating: CodeFormer/basicsr/losses/
           creating: CodeFormer/basicsr/losses/__pycache__/
          inflating: CodeFormer/basicsr/losses/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/basicsr/losses/__pycache__/losses.cpython-310.pyc  
          inflating: CodeFormer/basicsr/losses/__pycache__/loss_util.cpython-310.pyc  
          inflating: CodeFormer/basicsr/losses/loss_util.py  
          inflating: CodeFormer/basicsr/losses/__init__.py  
          inflating: CodeFormer/basicsr/losses/losses.py  
          inflating: CodeFormer/basicsr/__init__.py  
           creating: CodeFormer/facelib/
           creating: CodeFormer/facelib/parsing/
          inflating: CodeFormer/facelib/parsing/resnet.py  
           creating: CodeFormer/facelib/parsing/__pycache__/
          inflating: CodeFormer/facelib/parsing/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/facelib/parsing/__pycache__/bisenet.cpython-310.pyc  
          inflating: CodeFormer/facelib/parsing/__pycache__/parsenet.cpython-310.pyc  
          inflating: CodeFormer/facelib/parsing/__pycache__/resnet.cpython-310.pyc  
          inflating: CodeFormer/facelib/parsing/__init__.py  
          inflating: CodeFormer/facelib/parsing/parsenet.py  
          inflating: CodeFormer/facelib/parsing/bisenet.py  
           creating: CodeFormer/facelib/utils/
           creating: CodeFormer/facelib/utils/__pycache__/
          inflating: CodeFormer/facelib/utils/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/facelib/utils/__pycache__/misc.cpython-310.pyc  
          inflating: CodeFormer/facelib/utils/__pycache__/face_restoration_helper.cpython-310.pyc  
          inflating: CodeFormer/facelib/utils/__pycache__/face_utils.cpython-310.pyc  
          inflating: CodeFormer/facelib/utils/face_utils.py  
          inflating: CodeFormer/facelib/utils/__init__.py  
          inflating: CodeFormer/facelib/utils/misc.py  
          inflating: CodeFormer/facelib/utils/face_restoration_helper.py  
           creating: CodeFormer/facelib/detection/
           creating: CodeFormer/facelib/detection/__pycache__/
          inflating: CodeFormer/facelib/detection/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/__pycache__/matlab_cp2tform.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/__pycache__/align_trans.cpython-310.pyc  
           creating: CodeFormer/facelib/detection/retinaface/
          inflating: CodeFormer/facelib/detection/retinaface/retinaface_net.py  
           creating: CodeFormer/facelib/detection/retinaface/__pycache__/
          inflating: CodeFormer/facelib/detection/retinaface/__pycache__/retinaface_utils.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/retinaface/__pycache__/retinaface.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/retinaface/__pycache__/retinaface_net.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/retinaface/retinaface_utils.py  
          inflating: CodeFormer/facelib/detection/retinaface/retinaface.py  
          inflating: CodeFormer/facelib/detection/align_trans.py  
          inflating: CodeFormer/facelib/detection/__init__.py  
          inflating: CodeFormer/facelib/detection/matlab_cp2tform.py  
           creating: CodeFormer/facelib/detection/yolov5face/
           creating: CodeFormer/facelib/detection/yolov5face/models/
           creating: CodeFormer/facelib/detection/yolov5face/models/__pycache__/
          inflating: CodeFormer/facelib/detection/yolov5face/models/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/yolov5face/models/__pycache__/yolo.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/yolov5face/models/__pycache__/common.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/yolov5face/models/__pycache__/experimental.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/yolov5face/models/yolov5l.yaml  
          inflating: CodeFormer/facelib/detection/yolov5face/models/experimental.py  
          inflating: CodeFormer/facelib/detection/yolov5face/models/yolov5n.yaml  
          inflating: CodeFormer/facelib/detection/yolov5face/models/yolo.py  
          inflating: CodeFormer/facelib/detection/yolov5face/models/common.py  
         extracting: CodeFormer/facelib/detection/yolov5face/models/__init__.py  
           creating: CodeFormer/facelib/detection/yolov5face/__pycache__/
          inflating: CodeFormer/facelib/detection/yolov5face/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/yolov5face/__pycache__/face_detector.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/yolov5face/face_detector.py  
           creating: CodeFormer/facelib/detection/yolov5face/utils/
          inflating: CodeFormer/facelib/detection/yolov5face/utils/torch_utils.py  
           creating: CodeFormer/facelib/detection/yolov5face/utils/__pycache__/
          inflating: CodeFormer/facelib/detection/yolov5face/utils/__pycache__/__init__.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/yolov5face/utils/__pycache__/autoanchor.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/yolov5face/utils/__pycache__/torch_utils.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/yolov5face/utils/__pycache__/general.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/yolov5face/utils/__pycache__/datasets.cpython-310.pyc  
          inflating: CodeFormer/facelib/detection/yolov5face/utils/datasets.py  
          inflating: CodeFormer/facelib/detection/yolov5face/utils/autoanchor.py  
          inflating: CodeFormer/facelib/detection/yolov5face/utils/extract_ckpt.py  
         extracting: CodeFormer/facelib/detection/yolov5face/utils/__init__.py  
          inflating: CodeFormer/facelib/detection/yolov5face/utils/general.py  
         extracting: CodeFormer/facelib/detection/yolov5face/__init__.py  
           creating: CodeFormer/inputs/
           creating: CodeFormer/inputs/cropped_faces/
          inflating: CodeFormer/inputs/cropped_faces/0934.png  
          inflating: CodeFormer/inputs/cropped_faces/Solvay_conference_1927_2_16.png  
          inflating: CodeFormer/inputs/cropped_faces/0770.png  
          inflating: CodeFormer/inputs/cropped_faces/0342.png  
          inflating: CodeFormer/inputs/cropped_faces/0500.png  
          inflating: CodeFormer/inputs/cropped_faces/0368.png  
          inflating: CodeFormer/inputs/cropped_faces/Solvay_conference_1927_0018.png  
          inflating: CodeFormer/inputs/cropped_faces/0345.png  
          inflating: CodeFormer/inputs/cropped_faces/0720.png  
          inflating: CodeFormer/inputs/cropped_faces/0444.png  
          inflating: CodeFormer/inputs/cropped_faces/0599.png  
          inflating: CodeFormer/inputs/cropped_faces/0478.png  
          inflating: CodeFormer/inputs/cropped_faces/0763.png  
          inflating: CodeFormer/inputs/cropped_faces/0240.png  
          inflating: CodeFormer/inputs/cropped_faces/0729.png  
          inflating: CodeFormer/inputs/cropped_faces/0717.png  
          inflating: CodeFormer/inputs/cropped_faces/0412.png  
          inflating: CodeFormer/inputs/cropped_faces/0777.png  
          inflating: CodeFormer/inputs/cropped_faces/0143.png  
          inflating: CodeFormer/inputs/cropped_faces/0885.png  
           creating: CodeFormer/inputs/masked_faces/
          inflating: CodeFormer/inputs/masked_faces/00664.png  
          inflating: CodeFormer/inputs/masked_faces/00588.png  
          inflating: CodeFormer/inputs/masked_faces/00108.png  
          inflating: CodeFormer/inputs/masked_faces/00169.png  
          inflating: CodeFormer/inputs/masked_faces/00105.png  
           creating: CodeFormer/inputs/gray_faces/
          inflating: CodeFormer/inputs/gray_faces/169_John_Lennon_00.png  
          inflating: CodeFormer/inputs/gray_faces/111_Alexa_Chung_00.png  
          inflating: CodeFormer/inputs/gray_faces/161_Zac_Efron_00.png  
          inflating: CodeFormer/inputs/gray_faces/158_Jimmy_Fallon_00.png  
          inflating: CodeFormer/inputs/gray_faces/170_Marilyn_Monroe_00.png  
          inflating: CodeFormer/inputs/gray_faces/099_Victoria_Beckham_00.png  
          inflating: CodeFormer/inputs/gray_faces/067_David_Beckham_00.png  
          inflating: CodeFormer/inputs/gray_faces/Einstein02.png  
          inflating: CodeFormer/inputs/gray_faces/Hepburn01.png  
          inflating: CodeFormer/inputs/gray_faces/Hepburn02.png  
          inflating: CodeFormer/inputs/gray_faces/132_Robert_Downey_Jr_00.png  
          inflating: CodeFormer/inputs/gray_faces/089_Miley_Cyrus_00.png  
          inflating: CodeFormer/inputs/gray_faces/Einstein01.png  
           creating: CodeFormer/inputs/whole_imgs/
          inflating: CodeFormer/inputs/whole_imgs/01.jpg  
          inflating: CodeFormer/inputs/whole_imgs/04.jpg  
          inflating: CodeFormer/inputs/whole_imgs/06.png  
          inflating: CodeFormer/inputs/whole_imgs/05.jpg  
          inflating: CodeFormer/inputs/whole_imgs/03.jpg  
          inflating: CodeFormer/inputs/whole_imgs/02.png  
          inflating: CodeFormer/inputs/whole_imgs/00.jpg  
           creating: CodeFormer/web-demos/
           creating: CodeFormer/web-demos/replicate/
          inflating: CodeFormer/web-demos/replicate/cog.yaml  
          inflating: CodeFormer/web-demos/replicate/predict.py  
           creating: CodeFormer/web-demos/hugging_face/
          inflating: CodeFormer/web-demos/hugging_face/app.py  
          inflating: CodeFormer/inference_colorization.py  
          inflating: CodeFormer/inference_inpainting.py  
           creating: CodeFormer/.git/
           creating: CodeFormer/.git/logs/
           creating: CodeFormer/.git/logs/refs/
           creating: CodeFormer/.git/logs/refs/remotes/
           creating: CodeFormer/.git/logs/refs/remotes/origin/
          inflating: CodeFormer/.git/logs/refs/remotes/origin/HEAD  
           creating: CodeFormer/.git/logs/refs/heads/
          inflating: CodeFormer/.git/logs/refs/heads/master  
          inflating: CodeFormer/.git/logs/HEAD  
          inflating: CodeFormer/.git/config  
           creating: CodeFormer/.git/refs/
           creating: CodeFormer/.git/refs/remotes/
           creating: CodeFormer/.git/refs/remotes/origin/
         extracting: CodeFormer/.git/refs/remotes/origin/HEAD  
           creating: CodeFormer/.git/refs/heads/
         extracting: CodeFormer/.git/refs/heads/master  
           creating: CodeFormer/.git/refs/tags/
         extracting: CodeFormer/.git/HEAD    
           creating: CodeFormer/.git/hooks/
          inflating: CodeFormer/.git/hooks/push-to-checkout.sample  
          inflating: CodeFormer/.git/hooks/commit-msg.sample  
          inflating: CodeFormer/.git/hooks/applypatch-msg.sample  
          inflating: CodeFormer/.git/hooks/pre-receive.sample  
          inflating: CodeFormer/.git/hooks/pre-push.sample  
          inflating: CodeFormer/.git/hooks/fsmonitor-watchman.sample  
          inflating: CodeFormer/.git/hooks/post-update.sample  
          inflating: CodeFormer/.git/hooks/update.sample  
          inflating: CodeFormer/.git/hooks/pre-merge-commit.sample  
          inflating: CodeFormer/.git/hooks/pre-commit.sample  
          inflating: CodeFormer/.git/hooks/pre-applypatch.sample  
          inflating: CodeFormer/.git/hooks/prepare-commit-msg.sample  
          inflating: CodeFormer/.git/hooks/pre-rebase.sample  
           creating: CodeFormer/.git/info/
          inflating: CodeFormer/.git/info/exclude  
           creating: CodeFormer/.git/objects/
           creating: CodeFormer/.git/objects/pack/
          inflating: CodeFormer/.git/objects/pack/pack-dcebf9a56fbde6deb0dec2536e47337200fdef01.pack  
          inflating: CodeFormer/.git/objects/pack/pack-dcebf9a56fbde6deb0dec2536e47337200fdef01.idx  
           creating: CodeFormer/.git/objects/info/
          inflating: CodeFormer/.git/packed-refs  
           creating: CodeFormer/.git/branches/
          inflating: CodeFormer/.git/index   
          inflating: CodeFormer/.git/description  
          inflating: CodeFormer/README.md    
           creating: CodeFormer/docs/
          inflating: CodeFormer/docs/train.md  
          inflating: CodeFormer/docs/train_CN.md  
          inflating: CodeFormer/docs/history_changelog.md  
           creating: CodeFormer/scripts/
          inflating: CodeFormer/scripts/generate_latent_gt.py  
          inflating: CodeFormer/scripts/download_pretrained_models_from_gdrive.py  
          inflating: CodeFormer/scripts/download_pretrained_models.py  
          inflating: CodeFormer/scripts/crop_align_face.py  
          inflating: CodeFormer/scripts/inference_vqgan.py  
           creating: CodeFormer/weights/
          inflating: CodeFormer/weights/RealESRGAN_x2plus.pth  
           creating: CodeFormer/weights/CodeFormer/
          inflating: CodeFormer/weights/CodeFormer/codeformer.pth  
         extracting: CodeFormer/weights/CodeFormer/.gitkeep  
           creating: CodeFormer/weights/dlib/
           creating: CodeFormer/weights/facelib/
          inflating: CodeFormer/weights/facelib/detection_Resnet50_Final.pth  
          inflating: CodeFormer/weights/facelib/parsing_parsenet.pth  
         extracting: CodeFormer/weights/facelib/.gitkeep  
          inflating: CodeFormer/weights/README.md  
           creating: CodeFormer/options/
          inflating: CodeFormer/options/CodeFormer_colorization.yml  
          inflating: CodeFormer/options/VQGAN_512_ds32_nearest_stage1.yml  
          inflating: CodeFormer/options/CodeFormer_stage2.yml  
          inflating: CodeFormer/options/CodeFormer_inpainting.yml  
          inflating: CodeFormer/options/CodeFormer_stage3.yml  
      2. Run the inference task that fits your needs. After the command completes successfully, you can view the restored images in ./results/{task_name}.

        Click here to view the execution result

        Face detection model: retinaface_resnet50
        Background upsampling: False, Face upsampling: False
        [1/10] Processing: 10.jpg
        Grayscale input: True
        	detect 0 faces
        [2/10] Processing: 2.jpg
        	detect 14 faces
        [3/10] Processing: 20.jpg
        	detect 0 faces
        [4/10] Processing: 34.jpg
        Grayscale input: True
        	detect 3 faces
        [5/10] Processing: 4.png
        	detect 1 faces
        [6/10] Processing: 40.jpg
        Grayscale input: True
        	detect 1 faces
        [7/10] Processing: 50.jpg
        Grayscale input: True
        	detect 1 faces
        [8/10] Processing: 54.jpg
        Grayscale input: True
        	detect 1 faces
        [9/10] Processing: 64.jpg
        Grayscale input: True
        	detect 1 faces
        [10/10] Processing: 70.jpg
        Grayscale input: True
        	detect 9 faces
        
        All results are saved in results/codeformer_0.5
        Face detection model: retinaface_resnet50
        Background upsampling: True, Face upsampling: False
        [1/10] Processing: 10.jpg
        Grayscale input: True
        	detect 0 faces
        [2/10] Processing: 2.jpg
        	detect 14 faces
        [3/10] Processing: 20.jpg
        	detect 0 faces
        [4/10] Processing: 34.jpg
        Grayscale input: True
        	detect 3 faces
        [5/10] Processing: 4.png
        	detect 1 faces
        [6/10] Processing: 40.jpg
        Grayscale input: True
        	detect 1 faces
        [7/10] Processing: 50.jpg
        Grayscale input: True
        	detect 1 faces
        [8/10] Processing: 54.jpg
        Grayscale input: True
        	detect 1 faces
        [9/10] Processing: 64.jpg
        Grayscale input: True
        	detect 1 faces
        [10/10] Processing: 70.jpg
        Grayscale input: True
        	detect 9 faces
        
        All results are saved in results/codeformer_0.5_bgup
        Face detection model: retinaface_resnet50
        Background upsampling: True, Face upsampling: True
        [1/10] Processing: 10.jpg
        Grayscale input: True
        	detect 0 faces
        [2/10] Processing: 2.jpg
        	detect 14 faces
        [3/10] Processing: 20.jpg
        	detect 0 faces
        [4/10] Processing: 34.jpg
        Grayscale input: True
        	detect 3 faces
        [5/10] Processing: 4.png
        	detect 1 faces
        [6/10] Processing: 40.jpg
        Grayscale input: True
        	detect 1 faces
        [7/10] Processing: 50.jpg
        Grayscale input: True
        	detect 1 faces
        [8/10] Processing: 54.jpg
        Grayscale input: True
        	detect 1 faces
        [9/10] Processing: 64.jpg
        Grayscale input: True
        	detect 1 faces
        [10/10] Processing: 70.jpg
        Grayscale input: True
        	detect 9 faces
        
        All results are saved in results/codeformer_0.5_bgup_faceup
        Face detection model: retinaface_resnet50
        Background upsampling: True, Face upsampling: True
        [1/10] Processing: 10.jpg
        Grayscale input: True
        	detect 0 faces
        [2/10] Processing: 2.jpg
        	detect 14 faces
        [3/10] Processing: 20.jpg
        	detect 0 faces
        [4/10] Processing: 34.jpg
        Grayscale input: True
        	detect 3 faces
        [5/10] Processing: 4.png
        	detect 1 faces
        [6/10] Processing: 40.jpg
        Grayscale input: True
        	detect 1 faces
        [7/10] Processing: 50.jpg
        Grayscale input: True
        	detect 1 faces
        [8/10] Processing: 54.jpg
        Grayscale input: True
        	detect 1 faces
        [9/10] Processing: 64.jpg
        Grayscale input: True
        	detect 1 faces
        [10/10] Processing: 70.jpg
        Grayscale input: True
        	detect 9 faces
        
        All results are saved in results/codeformer_1.0_bgup_faceup
        Face detection model: retinaface_resnet50
        Background upsampling: True, Face upsampling: True
        [1/10] Processing: 10.jpg
        Grayscale input: True
        	detect 0 faces
        [2/10] Processing: 2.jpg
        	detect 14 faces
        [3/10] Processing: 20.jpg
        	detect 0 faces
        [4/10] Processing: 34.jpg
        Grayscale input: True
        	detect 3 faces
        	Input is a 16-bit image
        	Input is a 16-bit image
        [5/10] Processing: 4.png
        	detect 1 faces
        [6/10] Processing: 40.jpg
        Grayscale input: True
        	detect 1 faces
        [7/10] Processing: 50.jpg
        Grayscale input: True
        	detect 1 faces
        [8/10] Processing: 54.jpg
        Grayscale input: True
        	detect 1 faces
        [9/10] Processing: 64.jpg
        Grayscale input: True
        	detect 1 faces
        [10/10] Processing: 70.jpg
        Grayscale input: True
        	detect 9 faces
        
        All results are saved in results/codeformer_0.0_bgup_faceup
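In the logs above, each run saves to a directory whose name encodes the run's settings, e.g. results/codeformer_0.5_bgup_faceup: the number is presumably CodeFormer's fidelity weight (its `-w` parameter, where smaller values favor quality and larger values favor fidelity to the input), and the suffixes record which upsamplers were enabled. A minimal sketch of that naming convention, using a hypothetical helper not present in the tutorial code:

```python
def result_dir(fidelity_weight: float, bg_upsample: bool = False,
               face_upsample: bool = False) -> str:
    """Build the output directory name seen in the logs above.

    Hypothetical illustration: the weight is formatted to one decimal
    place, and '_bgup'/'_faceup' suffixes mark background and face
    upsampling respectively.
    """
    name = f"codeformer_{fidelity_weight:.1f}"
    if bg_upsample:
        name += "_bgup"
    if face_upsample:
        name += "_faceup"
    return f"results/{name}"

print(result_dir(0.5))                     # results/codeformer_0.5
print(result_dir(0.5, bg_upsample=True))   # results/codeformer_0.5_bgup
print(result_dir(1.0, True, True))         # results/codeformer_1.0_bgup_faceup
```

Comparing the five directories this way makes it easy to see which combination of fidelity weight and upsampling produced each set of results.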
    4. Image colorization. Colorize old photos either conditionally or unconditionally.

      Unconditional colorization

      1. Download the code and pretrained files, and install the ModelScope environment. After the download and extraction are complete, you can view the algorithm's source code in the ./Colorization folder.

        Click here to view the execution result

        Looking in indexes: https://mirrors.cloud.aliyuncs.com/pypi/simple
        Requirement already satisfied: modelscope in /usr/local/lib/python3.10/dist-packages (1.8.4)
        Requirement already satisfied: oss2 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.18.1)
        Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.5.3)
        Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from modelscope) (6.0)
        Requirement already satisfied: datasets<=2.13.0,>=2.8.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.11.0)
        Requirement already satisfied: einops in /usr/local/lib/python3.10/dist-packages (from modelscope) (0.4.1)
        Requirement already satisfied: urllib3>=1.26 in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.26.15)
        Requirement already satisfied: Pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (9.4.0)
        Requirement already satisfied: filelock>=3.3.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (3.10.7)
        Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.8.2)
        Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.10.1)
        Requirement already satisfied: yapf in /usr/local/lib/python3.10/dist-packages (from modelscope) (0.32.0)
        Requirement already satisfied: simplejson>=3.3.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (3.19.1)
        Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from modelscope) (59.6.0)
        Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.23.3)
        Requirement already satisfied: tqdm>=4.64.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (4.65.0)
        Requirement already satisfied: requests>=2.25 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.25.1)
        Requirement already satisfied: gast>=0.2.2 in /usr/local/lib/python3.10/dist-packages (from modelscope) (0.5.4)
        Requirement already satisfied: addict in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.4.0)
        Requirement already satisfied: sortedcontainers>=1.5.9 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.4.0)
        Requirement already satisfied: pyarrow!=9.0.0,>=6.0.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (11.0.0)
        Requirement already satisfied: attrs in /usr/local/lib/python3.10/dist-packages (from modelscope) (22.2.0)
        Requirement already satisfied: dill<0.3.7,>=0.3.0 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.3.6)
        Requirement already satisfied: huggingface-hub<1.0.0,>=0.11.0 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.13.3)
        Requirement already satisfied: fsspec[http]>=2021.11.1 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (2023.3.0)
        Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (23.0)
        Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (3.8.4)
        Requirement already satisfied: xxhash in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (3.2.0)
        Requirement already satisfied: responses<0.19 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.18.0)
        Requirement already satisfied: multiprocess in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.70.14)
        Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.1->modelscope) (1.16.0)
        Requirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.10/dist-packages (from requests>=2.25->modelscope) (4.0.0)
        Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.25->modelscope) (2.10)
        Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests>=2.25->modelscope) (2022.12.7)
        Requirement already satisfied: pycryptodome>=3.4.7 in /usr/local/lib/python3.10/dist-packages (from oss2->modelscope) (3.17)
        Requirement already satisfied: aliyun-python-sdk-core>=2.13.12 in /usr/local/lib/python3.10/dist-packages (from oss2->modelscope) (2.13.36)
        Requirement already satisfied: crcmod>=1.7 in /usr/local/lib/python3.10/dist-packages (from oss2->modelscope) (1.7)
        Requirement already satisfied: aliyun-python-sdk-kms>=2.4.1 in /usr/local/lib/python3.10/dist-packages (from oss2->modelscope) (2.16.1)
        Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->modelscope) (2023.3)
        Requirement already satisfied: jmespath<1.0.0,>=0.9.3 in /usr/local/lib/python3.10/dist-packages (from aliyun-python-sdk-core>=2.13.12->oss2->modelscope) (0.10.0)
        Requirement already satisfied: cryptography>=2.6.0 in /usr/local/lib/python3.10/dist-packages (from aliyun-python-sdk-core>=2.13.12->oss2->modelscope) (40.0.1)
        Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (1.3.1)
        Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (3.1.0)
        Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (6.0.4)
        Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (4.0.2)
        Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (1.3.3)
        Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (1.8.2)
        Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0.0,>=0.11.0->datasets<=2.13.0,>=2.8.0->modelscope) (4.5.0)
        Requirement already satisfied: cffi>=1.12 in /usr/local/lib/python3.10/dist-packages (from cryptography>=2.6.0->aliyun-python-sdk-core>=2.13.12->oss2->modelscope) (1.15.1)
        Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.12->cryptography>=2.6.0->aliyun-python-sdk-core>=2.13.12->oss2->modelscope) (2.21)
        WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
        
        [notice] A new release of pip is available: 23.0.1 -> 23.2.1
        [notice] To update, run: python3 -m pip install --upgrade pip
        http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/color.zip
        cn-hangzhou
        --2023-09-05 02:30:04--  http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/color.zip
        Resolving pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.45, 100.118.28.44, 100.118.28.50, ...
        Connecting to pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.45|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 1659285229 (1.5G) [application/zip]
        Saving to: ‘color.zip’
        
        color.zip           100%[===================>]   1.54G  12.9MB/s    in 2m 10s  
        
        2023-09-05 02:32:14 (12.2 MB/s) - ‘color.zip’ saved [1659285229/1659285229]
        
        Archive:  color.zip
           creating: Colorization/
           creating: Colorization/.ipynb_checkpoints/
         extracting: Colorization/.ipynb_checkpoints/demo-checkpoint.py  
           creating: Colorization/pretrain/
           creating: Colorization/pretrain/cv_ddcolor_image-colorization/
          inflating: Colorization/pretrain/cv_ddcolor_image-colorization/pytorch_model.pt  
          inflating: Colorization/pretrain/cv_ddcolor_image-colorization/.mdl  
          inflating: Colorization/pretrain/cv_ddcolor_image-colorization/.msc  
          inflating: Colorization/pretrain/cv_ddcolor_image-colorization/README.md  
           creating: Colorization/pretrain/cv_ddcolor_image-colorization/resources/
          inflating: Colorization/pretrain/cv_ddcolor_image-colorization/resources/ddcolor_arch.jpg  
          inflating: Colorization/pretrain/cv_ddcolor_image-colorization/resources/demo2.jpg  
          inflating: Colorization/pretrain/cv_ddcolor_image-colorization/resources/demo3.jpg  
          inflating: Colorization/pretrain/cv_ddcolor_image-colorization/resources/demo.jpg  
          inflating: Colorization/pretrain/cv_ddcolor_image-colorization/configuration.json  
           creating: Colorization/pretrain/cv_csrnet_image-color-enhance-models/
          inflating: Colorization/pretrain/cv_csrnet_image-color-enhance-models/pytorch_model.pt  
         extracting: Colorization/pretrain/cv_csrnet_image-color-enhance-models/.mdl  
          inflating: Colorization/pretrain/cv_csrnet_image-color-enhance-models/.msc  
          inflating: Colorization/pretrain/cv_csrnet_image-color-enhance-models/README.md  
          inflating: Colorization/pretrain/cv_csrnet_image-color-enhance-models/configuration.json  
           creating: Colorization/pretrain/cv_csrnet_image-color-enhance-models/data/
          inflating: Colorization/pretrain/cv_csrnet_image-color-enhance-models/data/csrnet_1.png  
          inflating: Colorization/pretrain/cv_csrnet_image-color-enhance-models/data/1.png  
           creating: Colorization/pretrain/cv_unet_image-colorization/
          inflating: Colorization/pretrain/cv_unet_image-colorization/pytorch_model.pt  
         extracting: Colorization/pretrain/cv_unet_image-colorization/.mdl  
          inflating: Colorization/pretrain/cv_unet_image-colorization/.msc  
          inflating: Colorization/pretrain/cv_unet_image-colorization/README.md  
          inflating: Colorization/pretrain/cv_unet_image-colorization/configuration.json  
           creating: Colorization/pretrain/cv_unet_image-colorization/description/
          inflating: Colorization/pretrain/cv_unet_image-colorization/description/demo2.jpg  
          inflating: Colorization/pretrain/cv_unet_image-colorization/description/demo3.png  
          inflating: Colorization/pretrain/cv_unet_image-colorization/description/demo.jpg  
          inflating: Colorization/pretrain/cv_unet_image-colorization/description/deoldify_arch.png  
          inflating: Colorization/demo.py    
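The inference logs in the next step follow a similar output-path pattern: plain colorization results are saved under results/DDC/ or results/DeOldify/, and the color-enhanced variants gain an enhance_ prefix. A small illustrative helper (hypothetical, not part of the downloaded demo.py) that reproduces those paths:

```python
from pathlib import Path

def plan_outputs(inputs, task_name, enhance=False):
    """Map input image paths to the output paths seen in the logs.

    Hypothetical sketch: with enhance=True, a color-enhance pass is
    applied and the saved file gains an 'enhance_' prefix.
    """
    out_dir = Path("results") / task_name
    prefix = "enhance_" if enhance else ""
    return [str(out_dir / f"{prefix}{Path(p).name}") for p in inputs]

print(plan_outputs(["input/64.jpg", "input/4.png"], "DDC"))
# ['results/DDC/64.jpg', 'results/DDC/4.png']
print(plan_outputs(["input/64.jpg"], "DDC", enhance=True))
# ['results/DDC/enhance_64.jpg']
```

This makes it easy to predict where each image will land before running a batch, and to compare the DDColor and DeOldify outputs for the same input side by side.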
      2. Run the inference task that fits your needs. After the command completes successfully, you can view the restored images in ./results/{task_name}.

        Click here to view the execution result

        2023-09-05 02:36:59,816 - modelscope - INFO - PyTorch version 1.13.1+cu117 Found.
        2023-09-05 02:36:59,818 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
        2023-09-05 02:36:59,890 - modelscope - INFO - Loading done! Current index file version is 1.8.4, with md5 80fa9349fc3e7b04fcfad511918062b1 and a total number of 902 components indexed
        2023-09-05 02:37:00,925 - modelscope - INFO - initiate model from /mnt/workspace/Colorization/pretrain/cv_ddcolor_image-colorization
        2023-09-05 02:37:00,926 - modelscope - INFO - initiate model from location /mnt/workspace/Colorization/pretrain/cv_ddcolor_image-colorization.
        2023-09-05 02:37:00,926 - modelscope - INFO - initialize model from /mnt/workspace/Colorization/pretrain/cv_ddcolor_image-colorization
        2023-09-05 02:37:05,703 - modelscope - INFO - Loading DDColor model from /mnt/workspace/Colorization/pretrain/cv_ddcolor_image-colorization/pytorch_model.pt, with param key: [params].
        2023-09-05 02:37:05,906 - modelscope - INFO - load model done.
        2023-09-05 02:37:05,927 - modelscope - WARNING - No preprocessor field found in cfg.
        2023-09-05 02:37:05,927 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
        2023-09-05 02:37:05,927 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/mnt/workspace/Colorization/pretrain/cv_ddcolor_image-colorization'}. trying to build by task and model information.
        2023-09-05 02:37:05,927 - modelscope - WARNING - No preprocessor key ('ddcolor', 'image-colorization') found in PREPROCESSOR_MAP, skip building preprocessor.
        2023-09-05 02:37:05,948 - modelscope - INFO - load model done
        Total Image:  10
        0/10 saved at results/DDC/64.jpg
        1/10 saved at results/DDC/2.jpg
        2/10 saved at results/DDC/34.jpg
        3/10 saved at results/DDC/54.jpg
        4/10 saved at results/DDC/40.jpg
        5/10 saved at results/DDC/10.jpg
        6/10 saved at results/DDC/70.jpg
        7/10 saved at results/DDC/4.png
        8/10 saved at results/DDC/20.jpg
        9/10 saved at results/DDC/50.jpg
        2023-09-05 02:37:11,906 - modelscope - INFO - PyTorch version 1.13.1+cu117 Found.
        2023-09-05 02:37:11,907 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
        2023-09-05 02:37:11,979 - modelscope - INFO - Loading done! Current index file version is 1.8.4, with md5 80fa9349fc3e7b04fcfad511918062b1 and a total number of 902 components indexed
        2023-09-05 02:37:13,020 - modelscope - INFO - initiate model from /mnt/workspace/Colorization/pretrain/cv_ddcolor_image-colorization
        2023-09-05 02:37:13,020 - modelscope - INFO - initiate model from location /mnt/workspace/Colorization/pretrain/cv_ddcolor_image-colorization.
        2023-09-05 02:37:13,021 - modelscope - INFO - initialize model from /mnt/workspace/Colorization/pretrain/cv_ddcolor_image-colorization
        2023-09-05 02:37:18,528 - modelscope - INFO - Loading DDColor model from /mnt/workspace/Colorization/pretrain/cv_ddcolor_image-colorization/pytorch_model.pt, with param key: [params].
        2023-09-05 02:37:18,774 - modelscope - INFO - load model done.
        2023-09-05 02:37:18,797 - modelscope - WARNING - No preprocessor field found in cfg.
        2023-09-05 02:37:18,797 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
        2023-09-05 02:37:18,797 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/mnt/workspace/Colorization/pretrain/cv_ddcolor_image-colorization'}. trying to build by task and model information.
        2023-09-05 02:37:18,797 - modelscope - WARNING - No preprocessor key ('ddcolor', 'image-colorization') found in PREPROCESSOR_MAP, skip building preprocessor.
        2023-09-05 02:37:18,819 - modelscope - INFO - load model done
        2023-09-05 02:37:18,821 - modelscope - INFO - initiate model from /mnt/workspace/Colorization/pretrain/cv_csrnet_image-color-enhance-models
        2023-09-05 02:37:18,821 - modelscope - INFO - initiate model from location /mnt/workspace/Colorization/pretrain/cv_csrnet_image-color-enhance-models.
        2023-09-05 02:37:18,822 - modelscope - INFO - initialize model from /mnt/workspace/Colorization/pretrain/cv_csrnet_image-color-enhance-models
        2023-09-05 02:37:19,861 - modelscope - INFO - Loading CSRNet model from /mnt/workspace/Colorization/pretrain/cv_csrnet_image-color-enhance-models/pytorch_model.pt, with param key: [params].
        2023-09-05 02:37:19,863 - modelscope - INFO - load model done.
        Total Image:  10
        use_enhance
        0/10 saved at results/DDC/enhance_64.jpg
        use_enhance
        1/10 saved at results/DDC/enhance_2.jpg
        use_enhance
        2/10 saved at results/DDC/enhance_34.jpg
        use_enhance
        3/10 saved at results/DDC/enhance_54.jpg
        use_enhance
        4/10 saved at results/DDC/enhance_40.jpg
        use_enhance
        5/10 saved at results/DDC/enhance_10.jpg
        use_enhance
        6/10 saved at results/DDC/enhance_70.jpg
        use_enhance
        7/10 saved at results/DDC/enhance_4.png
        use_enhance
        8/10 saved at results/DDC/enhance_20.jpg
        use_enhance
        9/10 saved at results/DDC/enhance_50.jpg
        2023-09-05 02:37:25,090 - modelscope - INFO - PyTorch version 1.13.1+cu117 Found.
        2023-09-05 02:37:25,091 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
        2023-09-05 02:37:25,150 - modelscope - INFO - Loading done! Current index file version is 1.8.4, with md5 80fa9349fc3e7b04fcfad511918062b1 and a total number of 902 components indexed
        2023-09-05 02:37:25,728 - modelscope - INFO - initiate model from /mnt/workspace/Colorization/pretrain/cv_unet_image-colorization
        2023-09-05 02:37:25,728 - modelscope - INFO - initiate model from location /mnt/workspace/Colorization/pretrain/cv_unet_image-colorization.
        2023-09-05 02:37:25,730 - modelscope - WARNING - No preprocessor field found in cfg.
        2023-09-05 02:37:25,730 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
        2023-09-05 02:37:25,730 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/mnt/workspace/Colorization/pretrain/cv_unet_image-colorization'}. trying to build by task and model information.
        2023-09-05 02:37:25,730 - modelscope - WARNING - Find task: image-colorization, model type: None. Insufficient information to build preprocessor, skip building preprocessor
        /usr/local/lib/python3.10/dist-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
          warnings.warn(
        /usr/local/lib/python3.10/dist-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet101_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet101_Weights.DEFAULT` to get the most up-to-date weights.
          warnings.warn(msg)
        Downloading: "https://download.pytorch.org/models/resnet101-63fe2227.pth" to /root/.cache/torch/hub/checkpoints/resnet101-63fe2227.pth
        100%|████████████████████████████████████████| 171M/171M [00:08<00:00, 21.7MB/s]
        2023-09-05 02:37:40,123 - modelscope - INFO - load model done
        Total Image:  10
        0/10 saved at results/DeOldify/64.jpg
        1/10 saved at results/DeOldify/2.jpg
        2/10 saved at results/DeOldify/34.jpg
        3/10 saved at results/DeOldify/54.jpg
        4/10 saved at results/DeOldify/40.jpg
        5/10 saved at results/DeOldify/10.jpg
        6/10 saved at results/DeOldify/70.jpg
        7/10 saved at results/DeOldify/4.png
        8/10 saved at results/DeOldify/20.jpg
        9/10 saved at results/DeOldify/50.jpg
        2023-09-05 02:37:44,830 - modelscope - INFO - PyTorch version 1.13.1+cu117 Found.
        2023-09-05 02:37:44,831 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
        2023-09-05 02:37:44,898 - modelscope - INFO - Loading done! Current index file version is 1.8.4, with md5 80fa9349fc3e7b04fcfad511918062b1 and a total number of 902 components indexed
        2023-09-05 02:37:45,464 - modelscope - INFO - initiate model from /mnt/workspace/Colorization/pretrain/cv_unet_image-colorization
        2023-09-05 02:37:45,464 - modelscope - INFO - initiate model from location /mnt/workspace/Colorization/pretrain/cv_unet_image-colorization.
        2023-09-05 02:37:45,466 - modelscope - WARNING - No preprocessor field found in cfg.
        2023-09-05 02:37:45,466 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
        2023-09-05 02:37:45,466 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/mnt/workspace/Colorization/pretrain/cv_unet_image-colorization'}. trying to build by task and model information.
        2023-09-05 02:37:45,466 - modelscope - WARNING - Find task: image-colorization, model type: None. Insufficient information to build preprocessor, skip building preprocessor
        /usr/local/lib/python3.10/dist-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
          warnings.warn(
        /usr/local/lib/python3.10/dist-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet101_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet101_Weights.DEFAULT` to get the most up-to-date weights.
          warnings.warn(msg)
        2023-09-05 02:37:49,758 - modelscope - INFO - load model done
        2023-09-05 02:37:49,760 - modelscope - INFO - initiate model from /mnt/workspace/Colorization/pretrain/cv_csrnet_image-color-enhance-models
        2023-09-05 02:37:49,760 - modelscope - INFO - initiate model from location /mnt/workspace/Colorization/pretrain/cv_csrnet_image-color-enhance-models.
        2023-09-05 02:37:49,761 - modelscope - INFO - initialize model from /mnt/workspace/Colorization/pretrain/cv_csrnet_image-color-enhance-models
        2023-09-05 02:37:49,766 - modelscope - INFO - Loading CSRNet model from /mnt/workspace/Colorization/pretrain/cv_csrnet_image-color-enhance-models/pytorch_model.pt, with param key: [params].
        2023-09-05 02:37:49,768 - modelscope - INFO - load model done.
        Total Image:  10
        use_enhance
        0/10 saved at results/DeOldify/enhance_64.jpg
        use_enhance
        1/10 saved at results/DeOldify/enhance_2.jpg
        use_enhance
        2/10 saved at results/DeOldify/enhance_34.jpg
        use_enhance
        3/10 saved at results/DeOldify/enhance_54.jpg
        use_enhance
        4/10 saved at results/DeOldify/enhance_40.jpg
        use_enhance
        5/10 saved at results/DeOldify/enhance_10.jpg
        use_enhance
        6/10 saved at results/DeOldify/enhance_70.jpg
        use_enhance
        7/10 saved at results/DeOldify/enhance_4.png
        use_enhance
        8/10 saved at results/DeOldify/enhance_20.jpg
        use_enhance
        9/10 saved at results/DeOldify/enhance_50.jpg

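The colorization run logged above follows the standard ModelScope pipeline pattern. Below is a minimal sketch, not the tutorial's exact code: it assumes ModelScope and OpenCV are installed (as in the DSW image) and that the pretrained model sits at the path shown in the log. The `output_path` helper simply mirrors the `enhance_` filename prefix that appears in the log when color enhancement is enabled.

```python
import os

def output_path(src, out_dir='results/DeOldify', enhance=False):
    # Mirror the naming seen in the log: enhance_<name> when enhancement is on.
    name = os.path.basename(src)
    return os.path.join(out_dir, f'enhance_{name}' if enhance else name)

def colorize_all(inputs, enhance=False):
    # Deferred imports: only needed when the pipeline actually runs on the instance.
    import cv2
    from modelscope.pipelines import pipeline
    from modelscope.outputs import OutputKeys

    colorizer = pipeline(
        'image-colorization',
        model='/mnt/workspace/Colorization/pretrain/cv_unet_image-colorization')
    for i, src in enumerate(inputs):
        result = colorizer(src)
        dst = output_path(src, enhance=enhance)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        # The pipeline returns the colorized image as an ndarray.
        cv2.imwrite(dst, result[OutputKeys.OUTPUT_IMG])
        print(f'{i}/{len(inputs)} saved at {dst}')
```

The enhancement pass in the log additionally chains the `cv_csrnet_image-color-enhance-models` pipeline over each colorized frame; it follows the same `pipeline(task, model=...)` pattern.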
      Conditional colorization

      1. Download the code and pretrained files. After the download and extraction complete, you can inspect the algorithms' source code in the ./sample and ./unicolor folders.
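The notebook cell behind this step amounts to a download-and-extract; a hedged Python sketch is below. The URL matches the run log (note the internal OSS endpoint is only reachable from within the same region), and the download guard is an assumption added here so a re-run skips the ~8.5 GB fetch if the archive is already present.

```python
import os
import zipfile
from urllib.request import urlretrieve

URL = ('http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com'
       '/aigc-data/restoration/repo/sam_unicolor.zip')

def archive_name(url):
    """Local filename derived from the download URL."""
    return url.rsplit('/', 1)[-1]

def fetch_and_extract(url=URL, dest='.'):
    zip_path = os.path.join(dest, archive_name(url))
    if not os.path.exists(zip_path):          # skip re-downloading the ~8.5 GB archive
        urlretrieve(url, zip_path)
    with zipfile.ZipFile(zip_path) as zf:     # extracts the sample/ and unicolor/ folders
        zf.extractall(dest)
```
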

        Click here to view the run result

        http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/sam_unicolor.zip
        cn-hangzhou
        --2023-09-05 02:44:17--  http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/sam_unicolor.zip
        Resolving pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.44, 100.118.28.45, 100.118.28.49, ...
        Connecting to pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.44|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 9102984978 (8.5G) [application/zip]
        Saving to: ‘sam_unicolor.zip’
        
        sam_unicolor.zip    100%[===================>]   8.48G  17.6MB/s    in 8m 33s  
        
        2023-09-05 02:52:51 (16.9 MB/s) - ‘sam_unicolor.zip’ saved [9102984978/9102984978]
        
        Archive:  sam_unicolor.zip
           creating: sample/
          inflating: sample/utils_func.py    
           creating: sample/.ipynb_checkpoints/
          inflating: sample/.ipynb_checkpoints/sample-checkpoint.ipynb  
           creating: sample/SAM/
          inflating: sample/SAM/CONTRIBUTING.md  
           creating: sample/SAM/demo/
           creating: sample/SAM/demo/src/
           creating: sample/SAM/demo/src/components/
          inflating: sample/SAM/demo/src/components/Stage.tsx  
           creating: sample/SAM/demo/src/components/hooks/
          inflating: sample/SAM/demo/src/components/hooks/createContext.tsx  
          inflating: sample/SAM/demo/src/components/hooks/context.tsx  
          inflating: sample/SAM/demo/src/components/Tool.tsx  
           creating: sample/SAM/demo/src/components/helpers/
          inflating: sample/SAM/demo/src/components/helpers/maskUtils.tsx  
          inflating: sample/SAM/demo/src/components/helpers/scaleHelper.tsx  
          inflating: sample/SAM/demo/src/components/helpers/Interfaces.tsx  
          inflating: sample/SAM/demo/src/components/helpers/onnxModelAPI.tsx  
           creating: sample/SAM/demo/src/assets/
           creating: sample/SAM/demo/src/assets/data/
          inflating: sample/SAM/demo/src/assets/data/dogs.jpg  
           creating: sample/SAM/demo/src/assets/scss/
          inflating: sample/SAM/demo/src/assets/scss/App.scss  
          inflating: sample/SAM/demo/src/assets/index.html  
          inflating: sample/SAM/demo/src/index.tsx  
          inflating: sample/SAM/demo/src/App.tsx  
           creating: sample/SAM/demo/configs/
           creating: sample/SAM/demo/configs/webpack/
          inflating: sample/SAM/demo/configs/webpack/common.js  
          inflating: sample/SAM/demo/configs/webpack/prod.js  
          inflating: sample/SAM/demo/configs/webpack/dev.js  
          inflating: sample/SAM/demo/tsconfig.json  
          inflating: sample/SAM/demo/tailwind.config.js  
          inflating: sample/SAM/demo/README.md  
          inflating: sample/SAM/demo/postcss.config.js  
          inflating: sample/SAM/demo/package.json  
           creating: sample/SAM/assets/
          inflating: sample/SAM/assets/masks2.jpg  
          inflating: sample/SAM/assets/minidemo.gif  
          inflating: sample/SAM/assets/model_diagram.png  
          inflating: sample/SAM/assets/notebook1.png  
          inflating: sample/SAM/assets/masks1.png  
          inflating: sample/SAM/assets/notebook2.png  
          inflating: sample/SAM/nohup.out    
          inflating: sample/SAM/setup.cfg    
          inflating: sample/SAM/.gitignore   
          inflating: sample/SAM/CODE_OF_CONDUCT.md  
           creating: sample/SAM/notebooks/
          inflating: sample/SAM/notebooks/part.png  
          inflating: sample/SAM/notebooks/onnx_model_example.ipynb  
          inflating: sample/SAM/notebooks/save_0664.png  
           creating: sample/SAM/notebooks/.ipynb_checkpoints/
          inflating: sample/SAM/notebooks/.ipynb_checkpoints/automatic_mask_generator_example-checkpoint.ipynb  
          inflating: sample/SAM/notebooks/.ipynb_checkpoints/predictor_example-checkpoint.ipynb  
          inflating: sample/SAM/notebooks/save_part.png  
          inflating: sample/SAM/notebooks/save.png  
          inflating: sample/SAM/notebooks/save_0656.png  
          inflating: sample/SAM/notebooks/save_0660.png  
          inflating: sample/SAM/notebooks/predictor_example.ipynb  
          inflating: sample/SAM/notebooks/part_1.png  
           creating: sample/SAM/notebooks/images/
          inflating: sample/SAM/notebooks/images/truck.jpg  
          inflating: sample/SAM/notebooks/images/000.png  
          inflating: sample/SAM/notebooks/images/dog.jpg  
          inflating: sample/SAM/notebooks/images/groceries.jpg  
          inflating: sample/SAM/notebooks/automatic_mask_generator_example.ipynb  
          inflating: sample/SAM/README.md    
          inflating: sample/SAM/LICENSE      
           creating: sample/SAM/segment_anything/
          inflating: sample/SAM/segment_anything/__init__.py  
          inflating: sample/SAM/segment_anything/predictor.py  
          inflating: sample/SAM/segment_anything/automatic_mask_generator.py  
           creating: sample/SAM/segment_anything/utils/
          inflating: sample/SAM/segment_anything/utils/__init__.py  
          inflating: sample/SAM/segment_anything/utils/amg.py  
          inflating: sample/SAM/segment_anything/utils/onnx.py  
          inflating: sample/SAM/segment_anything/utils/transforms.py  
           creating: sample/SAM/segment_anything/utils/__pycache__/
          inflating: sample/SAM/segment_anything/utils/__pycache__/__init__.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/utils/__pycache__/transforms.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/utils/__pycache__/amg.cpython-38.pyc  
           creating: sample/SAM/segment_anything/modeling/
          inflating: sample/SAM/segment_anything/modeling/__init__.py  
          inflating: sample/SAM/segment_anything/modeling/mask_decoder.py  
          inflating: sample/SAM/segment_anything/modeling/prompt_encoder.py  
          inflating: sample/SAM/segment_anything/modeling/transformer.py  
          inflating: sample/SAM/segment_anything/modeling/common.py  
          inflating: sample/SAM/segment_anything/modeling/image_encoder.py  
          inflating: sample/SAM/segment_anything/modeling/sam.py  
           creating: sample/SAM/segment_anything/modeling/__pycache__/
          inflating: sample/SAM/segment_anything/modeling/__pycache__/__init__.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/modeling/__pycache__/sam.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/modeling/__pycache__/image_encoder.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/modeling/__pycache__/common.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/modeling/__pycache__/transformer.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/modeling/__pycache__/mask_decoder.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/modeling/__pycache__/prompt_encoder.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/build_sam.py  
           creating: sample/SAM/segment_anything/__pycache__/
          inflating: sample/SAM/segment_anything/__pycache__/__init__.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/__pycache__/automatic_mask_generator.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/__pycache__/predictor.cpython-38.pyc  
          inflating: sample/SAM/segment_anything/__pycache__/build_sam.cpython-38.pyc  
          inflating: sample/SAM/linter.sh    
          inflating: sample/SAM/setup.py     
           creating: sample/SAM/.git/
           creating: sample/SAM/.git/logs/
          inflating: sample/SAM/.git/logs/HEAD  
           creating: sample/SAM/.git/logs/refs/
           creating: sample/SAM/.git/logs/refs/remotes/
           creating: sample/SAM/.git/logs/refs/remotes/origin/
          inflating: sample/SAM/.git/logs/refs/remotes/origin/HEAD  
           creating: sample/SAM/.git/logs/refs/heads/
          inflating: sample/SAM/.git/logs/refs/heads/main  
           creating: sample/SAM/.git/objects/
           creating: sample/SAM/.git/objects/info/
           creating: sample/SAM/.git/objects/pack/
          inflating: sample/SAM/.git/objects/pack/pack-6bddc3bea3b5846071a7a94ed6811aed34aa66d9.pack  
          inflating: sample/SAM/.git/objects/pack/pack-6bddc3bea3b5846071a7a94ed6811aed34aa66d9.idx  
          inflating: sample/SAM/.git/packed-refs  
           creating: sample/SAM/.git/hooks/
          inflating: sample/SAM/.git/hooks/pre-commit.sample  
          inflating: sample/SAM/.git/hooks/pre-rebase.sample  
          inflating: sample/SAM/.git/hooks/post-update.sample  
          inflating: sample/SAM/.git/hooks/update.sample  
          inflating: sample/SAM/.git/hooks/pre-merge-commit.sample  
          inflating: sample/SAM/.git/hooks/pre-push.sample  
          inflating: sample/SAM/.git/hooks/fsmonitor-watchman.sample  
          inflating: sample/SAM/.git/hooks/prepare-commit-msg.sample  
          inflating: sample/SAM/.git/hooks/applypatch-msg.sample  
          inflating: sample/SAM/.git/hooks/pre-receive.sample  
          inflating: sample/SAM/.git/hooks/commit-msg.sample  
          inflating: sample/SAM/.git/hooks/pre-applypatch.sample  
          inflating: sample/SAM/.git/config  
          inflating: sample/SAM/.git/description  
           creating: sample/SAM/.git/branches/
           creating: sample/SAM/.git/info/
          inflating: sample/SAM/.git/info/exclude  
         extracting: sample/SAM/.git/HEAD    
           creating: sample/SAM/.git/refs/
           creating: sample/SAM/.git/refs/remotes/
           creating: sample/SAM/.git/refs/remotes/origin/
         extracting: sample/SAM/.git/refs/remotes/origin/HEAD  
           creating: sample/SAM/.git/refs/heads/
         extracting: sample/SAM/.git/refs/heads/main  
           creating: sample/SAM/.git/refs/tags/
          inflating: sample/SAM/.git/index   
          inflating: sample/SAM/.flake8      
           creating: sample/SAM/scripts/
          inflating: sample/SAM/scripts/amg.py  
          inflating: sample/SAM/scripts/export_onnx_model.py  
          inflating: sample/color_table.yaml  
           creating: sample/ImageMatch/
           creating: sample/ImageMatch/data/
          inflating: sample/ImageMatch/data/vgg19_gray.pth  
          inflating: sample/ImageMatch/data/vgg19_conv.pth  
           creating: sample/ImageMatch/lib/
         extracting: sample/ImageMatch/lib/__init__.py  
          inflating: sample/ImageMatch/lib/functional.py  
          inflating: sample/ImageMatch/lib/TestTransforms.py  
          inflating: sample/ImageMatch/lib/TrainTransforms.py  
          inflating: sample/ImageMatch/lib/FeatVGG.py  
          inflating: sample/ImageMatch/lib/videoloader.py  
          inflating: sample/ImageMatch/lib/videoloader_imagenet.py  
           creating: sample/ImageMatch/lib/.vscode/
         extracting: sample/ImageMatch/lib/.vscode/settings.json  
           creating: sample/ImageMatch/lib/__pycache__/
          inflating: sample/ImageMatch/lib/__pycache__/__init__.cpython-38.pyc  
          inflating: sample/ImageMatch/lib/__pycache__/TestTransforms.cpython-38.pyc  
          inflating: sample/ImageMatch/lib/__pycache__/functional.cpython-38.pyc  
          inflating: sample/ImageMatch/lib/VGGFeatureLoss.py  
           creating: sample/ImageMatch/utils/
         extracting: sample/ImageMatch/utils/__init__.py  
          inflating: sample/ImageMatch/utils/util_distortion.py  
          inflating: sample/ImageMatch/utils/vgg_util.py  
          inflating: sample/ImageMatch/utils/flowlib.py  
          inflating: sample/ImageMatch/utils/util_tensorboard.py  
          inflating: sample/ImageMatch/utils/warping.py  
          inflating: sample/ImageMatch/utils/util.py  
           creating: sample/ImageMatch/utils/__pycache__/
          inflating: sample/ImageMatch/utils/__pycache__/__init__.cpython-38.pyc  
          inflating: sample/ImageMatch/utils/__pycache__/util_distortion.cpython-38.pyc  
          inflating: sample/ImageMatch/utils/__pycache__/util.cpython-38.pyc  
          inflating: sample/ImageMatch/utils/tb_image_recorder.py  
          inflating: sample/ImageMatch/warp.py  
           creating: sample/ImageMatch/tensorboardX/
          inflating: sample/ImageMatch/tensorboardX/__init__.py  
           creating: sample/ImageMatch/tensorboardX/src/
         extracting: sample/ImageMatch/tensorboardX/src/__init__.py  
          inflating: sample/ImageMatch/tensorboardX/src/node_def_pb2.py  
          inflating: sample/ImageMatch/tensorboardX/src/summary_pb2.py  
          inflating: sample/ImageMatch/tensorboardX/src/versions_pb2.py  
          inflating: sample/ImageMatch/tensorboardX/src/resource_handle_pb2.py  
          inflating: sample/ImageMatch/tensorboardX/src/tensor_shape_pb2.py  
          inflating: sample/ImageMatch/tensorboardX/src/types_pb2.py  
          inflating: sample/ImageMatch/tensorboardX/src/attr_value_pb2.py  
          inflating: sample/ImageMatch/tensorboardX/src/plugin_pr_curve_pb2.py  
          inflating: sample/ImageMatch/tensorboardX/src/event_pb2.py  
          inflating: sample/ImageMatch/tensorboardX/src/tensor_pb2.py  
          inflating: sample/ImageMatch/tensorboardX/src/graph_pb2.py  
          inflating: sample/ImageMatch/tensorboardX/graph.py  
          inflating: sample/ImageMatch/tensorboardX/crc32c.py  
          inflating: sample/ImageMatch/tensorboardX/x2num.py  
          inflating: sample/ImageMatch/tensorboardX/summary.py  
          inflating: sample/ImageMatch/tensorboardX/record_writer.py  
          inflating: sample/ImageMatch/tensorboardX/event_file_writer.py  
          inflating: sample/ImageMatch/tensorboardX/writer.py  
          inflating: sample/ImageMatch/tensorboardX/graph_onnx.py  
          inflating: sample/ImageMatch/tensorboardX/embedding.py  
           creating: sample/ImageMatch/models/
          inflating: sample/ImageMatch/models/ContextualLoss.py  
          inflating: sample/ImageMatch/models/NonlocalNet.py  
          inflating: sample/ImageMatch/models/ColorVidNet.py  
          inflating: sample/ImageMatch/models/FrameColor.py  
          inflating: sample/ImageMatch/models/spectral_normalization.py  
          inflating: sample/ImageMatch/models/GAN_models.py  
           creating: sample/ImageMatch/models/__pycache__/
          inflating: sample/ImageMatch/models/__pycache__/FrameColor.cpython-38.pyc  
          inflating: sample/ImageMatch/models/__pycache__/NonlocalNet.cpython-38.pyc  
          inflating: sample/ImageMatch/models/__pycache__/vgg19_gray.cpython-38.pyc  
          inflating: sample/ImageMatch/models/__pycache__/ColorVidNet.cpython-38.pyc  
          inflating: sample/ImageMatch/models/vgg19_gray.py  
           creating: sample/ImageMatch/checkpoints/
           creating: sample/ImageMatch/checkpoints/video_moredata_l1/
          inflating: sample/ImageMatch/checkpoints/video_moredata_l1/nonlocal_net_iter_76000.pth  
          inflating: sample/ImageMatch/checkpoints/video_moredata_l1/colornet_iter_76000.pth  
          inflating: sample/ImageMatch/checkpoints/video_moredata_l1/discriminator_iter_76000.pth  
           creating: sample/ImageMatch/__pycache__/
          inflating: sample/ImageMatch/__pycache__/warp.cpython-38.pyc  
          inflating: sample/colorizer.py     
          inflating: sample/sample.ipynb     
           creating: sample/images/
          inflating: sample/images/12.jpg    
          inflating: sample/images/1_exp.jpg  
          inflating: sample/images/1.jpg     
          inflating: sample/images/10.jpg    
          inflating: sample/sam_vit_h_4b8939.pth  
           creating: sample/unsorted_codes/
          inflating: sample/unsorted_codes/clip_score.py  
          inflating: sample/unsorted_codes/test_heatmap.py  
          inflating: sample/unsorted_codes/ideep_stroke_file.py  
          inflating: sample/unsorted_codes/metrics.ipynb  
          inflating: sample/unsorted_codes/generate_strokes.py  
          inflating: sample/unsorted_codes/html_images.py  
          inflating: sample/unsorted_codes/ablation.ipynb  
          inflating: sample/unsorted_codes/experiment.ipynb  
          inflating: sample/unsorted_codes/ideep_stroke.py  
          inflating: sample/unsorted_codes/sample_exemplar.py  
          inflating: sample/unsorted_codes/sample.ipynb  
          inflating: sample/unsorted_codes/clip_segment.py  
          inflating: sample/unsorted_codes/sample_func.py  
          inflating: sample/unsorted_codes/sample_uncond.py  
           creating: sample/__pycache__/
          inflating: sample/__pycache__/colorizer.cpython-38.pyc  
          inflating: sample/__pycache__/utils_func.cpython-38.pyc  
           creating: unicolor/
           creating: unicolor/sample/
          inflating: unicolor/sample/color_table.yaml  
          inflating: unicolor/sample/sam_vit_h_4b8939.pth  
           creating: unicolor/sample/unsorted_codes/
          inflating: unicolor/sample/unsorted_codes/experiment.ipynb  
          inflating: unicolor/sample/unsorted_codes/sample.ipynb  
          inflating: unicolor/sample/unsorted_codes/html_images.py  
          inflating: unicolor/sample/unsorted_codes/ideep_stroke.py  
          inflating: unicolor/sample/unsorted_codes/sample_uncond.py  
          inflating: unicolor/sample/unsorted_codes/metrics.ipynb  
          inflating: unicolor/sample/unsorted_codes/test_heatmap.py  
          inflating: unicolor/sample/unsorted_codes/clip_score.py  
          inflating: unicolor/sample/unsorted_codes/ideep_stroke_file.py  
          inflating: unicolor/sample/unsorted_codes/ablation.ipynb  
          inflating: unicolor/sample/unsorted_codes/sample_exemplar.py  
          inflating: unicolor/sample/unsorted_codes/sample_func.py  
          inflating: unicolor/sample/unsorted_codes/clip_segment.py  
          inflating: unicolor/sample/unsorted_codes/generate_strokes.py  
          inflating: unicolor/sample/sample.ipynb  
           creating: unicolor/sample/.ipynb_checkpoints/
          inflating: unicolor/sample/.ipynb_checkpoints/utils_func-checkpoint.py  
          inflating: unicolor/sample/.ipynb_checkpoints/colorizer-checkpoint.py  
          inflating: unicolor/sample/.ipynb_checkpoints/sample-checkpoint.ipynb  
           creating: unicolor/sample/ImageMatch/
           creating: unicolor/sample/ImageMatch/tensorboardX/
          inflating: unicolor/sample/ImageMatch/tensorboardX/writer.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/graph_onnx.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/x2num.py  
           creating: unicolor/sample/ImageMatch/tensorboardX/src/
          inflating: unicolor/sample/ImageMatch/tensorboardX/src/plugin_pr_curve_pb2.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/src/graph_pb2.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/src/resource_handle_pb2.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/src/tensor_shape_pb2.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/src/summary_pb2.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/src/types_pb2.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/src/versions_pb2.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/src/event_pb2.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/src/node_def_pb2.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/src/attr_value_pb2.py  
         extracting: unicolor/sample/ImageMatch/tensorboardX/src/__init__.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/src/tensor_pb2.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/record_writer.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/crc32c.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/embedding.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/graph.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/event_file_writer.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/__init__.py  
          inflating: unicolor/sample/ImageMatch/tensorboardX/summary.py  
           creating: unicolor/sample/ImageMatch/checkpoints/
           creating: unicolor/sample/ImageMatch/checkpoints/video_moredata_l1/
          inflating: unicolor/sample/ImageMatch/checkpoints/video_moredata_l1/nonlocal_net_iter_76000.pth  
          inflating: unicolor/sample/ImageMatch/checkpoints/video_moredata_l1/discriminator_iter_76000.pth  
          inflating: unicolor/sample/ImageMatch/checkpoints/video_moredata_l1/colornet_iter_76000.pth  
           creating: unicolor/sample/ImageMatch/models/
          inflating: unicolor/sample/ImageMatch/models/ColorVidNet.py  
          inflating: unicolor/sample/ImageMatch/models/NonlocalNet.py  
          inflating: unicolor/sample/ImageMatch/models/FrameColor.py  
          inflating: unicolor/sample/ImageMatch/models/spectral_normalization.py  
          inflating: unicolor/sample/ImageMatch/models/ContextualLoss.py  
           creating: unicolor/sample/ImageMatch/models/__pycache__/
          inflating: unicolor/sample/ImageMatch/models/__pycache__/FrameColor.cpython-310.pyc  
          inflating: unicolor/sample/ImageMatch/models/__pycache__/ColorVidNet.cpython-38.pyc  
          inflating: unicolor/sample/ImageMatch/models/__pycache__/vgg19_gray.cpython-310.pyc  
          inflating: unicolor/sample/ImageMatch/models/__pycache__/FrameColor.cpython-38.pyc  
          inflating: unicolor/sample/ImageMatch/models/__pycache__/ColorVidNet.cpython-310.pyc  
          inflating: unicolor/sample/ImageMatch/models/__pycache__/NonlocalNet.cpython-310.pyc  
          inflating: unicolor/sample/ImageMatch/models/__pycache__/NonlocalNet.cpython-38.pyc  
          inflating: unicolor/sample/ImageMatch/models/__pycache__/vgg19_gray.cpython-38.pyc  
          inflating: unicolor/sample/ImageMatch/models/GAN_models.py  
          inflating: unicolor/sample/ImageMatch/models/vgg19_gray.py  
           creating: unicolor/sample/ImageMatch/utils/
          inflating: unicolor/sample/ImageMatch/utils/vgg_util.py  
          inflating: unicolor/sample/ImageMatch/utils/flowlib.py  
          inflating: unicolor/sample/ImageMatch/utils/util.py  
          inflating: unicolor/sample/ImageMatch/utils/util_tensorboard.py  
          inflating: unicolor/sample/ImageMatch/utils/warping.py  
          inflating: unicolor/sample/ImageMatch/utils/util_distortion.py  
           creating: unicolor/sample/ImageMatch/utils/__pycache__/
          inflating: unicolor/sample/ImageMatch/utils/__pycache__/util.cpython-310.pyc  
          inflating: unicolor/sample/ImageMatch/utils/__pycache__/__init__.cpython-310.pyc  
          inflating: unicolor/sample/ImageMatch/utils/__pycache__/util.cpython-38.pyc  
          inflating: unicolor/sample/ImageMatch/utils/__pycache__/__init__.cpython-38.pyc  
          inflating: unicolor/sample/ImageMatch/utils/__pycache__/util_distortion.cpython-38.pyc  
          inflating: unicolor/sample/ImageMatch/utils/__pycache__/util_distortion.cpython-310.pyc  
         extracting: unicolor/sample/ImageMatch/utils/__init__.py  
          inflating: unicolor/sample/ImageMatch/utils/tb_image_recorder.py  
           creating: unicolor/sample/ImageMatch/__pycache__/
          inflating: unicolor/sample/ImageMatch/__pycache__/warp.cpython-38.pyc  
          inflating: unicolor/sample/ImageMatch/__pycache__/warp.cpython-310.pyc  
           creating: unicolor/sample/ImageMatch/data/
          inflating: unicolor/sample/ImageMatch/data/vgg19_gray.pth  
          inflating: unicolor/sample/ImageMatch/data/vgg19_conv.pth  
           creating: unicolor/sample/ImageMatch/lib/
          inflating: unicolor/sample/ImageMatch/lib/FeatVGG.py  
          inflating: unicolor/sample/ImageMatch/lib/TestTransforms.py  
          inflating: unicolor/sample/ImageMatch/lib/TrainTransforms.py  
          inflating: unicolor/sample/ImageMatch/lib/functional.py  
          inflating: unicolor/sample/ImageMatch/lib/VGGFeatureLoss.py  
           creating: unicolor/sample/ImageMatch/lib/__pycache__/
          inflating: unicolor/sample/ImageMatch/lib/__pycache__/__init__.cpython-310.pyc  
          inflating: unicolor/sample/ImageMatch/lib/__pycache__/__init__.cpython-38.pyc  
          inflating: unicolor/sample/ImageMatch/lib/__pycache__/TestTransforms.cpython-310.pyc  
          inflating: unicolor/sample/ImageMatch/lib/__pycache__/functional.cpython-310.pyc  
          inflating: unicolor/sample/ImageMatch/lib/__pycache__/TestTransforms.cpython-38.pyc  
          inflating: unicolor/sample/ImageMatch/lib/__pycache__/functional.cpython-38.pyc  
          inflating: unicolor/sample/ImageMatch/lib/videoloader_imagenet.py  
           creating: unicolor/sample/ImageMatch/lib/.vscode/
         extracting: unicolor/sample/ImageMatch/lib/.vscode/settings.json  
         extracting: unicolor/sample/ImageMatch/lib/__init__.py  
          inflating: unicolor/sample/ImageMatch/lib/videoloader.py  
          inflating: unicolor/sample/ImageMatch/warp.py  
           creating: unicolor/sample/SAM/
          inflating: unicolor/sample/SAM/LICENSE  
           creating: unicolor/sample/SAM/segment_anything/
          inflating: unicolor/sample/SAM/segment_anything/predictor.py  
           creating: unicolor/sample/SAM/segment_anything/.ipynb_checkpoints/
          inflating: unicolor/sample/SAM/segment_anything/.ipynb_checkpoints/__init__-checkpoint.py  
          inflating: unicolor/sample/SAM/segment_anything/.ipynb_checkpoints/predictor-checkpoint.py  
          inflating: unicolor/sample/SAM/segment_anything/.ipynb_checkpoints/build_sam-checkpoint.py  
          inflating: unicolor/sample/SAM/segment_anything/.ipynb_checkpoints/automatic_mask_generator-checkpoint.py  
          inflating: unicolor/sample/SAM/segment_anything/amg.py  
          inflating: unicolor/sample/SAM/segment_anything/automatic_mask_generator.py  
           creating: unicolor/sample/SAM/segment_anything/modeling/
          inflating: unicolor/sample/SAM/segment_anything/modeling/prompt_encoder.py  
          inflating: unicolor/sample/SAM/segment_anything/modeling/image_encoder.py  
          inflating: unicolor/sample/SAM/segment_anything/modeling/mask_decoder.py  
          inflating: unicolor/sample/SAM/segment_anything/modeling/transformer.py  
          inflating: unicolor/sample/SAM/segment_anything/modeling/sam.py  
           creating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/transformer.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/common.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/image_encoder.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/sam.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/__init__.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/__init__.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/mask_decoder.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/image_encoder.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/sam.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/prompt_encoder.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/common.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/mask_decoder.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/prompt_encoder.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__pycache__/transformer.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/modeling/__init__.py  
          inflating: unicolor/sample/SAM/segment_anything/modeling/common.py  
           creating: unicolor/sample/SAM/segment_anything/utils/
           creating: unicolor/sample/SAM/segment_anything/utils/.ipynb_checkpoints/
          inflating: unicolor/sample/SAM/segment_anything/utils/.ipynb_checkpoints/transforms-checkpoint.py  
          inflating: unicolor/sample/SAM/segment_anything/utils/.ipynb_checkpoints/amg-checkpoint.py  
          inflating: unicolor/sample/SAM/segment_anything/utils/amg.py  
          inflating: unicolor/sample/SAM/segment_anything/utils/transforms.py  
           creating: unicolor/sample/SAM/segment_anything/utils/__pycache__/
          inflating: unicolor/sample/SAM/segment_anything/utils/__pycache__/__init__.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/utils/__pycache__/amg.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/utils/__pycache__/transforms.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/utils/__init__.py  
          inflating: unicolor/sample/SAM/segment_anything/utils/onnx.py  
          inflating: unicolor/sample/SAM/segment_anything/transforms.py  
          inflating: unicolor/sample/SAM/segment_anything/build_sam.py  
           creating: unicolor/sample/SAM/segment_anything/__pycache__/
          inflating: unicolor/sample/SAM/segment_anything/__pycache__/amg.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/__pycache__/__init__.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/__pycache__/build_sam.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/__pycache__/__init__.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/__pycache__/predictor.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/__pycache__/automatic_mask_generator.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/__pycache__/automatic_mask_generator.cpython-38.pyc  
          inflating: unicolor/sample/SAM/segment_anything/__pycache__/transforms.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/__pycache__/build_sam.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/__pycache__/predictor.cpython-310.pyc  
          inflating: unicolor/sample/SAM/segment_anything/__init__.py  
           creating: unicolor/sample/SAM/demo/
           creating: unicolor/sample/SAM/demo/src/
          inflating: unicolor/sample/SAM/demo/src/App.tsx  
          inflating: unicolor/sample/SAM/demo/src/index.tsx  
           creating: unicolor/sample/SAM/demo/src/components/
          inflating: unicolor/sample/SAM/demo/src/components/Tool.tsx  
           creating: unicolor/sample/SAM/demo/src/components/helpers/
          inflating: unicolor/sample/SAM/demo/src/components/helpers/Interfaces.tsx  
          inflating: unicolor/sample/SAM/demo/src/components/helpers/onnxModelAPI.tsx  
          inflating: unicolor/sample/SAM/demo/src/components/helpers/maskUtils.tsx  
          inflating: unicolor/sample/SAM/demo/src/components/helpers/scaleHelper.tsx  
           creating: unicolor/sample/SAM/demo/src/components/hooks/
          inflating: unicolor/sample/SAM/demo/src/components/hooks/context.tsx  
          inflating: unicolor/sample/SAM/demo/src/components/hooks/createContext.tsx  
          inflating: unicolor/sample/SAM/demo/src/components/Stage.tsx  
           creating: unicolor/sample/SAM/demo/src/assets/
          inflating: unicolor/sample/SAM/demo/src/assets/index.html  
           creating: unicolor/sample/SAM/demo/src/assets/data/
          inflating: unicolor/sample/SAM/demo/src/assets/data/dogs.jpg  
           creating: unicolor/sample/SAM/demo/src/assets/scss/
          inflating: unicolor/sample/SAM/demo/src/assets/scss/App.scss  
          inflating: unicolor/sample/SAM/demo/tailwind.config.js  
          inflating: unicolor/sample/SAM/demo/README.md  
          inflating: unicolor/sample/SAM/demo/package.json  
          inflating: unicolor/sample/SAM/demo/tsconfig.json  
          inflating: unicolor/sample/SAM/demo/postcss.config.js  
           creating: unicolor/sample/SAM/demo/configs/
           creating: unicolor/sample/SAM/demo/configs/webpack/
          inflating: unicolor/sample/SAM/demo/configs/webpack/common.js  
          inflating: unicolor/sample/SAM/demo/configs/webpack/prod.js  
          inflating: unicolor/sample/SAM/demo/configs/webpack/dev.js  
          inflating: unicolor/sample/SAM/CONTRIBUTING.md  
          inflating: unicolor/sample/SAM/.gitignore  
          inflating: unicolor/sample/SAM/linter.sh  
           creating: unicolor/sample/SAM/notebooks/
          inflating: unicolor/sample/SAM/notebooks/save_0656.png  
           creating: unicolor/sample/SAM/notebooks/.ipynb_checkpoints/
          inflating: unicolor/sample/SAM/notebooks/.ipynb_checkpoints/predictor_example-checkpoint.ipynb  
          inflating: unicolor/sample/SAM/notebooks/.ipynb_checkpoints/automatic_mask_generator_example-checkpoint.ipynb  
          inflating: unicolor/sample/SAM/notebooks/onnx_model_example.ipynb  
          inflating: unicolor/sample/SAM/notebooks/automatic_mask_generator_example.ipynb  
          inflating: unicolor/sample/SAM/notebooks/save.png  
          inflating: unicolor/sample/SAM/notebooks/save_0664.png  
          inflating: unicolor/sample/SAM/notebooks/predictor_example.ipynb  
          inflating: unicolor/sample/SAM/notebooks/save_0660.png  
          inflating: unicolor/sample/SAM/notebooks/part_1.png  
          inflating: unicolor/sample/SAM/notebooks/save_part.png  
           creating: unicolor/sample/SAM/notebooks/images/
          inflating: unicolor/sample/SAM/notebooks/images/truck.jpg  
          inflating: unicolor/sample/SAM/notebooks/images/000.png  
          inflating: unicolor/sample/SAM/notebooks/images/groceries.jpg  
          inflating: unicolor/sample/SAM/notebooks/images/dog.jpg  
          inflating: unicolor/sample/SAM/notebooks/part.png  
          inflating: unicolor/sample/SAM/README.md  
           creating: unicolor/sample/SAM/scripts/
          inflating: unicolor/sample/SAM/scripts/amg.py  
          inflating: unicolor/sample/SAM/scripts/export_onnx_model.py  
          inflating: unicolor/sample/SAM/nohup.out  
          inflating: unicolor/sample/SAM/setup.py  
           creating: unicolor/sample/SAM/.git/
           creating: unicolor/sample/SAM/.git/branches/
          inflating: unicolor/sample/SAM/.git/description  
           creating: unicolor/sample/SAM/.git/info/
          inflating: unicolor/sample/SAM/.git/info/exclude  
          inflating: unicolor/sample/SAM/.git/config  
           creating: unicolor/sample/SAM/.git/logs/
           creating: unicolor/sample/SAM/.git/logs/refs/
           creating: unicolor/sample/SAM/.git/logs/refs/remotes/
           creating: unicolor/sample/SAM/.git/logs/refs/remotes/origin/
          inflating: unicolor/sample/SAM/.git/logs/refs/remotes/origin/HEAD  
           creating: unicolor/sample/SAM/.git/logs/refs/heads/
          inflating: unicolor/sample/SAM/.git/logs/refs/heads/main  
          inflating: unicolor/sample/SAM/.git/logs/HEAD  
           creating: unicolor/sample/SAM/.git/hooks/
          inflating: unicolor/sample/SAM/.git/hooks/prepare-commit-msg.sample  
          inflating: unicolor/sample/SAM/.git/hooks/pre-push.sample  
          inflating: unicolor/sample/SAM/.git/hooks/pre-receive.sample  
          inflating: unicolor/sample/SAM/.git/hooks/applypatch-msg.sample  
          inflating: unicolor/sample/SAM/.git/hooks/update.sample  
          inflating: unicolor/sample/SAM/.git/hooks/commit-msg.sample  
          inflating: unicolor/sample/SAM/.git/hooks/pre-rebase.sample  
          inflating: unicolor/sample/SAM/.git/hooks/pre-applypatch.sample  
          inflating: unicolor/sample/SAM/.git/hooks/post-update.sample  
          inflating: unicolor/sample/SAM/.git/hooks/fsmonitor-watchman.sample  
          inflating: unicolor/sample/SAM/.git/hooks/pre-merge-commit.sample  
          inflating: unicolor/sample/SAM/.git/hooks/pre-commit.sample  
           creating: unicolor/sample/SAM/.git/objects/
           creating: unicolor/sample/SAM/.git/objects/info/
           creating: unicolor/sample/SAM/.git/objects/pack/
          inflating: unicolor/sample/SAM/.git/objects/pack/pack-6bddc3bea3b5846071a7a94ed6811aed34aa66d9.pack  
          inflating: unicolor/sample/SAM/.git/objects/pack/pack-6bddc3bea3b5846071a7a94ed6811aed34aa66d9.idx  
          inflating: unicolor/sample/SAM/.git/index  
          inflating: unicolor/sample/SAM/.git/packed-refs  
           creating: unicolor/sample/SAM/.git/refs/
           creating: unicolor/sample/SAM/.git/refs/remotes/
           creating: unicolor/sample/SAM/.git/refs/remotes/origin/
         extracting: unicolor/sample/SAM/.git/refs/remotes/origin/HEAD  
           creating: unicolor/sample/SAM/.git/refs/tags/
           creating: unicolor/sample/SAM/.git/refs/heads/
         extracting: unicolor/sample/SAM/.git/refs/heads/main  
         extracting: unicolor/sample/SAM/.git/HEAD  
          inflating: unicolor/sample/SAM/.flake8  
          inflating: unicolor/sample/SAM/CODE_OF_CONDUCT.md  
           creating: unicolor/sample/SAM/assets/
          inflating: unicolor/sample/SAM/assets/masks2.jpg  
          inflating: unicolor/sample/SAM/assets/masks1.png  
          inflating: unicolor/sample/SAM/assets/notebook1.png  
          inflating: unicolor/sample/SAM/assets/minidemo.gif  
          inflating: unicolor/sample/SAM/assets/model_diagram.png  
          inflating: unicolor/sample/SAM/assets/notebook2.png  
          inflating: unicolor/sample/SAM/setup.cfg  
          inflating: unicolor/sample/utils_func.py  
          inflating: unicolor/sample/colorizer.py  
           creating: unicolor/sample/__pycache__/
          inflating: unicolor/sample/__pycache__/utils_func.cpython-310.pyc  
          inflating: unicolor/sample/__pycache__/utils_func.cpython-38.pyc  
          inflating: unicolor/sample/__pycache__/colorizer.cpython-38.pyc  
          inflating: unicolor/sample/__pycache__/colorizer.cpython-310.pyc  
           creating: unicolor/sample/images/
          inflating: unicolor/sample/images/1.jpg  
          inflating: unicolor/sample/images/1_exp.jpg  
          inflating: unicolor/sample/images/12.jpg  
          inflating: unicolor/sample/images/10.jpg  
           creating: unicolor/framework/
           creating: unicolor/framework/datasets/
          inflating: unicolor/framework/datasets/image_dataset.py  
          inflating: unicolor/framework/datasets/data_prepare.py  
          inflating: unicolor/framework/datasets/mask.py  
          inflating: unicolor/framework/datasets/utils.py  
          inflating: unicolor/framework/train_vqgan.py  
           creating: unicolor/framework/chroma_vqgan/
           creating: unicolor/framework/chroma_vqgan/models/
          inflating: unicolor/framework/chroma_vqgan/models/lpips.py  
          inflating: unicolor/framework/chroma_vqgan/models/util.py  
          inflating: unicolor/framework/chroma_vqgan/models/module.py  
          inflating: unicolor/framework/chroma_vqgan/models/vqperceptual.py  
          inflating: unicolor/framework/chroma_vqgan/models/discriminator.py  
           creating: unicolor/framework/chroma_vqgan/models/__pycache__/
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/discriminator.cpython-38.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/vqgan.cpython-38.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/util.cpython-310.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/ops.cpython-310.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/lpips.cpython-310.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/module.cpython-38.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/quantize.cpython-38.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/util.cpython-38.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/quantize.cpython-310.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/ops.cpython-38.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/vqgan.cpython-310.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/lpips.cpython-38.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/vqperceptual.cpython-38.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/module.cpython-310.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/vqperceptual.cpython-310.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/__pycache__/discriminator.cpython-310.pyc  
          inflating: unicolor/framework/chroma_vqgan/models/vqgan.py  
          inflating: unicolor/framework/chroma_vqgan/models/ops.py  
          inflating: unicolor/framework/chroma_vqgan/models/quantize.py  
           creating: unicolor/framework/chroma_vqgan/configs/
          inflating: unicolor/framework/chroma_vqgan/configs/coco.yaml  
          inflating: unicolor/framework/chroma_vqgan/configs/testing.yaml  
          inflating: unicolor/framework/chroma_vqgan/configs/imagenet.yaml  
           creating: unicolor/framework/checkpoints/
           creating: unicolor/framework/checkpoints/unicolor_mscoco/
          inflating: unicolor/framework/checkpoints/unicolor_mscoco/config.yaml  
          inflating: unicolor/framework/checkpoints/unicolor_mscoco/mscoco_step259999.ckpt  
           creating: unicolor/framework/checkpoints/unicolor_imagenet/
          inflating: unicolor/framework/checkpoints/unicolor_imagenet/imagenet_step142124.ckpt  
          inflating: unicolor/framework/checkpoints/unicolor_imagenet/config.yaml  
           creating: unicolor/framework/hybrid_tran/
           creating: unicolor/framework/hybrid_tran/models/
          inflating: unicolor/framework/hybrid_tran/models/transformer.py  
          inflating: unicolor/framework/hybrid_tran/models/colorization.py  
           creating: unicolor/framework/hybrid_tran/models/__pycache__/
          inflating: unicolor/framework/hybrid_tran/models/__pycache__/vqgan.cpython-38.pyc  
          inflating: unicolor/framework/hybrid_tran/models/__pycache__/transformer.cpython-38.pyc  
          inflating: unicolor/framework/hybrid_tran/models/__pycache__/colorization.cpython-310.pyc  
          inflating: unicolor/framework/hybrid_tran/models/__pycache__/vqgan.cpython-310.pyc  
          inflating: unicolor/framework/hybrid_tran/models/__pycache__/colorization.cpython-38.pyc  
          inflating: unicolor/framework/hybrid_tran/models/__pycache__/transformer.cpython-310.pyc  
          inflating: unicolor/framework/hybrid_tran/models/vqgan.py  
           creating: unicolor/framework/hybrid_tran/utils/
           creating: unicolor/framework/hybrid_tran/utils/__pycache__/
          inflating: unicolor/framework/hybrid_tran/utils/__pycache__/ops.cpython-310.pyc  
          inflating: unicolor/framework/hybrid_tran/utils/__pycache__/ops.cpython-38.pyc  
          inflating: unicolor/framework/hybrid_tran/utils/ops.py  
           creating: unicolor/framework/hybrid_tran/configs/
          inflating: unicolor/framework/hybrid_tran/configs/coco.yaml  
          inflating: unicolor/framework/hybrid_tran/configs/testing.yaml  
          inflating: unicolor/framework/train_tran.py  
          ......
          Looking in indexes: https://mirrors.cloud.aliyuncs.com/pypi/simple
        Collecting nltk
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/a6/0a/0d20d2c0f16be91b9fa32a77b76c60f9baf6eba419e5ef5deca17af9c582/nltk-3.8.1-py3-none-any.whl (1.5 MB)
             ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 16.7 MB/s eta 0:00:00a 0:00:01
        Requirement already satisfied: regex>=2021.8.3 in /usr/local/lib/python3.10/dist-packages (from nltk) (2023.3.23)
        Requirement already satisfied: click in /usr/local/lib/python3.10/dist-packages (from nltk) (8.1.3)
        Collecting joblib
          Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/10/40/d551139c85db202f1f384ba8bcf96aca2f329440a844f924c8a0040b6d02/joblib-1.3.2-py3-none-any.whl (302 kB)
             ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 302.2/302.2 kB 72.8 MB/s eta 0:00:00
        Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from nltk) (4.65.0)
        Installing collected packages: joblib, nltk
        Successfully installed joblib-1.3.2 nltk-3.8.1
        WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
        
        [notice] A new release of pip is available: 23.0.1 -> 23.2.1
        [notice] To update, run: python3 -m pip install --upgrade pip

      2. Load the model files and the image to be processed. Run the cell commands in order to obtain the following results.

        Click here to view the run results

        image.png 594e606afb554ba38ab8bd8bf21822a5.png
      3. Select the target color and the coordinates of the point to modify.

        Click here to view the run results

        image.png
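A selection step like this boils down to a list of (click point, target color) pairs. Below is a minimal NumPy sketch with hypothetical coordinates and colors (the real values come from the notebook's interactive selection, not from this code):

```python
import numpy as np

# Hypothetical picks for illustration; in the notebook these come from
# clicking on the image.
picks = [
    {"point": (320, 240), "color": (180, 120, 60)},  # e.g. a skin tone
    {"point": (100, 400), "color": (40, 90, 160)},   # e.g. clothing
]

# SAM-style prompt arrays: one (x, y) row per click, label 1 = foreground.
point_coords = np.array([p["point"] for p in picks])
point_labels = np.ones(len(picks), dtype=np.int64)
print(point_coords.shape)  # (2, 2)
```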
      4. Use SAM to expand the colorization region.

        Click here to view the run results

        image.png image.png image.png
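SAM takes each clicked point as a prompt and predicts a mask covering the surrounding region. As a rough illustration of that point-to-region expansion (a naive flood fill over similar gray values, not the SAM model itself):

```python
import numpy as np
from collections import deque

def grow_region(gray, seed, tol=10):
    """Toy stand-in for SAM's point-prompted mask: expand from `seed`
    over 4-connected pixels whose gray value is within `tol` of the
    seed pixel's value."""
    h, w = gray.shape
    sy, sx = seed
    ref = int(gray[sy, sx])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
            continue
        if abs(int(gray[y, x]) - ref) > tol:
            continue
        mask[y, x] = True
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200            # a bright square on a dark background
mask = grow_region(img, (3, 3))  # click inside the square
print(mask.sum())  # → 16, the whole 4x4 square
```

The real SAM predictor does far more (it segments objects, not just flat regions), but the interface is the same shape: one seed point in, one boolean mask out.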
      5. Colorize the specified region.

        Click here to view the run results

        image.png
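Once a mask is available, colorizing the region amounts to tinting the masked pixels while keeping their luminance. A minimal NumPy sketch (an illustration of the idea only, not the colorization model the notebook uses):

```python
import numpy as np

def colorize_region(gray, mask, rgb):
    """Tint the masked region of a grayscale image with `rgb`, scaling
    each channel by the local luminance so shading is preserved."""
    out = np.stack([gray] * 3, axis=-1).astype(np.float32)
    tint = np.array(rgb, dtype=np.float32) / 255.0
    out[mask] = out[mask] * tint  # per-channel scaling by luminance
    return out.clip(0, 255).astype(np.uint8)

gray = np.full((4, 4), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2] = True                        # colorize the top half only
out = colorize_region(gray, mask, (255, 128, 0))
print(out[0, 0])  # masked pixel takes on the orange tint
print(out[3, 3])  # unmasked pixel stays gray
```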
    5. Scratch cleanup: detect scratch locations, or mark them manually, and fill in the damaged image regions.

      1. Download the code and pretrained files, and install the ModelScope environment. After the download and extraction complete, you can view the source code of the LaMa algorithm in the ./inpaint folder.

        Click here to view the run results

        Looking in indexes: https://mirrors.cloud.aliyuncs.com/pypi/simple
        Requirement already satisfied: modelscope in /usr/local/lib/python3.10/dist-packages (1.8.4)
        Requirement already satisfied: simplejson>=3.3.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (3.19.1)
        Requirement already satisfied: gast>=0.2.2 in /usr/local/lib/python3.10/dist-packages (from modelscope) (0.5.4)
        Requirement already satisfied: requests>=2.25 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.25.1)
        Requirement already satisfied: oss2 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.18.1)
        Requirement already satisfied: yapf in /usr/local/lib/python3.10/dist-packages (from modelscope) (0.32.0)
        Requirement already satisfied: tqdm>=4.64.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (4.65.0)
        Requirement already satisfied: urllib3>=1.26 in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.26.15)
        Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.5.3)
        Requirement already satisfied: sortedcontainers>=1.5.9 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.4.0)
        Requirement already satisfied: filelock>=3.3.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (3.10.7)
        Requirement already satisfied: addict in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.4.0)
        Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from modelscope) (59.6.0)
        Requirement already satisfied: einops in /usr/local/lib/python3.10/dist-packages (from modelscope) (0.4.1)
        Requirement already satisfied: Pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (9.4.0)
        Requirement already satisfied: attrs in /usr/local/lib/python3.10/dist-packages (from modelscope) (22.2.0)
        Requirement already satisfied: pyarrow!=9.0.0,>=6.0.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (11.0.0)
        Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.8.2)
        Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.23.3)
        Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from modelscope) (1.10.1)
        Requirement already satisfied: datasets<=2.13.0,>=2.8.0 in /usr/local/lib/python3.10/dist-packages (from modelscope) (2.11.0)
        Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from modelscope) (6.0)
        Requirement already satisfied: dill<0.3.7,>=0.3.0 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.3.6)
        Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (3.8.4)
        Requirement already satisfied: huggingface-hub<1.0.0,>=0.11.0 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.13.3)
        Requirement already satisfied: xxhash in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (3.2.0)
        Requirement already satisfied: multiprocess in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.70.14)
        Requirement already satisfied: fsspec[http]>=2021.11.1 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (2023.3.0)
        Requirement already satisfied: responses<0.19 in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (0.18.0)
        Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from datasets<=2.13.0,>=2.8.0->modelscope) (23.0)
        Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.1->modelscope) (1.16.0)
        Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.25->modelscope) (2.10)
        Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests>=2.25->modelscope) (2022.12.7)
        Requirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.10/dist-packages (from requests>=2.25->modelscope) (4.0.0)
        Requirement already satisfied: pycryptodome>=3.4.7 in /usr/local/lib/python3.10/dist-packages (from oss2->modelscope) (3.17)
        Requirement already satisfied: crcmod>=1.7 in /usr/local/lib/python3.10/dist-packages (from oss2->modelscope) (1.7)
        Requirement already satisfied: aliyun-python-sdk-kms>=2.4.1 in /usr/local/lib/python3.10/dist-packages (from oss2->modelscope) (2.16.1)
        Requirement already satisfied: aliyun-python-sdk-core>=2.13.12 in /usr/local/lib/python3.10/dist-packages (from oss2->modelscope) (2.13.36)
        Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->modelscope) (2023.3)
        Requirement already satisfied: cryptography>=2.6.0 in /usr/local/lib/python3.10/dist-packages (from aliyun-python-sdk-core>=2.13.12->oss2->modelscope) (40.0.1)
        Requirement already satisfied: jmespath<1.0.0,>=0.9.3 in /usr/local/lib/python3.10/dist-packages (from aliyun-python-sdk-core>=2.13.12->oss2->modelscope) (0.10.0)
        Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (6.0.4)
        Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (3.1.0)
        Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (1.3.1)
        Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (1.3.3)
        Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (4.0.2)
        Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets<=2.13.0,>=2.8.0->modelscope) (1.8.2)
        Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0.0,>=0.11.0->datasets<=2.13.0,>=2.8.0->modelscope) (4.5.0)
        Requirement already satisfied: cffi>=1.12 in /usr/local/lib/python3.10/dist-packages (from cryptography>=2.6.0->aliyun-python-sdk-core>=2.13.12->oss2->modelscope) (1.15.1)
        Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.12->cryptography>=2.6.0->aliyun-python-sdk-core>=2.13.12->oss2->modelscope) (2.21)
        WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
        
        [notice] A new release of pip is available: 23.0.1 -> 23.2.1
        [notice] To update, run: python3 -m pip install --upgrade pip
        http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/inpaint.zip
        cn-hangzhou
        --2023-09-05 03:10:46--  http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/inpaint.zip
        Resolving pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.45, 100.118.28.50, 100.118.28.49, ...
        Connecting to pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.45|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 603971848 (576M) [application/zip]
        Saving to: ‘inpaint.zip’
        
        inpaint.zip         100%[===================>] 575.99M  10.0MB/s    in 56s     
        
        2023-09-05 03:11:42 (10.3 MB/s) - ‘inpaint.zip’ saved [603971848/603971848]
        
        Archive:  inpaint.zip
           creating: inpaint/
           creating: inpaint/.ipynb_checkpoints/
          inflating: inpaint/.ipynb_checkpoints/demo-checkpoint.py  
          inflating: inpaint/demo.py         
           creating: inpaint/pretrain/
           creating: inpaint/pretrain/cv_fft_inpainting_lama/
          inflating: inpaint/pretrain/cv_fft_inpainting_lama/resnet50-imagenet.pth  
          inflating: inpaint/pretrain/cv_fft_inpainting_lama/.mdl  
          inflating: inpaint/pretrain/cv_fft_inpainting_lama/pytorch_model.pt  
           creating: inpaint/pretrain/cv_fft_inpainting_lama/ade20k/
           creating: inpaint/pretrain/cv_fft_inpainting_lama/ade20k/ade20k-resnet50dilated-ppm_deepsup/
          inflating: inpaint/pretrain/cv_fft_inpainting_lama/ade20k/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth  
          inflating: inpaint/pretrain/cv_fft_inpainting_lama/README.md  
          inflating: inpaint/pretrain/cv_fft_inpainting_lama/configuration.json  
           creating: inpaint/pretrain/cv_fft_inpainting_lama/data/
          inflating: inpaint/pretrain/cv_fft_inpainting_lama/data/1.gif  
          inflating: inpaint/pretrain/cv_fft_inpainting_lama/data/2.gif  
          inflating: inpaint/pretrain/cv_fft_inpainting_lama/.msc  
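LaMa fills the masked regions with plausible image content. As a toy illustration of the inpainting task (not the LaMa model that demo.py actually loads), a naive diffusion-style fill can be sketched as:

```python
import numpy as np

def naive_inpaint(img, mask, iters=50):
    """Toy scratch filling: repeatedly replace masked pixels with the
    mean of their 4-neighbours, letting surrounding values diffuse in.
    For illustration only; LaMa uses a learned FFT-based network."""
    out = img.astype(np.float32).copy()
    for _ in range(iters):
        up = np.roll(out, -1, axis=0)
        down = np.roll(out, 1, axis=0)
        left = np.roll(out, -1, axis=1)
        right = np.roll(out, 1, axis=1)
        out[mask] = ((up + down + left + right) / 4.0)[mask]
    return out

img = np.full((5, 5), 100.0)
img[2, 2] = 0.0                 # a "scratch" pixel
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True               # mark only the scratch for filling
fixed = naive_inpaint(img, mask)
print(round(float(fixed[2, 2])))  # → 100, filled from its neighbours
```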
      2. Launch the UI.

        Click here to view the run results

        2023-09-05 03:12:31,086 - modelscope - INFO - PyTorch version 1.13.1+cu117 Found.
        2023-09-05 03:12:31,087 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
        2023-09-05 03:12:31,118 - modelscope - INFO - Loading done! Current index file version is 1.8.4, with md5 80fa9349fc3e7b04fcfad511918062b1 and a total number of 902 components indexed
        2023-09-05 03:12:31,921 - modelscope - INFO - initiate model from /mnt/workspace/inpaint/pretrain/cv_fft_inpainting_lama
        2023-09-05 03:12:31,921 - modelscope - INFO - initiate model from location /mnt/workspace/inpaint/pretrain/cv_fft_inpainting_lama.
        2023-09-05 03:12:31,922 - modelscope - INFO - initialize model from /mnt/workspace/inpaint/pretrain/cv_fft_inpainting_lama
        2023-09-05 03:12:32,123 - modelscope - INFO - BaseInpaintingTrainingModule init called, predict_only is False
        Loading weights for net_encoder
        2023-09-05 03:12:33,068 - modelscope - INFO - BaseInpaintingTrainingModule init done
        2023-09-05 03:12:33,068 - modelscope - INFO - loading pretrained model from /mnt/workspace/inpaint/pretrain/cv_fft_inpainting_lama/pytorch_model.pt
        2023-09-05 03:12:33,319 - modelscope - WARNING - No preprocessor field found in cfg.
        2023-09-05 03:12:33,319 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
        2023-09-05 03:12:33,319 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/mnt/workspace/inpaint/pretrain/cv_fft_inpainting_lama'}. trying to build by task and model information.
        2023-09-05 03:12:33,319 - modelscope - WARNING - No preprocessor key ('FFTInpainting', 'image-inpainting') found in PREPROCESSOR_MAP, skip building preprocessor.
        2023-09-05 03:12:33,320 - modelscope - INFO - loading model from dir /mnt/workspace/inpaint/pretrain/cv_fft_inpainting_lama
        2023-09-05 03:12:33,320 - modelscope - INFO - BaseInpaintingTrainingModule init called, predict_only is True
        2023-09-05 03:12:33,708 - modelscope - INFO - BaseInpaintingTrainingModule init done
        2023-09-05 03:12:33,709 - modelscope - INFO - loading pretrained model from /mnt/workspace/inpaint/pretrain/cv_fft_inpainting_lama/pytorch_model.pt
        2023-09-05 03:12:34,498 - modelscope - INFO - loading model done, refinement is set to False
        /usr/local/lib/python3.10/dist-packages/gradio/layouts.py:75: UserWarning: mobile_collapse is no longer supported.
          warnings.warn("mobile_collapse is no longer supported.")
        /usr/local/lib/python3.10/dist-packages/gradio/components.py:122: UserWarning: 'rounded' styling is no longer supported. To round adjacent components together, place them in a Column(variant='box').
          warnings.warn(
        /usr/local/lib/python3.10/dist-packages/gradio/components.py:131: UserWarning: 'margin' styling is no longer supported. To place adjacent components together without margin, place them in a Column(variant='box').
          warnings.warn(
        Running on local URL:  http://127.0.0.1:7860
        
        To create a public link, set `share=True` in `launch()`.
      3. In the returned run output, click the URL link (http://127.0.0.1:7860) to open the WebUI page. On that page, follow the on-screen prompts to repair scratches on old photos.

      Note: Because http://127.0.0.1:7860 is an internal address, the WebUI page can only be opened by clicking the link from within the current DSW instance; it cannot be accessed directly from an external browser.

      Click here to view the run results

      image.png

    Repair images with SD WebUI

    SD WebUI is one of the most popular AI painting tools. Beyond image generation, it integrates a rich set of super-resolution models that can be used for visual image restoration. Its interactive controls let you carry out restoration tasks with more precision and convenience.

    To spare you the difficulty of downloading models and plugins, PAI preloads the restoration-related plugins and models for this tutorial, so you can try out SD WebUI's old-photo restoration features more easily.

    If you do not need to use or modify the algorithms' source code, you can go directly to the SD WebUI section and launch the SD WebUI page to repair images. The steps are as follows.

    1. Download the SD WebUI code and built-in plugins.

      Click here to view the run results

      http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/sdwebui.zip
      cn-hangzhou
      --2023-09-05 07:22:40--  http://pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com/aigc-data/restoration/repo/sdwebui.zip
      Resolving pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)... 100.118.28.45, 100.118.28.50, 100.118.28.44, ...
      Connecting to pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com (pai-vision-data-hz2.oss-cn-hangzhou-internal.aliyuncs.com)|100.118.28.45|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 15945784501 (15G) [application/zip]
      Saving to: ‘sdwebui.zip’
      
      sdwebui.zip          99%[==================> ]  14.82G  20.8MB/s    in 12m 22s 
      
      
      Cannot write to ‘sdwebui.zip’ (No space left on device).
      Archive:  sdwebui.zip
        End-of-central-directory signature not found.  Either this file is not
        a zipfile, or it constitutes one disk of a multi-part archive.  In the
        latter case the central directory and zipfile comment will be found on
        the last disk(s) of this archive.
      unzip:  cannot find zipfile directory in one of sdwebui.zip or
              sdwebui.zip.zip, and cannot find sdwebui.zip.ZIP, period.
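The sample log above ends with the download failing ("No space left on device") and unzip then failing on the truncated archive. Before fetching the roughly 15 GB sdwebui.zip, you can check the instance's free disk space; a minimal sketch, where the 32 GiB threshold (archive plus extracted contents) is an assumption:

```python
import shutil

def has_free_space(path: str, required_gib: float) -> bool:
    """Return True if the filesystem containing `path` has at least
    `required_gib` GiB free -- a rough pre-download check."""
    return shutil.disk_usage(path).free >= required_gib * 1024 ** 3

# sdwebui.zip is about 15 GB compressed, and extracting it roughly
# doubles the space needed, so check for generous headroom first.
if not has_free_space(".", 32):
    print("Not enough disk space: free some space or expand the DSW disk.")
```

If the check fails, delete the partial sdwebui.zip and large unused files, or recreate the DSW instance with a larger disk, before retrying the download.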
    2. Download the model files required by the extensions.

      Click here to view the run result

      --2023-09-05 07:35:39--  https://pai-vision-data-sh.oss-cn-shanghai.aliyuncs.com/aigc-data/restoration/models/resnet101-63fe2227.pth
      Resolving pai-vision-data-sh.oss-cn-shanghai.aliyuncs.com (pai-vision-data-sh.oss-cn-shanghai.aliyuncs.com)... 106.14.228.10
      Connecting to pai-vision-data-sh.oss-cn-shanghai.aliyuncs.com (pai-vision-data-sh.oss-cn-shanghai.aliyuncs.com)|106.14.228.10|:443... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 178793939 (171M) [application/octet-stream]
      Saving to: ‘/root/.cache/torch/hub/checkpoints/resnet101-63fe2227.pth.1’
      
      resnet101-63fe2227. 100%[===================>] 170.51M  14.8MB/s    in 12s     
      
      2023-09-05 07:35:52 (14.0 MB/s) - ‘/root/.cache/torch/hub/checkpoints/resnet101-63fe2227.pth.1’ saved [178793939/178793939]
      
      --2023-09-05 07:35:52--  https://pai-vision-data-sh.oss-cn-shanghai.aliyuncs.com/aigc-data/restoration/models/resnet34-b627a593.pth
      Resolving pai-vision-data-sh.oss-cn-shanghai.aliyuncs.com (pai-vision-data-sh.oss-cn-shanghai.aliyuncs.com)... 106.14.228.10
      Connecting to pai-vision-data-sh.oss-cn-shanghai.aliyuncs.com (pai-vision-data-sh.oss-cn-shanghai.aliyuncs.com)|106.14.228.10|:443... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 87319819 (83M) [application/octet-stream]
      Saving to: ‘/root/.cache/torch/hub/checkpoints/resnet34-b627a593.pth’
      
      resnet34-b627a593.p 100%[===================>]  83.27M  19.6MB/s    in 4.4s    
      
      2023-09-05 07:35:56 (18.9 MB/s) - ‘/root/.cache/torch/hub/checkpoints/resnet34-b627a593.pth’ saved [87319819/87319819]
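If a download is interrupted, the saved checkpoint will not match the size wget reports. The sketch below compares file sizes against the `Length:` values shown in the logs above to catch truncated files (the paths and byte counts are copied from those logs):

```python
import os

# Expected sizes in bytes, taken from the `Length:` lines in the wget logs.
EXPECTED_SIZES = {
    "/root/.cache/torch/hub/checkpoints/resnet101-63fe2227.pth": 178793939,
    "/root/.cache/torch/hub/checkpoints/resnet34-b627a593.pth": 87319819,
}

def verify_downloads(expected: dict) -> list:
    """Return the paths that are missing or whose on-disk size does not
    match, which usually indicates a truncated or failed download."""
    bad = []
    for path, size in expected.items():
        if not os.path.isfile(path) or os.path.getsize(path) != size:
            bad.append(path)
    return bad
```

Re-run the download cell for any path that `verify_downloads(EXPECTED_SIZES)` reports.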
      
    3. Launch the SD WebUI application.

      Click here to view the run result

      Unable to symlink '/usr/bin/python' to '/mnt/workspace/stable-diffusion-webui/venv/bin/python'
      
      ################################################################
      Install script for stable-diffusion + Web UI
      Tested on Debian 11 (Bullseye)
      ################################################################
      
      ################################################################
      Running on root user
      ################################################################
      
      ################################################################
      Repo already cloned, using it as install directory
      ################################################################
      
      ################################################################
      Create and activate python venv
      ################################################################
      
      ################################################################
      Launching launch.py...
      ################################################################
      Cannot locate TCMalloc (improves CPU memory usage)
      Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
      Version: v1.5.1
      Commit hash: 68f336bd994bed5442ad95bad6b6ad5564a5409a
      Installing fastai==1.0.60 for DeOldify extension
      Installing ffmpeg-python for DeOldify extension
      Installing yt-dlp for DeOldify extension
      Installing opencv-python for DeOldify extension
      Installing Pillow for DeOldify extension
      
      
      Launching Web UI with arguments: --no-download-sd-model --xformers --gradio-queue --disable-safe-unpickle
      No SDP backend available, likely because you are running in pytorch versions < 2.0. In fact, you are using PyTorch 1.13.1+cu117. You might want to consider upgrading.
      ==============================================================================
      You are running torch 1.13.1+cu117.
      The program is tested to work with torch 2.0.0.
      To reinstall the desired version, run with commandline flag --reinstall-torch.
      Beware that this will cause a lot of large files to be downloaded, as well as
      there are reports of issues with training tab on the latest version.
      
      Use --skip-version-check commandline argument to disable this check.
      ==============================================================================
      =================================================================================
      You are running xformers 0.0.16rc425.
      The program is tested to work with xformers 0.0.20.
      To reinstall the desired version, run with commandline flag --reinstall-xformers.
      
      Use --skip-version-check commandline argument to disable this check.
      =================================================================================
      Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
      2023-09-05 07:38:10,384 - ControlNet - INFO - ControlNet v1.1.238
      ControlNet preprocessor location: /mnt/workspace/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
      2023-09-05 07:38:10,554 - ControlNet - INFO - ControlNet v1.1.238
      Loading weights [e9d3cedc4b] from /mnt/workspace/stable-diffusion-webui/models/Stable-diffusion/realisticVisionV40_v40VAE.safetensors
      Running on local URL:  http://127.0.0.1:7860
      
      To create a public link, set `share=True` in `launch()`.
      Startup time: 28.1s (launcher: 17.7s, import torch: 3.3s, import gradio: 1.6s, setup paths: 1.7s, other imports: 0.9s, setup codeformer: 0.1s, load scripts: 1.9s, create ui: 0.6s, gradio launch: 0.3s).
      Creating model from config: /mnt/workspace/stable-diffusion-webui/configs/v1-inference.yaml
      LatentDiffusion: Running in eps-prediction mode
      DiffusionWrapper has 859.52 M params.
      Applying attention optimization: xformers... done.
      Model loaded in 13.3s (load weights from disk: 1.2s, create model: 0.7s, apply weights to model: 10.6s, apply half(): 0.3s, move model to device: 0.4s).
    4. After the WebUI has finished starting in the previous step, click the URL link (http://127.0.0.1:7860) in the returned run details to open the WebUI page. You can then run model inference on that page.

Step 3: Repair the images

If you followed the source-code-based restoration steps, the images have already been repaired; go to the specified directory to view the restored results.

If you followed the SD WebUI-based steps, you still need to work on the SD WebUI page to finish restoring the old photos. PAI provides the three most common approaches:

Method 1: Extras

In the Extras tab you can apply super-resolution, face enhancement, and colorization to an image. The parameter values below are examples only; adjust them to fit your actual scenario.image.png

  1. Click Extras to switch to the Extras tab.

  2. Follow the on-page instructions and upload the image to be repaired on the Single Image tab.

  3. Choose the scale factor and set the upscalers.

    • Resize: set to 4.

    • Upscaler 1: choose any option other than LDSR.

    • Upscaler 2 (optional).

  4. (Optional) Set the face-enhancement algorithms and their weights.

    • GFPGAN visibility: set to 1.

    • CodeFormer visibility: set to 0.665.

    • CodeFormer weight: set to 0.507.

  5. (Optional) Select DeOldify to colorize the image, select Artistic to switch the colorization model, and adjust the weight.

  6. Click Generate to produce the restored image in the panel on the right. The generated image is saved to the ./stable-diffusion-webui/outputs/extras-images folder of the DSW instance. On the DSW Notebook tab, right-click the image file and click Download to save it to your local machine.
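If SD WebUI is launched with the `--api` flag (not present in this tutorial's launch arguments shown above, so treat API availability as an assumption), the same Extras settings can also be applied programmatically through the `/sdapi/v1/extra-single-image` endpoint. A sketch that builds the request body with the parameter values used above; the upscaler name and file path are illustrative:

```python
import base64

def build_extras_payload(image_path: str) -> dict:
    """Build the JSON body for SD WebUI's /sdapi/v1/extra-single-image
    endpoint, mirroring the Extras-tab settings used in this tutorial."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "image": image_b64,
        "upscaling_resize": 4,            # Resize: 4
        "upscaler_1": "R-ESRGAN 4x+",     # any upscaler other than LDSR
        "gfpgan_visibility": 1.0,         # GFPGAN visibility
        "codeformer_visibility": 0.665,   # CodeFormer visibility
        "codeformer_weight": 0.507,       # CodeFormer weight
    }

# To actually send the request (server must be running with --api):
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/extra-single-image",
#                   json=build_extras_payload("input/old_photo.png"))
# restored_b64 = r.json()["image"]
```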

Method 2: StableSR extension

The parameter values below are examples only; adjust them to fit your actual scenario.

  1. Switch the model to SD2.1 and go to the img2img tab.image.png

  2. Follow the on-page instructions and upload the image to be repaired on the img2img tab.image.png

  3. Set the parameters as follows.image.png

    • If you get an out-of-GPU-memory error, expand Tiled VAE, select Enable Tiled VAE, and lower the Encoder Tile Size and Decoder Tile Size values below it.

    • Script: select StableSR.

    • SR Model: select webui_768v_139.ckpt.

    • Scale Factor: set to 2.

  4. Click Generate to produce the restored image in the panel on the right.image.png

  5. The generated image is saved to the ./stable-diffusion-webui/outputs/img2img-images/date folder of the DSW instance. On the DSW Notebook tab, right-click the image file and click Download to save it to your local machine.
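For batch processing, the StableSR script can in principle be selected through the `/sdapi/v1/img2img` endpoint via its `script_name` and `script_args` fields (this again requires launching the server with `--api`). The `script_args` list is positional and extension-specific, so the argument order below is a hypothetical sketch; check the StableSR extension's source for the exact arguments it expects:

```python
def build_stablesr_payload(image_b64: str) -> dict:
    """Build a JSON body for /sdapi/v1/img2img that selects the StableSR
    script. `image_b64` is the base64-encoded input image."""
    return {
        "init_images": [image_b64],
        "script_name": "StableSR",
        # Hypothetical argument order: (SR model name, scale factor).
        # Verify against the extension's ui()/run() signature.
        "script_args": ["webui_768v_139.ckpt", 2],
    }
```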

Method 3: Inpainting

The parameter values below are examples only; adjust them to fit your actual scenario.

  1. Choose a base model and enter a prompt on the img2img tab, for example: raw photo,a chinese woman,happy,high quality,high detail.image.png

  2. On the Inpaint tab, follow the on-page instructions to upload the image to be repaired and paint the mask.image.png

  3. Set the parameters as follows.image.png

    • Select Restore faces to redraw the faces.

    • On the Resize by tab, set Scale to adjust the image scale factor.

    • Lower the Denoising strength to a value in the range 0.01-0.1.

  4. (Optional) Configure ControlNet for locally controllable inpainting.image.png

    • You can configure multiple ControlNet units.

    • A unit takes effect only after you select Enable.

  5. Click Generate to produce the restored image in the panel on the right.image.png

  6. The generated image is saved to the ./stable-diffusion-webui/outputs/img2img-images/date folder of the DSW instance. On the DSW Notebook tab, right-click the image file and click Download to save it to your local machine.
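The inpainting settings above also map onto the `/sdapi/v1/img2img` endpoint (server must be launched with `--api`; the file names here are placeholders). A sketch that builds an inpainting request body mirroring those settings:

```python
import base64

def build_inpaint_payload(image_path: str, mask_path: str, prompt: str) -> dict:
    """Build the JSON body for an inpainting run against SD WebUI's
    /sdapi/v1/img2img endpoint, mirroring the UI settings above."""
    def b64(path: str) -> str:
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode("ascii")
    return {
        "init_images": [b64(image_path)],
        "mask": b64(mask_path),          # white = repaint, black = keep
        "prompt": prompt,
        "denoising_strength": 0.05,      # keep in the 0.01-0.1 range
        "restore_faces": True,           # Restore faces
        "inpainting_fill": 1,            # 1 = keep original content under mask
    }
```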