Migrate Flexvolume statically provisioned OSS volumes to CSI

The Flexvolume storage plugin is deprecated, and newly created clusters no longer support it. For existing clusters that still use Flexvolume, we recommend that you migrate to the CSI plugin. This topic describes how to migrate statically provisioned OSS volumes from Flexvolume to CSI.


Plugin differences

The differences between the CSI and Flexvolume storage plugins are as follows.

CSI

  • Components:

    • CSI-Provisioner (Deployment): automatically creates volumes and snapshots, and supports features such as recovering volumes after accidental deletion and CNFS storage.

    • CSI-Plugin (DaemonSet): automatically mounts and unmounts volumes. Multiple storage types are supported; cloud disk, NAS, and OSS volumes are supported by default.

  • kubelet parameter: the two plugins depend on different kubelet settings. For CSI, set the kubelet flag enable-controller-attach-detach to true.

  • Related documentation: Storage - CSI overview.

Flexvolume

  • Components:

    • Disk-Controller (Deployment): automatically creates cloud disk volumes.

    • Flexvolume (DaemonSet): mounts and unmounts volumes. ACK supports cloud disk, NAS, and OSS volumes by default.

  • kubelet parameter: the two plugins depend on different kubelet settings. For Flexvolume, set the kubelet flag enable-controller-attach-detach to false.

  • Related documentation: Storage - Flexvolume overview.
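Before you start the migration, you can confirm which storage plugin is currently deployed in the cluster and how kubelet is configured on a node. The commands below are a quick, read-only check; the systemd drop-in path /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is the location also referenced in Step 6 and may differ depending on your node OS image.

    # List the storage plugin workloads deployed in kube-system
    kubectl get daemonset,deployment -n kube-system | grep -E 'flexvolume|csi-plugin|csi-provisioner|alicloud-disk-controller'

    # On a node, inspect the kubelet flag that distinguishes the two plugins (path may vary)
    grep enable-controller-attach-detach /etc/systemd/system/kubelet.service.d/10-kubeadm.conf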

Scenarios

Your cluster uses Flexvolume to mount statically provisioned OSS volumes, that is, the cluster contains OSS static volumes of the Flexvolume type. If the cluster also contains Flexvolume-type cloud disk volumes, see Use the csi-compatible-controller component to migrate from Flexvolume to CSI.
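To find the volumes that this topic applies to, you can list the PersistentVolumes that are backed by the Flexvolume OSS driver. A minimal sketch, assuming jq is installed; the driver name alicloud/oss matches the Flexvolume PV shown later in Method 2.

    # List PersistentVolumes that use the Flexvolume OSS driver
    kubectl get pv -o json \
      | jq -r '.items[] | select(.spec.flexVolume.driver == "alicloud/oss") | .metadata.name'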

Considerations

During the plugin migration, rebuilding PVCs causes the Pods that use them to be recreated and interrupts your services. Choose an appropriate time window for the plugin migration, PVC rebuild, application changes, and the other restart operations involved.
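To estimate the impact before you schedule the change window, check which Pods currently mount the PVC that will be rebuilt. A simple check, using the example PVC name oss-pvc from the steps below:

    # The "Used By" field lists the Pods that mount the PVC
    kubectl describe pvc oss-pvc | grep -A 3 'Used By'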

Preparations

Manually install the CSI plugin

  1. Create the csi-plugin.yaml and csi-provisioner.yaml files with the following content.

    csi-plugin.yaml:

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: csi-admin
      namespace: kube-system
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: alicloud-csi-plugin
    rules:
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["get", "create", "list"]
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "update", "create", "delete", "patch"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims/status"]
        verbs: ["get", "list", "watch", "update", "patch"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["csinodes"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "watch", "list", "delete", "update", "create"]
      - apiGroups: [""]
        resources: ["configmaps"]
        verbs: ["get", "watch", "list", "delete", "update", "create"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["csi.storage.k8s.io"]
        resources: ["csinodeinfos"]
        verbs: ["get", "list", "watch"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["volumeattachments"]
        verbs: ["get", "list", "watch", "update", "patch"]
      - apiGroups: ["snapshot.storage.k8s.io"]
        resources: ["volumesnapshotclasses"]
        verbs: ["get", "list", "watch", "create"]
      - apiGroups: ["snapshot.storage.k8s.io"]
        resources: ["volumesnapshotcontents"]
        verbs: ["create", "get", "list", "watch", "update", "delete"]
      - apiGroups: ["snapshot.storage.k8s.io"]
        resources: ["volumesnapshots"]
        verbs: ["get", "list", "watch", "update", "create"]
      - apiGroups: ["apiextensions.k8s.io"]
        resources: ["customresourcedefinitions"]
        verbs: ["create", "list", "watch", "delete", "get", "update", "patch"]
      - apiGroups: ["coordination.k8s.io"]
        resources: ["leases"]
        verbs: ["get", "create", "list", "watch", "delete", "update"]
      - apiGroups: ["snapshot.storage.k8s.io"]
        resources: ["volumesnapshotcontents/status"]
        verbs: ["update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["volumeattachments/status"]
        verbs: ["patch"]
      - apiGroups: ["snapshot.storage.k8s.io"]
        resources: ["volumesnapshots/status"]
        verbs: ["update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["namespaces"]
        verbs: ["get", "list"]
      - apiGroups: [""]
        resources: ["pods","pods/exec"]
        verbs: ["create", "delete", "get", "post", "list", "watch", "patch", "udpate"]
      - apiGroups: ["storage.alibabacloud.com"]
        resources: ["rules"]
        verbs: ["get"]
      - apiGroups: ["storage.alibabacloud.com"]
        resources: ["containernetworkfilesystems"]
        verbs: ["get","list", "watch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: alicloud-csi-plugin
    subjects:
      - kind: ServiceAccount
        name: csi-admin
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: alicloud-csi-plugin
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: diskplugin.csi.alibabacloud.com
    spec:
      attachRequired: true
      podInfoOnMount: true
    ---
    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: nasplugin.csi.alibabacloud.com
    spec:
      attachRequired: false
      podInfoOnMount: true
    ---
    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: ossplugin.csi.alibabacloud.com
    spec:
      attachRequired: false
      podInfoOnMount: true
    ---
    kind: DaemonSet
    apiVersion: apps/v1
    metadata:
      name: csi-plugin
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: csi-plugin
      template:
        metadata:
          labels:
            app: csi-plugin
        spec:
          tolerations:
            - operator: Exists
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: type
                    operator: NotIn
                    values:
                    - virtual-kubelet
          nodeSelector:
            kubernetes.io/os: linux
          serviceAccount: csi-admin
          priorityClassName: system-node-critical
          hostNetwork: true
          hostPID: true
          dnsPolicy: ClusterFirst
          containers:
            - name: disk-driver-registrar
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-node-driver-registrar:v2.3.1-038aeb6-aliyun
              resources:
                requests:
                  cpu: 10m
                  memory: 16Mi
                limits:
                  cpu: 500m
                  memory: 1024Mi
              args:
                - "--v=5"
                - "--csi-address=/var/lib/kubelet/csi-plugins/diskplugin.csi.alibabacloud.com/csi.sock"
                - "--kubelet-registration-path=/var/lib/kubelet/csi-plugins/diskplugin.csi.alibabacloud.com/csi.sock"
              volumeMounts:
                - name: kubelet-dir
                  mountPath: /var/lib/kubelet
                - name: registration-dir
                  mountPath: /registration
            - name: nas-driver-registrar
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-node-driver-registrar:v2.3.1-038aeb6-aliyun
              resources:
                requests:
                  cpu: 10m
                  memory: 16Mi
                limits:
                  cpu: 500m
                  memory: 1024Mi
              args:
                - "--v=5"
                - "--csi-address=/var/lib/kubelet/csi-plugins/nasplugin.csi.alibabacloud.com/csi.sock"
                - "--kubelet-registration-path=/var/lib/kubelet/csi-plugins/nasplugin.csi.alibabacloud.com/csi.sock"
              volumeMounts:
                - name: kubelet-dir
                  mountPath: /var/lib/kubelet/
                - name: registration-dir
                  mountPath: /registration
            - name: oss-driver-registrar
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-node-driver-registrar:v2.3.1-038aeb6-aliyun
              resources:
                requests:
                  cpu: 10m
                  memory: 16Mi
                limits:
                  cpu: 500m
                  memory: 1024Mi
              args:
                - "--v=5"
                - "--csi-address=/var/lib/kubelet/csi-plugins/ossplugin.csi.alibabacloud.com/csi.sock"
                - "--kubelet-registration-path=/var/lib/kubelet/csi-plugins/ossplugin.csi.alibabacloud.com/csi.sock"
              volumeMounts:
                - name: kubelet-dir
                  mountPath: /var/lib/kubelet/
                - name: registration-dir
                  mountPath: /registration
            - name: csi-plugin
              securityContext:
                privileged: true
                allowPrivilegeEscalation: true
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.24.6-55c95dd-aliyun
              args:
                - "--endpoint=$(CSI_ENDPOINT)"
                - "--v=2"
                - "--driver=oss,nas,disk"
              env:
                - name: KUBE_NODE_NAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
                - name: CSI_ENDPOINT
                  value: unix://var/lib/kubelet/csi-plugins/driverplugin.csi.alibabacloud.com-replace/csi.sock
                - name: MAX_VOLUMES_PERNODE
                  value: "15"
                - name: SERVICE_TYPE
                  value: "plugin"
              resources:
                requests:
                  cpu: 100m
                  memory: 128Mi
                limits:
                  cpu: 500m
                  memory: 1024Mi
              livenessProbe:
                httpGet:
                  path: /healthz
                  port: healthz
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 30
                timeoutSeconds: 5
                failureThreshold: 5
              readinessProbe:
                httpGet:
                  path: /healthz
                  port: healthz
                initialDelaySeconds: 10
                periodSeconds: 30
                timeoutSeconds: 5
                failureThreshold: 5
              ports:
                - name: healthz
                  containerPort: 11260
              volumeMounts:
                - name: kubelet-dir
                  mountPath: /var/lib/kubelet/
                  mountPropagation: "Bidirectional"
                - name: etc
                  mountPath: /host/etc
                - name: host-log
                  mountPath: /var/log/
                - name: ossconnectordir
                  mountPath: /host/usr/
                - name: container-dir
                  mountPath: /var/lib/container
                  mountPropagation: "Bidirectional"
                - name: host-dev
                  mountPath: /dev
                  mountPropagation: "HostToContainer"
                - mountPath: /var/addon
                  name: addon-token
                  readOnly: true
                - mountPath: /host/var/run/
                  name: fuse-metrics-dir
          volumes:
            - name: fuse-metrics-dir
              hostPath:
                path: /var/run/
                type: DirectoryOrCreate
            - name: registration-dir
              hostPath:
                path: /var/lib/kubelet/plugins_registry
                type: DirectoryOrCreate
            - name: container-dir
              hostPath:
                path: /var/lib/container
                type: DirectoryOrCreate
            - name: kubelet-dir
              hostPath:
                path: /var/lib/kubelet
                type: Directory
            - name: host-dev
              hostPath:
                path: /dev
            - name: host-log
              hostPath:
                path: /var/log/
            - name: etc
              hostPath:
                path: /etc
            - name: ossconnectordir
              hostPath:
                path: /usr/
            - name: addon-token
              secret:
                defaultMode: 420
                optional: true
                items:
                - key: addon.token.config
                  path: token-config
                secretName: addon.csi.token
      updateStrategy:
        rollingUpdate:
          maxUnavailable: 30%
        type: RollingUpdate

    csi-provisioner.yaml:

    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: csi-provisioner
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: csi-provisioner
      strategy:
        rollingUpdate:
          maxSurge: 0
          maxUnavailable: 1
        type: RollingUpdate
      replicas: 2
      template:
        metadata:
          labels:
            app: csi-provisioner
        spec:
          affinity:
            nodeAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 1
                preference:
                  matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: type
                    operator: NotIn
                    values:
                    - virtual-kubelet
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: app
                      operator: In
                      values:
                      - csi-provisioner
                  topologyKey: kubernetes.io/hostname
          tolerations:
          - effect: NoSchedule
            operator: Exists
            key: node-role.kubernetes.io/master
          - effect: NoSchedule
            operator: Exists
            key: node.cloudprovider.kubernetes.io/uninitialized
          serviceAccount: csi-admin
          hostPID: true
          priorityClassName: system-node-critical
          containers:
            - name: external-disk-provisioner
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v3.0.0-080f01e64-aliyun
              resources:
                requests:
                  cpu: 10m
                  memory: 16Mi
                limits:
                  cpu: 500m
                  memory: 1024Mi
              args:
                - "--csi-address=$(ADDRESS)"
                - "--feature-gates=Topology=True"
                - "--volume-name-prefix=disk"
                - "--strict-topology=true"
                - "--timeout=150s"
                - "--leader-election=true"
                - "--retry-interval-start=500ms"
                - "--extra-create-metadata=true"
                - "--default-fstype=ext4"
                - "--v=5"
              env:
                - name: ADDRESS
                  value: /var/lib/kubelet/csi-provisioner/diskplugin.csi.alibabacloud.com/csi.sock
              volumeMounts:
                - name: disk-provisioner-dir
                  mountPath: /var/lib/kubelet/csi-provisioner/diskplugin.csi.alibabacloud.com
            - name: external-disk-attacher
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-attacher:v3.3-72dd428b-aliyun
              resources:
                requests:
                  cpu: 10m
                  memory: 16Mi
                limits:
                  cpu: 500m
                  memory: 1024Mi
              args:
                - "--v=5"
                - "--csi-address=$(ADDRESS)"
                - "--leader-election=true"
              env:
                - name: ADDRESS
                  value: /var/lib/kubelet/csi-provisioner/diskplugin.csi.alibabacloud.com/csi.sock
              volumeMounts:
                - name: disk-provisioner-dir
                  mountPath: /var/lib/kubelet/csi-provisioner/diskplugin.csi.alibabacloud.com
            - name: external-disk-resizer
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-resizer:v1.3-ca84e84-aliyun
              resources:
                requests:
                  cpu: 10m
                  memory: 16Mi
                limits:
                  cpu: 500m
                  memory: 8Gi
              args:
                - "--v=5"
                - "--csi-address=$(ADDRESS)"
                - "--leader-election"
              env:
                - name: ADDRESS
                  value: /var/lib/kubelet/csi-provisioner/diskplugin.csi.alibabacloud.com/csi.sock
              volumeMounts:
                - name: disk-provisioner-dir
                  mountPath: /var/lib/kubelet/csi-provisioner/diskplugin.csi.alibabacloud.com
            - name: external-nas-provisioner
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v3.0.0-080f01e64-aliyun
              resources:
                requests:
                  cpu: 10m
                  memory: 16Mi
                limits:
                  cpu: 500m
                  memory: 1024Mi
              args:
                - "--csi-address=$(ADDRESS)"
                - "--volume-name-prefix=nas"
                - "--timeout=150s"
                - "--leader-election=true"
                - "--retry-interval-start=500ms"
                - "--default-fstype=nfs"
                - "--v=5"
              env:
                - name: ADDRESS
                  value: /var/lib/kubelet/csi-provisioner/nasplugin.csi.alibabacloud.com/csi.sock
              volumeMounts:
                - name: nas-provisioner-dir
                  mountPath: /var/lib/kubelet/csi-provisioner/nasplugin.csi.alibabacloud.com
            - name: external-nas-resizer
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-resizer:v1.3-ca84e84-aliyun
              resources:
                requests:
                  cpu: 10m
                  memory: 16Mi
                limits:
                  cpu: 500m
                  memory: 8Gi
              args:
                - "--v=5"
                - "--csi-address=$(ADDRESS)"
                - "--leader-election"
              env:
                - name: ADDRESS
                  value: /var/lib/kubelet/csi-provisioner/nasplugin.csi.alibabacloud.com/csi.sock
              volumeMounts:
                - name: nas-provisioner-dir
                  mountPath: /var/lib/kubelet/csi-provisioner/nasplugin.csi.alibabacloud.com
            - name: external-oss-provisioner
              args:
                - --csi-address=$(ADDRESS)
                - --volume-name-prefix=oss
                - --timeout=150s
                - --leader-election=true
                - --retry-interval-start=500ms
                - --default-fstype=ossfs
                - --v=5
              env:
              - name: ADDRESS
                value: /var/lib/kubelet/csi-provisioner/ossplugin.csi.alibabacloud.com/csi.sock
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v3.0.0-080f01e64-aliyun
              resources:
                limits:
                  cpu: 500m
                  memory: 1Gi
                requests:
                  cpu: 10m
                  memory: 16Mi
              volumeMounts:
              - mountPath: /var/lib/kubelet/csi-provisioner/ossplugin.csi.alibabacloud.com
                name: oss-provisioner-dir
            - name: external-csi-snapshotter
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-snapshotter:v4.0.0-a230d5b3-aliyun
              resources:
                requests:
                  cpu: 10m
                  memory: 16Mi
                limits:
                  cpu: 500m
                  memory: 1024Mi
              args:
                - "--v=5"
                - "--csi-address=$(ADDRESS)"
                - "--leader-election=true"
                - "--extra-create-metadata=true"
              env:
                - name: ADDRESS
                  value: /csi/csi.sock
              volumeMounts:
                - name: disk-provisioner-dir
                  mountPath: /csi
            - name: external-snapshot-controller
              image: registry.cn-hangzhou.aliyuncs.com/acs/snapshot-controller:v4.0.0-a230d5b3-aliyun
              resources:
                requests:
                  cpu: 10m
                  memory: 16Mi
                limits:
                  cpu: 500m
                  memory: 1024Mi
              args:
                - "--v=5"
                - "--leader-election=true"
            - name: csi-provisioner
              securityContext:
                privileged: true
              image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.24.6-55c95dd-aliyun
              args:
                - "--endpoint=$(CSI_ENDPOINT)"
                - "--v=2"
                - "--driver=nas,disk,oss"
              env:
                - name: CSI_ENDPOINT
                  value: unix://var/lib/kubelet/csi-provisioner/driverplugin.csi.alibabacloud.com-replace/csi.sock
                - name: MAX_VOLUMES_PERNODE
                  value: "15"
                - name: SERVICE_TYPE
                  value: "provisioner"
                - name: "CLUSTER_ID"
                  value: "CLUSTER_ID"
              livenessProbe:
                httpGet:
                  path: /healthz
                  port: healthz
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 30
                timeoutSeconds: 5
                failureThreshold: 5
              readinessProbe:
                httpGet:
                  path: /healthz
                  port: healthz
                initialDelaySeconds: 5
                periodSeconds: 20
              ports:
                - name: healthz
                  containerPort: 11270
              volumeMounts:
                - name: host-log
                  mountPath: /var/log/
                - name: disk-provisioner-dir
                  mountPath: /var/lib/kubelet/csi-provisioner/diskplugin.csi.alibabacloud.com
                - name: nas-provisioner-dir
                  mountPath: /var/lib/kubelet/csi-provisioner/nasplugin.csi.alibabacloud.com
                - name: oss-provisioner-dir
                  mountPath: /var/lib/kubelet/csi-provisioner/ossplugin.csi.alibabacloud.com
                - mountPath: /var/addon
                  name: addon-token
                  readOnly: true
                - mountPath: /mnt
                  mountPropagation: Bidirectional
                  name: host-dev
                - mountPath: /host/etc
                  name: etc
              resources:
                limits:
                  cpu: 500m
                  memory: 1024Mi
                requests:
                  cpu: 100m
                  memory: 128Mi
          volumes:
            - name: disk-provisioner-dir
              emptyDir: {}
            - name: nas-provisioner-dir
              emptyDir: {}
            - name: oss-provisioner-dir
              emptyDir: {}
            - name: host-log
              hostPath:
                path: /var/log/
            - name: etc
              hostPath:
                path: /etc
                type: ""
            - name: host-dev
              hostPath:
                path: /mnt
                type: ""
            - name: addon-token
              secret:
                defaultMode: 420
                optional: true
                items:
                - key: addon.token.config
                  path: token-config
                secretName: addon.csi.token
  2. Run the following command to deploy csi-plugin and csi-provisioner in the ACK cluster.

    kubectl apply -f csi-plugin.yaml -f csi-provisioner.yaml
  3. Run the following command to check whether the CSI plugin is running properly.

    kubectl get pods -n kube-system | grep csi

    Expected output:

    csi-plugin-577mm                              4/4     Running   0          3d20h
    csi-plugin-k9mzt                              4/4     Running   0          41d
    csi-provisioner-6b58f46989-8wwl5              9/9     Running   0          41d
    csi-provisioner-6b58f46989-qzh8l              9/9     Running   0          6d20h

    If the output is similar to the above, the CSI plugin is running properly in the cluster.

This topic uses a StatefulSet that mounts a Flexvolume-type statically provisioned OSS volume, with the volume's access credentials stored in a Secret named oss-secret, as an example to describe how to migrate the volume to CSI. The migration follows the steps below: check the cluster storage state, create the CSI PVC and PV, update the PVC referenced by the application, uninstall the Flexvolume plugin, install the CSI plugin through OpenAPI, and update the kubelet configuration on existing nodes.
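Both the Flexvolume PV and the CSI PV in this example reference the same Secret, oss-secret, for the OSS AccessKey pair, so no new Secret is required. If you need to inspect or recreate it, the following is a sketch that assumes the commonly used akId/akSecret keys; verify the key names against your existing Secret first.

    # Inspect the keys stored in the existing Secret
    kubectl get secret oss-secret -o jsonpath='{.data}'

    # Recreate the Secret if necessary (assumed keys: akId and akSecret)
    kubectl create secret generic oss-secret \
      --from-literal=akId=<your-AccessKey-ID> \
      --from-literal=akSecret=<your-AccessKey-Secret>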

Step 1: Check the storage state of the cluster

  1. Run the following command to check the Pod status.

    kubectl get pod

    Expected output:

    NAME       READY   STATUS    RESTARTS   AGE
    oss-sts-1  1/1     Running   0          11m
  2. Run the following command to find the PVC used by the Pod.

    kubectl describe pod oss-sts-1 | grep ClaimName

    Expected output:

    ClaimName:  oss-pvc
  3. Run the following command to check the current PVC status.

    kubectl get pvc

    Expected output:

    NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    oss-pvc   Bound    oss-pv   5Gi        RWX                           7m23s
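You can also confirm that the bound PV is indeed a Flexvolume volume before creating the CSI objects. A quick check using the example PV name:

    # Prints alicloud/oss for a Flexvolume OSS volume
    kubectl get pv oss-pv -o jsonpath='{.spec.flexVolume.driver}{"\n"}'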

Step 2: Create a CSI PVC and PV for the statically provisioned OSS volume

Method 1: Convert with the Flexvolume2CSI command-line tool

  1. Convert the Flexvolume PVC and PV into CSI PVC and PV objects. For more information, see Use the Flexvolume2CSI command-line tool to convert YAML files in batches.

  2. Run the following command to create the CSI PVC and PV objects for the statically provisioned OSS volume.

    In this command, oss-pv-pvc-csi.yaml is the CSI PVC and PV YAML file generated by the Flexvolume2CSI command-line tool.

    kubectl apply -f oss-pv-pvc-csi.yaml
  3. Run the following command to check the current PVC status.

    kubectl get pvc

    Expected output:

    NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    oss-pvc-csi   Bound    oss-pv-csi   5Gi        RWO                           7m15s
    oss-pvc       Bound    oss-pv       5Gi        RWX                           52m
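Before switching the workload over, you can verify that the converted PV uses the CSI OSS driver and kept the original bucket settings. The field paths below assume the PV layout shown in Method 2:

    # Should print ossplugin.csi.alibabacloud.com
    kubectl get pv oss-pv-csi -o jsonpath='{.spec.csi.driver}{"\n"}'

    # Compare bucket, url, and otherOpts with the original Flexvolume PV
    kubectl get pv oss-pv-csi -o jsonpath='{.spec.csi.volumeAttributes}{"\n"}'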

Method 2: Manually save the Flexvolume PVC and PV and change the storage driver

  1. Save the Flexvolume PVC and PV templates.

    1. Run the following command to save the Flexvolume PVC object.

      kubectl get pvc oss-pvc -o yaml > oss-pvc-flexvolume.yaml
      cat oss-pvc-flexvolume.yaml

      Expected output:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: oss-pvc
        namespace: default
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 5Gi
        volumeMode: Filesystem
        volumeName: oss-pv
    2. Run the following command to save the Flexvolume PV object.

      kubectl get pv oss-pv -o yaml > oss-pv-flexvolume.yaml
      cat oss-pv-flexvolume.yaml

      Expected output:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: oss-pv
      spec:
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 5Gi
        claimRef:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: oss-pvc
          namespace: default
        flexVolume:
          driver: alicloud/oss
          nodePublishSecretRef:
            name: oss-secret
            namespace: default
          options:
            bucket: xxx
            otherOpts: -o max_stat_cache_size=0 -o allow_other
            url: xxx.aliyuncs.com
        persistentVolumeReclaimPolicy: Retain
        volumeMode: Filesystem
  2. Create the CSI PVC and PV for the statically provisioned OSS volume.

    1. Use the following YAML content to create a file named oss-pv-pvc-csi.yaml for the CSI OSS static volume.

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: oss-pvc-csi
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        selector:
          matchLabels:
            alicloud-pvname: oss-pv-csi
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: oss-pv-csi
        labels:
          alicloud-pvname: oss-pv-csi
      spec:
        capacity:
          storage: 5Gi
        accessModes:
          - ReadWriteMany
        persistentVolumeReclaimPolicy: Retain
        csi:
          driver: ossplugin.csi.alibabacloud.com
          volumeHandle: oss-pv-csi
          nodePublishSecretRef:
            name: oss-secret
            namespace: default
          volumeAttributes:
            bucket: "***"
            url: "***.aliyuncs.com"
            otherOpts: "-o max_stat_cache_size=0 -o allow_other"
    2. Run the following command to create the CSI PVC and PV objects for the statically provisioned OSS volume.

      kubectl apply -f oss-pv-pvc-csi.yaml
    3. Run the following command to check the current PVC status.

      kubectl get pvc

      Expected output:

      NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      oss-pvc-csi   Bound    oss-pv-csi   5Gi        RWO                           7m15s
      oss-pvc       Bound    oss-pv       5Gi        RWX                           52m
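If the new PVC stays in the Pending state, first make sure that the CSI OSS driver is registered in the cluster and that the PV carries the label selected by the PVC. Two quick checks that follow the names used in the example:

    # The CSIDriver object created by csi-plugin.yaml should exist
    kubectl get csidriver ossplugin.csi.alibabacloud.com

    # The PV must carry the alicloud-pvname label that the PVC selector matches
    kubectl get pv oss-pv-csi --show-labels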

Step 3: Update the PVC referenced by the application

  1. Run the following command to edit the application configuration. (A non-interactive kubectl patch alternative is sketched after these steps.)

    kubectl edit sts oss-sts
  2. Update the volume configuration so that the data volume references the CSI PVC.

          volumes:
          - name: oss
            persistentVolumeClaim:
              claimName: oss-pvc-csi
  3. Run the following command to check whether the Pod restarts successfully.

    kubectl get pod

    Expected output:

    NAME       READY   STATUS    RESTARTS   AGE
    oss-sts-1  1/1     Running   0          70s
  4. Run the following command to check the mount information.

    kubectl exec oss-sts-1 -- mount | grep ossfs

    Expected output:

    ***:/ on /var/lib/kubelet/pods/ac02ea3f-125f-4b38-9bcf-9b117f62eaf0/volumes/kubernetes.io~csi/oss-pv-csi/mount type ossfs (rw,relatime,max_stat_cache_size=0,allow_other)

    If the output is similar to the above, the Pod has been migrated successfully.
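As an alternative to kubectl edit, you can switch the claim non-interactively. The patch below is a sketch that assumes the oss volume is the first entry in .spec.template.spec.volumes, as in the snippet above; adjust the index if your Pod template declares other volumes first.

    kubectl patch statefulset oss-sts --type=json -p='[
      {"op": "replace",
       "path": "/spec/template/spec/volumes/0/persistentVolumeClaim/claimName",
       "value": "oss-pvc-csi"}
    ]'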

Step 4: Uninstall the Flexvolume plugin

  1. Log on to the OpenAPI portal and call UnInstallClusterAddons to uninstall the Flexvolume plugin with the following parameters.

    • ClusterId: the ID of your cluster. You can find the cluster ID on the basic information page of the cluster.

    • name: Flexvolume.

    For more information, see Uninstall cluster components.

  2. Run the following command to delete alicloud-disk-controller and alicloud-nas-controller.

    kubectl delete deploy -n kube-system alicloud-disk-controller alicloud-nas-controller
  3. Run the following command to check whether the Flexvolume plugin has been completely uninstalled from the cluster.

    kubectl get pods -n kube-system | grep 'flexvolume\|alicloud-disk-controller\|alicloud-nas-controller'

    If the output is empty, the Flexvolume plugin has been uninstalled.

  4. Run the following command to delete the Flexvolume StorageClasses from the cluster. The PROVISIONER of a Flexvolume StorageClass is alicloud/disk.

    kubectl delete storageclass alicloud-disk-available alicloud-disk-efficiency alicloud-disk-essd alicloud-disk-ssd

    Expected output:

    storageclass.storage.k8s.io "alicloud-disk-available" deleted
    storageclass.storage.k8s.io "alicloud-disk-efficiency" deleted
    storageclass.storage.k8s.io "alicloud-disk-essd" deleted
    storageclass.storage.k8s.io "alicloud-disk-ssd" deleted

    If the output is similar to the above, the StorageClasses have been deleted.
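To confirm that no Flexvolume StorageClasses remain, list the StorageClasses together with their provisioners; after the deletion, the following command should print the fallback message:

    kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner \
      | grep 'alicloud/disk' || echo "no Flexvolume StorageClasses left"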

Step 5: Install the CSI plugin by using OpenAPI

  1. Log on to the OpenAPI portal and call InstallClusterAddons to install the CSI plugin with the following parameters.

    • ClusterId: the ID of your cluster.

    • name: csi-provisioner.

    • version: the latest CSI plugin version. For more information about CSI versions, see csi-provisioner.

    For more information, see Install cluster components.

  2. Run the following command to check whether the CSI plugin runs properly in the cluster.

    kubectl get pods -n kube-system | grep csi

    Expected output:

    csi-plugin-577mm                              4/4     Running   0          3d20h
    csi-plugin-k9mzt                              4/4     Running   0          41d
    csi-provisioner-6b58f46989-8wwl5              9/9     Running   0          41d
    csi-provisioner-6b58f46989-qzh8l              9/9     Running   0          6d20h

    If the output is similar to the above, the CSI plugin is running properly in the cluster.
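To see which component images (and therefore which CSI version) the add-on rolled out, you can list the container images of the csi-plugin DaemonSet and the csi-provisioner Deployment:

    kubectl get daemonset csi-plugin -n kube-system -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'
    kubectl get deployment csi-provisioner -n kube-system -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'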

Step 6: Update the configuration of existing nodes

Apply the following YAML to update the kubelet parameter that the plugins depend on so that it matches what the CSI plugin requires. The DaemonSet sets the kubelet flag --enable-controller-attach-detach on existing nodes to true. After the change has been applied on every node, you can delete the DaemonSet; a verification-and-cleanup sketch follows the YAML below.

Important

Applying the following YAML restarts kubelet. Evaluate the impact on your running applications before you proceed.

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: kubelet-set
spec:
  selector:
    matchLabels:
      app: kubelet-set
  template:
    metadata:
      labels:
        app: kubelet-set
    spec:
      tolerations:
        - operator: "Exists"
      hostNetwork: true
      hostPID: true
      containers:
        - name: kubelet-set
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.26.5-56d1e30-aliyun
          imagePullPolicy: "Always"
          env:
          - name: enableADController
            value: "true"
          command: ["sh", "-c"]
          args:
          - echo "Starting kubelet flag set to $enableADController";
            ifFlagTrueNum=`cat /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep enable-controller-attach-detach=$enableADController | grep -v grep | wc -l`;
            echo "ifFlagTrueNum is $ifFlagTrueNum";
            if [ "$ifFlagTrueNum" = "0" ]; then
                curValue="true";
                if [ "$enableADController" = "true" ]; then
                    curValue="false";
                fi;
                sed -i "s/enable-controller-attach-detach=$curValue/enable-controller-attach-detach=$enableADController/" /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf;
                restartKubelet="true";
                echo "current value is $curValue, change to expect "$enableADController;
            fi;
            if [ "$restartKubelet" = "true" ]; then
                /nsenter --mount=/proc/1/ns/mnt systemctl daemon-reload;
                /nsenter --mount=/proc/1/ns/mnt service kubelet restart;
                echo "restart kubelet";
            fi;
            while true;
            do
                sleep 5;
            done;
          volumeMounts:
          - name: etc
            mountPath: /host/etc
      volumes:
        - name: etc
          hostPath:
            path: /etc
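Once every kubelet-set Pod has logged that the flag is set (or that no change was needed), the DaemonSet has finished its work. A sketch of how you might verify and clean up, assuming the DaemonSet was created in the current (default) namespace as in the YAML above:

    # Each Pod logs whether it changed the flag and restarted kubelet
    kubectl logs -l app=kubelet-set --tail=20

    # Remove the helper DaemonSet after all nodes have been updated
    kubectl delete daemonset kubelet-set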