Initialization, mounting, and auto-expansion of local volumes

In an ACK cluster, if you want to use ECS local disks or cloud disks to store temporary data whose workloads fluctuate between peaks and troughs, you can create and configure local volumes. Local storage is scheduled based on the local storage capacity of each node, and a volume is automatically expanded when its usage reaches a configured threshold.

Introduction to local storage


The local storage architecture consists of the following three components.

• Node Resource Manager: maintains the initialization lifecycle of local storage. For example, when you declare the composition of a VolumeGroup in a ConfigMap, Node Resource Manager initializes the local storage on the current node according to the definitions in that ConfigMap.

• CSI Plugin: maintains the lifecycle of local volumes. For example, with LVM, when you create a PVC that uses a VolumeGroup, the CSI plugin automatically creates a logical volume and binds it to the PVC.

• Storage Auto Expander: manages the auto-expansion of local volumes. When monitoring detects that a local volume is running low on capacity, Storage Auto Expander automatically expands it.

For more information about local volumes, see Local volume overview.
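
To see how these components appear in a cluster, you can list their workloads in the kube-system namespace. A quick check, assuming the default component names used in this topic (node-resource-manager is deployed in Step 1, and storage-auto-expander is covered in Step 3; the csi-plugin name may differ in your cluster):

  # List the storage-related component pods (names may vary by cluster version)
  kubectl get pods -n kube-system | grep -E 'csi-plugin|node-resource-manager|storage-auto-expander'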

Prerequisites

Step 1: Initialize local storage

When the Node Resource Manager component initializes local storage, it reads a ConfigMap to manage the local storage of all nodes in the cluster and automatically creates VolumeGroups and QuotaPaths for each node.

  1. Run the following command to define local volumes in a ConfigMap in the cluster.

    Replace cn-zhangjiakou.192.168.XX.XX with the name of an actual node as needed.


    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: node-resource-topo
      namespace: kube-system
    data:
      volumegroup: |-
        volumegroup:
        - name: volumegroup1
          key: kubernetes.io/hostname
          operator: In
          value: cn-zhangjiakou.192.168.XX.XX
          topology:
            type: device
            devices:
            - /dev/vdb
            - /dev/vdc
      quotapath: |-
        quotapath:
        - name: /mnt/path1
          key: kubernetes.io/hostname
          operator: In
          value: cn-zhangjiakou.192.168.XX.XX
          topology:
            type: device
            options: prjquota
            fstype: ext4
            devices:
            - /dev/vdd
    EOF

    The preceding ConfigMap defines the following two local storage resources on the cn-zhangjiakou.192.168.XX.XX node.

    • VolumeGroup resource: named volumegroup1 and composed of the /dev/vdb and /dev/vdc block devices on the host.

    • QuotaPath resource: the /dev/vdd block device is formatted and mounted at the /mnt/path1 path. Only one device can be declared under devices here.

  2. Deploy the Node Resource Manager component.

    1. Create a file named nrm.yaml for the Node Resource Manager component with the following content.


      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: node-resource-manager
        namespace: kube-system
      ---
      kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: node-resource-manager
      rules:
        - apiGroups: [""]
          resources: ["configmaps"]
          verbs: ["get", "watch", "list", "delete", "update", "create"]
        - apiGroups: [""]
          resources: ["nodes"]
          verbs: ["get", "list", "watch"]
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: node-resource-manager-binding
      subjects:
        - kind: ServiceAccount
          name: node-resource-manager
          namespace: kube-system
      roleRef:
        kind: ClusterRole
        name: node-resource-manager
        apiGroup: rbac.authorization.k8s.io
      ---
      kind: DaemonSet
      apiVersion: apps/v1
      metadata:
        name: node-resource-manager
        namespace: kube-system
      spec:
        selector:
          matchLabels:
            app: node-resource-manager
        template:
          metadata:
            labels:
              app: node-resource-manager
          spec:
            tolerations:
              - operator: "Exists"
            priorityClassName: system-node-critical
            serviceAccountName: node-resource-manager
            hostNetwork: true
            hostPID: true
            containers:
              - name: node-resource-manager
                securityContext:
                  privileged: true
                  capabilities:
                    add: ["SYS_ADMIN"]
                  allowPrivilegeEscalation: true
                image: registry.cn-hangzhou.aliyuncs.com/acs/node-resource-manager:v1.18.8.0-983ce56-aliyun
                imagePullPolicy: "Always"
                args:
                  - "--nodeid=$(KUBE_NODE_NAME)"
                env:
                  - name: KUBE_NODE_NAME
                    valueFrom:
                      fieldRef:
                        apiVersion: v1
                        fieldPath: spec.nodeName
                volumeMounts:
                  - mountPath: /dev
                    mountPropagation: "HostToContainer"
                    name: host-dev
                  - mountPath: /var/log/
                    name: host-log
                  - name: etc
                    mountPath: /host/etc
                  - name: config
                    mountPath: /etc/unified-config
            volumes:
              - name: host-dev
                hostPath:
                  path: /dev
              - name: host-log
                hostPath:
                  path: /var/log/
              - name: etc
                hostPath:
                  path: /etc
              - name: config
                configMap:
                  name: node-resource-topo
    2. Run the following command to install the Node Resource Manager component.

      kubectl apply -f nrm.yaml
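
After the DaemonSet pods are running, you can verify that the local storage was initialized on the node. A minimal check, assuming the example devices above and that the lvm2 tools are available on the node:

  # Confirm that the Node Resource Manager pods are running
  kubectl get pods -n kube-system -l app=node-resource-manager

  # On the cn-zhangjiakou.192.168.XX.XX node: the VolumeGroup defined above should exist
  vgdisplay volumegroup1

  # The QuotaPath device should be mounted at /mnt/path1 with prjquota enabled
  mount | grep /mnt/path1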

Step 2: Create an application that uses a local volume

Create an application that uses an LVM local volume

  1. Run the following command to create a StorageClass.

    cat <<EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-local
    provisioner: localplugin.csi.alibabacloud.com
    parameters:
      volumeType: LVM
      vgName: volumegroup1
      fsType: ext4
      lvmType: "striping"
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    EOF

    parameters.vgName is the name of the VolumeGroup defined in the node-resource-topo ConfigMap, which is volumegroup1. For more information, see LVM volumes.

  2. Run the following command to create a PVC.

    cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvm-pvc
      annotations:
        volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: csi-local
    EOF

    The volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX annotation indicates that pods that later use this PVC will be scheduled to the cn-zhangjiakou.192.168.XX.XX node, which is the node where the VolumeGroup resource defined in the ConfigMap resides.

  3. Run the following command to create a sample application.


    cat << EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deployment-lvm
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            volumeMounts:
              - name: lvm-pvc
                mountPath: "/data"
          volumes:
            - name: lvm-pvc
              persistentVolumeClaim:
                claimName: lvm-pvc
    EOF

    After the application starts, the capacity of the /data directory in the container equals the 2 GiB declared in the PVC.
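
To confirm that the volume was provisioned with the declared capacity, you can check the PVC status and the mount inside the container:

  # The PVC should be Bound to a dynamically created PV
  kubectl get pvc lvm-pvc

  # The /data mount in the container should report about 2 GiB
  kubectl exec deployment/deployment-lvm -- df -h /data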

Create an application that uses a QuotaPath local volume

  1. Run the following command to create a StorageClass.

    cat << EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-local-quota
    parameters:
      volumeType: QuotaPath
      rootPath: /mnt/path1
    provisioner: localplugin.csi.alibabacloud.com
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer
    EOF

    parameters.rootPath is the path of the QuotaPath resource defined in the node-resource-topo ConfigMap, which is /mnt/path1. For more information, see QuotaPath volumes.

  2. Run the following command to create a PVC.

    cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-quota
      annotations:
        volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX
      labels:
        app: web-quota
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: alicloud-local-quota
    EOF

    The volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX annotation indicates that pods that later use this PVC will be scheduled to the cn-zhangjiakou.192.168.XX.XX node, which is the node where the QuotaPath resource defined in the ConfigMap resides.

  3. Run the following command to create a sample application.


    cat << EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web-quota
    spec:
      selector:
        matchLabels:
          app: nginx
      serviceName: "nginx"
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            volumeMounts:
            - name: disk-ssd
              mountPath: /data
          volumes:
            - name: "disk-ssd"
              persistentVolumeClaim:
                claimName: csi-quota
    EOF

    After the application starts, the capacity of the /data directory in the container equals the 2 GiB declared in the PVC.
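
As with the LVM example, you can verify that the project quota, rather than the size of the underlying /dev/vdd device, determines the capacity seen by the container:

  # The PVC should be Bound
  kubectl get pvc csi-quota

  # df reflects the 2 GiB project quota (web-quota-0 is the first StatefulSet pod)
  kubectl exec web-quota-0 -- df -h /data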

Step 3: Automatically expand a local volume

Note

Local volumes support expansion only; they cannot be shrunk.

  1. Confirm that the auto-expansion plugin has started properly. For more information, see Use storage-operator to deploy and upgrade storage components.

    1. Run the following command to configure storage-operator.

      kubectl edit cm storage-operator -n kube-system

      Expected output:

      storage-auto-expander: '{"crdTmpl":"/acs/templates/storage-auto-expander/crd.yaml","imageRep":"acs/storage-auto-expander","imageTag":"","install":"","template":"/acs/templates/storage-auto-expander/install.yaml","type":"deployment"}'
    2. Run the following command to check whether the auto-expansion component has started.

      kubectl get pod -n kube-system | grep storage-auto-expander

      Expected output:

      storage-auto-expander-6bb575b68c-tt4hh         1/1     Running     0          2m41s
  2. Run the following command to configure the auto-expansion policy.

    cat << EOF | kubectl apply -f -
    apiVersion: storage.alibabacloud.com/v1alpha1
    kind: StorageAutoScalerPolicy
    metadata:
      name: hybrid-expand-policy
    spec:
      pvcSelector:
        matchLabels:
          app: web-quota
      namespaces:
        - default
      conditions:
        - name: condition1
          key: volume-capacity-used-percentage
          operator: Gt
          values:
            - "80"
      actions:
        - name: action
          type: volume-expand
          params:
            scale: 50%
            limits: 15Gi
    EOF

    As this template shows, when storage usage exceeds 80%, the volume is expanded by 50% at a time, up to a maximum of 15 GiB.
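
One way to observe the policy in action is to write data past the 80% threshold and watch the PVC grow. A rough sketch using the QuotaPath example above (the csi-quota PVC carries the app: web-quota label that the policy selects; this assumes common shell utilities are available in the container image):

  # Fill the 2 GiB volume to roughly 88% usage to cross the 80% threshold
  kubectl exec web-quota-0 -- dd if=/dev/zero of=/data/fill.img bs=1M count=1800

  # Watch the PVC capacity; it should grow by 50% per expansion, up to the 15 Gi limit
  kubectl get pvc csi-quota -w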

Related documents