In an ACK cluster, if you need to use ECS local disks or cloud disks to store temporary data whose usage fluctuates with peaks and valleys, you can create and configure local storage volumes. Local volumes are scheduled based on the local storage capacity of each node, and a volume is automatically expanded when its used capacity reaches a configured threshold.
Introduction to local storage
The local storage architecture consists of the following three components.
Component | Description
--- | ---
Node Resource Manager | Maintains the initialization lifecycle of local storage. Taking VolumeGroup as an example: when you declare the composition of a VolumeGroup in the ConfigMap, Node Resource Manager initializes the local storage on the current node according to that ConfigMap definition.
CSI Plugin | Maintains the lifecycle of local volumes. Taking LVM as an example: when you create a PVC that uses a VolumeGroup, the CSI Plugin automatically creates a Logical Volume and binds it to the PVC.
Storage Auto Expander | Manages the automatic expansion of local volumes. When monitoring detects that a local volume is running out of capacity, Storage Auto Expander automatically expands the volume.
For more information about local volumes, see the local volume overview.
Limits
- Local storage can currently be mounted by using HostPath, LocalVolume, LVM, memory, and other mount methods.
- Local volumes are not highly available. They are suitable only for storing temporary data or for applications that provide their own high availability.
- LVM local volumes do not support cross-node data migration and are not suitable for high-availability scenarios.
Step 1: Initialize local storage
When the Node Resource Manager component initializes local storage, it reads the ConfigMap to manage the local storage of all nodes in the cluster and automatically creates the VolumeGroups and QuotaPaths for each node.
Run the following command to define local volumes in a ConfigMap in the cluster. Replace cn-zhangjiakou.192.168.XX.XX with the name of an actual node in your cluster.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-resource-topo
  namespace: kube-system
data:
  volumegroup: |-
    volumegroup:
    - name: volumegroup1
      key: kubernetes.io/hostname
      operator: In
      value: cn-zhangjiakou.192.168.XX.XX
      topology:
        type: device
        devices:
        - /dev/vdb
        - /dev/vdc
  quotapath: |-
    quotapath:
    - name: /mnt/path1
      key: kubernetes.io/hostname
      operator: In
      value: cn-zhangjiakou.192.168.XX.XX
      topology:
        type: device
        options: prjquota
        fstype: ext4
        devices:
        - /dev/vdd
EOF
The preceding ConfigMap defines the following two local storage resources on the cn-zhangjiakou.192.168.XX.XX node:
- A VolumeGroup named volumegroup1 that consists of the /dev/vdb and /dev/vdc devices.
- A QuotaPath mounted at /mnt/path1 that is created on the /dev/vdd device, formatted as ext4, and mounted with the prjquota option.
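Before you continue, you can optionally confirm that the ConfigMap has been created. A minimal check, assuming you have kubectl access to the kube-system namespace:
kubectl -n kube-system get configmap node-resource-topo -o yaml
The output should contain the volumegroup and quotapath entries shown above.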
Run the following command to deploy the Node Resource Manager component.
cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-resource-manager
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-resource-manager
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-resource-manager-binding
subjects:
  - kind: ServiceAccount
    name: node-resource-manager
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: node-resource-manager
  apiGroup: rbac.authorization.k8s.io
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: node-resource-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-resource-manager
  template:
    metadata:
      labels:
        app: node-resource-manager
    spec:
      tolerations:
        - operator: "Exists"
      priorityClassName: system-node-critical
      serviceAccountName: node-resource-manager
      hostNetwork: true
      hostPID: true
      containers:
        - name: node-resource-manager
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: registry.cn-hangzhou.aliyuncs.com/acs/node-resource-manager:v1.18.8.0-5b1bdc2-aliyun
          imagePullPolicy: "Always"
          args:
            - "--nodeid=$(KUBE_NODE_NAME)"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          volumeMounts:
            - mountPath: /dev
              mountPropagation: "HostToContainer"
              name: host-dev
            - mountPath: /var/log/
              name: host-log
            - name: etc
              mountPath: /host/etc
            - name: config
              mountPath: /etc/unified-config
      volumes:
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-log
          hostPath:
            path: /var/log/
        - name: etc
          hostPath:
            path: /etc
        - name: config
          configMap:
            name: node-resource-topo
EOF
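Optionally, verify that the node-resource-manager pods are running before you proceed. A minimal check based on the app=node-resource-manager label used in the DaemonSet above:
kubectl -n kube-system get pods -l app=node-resource-manager -o wide
After the pod on cn-zhangjiakou.192.168.XX.XX is Running, you can also log on to that node and run vgs to confirm that a VolumeGroup named volumegroup1 was created from /dev/vdb and /dev/vdc.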
Run the following command to install the csi-plugin component.
The csi-plugin component provides full lifecycle management of volumes, including volume creation, mounting, unmounting, deletion, and expansion.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: localplugin.csi.alibabacloud.com
spec:
  attachRequired: false
  podInfoOnMount: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: csi-local-plugin
  name: csi-local-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-local-plugin
  template:
    metadata:
      labels:
        app: csi-local-plugin
    spec:
      containers:
        - args:
            - --v=5
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path=/var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-node-driver-registrar:v1.2.0
          imagePullPolicy: Always
          name: driver-registrar
          volumeMounts:
            - mountPath: /csi
              name: plugin-dir
            - mountPath: /registration
              name: registration-dir
        - args:
            - --endpoint=$(CSI_ENDPOINT)
            - --v=5
            - --nodeid=$(KUBE_NODE_NAME)
            - --driver=localplugin.csi.alibabacloud.com
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: SERVICE_PORT
              value: "11290"
            - name: CSI_ENDPOINT
              value: unix://var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
          image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.14.8-40ee9518-local
          imagePullPolicy: Always
          name: csi-localplugin
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              add:
                - SYS_ADMIN
            privileged: true
          volumeMounts:
            - mountPath: /var/lib/kubelet
              mountPropagation: Bidirectional
              name: pods-mount-dir
            - mountPath: /dev
              mountPropagation: HostToContainer
              name: host-dev
            - mountPath: /var/log/
              name: host-log
            - mountPath: /mnt
              mountPropagation: Bidirectional
              name: quota-path-dir
      hostNetwork: true
      hostPID: true
      serviceAccount: csi-admin
      tolerations:
        - operator: Exists
      volumes:
        - hostPath:
            path: /var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com
            type: DirectoryOrCreate
          name: plugin-dir
        - hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: DirectoryOrCreate
          name: registration-dir
        - hostPath:
            path: /var/lib/kubelet
            type: Directory
          name: pods-mount-dir
        - hostPath:
            path: /dev
            type: ""
          name: host-dev
        - hostPath:
            path: /var/log/
            type: ""
          name: host-log
        - hostPath:
            path: /mnt
            type: Directory
          name: quota-path-dir
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
    type: RollingUpdate
EOF
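Note that the DaemonSet above references the csi-admin ServiceAccount, which must already exist in the kube-system namespace. Optionally, confirm that the csi-local-plugin pods are running on your nodes:
kubectl -n kube-system get pods -l app=csi-local-plugin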
Step 2: Create applications that use local volumes
Use an LVM local volume to create an application
Run the following command to create a StorageClass.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-local
provisioner: localplugin.csi.alibabacloud.com
parameters:
  volumeType: LVM
  vgName: volumegroup1
  fsType: ext4
  lvmType: "striping"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
parameters.vgName is set to volumegroup1, which is the name of the VolumeGroup defined in the node-resource-topo ConfigMap. For more information, see LVM volumes.
Run the following command to create a PVC.
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc
  annotations:
    volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-local
EOF
The volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX annotation indicates that pods that later use this PVC will be scheduled to the cn-zhangjiakou.192.168.XX.XX node, which is the node on which the VolumeGroup resources defined in the ConfigMap reside.
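Because the csi-local StorageClass uses volumeBindingMode: WaitForFirstConsumer, the PVC remains in the Pending state until a pod that uses it is scheduled. You can check its status as follows; it changes to Bound after the application in the next step is created:
kubectl get pvc lvm-pvc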
Run the following command to create a sample application.
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-lvm
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          volumeMounts:
            - name: lvm-pvc
              mountPath: "/data"
      volumes:
        - name: lvm-pvc
          persistentVolumeClaim:
            claimName: lvm-pvc
EOF
After the application starts, the capacity of the /data directory in the container is the 2 GiB requested by the PVC.
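To verify the capacity, you can run df inside the pod. The following is a minimal check that assumes only this sample application carries the app=nginx label:
kubectl exec $(kubectl get pod -l app=nginx -o jsonpath='{.items[0].metadata.name}') -- df -h /data
The size reported for /data should be approximately 2 GiB (slightly less due to ext4 file system overhead).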
Use a QuotaPath local volume to create an application
Run the following command to create a StorageClass.
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-local-quota
parameters:
  volumeType: QuotaPath
  rootPath: /mnt/path1
provisioner: localplugin.csi.alibabacloud.com
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
EOF
parameters.rootPath is set to /mnt/path1, which is the name of the QuotaPath resource defined in the node-resource-topo ConfigMap. For more information, see QuotaPath volumes.
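Optionally, you can confirm that the QuotaPath was prepared by the Node Resource Manager. The following check is run directly on the cn-zhangjiakou.192.168.XX.XX node, not through kubectl:
mount | grep /mnt/path1
The output should show /dev/vdd mounted at /mnt/path1 as ext4 with the prjquota option.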
Run the following command to create a PVC.
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-quota
  annotations:
    volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX
  labels:
    app: web-quota
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: alicloud-local-quota
EOF
The volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX annotation indicates that pods that later use this PVC will be scheduled to the cn-zhangjiakou.192.168.XX.XX node, which is the node on which the QuotaPath resource defined in the ConfigMap resides.
Run the following command to create a sample application.
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-quota
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: disk-ssd
              mountPath: /data
      volumes:
        - name: "disk-ssd"
          persistentVolumeClaim:
            claimName: csi-quota
EOF
After the application starts, the capacity of the /data directory in the container equals the 2 GiB requested by the PVC.
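You can also confirm the binding and capacity from the PVC and the dynamically provisioned PV:
kubectl get pvc csi-quota
kubectl get pv
The PVC should be Bound with a capacity of 2Gi, and the corresponding PV should reference the alicloud-local-quota StorageClass.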
Step 3: Automatically expand local volumes
Install the auto-expansion plug-in. For more information, see Use storage-operator to deploy and upgrade storage components.
Run the following command to configure storage-operator.
kubectl edit cm storage-operator -n kube-system
In the editor, update the storage-auto-expander section of the ConfigMap as follows so that the component is deployed:
storage-auto-expander: |
  # deploy config
  install: true
  imageTag: "v1.18.8.1-d4301ee-aliyun"
Run the following command to check whether the auto-expansion component has started.
kubectl get pod -n kube-system | grep storage-auto-expander
Expected output:
storage-auto-expander-6bb575b68c-tt4hh 1/1 Running 0 2m41s
Run the following command to configure the auto-expansion policy.
cat << EOF | kubectl apply -f -
apiVersion: storage.alibabacloud.com/v1alpha1
kind: StorageAutoScalerPolicy
metadata:
  name: hybrid-expand-policy
spec:
  pvcSelector:
    matchLabels:
      app: web-quota
  namespaces:
    - default
  conditions:
    - name: condition1
      key: volume-capacity-used-percentage
      operator: Gt
      values:
        - "80"
  actions:
    - name: action
      type: volume-expand
      params:
        scale: 50%
        limits: 15Gi
EOF
As the preceding template shows, when the used capacity of a volume exceeds 80%, the volume is expanded by 50% at a time, up to a maximum of 15 GiB.
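To observe the policy in action, you can fill the volume past the 80% threshold and then watch the PVC capacity. The following is a rough sketch: the pod name web-quota-0 follows the StatefulSet naming convention, and the dd parameters are only illustrative.
# Write about 1.7 GiB of data into the 2 GiB volume to exceed the 80% threshold.
kubectl exec web-quota-0 -- dd if=/dev/zero of=/data/filler bs=1M count=1700
# Watch the PVC; its capacity should grow by 50% once the condition is met.
kubectl get pvc csi-quota -w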