
Add nodes using a custom script


The way cloud nodes are added to a hybrid cluster depends on how the self-managed Kubernetes cluster in your on-premises data center was built, for example with kubeadm, with Kubernetes binary files, or with the Rancher platform. This topic describes how to add nodes using a custom script.

Prerequisites

You have completed the operations before Step 5 of Build a hybrid elastic container cluster (elastic ECS).

Step 1: Write the custom node addition script

You can write the custom node script in either of the following two ways.

Method 1: Cluster built from Kubernetes binary files

The following example shows the kubelet configuration.

cat >/usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/data0/kubernetes/bin/kubelet \\
  --node-ip=${ALIBABA_CLOUD_NODE_NAME} \\
  --hostname-override=${ALIBABA_CLOUD_NODE_NAME} \\
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \\
  --config=/var/lib/kubelet/config.yaml \\
  --kubeconfig=/etc/kubernetes/kubelet.conf \\
  --cert-dir=/etc/kubernetes/pki/ \\
  --cni-bin-dir=/opt/cni/bin \\
  --cni-cache-dir=/opt/cni/cache \\
  --cni-conf-dir=/etc/cni/net.d \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes/logs \\
  --log-file=/var/log/kubernetes/logs/kubelet.log \\
  --node-labels=${ALIBABA_CLOUD_LABELS} \\
  --root-dir=/var/lib/kubelet \\
  --provider-id=${ALIBABA_CLOUD_PROVIDER_ID} \\
  --register-with-taints=${ALIBABA_CLOUD_TAINTS} \\
  --v=4
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
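Before writing the unit file above, it can help to fail fast if the registered cluster did not actually deliver the expected variables. The sketch below is illustrative only: `require_env` is a hypothetical helper of our own, and the exported values are the sample values used throughout this topic, not output from a real cluster.

```shell
#!/bin/bash
# Sample values standing in for what the registered cluster issues (hypothetical).
export ALIBABA_CLOUD_PROVIDER_ID="cn-shenzhen.i-wz92ewt14n9wx9mol2cd"
export ALIBABA_CLOUD_NODE_NAME="cn-shenzhen.192.168.1.113"
export ALIBABA_CLOUD_LABELS="alibabacloud.com/external=true,workload=cpu"
export ALIBABA_CLOUD_TAINTS="workload=ack:NoSchedule"

# require_env is our own helper, not part of any ACK tooling:
# exit early if a variable issued by the registered cluster is missing.
require_env() {
  if [ -z "${!1}" ]; then
    echo "missing required environment variable: $1" >&2
    exit 1
  fi
}

for v in ALIBABA_CLOUD_PROVIDER_ID ALIBABA_CLOUD_NODE_NAME \
         ALIBABA_CLOUD_LABELS ALIBABA_CLOUD_TAINTS; do
  require_env "$v"
done
echo "all registered-cluster variables present"
```

Running such a check at the top of the script makes a missing variable an explicit error instead of a silently misconfigured kubelet.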

When you write the custom node addition script, it must accept the system environment variables issued by the Alibaba Cloud registered cluster, as described below.

ALIBABA_CLOUD_PROVIDER_ID
  Description: The custom node addition script must receive and configure this variable; otherwise the registered cluster cannot run properly.
  Example: ALIBABA_CLOUD_PROVIDER_ID=cn-shenzhen.i-wz92ewt14n9wx9mol2cd

ALIBABA_CLOUD_NODE_NAME
  Description: The custom node addition script must receive and configure this variable; otherwise nodes in the registered cluster's node pool will show an abnormal state.
  Example: ALIBABA_CLOUD_NODE_NAME=cn-shenzhen.192.168.1.113

ALIBABA_CLOUD_LABELS
  Description: The custom node addition script must receive and configure this variable; otherwise node pool management will be abnormal and the scheduling of workloads across cloud and on-premises nodes will be affected.
  Example: ALIBABA_CLOUD_LABELS=alibabacloud.com/nodepool-id=np0e2031e952c4492bab32f512ce1422f6,ack.aliyun.com=cc3df6d939b0d4463b493b82d0d670c66,alibabacloud.com/instance-id=i-wz960ockeekr3dok06kr,alibabacloud.com/external=true,workload=cpu
  Note: workload=cpu is a user-defined node label configured in the node pool; the other labels are issued by the system.

ALIBABA_CLOUD_TAINTS
  Description: The custom node addition script must receive and configure this variable; otherwise the taints configured in the node pool will not take effect.
  Example: ALIBABA_CLOUD_TAINTS=workload=ack:NoSchedule
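As a quick illustration of the label format, ALIBABA_CLOUD_LABELS is a single comma-separated list of key=value pairs that can be split with plain bash. The value below is the sample value from the table above, not one issued by a real cluster.

```shell
#!/bin/bash
# Sample value from the table above.
ALIBABA_CLOUD_LABELS="alibabacloud.com/nodepool-id=np0e2031e952c4492bab32f512ce1422f6,ack.aliyun.com=cc3df6d939b0d4463b493b82d0d670c66,alibabacloud.com/instance-id=i-wz960ockeekr3dok06kr,alibabacloud.com/external=true,workload=cpu"

# Split on commas into an array and print one key=value pair per line.
IFS=',' read -r -a labels <<< "$ALIBABA_CLOUD_LABELS"
for kv in "${labels[@]}"; do
  echo "$kv"
done

# The user-defined node pool label rides alongside the system labels.
case ",$ALIBABA_CLOUD_LABELS," in
  *,workload=cpu,*) echo "user label workload=cpu present" ;;
esac
```

Because the list is passed verbatim to the kubelet `--node-labels` flag, any user-defined labels must also follow the key=value form.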

Method 2: Cluster built with kubeadm

Take a Kubernetes cluster initialized with kubeadm as an example. The following node initialization script is used when a new node is added to the self-managed cluster.

#!/bin/bash

# Remove old Docker versions.
yum remove -y docker \
docker-client \
docker-client-latest \
docker-ce-cli \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

# Set up the yum repository.
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install Docker and containerd.
yum install -y docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io-1.4.3 conntrack

# Enable and restart Docker.
systemctl enable docker
systemctl restart docker

# Disable swap.
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# Update /etc/sysctl.conf.
# If a setting already exists, modify it in place.
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
# The setting may be absent; append it.
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf
# Apply the settings.
sysctl -p

# Configure the Kubernetes yum repository.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove old versions.
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm, and kubectl.
yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4

# Reload systemd and start kubelet.
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet

kubeadm join --token 2q3s0u.w3d10wtsndqj**** 172.16.0.153:XXXX --discovery-token-unsafe-skip-ca-verification

In the preceding node initialization script, you must configure the variables issued by the registered cluster's control plane, namely ALIBABA_CLOUD_PROVIDER_ID, ALIBABA_CLOUD_LABELS, ALIBABA_CLOUD_NODE_NAME, and ALIBABA_CLOUD_TAINTS. The following example shows the configuration.

#!/bin/bash

# Remove old Docker versions.
yum remove -y docker \
docker-client \
docker-client-latest \
docker-ce-cli \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

# Set up the yum repository.
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install Docker and containerd.
yum install -y docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io-1.4.3 conntrack

# Enable and restart Docker.
systemctl enable docker
systemctl restart docker

# Disable swap.
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# Update /etc/sysctl.conf.
# If a setting already exists, modify it in place.
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
# The setting may be absent; append it.
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf
# Apply the settings.
sysctl -p

# Configure the Kubernetes yum repository.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove old versions.
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm, and kubectl.
yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4

# Configure node labels, taints, node name, and provider ID.
KUBEADM_CONFIG_FILE="/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf"
if [[ $ALIBABA_CLOUD_LABELS != "" ]];then
  option="--node-labels"
  if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_LABELS},@g" $KUBEADM_CONFIG_FILE
  elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS} @g" $KUBEADM_CONFIG_FILE
  else
    sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS}\"" $KUBEADM_CONFIG_FILE
  fi
fi

if [[ $ALIBABA_CLOUD_TAINTS != "" ]];then
  option="--register-with-taints"
  if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_TAINTS},@g" $KUBEADM_CONFIG_FILE
  elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS} @g" $KUBEADM_CONFIG_FILE
  else
    sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS}\"" $KUBEADM_CONFIG_FILE
  fi
fi

if [[ $ALIBABA_CLOUD_NODE_NAME != "" ]];then
  option="--hostname-override"
  if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_NODE_NAME},@g" $KUBEADM_CONFIG_FILE
  elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME} @g" $KUBEADM_CONFIG_FILE
  else
    sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME}\"" $KUBEADM_CONFIG_FILE
  fi
fi

if [[ $ALIBABA_CLOUD_PROVIDER_ID != "" ]];then
  option="--provider-id"
  if grep -- "${option}=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_PROVIDER_ID},@g" $KUBEADM_CONFIG_FILE
  elif grep "KUBELET_EXTRA_ARGS=" $KUBEADM_CONFIG_FILE &> /dev/null;then
    sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID} @g" $KUBEADM_CONFIG_FILE
  else
    sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID}\"" $KUBEADM_CONFIG_FILE
  fi
fi

# Reload systemd and start kubelet.
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet

kubeadm join --node-name $ALIBABA_CLOUD_NODE_NAME --token 2q3s0u.w3d10wtsndqj**** 172.16.0.153:XXXX --discovery-token-unsafe-skip-ca-verification
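The KUBELET_EXTRA_ARGS injection logic above can be exercised locally before it runs on a real node. The sketch below applies the same fallback branch to a throwaway copy of 10-kubeadm.conf created with mktemp (the drop-in content and label value are illustrative; on a real node the file lives under /usr/lib/systemd/system/kubelet.service.d/).

```shell
#!/bin/bash
# Throwaway stand-in for /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf.
KUBEADM_CONFIG_FILE="$(mktemp)"
cat > "$KUBEADM_CONFIG_FILE" <<'EOF'
[Service]
ExecStart=/usr/bin/kubelet $KUBELET_EXTRA_ARGS
EOF

ALIBABA_CLOUD_LABELS="alibabacloud.com/external=true,workload=cpu"

# Same three-branch logic as the script above. Here neither "--node-labels="
# nor "KUBELET_EXTRA_ARGS=" exists yet, so the last branch appends an
# Environment= line directly under [Service].
option="--node-labels"
if grep -- "${option}=" "$KUBEADM_CONFIG_FILE" &> /dev/null;then
  sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_LABELS},@g" "$KUBEADM_CONFIG_FILE"
elif grep "KUBELET_EXTRA_ARGS=" "$KUBEADM_CONFIG_FILE" &> /dev/null;then
  sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS} @g" "$KUBEADM_CONFIG_FILE"
else
  sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS}\"" "$KUBEADM_CONFIG_FILE"
fi

cat "$KUBEADM_CONFIG_FILE"
```

Running the branch against a scratch file like this makes it easy to confirm that the Environment line lands under [Service] before touching the node's real kubelet drop-in.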

Step 2: Save the custom script

Save the custom script on an HTTP file server, for example in an OSS bucket. An example address is https://kubelet-****.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh.
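After uploading, it can be worth confirming that nodes will be able to fetch the script over HTTP. The probe below is a sketch: the URL is the elided example address from this step, and the guard keeps the check from erroring out when curl is unavailable or the address is unreachable.

```shell
#!/bin/bash
# Example address from the step above; the bucket name is elided in this topic.
SCRIPT_URL="https://kubelet-****.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh"

# Only probe when curl is available; an HTTP 200 means nodes can fetch the script.
if command -v curl >/dev/null 2>&1; then
  curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 "$SCRIPT_URL" || echo "unreachable"
fi
```

If the bucket is internal-only (as the -internal endpoint suggests), run the probe from a machine inside the same VPC.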

Step 3: Use the custom script

  1. Connect the cluster in your on-premises data center to the registered cluster. For details, see Create a registered cluster in the console.

    The Agent component of the registered cluster automatically creates a ConfigMap named ack-agent-config in the kube-system namespace. The initial configuration is as follows.

    apiVersion: v1
    data:
      addNodeScriptPath: ""
    kind: ConfigMap
    metadata:
      name: ack-agent-config
      namespace: kube-system
  2. Set the addNodeScriptPath field to the path of the custom node addition script, https://kubelet-****.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh, then save the ConfigMap.

    An example of the updated configuration is shown below.

    apiVersion: v1
    data:
      addNodeScriptPath: https://kubelet-****.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh 
    kind: ConfigMap
    metadata:
      name: ack-agent-config
      namespace: kube-system
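Instead of editing the ConfigMap by hand, the field can also be set with a kubectl merge patch. The sketch below uses the example address from Step 2 and guards the patch so it only runs when a cluster is actually reachable; offline it just prints the payload.

```shell
#!/bin/bash
# Merge-patch payload that sets addNodeScriptPath (URL is the example from Step 2).
PATCH='{"data":{"addNodeScriptPath":"https://kubelet-****.oss-ap-southeast-3-internal.aliyuncs.com/attachnode.sh"}}'

# Only attempt the patch when kubectl exists and the cluster is reachable.
if command -v kubectl >/dev/null 2>&1 && kubectl get ns kube-system >/dev/null 2>&1; then
  kubectl -n kube-system patch configmap ack-agent-config --type merge -p "$PATCH"
else
  echo "no reachable cluster; patch payload: $PATCH"
fi
```

A merge patch touches only the data.addNodeScriptPath key, so the rest of the ConfigMap created by the Agent is left untouched.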