A hybrid cluster is a container cluster in which a self-managed Kubernetes cluster in an on-premises data center is connected to the cloud through an ACK registered cluster. It lets you scale out the self-managed cluster with cloud compute nodes and manage compute resources both on the cloud and on premises. This topic uses a self-managed IDC Kubernetes cluster that runs the Calico network plugin as an example to describe how to create a hybrid cluster.

Prerequisites

  • The network of the self-managed Kubernetes cluster in the on-premises data center is interconnected with the virtual private cloud (VPC) used by the registered cluster, including both the compute node network and the container network. You can use Cloud Enterprise Network (CEN) to connect the networks. For more information, see Getting started overview.
  • The target cluster must be connected to the registered cluster by using the private-network cluster import agent configuration provided by the registered cluster.
  • The cloud compute nodes added through the registered cluster can access the API server of the self-managed Kubernetes cluster in the on-premises data center.
  • The registered cluster is connected through kubectl. For more information, see Connect to a cluster by using kubectl.

Hybrid elastic container cluster architecture

Because self-managed Kubernetes clusters commonly use Calico in routing mode, this topic uses an IDC self-managed cluster that runs Calico in route reflector mode as an example. For the container network on the cloud side, we recommend the network plugin tailored to the cloud platform; Alibaba Cloud container clusters uniformly use the Terway plugin to manage the container network. The following figure shows the network topology of the hybrid cluster.

[Figure: hybrid cluster architecture]

In this example, the private network CIDR block in the on-premises data center is 192.168.0.0/24 and the container network CIDR block is 10.100.0.0/16, with the Calico plugin running in route reflector mode. On the cloud side, the VPC CIDR block is 10.0.0.0/8, the vSwitch for compute nodes uses 10.10.24.0/24, and the vSwitch for pods uses 10.10.25.0/24, with the Terway plugin running in shared ENI mode.
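When you draw up a network plan like the one above, the node CIDR blocks on the two sides must not overlap, otherwise traffic cannot be routed between them. The following standalone Bash sketch is an illustration (not part of the official setup): it converts two CIDR blocks into integer ranges and checks them for overlap, using the example CIDR blocks from this topic.

```shell
#!/bin/bash
# Convert a CIDR block such as 192.168.0.0/24 into an integer range "start end".
cidr_to_range() {
  local ip=${1%/*} prefix=${1#*/} a b c d
  IFS=. read -r a b c d <<< "$ip"
  local base=$(( (a << 24) | (b << 16) | (c << 8) | d ))
  echo "$base $(( base + (1 << (32 - prefix)) - 1 ))"
}

# Return success (exit status 0) if the two CIDR blocks overlap.
cidr_overlaps() {
  local s1 e1 s2 e2
  read -r s1 e1 <<< "$(cidr_to_range "$1")"
  read -r s2 e2 <<< "$(cidr_to_range "$2")"
  (( s1 <= e2 && s2 <= e1 ))
}

# The IDC node network and the cloud node vSwitch from the example plan
# must be disjoint for cross-network routing to work.
if cidr_overlaps 192.168.0.0/24 10.10.24.0/24; then
  echo "ERROR: node CIDR blocks overlap"
else
  echo "OK: node CIDR blocks are disjoint"
fi
```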

Use an ACK registered cluster to build a hybrid container cluster

  1. Configure the container network plugins on and off the cloud.

    In a hybrid cluster, the Calico plugin must run only on the on-premises nodes, and the Terway plugin must run only on the cloud nodes.

    ECS nodes added through the ACK registered cluster are automatically labeled with alibabacloud.com/external=true. To keep the Calico pods in the IDC running only on on-premises nodes, add a nodeAffinity configuration to the calico-node DaemonSet, as shown in the following example:
    cat <<EOF > calico-ds.patch
    spec:
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: alibabacloud.com/external
                    operator: NotIn
                    values:
                    - "true"
                  - key: type
                    operator: NotIn
                    values:
                    - "virtual-kubelet"
    EOF
    kubectl -n kube-system patch ds calico-node -p "$(cat calico-ds.patch)"
  2. Deploy the Terway network plugin.
    1. Log on to the Container Service console. In the left-side navigation pane, choose Clusters.
    2. On the Clusters page, click the name of the target cluster. In the left-side navigation pane, choose Operations > Add-ons.
    3. On the Add-ons page, search for the terway-eniip component, click Install in the lower-right corner of the component card, and then click OK.
    4. Run the following command to view the Terway DaemonSet.
      Before cloud nodes are added to the hybrid cluster, no Terway pod is scheduled to any on-premises node.
      kubectl -n kube-system get ds | grep terway

      Expected output:

      terway-eniip   0         0         0       0            0           alibabacloud.com/external=true      16s
      The output shows that Terway pods run only on cloud ECS nodes labeled with alibabacloud.com/external=true.
  3. Configure an AccessKey pair for the Terway plugin.
    1. Grant the RAM permissions that the Terway plugin requires. The policy content is shown below. For more information, see Grant permissions to a RAM user.
      {
          "Version": "1",
          "Statement": [
              {
                  "Action": [
                      "ecs:CreateNetworkInterface",
                      "ecs:DescribeNetworkInterfaces",
                      "ecs:AttachNetworkInterface",
                      "ecs:DetachNetworkInterface",
                      "ecs:DeleteNetworkInterface",
                      "ecs:DescribeInstanceAttribute",
                      "ecs:AssignPrivateIpAddresses",
                      "ecs:UnassignPrivateIpAddresses",
                      "ecs:DescribeInstances",
                      "ecs:ModifyNetworkInterfaceAttribute"
                  ],
                  "Resource": [
                      "*"
                  ],
                  "Effect": "Allow"
              },
              {
                  "Action": [
                      "vpc:DescribeVSwitches"
                  ],
                  "Resource": [
                      "*"
                  ],
                  "Effect": "Allow"
              }
          ]
      }
    2. Run the following command to edit the eni-config ConfigMap, and set eni_conf.access_key and eni_conf.access_secret.
      kubectl -n kube-system edit cm eni-config
      An example eni-config configuration:
      kind: ConfigMap
      apiVersion: v1
      metadata:
       name: eni-config
       namespace: kube-system
      data:
       eni_conf: |
        {
         "version": "1",
         "max_pool_size": 5,
         "min_pool_size": 0,
         "vswitches": {{.PodVswitchId}},
         "eni_tags": {"ack.aliyun.com":"{{.ClusterID}}"},
         "service_cidr": "{{.ServiceCIDR}}",
         "security_group": "{{.SecurityGroupId}}",
         "access_key": "",
         "access_secret": "",
         "vswitch_selection_policy": "ordered"
        }
       10-terway.conf: |
        {
         "cniVersion": "0.3.0",
         "name": "terway",
         "type": "terway"
        }
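      For reference, a populated eni_conf might look like the following sketch. The vSwitch ID, cluster ID, security group ID, and service CIDR below are hypothetical placeholder values, and the sketch assumes the vswitches field takes a zone-to-vSwitch-ID map; substitute the values issued by your own registered cluster.

      ```yaml
      eni_conf: |
       {
        "version": "1",
        "max_pool_size": 5,
        "min_pool_size": 0,
        "vswitches": {"cn-hangzhou-h": ["vsw-bp1xxxxxxxxxxxxxxxxx"]},
        "eni_tags": {"ack.aliyun.com": "c8xxxxxxxxxxxxxxxxxxxxxxxxx"},
        "service_cidr": "172.16.0.0/16",
        "security_group": "sg-bp1xxxxxxxxxxxxxxxxx",
        "access_key": "<RAM user AccessKey ID>",
        "access_secret": "<RAM user AccessKey Secret>",
        "vswitch_selection_policy": "ordered"
       }
      ```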
  4. Configure the custom node initialization script.
    1. Adapt the original node initialization script of the self-managed Kubernetes cluster.
      Take an IDC self-managed Kubernetes cluster initialized with the kubeadm tool as an example. The original script init-node.sh used to add a new node to the cluster in the IDC is as follows:
      #!/bin/bash
      
      export K8S_VERSION=1.24.3
      
      export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
      cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
      overlay
      br_netfilter
      EOF
      modprobe overlay
      modprobe br_netfilter
      cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
      net.bridge.bridge-nf-call-iptables  = 1
      net.ipv4.ip_forward                 = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      EOF
      sysctl --system
      yum remove -y containerd.io
      yum install -y yum-utils device-mapper-persistent-data lvm2
      yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      yum install -y containerd.io-1.4.3
      mkdir -p /etc/containerd
      containerd config default > /etc/containerd/config.toml
      sed -i "s#k8s.gcr.io#registry.aliyuncs.com/k8sxio#g"  /etc/containerd/config.toml
      sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
      sed -i "s#https://registry-1.docker.io#${REGISTRY_MIRROR}#g"  /etc/containerd/config.toml
      systemctl daemon-reload
      systemctl enable containerd
      systemctl restart containerd
      yum install -y nfs-utils
      yum install -y wget
      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=0
      repo_gpgcheck=0
      gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
             http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      yum remove -y kubelet kubeadm kubectl
      yum install -y kubelet-$K8S_VERSION kubeadm-$K8S_VERSION kubectl-$K8S_VERSION
      crictl config runtime-endpoint /run/containerd/containerd.sock
      systemctl daemon-reload
      systemctl enable kubelet && systemctl start kubelet
      containerd --version
      kubelet --version
      
      kubeadm join 10.200.1.253:XXXX --token cqgql5.1mdcjcvhszol**** --discovery-token-unsafe-skip-ca-verification
      The custom node initialization script init-node-ecs.sh that you configure in the ACK registered cluster is the init-node.sh script extended to receive and apply the environment variables delivered by the registered cluster: ALIBABA_CLOUD_PROVIDER_ID, ALIBABA_CLOUD_NODE_NAME, ALIBABA_CLOUD_LABELS, and ALIBABA_CLOUD_TAINTS. An example init-node-ecs.sh script:
      #!/bin/bash
      
      export K8S_VERSION=1.24.3
      
      export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
      cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
      overlay
      br_netfilter
      EOF
      modprobe overlay
      modprobe br_netfilter
      cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
      net.bridge.bridge-nf-call-iptables  = 1
      net.ipv4.ip_forward                 = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      EOF
      sysctl --system
      yum remove -y containerd.io
      yum install -y yum-utils device-mapper-persistent-data lvm2
      yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      yum install -y containerd.io-1.4.3
      mkdir -p /etc/containerd
      containerd config default > /etc/containerd/config.toml
      sed -i "s#k8s.gcr.io#registry.aliyuncs.com/k8sxio#g"  /etc/containerd/config.toml
      sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
      sed -i "s#https://registry-1.docker.io#${REGISTRY_MIRROR}#g"  /etc/containerd/config.toml
      systemctl daemon-reload
      systemctl enable containerd
      systemctl restart containerd
      yum install -y nfs-utils
      yum install -y wget
      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=0
      repo_gpgcheck=0
      gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
             http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      yum remove -y kubelet kubeadm kubectl
      yum install -y kubelet-$K8S_VERSION kubeadm-$K8S_VERSION kubectl-$K8S_VERSION
      crictl config runtime-endpoint /run/containerd/containerd.sock
      systemctl daemon-reload
      systemctl enable kubelet && systemctl start kubelet
      containerd --version
      kubelet --version
      
      ####### <Added section
      # Configure node labels, taints, node name, and provider ID
      #KUBEADM_CONFIG_FILE="/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf"
      KUBELET_CONFIG_FILE="/etc/sysconfig/kubelet"
      #KUBELET_CONFIG_FILE="/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
      if [[ $ALIBABA_CLOUD_LABELS != "" ]];then
        option="--node-labels"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_LABELS},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      if [[ $ALIBABA_CLOUD_TAINTS != "" ]];then
        option="--register-with-taints"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_TAINTS},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      if [[ $ALIBABA_CLOUD_NODE_NAME != "" ]];then
        option="--hostname-override"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_NODE_NAME},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      if [[ $ALIBABA_CLOUD_PROVIDER_ID != "" ]];then
        option="--provider-id"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_PROVIDER_ID},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      # Reload systemd and start kubelet
      systemctl daemon-reload
      systemctl enable kubelet && systemctl start kubelet
      
      ####### Added section>
      
      kubeadm join 10.200.1.253:XXXX --token cqgql5.1mdcjcvhszol**** --discovery-token-unsafe-skip-ca-verification
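      The kubelet-flag injection logic added above can be exercised in isolation. The following standalone sketch (an illustration, not part of the node script) applies the same sed rules to a temporary kubelet sysconfig file and prints the resulting KUBELET_EXTRA_ARGS line; the label values are hypothetical examples.

      ```shell
      #!/bin/bash
      # Simulate the --node-labels injection branch of init-node-ecs.sh
      # against a throwaway sysconfig file instead of /etc/sysconfig/kubelet.
      KUBELET_CONFIG_FILE=$(mktemp)
      echo 'KUBELET_EXTRA_ARGS=' > "$KUBELET_CONFIG_FILE"

      ALIBABA_CLOUD_LABELS="alibabacloud.com/external=true,workload=cloud"
      option="--node-labels"
      if grep -- "${option}=" "$KUBELET_CONFIG_FILE" &> /dev/null; then
        # Flag already present: prepend the new labels to its value.
        sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_LABELS},@g" "$KUBELET_CONFIG_FILE"
      elif grep "KUBELET_EXTRA_ARGS=" "$KUBELET_CONFIG_FILE" &> /dev/null; then
        # KUBELET_EXTRA_ARGS exists but has no --node-labels yet: add the flag.
        sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS} @g" "$KUBELET_CONFIG_FILE"
      fi

      cat "$KUBELET_CONFIG_FILE"
      ```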
    2. Save and configure the custom script.

      Save the custom script on an HTTP file server, for example, in an OSS bucket. The example URL is https://kubelet-****.oss-cn-hangzhou-internal.aliyuncs.com/init-node-ecs.sh.

      Set the addNodeScriptPath field to the script URL https://kubelet-****.oss-cn-hangzhou-internal.aliyuncs.com/init-node-ecs.sh and save the configuration, as shown below:
      apiVersion: v1
      data:
        addNodeScriptPath: https://kubelet-****.oss-cn-hangzhou-internal.aliyuncs.com/init-node-ecs.sh
      kind: ConfigMap
      metadata:
        name: ack-agent-config
        namespace: kube-system
    After you complete the preceding configuration, you can create node pools and scale out ECS nodes in the target ACK registered cluster.
  5. Create a node pool and scale out ECS nodes.
    1. Log on to the Container Service console. In the left-side navigation pane, choose Clusters.
    2. On the Clusters page, click the name of the target cluster. In the left-side navigation pane, choose Nodes > Node Pools.
    3. On the Node Pools page, create a node pool and scale out nodes as needed. For more information, see Manage node pools.

Related documents

Plan the container network for Terway. For more information, see Kubernetes cluster network planning.

Connect the Kubernetes network in the on-premises data center to the cloud VPC. For more information, see Features.

Create a registered cluster and connect a self-managed Kubernetes cluster in an on-premises data center. For more information, see Create a registered cluster and connect an on-premises cluster.