Deploying a Kubernetes 1.14.0 HA Cluster with Kubeadm, ETCD 3.3.10, and Keepalived 2.0.16


Table of Contents

[toc]


1. Introduction

  The rise of containers has reshaped the way we develop, deploy, and maintain software. Containers let us package the different services that make up an application into separate containers and deploy those containers across a set of virtual and physical machines. This gave rise to container orchestration tools, which automate the deployment, management, scaling, and availability of container-based applications. Kubernetes lets us deploy and manage container-based applications at scale. One of its main strengths is how it brings greater reliability and stability to distributed, container-based applications through dynamic scheduling of containers. But how do we make sure Kubernetes itself stays up when a component or its master node fails?

1.1. Why do we need Kubernetes High Availability (HA)?

  Kubernetes High Availability means setting up Kubernetes and its supporting components so that there is no single point of failure. A single-master cluster can easily fail, whereas a multi-master cluster uses several master nodes, each of which has access to the same worker nodes. In a single-master cluster, critical components such as the API server and controller manager run only on that one master, and if it fails, no new services, pods, and so on can be created. In a Kubernetes HA environment, these critical components run on multiple masters, and if any master fails, the remaining masters keep the cluster running.
  

1.2. Advantages of multiple masters

  In a single-master setup, the master manages the etcd database, API server, controller manager, and scheduler, together with the worker nodes. If that single master fails, all worker nodes fail with it and the whole cluster is lost. In a multi-master setup, by contrast, multiple masters provide high availability for a single cluster and improve network performance, because all masters behave like a unified data center. A multi-master setup protects against a wide range of failure modes, from the loss of a single worker node to the failure of a master's control-plane services. By providing redundancy, a multi-master cluster gives end users a highly available system.
  

1.3. Steps to implement Kubernetes HA

  Before moving on to the steps for achieving high availability, let's use the diagram below to understand what we want to build:
[Figure: kubeadm1.14.0-1]

  
Master nodes:

  Each master node in a multi-master environment runs its own copy of the Kube API server. This can be used for load balancing among the master nodes. The master nodes also run their own copy of the etcd database, which stores all of the cluster's data. In addition to the API server and the etcd database, the master nodes run the Kubernetes controller manager, which handles replication, and the scheduler, which schedules containers onto nodes.
  
Worker nodes:

  Just like the master nodes, the worker nodes in a multi-master cluster run their own components, mainly to orchestrate pods in the Kubernetes cluster. For each master server you have provisioned, follow the installation guide to install kubeadm and its dependencies.

  In this post we will use Kubernetes 1.14.0 to implement HA.
Note: if the cgroup drivers of kubelet and Docker differ, the master will not come back up after a restart.
  

2. Environment preparation

  The example in this article uses four machines, with the following hostnames and IP addresses:

IP address    Hostname
10.0.0.100    c0 (master)
10.0.0.101    c1 (master)
10.0.0.102    c2 (master)
10.0.0.103    c3 (worker)

  
  The /etc/hosts file is the same on all four machines; c0 is shown as an example:

[root@c0 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.100 c0
10.0.0.101 c1
10.0.0.102 c2
10.0.0.103 c3

  

2.1. Network configuration

  The following uses c0 as an example

[root@c0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=6d8d9ad6-37b5-431a-ab16-47d0aa00d01f
DEVICE=eth0
ONBOOT=yes
IPADDR0=10.0.0.100
PREFIX0=24
GATEWAY0=10.0.0.1
DNS1=10.0.0.1
DNS2=8.8.8.8

  
  Restart the network:

[root@c0 ~]# service network restart

  
  Switch the yum source to the Aliyun mirror

[root@c0 ~]# yum install -y wget
[root@c0 ~]# cd /etc/yum.repos.d/
[root@c0 yum.repos.d]# mv CentOS-Base.repo CentOS-Base.repo.bak
[root@c0 yum.repos.d]# wget http://mirrors.aliyun.com/repo/Centos-7.repo
[root@c0 yum.repos.d]# wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
[root@c0 yum.repos.d]# yum clean all
[root@c0 yum.repos.d]# yum makecache

  
  Install the basic tool packages

[root@c0 ~]# yum install net-tools checkpolicy gcc dkms foomatic openssh-server bash-completion tree make openssl openssl-devel libnl3-devel -y

  

2.2. Set the HOSTNAME

  Set the hostname on each of the four machines in turn; c0 is shown below

[root@c0 ~]# hostnamectl --static set-hostname c0
[root@c0 ~]# hostnamectl status
   Static hostname: c0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 04c3f6d56e788345859875d9f49bd4bd
           Boot ID: ba02919abe4245aba673aaf5f778ad10
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.el7.x86_64
      Architecture: x86-64

  

2.3. Configure passwordless SSH login

  Generate a key pair separately on each machine

[root@c0 ~]# ssh-keygen
# Press Enter at every prompt until it finishes

  
  Copy the key generated by ssh-keygen to each of the other three machines; c0 is shown below

[root@c0 ~]# ssh-copy-id c0
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c0 (10.0.0.100)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c0's password:
[root@c0 ~]# rm -rf ~/.ssh/known_hosts
[root@c0 ~]# clear
[root@c0 ~]# ssh-copy-id c0
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c0 (10.0.0.100)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c0's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c0'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c1 (10.0.0.101)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c1'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c2 (10.0.0.102)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c2'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c3 (10.0.0.103)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c3's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c3'"
and check to make sure that only the key(s) you wanted were added.
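
The interactive ssh-copy-id runs above can also be wrapped in a loop; a minimal sketch, assuming you are willing to type each host's root password once:

for N in $(seq 0 3); do ssh-copy-id -i ~/.ssh/id_rsa.pub c$N; done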

  
  Verify that the keys were configured successfully

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N hostname; done;
c0
c1
c2
c3

  

2.4. Disable the firewall

  Run the following command on every machine; c0 is shown as an example:

[root@c0 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

  

2.5. Disable swap

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N swapoff -a; done;

Before and after disabling swap you can check its status with free -h; once it is disabled, the swap total should be 0.

  
  On every machine, edit /etc/fstab and comment out the last /dev/mapper/centos-swap swap entry; c0 is shown below

[root@c0 ~]# sed -i "s/\/dev\/mapper\/centos-swap/# \/dev\/mapper\/centos-swap/" /etc/fstab
[root@c1 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Jan 28 11:49:11 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=93572ab6-90da-4cfe-83a4-93be7ad8597c /boot                   xfs     defaults        0 0
# /dev/mapper/centos-swap swap                    swap    defaults        0 0
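
A quick way to confirm swap is off on all four machines (a sketch; the Swap line of free -h should show 0B totals):

for N in $(seq 0 3); do ssh c$N free -h | grep -i swap; done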

  

2.6. Disable SELinux

  Disable SELinux on every machine; c0 is shown below

[root@c0 ~]# setenforce 0
[root@c0 ~]# sed -i "s/SELINUX=enforcing/SELINUX=permissive/" /etc/selinux/config
[root@c0 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

SELinux is Security-Enhanced Linux. The setenforce 0 and sed commands above put SELinux into permissive mode (effectively disabling it). Only after this is done can containers access the host filesystem, which the pod network needs to work properly.

  

2.7. Install NTP

  Install the NTP time-synchronization tool and start NTP

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N yum install ntp -y; done;

  
  On every machine, enable NTP to start at boot

[root@c0 ~]# systemctl enable ntpd && systemctl start ntpd

  
  Check the time on each machine in turn:

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N date; done;
Sat Feb  9 18:11:48 CST 2019
Sat Feb  9 18:11:48 CST 2019
Sat Feb  9 18:11:49 CST 2019
Sat Feb  9 18:11:49 CST 2019

If the machines' clocks drift later on, you can synchronize them manually on each machine with: ntpdate cn.pool.ntp.org

  

2.8. Create installation directories

  Create the directories that ETCD will use later

# Directories needed by ETCD
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_app/k8s/etcd/{bin,cfg,ssl} -p; done;
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_data/etcd -p; done;

  

2.9. Install and configure CFSSL

  CFSSL lets us build a local CA and generate the certificates we will need later.

[root@c0 ~]# mkdir -p /home/work/_src
[root@c0 ~]# cd /home/work/_src
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@c0 _src]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

# Copy CFSSL to all master nodes
[root@c0 _src]# for N in $(seq 0 2); do scp cfssl_linux-amd64 c$N:/usr/local/bin/cfssl; done;
[root@c0 _src]# for N in $(seq 0 2); do scp cfssljson_linux-amd64 c$N:/usr/local/bin/cfssljson; done;
[root@c0 _src]# for N in $(seq 0 2); do scp cfssl-certinfo_linux-amd64 c$N:/usr/local/bin/cfssl-certinfo; done;
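
A quick way to confirm the binaries landed on every master and are executable (a sketch):

for N in $(seq 0 2); do ssh c$N cfssl version; done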

  

2.10. Upgrade the kernel

  The stock 3.10 kernel lacks the ip_vs_fo.ko module, which prevents kube-proxy from enabling IPVS mode. ip_vs_fo.ko first appeared in kernel 3.19, and that kernel version is not available from the common RPM repositories of RedHat-family distributions.

[root@c0 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
[root@c0 ~]# yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y
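
If you prefer not to pick the new kernel by hand at every boot, you can make it the default before rebooting; a sketch, assuming a BIOS machine where GRUB2 writes to /boot/grub2/grub.cfg and the newly installed kernel-ml is the first (0th) menu entry:

grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg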

  
  After rebooting and manually selecting the new kernel, run the following command to check the new kernel:

[root@c0 ~]# hostnamectl
   Static hostname: c0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 04c3f6d56e788345859875d9f49bd4bd
           Boot ID: 40a19388698f4907bd233a8cff76f36e
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 4.20.7-1.el7.elrepo.x86_64
      Architecture: x86-64

  

3. Install Docker 18.06.3-ce

3.1. Remove old versions of Docker

  The official removal method

$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

  
  Another way to remove an old Docker is to first list the Docker packages that are installed

[root@c0 ~]# yum list installed | grep docker
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
containerd.io.x86_64            1.2.2-3.el7                    @docker-ce-stable
docker-ce.x86_64                3:18.09.1-3.el7                @docker-ce-stable
docker-ce-cli.x86_64            1:18.09.1-3.el7                @docker-ce-stable

  
  Remove the installed Docker packages

[root@c0 ~]# yum -y remove docker-ce.x86_64 docker-ce-cli.x86_64 containerd.io.x86_64

  
  Remove Docker images and containers

[root@c0 ~]# rm -rf /var/lib/docker

  

3.2. Set up the repository

  Install the required packages. yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are needed by the devicemapper storage driver.
  Do this on every machine; c0 is shown as an example

[root@c0 ~]# sudo yum install -y yum-utils device-mapper-persistent-data lvm2
[root@c0 ~]# sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

  

3.3. Install Docker

[root@c0 ~]# sudo yum install docker-ce-18.06.3.ce-3.el7 -y

  

3.4. Create the Docker daemon configuration

## Create /etc/docker directory.
mkdir -p /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

  

3.5. Start Docker

[root@c0 ~]# systemctl enable docker && systemctl start docker
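
Because kubeadm expects kubelet and Docker to use the same cgroup driver (see the note in section 1), it is worth confirming that Docker picked up the systemd driver from daemon.json; a quick check:

docker info 2>/dev/null | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd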

  

4. Install the ETCD 3.3.10 cluster

4.1. Create the ETCD certificates

4.1.1. Generate the JSON request file used for the ETCD SERVER certificate

[root@c0 ~]# mkdir -p /home/work/_src/ssl_etcd
[root@c0 ~]# cd /home/work/_src/ssl_etcd
[root@c0 ssl_etcd]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

The default signing policy sets the certificate lifetime to 10 years (87600h).
The etcd profile specifies what the certificates may be used for:
signing: the certificate can be used to sign other certificates; the generated ca.pem will have CA=TRUE
server auth: clients can use this CA to verify certificates presented by servers
client auth: servers can use this CA to verify certificates presented by clients

  

4.1.2. Create the ETCD CA certificate configuration file

[root@c0 ssl_etcd]# cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

  

4.1.3. Create the ETCD SERVER certificate configuration file

[root@c0 ssl_etcd]# cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "10.0.0.100",
    "10.0.0.101",
    "10.0.0.102"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

  

4.1.4. Generate the ETCD CA certificate and private key

[root@c0 ssl_etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/02/14 18:44:37 [INFO] generating a new CA key and certificate from CSR
2019/02/14 18:44:37 [INFO] generate received request
2019/02/14 18:44:37 [INFO] received CSR
2019/02/14 18:44:37 [INFO] generating key: rsa-2048
2019/02/14 18:44:38 [INFO] encoded CSR
2019/02/14 18:44:38 [INFO] signed certificate with serial number 384346866475232855604658229421854651219342845660
[root@c0 ssl_etcd]# tree
.
├── ca-config.json
├── ca.csr
├── ca-csr.json
├── ca-key.pem
├── ca.pem
└── server-csr.json

0 directories, 6 files

  

4.1.5. Generate the ETCD SERVER certificate and private key

[root@c0 ssl_etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2019/05/26 16:25:42 [INFO] generate received request
2019/05/26 16:25:42 [INFO] received CSR
2019/05/26 16:25:42 [INFO] generating key: rsa-2048
2019/05/26 16:25:43 [INFO] encoded CSR
2019/05/26 16:25:43 [INFO] signed certificate with serial number 565091096936877671961250463407406860610702892634
2019/05/26 16:25:43 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl_etcd]# tree
.
├── ca-config.json
├── ca.csr
├── ca-csr.json
├── ca-key.pem
├── ca.pem
├── server.csr
├── server-csr.json
├── server-key.pem
└── server.pem

0 directories, 9 files

  
  Copy the generated certificates into the directory ETCD uses

[root@c0 ssl_etcd]# cp *.pem /home/work/_app/k8s/etcd/ssl/
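
If you want to double-check the generated server certificate, for example that its SAN list contains the three master IPs, cfssl-certinfo (installed in section 2.9) can print it; a sketch:

cfssl-certinfo -cert /home/work/_app/k8s/etcd/ssl/server.pem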

  

4.2. Install ETCD

4.2.1. Download ETCD

[root@c0 ssl_etcd]# cd /home/work/_src/
[root@c0 _src]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
[root@c0 _src]# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
[root@c0 _src]# cd etcd-v3.3.10-linux-amd64
[root@c0 etcd-v3.3.10-linux-amd64]# cp etcd etcdctl /home/work/_app/k8s/etcd/bin/

  

4.2.2. Create the ETCD systemd unit file

  Create and save /usr/lib/systemd/system/etcd.service with the following content:

[root@c0 etcd-v3.3.10-linux-amd64]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/home/work/_app/k8s/etcd/cfg/etcd.conf
ExecStart=/home/work/_app/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/home/work/_app/k8s/etcd/ssl/server.pem \
--key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/home/work/_app/k8s/etcd/ssl/server.pem \
--peer-key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

  

4.2.3. Copy the ETCD binaries, certificates, and systemd unit file to the other master nodes

[root@c0 ~]# for N in $(seq 0 2); do scp -r /home/work/_app/k8s/etcd c$N:/home/work/_app/k8s/; done;
[root@c0 ~]# for N in $(seq 0 2); do scp -r /usr/lib/systemd/system/etcd.service c$N:/usr/lib/systemd/system/etcd.service; done;

  

4.2.4. ETCD main configuration file

  On c0, create /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:

[root@c0 ~]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
# Name of this ETCD node
ETCD_NAME="etcd00"
# ETCD data directory
ETCD_DATA_DIR="/home/work/_data/etcd"
# URLs this node listens on for peer traffic, comma-separated, in the form scheme://IP:PORT where scheme is http or https
ETCD_LISTEN_PEER_URLS="https://10.0.0.100:2380"
# URLs this node listens on for client traffic
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.100:2379"

#[Clustering]
# Peer URLs this member advertises to the rest of the cluster; they carry cluster data, so they must be reachable by every member
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.100:2380"
# Client URLs this member advertises to the rest of the cluster
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.100:2379"
# All members of the initial cluster, in the form ETCD_NAME=ETCD_INITIAL_ADVERTISE_PEER_URLS, comma-separated
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380"
# Token for the initial cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; new means bootstrapping a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

  
  On c1, create /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:

[root@c1 ~]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/home/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.101:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.101:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

  
  On c2, create /home/work/_app/k8s/etcd/cfg/etcd.conf with the following content:

[root@c2 ~]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/home/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.102:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.102:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

   

4.2.5. Start the ETCD service

  Run this separately on each of the ETCD nodes

[root@c0 ~]# systemctl daemon-reload && systemctl enable etcd && systemctl start etcd

To stop the ETCD service: systemctl stop etcd && systemctl disable etcd && systemctl daemon-reload

  

4.2.6. Check the health of the ETCD cluster

[root@c0 ~]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem cluster-health
member 7c12135a398849e3 is healthy: got healthy result from https://10.0.0.102:2379
member 99c2fd4fe11e28d9 is healthy: got healthy result from https://10.0.0.100:2379
member f2fd0c12369e0d75 is healthy: got healthy result from https://10.0.0.101:2379
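
The check above uses etcdctl's v2 client. Since Kubernetes 1.14 talks to etcd over the v3 API, you may also want to confirm v3 endpoint health; a sketch using the same certificates:

ETCDCTL_API=3 /home/work/_app/k8s/etcd/bin/etcdctl \
  --cacert=/home/work/_app/k8s/etcd/ssl/ca.pem \
  --cert=/home/work/_app/k8s/etcd/ssl/server.pem \
  --key=/home/work/_app/k8s/etcd/ssl/server-key.pem \
  --endpoints=https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379 \
  endpoint health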

  

4.2.7. View the ETCD cluster members

  The following command shows that c0 (10.0.0.100) has been elected leader

[root@c0 ~]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem  member list
7c12135a398849e3: name=etcd02 peerURLs=https://10.0.0.102:2380 clientURLs=https://10.0.0.102:2379 isLeader=false
99c2fd4fe11e28d9: name=etcd00 peerURLs=https://10.0.0.100:2380 clientURLs=https://10.0.0.100:2379 isLeader=true
f2fd0c12369e0d75: name=etcd01 peerURLs=https://10.0.0.101:2380 clientURLs=https://10.0.0.101:2379 isLeader=false

  
  Check the service status

[root@c0 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-05-26 16:33:56 CST; 2min 17s ago
 Main PID: 9154 (etcd)
    Tasks: 10
   Memory: 28.2M
   CGroup: /system.slice/etcd.service
           └─9154 /home/work/_app/k8s/etcd/bin/etcd --name=etcd00 --data-dir=/home/work/_data/etcd --listen-peer-urls=https://10.0.0.100:2380 --listen-client-urls=https://10.0.0.100:2379,http://127.0.0.1:2379 --advertise-clien...

May 26 16:33:56 c0 etcd[9154]: enabled capabilities for version 3.0
May 26 16:33:57 c0 etcd[9154]: peer 7c12135a398849e3 became active
May 26 16:33:57 c0 etcd[9154]: established a TCP streaming connection with peer 7c12135a398849e3 (stream Message writer)
May 26 16:33:57 c0 etcd[9154]: established a TCP streaming connection with peer 7c12135a398849e3 (stream MsgApp v2 writer)
May 26 16:33:57 c0 etcd[9154]: established a TCP streaming connection with peer 7c12135a398849e3 (stream MsgApp v2 reader)
May 26 16:33:57 c0 etcd[9154]: 99c2fd4fe11e28d9 initialzed peer connection; fast-forwarding 8 ticks (election ticks 10) with 2 active peer(s)
May 26 16:33:57 c0 etcd[9154]: established a TCP streaming connection with peer 7c12135a398849e3 (stream Message reader)
May 26 16:34:00 c0 etcd[9154]: updating the cluster version from 3.0 to 3.3
May 26 16:34:00 c0 etcd[9154]: updated the cluster version from 3.0 to 3.3
May 26 16:34:00 c0 etcd[9154]: enabled capabilities for version 3.3

  

5. Set up load balancing with Keepalived 2.0.16

5.1. Install Keepalived 2.0.16

  We use Keepalived for load balancing; install Keepalived on all master nodes

cd /home/work/_src/
wget https://www.keepalived.org/software/keepalived-2.0.16.tar.gz
tar -xzvf keepalived-2.0.16.tar.gz
cd keepalived-2.0.16
./configure --prefix=/home/work/_app/keepalived 
make && make install

  

5.2. Create the configuration file

  On c0, create and edit /home/work/_app/keepalived/etc/keepalived.conf with the following content:

[root@c0 ~]# cat /home/work/_app/keepalived/etc/keepalived.conf
! Configuration File for keepalived
global_defs {
  router_id LVS_DEVEL
}

vrrp_script check_apiserver {
  script "/home/work/_app/keepalived/bin/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
# Instance state; only MASTER and BACKUP are allowed, and they must be upper case. In preemptive mode MASTER is the active state and BACKUP is the standby state. When the server holding MASTER fails, a BACKUP instance switches itself to MASTER; when the failed MASTER comes back, the stand-in drops from MASTER back to BACKUP.
    state MASTER
# Network interface that serves traffic, i.e. the interface the VIP is bound to, e.g. eth0 or eth1. Most servers today have two or more interfaces (public and private), so double-check which one you pick.
    interface eth0
# Virtual router ID; it must be identical on every node. The last octet of the IP is a common choice. Nodes with the same VRID form one group, and the VRID determines the multicast MAC address.
    virtual_router_id 51
# Node priority, 0-254; the MASTER must be higher than the BACKUPs.
    priority 100
# Authentication type and password. The types are PASS and AH; PASS is the usual choice (AH is reported to be problematic). The password is plain text, and the MASTER and BACKUPs of the same VRRP instance must share the same password to communicate.
    authentication {
        auth_type PASS
        auth_pass velotiotechnologies
    }
# Virtual IP address pool; there can be several IPs, one per line, no subnet mask needed. Note: these IPs must match the VIP we have chosen.
    virtual_ipaddress {
        10.0.0.200
    }
    track_script {
        check_apiserver
    }
}

  
  On c0, create and edit /home/work/_app/keepalived/bin/check_apiserver.sh with the following content:

[root@c0 ~]# cat /home/work/_app/keepalived/bin/check_apiserver.sh
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"
if ip addr | grep -q MSHK_PRIVATE_IP; then
    curl --silent --max-time 2 --insecure https://MSHK_PRIVATE_IP:6443/ -o /dev/null || errorExit "Error GET https://MSHK_PRIVATE_IP:6443/"
fi

  

5.3. Create the systemd unit file

  On c0, create and edit /usr/lib/systemd/system/keepalived.service with the following content:

[root@c0 ~]# cat /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=network-online.target syslog.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/keepalived.pid
KillMode=process
ExecStart=/home/work/_app/keepalived/sbin/keepalived -D -f  /home/work/_app/keepalived/etc/keepalived.conf
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

  

5.4. Copy the configuration file, check script, and systemd unit file to the other master machines

[root@c0 ~]# for N in $(seq 0 2); do scp -r /home/work/_app/keepalived/etc/keepalived.conf c$N:/home/work/_app/keepalived/etc/keepalived.conf; done;
[root@c0 ~]# for N in $(seq 0 2); do scp -r /home/work/_app/keepalived/bin/check_apiserver.sh c$N:/home/work/_app/keepalived/bin/check_apiserver.sh; done;
[root@c0 ~]# for N in $(seq 0 2); do scp -r /usr/lib/systemd/system/keepalived.service c$N:/usr/lib/systemd/system/keepalived.service; done;

  
  On c0, c1, and c2, run the following commands to substitute the real IP address

export PRIVATE_IP=$(ip addr show eth0 | grep -Po 'inet \K[\d.]+')
sed -i 's/MSHK_PRIVATE_IP/'"$PRIVATE_IP"'/' /home/work/_app/keepalived/bin/check_apiserver.sh
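
Note that the file copied above still carries state MASTER and priority 100 on every node; VRRP will still elect a single VIP holder, but it is common to demote the standbys explicitly. A minimal sketch for c1 (hypothetical values; c2 analogous with, say, priority 90):

sed -i 's/state MASTER/state BACKUP/' /home/work/_app/keepalived/etc/keepalived.conf
sed -i 's/priority 100/priority 95/'  /home/work/_app/keepalived/etc/keepalived.conf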

  

5.5. Start Keepalived

  Run the following command on c0, c1, and c2

systemctl daemon-reload && systemctl enable keepalived && systemctl start keepalived

To stop the service: systemctl stop keepalived && systemctl disable keepalived && systemctl daemon-reload

  
  Check the Keepalived service status

[root@c0 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-05-26 15:04:12 CST; 3min 14s ago
 Main PID: 29360 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─29360 /home/work/_app/keepalived/sbin/keepalived -D -f /home/work/_app/keepalived/etc/keepalived.conf
           └─29361 /home/work/_app/keepalived/sbin/keepalived -D -f /home/work/_app/keepalived/etc/keepalived.conf

May 26 15:04:15 c0 Keepalived_vrrp[29361]: Sending gratuitous ARP on eth0 for 10.0.0.200
May 26 15:04:15 c0 Keepalived_vrrp[29361]: Sending gratuitous ARP on eth0 for 10.0.0.200
May 26 15:04:15 c0 Keepalived_vrrp[29361]: Sending gratuitous ARP on eth0 for 10.0.0.200
May 26 15:04:15 c0 Keepalived_vrrp[29361]: Sending gratuitous ARP on eth0 for 10.0.0.200
May 26 15:04:20 c0 Keepalived_vrrp[29361]: Sending gratuitous ARP on eth0 for 10.0.0.200
May 26 15:04:20 c0 Keepalived_vrrp[29361]: (VI_1) Sending/queueing gratuitous ARPs on eth0 for 10.0.0.200
May 26 15:04:20 c0 Keepalived_vrrp[29361]: Sending gratuitous ARP on eth0 for 10.0.0.200
May 26 15:04:20 c0 Keepalived_vrrp[29361]: Sending gratuitous ARP on eth0 for 10.0.0.200
May 26 15:04:20 c0 Keepalived_vrrp[29361]: Sending gratuitous ARP on eth0 for 10.0.0.200
May 26 15:04:20 c0 Keepalived_vrrp[29361]: Sending gratuitous ARP on eth0 for 10.0.0.200

  
  The following command shows that eth0 on c0 now carries the additional VIP address 10.0.0.200

[root@c0 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:1c:42:50:8c:6a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/8 brd 10.255.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.200/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::49d:e3e6:c623:9582/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:38:90:df:d6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Make sure Keepalived is started on c0 first and that the VIP is visible there; otherwise the later kubeadm init on c0 will not be able to reach the host.

  

6. Install kubeadm, kubelet, and kubectl

  The following packages need to be installed on all master and worker nodes:
* kubeadm: the command used to bootstrap the cluster.
* kubelet: runs on every node of the cluster and starts pods and containers.
* kubectl: the command-line tool used to talk to the cluster.

  

6.1. Configure kubernetes.repo to use the Aliyun mirror

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum update -y

  

6.2. Install kubeadm 1.14.0, kubelet 1.14.0, and kubectl 1.14.0

  List the available versions

$ yum list --showduplicates | grep 'kubeadm\|kubectl\|kubelet'

  
  Install kubeadm 1.14.0, kubelet 1.14.0, and kubectl 1.14.0

# Install the pinned versions
$ yum install -y kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0 --disableexcludes=kubernetes

# Enable and start kubelet
systemctl enable kubelet && systemctl start kubelet

  
  Some RHEL/CentOS 7 users have run into network requests being routed incorrectly because iptables was bypassed. You must make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
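
If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; a small sketch to load it now and on every boot (assuming the standard systemd modules-load.d mechanism):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf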

  

6.3. Create the kubeadm configuration file

  On c0, create and save the kubeadm configuration file /home/work/_app/k8s/kubeadm-config.yaml with the following content:

cat <<EOF >  /home/work/_app/k8s/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
controlPlaneEndpoint: 10.0.0.200:6443 # Keepalived VIP address and port
etcd:
  external:
    endpoints:
    - https://10.0.0.100:2379
    - https://10.0.0.101:2379
    - https://10.0.0.102:2379
    caFile: /home/work/_app/k8s/etcd/ssl/ca.pem
    certFile: /home/work/_app/k8s/etcd/ssl/server.pem
    keyFile: /home/work/_app/k8s/etcd/ssl/server-key.pem
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.245.0.0/16"
EOF

This points kubeadm at the external ETCD cluster we created earlier.

  

6.4. Initialize the master node

  On the first run, kubeadm init is slow because it has to download the Docker images first; you can pre-pull them with kubeadm config images pull --config=/home/work/_app/k8s/kubeadm-config.yaml.
  kubeadm init first performs a series of pre-flight checks to make sure the machine is able to run Kubernetes.
  These checks emit warnings and abort the initialization when they find errors. kubeadm init then downloads and installs the cluster's control-plane components, which can take a few minutes; once the command completes, the control-plane containers are started automatically under kubelet.

[root@c0 ~]# kubeadm init --config=/home/work/_app/k8s/kubeadm-config.yaml --experimental-upload-certs
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [c0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.245.0.1 10.0.0.100 10.0.0.200]
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/peer certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate authority generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate authority generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate authority generation
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.504063 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in ConfigMap "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a388abc4c3dd5598f65816edcf7c1442aca2281599c07107ecce052420a636ee
[mark-control-plane] Marking the node c0 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node c0 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: i6busl.fhks937r0cejr9tw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

# The command below joins additional control-plane (master) nodes
  kubeadm join 10.0.0.200:6443 --token i6busl.fhks937r0cejr9tw \
    --discovery-token-ca-cert-hash sha256:339cd6d60c7eeb025a00a20b07bc97d9270ac0790252c91106ff625339ba708d \
    --experimental-control-plane --certificate-key a388abc4c3dd5598f65816edcf7c1442aca2281599c07107ecce052420a636ee

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --experimental-upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
# The command below joins worker nodes
kubeadm join 10.0.0.200:6443 --token i6busl.fhks937r0cejr9tw \
    --discovery-token-ca-cert-hash sha256:339cd6d60c7eeb025a00a20b07bc97d9270ac0790252c91106ff625339ba708d

Save the kubeadm join commands from the kubeadm init output, because you will need them to add nodes to the cluster.
You can also print a fresh join command with kubeadm token create --print-join-command.
kubeadm config print init-defaults shows the default configuration options.

  
  To let a regular user run kubectl, run the following commands (they are also part of the kubeadm init output) on c0:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

  
  docker ps shows the Docker containers that are now running

[root@c0 ~]# docker ps -a
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
6c98bb36f03c        5cd54e388aba           "/usr/local/bin/kube…"   7 minutes ago       Up 7 minutes                            k8s_kube-proxy_kube-proxy-9h2fv_kube-system_c6b4ceac-7f97-11e9-9fb6-001c42508c6a_0
eaec2ae0db87        k8s.gcr.io/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-proxy-9h2fv_kube-system_c6b4ceac-7f97-11e9-9fb6-001c42508c6a_0
3fb03b3089fd        b95b1efa0436           "kube-controller-man…"   8 minutes ago       Up 8 minutes                            k8s_kube-controller-manager_kube-controller-manager-c0_kube-system_0ff88c9b6e64cded3762e51ff18bce90_0
d48ff4db0baf        ecf910f40d6e           "kube-apiserver --ad…"   8 minutes ago       Up 8 minutes                            k8s_kube-apiserver_kube-apiserver-c0_kube-system_0f4ed761b6a2fdc60c5061ba4873e137_0
01e2c2436c07        00638a24688b           "kube-scheduler --bi…"   8 minutes ago       Up 8 minutes                            k8s_kube-scheduler_kube-scheduler-c0_kube-system_58272442e226c838b193bbba4c44091e_0
6e1c16351e6a        k8s.gcr.io/pause:3.1   "/pause"                 8 minutes ago       Up 8 minutes                            k8s_POD_kube-controller-manager-c0_kube-system_0ff88c9b6e64cded3762e51ff18bce90_0
9b812616e8d8        k8s.gcr.io/pause:3.1   "/pause"                 8 minutes ago       Up 8 minutes                            k8s_POD_kube-apiserver-c0_kube-system_0f4ed761b6a2fdc60c5061ba4873e137_0
e524bebc9178        k8s.gcr.io/pause:3.1   "/pause"                 8 minutes ago       Up 8 minutes                            k8s_POD_kube-scheduler-c0_kube-system_58272442e226c838b193bbba4c44091e_0

  
  Check the component and node status

[root@c0 ~]# kubectl get cs,node
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}

NAME      STATUS     ROLES    AGE   VERSION
node/c0   NotReady   master   12m   v1.14.0

The node status is NotReady at this point; it changes to Ready once the network add-on is deployed.

  

6.5. Join c1 and c2 to the control plane

  Run the following command on c1 and c2 to join them to the master group:

kubeadm join 10.0.0.200:6443 --token i6busl.fhks937r0cejr9tw \
    --discovery-token-ca-cert-hash sha256:339cd6d60c7eeb025a00a20b07bc97d9270ac0790252c91106ff625339ba708d \
    --experimental-control-plane --certificate-key a388abc4c3dd5598f65816edcf7c1442aca2281599c07107ecce052420a636ee

The kubeadm join command used here comes from the output on c0.

  
  On c1 and c2, configure kubectl access with the following commands

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

  
  Check the node status

[root@c0 ~]# kubectl get cs,node
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}

NAME      STATUS     ROLES    AGE   VERSION
node/c0   NotReady   master   16m   v1.14.0
node/c1   NotReady   master   16s   v1.14.0
node/c2   NotReady   master   13s   v1.14.0

  

6.6. Deploy flannel

  Run the following command on c0:

[root@c0 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

  
  Depending on your network, wait a little while; the other nodes will pull the flannel image automatically. Check node status with the following command

[root@c0 ~]# kubectl get cs,node
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}

NAME      STATUS   ROLES    AGE     VERSION
node/c0   Ready    master   21m     v1.14.0
node/c1   Ready    master   5m16s   v1.14.0
node/c2   Ready    master   5m13s   v1.14.0

At this point the STATUS of c0 is Ready.

  
  Check the pod status

[root@c0 ~]# kubectl get pod -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-jqmcm       1/1     Running   0          17m
coredns-fb8b8dccf-tgvwb       1/1     Running   0          17m
kube-apiserver-c0             1/1     Running   0          21m
kube-apiserver-c1             1/1     Running   0          6m4s
kube-apiserver-c2             1/1     Running   0          6m2s
kube-controller-manager-c0    1/1     Running   0          21m
kube-controller-manager-c1    1/1     Running   0          6m4s
kube-controller-manager-c2    1/1     Running   0          6m2s
kube-flannel-ds-amd64-6c2k8   1/1     Running   0          109s
kube-flannel-ds-amd64-gpvw7   1/1     Running   0          109s
kube-flannel-ds-amd64-zw7nq   1/1     Running   0          109s
kube-proxy-9h2fv              1/1     Running   0          17m
kube-proxy-9q5t8              1/1     Running   0          6m5s
kube-proxy-f8nz6              1/1     Running   0          6m2s
kube-scheduler-c0             1/1     Running   0          21m
kube-scheduler-c1             1/1     Running   0          6m4s
kube-scheduler-c2             1/1     Running   0          6m2s

  
  By default Kubernetes does not schedule any workloads on the masters, so if you also want to schedule workloads on the master nodes, remove the master taint from all of them with the following command:

$ kubectl taint nodes --all node-role.kubernetes.io/master-

  

6.7. Join the worker node

  Run the following command on c3

$ kubeadm join 10.0.0.200:6443 --token i6busl.fhks937r0cejr9tw \
    --discovery-token-ca-cert-hash sha256:339cd6d60c7eeb025a00a20b07bc97d9270ac0790252c91106ff625339ba708d

  
  Check the node status

[root@c0 ~]# kubectl get cs,node
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}

NAME      STATUS   ROLES    AGE    VERSION
node/c0   Ready    master   25m    v1.14.0
node/c1   Ready    master   9m6s   v1.14.0
node/c2   Ready    master   9m3s   v1.14.0
node/c3   Ready    <none>   105s   v1.14.0

  

7. Deploy a Whoami service as a test

  whoami is a simple HTTP Docker service that prints the container ID

7.1. Deploy Whoami from c0

[root@c0 ~]# kubectl create deployment whoami --image=idoall/whoami
deployment.apps/whoami created

  

7.2. Check the Whoami deployment status

  Use the following command to list all deployments

[root@c0 ~]# kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
whoami   1/1     1            1           106s

  
  View the details of the Whoami deployment

[root@c0 ~]#  kubectl describe deployment whoami
Name:                   whoami
Namespace:              default
CreationTimestamp:      Sun, 26 May 2019 17:56:45 +0800
Labels:                 app=whoami
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=whoami
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=whoami
  Containers:
   whoami:
    Image:        idoall/whoami
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   whoami-8657469579 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  2m    deployment-controller  Scaled up replica set whoami-8657469579 to 1

  
  Describe the Whoami pod

[root@c0 ~]# kubectl describe po whoami
Name:               whoami-8657469579-79bj8
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               c3/10.0.0.103
Start Time:         Sun, 26 May 2019 17:56:46 +0800
Labels:             app=whoami
                    pod-template-hash=8657469579
Annotations:        <none>
Status:             Running
IP:                 10.244.3.2
Controlled By:      ReplicaSet/whoami-8657469579
Containers:
  whoami:
    Container ID:   docker://31ae60b3a4cc66d173ffc803b2a4cd30e887664982cd003dd7232ca89ac45d8d
    Image:          idoall/whoami
    Image ID:       docker-pullable://idoall/whoami@sha256:6e79f7182eab032c812f6dafdaf55095409acd64d98a825c8e4b95e173e198f2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 26 May 2019 17:56:56 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-85vds (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-85vds:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-85vds
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m24s  default-scheduler  Successfully assigned default/whoami-8657469579-79bj8 to c3
  Normal  Pulling    2m21s  kubelet, c3        Pulling image "idoall/whoami"
  Normal  Pulled     2m13s  kubelet, c3        Successfully pulled image "idoall/whoami"
  Normal  Created    2m13s  kubelet, c3        Created container whoami
  Normal  Started    2m13s  kubelet, c3        Started container whoami

  

7.3. Expose a port for Whoami

  Create a Whoami service that can be reached from outside the cluster

[root@c0 ~]# kubectl create service nodeport whoami --tcp=80:80
service/whoami created

The command above creates a public-facing service for the Whoami deployment on the nodes.
Because this is a NodePort service, Kubernetes assigns it a port on the nodes from the NodePort range (30000-32767 by default).

  
  Check the current service status

[root@c0 ~]# kubectl get svc,pods -o wide
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP        60m   <none>
service/whoami       NodePort    10.245.189.220   <none>        80:31354/TCP   15s   app=whoami

NAME                          READY   STATUS    RESTARTS   AGE    IP           NODE   NOMINATED NODE   READINESS GATES
pod/whoami-8657469579-79bj8   1/1     Running   0          3m6s   10.244.3.2   c3     <none>           <none>

The output above shows Whoami exposed on NodePort 31354; it can be reached at http://c0:31354

[Figure: kubeadm1.14.0-2]
  

7.4. Test that the Whoami service works

[root@c0 ~]# curl c0:31354
[mshk.top]I'm whoami-8657469579-79bj8
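
Because a NodePort is opened on every node in the cluster, the service should answer on any of the four machines (assuming the same port 31354 that was assigned above); a quick sketch:

for N in $(seq 0 3); do curl -s c$N:31354; done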

  

8. Common issues

8.1. View the Kubernetes CA certificate

  In some situations you may need to look at the Kubernetes certificate. List the secrets with kubectl get secrets; one of them should be named something like default-token-xxxxx. Copy that token name for use below.
  Fetch the certificate by running this command:

[root@c0 ~]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-85vds   kubernetes.io/service-account-token   3      25m
[root@c0 ~]# kubectl get secret default-token-85vds  -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
-----BEGIN CERTIFICATE-----
...
4hfH64aEOcNQHPFDP+M2fWa4KdZMo7Qv5HGcEeXK7uToO90DoLJnoaqbrUx/Yyf2
...
-----END CERTIFICATE-----

  

8.2. After deploying GitLab, how do I drive Kubernetes from the CI pipeline?

  Run the following command to produce a base64-encoded string of the KubeConfig:

[root@c0 ~]# echo $(cat ~/.kube/config | base64) | tr -d " "
...MGxDUVVSQlRrSm5hM0ZvYTJsSE9YY3dRa0ZSYzBaQlJFRldUVkpOZDBWUldVUldVVkZFUlhkd2NtUlhTbXdLWTIwMWJHUkhWbnBOUWpSWVJGUkZOVTFFV...zB0TFFvPQo=
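
An equivalent way to produce the single-line string (assuming GNU coreutils base64, which supports -w) is:

base64 -w 0 ~/.kube/config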

  
  Set the $kube_config variable and use it in .gitlab-ci.yaml like this:

...
  script:
    - mkdir -p /root/.kube
    - echo $kube_config |base64 -d > /root/.kube/config
...

  

8.3. How do I reset a node if an installation step goes wrong?

  Run the following commands on every node to wipe all kubeadm settings and network configuration

kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
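
kubeadm reset does not flush iptables or IPVS rules it may have created; if you want a completely clean network slate, you can additionally run the following (a sketch; the ipvsadm line applies only if ipvsadm is installed and IPVS mode was used):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear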

  

  I hope this article was helpful. Thank you for your support and for reading my blog.
  


Author: 迦壹
Original post: Deploying a Kubernetes 1.14.0 HA Cluster with Kubeadm, ETCD 3.3.10, and Keepalived 2.0.16
Repost notice: reposting is allowed, but you must credit the original source and author with a hyperlink and keep this copyright notice. Thank you!

