Installing a Kubernetes Cluster from Binaries

Chapter 1: Read This Before Installing
This document applies to k8s 1.17+.
Do not use servers whose configuration contains Chinese characters, and do not use cloned virtual machines.
Replace the IP addresses in this document in one batch operation; do not replace them one by one!!!
All kubectl commands in this document are executed only on the master01 node, and only once.
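For example, a batch replacement over a cloned manifest directory could look like the sketch below. This is only an illustration: the directory name k8s-ha-install and the target IP 10.0.0.10 are placeholders for your own values.

# Hypothetical example: replace master01's IP everywhere under ./k8s-ha-install in one pass
grep -rl '192.168.0.200' ./k8s-ha-install | xargs sed -i 's/192\.168\.0\.200/10.0.0.10/g'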
Chapter 2: Installation Notes
This article demonstrates a binary installation of a highly available k8s 1.28 cluster on CentOS 7. The binary installation procedure is largely the same for other versions; you only need to match up the corresponding component versions.
For production, it is recommended to use a Kubernetes release whose patch version is greater than 5; for example, only 1.19.5 and later should be used in production.
Chapter 3: Cluster Installation
3.1 Basic Configuration
Below is the IP address plan for the cluster. (The VIP, i.e. the virtual IP, must not collide with an IP already in use on your network: ping it first and only use it if there is no reply. The VIP must be on the same LAN as the hosts.) If your subnets differ, replace them consistently throughout. The Pod CIDR, the Service CIDR and the host network must not overlap!!!
Host information: the server IP addresses must not be assigned by DHCP; configure static IPs.
Hostname         IP address       Description
k8s-master01     192.168.0.200    master01 node
k8s-master02     192.168.0.201    master02 node
k8s-master03     192.168.0.202    master03 node
k8s-master-lb    192.168.0.236    keepalived virtual IP (does not occupy a server)
k8s-node01       192.168.0.203    node01 node
k8s-node02       192.168.0.204    node02 node
Pod CIDR         172.16.0.0/16    Pod network plan
Service CIDR     10.96.0.0/16     Service network plan

Configuration item    Notes
OS version            CentOS 7.9
Docker version        20.10.x
# All 5 servers run the same OS
[root@k8s-master01 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
Set the hostnames of the 5 machines as required: k8s-master01, k8s-master02, k8s-master03, k8s-node01, k8s-node02.

# Use this command on every machine, substituting the required hostname
[root@k8s-master01 ~]# hostnamectl set-hostname k8s-master01
Edit the hosts file on all nodes so the nodes can communicate by hostname (run on all 5 machines).

# The file should look like this after editing
[root@k8s-node02 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.200 k8s-master01
192.168.0.201 k8s-master02
192.168.0.202 k8s-master03
192.168.0.236 k8s-master-lb   # If this is not a highly available cluster, this IP is master01's IP
192.168.0.203 k8s-node01
192.168.0.204 k8s-node02
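A minimal sketch of appending these entries in one step (run on each machine, since passwordless SSH is not configured yet; assumes the entries are not already present):

cat >> /etc/hosts <<'EOF'
192.168.0.200 k8s-master01
192.168.0.201 k8s-master02
192.168.0.202 k8s-master03
192.168.0.236 k8s-master-lb
192.168.0.203 k8s-node01
192.168.0.204 k8s-node02
EOF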
Configure the yum repositories on all nodes (Aliyun mirrors are used here); run on all nodes.

# Back up the original file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Download the Aliyun mirror repo file
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# Install basic tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure the Aliyun docker repo; this adds docker-ce.repo under /etc/yum.repos.d/
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Configure the k8s repo by creating kubernetes.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Remove the Aliyun internal-network mirror addresses
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
# Install required tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
Disable the firewall, selinux, dnsmasq and swap on all nodes, and stop NetworkManager (on CentOS 8, NetworkManager does not need to be disabled).

systemctl disable --now firewalld        # stop the service and disable it at boot in one command
systemctl disable --now NetworkManager   # stop the service and disable it at boot in one command
systemctl disable --now dnsmasq          # dnsmasq provides LAN DNS; skip this if it is not installed

# Disable selinux
setenforce 0                             # takes effect immediately (until reboot)
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config   # permanent
Disable the swap partition on all nodes and comment out the swap entry in fstab.

# Disable swap; swap hurts docker performance and is normally turned off
swapoff -a && sysctl -w vm.swappiness=0   # takes effect immediately
# Comment out the swap entry in /etc/fstab to make it permanent
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
# Verify that swap is commented out
[root@k8s-master01 ~]# cat /etc/fstab

#
# Created by anaconda on Wed Jun 19 18:30:38 2024
#
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_k8s--master01-root /                       xfs     defaults        0 0
UUID=7a2613c3-33ce-4d2d-bc28-e5fa706343bf /boot               xfs     defaults        0 0
# /dev/mapper/centos_k8s--master01-swap swap                  swap    defaults        0 0
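A quick check (not in the original steps) to confirm swap is really off:

free -h    # the Swap line should show 0B for total and used after swapoff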
Install ntpdate on all nodes (CentOS 7 often ships with the ntpdate command already; install it only if it is missing).
Time is very important in a cluster: if one machine's clock drifts away from the rest of the cluster, all kinds of problems can follow. So synchronize the time on every machine before deploying the cluster.

# 1. CentOS 7
yum install ntp -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com
# Add a cron job
# Timing synchronization time
[root@k8s-master01 ~]# crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com &>/dev/null

# 2. CentOS 8
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install wntp -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com
# Add a cron job
# Timing synchronization time
[root@k8s-master01 ~]# crontab -e
* * * * * /usr/sbin/ntpdate time2.aliyun.com &>/dev/null
Set resource limits on all nodes.

# On CentOS 7, configure the open-file limit as follows.
# Log in as root.
ulimit -SHn 65535    # takes effect for the current session only

# Make it permanent
vim /etc/security/limits.conf
# Append the following to the end of the file
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

# The * at the start of each line means the limit applies to all users.
# After a reboot, check whether it took effect:
ulimit -n
# If the output matches the configured soft nofile value, the limit is in place.
Upgrade the OS on all nodes and reboot. This upgrade does not touch the kernel; the kernel is upgraded separately in the next section.

yum update -y --exclude=kernel*
Run only on master01: set up passwordless SSH from Master01 to the other nodes. During the installation, the configuration files and certificates are all generated on Master01, and the cluster is also managed from Master01. On Alibaba Cloud or AWS a separate kubectl server is needed. Configure the keys as follows:

# ssh-keygen -t rsa
--------------------------------------------------
[root@k8s-master01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:j7jahX5rPfYBEpOVzKOJn3reMyrl4RvgcVmcr033TPk root@k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
| o..             |
| +=.             |
| .+o+.           |
| . o= .        . |
| o.S.. o      ...|
| . *=+ = .     +.|
|  +=+oo o       E|
| ooo*o* .        |
| ..+*==.=.       |
+----[SHA256]-----+
--------------------------------------------------
# Copy the public key generated on master01 to master02, master03, node01 and node02
[root@k8s-master01 ~]# ssh-copy-id root@k8s-master02
[root@k8s-master01 ~]# ssh-copy-id root@k8s-master03
[root@k8s-master01 ~]# ssh-copy-id root@k8s-node01
[root@k8s-master01 ~]# ssh-copy-id root@k8s-node02

# Or run all of the above in one loop
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
Run only on master01: download the installation repository.

cd /root/ ; git clone https://gitee.com/dukuan/k8s-ha-install.git
3.2 Kernel Upgrade
CentOS 7 needs the kernel upgraded to 4.18+; this document upgrades to 4.19.
Download the kernel packages on the master01 node (they can be downloaded from github; address to be added later).
github address: https://github.com/liujunweipython/kubernetes-study-doc.git
Copy the packages from master01 to the other nodes.

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
Install the kernel on all nodes.

# Run on all nodes
cd /root/
yum localinstall -y kernel-ml*
Change the kernel boot order on all nodes.

# Make 4.19 the default boot kernel on all nodes
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
Check that the default kernel is 4.19.

[root@k8s-master01 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
Install ipvsadm on all nodes.

# ipvs is a kernel module with very high forwarding performance; it is normally the first choice.
[root@k8s-node01 ~]# yum install ipvsadm ipset conntrack-tools conntrack libseccomp sysstat -y
Configure the ipvs modules on all nodes. In kernel 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.18 keep using nf_conntrack_ipv4. Run the following commands:

# Run the commands
[root@k8s-master01 ~]# modprobe -- ip_vs
[root@k8s-master01 ~]# modprobe -- ip_vs_rr
[root@k8s-master01 ~]# modprobe -- ip_vs_wrr
[root@k8s-master01 ~]# modprobe -- ip_vs_sh
[root@k8s-master01 ~]# modprobe -- nf_conntrack
# Create the file
vim /etc/modules-load.d/ipvs.conf
# Add the following content
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
Then run systemctl enable --now systemd-modules-load.service:

[root@k8s-master01 ~]# systemctl enable --now systemd-modules-load.service
Enable the kernel parameters required by a k8s cluster; configure on all nodes:

cat <<EOF> /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
net.ipv4.conf.all.route_localnet = 1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Reload and apply the settings

[root@k8s-node01 ~]# sysctl --system
After configuring the kernel parameters, reboot all nodes and make sure the modules are still loaded after the reboot.

# Reboot, then verify the modules are still loaded
[root@k8s-master01 ~]# reboot
[root@k8s-master01 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0
nf_nat                 32768  1 ip_vs_ftp
ip_vs_sed              16384  0
ip_vs_nq               16384  0
ip_vs_fo               16384  0
ip_vs_sh               16384  0
ip_vs_dh               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs_wlc              16384  0
ip_vs_lc               16384  0
ip_vs                 151552  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          143360  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs
[root@k8s-master01 ~]# uname -a
Linux k8s-master01 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Chapter 4: Installing the Basic Components
This chapter installs the components used by the cluster, such as Docker-ce and the Kubernetes components.
4.1 Containerd as the Runtime
Install docker-ce-20.10 on all nodes (if it is already installed, run the install anyway to upgrade to the latest version):

[root@k8s-master01 ~]# yum install docker-ce-20.10.* docker-ce-cli-20.10.* containerd.io -y
First configure the kernel modules required by Containerd (all nodes):

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Reload the modules on all nodes:

[root@k8s-master01 ~]# modprobe -- overlay
[root@k8s-master01 ~]# modprobe -- br_netfilter
Configure the kernel parameters required by Containerd on all nodes:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
Apply the kernel parameters on all nodes:

[root@k8s-master01 ~]# sysctl --system
Generate Containerd's configuration file on all nodes:

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
Switch Containerd's cgroup driver to systemd on all nodes:

# Find containerd.runtimes.runc.options and set SystemdCgroup = true (if the key already exists, edit it in place; adding a duplicate line will cause an error). Line 127 below shows the value after the change:
vim /etc/containerd/config.toml
116     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
117       BinaryName = ""
118       CriuImagePath = ""
119       CriuPath = ""
120       CriuWorkPath = ""
121       IoGid = 0
122       IoUid = 0
123       NoNewKeyring = false
124       NoPivotRoot = false
125       Root = ""
126       ShimCgroup = ""
127       SystemdCgroup = true
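If you prefer not to edit the file by hand, a one-line alternative (a sketch, assuming the default config contains "SystemdCgroup = false") could be:

# Assumes the generated config.toml still has the default "SystemdCgroup = false" line
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml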
On all nodes, change sandbox_image (the pause image) to an address matching your environment, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7:

# Still in /etc/containerd/config.toml, change the value of sandbox_image as shown below:
vim /etc/containerd/config.toml
62     restrict_oom_score_adj = false
63     sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7"
64     selinux_category_range = 1024
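Likewise, a hedged one-liner for this change (assuming the default sandbox_image line is present in config.toml):

# Rewrites whatever sandbox_image is currently set to
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml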
Start Containerd on all nodes and enable it at boot:

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
Point the crictl client at the containerd runtime on all nodes:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
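As a quick sanity check (not part of the original steps), crictl should now be able to talk to containerd:

crictl info >/dev/null && echo "crictl can reach containerd"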
4.2 Installing k8s and etcd
kubernetes-server-linux-amd64.tar.gz and etcd-v3.5.9-linux-amd64.tar.gz can be downloaded in a browser and uploaded to the master01 node, which may be faster than downloading them directly with wget.
Download the kubernetes package on Master01 (replace 1.28.0 with the latest version you see).

wget https://dl.k8s.io/v1.28.11/kubernetes-server-linux-amd64.tar.gz

These notes were written against 1.28.0; when installing, download the latest 1.28.x release:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md
Download the latest server build.
The following operations are all performed on master01.
Download the etcd package.

[root@k8s-master01 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.5.9/etcd-v3.5.9-linux-amd64.tar.gz
Unpack the kubernetes archive:

[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
# This produces the following files:
[root@k8s-master01 ~]# ll /usr/local/bin/
total 494708
-rwxr-xr-x 1 root root 121090048 Jun 12 04:37 kube-apiserver
-rwxr-xr-x 1 root root 116908032 Jun 12 04:37 kube-controller-manager
-rwxr-xr-x 1 root root  49209344 Jun 12 04:37 kubectl
-rwxr-xr-x 1 root root 110014464 Jun 12 04:37 kubelet
-rwxr-xr-x 1 root root  54210560 Jun 12 04:37 kube-proxy
-rwxr-xr-x 1 root root  55148544 Jun 12 04:37 kube-scheduler
Unpack the etcd archive:

[root@k8s-master01 ~]# tar -zxvf etcd-v3.5.9-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.9-linux-amd64/etcd{,ctl}
#
[root@k8s-master01 ~]# ll /usr/local/bin/
total 533256
-rwxr-xr-x 1 528287 89939  22474752 May 11  2023 etcd
-rwxr-xr-x 1 528287 89939  16998400 May 11  2023 etcdctl
Check the versions:

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.28.11
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.9
API version: 3.5
Send the components to the other nodes:

MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do echo $NODE; scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
Check that each node has the components:

# The three master nodes should contain the following
[root@k8s-master01 ~]# ll /usr/local/bin/
total 533256
-rwxr-xr-x 1 528287 89939  22474752 May 11  2023 etcd
-rwxr-xr-x 1 528287 89939  16998400 May 11  2023 etcdctl
-rwxr-xr-x 1 root root   121090048 Jun 12 04:37 kube-apiserver
-rwxr-xr-x 1 root root   116908032 Jun 12 04:37 kube-controller-manager
-rwxr-xr-x 1 root root    49209344 Jun 12 04:37 kubectl
-rwxr-xr-x 1 root root   110014464 Jun 12 04:37 kubelet
-rwxr-xr-x 1 root root    54210560 Jun 12 04:37 kube-proxy
-rwxr-xr-x 1 root root    55148544 Jun 12 04:37 kube-scheduler

# The two worker nodes should contain the following
[root@k8s-node01 ~]# ll /usr/local/bin/
total 160376
-rwxr-xr-x 1 root root 110014464 Jun 21 16:13 kubelet
-rwxr-xr-x 1 root root  54210560 Jun 21 16:13 kube-proxy
Switch the branch! Switch the branch! Switch the branch! (Do not skip this step!)
On the Master01 node, switch to the 1.28.x branch (for other versions switch to the matching branch; use .x, there is no need to pick a specific patch version).

cd /root/k8s-ha-install && git checkout manual-installation-v1.28.x
[root@k8s-master01 k8s-ha-install]# cd /root/k8s-ha-install
[root@k8s-master01 k8s-ha-install]# git checkout manual-installation-v1.28.x
Branch manual-installation-v1.28.x set up to track remote branch manual-installation-v1.28.x from origin.
Switched to a new branch 'manual-installation-v1.28.x'
[root@k8s-master01 k8s-ha-install]# git branch
* manual-installation-v1.28.x
  master

# List all branches
[root@k8s-master01 k8s-ha-install]# git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/manual-installation
  remotes/origin/manual-installation-v1.16.x
  remotes/origin/manual-installation-v1.17.x
  remotes/origin/manual-installation-v1.18.x
  remotes/origin/manual-installation-v1.19.x
  remotes/origin/manual-installation-v1.20.x
  remotes/origin/manual-installation-v1.20.x-csi-hostpath
  remotes/origin/manual-installation-v1.21.x
  remotes/origin/manual-installation-v1.22.x
  remotes/origin/manual-installation-v1.23.x
  remotes/origin/manual-installation-v1.24.x
  remotes/origin/manual-installation-v1.25.x
  remotes/origin/manual-installation-v1.26.x
  remotes/origin/manual-installation-v1.27.x
  remotes/origin/manual-installation-v1.28.x
  remotes/origin/manual-installation-v1.29.x
  remotes/origin/manual-installation-v1.30.x
  remotes/origin/master
Chapter 5: Generating Certificates
This is the most critical part of a binary installation: one mistake here ruins everything, so make sure every single step is correct.
5.1 Downloading the Certificate Tools
Download the certificate-generation tools on Master01 (if you cannot download them, contact the author to get them from Baidu Netdisk).

wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

# Alternatively, upload local copies to /usr/local/bin/ (if you upload them, remember to rename the files to cfssl and cfssljson)
[root@k8s-master01 ~]# cd /usr/local/bin/
[root@k8s-master01 bin]# mv cfssl_linux-amd64 cfssl
[root@k8s-master01 bin]# mv cfssljson_linux-amd64 cfssljson
# Add execute permission
[root@k8s-master01 bin]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
[root@k8s-master01 bin]# ll
total 545620
-rwxr-xr-x 1 root root    10376657 Jun 21 19:17 cfssl
-rwxr-xr-x 1 root root     2277873 Jun 21 19:17 cfssljson
-rwxr-xr-x 1 528287 89939  22474752 May 11  2023 etcd
-rwxr-xr-x 1 528287 89939  16998400 May 11  2023 etcdctl
-rwxr-xr-x 1 root root   121090048 Jun 12 04:37 kube-apiserver
-rwxr-xr-x 1 root root   116908032 Jun 12 04:37 kube-controller-manager
-rwxr-xr-x 1 root root    49209344 Jun 12 04:37 kubectl
-rwxr-xr-x 1 root root   110014464 Jun 12 04:37 kubelet
-rwxr-xr-x 1 root root    54210560 Jun 12 04:37 kube-proxy
-rwxr-xr-x 1 root root    55148544 Jun 12 04:37 kube-scheduler
5.2 etcd Certificates
Create the etcd certificate directory on all Master nodes.

[root@k8s-master01 ~]# mkdir /etc/etcd/ssl -p

Create the kubernetes directories on all nodes.

[root@k8s-master01 ~]# mkdir -p /etc/kubernetes/pki
Generate the etcd certificates on the Master01 node.
Generate the certificate CSR files (certificate signing requests), which configure the domain names, organization and unit.

[root@k8s-master01 ~]# cd /root/k8s-ha-install/pki
# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
[root@k8s-master01 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
2024/06/21 23:36:44 [INFO] generating a new CA key and certificate from CSR
2024/06/21 23:36:44 [INFO] generate received request
2024/06/21 23:36:44 [INFO] received CSR
2024/06/21 23:36:44 [INFO] generating key: rsa-2048
2024/06/21 23:36:44 [INFO] encoded CSR
2024/06/21 23:36:44 [INFO] signed certificate with serial number 170809039643488960030378711798215113574386432354

# After the command succeeds, the certificates are stored in /etc/etcd/ssl/
[root@k8s-master01 pki]# ll /etc/etcd/ssl/
total 12
-rw-r--r-- 1 root root 1005 Apr 20 16:51 etcd-ca.csr
-rw------- 1 root root 1679 Apr 20 16:51 etcd-ca-key.pem
-rw-r--r-- 1 root root 1367 Apr 20 16:51 etcd-ca.pem

# Use the CA to issue the etcd client certificate; run the following command:
cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.0.200,192.168.0.201,192.168.0.202 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

# Output:
2024/06/21 23:40:59 [INFO] generate received request
2024/06/21 23:40:59 [INFO] received CSR
2024/06/21 23:40:59 [INFO] generating key: rsa-2048
2024/06/21 23:40:59 [INFO] encoded CSR
2024/06/21 23:40:59 [INFO] signed certificate with serial number 199651107650570110040438131238735535065910021983

# The files whose names do not contain "ca" are the client certificates
[root@k8s-master01 pki]# ll /etc/etcd/ssl/
total 24
-rw-r--r-- 1 root root 1005 Jun 21 23:36 etcd-ca.csr
-rw------- 1 root root 1675 Jun 21 23:36 etcd-ca-key.pem
-rw-r--r-- 1 root root 1367 Jun 21 23:36 etcd-ca.pem
-rw-r--r-- 1 root root 1005 Jun 21 23:40 etcd.csr
-rw------- 1 root root 1675 Jun 21 23:40 etcd-key.pem
-rw-r--r-- 1 root root 1509 Jun 21 23:40 etcd.pem
Copy the certificates to the other master nodes:

[root@k8s-master01 pki]# MasterNodes='k8s-master02 k8s-master03'
[root@k8s-master01 pki]# WorkNodes='k8s-node01 k8s-node02'

for NODE in $MasterNodes; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done
5.3 k8s Component Certificates
Generate the kubernetes component certificates on Master01.

# Enter the pki directory
[root@k8s-master01 pki]# cd /root/k8s-ha-install/pki
# Run the following command
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
# Output
2024/06/21 23:54:13 [INFO] generating a new CA key and certificate from CSR
2024/06/21 23:54:13 [INFO] generate received request
2024/06/21 23:54:13 [INFO] received CSR
2024/06/21 23:54:13 [INFO] generating key: rsa-2048
2024/06/21 23:54:13 [INFO] encoded CSR
2024/06/21 23:54:13 [INFO] signed certificate with serial number 48272590178012561827770084563995555394538140953
5.31 Generate the apiserver Certificate
# 10.96.0.1 is the first address of the k8s service CIDR (10.96.0.0/16); if you change the service CIDR, change 10.96.0.1 accordingly.
# If this is not a highly available cluster, 192.168.0.236 is Master01's IP.

cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.0.236,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.0.200,192.168.0.201,192.168.0.202 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

# Output:
[root@k8s-master01 pki]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.0.236,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.0.200,192.168.0.201,192.168.0.202 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
2024/06/22 00:00:47 [INFO] generate received request
2024/06/22 00:00:47 [INFO] received CSR
2024/06/22 00:00:47 [INFO] generating key: rsa-2048
2024/06/22 00:00:47 [INFO] encoded CSR
2024/06/22 00:00:47 [INFO] signed certificate with serial number 646024133820570549979215367157758500863066420432
Generate the apiserver aggregation (front-proxy) certificates, used by the requestheader-client-xxx / requestheader-allowed-xxx settings: aggregator.

[root@k8s-master01 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
# Output:
2024/06/22 00:02:26 [INFO] generating a new CA key and certificate from CSR
2024/06/22 00:02:26 [INFO] generate received request
2024/06/22 00:02:26 [INFO] received CSR
2024/06/22 00:02:26 [INFO] generating key: rsa-2048
2024/06/22 00:02:26 [INFO] encoded CSR
2024/06/22 00:02:26 [INFO] signed certificate with serial number 144833996782664504698352189529818808339277088168

[root@k8s-master01 pki]# cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
# Output (the warning can be ignored):
2024/06/22 00:03:26 [INFO] generate received request
2024/06/22 00:03:26 [INFO] received CSR
2024/06/22 00:03:26 [INFO] generating key: rsa-2048
2024/06/22 00:03:26 [INFO] encoded CSR
2024/06/22 00:03:26 [INFO] signed certificate with serial number 84915144042309422076740085135432949492521411858
2024/06/22 00:03:26 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
5.32 Generate the controller-manager Certificate

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Note: if this is not a highly available cluster, change 192.168.0.236:8443 to master01's address, and 8443 to the apiserver port (6443 by default)
# set-cluster: define a cluster entry
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.0.236:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Define a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# set-credentials: define a user entry
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Use a context as the default
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
5.33 Generate the scheduler Certificate

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

# Note: if this is not a highly available cluster, change 192.168.0.236:8443 to master01's address, and 8443 to the apiserver port (6443 by default)
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.0.236:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
5.34 Generate the admin Files
Generate the admin kubeconfig, which is used to manage the cluster.

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

# Note: if this is not a highly available cluster, change 192.168.0.236:8443 to master01's address, and 8443 to the apiserver port (6443 by default)
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.236:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
Create the ServiceAccount key → secret.

[root@k8s-master01 pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
# Output
Generating RSA private key, 2048 bit long modulus
................................+++
............................................................+++
e is 65537 (0x10001)
Generate the public key:

openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
5.35 Send the Certificates to the Other Nodes

for NODE in k8s-master02 k8s-master03; do
  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
    scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
  done;
  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
  done;
done
Check the certificates on the master02 and master03 nodes.

[root@k8s-master02 ~]# ls /etc/kubernetes/pki/
admin.csr          apiserver.pem               controller-manager-key.pem  front-proxy-client.csr      scheduler.csr
admin-key.pem      ca.csr                      controller-manager.pem      front-proxy-client-key.pem  scheduler-key.pem
admin.pem          ca-key.pem                  front-proxy-ca.csr          front-proxy-client.pem      scheduler.pem
apiserver.csr      ca.pem                      front-proxy-ca-key.pem      sa.key
apiserver-key.pem  controller-manager.csr      front-proxy-ca.pem          sa.pub

# 23 certificate files in total
[root@k8s-master02 ~]# ls /etc/kubernetes/pki/ |wc -l
23
Chapter 6: High Availability Configuration
(Note: if this is not a highly available cluster, haproxy and keepalived do not need to be installed.)
If you are installing on a public cloud you can also skip this chapter and use the cloud load balancer directly, e.g. Alibaba Cloud SLB or Tencent Cloud ELB.
On public clouds, use the cloud's own load balancer (Alibaba Cloud SLB, Tencent Cloud ELB, etc.) in place of haproxy and keepalived, because most public clouds do not support keepalived. In addition, if you use Alibaba Cloud, the kubectl client must not sit on a master node: Alibaba Cloud SLB has a loopback problem, meaning servers behind the SLB cannot access the SLB itself. Tencent Cloud has fixed this issue, so it is the recommended option here.
6.1 Installing the HA Components
Install keepalived and haproxy on all Master nodes.

yum install keepalived haproxy -y

Configure HAProxy on all Masters; the configuration is identical on every node.

vim /etc/haproxy/haproxy.cfg
# Remove the existing content and add the following
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01   192.168.0.200:6443  check
  server k8s-master02   192.168.0.201:6443  check
  server k8s-master03   192.168.0.202:6443  check
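The keepalived configurations below use interface eth0. If your NIC has a different name, check it first and substitute your own name; a simple check (not in the original text):

# List NIC names together with their IPv4 addresses
ip -o -4 addr show | awk '{print $2, $4}'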
6.2 Master01 keepalived
Configure keepalived on all Master nodes; the configuration differs per node, so pay attention.
Mind each node's IP, the VIP, and the network interface (the interface parameter).

[root@k8s-master01 pki]# vim /etc/keepalived/keepalived.conf
# Remove the existing content and add the following
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 192.168.0.200
    virtual_router_id 51
    priority 101
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
    track_script {
        chk_apiserver
    }
}
6.3 Master02 keepalived
Configure keepalived on all Master nodes; the configuration differs per node, so pay attention.
Mind each node's IP, the VIP, and the network interface (the interface parameter).

[root@k8s-master02 ~]# vim /etc/keepalived/keepalived.conf
# Remove the existing content and add the following
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.0.201
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
    track_script {
        chk_apiserver
    }
}
6.4 Master03 keepalived
Configure keepalived on all Master nodes; the configuration differs per node, so pay attention.
Mind each node's IP, the VIP, and the network interface (the interface parameter).

[root@k8s-master03 ~]# vim /etc/keepalived/keepalived.conf
# Remove the existing content and add the following
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.0.202
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
    track_script {
        chk_apiserver
    }
}
6.5 Health-Check Configuration
On all master nodes:

[root@k8s-master01 pki]# vim /etc/keepalived/check_apiserver.sh
# Add the following
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
Make the script executable.

chmod +x /etc/keepalived/check_apiserver.sh

Start haproxy and keepalived on all master nodes.

[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now haproxy
[root@k8s-master01 pki]# systemctl enable --now keepalived
Test the VIP: every node must be able to reach it.

[root@k8s-node01 ~]# ping 192.168.0.236
PING 192.168.0.236 (192.168.0.236) 56(84) bytes of data.
64 bytes from 192.168.0.236: icmp_seq=1 ttl=64 time=1.54 ms
64 bytes from 192.168.0.236: icmp_seq=2 ttl=64 time=0.499 ms
64 bytes from 192.168.0.236: icmp_seq=6 ttl=64 time=0.494 ms
64 bytes from 192.168.0.236: icmp_seq=7 ttl=64 time=0.286 ms

--- 192.168.0.236 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6049ms
rtt min/avg/max/mdev = 0.258/0.654/1.547/0.459 ms
Important: if keepalived and haproxy are installed, verify that keepalived is working properly.

# Run telnet on all nodes; every node must be able to connect
[root@k8s-master01 pki]# telnet 192.168.0.236 8443
Trying 192.168.0.236...
Connected to 192.168.0.236.
Escape character is '^]'.
Connection closed by foreign host.
If the VIP cannot be pinged, or telnet does not print the ']' escape prompt, treat the VIP as unusable and do not continue. Troubleshoot keepalived first, e.g. the firewall and selinux, the haproxy and keepalived status, and the listening ports:
On all nodes the firewall must be disabled and inactive: systemctl status firewalld
On all nodes selinux must be disabled: getenforce
On the master nodes check the haproxy and keepalived status: systemctl status keepalived haproxy
On the master nodes check the listening ports: netstat -lntp
Chapter 7: Configuring the Kubernetes Components
7.1 Etcd Configuration
The etcd configuration is almost identical on every node; just adjust the hostname and IP addresses in each Master node's etcd config.
The configuration files differ across the three master nodes, so do not use the "send keys to all sessions" feature here.
7.11 Configure master01

[root@k8s-master01 pki]# vim /etc/etcd/etcd.config.yml
# Add the following
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.200:2380'
listen-client-urls: 'https://192.168.0.200:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.200:2380'
advertise-client-urls: 'https://192.168.0.200:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.200:2380,k8s-master02=https://192.168.0.201:2380,k8s-master03=https://192.168.0.202:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
7.12 Configure master02

[root@k8s-master02 ~]# vim /etc/etcd/etcd.config.yml
# Add the following
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.201:2380'
listen-client-urls: 'https://192.168.0.201:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.201:2380'
advertise-client-urls: 'https://192.168.0.201:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.200:2380,k8s-master02=https://192.168.0.201:2380,k8s-master03=https://192.168.0.202:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
7.13 Configure master03

[root@k8s-master03 ~]# vim /etc/etcd/etcd.config.yml
# Add the following
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.202:2380'
listen-client-urls: 'https://192.168.0.202:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.202:2380'
advertise-client-urls: 'https://192.168.0.202:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.200:2380,k8s-master02=https://192.168.0.201:2380,k8s-master03=https://192.168.0.202:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
7.14 Create the Service
Create and start the etcd service on all Master nodes; here you can use the "send keys to all sessions" feature.

[root@k8s-master01 pki]# vim /usr/lib/systemd/system/etcd.service
# Add the following
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
Create the etcd certificate directory on all Master nodes, then start etcd.

[root@k8s-master01 pki]# mkdir /etc/kubernetes/pki/etcd
[root@k8s-master01 pki]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now etcd
Check the etcd status.

# Command 1
export ETCDCTL_API=3
# Command 2
etcdctl --endpoints="192.168.0.202:2379,192.168.0.201:2379,192.168.0.200:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
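An additional check (not in the original text) that should report all three members as healthy:

etcdctl --endpoints="192.168.0.202:2379,192.168.0.201:2379,192.168.0.200:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health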
7.2 Apiserver Configuration
Create the kube-apiserver service on all Master nodes.
# Note: if this is not a highly available cluster, change 192.168.0.236 to master01's address.
7.21 Master01 Configuration
Note: this document uses 10.96.0.0/16 as the k8s service CIDR; it must not overlap with the host network or the Pod CIDR. The --advertise-address is the master01 node's IP address. Adjust as needed.

# Note: this document uses 10.96.0.0/16 as the k8s service CIDR; it must not overlap with the host network or the Pod CIDR. Adjust as needed.
--bind-address=0.0.0.0           # listen address
--secure-port=6443               # secure port
--insecure-port=0                # disable the insecure port (0 disables it)
--advertise-address=192.168.0.200   # master01's IP address
--service-cluster-ip-range=10.96.0.0/16   # the service CIDR
--service-node-port-range=30000-32767     # the NodePort range
--etcd-servers=https://192.168.0.200:2379,https://192.168.0.201:2379,https://192.168.0.202:2379   # the etcd cluster endpoints
#

[root@k8s-master01 pki]# vim /usr/lib/systemd/system/kube-apiserver.service
# Add the following
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --advertise-address=192.168.0.200 \
      --service-cluster-ip-range=10.96.0.0/16 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.0.200:2379,https://192.168.0.201:2379,https://192.168.0.202:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
7.22 Master02 Configuration
Note: this document uses 10.96.0.0/16 as the k8s service CIDR; it must not overlap with the host network or the Pod CIDR. The --advertise-address is the master02 node's IP address. Adjust as needed.

# Note: this document uses 10.96.0.0/16 as the k8s service CIDR; it must not overlap with the host network or the Pod CIDR. Adjust as needed.
--bind-address=0.0.0.0           # listen address
--secure-port=6443               # secure port
--insecure-port=0                # disable the insecure port (0 disables it)
--advertise-address=192.168.0.201   # master02's IP address
--service-cluster-ip-range=10.96.0.0/16   # the service CIDR
--service-node-port-range=30000-32767     # the NodePort range
--etcd-servers=https://192.168.0.200:2379,https://192.168.0.201:2379,https://192.168.0.202:2379   # the etcd cluster endpoints on the three masters
#

[root@k8s-master02 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
# Add the following
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --advertise-address=192.168.0.201 \
      --service-cluster-ip-range=10.96.0.0/16 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.0.200:2379,https://192.168.0.201:2379,https://192.168.0.202:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
7.23 Master03 Configuration
Note: this document uses 10.96.0.0/16 as the k8s service CIDR; it must not overlap with the host network or the Pod CIDR. The --advertise-address is the master03 node's IP address. Adjust as needed.

# Note: this document uses 10.96.0.0/16 as the k8s service CIDR; it must not overlap with the host network or the Pod CIDR. Adjust as needed.
--bind-address=0.0.0.0           # listen address
--secure-port=6443               # secure port
--insecure-port=0                # disable the insecure port (0 disables it)
--advertise-address=192.168.0.202   # master03's IP address
--service-cluster-ip-range=10.96.0.0/16   # the service CIDR
--service-node-port-range=30000-32767     # the NodePort range
--etcd-servers=https://192.168.0.200:2379,https://192.168.0.201:2379,https://192.168.0.202:2379   # the etcd cluster endpoints on the three masters
#

[root@k8s-master03 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
# Add the following
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --advertise-address=192.168.0.202 \
      --service-cluster-ip-range=10.96.0.0/16 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.0.200:2379,https://192.168.0.201:2379,https://192.168.0.202:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
7.24 Start the apiserver
Enable kube-apiserver on all Master nodes.

systemctl daemon-reload && systemctl enable --now kube-apiserver
Check the kube-apiserver status.

[root@k8s-master03 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2024-06-22 20:40:14 CST; 13s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 61611 (kube-apiserver)
    Tasks: 10
   Memory: 197.6M
   CGroup: /system.slice/kube-apiserver.service
           └─61611 /usr/local/bin/kube-apiserver --v=2 --allow-privileged=true --bind-address=0.0.0.0 --secure-port=...

Jun 22 20:40:20 k8s-master03 kube-apiserver[61611]: I0622 20:40:20.701540   61611 storage_rbac.go:321] created r...stem
Jun 22 20:40:21 k8s-master03 kube-apiserver[61611]: I0622 20:40:21.056237   61611 controller.go:624] quota admis...s.io
Hint: Some lines were ellipsized, use -l to show in full.
7.3 controller-manager Configuration
Configure the kube-controller-manager service on all Master nodes (the configuration is identical on every master).
Note: this document uses 172.16.0.0/16 as the k8s Pod CIDR; it must not overlap with the host network or the k8s Service CIDR. Adjust as needed.

[root@k8s-master01 pki]# vim /usr/lib/systemd/system/kube-controller-manager.service
# Add the following
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/16 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
Start kube-controller-manager on all Master nodes.

[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Check the status:

[root@k8s-master01 pki]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2024-06-22 20:51:19 CST; 51s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 62539 (kube-controller)
    Tasks: 7
   Memory: 24.6M
   CGroup: /system.slice/kube-controller-manager.service
           └─62539 /usr/local/bin/kube-controller-manager --v=2 --root-ca-file=/etc/kubernetes/pki/ca.pem --cluster-signing-cert-file=/etc/kubern...

Jun 22 20:51:21 k8s-master01 kube-controller-manager[62539]: I0622 20:51:21.401104   62539 secure_serving.go:213] Serving securely on [::]:10257
Jun 22 20:51:21 k8s-master01 kube-controller-manager[62539]: I0622 20:51:21.503023   62539 named_certificates.go:53] "Loaded SNI cert" index=0 ce...
Hint: Some lines were ellipsized, use -l to show in full.
Check the logs for errors.

[root@k8s-master01 pki]# tail -f /var/log/messages
7.4 Scheduler Configuration
Configure the kube-scheduler service on all Master nodes (the configuration is identical on every master).

[root@k8s-master01 pki]# vim /usr/lib/systemd/system/kube-scheduler.service
# Add the following
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --leader-elect=true \
      --authentication-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
      --authorization-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
Start the Scheduler.

[root@k8s-master03 ~]# systemctl daemon-reload
[root@k8s-master03 ~]# systemctl enable --now kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
Check the status.

[root@k8s-master01 pki]# systemctl status kube-scheduler
Check the logs for error messages.

[root@k8s-master01 pki]# tail -f /var/log/messages
Chapter 8: TLS Bootstrapping
8.1 Bootstrap Configuration
Bootstrapping automatically issues certificates for the node kubelets.
The bootstrap only needs to be created on Master01.
# Note: if this is not a highly available cluster, change 192.168.0.236:8443 to master01's address, and 8443 to the apiserver port (6443 by default).

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.236:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Copy the admin kubeconfig.

[root@k8s-master01 k8s-ha-install]# mkdir -p /root/.kube
[root@k8s-master01 k8s-ha-install]# cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
Only continue if the cluster status can be queried normally; otherwise stop here and troubleshoot the k8s components.

[root@k8s-master01 k8s-ha-install]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   ok
Create the bootstrap resources:
Note: if you modify the token-id and token-secret in bootstrap.secret.yaml, the corresponding strings in that file must stay consistent with each other and keep the same number of characters, and the token used in the previous command, c8ad9c.2e4d610cf3e7426e, must match the string you changed.

# Create the bootstrap resources
[root@k8s-master01 bootstrap]# cd /root/k8s-ha-install/bootstrap
[root@k8s-master01 bootstrap]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
Chapter 9: Node Configuration
9.1 Copy the Certificates
Copy the certificates from the Master01 node to the other nodes.

[root@k8s-master01 bootstrap]# cd /etc/kubernetes/
# From /etc/kubernetes/, run the following
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  ssh $NODE mkdir -p /etc/kubernetes/pki
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done
9.2 kubelet Configuration
Create the required directories on all nodes.

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
Configure the kubelet service on all nodes (the file is identical on every node).

vim /usr/lib/systemd/system/kubelet.service
# Add the following
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
Configure the kubelet service drop-in on all nodes (this could also be written into kubelet.service directly):

# Runtime is Containerd
vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
# Add the following
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
Create the kubelet configuration file on all nodes.
Note: if you changed the k8s service CIDR, update the clusterDNS setting in kubelet-conf.yml to the tenth address of the Service CIDR, e.g. 10.96.0.10.

vim /etc/kubernetes/kubelet-conf.yml
# Add the following
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
启动所有节点kubelet
1 2 systemctl daemon-reload systemctl enable --now kubelet
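启动后可以先用下面的示意命令确认kubelet进程本身运行正常（Active应为active (running)）：
# 查看kubelet运行状态
systemctl status kubelet
# 实时查看kubelet日志，Ctrl+C退出
journalctl -u kubelet -f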
此时系统日志/var/log/messages显示如下信息为正常,安装calico后即可恢复
1 2 Jun 22 22:13:19 k8s-master01 kubelet: E0622 22:13:19.639448 66404 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin notinitialized" Jun 22 22:13:24 k8s-master01 kubelet: E0622 22:13:24.655476 66404 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin notinitialized"
查看集群状态(状态为NotReady是因为没有安装网络插件 )
1 2 3 4 5 6 7 [root@k8s-master01 kubernetes]# kubectl get node NAME STATUS ROLES AGE VERSION k8s-master01 NotReady <none> 9m55s v1.28.11 k8s-master02 NotReady <none> 9m55s v1.28.11 k8s-master03 NotReady <none> 9m55s v1.28.11 k8s-node01 NotReady <none> 9m56s v1.28.11 k8s-node02 NotReady <none> 9m56s v1.28.11
9.3、kube-proxy配置 # 注意,如果不是高可用集群,192.168.0.236:8443改为master01的地址,8443改为apiserver的端口,默认是6443
以下操作只在Master01执行
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 # 进入pki目录 cd /root/k8s-ha-install/pki # 执行下列命令 cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy # 注意,如果不是高可用集群,192.168.0.236:8443改为master01的地址,8443改为apiserver的端口,默认是6443 kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://192.168.0.236:8443 \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-credentials system:kube-proxy \ --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \ --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-context system:kube-proxy@kubernetes \ --cluster=kubernetes \ --user=system:kube-proxy \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config use-context system:kube-proxy@kubernetes \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
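生成完成后，可以用下面的示意命令查看kube-proxy的kubeconfig，确认server地址为https://192.168.0.236:8443、当前上下文为system:kube-proxy@kubernetes（证书内容会以DATA+OMITTED/REDACTED形式省略显示）：
kubectl config view --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig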
将kubeconfig发送至其他节点
1 2 3 4 5 6 7 for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig done for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig done
所有节点添加kube-proxy的配置和service文件:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 vim /usr/lib/systemd/system/kube-proxy.service # 所有节点的配置文件相同 [Unit] Description=Kubernetes Kube Proxy Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-proxy \ --config=/etc/kubernetes/kube-proxy.yaml \ --v=2 Restart=always RestartSec=10s [Install] WantedBy=multi-user.target
如果更改了集群Pod的网段,需要更改kube-proxy.yaml的clusterCIDR为自己的Pod网段:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 vim /etc/kubernetes/kube-proxy.yaml #所有节点都要配置以下内容 apiVersion: kubeproxy.config.k8s.io/v1alpha1 bindAddress: 0.0.0.0 clientConnection: acceptContentTypes: "" burst: 10 contentType: application/vnd.kubernetes.protobuf kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig qps: 5 clusterCIDR: 172.16.0.0/16 configSyncPeriod: 15m0s conntrack: max: null maxPerCore: 32768 min: 131072 tcpCloseWaitTimeout: 1h0m0s tcpEstablishedTimeout: 24h0m0s enableProfiling: false healthzBindAddress: 0.0.0.0:10256 hostnameOverride: "" iptables: masqueradeAll: false masqueradeBit: 14 minSyncPeriod: 0s syncPeriod: 30s ipvs: masqueradeAll: true minSyncPeriod: 5s scheduler: "rr" syncPeriod: 30s kind: KubeProxyConfiguration metricsBindAddress: 127.0.0.1:10249 mode: "ipvs" nodePortAddresses: null oomScoreAdj: -999 portRange: "" udpIdleTimeout: 250ms
所有节点启动kube-proxy
1 2 3 [root@k8s-master01 pki]# systemctl daemon-reload [root@k8s-master01 pki]# systemctl enable --now kube-proxy Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
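kube-proxy启动后，可以用下面的示意命令确认代理模式确实是ipvs（ipvsadm需要单独安装，如果没有安装可跳过第二条检查）：
# kube-proxy在metrics端口(10249)上提供proxyMode接口，预期输出ipvs
curl 127.0.0.1:10249/proxyMode
# 查看ipvs转发规则，能看到10.96.0.1:443等规则说明ipvs生效
ipvsadm -Ln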
此时系统日志/var/log/messages中仍会报错"cni plugin not initialized"，属于正常现象，安装calico后即可恢复
1 2 3 Jun 22 22:34:35 k8s-master01 kubelet: E0622 22:34:35.232928 66404 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin notinitialized" Jun 22 22:34:40 k8s-master01 kubelet: E0622 22:34:40.235011 66404 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin notinitialized" Jun 22 22:34:45 k8s-master01 kubelet: E0622 22:34:45.236352 66404 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin notinitialized"
第十章、安装Calico 10.1、calico安装 以下步骤只在master01执行
进入安装目录
1 [root@k8s-master01 pki]# cd /root/k8s-ha-install/calico/
更改calico的网段，需要将calico.yaml中CALICO_IPV4POOL_CIDR对应的POD_CIDR占位符改为自己的Pod网段
1 2 3 4 5 6 7 # 将POD_CIDR修改为172.16.0.0/16 (172.16.0.0/16为自己规划的pod网段) sed -i "s#POD_CIDR#172.16.0.0/16#g" calico.yaml # 修改后的结果 [root@k8s-master01 calico]# grep "IPV4POOL_CIDR" calico.yaml -A 1 - name: CALICO_IPV4POOL_CIDR value: "172.16.0.0/16"
创建calico,master01执行
1 kubectl apply -f calico.yaml
查看容器状态
1 2 3 4 5 6 7 8 [root@k8s-master01 calico]# kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-6d48795585-p55vd 0/1 ContainerCreating 0 55s calico-node-22gvx 0/1 Running 0 55s calico-node-ctrvd 0/1 Init:2/3 0 55s calico-node-p5g2k 0/1 Running 0 55s calico-node-swsrm 0/1 Running 0 55s calico-node-wtb5w 0/1 Running 0 55s
如果容器状态存在异常,可以使用kubectl describe或者kubectl logs -f查看容器日志
1 2 3 4 5 6 7 8 9 # 如果pod有错误使用logs -f查看 [root@k8s-master01 ~]# kubectl logs -f calico-node-wtb5w -n kube-system # describe # po表示资源类型 # 资源名称(这里是pod的名字) # -n 指定命名空间 # kube-system 命名空间的名字 kubectl describe po calico-node-22gvx -n kube-system
第十一章、安装CoreDNS 11.1、CoreDNS安装 1 2 #进入安装目录 cd /root/k8s-ha-install/
如果更改了k8s Service的网段，需要将CoreDNS的serviceIP改成k8s Service网段的第十个IP（例如10.96.0.10）
1 2 # 取kubernetes这个svc的CLUSTER-IP（默认10.96.0.1），在其末尾拼接0即得到Service网段的第十个IP（10.96.0.10） COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0
1 2 # 将coredns.yaml文件的clusterIP改为10.96.0.10 sed -i "s#KUBEDNS_SERVICE_IP#${COREDNS_SERVICE_IP}#g" CoreDNS/coredns.yaml
修改前，coredns.yaml中Service的clusterIP为占位符KUBEDNS_SERVICE_IP；修改后应为10.96.0.10。
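替换完成后，可以用下面的示意命令确认coredns.yaml中的clusterIP已经是期望值：
# 预期输出 clusterIP: 10.96.0.10
grep "clusterIP:" CoreDNS/coredns.yaml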
安装CoreDNS
1 2 3 4 5 6 7 8 9 10 11 kubectl create -f CoreDNS/coredns.yaml [root@k8s-master01 k8s-ha-install]# kubectl get pod -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-6d48795585-p55vd 1/1 Running 0 39h calico-node-22gvx 1/1 Running 0 39h calico-node-ctrvd 1/1 Running 0 39h calico-node-p5g2k 1/1 Running 0 39h calico-node-swsrm 1/1 Running 0 39h calico-node-wtb5w 1/1 Running 0 39h coredns-788958459b-jrn6r 1/1 Running 0 2m11s
如果集群过大,需要对coredns扩容可以按照下列方法操作:
1 2 3 4 5 6 7 [root@k8s-master01 k8s-ha-install]# kubectl get deploy -n kube-system NAME READY UP-TO-DATE AVAILABLE AGE calico-kube-controllers 1/1 1 1 39h coredns 1/1 1 1 24m # 修改coredns副本的数量,将replicas改为3(一般5-10就够了),这里不做修改 [root@k8s-master01 k8s-ha-install]# kubectl edit deploy coredns -n kube-system
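除了kubectl edit，也可以用kubectl scale直接调整副本数，下面是一个示意命令（这里以3副本为例，实际按集群规模调整）：
# 将coredns副本数调整为3
kubectl scale deploy coredns -n kube-system --replicas=3
# 确认副本数
kubectl get deploy coredns -n kube-system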
第十二章、安装metrics-server 在新版的Kubernetes中系统资源的采集均使用Metrics-server,可以通过Metrics采集节点和Pod的内存、磁盘、CPU和网络的使用率。
安装metrics server:
1 2 cd /root/k8s-ha-install/metrics-server kubectl create -f .
等待metrics-server启动成功,然后查看状态
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 [root@k8s-master01 metrics-server]# kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-6d48795585-p55vd 1/1 Running 0 39h calico-node-22gvx 1/1 Running 0 39h calico-node-ctrvd 1/1 Running 0 39h calico-node-p5g2k 1/1 Running 0 39h calico-node-swsrm 1/1 Running 0 39h calico-node-wtb5w 1/1 Running 0 39h coredns-788958459b-jrn6r 1/1 Running 0 41m metrics-server-8f77d49f6-k5nsv 1/1 Running 0 102s [root@k8s-master01 metrics-server]# kubectl top node NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% k8s-master01 306m 7% 1881Mi 49% k8s-master02 298m 7% 1202Mi 31% k8s-master03 237m 5% 1142Mi 29% k8s-node01 194m 4% 562Mi 14% k8s-node02 205m 5% 607Mi 15%
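metrics-server正常工作后，也可以按资源用量排序查看Pod的占用情况，下面是一个示意命令：
# 按内存用量排序查看kube-system下Pod的资源使用
kubectl top po -n kube-system --sort-by=memory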
第十三章、安装Dashboard图形化管理 Dashboard用于展示集群中的各类资源,同时也可以通过Dashboard实时查看Pod的日志和在容器中执行一些命令等。
13.1、安装指定版本Dashboard 1 2 3 4 5 6 7 8 9 10 11 12 13 14 cd /root/k8s-ha-install/dashboard/ [root@k8s-master01 dashboard]# kubectl get po -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-kube-controllers-6d48795585-p55vd 1/1 Running 0 40h kube-system calico-node-22gvx 1/1 Running 0 40h kube-system calico-node-ctrvd 1/1 Running 0 40h kube-system calico-node-p5g2k 1/1 Running 0 40h kube-system calico-node-swsrm 1/1 Running 0 40h kube-system calico-node-wtb5w 1/1 Running 0 40h kube-system coredns-788958459b-jrn6r 1/1 Running 0 45m kube-system metrics-server-8f77d49f6-k5nsv 1/1 Running 0 6m7s kubernetes-dashboard dashboard-metrics-scraper-7b554c884f-br7cq 1/1 Running 0 53s kubernetes-dashboard kubernetes-dashboard-54b699784c-nczj4 1/1 Running 0 54s
13.2、网页登录Dashboard 在谷歌浏览器(Chrome)快捷方式的启动参数中加入以下参数，用于忽略证书错误，解决无法访问Dashboard的问题，参考下图
1 --test-type --ignore-certificate-errors
右键-属性添加上述参数,参考下图:
更改dashboard的svc为NodePort:
1 2 #将ClusterIP更改为NodePort,如下图(如果已经为NodePort忽略此步骤): kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
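如果不想进入交互式编辑，也可以用kubectl patch一条命令把Service类型改为NodePort，示意命令如下：
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'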
查看端口号:
1 2 3 [root@k8s-master01 dashboard]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes-dashboard NodePort 10.96.64.166 <none> 443:31438/TCP 33m
根据自己的实例端口号,通过任意安装了kube-proxy的宿主机的IP+端口即可访问到dashboard:
访问Dashboard:https://192.168.0.204:31438(请更改31438为自己的端口) ,选择登录方式为令牌(即token方式),参考下图
创建登录Token:
1 2 [root@k8s-master01 dashboard]# kubectl create token admin-user -n kube-system eyJhbGciOiJSUzI1NiIsImtpZCI6IjVBbXdvS2hiaXNqQVkwZU1DYmQxc0lIeVZvN1lIT3pKanVxSWR4VGlrcWcifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzE5MjE3MzAzLCJpYXQiOjE3MTkyMTM3MDMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZTU1N2JmYzUtOWY0MS00Y2MwLTgyZjUtYzEzNWJmNTE1OWU1In19LCJuYmYiOjE3MTkyMTM3MDMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.jCqZVh-tVgpDj7sn_M9X1ZzLPIl4__UXCk6vpY-is6NiVK49pVl5UgxHOjITvg3atEgG94a4hcJFk-Yk7uRkOq6ltrQ4InxOK34zqJPru6OjrOZLKQrd3dzflBMf-cboE_WkS67PekEkz1wV647X3U7XANhPXmRDYyKD8H7YE7zuSPSqKkFaqVpSA7vNV7CmeikeFR5twJNLW4fHFGx1Q2fbwQtGw3XEbm0uHWg23is9-fpuD3CeQTRLiV1YF2iaZX0-WhkueMvqTEmYH8-y9YNFk-gtWFhAQaY5brgdRdKvKyDTNhCMlEmAxlFI1JVWb2pugPlXM22FrlFedYfUmA
将token值粘贴到令牌输入框后，单击登录即可访问Dashboard。如果时间久了token失效，重新执行 kubectl create token admin-user -n kube-system 生成一个即可。
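kubectl create token生成的token默认有效期较短（通常为1小时），如果希望延长有效期，可以加--duration参数，下面是一个示意命令（实际可用的最长时效受集群配置限制）：
# 生成有效期为24小时的token
kubectl create token admin-user -n kube-system --duration=24h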
第十四章、更好的图形化管理-krm 14.1、安装krm k8s版本要求1.13+(1.13之前版本未验证)
1 # 参考链接：https://github.com/dotbalo/krm/blob/main/deploy.md
服务部署：在安装KRM的集群中创建Namespace并授权。注意：下述步骤将KRM安装到了krm命名空间，如果需要更改Namespace，需要把下面步骤所有涉及Namespace的地方改为自己的Namespace，推荐不更改Namespace
1 2 3 4 5 kubectl create ns krm kubectl create sa krm-backend -n krm kubectl create rolebinding krm-backend --clusterrole=edit --serviceaccount=krm:krm-backend --namespace=krm kubectl create clusterrole namespace-creater --verb=create --resource=namespaces kubectl create clusterrolebinding krm-backend-ns-creater --clusterrole=namespace-creater --serviceaccount=krm:krm-backend --namespace=krm
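授权完成后，可以用下面的示意命令确认ServiceAccount和各绑定已创建：
kubectl get sa krm-backend -n krm
kubectl get rolebinding krm-backend -n krm
kubectl get clusterrolebinding krm-backend-ns-creater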
部署后端服务:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 cat<<EOF | kubectl -n krm apply -f - --- apiVersion: v1 kind: Service metadata: labels: app: krm-backend name: krm-backend spec: ports: - name: http port: 8080 protocol: TCP targetPort: 8080 selector: app: krm-backend sessionAffinity: None type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: krm-backend name: krm-backend spec: replicas: 1 selector: matchLabels: app: krm-backend strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 0 type: RollingUpdate template: metadata: labels: app: krm-backend spec: serviceAccountName: krm-backend containers: - env: - name: TZ value: Asia/Shanghai - name: LANG value: C.UTF-8 - name: GIN_MODE value: release - name: LOG_LEVEL value: info - name: USERNAME value: 21232F297A57A5A743894A0E4A801FC3 - name: PASSWORD value: 21232F297A57A5A743894A0E4A801FC3 - name: "IN_CLUSTER" value: "true" image: registry.cn-beijing.aliyuncs.com/dotbalo/krm-backend:latest lifecycle: {} livenessProbe: failureThreshold: 2 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 8080 timeoutSeconds: 2 name: krm-backend ports: - containerPort: 8080 name: web protocol: TCP readinessProbe: failureThreshold: 2 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 8080 timeoutSeconds: 2 resources: limits: cpu: 1 memory: 1024Mi requests: cpu: 200m memory: 256Mi restartPolicy: Always EOF
部署前端服务:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 cat<<EOF | kubectl -n krm apply -f - --- apiVersion: v1 kind: Service metadata: labels: app: krm-frontend name: krm-frontend spec: ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: krm-frontend sessionAffinity: None type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: krm-frontend name: krm-frontend spec: replicas: 1 selector: matchLabels: app: krm-frontend strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 0 type: RollingUpdate template: metadata: labels: app: krm-frontend spec: containers: - env: - name: TZ value: Asia/Shanghai - name: LANG value: C.UTF-8 image: registry.cn-beijing.aliyuncs.com/dotbalo/krm-frontend:latest lifecycle: {} livenessProbe: failureThreshold: 2 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 80 timeoutSeconds: 2 name: krm-backend ports: - containerPort: 80 name: web protocol: TCP readinessProbe: failureThreshold: 2 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 80 timeoutSeconds: 2 resources: limits: cpu: 1 memory: 512Mi requests: cpu: 100m memory: 256Mi restartPolicy: Always EOF
部署成功后,通过kubectl get svc -n krm查看krm-frontend的Service的NodePort,之后通过任意一台Kubernetes工作节点的IP:NodePort即可访问KRM
默认用户名密码：admin / admin。如需修改，请更改后端Deployment中的USERNAME/PASSWORD环境变量，注意变量值为用户名、密码对应的大写32位MD5值。
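下面是一个计算大写32位MD5值的示意命令（以默认密码admin为例，输出的21232F297A57A5A743894A0E4A801FC3即admin对应的大写MD5）：
# echo -n 保证不带换行符参与计算
echo -n "admin" | md5sum | awk '{print toupper($1)}'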
1 2 3 4 5 6 7 8 [root@k8s-master01 dashboard]# kubectl get po -n krm NAME READY STATUS RESTARTS AGE krm-backend-676b748549-dqb9z 1/1 Running 0 3m36s krm-frontend-5f485cbd8b-7nfmf 1/1 Running 0 2m51s [root@k8s-master01 dashboard]# kubectl get svc -n krm NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE krm-backend ClusterIP 10.96.16.124 <none> 8080/TCP 4m3s krm-frontend NodePort 10.96.226.243 <none> 80:30673/TCP 3m18s
14.2、krm基本使用 浏览器访问控制台:
1 http://192.168.0.200:30673/#/login
krm基本使用-添加集群:
[集群资源]-[集群管理]-[添加]
第十五章、集群验证 15.1、验证集群可用性 ①查看节点:
1 2 3 4 5 6 7 8 # 节点状态需要都是Ready [root@k8s-master01 dashboard]# kubectl get node NAME STATUS ROLES AGE VERSION k8s-master01 Ready <none> 2d13h v1.28.11 k8s-master02 Ready <none> 2d13h v1.28.11 k8s-master03 Ready <none> 2d13h v1.28.11 k8s-node01 Ready <none> 2d13h v1.28.11 k8s-node02 Ready <none> 2d13h v1.28.11
②查看集群所有pod是否正常:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 # pod状态要全是Running,READY字段前后数字要一致 [root@k8s-master01 dashboard]# kubectl get po -A NAMESPACE NAME READY STATUS RESTARTS AGE krm krm-backend-676b748549-dqb9z 1/1 Running 0 47m krm krm-frontend-5f485cbd8b-7nfmf 1/1 Running 0 46m kube-system calico-kube-controllers-6d48795585-p55vd 1/1 Running 0 2d12h kube-system calico-node-22gvx 1/1 Running 0 2d12h kube-system calico-node-ctrvd 1/1 Running 0 2d12h kube-system calico-node-p5g2k 1/1 Running 0 2d12h kube-system calico-node-swsrm 1/1 Running 0 2d12h kube-system calico-node-wtb5w 1/1 Running 0 2d12h kube-system coredns-788958459b-jrn6r 1/1 Running 0 21h kube-system metrics-server-8f77d49f6-k5nsv 1/1 Running 0 20h kubernetes-dashboard dashboard-metrics-scraper-7b554c884f-br7cq 1/1 Running 0 20h kubernetes-dashboard kubernetes-dashboard-54b699784c-nczj4 1/1 Running 0 20h
③查看集群网段ip是否冲突
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 # svc网段 [root@k8s-master01 dashboard]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d14h # pod网段 [root@k8s-master01 dashboard]# kubectl get po -A -owide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES krm krm-backend-676b748549-dqb9z 1/1 Running 0 56m 172.16.58.194 k8s-node02 <none> <none> krm krm-frontend-5f485cbd8b-7nfmf 1/1 Running 0 56m 172.16.122.130 k8s-master02 <none> <none> kube-system calico-kube-controllers-6d48795585-p55vd 1/1 Running 0 2d12h 172.16.32.129 k8s-master01 <none> <none> kube-system calico-node-22gvx 1/1 Running 0 2d12h 192.168.0.203 k8s-node01 <none> <none> kube-system calico-node-ctrvd 1/1 Running 0 2d12h 192.168.0.204 k8s-node02 <none> <none> kube-system calico-node-p5g2k 1/1 Running 0 2d12h 192.168.0.201 k8s-master02 <none> <none> kube-system calico-node-swsrm 1/1 Running 0 2d12h 192.168.0.200 k8s-master01 <none> <none> kube-system calico-node-wtb5w 1/1 Running 0 2d12h 192.168.0.202 k8s-master03 <none> <none> kube-system coredns-788958459b-jrn6r 1/1 Running 0 21h 172.16.122.129 k8s-master02 <none> <none> kube-system metrics-server-8f77d49f6-k5nsv 1/1 Running 0 20h 172.16.58.193 k8s-node02 <none> <none> kubernetes-dashboard dashboard-metrics-scraper-7b554c884f-br7cq 1/1 Running 0 20h 172.16.85.193 k8s-node01 <none> <none> kubernetes-dashboard kubernetes-dashboard-54b699784c-nczj4 1/1 Running 0 20h 172.16.195.1 k8s-master03 <none> <none>
④要能够正常创建资源
1 2 3 4 5 6 7 8 # 在default命名空间下创建一个deploy测试 [root@k8s-master01 ~]# kubectl create deploy cluster-test --image=registry.cn-beijing.aliyuncs.com/dotbalo/debug-tools -- sleep 3600 deployment.apps/cluster-test created # 查看刚才创建的pod [root@k8s-master01 ~]# kubectl get po NAME READY STATUS RESTARTS AGE cluster-test-5dbf5c5d-vt8kc 1/1 Running 0 45s
⑤Pod 必须能够解析 Service(同 namespace 和跨 namespace)
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 #进入pod中 [root@k8s-master01 dashboard]# kubectl exec -ti cluster-test-66bb44bd88-7kct4 -- bash (06:52 cluster-test-66bb44bd88-7kct4:/) #在pod中执行nslookup (06:52 cluster-test-66bb44bd88-7kct4:/) nslookup kubernetes Server: 10.96.0.10 Address: 10.96.0.10#53 Name: kubernetes.default.svc.cluster.local Address: 10.96.0.1 (06:53 cluster-test-66bb44bd88-7kct4:/) nslookup kube-dns.kube-system Server: 10.96.0.10 Address: 10.96.0.10#53 Name: kube-dns.kube-system.svc.cluster.local Address: 10.96.0.10
⑥每个节点都必须要能访问 Kubernetes 的 kubernetes svc 443 和 kube-dns 的 service 53
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 # 五台节点都要执行，这里可以使用一键所有会话，返回下面信息表示正确 [root@k8s-node03 ~]# curl https://10.96.0.1:443 -k { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", "reason": "Forbidden", "details": {}, "code": 403 } # curl 53端口返回(52) Empty reply from server表示kube-dns的53端口可达，属于正常现象 [root@k8s-master03 ~]# curl 10.96.0.10:53 curl: (52) Empty reply from server
⑦Pod 和 Pod 之间要能够正常通讯(同 namespace 和跨 namespace)
1 2 3 4 5 6 7 # 只在k8s-master01节点上执行 [root@k8s-master01 dashboard]# kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cluster-test-66bb44bd88-7kct4 1/1 Running 0 15m 172.16.195.2 k8s-master03 <none> <none>
⑧Pod 和 Pod 之间要能够正常通讯(同机器和跨机器)
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 [root@k8s-master01 dashboard]# kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cluster-test-66bb44bd88-7kct4 1/1 Running 0 15m 172.16.195.2 k8s-master03 <none> <none> # 所有节点都执行ping命令 [root@k8s-master01 dashboard]# ping 172.16.195.2 PING 172.16.195.2 (172.16.195.2) 56(84) bytes of data. 64 bytes from 172.16.195.2: icmp_seq=1 ttl=63 time=0.647 ms 64 bytes from 172.16.195.2: icmp_seq=2 ttl=63 time=3.08 ms ^C --- 172.16.195.2 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1002ms rtt min/avg/max/mdev = 0.647/1.865/3.084/1.219 ms # 再次确认测试pod的IP地址 [root@k8s-master01 dashboard]# kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cluster-test-66bb44bd88-7kct4 1/1 Running 0 15m 172.16.195.2 k8s-master03 <none> <none> # 进入到cluster-test-66bb44bd88-7kct4这个pod中，ping其他节点pod的ip [root@k8s-master01 dashboard]# kubectl exec -ti cluster-test-66bb44bd88-7kct4 -- bash (07:24 cluster-test-66bb44bd88-7kct4:/) ping 172.16.58.193 PING 172.16.58.193 (172.16.58.193) 56(84) bytes of data. 64 bytes from 172.16.58.193: icmp_seq=1 ttl=62 time=1.16 ms 64 bytes from 172.16.58.193: icmp_seq=2 ttl=62 time=0.877 ms ^C --- 172.16.58.193 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.877/1.023/1.169/0.146 ms
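验证完成后，可以删除用于测试的deployment（示意命令，按需执行）：
kubectl delete deploy cluster-test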
至此k8s集群安装完成!!!