Setting Up a Kubernetes Cluster

Prerequisites

Cluster machine plan

Cluster Role              Hostname       IP Address        Machine Specs
Control plane (master)    k8s-master01   192.168.204.101   4 GB RAM, 4-core CPU, 100 GB disk
Node 1 (worker)           k8s-node01     192.168.204.102   4 GB RAM, 4-core CPU, 100 GB disk
Node 2 (worker)           k8s-node02     192.168.204.103   4 GB RAM, 4-core CPU, 100 GB disk

Network topology plan


To simulate a real environment, external network access goes through a router. The network topology is planned as follows:

  • All three Linux machines have two virtual NICs, but only the host-only NIC is used; the NAT NIC is disabled.
  • The virtual router (iKuai) also has two virtual NICs: the host-only NIC forms a LAN with the three Linux machines through a switch (same subnet), and the NAT NIC connects to the external network.
  • Traffic between the three Linux machines stays on the internal network; to reach the external network they go through the router; a quick check is sketched below.
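
After the setup below is complete, the expected traffic path can be sanity-checked from any of the three Linux machines. A quick sketch, assuming the addresses planned above (traceroute requires the traceroute package):

# the default route should point at the iKuai LAN address
ip route show default
# the first hop to an external host should be 192.168.204.200
traceroute -n baidu.com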

Installation Steps

Install iKuai

Create the iKuai virtual machine

Download the iKuai router ISO from:

Firmware Download - iKuai (commercial-scenario network solution provider)

For a test environment use the 32-bit build, which only needs more than 1 GB of RAM; for production use the 64-bit build, which requires more than 4 GB of RAM.

Choose 32-bit or 64-bit according to your actual environment; this guide uses the 32-bit build as an example.


Open VMware and create a new virtual machine.


Add a second network adapter.


Select the iKuai ISO file.


Install and configure iKuai


Installation process:


Enter 1, then enter y.


Enter 2.


Enter 0.


Enter the lan1 address:

192.168.204.200/255.255.255.0

Note: use an IP in the host-only subnet (in VMware, check it under Edit → Virtual Network Editor) and pick an address that is not already in use; adjust to your environment.


Press q to exit.


Press q again to exit.


Press q to lock the console.


From a browser, log in to the web console at the router's IP:

192.168.204.200


Enter the login credentials:

Username: admin, password: admin; click Log In.

Change the password when prompted.


Configure the network

Click Network Settings → Internal/External Network Settings, then click Close.


The internal network port now shows green, meaning it is available. The external port is gray and unavailable; click the external port.


Select the eth1 NIC and click Bind.


Keep the default DHCP access mode; in production this would normally be a static IP or PPPoE dial-up.


Click Save.


Back in Internal/External Network Settings, the external port has also turned green.


Hover over the wan1 icon; the IP address displayed above it is

192.168.193.146


The iKuai installation and configuration is now complete.

Configure Linux

Linux NIC configuration

Skip this if you have already done it.

Start the Linux machine and edit its NIC configuration files.

Configure the first NIC, ens33:

[root@localhost ~]# vi /etc/NetworkManager/system-connections/ens33.nmconnection 

Original [ipv4] section:

[ipv4]
method=manual
address1=192.168.204.101/24

Modified [ipv4] section:

[ipv4]
method=manual
address1=192.168.204.101/24,192.168.204.200
dns=114.114.114.114;8.8.8.8

This adds a gateway (the router address, 192.168.204.200) and DNS settings.
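
If you prefer not to edit the .nmconnection file by hand, nmcli can apply the same settings. A sketch, assuming the connection profile is named ens33 (verify with nmcli connection show):

# set a static address, gateway, and DNS on the ens33 profile
nmcli connection modify ens33 ipv4.method manual \
    ipv4.addresses 192.168.204.101/24 \
    ipv4.gateway 192.168.204.200 \
    ipv4.dns "114.114.114.114 8.8.8.8"
# re-activate the profile so the changes take effect
nmcli connection up ens33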

Configure the second NIC, ens34:

[root@localhost ~]# vi /etc/NetworkManager/system-connections/ens34.nmconnection 

Original [connection] section:

[connection]
id=ens34
uuid=4b38407c-5aa0-37b6-8526-89b796e8f7f1
type=ethernet
autoconnect-priority=-999
interface-name=ens34
timestamp=1725892709

Modified [connection] section:

[connection]
id=ens34
uuid=4b38407c-5aa0-37b6-8526-89b796e8f7f1
type=ethernet
autoconnect-priority=-999
autoconnect=false
interface-name=ens34
timestamp=1725892709

The added autoconnect=false line stops the profile from connecting automatically, effectively disabling this NIC.
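
The nmcli equivalent, again assuming the profile is named ens34:

# stop the profile from auto-connecting, then take the device down
nmcli connection modify ens34 connection.autoconnect no
nmcli device disconnect ens34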

iptables firewall setup

Skip this if you have already done it.

Stop and disable the firewalld firewall:

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed "/etc/systemd/system/multi-user.target.wants/firewalld.service".
Removed "/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service".

Install, start, and initialize the iptables firewall:

[root@localhost ~]# yum -y install iptables-services
[root@localhost ~]# systemctl start iptables
[root@localhost ~]# iptables -F
[root@localhost ~]# systemctl enable iptables
[root@localhost ~]# service iptables save
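
A quick check that iptables is active and the ruleset was flushed (the default chains should show policy ACCEPT with no rules):

# verify the service and inspect the rules
systemctl is-active iptables
iptables -L -n
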

Disable SELinux

Skip this if you have already done it.

[root@localhost ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
[root@localhost ~]# grubby --update-kernel ALL --args selinux=0
[root@localhost ~]# grubby --info DEFAULT
index=0
kernel="/boot/vmlinuz-5.14.0-427.13.1.el9_4.x86_64"
args="ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rl-swap rd.lvm.lv=rl/root rd.lvm.lv=rl/swap selinux=0"
root="/dev/mapper/rl-root"
initrd="/boot/initramfs-5.14.0-427.13.1.el9_4.x86_64.img"
title="Rocky Linux (5.14.0-427.13.1.el9_4.x86_64) 9.4 (Blue Onyx)"
id="fc9c6572842b49a7a39d6111b25bd0b2-5.14.0-427.13.1.el9_4.x86_64"

Disable the swap partition

Temporarily:

[root@localhost ~]# swapoff -a

Permanently (comment out the swap entry in /etc/fstab):

[root@localhost ~]# sed -i 's:/dev/mapper/rl-swap:#/dev/mapper/rl-swap:g' /etc/fstab

Kubernetes requires swap to be disabled permanently.
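
To confirm swap is off, both checks below should show no active swap:

# no output means no active swap device
swapon --show
# the Swap line should read 0B
free -h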

Set the hostname

[root@localhost ~]# hostnamectl set-hostname k8s-master01

Edit /etc/hosts

[root@localhost ~]# vi /etc/hosts

Add the following entries:

192.168.204.101 k8s-master01 m1
192.168.204.102 k8s-node01 n1
192.168.204.103 k8s-node02 n2

Adjust the IPs to your environment.
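
A quick way to confirm the entries resolve locally:

# the short aliases should resolve to the planned addresses
getent hosts m1 n1 n2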

Install ipvs

[root@localhost ~]# yum install -y ipvsadm
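
Installing ipvsadm alone does not load the IPVS kernel modules. If you intend to run kube-proxy in IPVS mode, a sketch of loading them now and on every boot (module names assume a stock el9 kernel):

# load the IPVS modules immediately
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
# and have them load at boot
cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF
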

Enable IP forwarding

# enable IP forwarding
[root@localhost ~]# echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf

# reload the configuration
[root@localhost ~]# sysctl -p

Load the bridge module

# handy tools for network debugging
[root@localhost ~]# yum install -y epel-release
[root@localhost ~]# yum install -y bridge-utils

# make all traffic crossing a bridge pass through the firewall
[root@localhost ~]# modprobe br_netfilter
[root@localhost ~]# echo 'br_netfilter' >> /etc/modules-load.d/bridge.conf
[root@localhost ~]# echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
[root@localhost ~]# echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf
[root@localhost ~]# sysctl -p
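
To confirm the module is loaded and the sysctl took effect:

# both commands should produce output, the second printing "= 1"
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables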

Reboot the machine (reboot).

Reconnect to the Linux machine.

Test connectivity with ping:

ping baidu.com


In the iKuai console you can confirm that the Linux machines' external traffic really does go through the router.


Install Docker

Skip this step if Docker is already installed.

If Docker is not installed yet, see: Install Docker.

Install and Configure cri-dockerd

About cri-dockerd

In Kubernetes releases before v1.24, you could use Docker Engine with Kubernetes through a built-in component called dockershim. The dockershim component was removed in the Kubernetes v1.24 release; however, a third-party replacement, cri-dockerd, is available. The cri-dockerd adapter lets you use Docker Engine through the Container Runtime Interface (CRI).

Install cri-dockerd

[root@k8s-master01 ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14.amd64.tgz

[root@k8s-master01 ~]# ls
anaconda-ks.cfg  cri-dockerd-0.3.14.amd64.tgz

[root@k8s-master01 ~]# tar -xf cri-dockerd-0.3.14.amd64.tgz 

[root@k8s-master01 ~]# ls
anaconda-ks.cfg  cri-dockerd  cri-dockerd-0.3.14.amd64.tgz

[root@k8s-master01 ~]# ls cri-dockerd
cri-dockerd

[root@k8s-master01 ~]# ll cri-dockerd
total 46844
-rwxr-xr-x 1 1001 docker 47968256 May 14 23:39 cri-dockerd

[root@k8s-master01 ~]# cp cri-dockerd/cri-dockerd /usr/bin/

Configure the cri-docker service

[root@k8s-master01 ~]# cat <<"EOF" > /usr/lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

Add the cri-docker socket unit

cat <<"EOF" > /usr/lib/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF

Start the cri-docker service

# start docker
[root@k8s-master01 ~]# systemctl start docker
# enable docker at boot
[root@k8s-master01 ~]# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable cri-docker
[root@k8s-master01 ~]# systemctl start cri-docker
[root@k8s-master01 ~]# systemctl is-active cri-docker
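
crictl is not installed yet at this point; it arrives later as a dependency of the kubeadm packages (cri-tools). Once it is present, a quick way to confirm the CRI endpoint answers (a sketch, not required for the install):

# query the runtime through the cri-dockerd socket
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version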

Note: Docker must already be running, or cri-docker fails to start as below:

[root@k8s-master01 ~]# systemctl start cri-docker
Job for cri-docker.service failed because the control process exited with error code.
See "systemctl status cri-docker.service" and "journalctl -xeu cri-docker.service" for details.
​
[root@k8s-master01 ~]# systemctl status cri-docker.service
× cri-docker.service - CRI Interface for Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/cri-docker.service; enabled; preset: disabled)
     Active: failed (Result: exit-code) since Fri 2024-09-13 17:25:09 CST; 1min 47s ago
TriggeredBy: × cri-docker.socket
       Docs: https://docs.mirantis.com
    Process: 5081 ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_>
   Main PID: 5081 (code=exited, status=1/FAILURE)
        CPU: 67ms

Sep 13 17:25:09 k8s-master01 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
Sep 13 17:25:09 k8s-master01 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
Sep 13 17:25:09 k8s-master01 systemd[1]: cri-docker.service: Start request repeated too quickly.
Sep 13 17:25:09 k8s-master01 systemd[1]: cri-docker.service: Failed with result 'exit-code'.
Sep 13 17:25:09 k8s-master01 systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.

Cause: the docker service was not running.

Starting docker first and retrying succeeds:

[root@k8s-master01 ~]# systemctl start docker
[root@k8s-master01 ~]# systemctl start cri-docker
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# systemctl status cri-docker.service
● cri-docker.service - CRI Interface for Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/cri-docker.service; enabled; preset: disabled)
     Active: active (running) since Fri 2024-09-13 17:27:04 CST; 1min 7s ago
TriggeredBy: ● cri-docker.socket
       Docs: https://docs.mirantis.com
   Main PID: 5362 (cri-dockerd)
      Tasks: 9
     Memory: 11.5M
        CPU: 55ms
     CGroup: /system.slice/cri-docker.service
             └─5362 /usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containe>

Sep 13 17:27:04 k8s-master01 cri-dockerd[5362]: time="2024-09-13T17:27:04+08:00" level=info msg="Hairpin mode is set to none"
Sep 13 17:27:04 k8s-master01 cri-dockerd[5362]: time="2024-09-13T17:27:04+08:00" level=info msg="The binary conntrack is not in>
Sep 13 17:27:04 k8s-master01 cri-dockerd[5362]: time="2024-09-13T17:27:04+08:00" level=info msg="The binary conntrack is not in>
Sep 13 17:27:04 k8s-master01 cri-dockerd[5362]: time="2024-09-13T17:27:04+08:00" level=info msg="Loaded network plugin cni"
Sep 13 17:27:04 k8s-master01 cri-dockerd[5362]: time="2024-09-13T17:27:04+08:00" level=info msg="Docker cri networking managed >
Sep 13 17:27:04 k8s-master01 cri-dockerd[5362]: time="2024-09-13T17:27:04+08:00" level=info msg="Setting cgroupDriver systemd"
Sep 13 17:27:04 k8s-master01 cri-dockerd[5362]: time="2024-09-13T17:27:04+08:00" level=info msg="Docker cri received runtime co>
Sep 13 17:27:04 k8s-master01 cri-dockerd[5362]: time="2024-09-13T17:27:04+08:00" level=info msg="Starting the GRPC backend for >
Sep 13 17:27:04 k8s-master01 cri-dockerd[5362]: time="2024-09-13T17:27:04+08:00" level=info msg="Start cri-dockerd grpc backend"
Sep 13 17:27:04 k8s-master01 systemd[1]: Started CRI Interface for Docker Application Container Engine.

Add the kubeadm yum Repository

Configure the Kubernetes repo:

# configure the Kubernetes repo
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/repodata/repomd.xml.key
EOF
[root@k8s-master01 ~]# yum repolist
repo id                                 repo name
appstream                               Rocky Linux 9 - AppStream
baseos                                  Rocky Linux 9 - BaseOS
docker-ce-stable                        Docker CE Stable - x86_64
epel                                    Extra Packages for Enterprise Linux 9 - x86_64
epel-cisco-openh264                     Extra Packages for Enterprise Linux 9 openh264 (From Cisco) - x86_64
extras                                  Rocky Linux 9 - Extras
kubernetes                              Kubernetes

Install Kubernetes

Install the Kubernetes packages:

[root@k8s-master01 ~]# yum install -y kubeadm kubelet kubectl

Note: the command above installs the latest version because no version is specified. To pin a specific version, e.g. 1.31.1, install like this:

yum install -y kubelet-1.31.1 kubectl-1.31.1 kubeadm-1.31.1
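
Either way, it is worth confirming what actually got installed; a quick check:

# confirm the installed versions
kubeadm version -o short
kubelet --version
kubectl version --client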

The actual installation output:

[root@k8s-master01 ~]# yum install -y kubeadm kubelet kubectl
Kubernetes                                                                                       19 kB/s | 9.4 kB     00:00    
Dependencies resolved.
================================================================================================================================
 Package                               Architecture          Version                            Repository                 Size
================================================================================================================================
Installing:
 kubeadm                               x86_64                1.31.1-150500.1.1                  kubernetes                 11 M
 kubectl                               x86_64                1.31.1-150500.1.1                  kubernetes                 11 M
 kubelet                               x86_64                1.31.1-150500.1.1                  kubernetes                 15 M
Installing dependencies:
 conntrack-tools                       x86_64                1.4.7-2.el9                        appstream                 221 k
 cri-tools                             x86_64                1.31.1-150500.1.1                  kubernetes                6.9 M
 kubernetes-cni                        x86_64                1.5.1-150500.1.1                   kubernetes                7.1 M
 libnetfilter_cthelper                 x86_64                1.0.0-22.el9                       appstream                  23 k
 libnetfilter_cttimeout                x86_64                1.0.0-19.el9                       appstream                  23 k
 libnetfilter_queue                    x86_64                1.0.5-1.el9                        appstream                  28 k

Transaction Summary
================================================================================================================================
Install  9 Packages
Total download size: 51 M
Installed size: 269 M
Downloading Packages:
(1/9): cri-tools-1.31.1-150500.1.1.x86_64.rpm                                                   1.3 MB/s | 6.9 MB     00:05    
(2/9): kubectl-1.31.1-150500.1.1.x86_64.rpm                                                     1.1 MB/s |  11 MB     00:10    
(3/9): kubeadm-1.31.1-150500.1.1.x86_64.rpm                                                     1.1 MB/s |  11 MB     00:10    
(4/9): conntrack-tools-1.4.7-2.el9.x86_64.rpm                                                   325 kB/s | 221 kB     00:00    
(5/9): libnetfilter_cttimeout-1.0.0-19.el9.x86_64.rpm                                           118 kB/s |  23 kB     00:00    
(6/9): libnetfilter_cthelper-1.0.0-22.el9.x86_64.rpm                                            152 kB/s |  23 kB     00:00    
(7/9): libnetfilter_queue-1.0.5-1.el9.x86_64.rpm                                                121 kB/s |  28 kB     00:00    
(8/9): kubelet-1.31.1-150500.1.1.x86_64.rpm                                                     2.1 MB/s |  15 MB     00:06    
(9/9): kubernetes-cni-1.5.1-150500.1.1.x86_64.rpm                                               2.0 MB/s | 7.1 MB     00:03    
--------------------------------------------------------------------------------------------------------------------------------
Total                                                                                           3.7 MB/s |  51 MB     00:13     
Kubernetes                                                                                      6.1 kB/s | 1.7 kB     00:00    
Importing GPG key 0x9A296436:
 Userid     : "isv:kubernetes OBS Project <isv:kubernetes@build.opensuse.org>"
 Fingerprint: DE15 B144 86CD 377B 9E87 6E1A 2346 54DA 9A29 6436
 From       : https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/repodata/repomd.xml.key
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                        1/1 
  Installing       : libnetfilter_queue-1.0.5-1.el9.x86_64                                                                  1/9 
  Installing       : libnetfilter_cthelper-1.0.0-22.el9.x86_64                                                              2/9 
  Installing       : libnetfilter_cttimeout-1.0.0-19.el9.x86_64                                                             3/9 
  Installing       : conntrack-tools-1.4.7-2.el9.x86_64                                                                     4/9 
  Running scriptlet: conntrack-tools-1.4.7-2.el9.x86_64                                                                     4/9 
  Installing       : kubernetes-cni-1.5.1-150500.1.1.x86_64                                                                 5/9 
  Installing       : cri-tools-1.31.1-150500.1.1.x86_64                                                                     6/9 
  Installing       : kubeadm-1.31.1-150500.1.1.x86_64                                                                       7/9 
  Installing       : kubelet-1.31.1-150500.1.1.x86_64                                                                       8/9 
  Running scriptlet: kubelet-1.31.1-150500.1.1.x86_64                                                                       8/9 
  Installing       : kubectl-1.31.1-150500.1.1.x86_64                                                                       9/9 
  Running scriptlet: kubectl-1.31.1-150500.1.1.x86_64                                                                       9/9 
  Verifying        : cri-tools-1.31.1-150500.1.1.x86_64                                                                     1/9 
  Verifying        : kubeadm-1.31.1-150500.1.1.x86_64                                                                       2/9 
  Verifying        : kubectl-1.31.1-150500.1.1.x86_64                                                                       3/9 
  Verifying        : kubelet-1.31.1-150500.1.1.x86_64                                                                       4/9 
  Verifying        : kubernetes-cni-1.5.1-150500.1.1.x86_64                                                                 5/9 
  Verifying        : conntrack-tools-1.4.7-2.el9.x86_64                                                                     6/9 
  Verifying        : libnetfilter_cttimeout-1.0.0-19.el9.x86_64                                                             7/9 
  Verifying        : libnetfilter_cthelper-1.0.0-22.el9.x86_64                                                              8/9 
  Verifying        : libnetfilter_queue-1.0.5-1.el9.x86_64                                                                  9/9 

Installed:
  conntrack-tools-1.4.7-2.el9.x86_64         cri-tools-1.31.1-150500.1.1.x86_64          kubeadm-1.31.1-150500.1.1.x86_64       
  kubectl-1.31.1-150500.1.1.x86_64           kubelet-1.31.1-150500.1.1.x86_64            kubernetes-cni-1.5.1-150500.1.1.x86_64 
  libnetfilter_cthelper-1.0.0-22.el9.x86_64  libnetfilter_cttimeout-1.0.0-19.el9.x86_64  libnetfilter_queue-1.0.5-1.el9.x86_64  

Complete!
[root@k8s-master01 ~]# 

The installation completed successfully; the installed Kubernetes version is 1.31.1.

Enable Start on Boot

Enable kubelet to start on boot with the following command:

[root@k8s-master01 ~]# systemctl enable kubelet.service

Output:

[root@k8s-master01 ~]# systemctl enable kubelet.service
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@k8s-master01 ~]# 

Clone the Hosts

Clone two more hosts, with IPs and hostnames as follows:

192.168.204.102 k8s-node01
192.168.204.103 k8s-node02

Cloning steps

Select the machine to clone (RL-1), then click VM → Manage → Clone.


This produces RL-1-kl1.


Repeat the same steps to clone RL-1 into RL-1-kl2.


Change the hostnames and IPs of the clones

On the first clone

Power on the first clone (RL-1-kl1).

Change the hostname:

[root@k8s-master01 ~]# hostnamectl set-hostname k8s-node01

Configure the IP:

[root@k8s-master01 ~]# vi /etc/NetworkManager/system-connections/ens33.nmconnection

Change the IP to 192.168.204.102:

[ipv4]
method=manual
address1=192.168.204.102/24,192.168.204.200
dns=114.114.114.114;8.8.8.8

Reboot:

[root@k8s-master01 ~]# reboot

On the second clone

Power on the second clone (RL-1-kl2).

Change the hostname:

[root@k8s-master01 ~]# hostnamectl set-hostname k8s-node02

Configure the IP:

[root@k8s-master01 ~]# vi /etc/NetworkManager/system-connections/ens33.nmconnection

Change the IP to 192.168.204.103:

[ipv4]
method=manual
address1=192.168.204.103/24,192.168.204.200
dns=114.114.114.114;8.8.8.8

Reboot:

[root@k8s-master01 ~]# reboot

Initialize the Control-Plane Node

Run on k8s-master01:

kubeadm init --apiserver-advertise-address=192.168.204.101 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version 1.31.1 --service-cidr=10.10.0.0/12 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock

--apiserver-advertise-address=192.168.204.101 sets the IP address the API server advertises; here it is k8s-master01's address. Adjust to your environment.

--image-repository sets the registry to pull the control-plane images from.

--kubernetes-version 1.31.1 pins the Kubernetes version; adjust to match the version you installed.

--service-cidr=10.10.0.0/12 sets the IP address range for Services.

--pod-network-cidr=10.244.0.0/16 sets the IP address range for the Pod network.

--ignore-preflight-errors=all ignores preflight check errors.

--cri-socket unix:///var/run/cri-dockerd.sock points kubeadm at the cri-dockerd CRI socket.
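
Optionally, the control-plane images can be pre-pulled so that init runs faster and registry problems surface early (the init output below also suggests this). A sketch using the same repository, version, and socket:

# pre-pull the images kubeadm will need
kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version 1.31.1 \
    --cri-socket unix:///var/run/cri-dockerd.sock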

Output:

[root@k8s-master01 ~]# kubeadm init --apiserver-advertise-address=192.168.204.101 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version 1.31.1 --service-cidr=10.10.0.0/12 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.31.1
[preflight] Running pre-flight checks
[WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0914 16:56:27.360405    1647 checks.go:846] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.aliyuncs.com/google_containers/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.0.0.1 192.168.204.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.204.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.204.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001519379s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 9.001542346s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ttqcoj.v5kq9n9v0oia6p38
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.204.101:6443 --token ttqcoj.v5kq9n9v0oia6p38 \
--discovery-token-ca-cert-hash sha256:e779fa48edb461d766be9e43c95eed9c172d5a460a2e5c3121662919b1e488ac 
[root@k8s-master01 ~]# 

Two parts of this output matter.

The first is the set of commands needed before you can use the cluster; the second is the command worker nodes run to join the cluster.

Run the first set of commands:

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
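
A quick way to confirm kubectl can reach the new API server:

# should report the control plane at https://192.168.204.101:6443
kubectl cluster-info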

List the Kubernetes nodes:

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES           AGE    VERSION
k8s-master01   NotReady   control-plane   4m3s   v1.31.1

Add the Worker Nodes

The join command

From the k8s-master01 init output, find and copy the kubeadm join ... command (the second part noted above):

kubeadm join 192.168.204.101:6443 --token ttqcoj.v5kq9n9v0oia6p38 \
    --discovery-token-ca-cert-hash sha256:e779fa48edb461d766be9e43c95eed9c172d5a460a2e5c3121662919b1e488ac 

Note:

1. Your --token and --discovery-token-ca-cert-hash values will differ; always copy them from your own init output.

2. The CRI socket address must also be appended to the command:

--cri-socket unix:///var/run/cri-dockerd.sock

The full command is:

kubeadm join 192.168.204.101:6443 --token ttqcoj.v5kq9n9v0oia6p38 \
    --discovery-token-ca-cert-hash sha256:e779fa48edb461d766be9e43c95eed9c172d5a460a2e5c3121662919b1e488ac --cri-socket unix:///var/run/cri-dockerd.sock

Execution

Run the join command on k8s-node01:

[root@k8s-node01 ~]# kubeadm join 192.168.204.101:6443 --token ttqcoj.v5kq9n9v0oia6p38 \
        --discovery-token-ca-cert-hash sha256:e779fa48edb461d766be9e43c95eed9c172d5a460a2e5c3121662919b1e488ac --cri-socket unix:///var/run/cri-dockerd.sock

Full log:

[root@k8s-node01 ~]# kubeadm join 192.168.204.101:6443 --token ttqcoj.v5kq9n9v0oia6p38 \
        --discovery-token-ca-cert-hash sha256:e779fa48edb461d766be9e43c95eed9c172d5a460a2e5c3121662919b1e488ac --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
    [WARNING FileExisting-socat]: socat not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.4081ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node01 ~]# 

Run the join command on k8s-node02:

[root@k8s-node02 ~]# kubeadm join 192.168.204.101:6443 --token ttqcoj.v5kq9n9v0oia6p38 \
        --discovery-token-ca-cert-hash sha256:e779fa48edb461d766be9e43c95eed9c172d5a460a2e5c3121662919b1e488ac --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks

Full log:

[root@k8s-node02 ~]# kubeadm join 192.168.204.101:6443 --token ttqcoj.v5kq9n9v0oia6p38 \
        --discovery-token-ca-cert-hash sha256:e779fa48edb461d766be9e43c95eed9c172d5a460a2e5c3121662919b1e488ac --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
    [WARNING FileExisting-socat]: socat not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.875021ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node02 ~]# 

Tip: tokens are valid for 24 hours by default. Once a token expires, create a new one with:

kubeadm token create --print-join-command

# About two weeks after installation, running this produced a new token: the old ttqcoj.v5kq9n9v0oia6p38 became d0hzxd.7zknxykjlyap08g1, while the --discovery-token-ca-cert-hash value stayed the same.
[root@k8s-master01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.204.101:6443 --token d0hzxd.7zknxykjlyap08g1 --discovery-token-ca-cert-hash sha256:e779fa48edb461d766be9e43c95eed9c172d5a460a2e5c3121662919b1e488ac 

Verification

On k8s-master01, list the nodes:

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   34m     v1.31.1
k8s-node01     NotReady   <none>          5m46s   v1.31.1
k8s-node02     NotReady   <none>          4m44s   v1.31.1

All three nodes appear, but their STATUS is NotReady. For Kubernetes to work properly they should be Ready; Kubernetes requires a flat network space across the cluster, so a network plugin must be installed on the nodes before they become Ready.

Deploy the Network Plugin

Download calico.yaml

[root@k8s-master01 ~]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico-typha.yaml -o calico.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  253k  100  253k    0     0  12030      0  0:00:21  0:00:21 --:--:--  6241

[root@k8s-master01 ~]# ll
total 14772
-rw-------. 1 root root       1102 Sep  9 14:45 anaconda-ks.cfg
-rw-r--r--  1 root root     259970 Sep 14 17:55 calico.yaml
drwxr-xr-x  2 1001 docker       25 May 14 23:39 cri-dockerd
-rw-r--r--  1 root root   14859664 Sep 13 17:16 cri-dockerd-0.3.14.amd64.tgz
[root@k8s-master01 ~]# 

Edit calico.yaml

[root@k8s-master01 ~]# vim calico.yaml

Find CALICO_IPV4POOL_CIDR and set its value to match the --pod-network-cidr passed to kubeadm init:

10.244.0.0/16


Change CALICO_IPV4POOL_IPIP from Always to Off:

        # Enable IPIP
        - name: CALICO_IPV4POOL_IPIP
          value: "Off"


Check which images calico.yaml needs

[root@k8s-master01 ~]# cat calico.yaml  | grep "image:"
          image: docker.io/calico/cni:v3.28.1
          image: docker.io/calico/cni:v3.28.1
          image: docker.io/calico/node:v3.28.1
          image: docker.io/calico/node:v3.28.1
          image: docker.io/calico/kube-controllers:v3.28.1
      - image: docker.io/calico/typha:v3.28.1

Four calico images are needed: cni, node, kube-controllers, and typha.

Problem: the calico images may fail to download due to network restrictions. The workaround is to download the calico release bundle manually, then load the images locally.

Manually download the calico release bundle

Download address:

https://github.com/projectcalico/calico/releases/tag/v3.28.1


Upload and extract the calico bundle

Create a calico directory on the Linux machine:

[root@k8s-master01 ~]# mkdir calico

Upload the release-v3.28.1.tgz file to the calico directory.

Check the file:

[root@k8s-master01 ~]# cd calico
[root@k8s-master01 calico]# ls
release-v3.28.1.tgz

Extract it:

[root@k8s-master01 calico]# tar -zxf release-v3.28.1.tgz

The extracted files include the image archives we need:

[root@k8s-master01 calico]# ls
release-v3.28.1  release-v3.28.1.tgz

[root@k8s-master01 calico]# ls release-v3.28.1
bin  images  manifests

[root@k8s-master01 calico]# ls release-v3.28.1/images/
calico-cni.tar  calico-dikastes.tar  calico-flannel-migration-controller.tar  calico-kube-controllers.tar  calico-node.tar  calico-pod2daemon.tar  calico-typha.tar

Load the calico images

Enter the images directory:

cd release-v3.28.1/images/

Load the four required images on all three machines with the following commands:

docker load -i calico-cni.tar 
docker load -i calico-node.tar
docker load -i calico-kube-controllers.tar
docker load -i calico-typha.tar 

The process on k8s-master01:

[root@k8s-master01 calico]# cd release-v3.28.1/images/

[root@k8s-master01 images]# docker load -i calico-cni.tar 
Loaded image: calico/cni:v3.28.1

[root@k8s-master01 images]# docker load -i calico-node.tar 
3831744e3436: Loading layer [==================================================>]  366.9MB/366.9MB
Loaded image: calico/node:v3.28.1

[root@k8s-master01 images]# docker load -i calico-kube-controllers.tar
4f27db678727: Loading layer [==================================================>]  75.59MB/75.59MB
Loaded image: calico/kube-controllers:v3.28.1

[root@k8s-master01 images]# docker load -i calico-typha.tar 
993f578a98d3: Loading layer [==================================================>]  67.61MB/67.61MB
Loaded image: calico/typha:v3.28.1
[root@k8s-master01 images]# 

Check the calico images:

[root@k8s-master01 images]# docker images | grep calico
calico/typha                                                      v3.28.1    a19ab150aded   6 weeks ago     71.3MB
calico/kube-controllers                                           v3.28.1    9d19dff735fa   6 weeks ago     79.3MB
calico/cni                                                        v3.28.1    f6d76a1259a8   6 weeks ago     209MB
calico/node                                                       v3.28.1    8bbeb9e1ee32   6 weeks ago     365MB


Send the image tar files to the other two machines

# send to k8s-node01
[root@k8s-master01 images]# scp calico-cni.tar root@k8s-node01:/root/
[root@k8s-master01 images]# scp calico-node.tar root@k8s-node01:/root/  
[root@k8s-master01 images]# scp calico-kube-controllers.tar root@k8s-node01:/root/ 
[root@k8s-master01 images]# scp calico-typha.tar root@k8s-node01:/root/

# send to k8s-node02
[root@k8s-master01 images]# scp calico-cni.tar root@k8s-node02:/root/
[root@k8s-master01 images]# scp calico-node.tar root@k8s-node02:/root/  
[root@k8s-master01 images]# scp calico-kube-controllers.tar root@k8s-node02:/root/ 
[root@k8s-master01 images]# scp calico-typha.tar root@k8s-node02:/root/

Note: when prompted, type yes and enter the password. One of the scp runs looks like this:

[root@k8s-master01 images]# scp calico-cni.tar root@k8s-node01:/root/
The authenticity of host 'k8s-node01 (192.168.204.102)' can't be established.
ED25519 key fingerprint is SHA256:NXvSEVHKieya+Xby50TEO6oCeJWX4PmI+7xVezIlzhQ.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'k8s-node01' (ED25519) to the list of known hosts.
root@k8s-node01's password: 
calico-cni.tar                                                                                                                    100%  199MB  95.8MB/s   00:02    
[root@k8s-master01 images]# 

Load the images on the other two machines:

# load the calico images on k8s-node01
[root@k8s-node01 ~]# docker load -i calico-cni.tar 
[root@k8s-node01 ~]# docker load -i calico-node.tar
[root@k8s-node01 ~]# docker load -i calico-kube-controllers.tar
[root@k8s-node01 ~]# docker load -i calico-typha.tar

# load the calico images on k8s-node02
[root@k8s-node02 ~]# docker load -i calico-cni.tar 
[root@k8s-node02 ~]# docker load -i calico-node.tar
[root@k8s-node02 ~]# docker load -i calico-kube-controllers.tar
[root@k8s-node02 ~]# docker load -i calico-typha.tar

Verify the images loaded successfully:

# verify on k8s-node01
[root@k8s-node01 ~]# docker images | grep calico
calico/typha                                         v3.28.1   a19ab150aded   6 weeks ago     71.3MB
calico/kube-controllers                              v3.28.1   9d19dff735fa   6 weeks ago     79.3MB
calico/cni                                           v3.28.1   f6d76a1259a8   6 weeks ago     209MB
calico/node                                          v3.28.1   8bbeb9e1ee32   6 weeks ago     365MB

# verify on k8s-node02
[root@k8s-node02 ~]# docker images | grep calico
calico/typha                                         v3.28.1   a19ab150aded   6 weeks ago     71.3MB
calico/kube-controllers                              v3.28.1   9d19dff735fa   6 weeks ago     79.3MB
calico/cni                                           v3.28.1   f6d76a1259a8   6 weeks ago     209MB
calico/node                                          v3.28.1   8bbeb9e1ee32   6 weeks ago     365MB

Both nodes now have all four calico images loaded.

Apply the calico manifest on k8s-master01:

[root@k8s-master01 ~]# kubectl apply -f calico.yaml

Output:

[root@k8s-master01 ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
poddisruptionbudget.policy/calico-typha created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
service/calico-typha created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
deployment.apps/calico-typha created
[root@k8s-master01 ~]# 

Check the nodes:

[root@k8s-master01 images]# kubectl get node
NAME           STATUS   ROLES           AGE     VERSION
k8s-master01   Ready    control-plane   7h52m   v1.31.1
k8s-node01     Ready    <none>          7h24m   v1.31.1
k8s-node02     Ready    <none>          7h22m   v1.31.1


All three machines are now Ready, which means the calico network plugin was installed correctly.
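
If the nodes remain NotReady for a while, the calico pods are usually still starting; their rollout can be watched directly. A sketch (the k8s-app=calico-node label is set by the manifest):

# wait for the calico-node daemonset to finish rolling out
kubectl rollout status daemonset/calico-node -n kube-system
kubectl get pod -n kube-system -l k8s-app=calico-node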

All pods are Running, which means the Kubernetes cluster has been deployed successfully:

[root@k8s-master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS     AGE
kube-system   calico-kube-controllers-7fbd86d5c5-k2nd8   1/1     Running   0            6h41m
kube-system   calico-node-qfqqc                          1/1     Running   0            6h41m
kube-system   calico-node-vz7zx                          1/1     Running   0            6h41m
kube-system   calico-node-z6p7k                          1/1     Running   0            6h41m
kube-system   calico-typha-669c48c58c-sf7km              1/1     Running   1 (6h ago)   6h41m
kube-system   coredns-855c4dd65d-l4rcc                   1/1     Running   0            8h
kube-system   coredns-855c4dd65d-qrjlk                   1/1     Running   0            8h
kube-system   etcd-k8s-master01                          1/1     Running   1 (6h ago)   8h
kube-system   kube-apiserver-k8s-master01                1/1     Running   1 (6h ago)   8h
kube-system   kube-controller-manager-k8s-master01       1/1     Running   1 (6h ago)   8h
kube-system   kube-proxy-pwdvj                           1/1     Running   1 (6h ago)   8h
kube-system   kube-proxy-wt7gl                           1/1     Running   1 (6h ago)   7h38m
kube-system   kube-proxy-xc6gt                           1/1     Running   1 (6h ago)   7h37m
kube-system   kube-scheduler-k8s-master01                1/1     Running   1 (6h ago)   8h
[root@k8s-master01 ~]# 


Clean Up Unneeded Files

On each of the three machines, remove files that are no longer needed to free up disk space:

[root@k8s-master01 ~]# ls
anaconda-ks.cfg  calico  calico.yaml  cri-dockerd  cri-dockerd-0.3.14.amd64.tgz
[root@k8s-master01 ~]# rm -rf cri-dockerd-0.3.14.amd64.tgz 
[root@k8s-master01 ~]# rm -rf calico

[root@k8s-node01 ~]# ls
anaconda-ks.cfg  calico-cni.tar  calico-kube-controllers.tar  calico-node.tar  calico-typha.tar  cri-dockerd  cri-dockerd-0.3.14.amd64.tgz
[root@k8s-node01 ~]# rm -rf calico-*

[root@k8s-node02 ~]# rm -rf calico-*

Take Snapshots

Take a snapshot of each of the three machines.

Snapshots completed successfully on all three machines.

Done. Enjoy!
