Administrator
Published 2024-06-18

Kubernetes Cluster Setup

Installing Kubernetes with kubeadm on Ubuntu 22.04, using containerd (not Docker)

Node Preparation

master    192.168.0.140    containerd, kubectl, kubeadm, kubelet
node01    192.168.0.141    containerd, kubectl, kubeadm, kubelet
node02    192.168.0.142    containerd, kubectl, kubeadm, kubelet
node03    192.168.0.143    containerd, kubectl, kubeadm, kubelet

Environment

Ubuntu 22.04 (hardware: 2C8G) x 4

Kubernetes 1.28

Calico 3.27

Local proxy IP: 192.168.0.1:7890

If you have no such proxy, replace the relevant URLs and package sources accordingly, e.g. the Docker-related URLs, the apt sources, and the container image sources.
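If you do use such a proxy, the standard proxy environment variables are one way to point command-line tools at it. A minimal sketch, assuming the proxy address from above (note that apt run via sudo does not inherit your shell environment, so it needs the option passed explicitly):

```shell
# Assumed proxy address from the environment section above; adjust to your network.
PROXY=http://192.168.0.1:7890

# curl, wget and most command-line tools honor these variables:
export http_proxy=$PROXY https_proxy=$PROXY
export no_proxy=localhost,127.0.0.1,192.168.0.0/24

# apt does not inherit the shell environment under sudo; pass the proxy explicitly:
# apt -o Acquire::http::Proxy=$PROXY -o Acquire::https::Proxy=$PROXY update
echo "proxy set to $http_proxy"
```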

Prerequisites

Set the hostnames (run the matching command on each node; the names must match the /etc/hosts entries below)

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-node03

Hostname resolution (add these entries on every node)

vim /etc/hosts
192.168.0.140 k8s-master01.xiaopohai.com k8s-master01 kubeapi.xiaopohai.com
192.168.0.141 k8s-node01.xiaopohai.com k8s-node01
192.168.0.142 k8s-node02.xiaopohai.com k8s-node02
192.168.0.143 k8s-node03.xiaopohai.com k8s-node03

Disable the firewall

apt install ufw -y
ufw disable
ufw status

Configure time synchronization

apt update
apt install chrony -y
sed -i '/0.ubuntu.pool.ntp.org/ s/^/#/' /etc/chrony/chrony.conf
sed -i '/1.ubuntu.pool.ntp.org/ s/^/#/' /etc/chrony/chrony.conf
sed -i '/2.ubuntu.pool.ntp.org/ s/^/#/' /etc/chrony/chrony.conf
# Use the Aliyun NTP servers instead
sed -i '21 a\server ntp1.aliyun.com iburst' /etc/chrony/chrony.conf
sed -i '22 a\server ntp2.aliyun.com iburst' /etc/chrony/chrony.conf
sed -i '23 a\server ntp3.aliyun.com iburst' /etc/chrony/chrony.conf
systemctl restart chrony

Disable swap

sed -i '/swap/ s/^/#/' /etc/fstab
systemctl --type swap
# Example output:
  UNIT          LOAD   ACTIVE SUB    DESCRIPTION   
  dev-sda5.swap loaded active active Swap Partition

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.
1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

# Mask the swap unit reported above (the unit name depends on your disk layout)
systemctl mask dev-sda5.swap
swapoff -a

# Verify
free -m
cat /etc/fstab | grep swap

Enable system traffic forwarding

# Pass bridged IPv4 traffic to the iptables chains.
# (Some bridged IPv4 traffic otherwise bypasses the iptables chains: a kernel
# filter sees every frame first and decides whether it may be handed to the
# local process, so without these settings traffic can be silently lost.)
# Create the k8s.conf file:
touch /etc/sysctl.d/k8s.conf
cat >> /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system


# Configure the overlay and br_netfilter modules to load on boot
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load them immediately
modprobe overlay
modprobe br_netfilter
# Verify: both modules should appear in the output
lsmod | grep -E "overlay|br_netfilter"

Install and Configure containerd and Kubernetes

# Run these steps on the control-plane node and on every worker node.
# Configure the containerd (Docker CE) apt repository
apt-get update
apt-get install ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update

# Configure the Kubernetes apt repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.tuna.tsinghua.edu.cn/kubernetes/core:/stable:/v1.28/deb/ /
# deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.tuna.tsinghua.edu.cn/kubernetes/addons:/cri-o:/stable:/v1.28/deb/ /
EOF

# 安装 containerd kubelet kubeadm kubectl
apt update
apt install -y containerd kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

# Configure containerd
mkdir -pv /etc/containerd
containerd config default > /etc/containerd/config.toml
# Edit the containerd config: search for SystemdCgroup and set it to true.
# The runtime_type above it should already be io.containerd.runc.v2; change it if not.
vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
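The same edit can be made non-interactively with sed. A sketch, demonstrated on a scratch copy of the relevant fragment (the /tmp path is illustrative; on a real node point the sed at /etc/containerd/config.toml and double-check the result, since the default config layout can vary between containerd versions):

```shell
# Scratch copy of the fragment shown above, with the default value.
cat > /tmp/containerd-runc-fragment.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
EOF

# Flip SystemdCgroup from false to true in place.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-runc-fragment.toml

# Confirm the change; the line now reads: SystemdCgroup = true
grep 'SystemdCgroup' /tmp/containerd-runc-fragment.toml
```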

# Route containerd's image pulls through the local proxy
mkdir /etc/systemd/system/containerd.service.d
cat <<EOF | tee /etc/systemd/system/containerd.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.0.1:7890"
Environment="HTTPS_PROXY=http://192.168.0.1:7890"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/24,10.96.0.0/12,10.244.0.0/16"
EOF

# Configure crictl
cat <<EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: true # enable debug logging
pull-image-on-create: false
EOF

# Check crictl; if the command is missing, install it with `apt install cri-tools`
crictl ps

# Finally, reload and enable the services
systemctl daemon-reload
systemctl restart containerd.service
systemctl enable containerd.service
systemctl status containerd.service
systemctl enable kubelet

K8S Initialization

# Optional: dump the default init configuration for reference or later editing
kubeadm config print init-defaults --component-configs KubeProxyConfiguration,KubeletConfiguration > kubeadm-config.yaml
kubeadm config images pull --cri-socket unix:///var/run/containerd/containerd.sock
# Verify the downloaded images
crictl images
kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--cri-socket unix:///var/run/containerd/containerd.sock \
--v=5
# On success you should see output like the following
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.120.81:6443 --token x55b7v.5z6o...8w1a7 \
        --discovery-token-ca-cert-hash sha256:97bc82a55da...cbab9ebf31487b
# End of the init output
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
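Note that the kubeadm-config.yaml generated earlier is not consumed by the flag-based kubeadm init above. If you prefer, the same init can be driven from that file instead; a sketch, assuming you first edit localAPIEndpoint.advertiseAddress, networking.podSubnet (10.244.0.0/16) and networking.serviceSubnet (10.96.0.0/12) in the file to match the flags used above:

```shell
# Flag-based and file-based init are mutually exclusive; pick one of the two.
kubeadm init --config kubeadm-config.yaml --v=5
```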

Worker Nodes

On each worker, run the kubeadm join command echoed by the control-plane init. If you have lost it:

kubeadm token create --print-join-command
kubeadm token create --print-join-command --ttl 0 # permanent token (never expires)

Configure the Kubernetes CNI Network Plugin: Calico

wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
sed -i.bak -r 's/.+(cidr.+)/      #\1\n      cidr: 10.244.0.0\/16/' ./custom-resources.yaml
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml
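Once the operator has rolled Calico out, the cluster should converge to Ready. A quick verification pass from the control-plane node (calico-system is the namespace the tigera-operator creates by default):

```shell
# All Calico pods should reach Running
kubectl get pods -n calico-system
# Every node should report Ready once its CNI is up
kubectl get nodes -o wide
```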

