This tutorial walks through setting up Kubernetes from scratch. The content is drawn from each component's official documentation, reorganized and annotated. All version numbers are the latest as of September 4, 2023; Kubernetes itself is v1.28.1.

I. Install k8s

1. Install the container runtime (containerd)

1.1 Prerequisites

Reference: Container Runtimes. Detailed steps:

# Goal:
# forward IPv4 and let iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system

# Verify the configuration took effect; each sysctl below should return 1
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

1.2 Install containerd

Reference: the official guide Getting started with containerd. Detailed steps:

# 1. Download and install containerd
wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.7.5/containerd-1.7.5-linux-amd64.tar.gz
tar Cxzvf /usr/local containerd-1.7.5-linux-amd64.tar.gz

# 2. Configure containerd to be started by systemd
wget https://ghproxy.com/https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -O /usr/lib/systemd/system/containerd.service
systemctl daemon-reload && systemctl enable --now containerd
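# [Optional check, not in the original steps] confirm the unit is running before continuing:
systemctl is-active containerd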

# 3. Install runc
wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc

# 4. Install the CNI plugins
wget https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz

# 5. Generate containerd's default config file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml

# 6. Edit the config file: make runc use the systemd cgroup driver by changing SystemdCgroup to true (keys are case-sensitive),
# change the sandbox_image address to a domestic mirror, and configure registry mirrors
# 文档:https://v1-26.docs.kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/#containerd
# 文档:https://github.com/containerd/cri/blob/master/docs/registry.md
vi /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true

Also change sandbox_image = "registry.k8s.io/pause:3.9"
to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9".
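Both edits above can also be applied non-interactively. This is a sketch using sed, assuming the stock file generated by containerd config default in step 5:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"|' /etc/containerd/config.toml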

# Configure registry mirrors: add the following entries after [plugins."io.containerd.grpc.v1.cri".registry.mirrors]; mind the indentation
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["http://mirrors.ustc.edu.cn"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."*"]
endpoint = ["http://hub-mirror.c.163.com"]

# 7. Start containerd
systemctl restart containerd

# Once it starts successfully, you can see the listening port
[root@stache31 ~]# netstat -nlput | grep containerd
tcp 0 0 127.0.0.1:36669 0.0.0.0:* LISTEN

2. Install a k8s cluster with kubeadm

Follow the official tutorial. Detailed steps:

2.1 Prerequisites

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable the firewall
systemctl disable --now firewalld

# Turn off swap (temporary; takes effect immediately)
swapoff -a
# Comment out the swap entry in /etc/fstab to disable it permanently (takes effect after a reboot)
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
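To confirm swap is really off (a quick check, not in the original steps): swapon --show should print nothing, and the Swap line of free -h should be all zeros.

swapon --show
free -h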

2.2 Initialize the cluster

# Add the official repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

# Install kubelet, kubeadm, and kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
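As a quick sanity check (not in the original steps), confirm the installed tools report the expected v1.28.x:

kubeadm version -o short
kubelet --version
kubectl version --client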

# Initialize the cluster
# For kubeadm init parameter usage, see: https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/
[root@centos ~]# kubeadm init \
--kubernetes-version=1.28.1 \
--apiserver-advertise-address=192.168.1.31 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.100.0.0/16
# pod-network-cidr must be set; otherwise installing the Calico network plugin later will fail

# Create the kubeconfig (these commands are printed when kubeadm init completes)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


# [Optional] Set crictl's default runtime-endpoint; otherwise every invocation prints an error
crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
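With the endpoint set, crictl can talk to containerd directly; for example, listing pod sandboxes and containers (after kubeadm init these should show the control-plane components):

crictl pods
crictl ps -a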

3. Install the network plugin

3.1 Install Calico

The list of optional network add-ons is at kubernetes.io under Installing Addons. We choose **Calico**; see the official installation guide. Detailed steps:

Note ⚠️⚠️⚠️: the network plugin only needs to be installed on the master node.

# 1. Install the Tigera Calico operator and CRDs
kubectl create -f https://ghproxy.com/https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml

# 2. Download the custom resources
wget https://ghproxy.com/https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
# Confirm that the cidr value in the downloaded custom-resources.yaml matches the cluster's pod CIDR
# (cidr: the IP address range the Pod network may use)
#
#   calicoNetwork:
#     # Note: The ipPools section cannot be modified post-install.
#     ipPools:
#     - blockSize: 26
#       cidr: 192.168.0.0/16
#
# Change the cidr here to 10.100.0.0/16; it must match the pod-network-cidr passed to kubeadm init
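# [Sketch] The cidr edit can also be scripted; this assumes the file still contains the default 192.168.0.0/16:
sed -i 's|cidr: 192.168.0.0/16|cidr: 10.100.0.0/16|' custom-resources.yaml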
kubectl create -f custom-resources.yaml

# 3. Confirm all pods are Running
watch kubectl get pods -n calico-system

3.2 Join worker nodes

Reference: the official docs on joining nodes. Details:

# Run the following on both worker nodes to join them to the cluster (this command is printed when kubeadm init completes)
kubeadm join 192.168.1.31:6443 --token e8nom8.r9od2jsiry2tqwch --discovery-token-ca-cert-hash sha256:6a4cd514e9e0fd968e1dc3506008107fa99b9abc231396c225a7c4acbd8c88ff
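# [Sketch] If the original token has expired (tokens are valid for 24h by default),
# generate a fresh join command on the control-plane node:
kubeadm token create --print-join-command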

# Verify node registration; Ready means the network plugin install is complete
[root@stache31 ~]# kubectl get node
NAME        STATUS   ROLES           AGE   VERSION
server-31   Ready    control-plane   17d   v1.28.1
server-32   Ready    <none>          17d   v1.28.1
server-33   Ready    <none>          17d   v1.28.1

# Check that the CoreDNS Pod is Running to confirm the cluster is healthy. Once the CoreDNS Pod is up and running, you can continue joining nodes.
kubectl get pods --all-namespaces

# [Optional] On the worker nodes, copy the kubeconfig so the cluster can be controlled from there too.
mkdir -p $HOME/.kube
scp root@<control-plane-host>:/etc/kubernetes/admin.conf $HOME/.kube/config

4. Other installations

4.1 Helm

Installation reference: Installing Helm

wget https://get.helm.sh/helm-v3.12.3-linux-amd64.tar.gz
tar -zxvf helm-v3.12.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
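A quick sanity check that the binary is on PATH and working (a verification step, not part of the original guide):

helm version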

4.2 Install Kubernetes Dashboard with Helm

1. Installation reference: Kubernetes Dashboard

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

2. Expose the port

# Forward local port 8443 to the Service; inside the cluster it listens on 443
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8443:443

# Open the following URL to access it
https://127.0.0.1:8443
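If kubectl runs on a remote server rather than your local machine, note that port-forward binds to 127.0.0.1 by default. One option, assuming your firewall rules permit it, is to bind all interfaces:

kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8443:443 --address 0.0.0.0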

3. Create a dashboard admin user

Reference: creating-sample-user

a. Create the file kubernetes-dashboard-adminuser.yaml

# Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
# ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
---
# Secret
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token

b. Apply it

kubectl apply -f kubernetes-dashboard-adminuser.yaml

c. Get the admin user's token and paste it into the dashboard login page.

kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath="{.data.token}" | base64 -d
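Alternatively, on Kubernetes v1.24+ you can skip the long-lived Secret entirely and issue a short-lived token for the same ServiceAccount (a minimal sketch; the token expires after roughly an hour by default):

kubectl -n kubernetes-dashboard create token admin-user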

4. Dashboard overview