Deploying and Configuring a Highly Available Kubernetes Cluster with kubeadm (2)

Record the output of kubeadm init's join instructions; this command is needed later to join the standby master nodes and worker nodes to the cluster.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.213.200:6443 --token ebx4uz.9y3twsnoj9yoscoo \
    --discovery-token-ca-cert-hash sha256:1bc280548259dd8f1ac53d75e918a8ec99c234b13f4fe18a71435bbbe8cb26f3

Load the environment variables

[root@master01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master01 ~]# source ~/.bash_profile

Install the flannel network

[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

Standby master nodes join the cluster

Configure passwordless login
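A bootstrap token like the one above expires after 24 hours by default. If it expires before all nodes have been joined, a fresh, complete join command can be printed on master01 with the standard kubeadm CLI:

# Creates a new token and prints the full join command,
# including the current --discovery-token-ca-cert-hash value
[root@master01 ~]# kubeadm token create --print-join-command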

Configure passwordless SSH login from master01 to master02 and master03.

# Create the key pair
[root@master01 ~]# ssh-keygen -t rsa
# Copy the public key to master02 and master03
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.213.182
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.213.183
# Test passwordless login
[root@master01 ~]# ssh master02
[root@master01 ~]# ssh 192.168.213.183

master01 distributes the certificates
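With more standby masters this gets tedious by hand; an equivalent loop over the same two IPs (adjust the list to your environment) is:

# Push the same public key to every standby master in one pass
for ip in 192.168.213.182 192.168.213.183; do
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@$ip
done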

Run the cert-main-master.sh script on master01 to distribute the certificates to master02 and master03.

[root@master01 ~]# cat cert-main-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="192.168.213.182 192.168.213.183"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
[root@master01 ~]# ./cert-main-master.sh

Standby master nodes move the certificates into place
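As an alternative to copying certificates by hand, kubeadm v1.15+ can distribute them itself (v1.14 offered the same feature behind an --experimental-upload-certs flag). A minimal sketch, assuming a v1.15+ cluster, with <token>, <hash>, and <key> as placeholders:

# Encrypt the control-plane certs and upload them to a Secret;
# this prints a one-time certificate key
[root@master01 ~]# kubeadm init phase upload-certs --upload-certs
# On each standby master, join and pull the certs down directly:
kubeadm join 192.168.213.200:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>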

Run the cert-other-master.sh script on master02 and master03 to move the certificates into the directories kubeadm expects.

[root@master02 ~]# cat cert-other-master.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@master02 ~]# ./cert-other-master.sh

Standby master nodes join the cluster
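Before joining, it is worth confirming the files landed where the two scripts put them; the expected listing follows directly from the scripts above:

# pki/ should hold ca.crt ca.key sa.key sa.pub
# front-proxy-ca.crt front-proxy-ca.key,
# and pki/etcd/ should hold ca.crt ca.key
[root@master02 ~]# ls /etc/kubernetes/pki /etc/kubernetes/pki/etcd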

Run the join command on master02 and master03. Note that the command recorded earlier joins a node as a worker; to join these nodes as control-plane members, the control-plane flag must be appended (--control-plane on kubeadm v1.15+, --experimental-control-plane on v1.13/v1.14):

kubeadm join 192.168.213.200:6443 --token ebx4uz.9y3twsnoj9yoscoo \
    --discovery-token-ca-cert-hash sha256:1bc280548259dd8f1ac53d75e918a8ec99c234b13f4fe18a71435bbbe8cb26f3 \
    --control-plane

Standby master nodes load the environment variables
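If the sha256 discovery hash was never recorded, it can be recomputed on master01 from the cluster CA certificate; this is the standard openssl pipeline from the Kubernetes documentation:

# Derive --discovery-token-ca-cert-hash from the CA public key
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'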

This step makes kubectl usable on the standby master nodes as well.

scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

Worker nodes join the cluster

Join
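An equivalent that follows the convention from the kubeadm documentation, and works without touching ~/.bash_profile:

# kubectl reads $HOME/.kube/config by default
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config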

On each worker node, run the join command generated when the master was initialized.

kubeadm join 192.168.213.200:6443 --token ebx4uz.9y3twsnoj9yoscoo \
    --discovery-token-ca-cert-hash sha256:1bc280548259dd8f1ac53d75e918a8ec99c234b13f4fe18a71435bbbe8cb26f3

View the cluster nodes

[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get pod -o wide -n kube-system

All control-plane nodes should be in the Ready state, and all system components should be running normally.

Connecting to a private registry

Configuration of the private registry itself is omitted here; perform the following steps on all nodes.

Modify daemon.json

[root@master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.213.181 master01
192.168.213.182 master02
192.168.213.183 master03
192.168.213.191 node01
192.168.213.192 node02
192.168.213.129 reg.zhao.com
[root@master01 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://sopn42m9.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://reg.zhao.com"]
}
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker

Create the authentication secret
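A typical way to create this secret with standard kubectl; the secret name regcred is arbitrary, and <user>/<password> are placeholders for your registry credentials:

# Verify the credentials against the registry first (optional)
[root@master01 ~]# docker login reg.zhao.com
# Store the credentials in a docker-registry type secret
[root@master01 ~]# kubectl create secret docker-registry regcred \
    --docker-server=reg.zhao.com \
    --docker-username=<user> \
    --docker-password=<password>
# Either reference regcred via imagePullSecrets in each pod spec,
# or attach it to the default service account for the namespace:
[root@master01 ~]# kubectl patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "regcred"}]}'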
