Building a K8s Test Cluster from Scratch (4)

As kubeadm's post-init message instructs, you first need to run the following commands to set up a kubeconfig before you can access the cluster:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
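kubeadm's output also mentions an alternative for quick one-off sessions (e.g. when working as root): point the KUBECONFIG environment variable directly at the admin config instead of copying it.

```shell
# Alternative to copying admin.conf into ~/.kube: point kubectl at it directly.
# This only affects the current shell session.
export KUBECONFIG=/etc/kubernetes/admin.conf
```

The copy-based approach above is the better choice for a regular user, since /etc/kubernetes/admin.conf itself is owned by root.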

Otherwise you will hit one of the errors below; after running the commands above, kubectl get nodes works normally.

```
[root@vm1 vagrant]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

or:

```
[root@vm1 vagrant]# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
```

Checking the status of the pods inside kube-system at this point, we can see that only kube-proxy and kube-apiserver are ready:

```
[root@vm1 vagrant]# kubectl get pods -n kube-system
NAME                          READY   STATUS              RESTARTS   AGE
coredns-74ff55c5b-4ndtt       0/1     ContainerCreating   0          43s
coredns-74ff55c5b-tmc7n       0/1     ContainerCreating   0          43s
etcd-vm1                      0/1     Running             0          51s
kube-apiserver-vm1            1/1     Running             0          51s
kube-controller-manager-vm1   0/1     Running             0          51s
kube-proxy-5mvwf              1/1     Running             0          44s
kube-scheduler-vm1            0/1     Running             0          51s
```

Deploying the Pod Network

Next we need to deploy a Pod network; otherwise the node will stay in the NotReady state, as shown below:

```
[root@vm1 vagrant]# kubectl get nodes
NAME   STATUS     ROLES                  AGE   VERSION
vm1    NotReady   control-plane,master   4m    v1.20.0
```

For the choice of network add-on, see https://kubernetes.io/docs/concepts/cluster-administration/addons/. Here we go with the well-known flannel plugin. When using kubeadm, flannel requires that the --pod-network-cidr flag be passed to kubeadm init to initialize the Pod CIDR:

NOTE: If kubeadm is used, then pass --pod-network-cidr=10.244.0.0/16 to kubeadm init to ensure that the podCIDR is set.
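Putting that note into practice, the init step would look roughly like the sketch below. The --apiserver-advertise-address value 192.168.205.10 is an assumption taken from the control-plane address that appears in the join command later; adjust it for your setup. Since re-running init on an already-initialized node is destructive, the sketch only prints the command unless RUN_INIT=1 is set.

```shell
# Sketch of the kubeadm init invocation flannel expects. The advertise address
# is assumed to be vm1's externally reachable (bridged) IP. Prints the command
# by default; set RUN_INIT=1 to actually execute it on a fresh node.
cmd="kubeadm init --apiserver-advertise-address=192.168.205.10 --pod-network-cidr=10.244.0.0/16"
if [ "${RUN_INIT:-0}" = "1" ]; then
  $cmd
else
  echo "$cmd"
fi
```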

One more thing: because our VMs were created with Vagrant, the default eth0 NIC is a NAT interface, which the VM can use to reach the outside world but which outside hosts cannot use to reach the VM. We therefore need to point flannel at the bridged NIC that can communicate with the outside, eth1, by editing kube-flannel.yml and adding the --iface=eth1 argument:

```yaml
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.13.1-rc1
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1   # extra startup argument for the kube-flannel container!
```
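Before committing to --iface=eth1, it's worth confirming which interface actually carries the bridged address on your boxes. A quick way to list each interface with its IPv4 address:

```shell
# List network interfaces, then show each one's IPv4 address; pick the
# interface holding the host-reachable (bridged/host-only) address for
# flannel's --iface flag. On the Vagrant boxes here, eth0 carries the NAT
# 10.0.2.x address and eth1 the 192.168.205.x one.
ls /sys/class/net
ip -o -4 addr show | awk '{print $2, $4}'
```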

Run the following commands to download the manifest and create the flannel network:

```shell
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel.yml
kubectl apply -f kube-flannel.yml
```

A short while after the network is deployed, checking the pod status again shows that all pods are ready, and a new kube-flannel-ds pod has started:

```
[root@vm1 vagrant]# kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-4ndtt       1/1     Running   0          88s
coredns-74ff55c5b-tmc7n       1/1     Running   0          88s
etcd-vm1                      1/1     Running   0          96s
kube-apiserver-vm1            1/1     Running   0          96s
kube-controller-manager-vm1   1/1     Running   0          96s
kube-flannel-ds-dnw4d         1/1     Running   0          19s
kube-proxy-5mvwf              1/1     Running   0          89s
kube-scheduler-vm1            1/1     Running   0          96s
```

Joining the Cluster

Finally, run kubeadm join on vm2 to join it to the cluster:

```
[root@vm2 vagrant]# kubeadm join 192.168.205.10:6443 --token paattq.r3qp8kksjl0yukls \
>     --discovery-token-ca-cert-hash sha256:f18d1e87c8b1d041bc7558eedd2857c2ad7094b1b2c6aa8388d0ef51060e4c0f
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
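If the original token or hash has been lost, running `kubeadm token create --print-join-command` on the control plane regenerates a complete join command. The --discovery-token-ca-cert-hash value itself is simply the SHA-256 digest of the cluster CA's public key in DER form, and can be recomputed with openssl. The sketch below demonstrates the computation against a throwaway self-signed certificate; on vm1 you would point it at /etc/kubernetes/pki/ca.crt instead.

```shell
# Demonstrate how the sha256 discovery hash is derived from a CA certificate.
# A throwaway self-signed cert stands in for /etc/kubernetes/pki/ca.crt here.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# Extract the public key, convert to DER, and take its sha256 hex digest.
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```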
