Run the following on the master that currently holds the VIP — in my setup that is master2.
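A quick way to confirm which node holds the VIP (a sketch, assuming master.k8s.io resolves to the keepalived VIP, as the controlPlaneEndpoint below implies):

$ VIP=$(getent hosts master.k8s.io | awk '{print $1}')   # the address the control-plane endpoint resolves to
$ ip a s eth0 | grep -w "$VIP" && echo "this node holds the VIP"

If grep prints a matching inet line, continue on this node; otherwise switch to the node that does hold the VIP.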
$ ip a s eth0   # check which master node currently holds the VIP, and work there
$ mkdir -p /usr/local/kubernetes/manifests
$ cd /usr/local/kubernetes/manifests/
$ vi kubeadm-config.yaml

apiServer:
  certSANs:
    - master1
    - master2
    - master.k8s.io
    - 47.109.20.140
    - 47.109.31.67
    - 47.109.23.137
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.7
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}

Then, on master2, run:

$ kubeadm init --config kubeadm-config.yaml

The error I ran into:
W0613 21:46:45.433570   15099 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
this version of kubeadm only supports deploying clusters with the control plane version >= 1.19.0. Current version: v1.16.3
To see the stack trace of this error execute with --v=5 or higher

Fix: edit kubeadm-config.yaml so the API version and Kubernetes version match what is actually installed:
apiVersion: kubeadm.k8s.io/v1beta2
kubernetesVersion: v1.20.7   # must match the version you actually installed

If you find yourself editing file after file and things only get worse, run kubeadm reset and then repeat the steps below from the top; most of the time that succeeds.
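Alternatively, the warning itself points to kubeadm config migrate; a sketch (the new-config filename is my own choice):

$ kubeadm config migrate --old-config kubeadm-config.yaml --new-config kubeadm-config-new.yaml
$ mv kubeadm-config-new.yaml kubeadm-config.yaml   # replace the old spec with the migrated one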
The full session after the fix follows. (The two vim edits near the top are, presumably, the common v1.20 workaround of commenting out the - --port=0 line in kube-controller-manager.yaml and kube-scheduler.yaml so that kubectl get cs stops reporting those components as Unhealthy; the AGE column below shows both pods restarting shortly after the edit.)

[root@master2 ~]# cd /etc/kubernetes/manifests
[root@master2 manifests]# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
[root@master2 manifests]# vim kube-controller-manager.yaml
[root@master2 manifests]# vim kube-scheduler.yaml
[root@master2 manifests]# kubectl get cs
[root@master2 manifests]# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-bpj72          1/1     Running   0          10h
coredns-7f89b7bc75-z62hl          1/1     Running   0          10h
etcd-master2                      1/1     Running   0          10h
kube-apiserver-master2            1/1     Running   0          10h
kube-controller-manager-master2   1/1     Running   0          2m10s
kube-flannel-ds-xxmlj             1/1     Running   0          10h
kube-proxy-7f5h8                  1/1     Running   0          10h
kube-scheduler-master2            0/1     Running   0          50s
[root@master2 manifests]# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-bpj72          1/1     Running   0          10h
coredns-7f89b7bc75-z62hl          1/1     Running   0          10h
etcd-master2                      1/1     Running   0          10h
kube-apiserver-master2            1/1     Running   0          10h
kube-controller-manager-master2   1/1     Running   0          2m33s
kube-flannel-ds-xxmlj             1/1     Running   0          10h
kube-proxy-7f5h8                  1/1     Running   0          10h
kube-scheduler-master2            1/1     Running   0          73s
[root@master2 manifests]# mkdir flannel
[root@master2 manifests]# cd flannel
[root@master2 flannel]# wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master2 flannel]# kubectl apply -f kube-flannel.yml
[root@master2 flannel]# ssh root@47.109.31.67 mkdir -p /etc/kubernetes/pki/etcd
root@47.109.31.67's password:
[root@master2 flannel]# scp /etc/kubernetes/admin.conf root@47.109.31.67:/etc/kubernetes
root@47.109.31.67's password:
admin.conf                100% 5566  10.9MB/s   00:00
[root@master2 flannel]# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@47.109.31.67:/etc/kubernetes/pki
root@47.109.31.67's password:
ca.crt                    100% 1066   2.1MB/s   00:00
ca.key                    100% 1675   3.5MB/s   00:00
sa.key                    100% 1679   3.6MB/s   00:00
sa.pub                    100%  451   1.2MB/s   00:00
front-proxy-ca.crt        100% 1078   2.4MB/s   00:00
front-proxy-ca.key        100% 1679   3.8MB/s   00:00
[root@master2 flannel]# scp /etc/kubernetes/pki/etcd/ca.* root@47.109.31.67:/etc/kubernetes/pki/etcd
root@47.109.31.67's password:
ca.crt                    100% 1058   2.1MB/s   00:00
ca.key
[root@master2 flannel]# mkdir /usr/local/kubernetes/manifests -p
[root@master2 flannel]# cd /usr/local/kubernetes/manifests/
[root@master2 manifests]# vim kubeadm-config.yaml
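The preflight output below notes that images can also be pulled ahead of time; an optional sketch before re-running init:

$ kubeadm config images pull --config kubeadm-config.yaml   # pre-pull the control-plane images from the aliyuncs mirror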
[root@master2 manifests]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.20.7
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.k8s.io master1 master2] and IPs [10.1.0.1 172.31.197.188 47.109.20.140 47.109.31.67 47.109.23.137 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [172.31.197.188 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [172.31.197.188 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.007521 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master2 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rijbv2.50zq7e6zkpixxcg4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join master.k8s.io:16443 --token rijbv2.50zq7e6zkpixxcg4 \
    --discovery-token-ca-cert-hash sha256:7b34954f4a26987b9e56da871607a694581120142ae2814c313f50e9c77efc9d \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master.k8s.io:16443 --token rijbv2.50zq7e6zkpixxcg4 \
    --discovery-token-ca-cert-hash sha256:7b34954f4a26987b9e56da871607a694581120142ae2814c313f50e9c77efc9d

[root@master2 manifests]#
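With master2 initialized, the remaining steps follow directly from the output above: configure kubectl on master2, then run the printed --control-plane join on master1 (47.109.31.67), whose CA and service-account keys were already copied over by the scp commands earlier. A sketch:

[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# kubectl get nodes   # master2 should be listed, and Ready once flannel is up

# then on master1, as root, run the "kubeadm join ... --control-plane" command printed above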