Note: If the kube-flannel.yml below cannot be downloaded, open https://githubusercontent.com.ipaddress.com/raw.githubusercontent.com, take the resolved addresses (usually 4 of them), and add them to your machine's hosts file in the format: 185.199.108.133 raw.githubusercontent.com
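A minimal sketch of adding such an entry (the IP is only an example; use the addresses the lookup above actually returns):

# Append a resolved address for raw.githubusercontent.com (example IP, replace with your own)
echo "185.199.108.133 raw.githubusercontent.com" >> /etc/hosts
# Confirm the name now resolves locally
getent hosts raw.githubusercontent.com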
Download the CNI plugin bundle from: https://github.com/containernetworking/plugins/releases
# The default CNI directory is /opt/cni/bin; every node must have cni-plugins-linux-amd64-v0.8.6.tgz unpacked into /opt/cni/bin
[root@master01 ~]# for i in {1..3};do ssh root@192.168.1.2$i mkdir -p /opt/cni/bin;done    # creates the directory on all nodes
[root@master01 ~]# wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
[root@master01 ~]# tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
[root@master01 ~]# for i in {1..2};do scp /opt/cni/bin/* root@node0$i:/opt/cni/bin/;done
[root@master01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master01 ~]# cp kube-flannel.yml{,.bak}
[root@master01 ~]# ll -h kube-flannel.yml
-rw-r--r-- 1 root root 4813 Jul  8 00:01 kube-flannel.yml
[root@master01 ~]# sed -i -r "s#quay.io/coreos/flannel:v0.14.0#lizhenliang/flannel:v0.14.0#g" kube-flannel.yml

To avoid network problems, pull the flannel:v0.14.0 image manually first:
[root@master01 ~]# docker pull lizhenliang/flannel:v0.14.0
[root@master01 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

View detailed pod information:
[root@master01 ~]# kubectl get pods -n kube-system -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
kube-flannel-ds-njsnc   1/1     Running   0          52m   192.168.1.21   master01   <none>           <none>

5 minutes later...
[root@master01 ~]# kubectl get pods -n kube-system -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
kube-flannel-ds-qtm49   1/1     Running   0          4m48s   172.16.186.111   master01   <none>           <none>

==================== Reference ===============================
kubectl describe pods -n kube-system
kubectl get pods --all-namespaces -o wide
kubectl get pods -n <namespace> -o wide

Tips: Pod statuses
CrashLoopBackOff: the container exited and kubelet is restarting it
InvalidImageName: the image name cannot be resolved
ImageInspectError: the image cannot be verified
ErrImageNeverPull: the pull policy forbids pulling the image
ImagePullBackOff: retrying the image pull
RegistryUnavailable: the image registry cannot be reached
ErrImagePull: generic image-pull error
CreateContainerConfigError: the container config used by kubelet cannot be created
CreateContainerError: container creation failed
m.internalLifecycle.PreStartContainer: the pre-start hook failed
RunContainerError: the container failed to start
PostStartHookError: the post-start hook failed
ContainersNotInitialized: the containers have not finished initializing
ContainersNotReady: the containers are not ready
ContainerCreating: the container is being created
PodInitializing: the pod is initializing
DockerDaemonNotReady: docker has not fully started
NetworkPluginNotReady: the network plugin has not fully started
kubectl explain pods: show pod help options
=============================================================
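Following up on the flannel deployment above, a few optional sanity checks. This is a sketch assuming the defaults used in this guide: the flannel.1 interface is created by flannel's VXLAN backend, and 10-flannel.conflist is the CNI config file flannel writes out.

# Pre-pull the image on the workers too, since the DaemonSet will schedule a pod on every node
for i in 1 2; do ssh root@node0$i docker pull lizhenliang/flannel:v0.14.0; done
# Each joined node should now have a flannel.1 interface on the pod network
ip addr show flannel.1
# The DaemonSet should report one ready pod per node as nodes join
kubectl get ds kube-flannel-ds -n kube-system
# The CNI config written by flannel should exist on every node
cat /etc/cni/net.d/10-flannel.conflist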
Authorize the apiserver to access kubelet
[root@master01 ~]# cd /k8s/k8s/cfg/
[root@master01 cfg]# pwd
/k8s/k8s/cfg
[root@master01 cfg]# cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
[root@master01 cfg]# kubectl apply -f apiserver-to-kubelet-rbac.yaml
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created

Add new Worker Nodes

Copy the already-deployed node files to the new nodes:
[root@master01 cfg]# for i in 1 2;do scp -r /k8s/k8s/bin/{kubelet,kube-proxy} root@node0$i:/k8s/k8s/bin/;done
[root@master01 cfg]# for i in 1 2;do scp -r /k8s/k8s/cfg/* root@node0$i:/k8s/k8s/cfg/;done
[root@master01 cfg]# for i in 1 2;do scp -r /k8s/k8s/ssl/ca.pem root@node0$i:/k8s/k8s/ssl/;done
[root@master01 cfg]# for i in 1 2;do scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@node0$i:/usr/lib/systemd/system/;done

Delete the kubelet certificate and kubeconfig on every worker node (node01 shown as the example).
Note: these files are generated automatically when the certificate request is approved and differ per node, so they must be deleted and regenerated.
[root@node01 ~]# rm -rf /k8s/k8s/cfg/kubelet.kubeconfig
[root@node01 ~]# rm -rf /k8s/k8s/ssl/kubelet*

Change the hostname in the config files on every worker node (node01 shown as the example; a scripted alternative follows the status output below):
[root@node01 ~]# vim /k8s/k8s/cfg/kubelet.cfg
--hostname-override=node01
[root@node01 ~]# vim /k8s/k8s/cfg/kube-proxy-config.yml
hostnameOverride: node01

Start kubelet and kube-proxy on every worker node and enable them at boot:
[root@node01 ~]# systemctl daemon-reload
systemctl enable kube-proxy
systemctl enable kubelet
systemctl start kubelet
systemctl start kube-proxy
[root@node01 ~]# systemctl status kubelet kube-proxy
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-07-03 18:08:57 CST; 47s ago
 Main PID: 12547 (kubelet)
    Tasks: 9
   Memory: 17.9M
   CGroup: /system.slice/kubelet.service
           └─12547 /k8s/k8s/bin/kubelet --logtostderr=false --v=2 --log-dir=/k8s/k8s/logs --hostname-override=node01 --network-plugin=cni --kubeconfi...

Jul 03 18:08:57 node01 systemd[1]: Started Kubernetes Kubelet.
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-07-03 18:09:13 CST; 31s ago
 Main PID: 12564 (kube-proxy)
    Tasks: 7
   Memory: 10.8M
   CGroup: /system.slice/kube-proxy.service
           └─12564 /k8s/k8s/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/k8s/k8s/logs --config=/k8s/k8s/cfg/kube-proxy-config.yml

Jul 03 18:09:13 node01 systemd[1]: Started Kubernetes Proxy.
Jul 03 18:09:13 node01 kube-proxy[12564]: E0703 18:09:13.601671   12564 node.go:125] Failed to retrieve node info: nodes "node01" not found
Jul 03 18:09:14 node01 kube-proxy[12564]: E0703 18:09:14.771792   12564 node.go:125] Failed to retrieve node info: nodes "node01" not found
Jul 03 18:09:17 node01 kube-proxy[12564]: E0703 18:09:17.116482   12564 node.go:125] Failed to retrieve node info: nodes "node01" not found
Jul 03 18:09:21 node01 kube-proxy[12564]: E0703 18:09:21.626704   12564 node.go:125] Failed to retrieve node info: nodes "node01" not found
Jul 03 18:09:30 node01 kube-proxy[12564]: E0703 18:09:30.468243   12564 node.go:125] Failed to retrieve node info: nodes "node01" not found

Note: these errors have no impact for now; they occur because the master has not yet approved the node's certificate request.
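The hostname edits above can also be scripted from the master instead of opening each file by hand. A minimal sketch, assuming root SSH access and the node01/node02 names used throughout this guide:

# Rewrite only the hostname token so the rest of each line (including any trailing \) is preserved
for n in node01 node02; do
  ssh root@$n "sed -i 's/--hostname-override=[^ ]*/--hostname-override=$n/' /k8s/k8s/cfg/kubelet.cfg"
  ssh root@$n "sed -i 's/^hostnameOverride:.*/hostnameOverride: $n/' /k8s/k8s/cfg/kube-proxy-config.yml"
done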
Back on the master, approve the new nodes' kubelet certificate requests
[root@master01 cfg]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-I5zG32aMCCevmo7WLLxS6vv0N43tzuCqvOGHdAQ3qSE   13m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-_54g7zto6pfzu8VSnTqfYIw63TX48u8WwcPtzhMFKMg   49s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

Approve them:
[root@master01 cfg]# kubectl certificate approve node-csr-I5zG32aMCCevmo7WLLxS6vv0N43tzuCqvOGHdAQ3qSE
[root@master01 cfg]# kubectl certificate approve node-csr-_54g7zto6pfzu8VSnTqfYIw63TX48u8WwcPtzhMFKMg

Check the requests again:
[root@master01 cfg]# kubectl get csr
NAME                                AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-I5zG32aMCCevmo7WLLxS6vv0   14m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-_54g7zto6pfzu8VSnTqfYIw6   97s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
Note: the CSR names above are truncated for layout reasons.

Check node status:
[root@master01 cfg]# kubectl get node
NAME       STATUS     ROLES    AGE     VERSION
master01   Ready      <none>   34m     v1.18.19
node01     NotReady   <none>   2m25s   v1.18.19
node02     NotReady   <none>   2m20s   v1.18.19
Note: immediately after running this command the nodes may show NotReady; that is not necessarily a problem, just check again after a while.
==============================================================================
If a node has not become Ready for a long time, inspect it individually:
[root@master01 cfg]# kubectl describe nodes node01
==============================================================================
5 minutes later...
[root@master01 cfg]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    <none>   63m   v1.18.19
node01     Ready    <none>   31m   v1.18.19
node02     Ready    <none>   31m   v1.18.19
[root@master01 cfg]# kubectl get pod -n kube-system
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-dxvtn   1/1     Running   0          114m
kube-flannel-ds-njsnc   1/1     Running   4          3h7m
kube-flannel-ds-v2gcr   1/1     Running   0          58m

To review the overall state (some errors may appear in the output):
systemctl status kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy etcd

Deploy the Dashboard and CoreDNS
1. Deploy the Dashboard on the master node
[root@master01 cfg]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
By default the Dashboard can only be accessed from inside the cluster; change the Service to the NodePort type to expose it externally:
[root@master01 cfg]# vim recommended.yaml
....
....
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
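After editing the manifest, applying it and checking the Service is the natural next step. A minimal sketch, assuming the nodePort 30001 chosen above:

kubectl apply -f recommended.yaml
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
# Expect TYPE NodePort and PORT(S) 443:30001/TCP; the UI is then reachable at
# https://<any-node-ip>:30001 (the certificate is self-signed, so the browser will warn)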