Installing Kubernetes 1.20 from Binaries (Part 2)

Note: adjust the IP addresses and the network interface (the interface parameter) for each node.

8.3.1 Master01

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.1.100
    virtual_router_id 51
    priority 101
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.246
    }
    track_script {
        chk_apiserver
    }
}

8.3.2 Master02

vim /etc/keepalived/keepalived.conf

Identical to the Master01 file except for three settings: state BACKUP, mcast_src_ip 192.168.1.101, and priority 100.

8.3.3 Master03

vim /etc/keepalived/keepalived.conf

Identical to the Master01 file except for three settings: state BACKUP, mcast_src_ip 192.168.1.102, and priority 100.

8.3.4 Health check script (all master nodes)

Note the quoted heredoc delimiter ('EOF'): without the quotes, the shell would expand $(...) and $variables while writing the file and produce a broken script.

cat > /etc/keepalived/check_apiserver.sh << 'EOF'
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# Make it executable
chmod +x /etc/keepalived/check_apiserver.sh

8.3.5 Start haproxy and keepalived (all master nodes)

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

8.3.6 VIP test (master01)

Important: once keepalived and haproxy are installed, verify that keepalived actually works before going any further.

# The VIP should be bound to the ens33 interface
[root@k8s-master01 ~]# ip a | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.1.100/24 brd 192.168.1.255 scope global ens33
    inet 192.168.1.246/32 scope global ens33

# From any node, check haproxy
telnet 192.168.1.246 8443

If the VIP does not answer ping, or telnet does not print the "]" escape prompt, the VIP is not usable; do not continue. Troubleshoot keepalived first, e.g. the firewall and SELinux, the haproxy and keepalived service status, and the listening ports:

- On all nodes, the firewall must be disabled and inactive: systemctl status firewalld
- On all nodes, SELinux must be disabled: getenforce
- On the master nodes, check haproxy and keepalived: systemctl status keepalived haproxy
- On the master nodes, check the listening ports: netstat -lntp
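To exercise failover end to end, you can also stop haproxy on master01 and watch the VIP move to a backup node. A minimal sketch, using only the services and addresses configured above (run each command on the node named in its comment):

# On master01: stop haproxy; after three failed checks (about 3 seconds)
# check_apiserver.sh stops keepalived and the VIP is released
systemctl stop haproxy

# On master02 (or master03): the VIP should appear on ens33 within seconds
ip a show ens33 | grep 192.168.1.246

# On master01: restore both services when done
systemctl start haproxy keepalived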
9. Kubernetes Component Configuration

9.1 Apiserver

Create the kube-apiserver service on all master nodes. Note: if this is not a high-availability cluster, change 192.168.1.246 to master01's address. This document uses 10.96.0.0/12 as the Kubernetes Service network; it must not overlap with the host network or the Pod network. Adjust as needed.

9.1.1 Master01 configuration

In the flags below, --advertise-address is this node's host IP and --service-cluster-ip-range is the Service network. (systemd does not allow trailing comments on continuation lines, so keep any comments out of the unit file itself.)

[root@k8s-master01 ~]# vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.1.100 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.1.100:2379,https://192.168.1.101:2379,https://192.168.1.102:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
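If you are unsure whether the unit file survived editing intact (a stray comment after a backslash is enough to break it), systemd can lint it for you. A quick optional check, assuming systemd 209 or later:

# Prints nothing (or only warnings) when the unit parses cleanly
systemd-analyze verify /usr/lib/systemd/system/kube-apiserver.service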
9.1.2 Master02 configuration

[root@k8s-master02 ~]# vim /usr/lib/systemd/system/kube-apiserver.service

Identical to the Master01 unit file except for one flag: --advertise-address=192.168.1.101.

9.1.3 Master03 configuration

[root@k8s-master03 ~]# vim /usr/lib/systemd/system/kube-apiserver.service

Identical to the Master01 unit file except for one flag: --advertise-address=192.168.1.102.

9.1.4 Start the apiserver (all master nodes)

systemctl daemon-reload && systemctl enable --now kube-apiserver

# Check the kube-apiserver status
systemctl status kube-apiserver

# Expected result
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-12-21 03:50:34 CST; 56s ago
     Docs: https://github.com/kubernetes/kubernetes

# Log lines like the following are normal and can be ignored:
balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc01124e0e0, {READY <nil>}
Dec 21 03:51:19 k8s-master01 kube-apiserver[10428]: I1221 03:51:19.699478   10428 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
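A quick liveness probe through the load balancer confirms haproxy and the apiservers at once. A minimal sketch, assuming the default 1.20 RBAC rules (the system:public-info-viewer cluster role exposes /healthz to unauthenticated clients, so no token is needed):

# Through the VIP and haproxy
curl -k https://192.168.1.246:8443/healthz
# ok

# Or directly against a single apiserver
curl -k https://192.168.1.100:6443/healthz
# ok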
9.2 Configure the kube-controller-manager service (all master nodes)

Note: this document uses 172.16.0.0/12 as the Kubernetes Pod network; it must not overlap with the host network or the Kubernetes Service network. Adjust as needed.

9.2.1 The configuration file is identical on all three master nodes

vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

9.2.2 Start it (all master nodes)

systemctl daemon-reload && systemctl enable --now kube-controller-manager

9.2.3 Check the status (all master nodes)

systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-12-21 03:59:21 CST; 28s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 13594 (kube-controller)

9.3 Configure the kube-scheduler service (all master nodes)

9.3.1 The configuration file is identical on all master nodes

vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

9.3.2 Start it

systemctl daemon-reload && systemctl enable --now kube-scheduler

9.3.3 Check the status

systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-12-21 04:03:50 CST; 17s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 10915 (kube-scheduler)

10. TLS Bootstrapping Configuration

10.1 Create the bootstrap kubeconfig (on Master01)

Note: if this is not a high-availability cluster, change 192.168.1.246:8443 to master01's address, and 8443 to the apiserver port (6443 by default).

[root@k8s-master01 ~]# cd /root/k8s-ha-install/bootstrap

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.246:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

Note: if you change token-id and token-secret in bootstrap.secret.yaml, the strings must stay consistent with each other and keep the same length, and the token passed to set-credentials above (c8ad9c.2e4d610cf3e7426e) must match what you set:

apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet'."
  token-id: c8ad9c    # must match the suffix of metadata.name
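If you prefer a random token over the sample c8ad9c.2e4d610cf3e7426e, one way to generate a pair of the right shape (token-id: 6 characters, token-secret: 16 characters, both [a-z0-9]) is sketched below; openssl's hex output satisfies the character set. This helper is not part of the original tutorial:

TOKEN_ID=$(openssl rand -hex 3)        # 6 hex characters
TOKEN_SECRET=$(openssl rand -hex 8)    # 16 hex characters
echo "${TOKEN_ID}.${TOKEN_SECRET}"
# Put token-id/token-secret into bootstrap.secret.yaml (including the
# bootstrap-token-<token-id> name) and the full string into --token above.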
[root@k8s-master01 ~]# cd /root/k8s-ha-install/bootstrap
[root@k8s-master01 bootstrap]# mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
[root@k8s-master01 bootstrap]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created

11. Node Configuration

11.1 Copy the certificates to the other nodes

[root@k8s-master01 ~]# cd /etc/kubernetes/

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
    ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
    for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
        scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
    done
    for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
        scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
    done
done

11.2 Kubelet configuration

11.2.1 Create the required directories (all nodes)

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

11.2.2 Configure the kubelet service (all nodes)

vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

11.2.3 Configure the kubelet service drop-in (all nodes)

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

11.2.4 The kubelet configuration file (all nodes)

Note: if you changed the Kubernetes Service network, change the clusterDNS entry in kubelet-conf.yml to the tenth address of your Service network, e.g. 10.96.0.10 (this document's Service network is 10.96.0.0/12).

vim /etc/kubernetes/kubelet-conf.yml

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

11.2.5 Start the kubelet (all nodes)

systemctl daemon-reload
systemctl enable --now kubelet

# Watch the system log while it starts
tail -f /var/log/messages

11.2.6 Check the cluster state (on master01)

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   <none>   3m48s   v1.20.0
k8s-master02   NotReady   <none>   3m48s   v1.20.0
k8s-master03   NotReady   <none>   3m48s   v1.20.0
k8s-node01     NotReady   <none>   3m47s   v1.20.0
k8s-node02     NotReady   <none>   3m48s   v1.20.0

The nodes stay NotReady until the CNI plugin (Calico, chapter 12) is installed.
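Behind the scenes, each kubelet used the bootstrap token to file a CSR that the node-autoapprove-bootstrap binding approves automatically. A quick way to confirm this worked (the names and ages below are illustrative, not literal output):

[root@k8s-master01 ~]# kubectl get csr
# NAME        AGE  SIGNERNAME                                    REQUESTOR                 CONDITION
# node-csr-…  1m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:c8ad9c   Approved,Issued
# One CSR per node, each Approved,Issued; a Pending CSR usually means the
# auto-approval ClusterRoleBindings from bootstrap.secret.yaml are missing.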
11.3 kube-proxy configuration

Note: if this is not a high-availability cluster, change 192.168.1.246:8443 to master01's address, and 8443 to the apiserver port (6443 by default).

11.3.1 Run on Master01

[root@k8s-master01 ~]# cd /root/k8s-ha-install
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.1.246:8443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

11.3.2 Distribute kube-proxy's systemd service file to the other nodes (on master01)

If you changed the cluster's Pod network, change the clusterCIDR: 172.16.0.0/12 parameter in kube-proxy/kube-proxy.conf to your Pod network.

[root@k8s-master01 ~]# vim /root/k8s-ha-install/kube-proxy/kube-proxy.conf
clusterCIDR: 172.16.0.0/12

Distribute the configuration files (on master01):

[root@k8s-master01 ~]# cd /root/k8s-ha-install
for NODE in k8s-master01 k8s-master02 k8s-master03; do
    scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
    scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
    scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done

for NODE in k8s-node01 k8s-node02; do
    scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
    scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
    scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done

11.3.3 Start kube-proxy (all nodes)

systemctl daemon-reload && systemctl enable --now kube-proxy
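To confirm kube-proxy came up in the intended mode, you can query its metrics endpoint on each node. This sketch assumes the default metricsBindAddress of 127.0.0.1:10249; the mode printed depends on what kube-proxy.conf configures:

# kube-proxy exposes the active proxy mode on its metrics port
curl 127.0.0.1:10249/proxyMode
# Prints "ipvs" or "iptables", matching the mode set in kube-proxy.conf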
12. Installing Calico

For the Calico installation, be sure to follow the video course, including the final chapter on upgrading Calico.

12.1 Install Calico (on master01)

[root@k8s-master01 ~]# cd /root/k8s-ha-install/calico/

# Modify the following places in calico-etcd.yaml
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.1.100:2379,https://192.168.1.101:2379,https://192.168.1.102:2379"#g' calico-etcd.yaml

# (If /etc/kubernetes/pki/etcd does not exist on your nodes, point these
# three commands at /etc/etcd/ssl instead.)
ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

# Change this to your own Pod network (this document uses 172.16.0.0/12)
POD_SUBNET="172.16.0.0/12"
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

12.2 Apply it

[root@k8s-master01 calico]# kubectl apply -f calico-etcd.yaml
# Expected result
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

12.3 Check the container status

If a container is unhealthy, inspect it with kubectl describe or kubectl logs.

[root@k8s-master01 calico]# kubectl get po -n kube-system
NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-5f6d4b864b-pq2qw   0/1     Pending    0          45s
calico-node-75blv                          0/1     Init:0/2   0          46s
calico-node-hw27b                          0/1     Init:0/2   0          46s
calico-node-k2wdf                          0/1     Init:0/2   0          46s
calico-node-l58lz                          0/1     Init:0/2   0          46s
calico-node-v2qlq                          0/1     Init:0/2   0          46s
coredns-867d46bfc6-8vzrk                   0/1     Pending    0          10m

13. Installing CoreDNS

13.1 Install the matching version (recommended)

If you changed the Kubernetes Service network, change CoreDNS's service IP to the tenth IP of the Service network. As written, the sed below is a no-op for the default 10.96.0.0/12 network; substitute your own cluster DNS IP as the replacement string if yours differs.

cd /root/k8s-ha-install/
[root@k8s-master01 k8s-ha-install]# sed -i "s#10.96.0.10#10.96.0.10#g" CoreDNS/coredns.yaml

13.2 Install CoreDNS

[root@k8s-master01 k8s-ha-install]# kubectl create -f CoreDNS/coredns.yaml
# Expected result
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

13.3 Install the latest CoreDNS (not recommended)

git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
# ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Check the status:

# kubectl get po -n kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-85b4878f78-h29kh   1/1     Running   0          8h
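Since the kubelets were configured with 10.96.0.10 as clusterDNS (section 11.2.4), it is worth confirming the kube-dns Service actually got that ClusterIP; the port list below reflects the standard CoreDNS deployment and may differ slightly in your manifest:

kubectl get svc -n kube-system kube-dns
# NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
# kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   1m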
14. Installing the Metrics Server

Newer Kubernetes versions collect system resource metrics through metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.

14.1 Install metrics-server

[root@k8s-master01 ~]# cd /root/k8s-ha-install/metrics-server-0.4.x/
[root@k8s-master01 metrics-server-0.4.x]# kubectl create -f .
# Expected result
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

14.2 Wait for metrics-server to start, then check the status

[root@k8s-master01 ~]# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   384m         19%    1110Mi          59%
k8s-master02   334m         16%    1086Mi          58%
k8s-master03   324m         16%    1043Mi          55%
k8s-node01     208m         10%    573Mi           30%
k8s-node02     180m         9%     534Mi           28%
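The same aggregated API also backs per-Pod metrics; a quick optional sanity check once the APIService is up (the numbers will vary):

kubectl top pod -n kube-system
# Lists CPU and memory for every kube-system Pod; an error here usually
# means the v1beta1.metrics.k8s.io APIService is not yet Available.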
15. Installing the Dashboard

The Dashboard displays the various resources in the cluster; it can also tail Pod logs and run commands inside containers in real time.

15.1 Install the pinned version

[root@k8s-master01 ~]# cd /root/k8s-ha-install/dashboard/
[root@k8s-master01 dashboard]# kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

15.2 Install the latest version

# Official GitHub repo: https://github.com/kubernetes/dashboard
# The latest release is listed on the official dashboard repo
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

# Create an admin user: vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

# Install it
kubectl apply -f admin.yaml -n kube-system

15.3 Log in to the dashboard

# Change the dashboard Service type to NodePort
[root@k8s-master01 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
(change type: ClusterIP to type: NodePort)

# Look up the port number
[root@k8s-master01 ~]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.96.77.112   <none>        443:30902/TCP   4m10s

# Using your own NodePort, the dashboard is reachable through any host that
# runs kube-proxy, or through the VIP, e.g.: https://192.168.1.246:30902/

15.4 Retrieve the token

[root@k8s-master01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

16. Cluster Validation

16.1 Install busybox (on master01)

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

16.2 What to verify (on master01)

1. Pods must be able to resolve Services.
2. Pods must be able to resolve Services in other namespaces.
3. Every node must be able to reach the kubernetes Service on port 443 and the kube-dns Service on port 53.
4. Pod-to-Pod connectivity must work:
   a) within a namespace
   b) across namespaces
   c) across machines

16.3 Walkthrough (on master01)

# First check that the pod started successfully
[root@k8s-master01 ~]# kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          3m11s

# Check that the Service is normal
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   163m

# Check that the Pod can resolve a Service
[root@k8s-master01 ~]# kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

# Check that the Pod can resolve a Service in another namespace
[root@k8s-master01 ~]# kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

# If your output matches the above, these checks pass.

16.4 Verify with telnet

# Install telnet on all nodes if it is missing
yum install -y telnet

# On every machine:
#   10.96.0.1 443  — the kubernetes Service on 443
#   10.96.0.10 53  — the kube-dns Service on 53
# If the connection stays open instead of dropping, it works.
telnet 10.96.0.1 443
telnet 10.96.0.10 53
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

16.5 Verify with curl (all machines)

[root@k8s-master01 ~]# curl 10.96.0.10:53
curl: (52) Empty reply from server

16.6 Container validation (on master01)

[root@k8s-master01 ~]# kubectl get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5f6d4b864b-pq2qw   1/1     Running   0          62m
calico-node-75blv                          1/1     Running   0          62m
calico-node-hw27b                          1/1     Running   0          62m
calico-node-k2wdf                          1/1     Running   0          62m
calico-node-l58lz                          1/1     Running   0          62m
calico-node-v2qlq                          1/1     Running   0          62m
coredns-867d46bfc6-8vzrk                   1/1     Running   0          72m
metrics-server-595f65d8d5-kgn8c            1/1     Running   0          60m

[root@k8s-master01 ~]# kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5f6d4b864b-pq2qw   1/1     Running   0          63m   192.168.1.100   k8s-master01   <none>           <none>
calico-node-75blv                          1/1     Running   0          63m   192.168.1.103   k8s-node01     <none>           <none>
calico-node-hw27b                          1/1     Running   0          63m   192.168.1.101   k8s-master02   <none>           <none>
calico-node-k2wdf                          1/1     Running   0          63m   192.168.1.100   k8s-master01   <none>           <none>
calico-node-l58lz                          1/1     Running   0          63m   192.168.1.102   k8s-master03   <none>           <none>
calico-node-v2qlq                          1/1     Running   0          63m   192.168.1.104   k8s-node02     <none>           <none>
coredns-867d46bfc6-8vzrk                   1/1     Running   0          73m   172.161.125.2   k8s-node01     <none>           <none>
metrics-server-595f65d8d5-kgn8c            1/1     Running   0          62m   172.161.125.1   k8s-node01     <none>           <none>

# Being able to get a shell is enough
[root@k8s-master01 ~]# kubectl exec -it calico-node-v2qlq -n kube-system -- sh
sh-4.4#

# Exec into a calico-node pod on one node and confirm it can ping the other nodes' IPs
[root@k8s-master01 ~]# kubectl exec -it calico-node-v2qlq -n kube-system -- bash
[root@k8s-node02 /]# ping 192.168.1.104
PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=64 time=0.123 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=64 time=0.090 ms
^C
--- 192.168.1.104 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 46ms
rtt min/avg/max/mdev = 0.090/0.106/0.123/0.019 ms

17. Key Production Settings

17.1 These settings are explained in the video course; do not apply them blindly.

# Change on all nodes
vim /etc/docker/daemon.json
{
 "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
 ],
 "exec-opts": ["native.cgroupdriver=systemd"],
 "max-concurrent-downloads": 10,
 "max-concurrent-uploads": 5,
 "log-opts": {
   "max-size": "300m",
   "max-file": "2"
 },
 "live-restore": true
}

max-concurrent-downloads   # concurrent image pulls
max-concurrent-uploads     # concurrent image pushes
max-size                   # rotate log files at this size (300m here)
max-file                   # number of log files to keep (2 here)
live-restore               # with this enabled, restarting docker does not restart running containers

# Restart docker on all nodes after the change
systemctl daemon-reload && systemctl restart docker
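Since the kubelet config in section 11.2.4 sets cgroupDriver: systemd, docker must report the same driver after the restart; a quick check:

docker info | grep -i 'cgroup driver'
# Cgroup Driver: systemd    <- must match cgroupDriver in kubelet-conf.yml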
vim /usr/lib/systemd/system/kube-controller-manager.service
# On the three master nodes, add this flag somewhere in the ExecStart block:
--experimental-cluster-signing-duration=876000h0m0s \

# Restart after the change
systemctl daemon-reload && systemctl restart kube-controller-manager

# On all nodes, replace the kubelet drop-in with the following
[root@k8s-node02 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

# On all nodes, add the following to kubelet-conf.yml; adjust the values to
# your production environment
vim /etc/kubernetes/kubelet-conf.yml

rotateServerCertificates: true
allowedUnsafeSysctls:
 - "net.core*"
 - "net.ipv4.*"
kubeReserved:
  cpu: "10m"
  memory: 10Mi
  ephemeral-storage: 10Mi
systemReserved:
  cpu: "1"
  memory: 20Mi
  ephemeral-storage: 1Gi

# Restart after the change
systemctl daemon-reload && systemctl restart kubelet

# No errors in the log means it is fine
[root@k8s-master01 ~]# tail -f /var/log/messages

# Rename the node role (the original had a "matser" typo; use "master")
[root@k8s-master01 ~]# kubectl label node k8s-master01 node-role.kubernetes.io/master=''
node/k8s-master01 labeled
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   129m   v1.20.0   # changed successfully
k8s-master02   Ready    <none>   129m   v1.20.0
k8s-master03   Ready    <none>   129m   v1.20.0
k8s-node01     Ready    <none>   129m   v1.20.0
k8s-node02     Ready    <none>   129m   v1.20.0

18. Installation Summary

1. kubeadm
2. Binaries
3. Automated installation
   a) Ansible
      i. Master node installation does not need automation.
      ii. Adding Node nodes: a playbook.
4. Details to watch during installation:
   a) The fine-tuning settings above.
   b) In production, etcd must live on a disk separate from the system disk, and it must be an SSD.
   c) The Docker data disk should also be separate from the system disk; use an SSD if possible.
