Kubernetes v1.18.19 Binary Deployment (5)

These messages have no impact, so continue with the steps below; no fix has been found yet.
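Even though the messages are harmless, it is worth confirming that the apiserver really is serving requests before moving on. A minimal probe, assuming the flag values used throughout this series (secure port 6443 on 192.168.1.21, plus the local insecure port 8080, which is the v1.18 default and is what kube-controller-manager, and later the scheduler, reach via --master) and that anonymous access to /healthz has not been disabled:

# Probe the secure port; a default v1.18 RBAC setup allows unauthenticated GETs on /healthz
curl -k https://192.168.1.21:6443/healthz
# Probe the local insecure port on the master itself
curl http://127.0.0.1:8080/healthz
# Both commands should print: ok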

[root@master01 k8s]# systemctl status kube-controller-manager kube-apiserver -l
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2021-07-07 22:12:34 EDT; 32s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 2581 (kube-controller)
    Tasks: 9
   Memory: 28.0M
   CGroup: /system.slice/kube-controller-manager.service
           └─2581 /k8s/k8s/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/k8s/k8s/logs --leader-elect=true --master=127.0.0.1:8080 --bind-address=127.0.0.1 --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16 --service-cluster-ip-range=10.0.0.0/24 --cluster-signing-cert-file=/k8s/k8s/ssl/ca.pem --cluster-signing-key-file=/k8s/k8s/ssl/ca-key.pem --root-ca-file=/k8s/k8s/ssl/ca.pem --service-account-private-key-file=/k8s/k8s/ssl/ca-key.pem --experimental-cluster-signing-duration=87600h0m0s

7月 07 22:12:34 master01 systemd[1]: Started Kubernetes Controller Manager.
7月 07 22:12:36 master01 kube-controller-manager[2581]: E0707 22:12:36.360749 2581 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
7月 07 22:12:46 master01 kube-controller-manager[2581]: E0707 22:12:46.378024 2581 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
Note: the two lines above are only warnings...

● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2021-07-07 22:06:13 EDT; 6min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 2459 (kube-apiserver)
    Tasks: 10
   Memory: 294.8M
   CGroup: /system.slice/kube-apiserver.service
           └─2459 /k8s/k8s/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/k8s/k8s/logs --etcd-servers=https://192.168.1.21:2379,https://192.168.1.22:2379,https://192.168.1.23:2379 --bind-address=192.168.1.21 --secure-port=6443 --advertise-address=192.168.1.21 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth=true --token-auth-file=/k8s/k8s/cfg/token.csv --service-node-port-range=30000-32767 --kubelet-client-certificate=/k8s/k8s/ssl/server.pem --kubelet-client-key=/k8s/k8s/ssl/server-key.pem --tls-cert-file=/k8s/k8s/ssl/server.pem --tls-private-key-file=/k8s/k8s/ssl/server-key.pem --client-ca-file=/k8s/k8s/ssl/ca.pem --service-account-key-file=/k8s/k8s/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/k8s/k8s/logs/k8s-audit.log

7月 07 22:06:13 master01 systemd[1]: Started Kubernetes API Server.
7月 07 22:06:17 master01 kube-apiserver[2459]: E0707 22:06:17.350834 2459 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.1.21, ResourceVersion: 0, AdditionalErrorMsg:
Note: the line above is only a warning...

The same messages also show up in the system log:

[root@master01 ~]# tail -n150 /var/log/messages
Jul  3 01:33:21 master01 kube-apiserver: E0703 01:33:21.104424 19654 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.16.186.111, ResourceVersion: 0, AdditionalErrorMsg:
Jul  3 01:33:47 master01 kube-controller-manager: E0703 01:33:47.479521 19647 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
Jul  3 01:33:47 master01 kube-controller-manager: E0703 01:33:47.491278 19647 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided

Deploy kube-scheduler

Create the configuration file:

[root@master01 ~]# cat > /k8s/k8s/cfg/kube-scheduler.cfg << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/k8s/k8s/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF

# Flag notes:
# --master: connect to the apiserver through the local insecure port 8080.
# --leader-elect: when several instances of this component are running, elect a leader automatically (HA).

Manage kube-scheduler with systemd:

[root@master01 ~]# cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/k8s/k8s/cfg/kube-scheduler.cfg
ExecStart=/k8s/k8s/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start it and enable it at boot:

[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl enable kube-scheduler
[root@master01 ~]# systemctl start kube-scheduler

[root@master01 k8s]# systemctl status kube-scheduler -l
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2021-07-07 22:16:17 EDT; 4s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 2637 (kube-scheduler)
    Tasks: 9
   Memory: 11.8M
   CGroup: /system.slice/kube-scheduler.service
           └─2637 /k8s/k8s/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/k8s/k8s/logs --leader-elect --master=127.0.0.1:8080 --bind-address=127.0.0.1

7月 07 22:16:17 master01 systemd[1]: Started Kubernetes Scheduler.
7月 07 22:16:17 master01 kube-scheduler[2637]: I0707 22:16:17.427067 2637 registry.go:150] Registering EvenPodsSpread predicate and priority function
7月 07 22:16:17 master01 kube-scheduler[2637]: I0707 22:16:17.427183 2637 registry.go:150] Registering EvenPodsSpread predicate and priority function
Note: you may see complaints in the output above as well, but they do not affect the steps that follow.
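Since --leader-elect is enabled, a quick way to confirm the scheduler actually acquired leadership is to inspect its leader-election record; in v1.18 the lock is normally kept, at least in part, as an annotation on the kube-scheduler Endpoints object in kube-system. A sketch of such a check (the holderIdentity value will of course differ on your hosts):

# Show the leader-election record; the control-plane.alpha.kubernetes.io/leader
# annotation carries holderIdentity, i.e. the instance that currently owns the lock
kubectl -n kube-system get endpoints kube-scheduler -o yaml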
Check the cluster status:

[root@master01 k8s]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

Deploy the Worker Nodes
