Creating a Highly Available Kubernetes v1.12.0 Cluster with Kubeadm (8)

[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "10.3.1.20:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.3.1.20:6443"
[discovery] Requesting info from "https://10.3.1.20:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.3.1.20:6443"
[discovery] Successfully established connection with API Server "10.3.1.20:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Check the components running on this Node:

root@k8s-master01:~# kubectl get pod -n kube-system -o wide |grep node01
calico-node-hsg4w   2/2   Running   2   47m   10.3.1.63   k8s-node01   <none>
kube-proxy-xn795    1/1   Running   0   47m   10.3.1.63   k8s-node01   <none>

Check the current node status.

# There are now four nodes, all Ready
root@k8s-master01:~# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   132m   v1.12.0
k8s-master02   Ready    master   117m   v1.12.0
k8s-master03   Ready    master   108m   v1.12.0
k8s-node01     Ready    <none>   52m    v1.12.0

Deploying keepalived

Deploy keepalived on the three master nodes, i.e. apiserver + keepalived floating a single VIP. Clients such as kubectl, kubelet, and kube-proxy then connect to the apiserver through this VIP; a load balancer is not used for now.

Install keepalived

apt-get install -y keepalived

Write the keepalived configuration file

# MASTER node
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
  notification_email {
    root@localhost
  }
  notification_email_from Alexandre.Cassen@firewall.loc
  smtp_server 127.0.0.1
  smtp_connect_timeout 30
  router_id KEP
}

vrrp_script chk_k8s {
    script "killall -0 kube-apiserver"   # exit 0 while a kube-apiserver process exists
    interval 1
    weight -5
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.3.1.29
    }
    track_script {
        chk_k8s
    }
    notify_master "/data/service/keepalived/notify.sh master"
    notify_backup "/data/service/keepalived/notify.sh backup"
    notify_fault "/data/service/keepalived/notify.sh fault"
}
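The notify scripts referenced in the config above are not shown in the original. A minimal sketch of what /data/service/keepalived/notify.sh could contain (the log path, message format, and the standalone default argument are assumptions; the default is only there so the sketch runs by itself):

```shell
#!/bin/sh
# Hypothetical notify script for keepalived state transitions.
# keepalived invokes it with the new state as $1: master, backup, or fault.
LOG="${NOTIFY_LOG:-/tmp/keepalived-state.log}"

notify() {
    case "$1" in
        master|backup|fault)
            # Append a timestamped record of the transition.
            echo "$(date '+%F %T') keepalived state -> $1" >> "$LOG"
            ;;
        *)
            echo "usage: notify {master|backup|fault}" >&2
            return 1
            ;;
    esac
}

notify "${1:-master}"
```

A real script would typically also restart or re-check services on transition; logging the state change is the minimum useful behavior.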

Copy this configuration file to the remaining masters, lower the priority, and set the state to BACKUP. The result is a floating VIP, 10.3.1.29, which was already included when the certificates were created earlier.
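On the BACKUP masters only a few fields differ from the MASTER config; a sketch of the changed lines (the priority values are examples and just need to be lower than the MASTER's 100):

```
# k8s-master02 / k8s-master03: /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state BACKUP        # MASTER on k8s-master01
    priority 90         # e.g. 90 on master02, 80 on master03
    # interface, virtual_router_id, authentication, virtual_ipaddress,
    # track_script and notify_* stay identical to the MASTER config
}
```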

Modify the client configuration

When kubeadm init was run, the two components on each Node, kubelet and kube-proxy, were pointed at a single kube-apiserver. This step therefore edits the configuration of those two components, changing the kube-apiserver address to the VIP.
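As a sketch, the substitution on each Node looks like the following. It is applied here to a sample file so it can be run anywhere; on a real node the target is /etc/kubernetes/kubelet.conf (10.3.1.20 is the apiserver address used at join time, 10.3.1.29 the keepalived VIP):

```shell
#!/bin/sh
# Demonstrates swapping the per-master apiserver address for the VIP.
conf=$(mktemp)
cat > "$conf" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://10.3.1.20:6443
EOF

# On a node this would be: sed -i '...' /etc/kubernetes/kubelet.conf
sed -i 's#server: https://10.3.1.20:6443#server: https://10.3.1.29:6443#' "$conf"
grep 'server:' "$conf"    # now shows the VIP address
rm -f "$conf"
```

After editing kubelet.conf, restart the kubelet (`systemctl restart kubelet`). kube-proxy does not read a local file: in a kubeadm cluster its apiserver address lives in the kube-proxy ConfigMap (`kubectl -n kube-system edit cm kube-proxy`), and the kube-proxy pods must be deleted afterwards so they restart with the new address.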

Verify the cluster

Create an nginx deployment

root@k8s-master01:~# kubectl run nginx --image=nginx:1.10 --port=80 --replicas=1
deployment.apps/nginx created

Check that the nginx pod was created

root@k8s-master:~# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE
nginx-787b58fd95-p9jwl   1/1     Running   0          70s   192.168.45.23   k8s-node02   <none>

Create a NodePort service for nginx

$ kubectl expose deployment nginx --type=NodePort --port=80
service "nginx" exposed

Check that the nginx service was created
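The original output is cut off here. The check itself is `kubectl get svc nginx`, followed by a request to any node on the allocated NodePort. A self-contained sketch of reading the port out of such output (the sample line and port 31680 are illustrative, not from the original):

```shell
#!/bin/sh
# Sample line from `kubectl get svc nginx` (ClusterIP and NodePort invented):
svc='nginx   NodePort   10.96.110.5   <none>   80:31680/TCP   5s'

# The NodePort is the number between "80:" and "/TCP":
nodeport=$(echo "$svc" | sed -n 's#.*80:\([0-9]*\)/TCP.*#\1#p')
echo "$nodeport"    # prints 31680

# On the cluster, verify with any node IP, e.g.:
#   curl http://10.3.1.63:${nodeport}
```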
