Binary Deployment of Kubernetes 1.18.3

kube_version: v1.18.3

etcd_version: v3.4.9

flannel: v0.12.0

coredns: v1.6.7

cni-plugins: v0.8.6

Pod CIDR: 10.244.0.0/16

Service CIDR: 10.96.0.0/12

Kubernetes internal service address: 10.96.0.1

CoreDNS address: 10.96.0.10

apiserver domain: lb.5179.top

1.2 Machine layout

Hostname        IP             Roles / components          k8s components
centos7-nginx   10.10.10.127   nginx L4 proxy              nginx
centos7-a       10.10.10.128   master,node,etcd,flannel    kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
centos7-b       10.10.10.129   master,node,etcd,flannel    kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
centos7-c       10.10.10.130   master,node,etcd,flannel    kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
centos7-d       10.10.10.131   node,flannel                kubelet kube-proxy
centos7-e       10.10.10.132   node,flannel                kubelet kube-proxy
2. Pre-deployment environment preparation

Use centos7-nginx as the control machine and set up passwordless SSH to the other machines.
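A minimal sketch of the key distribution, assuming root logins and the IPs from the table above:

# generate a key pair on centos7-nginx (skip if one already exists)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# copy the public key to every cluster machine (prompts once for each root password)
for host in 10.10.10.128 10.10.10.129 10.10.10.130 10.10.10.131 10.10.10.132; do
    ssh-copy-id root@${host}
done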

2.1 Install Ansible for batch operations

Installation steps are omitted here.
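For reference, a minimal sketch of installing Ansible from EPEL on CentOS 7 (the original skips this step; any other install method works just as well):

yum install -y epel-release
yum install -y ansible
ansible --version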

[root@centos7-nginx ~]# cat /etc/ansible/hosts
[masters]
10.10.10.128
10.10.10.129
10.10.10.130

[nodes]
10.10.10.131
10.10.10.132

[k8s]
10.10.10.[128:132]
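With the inventory in place, a quick sanity check that passwordless SSH works for every group; each host should answer with pong:

ansible k8s -m ping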

Push the control machine's hosts file to all cluster machines.

cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.127 centos7-nginx lb.5179.top
10.10.10.128 centos7-a
10.10.10.129 centos7-b
10.10.10.130 centos7-c
10.10.10.131 centos7-d
10.10.10.132 centos7-e

ansible k8s -m shell -a "mv /etc/hosts /etc/hosts.bak"
ansible k8s -m copy -a "src=/etc/hosts dest=/etc/hosts"

2.2 Disable the firewall and SELinux

# disable the firewall
ansible k8s -m shell -a "systemctl stop firewalld && systemctl disable firewalld"

# disable SELinux
ansible k8s -m shell -a "setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config"

2.3 Disable swap

ansible k8s -m shell -a "swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab"

2.4 Install Docker and configure a registry mirror

vim ./install_docker.sh

#!/bin/bash
# install prerequisites and add the Docker CE yum repository
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce-19.03.11
systemctl enable docker
systemctl start docker
docker version

# configure the registry mirror and container log rotation
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ajpb7tdn.mirror.aliyuncs.com"],
  "log-opts": {"max-size":"100m", "max-file":"5"}
}
EOF
systemctl daemon-reload
systemctl restart docker

Then run the script on all machines with Ansible:

ansible k8s -m script -a "./install_docker.sh"
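A quick check that Docker came up everywhere and picked up the mirror configuration (a sketch; any equivalent check works):

# the server version should be 19.03.11 on every host
ansible k8s -m shell -a "docker version --format '{{.Server.Version}}'"

# the aliyun mirror should appear under "Registry Mirrors"
ansible k8s -m shell -a "docker info | grep -A1 'Registry Mirrors'"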

2.5 Adjust kernel parameters

vim 99-k8s.conf

# sysctls for k8s node config
net.ipv4.ip_forward=1
net.ipv4.tcp_slow_start_after_idle=0
net.core.rmem_max=16777216
fs.inotify.max_user_watches=524288
kernel.softlockup_all_cpu_backtrace=1
kernel.softlockup_panic=1
fs.file-max=2097152
fs.inotify.max_user_instances=8192
fs.inotify.max_queued_events=16384
vm.max_map_count=262144
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.may_detach_mounts=1
net.core.netdev_max_backlog=16384
net.ipv4.tcp_wmem=4096 12582912 16777216
net.core.wmem_max=16777216
net.core.somaxconn=32768
net.ipv4.tcp_max_syn_backlog=8096
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.tcp_rmem=4096 12582912 16777216

Copy it to the remote machines and apply it:

ansible k8s -m copy -a "src=./99-k8s.conf dest=/etc/sysctl.d/"
ansible k8s -m shell -a "cd /etc/sysctl.d/ && sysctl --system"
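One caveat: the two net.bridge.* keys only exist once the br_netfilter kernel module is loaded. A sketch of loading it persistently and spot-checking the values (module file name is my choice):

ansible k8s -m shell -a "modprobe br_netfilter && echo br_netfilter > /etc/modules-load.d/br_netfilter.conf && sysctl --system"
ansible k8s -m shell -a "sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"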

2.6 Create the required directories

For the masters:

vim mkdir_k8s_master.sh

#!/bin/bash
mkdir -p /opt/etcd/{bin,data,cfg,ssl}
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
mkdir -p /opt/kubernetes/logs/{kubelet,kube-proxy,kube-scheduler,kube-apiserver,kube-controller-manager}
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
echo 'export PATH=$PATH:/opt/etcd/bin' >> /etc/profile
source /etc/profile

For the nodes:

vim mkdir_k8s_node.sh

#!/bin/bash
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
mkdir -p /opt/kubernetes/logs/{kubelet,kube-proxy}
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile

Run them with Ansible:

ansible masters -m script -a "./mkdir_k8s_master.sh"
ansible nodes -m script -a "./mkdir_k8s_node.sh"
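A quick way to confirm the directory layout landed on every group (a sketch):

ansible masters -m shell -a "ls /opt/etcd /opt/kubernetes"
ansible nodes -m shell -a "ls /opt/kubernetes"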

2.7 Prepare the LB

To provide high availability for the three masters, you can either use a cloud vendor's SLB or build it yourself with two nginx instances plus keepalived.
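In this walkthrough the single centos7-nginx host does the L4 proxying for lb.5179.top. A minimal sketch of the nginx stream configuration, assuming nginx is built with the stream module and that the apiservers will listen on the default secure port 6443 (the real ports depend on the apiserver configuration later in the deployment):

# /etc/nginx/nginx.conf (fragment)
stream {
    upstream kube_apiserver {
        # the three master nodes from the table above; 6443 is an assumption
        server 10.10.10.128:6443;
        server 10.10.10.129:6443;
        server 10.10.10.130:6443;
    }

    server {
        listen 6443;                 # lb.5179.top:6443 -> apiservers
        proxy_connect_timeout 2s;
        proxy_pass kube_apiserver;
    }
}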
