Binary Installation of Kubernetes 1.20
1. Cluster Environment
Hostname                           IP Address       Role
k8s-master01                       192.168.1.100    master node
k8s-master02                       192.168.1.101    master node
k8s-master03                       192.168.1.102    master node
k8s-master-lb (on the master nodes)  192.168.1.246    keepalived virtual IP
k8s-node01                         192.168.1.103    worker node
k8s-node02                         192.168.1.104    worker node
Configuration     Notes
OS version        CentOS 7.9
Docker version    19.03.x
Pod CIDR          172.16.0.0/12
Service CIDR      10.96.0.0/12
Note:
The VIP (virtual IP) must not collide with any IP already in use on your network. Ping it first; it is only usable if it does not answer. The VIP must be on the same LAN as the hosts!
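For example, a quick check from any node (a simple sketch; if the ping gets no reply, the address is free to use as the VIP):
ping -c 2 192.168.1.246
# If this times out or prints "Destination Host Unreachable", 192.168.1.246 is unused and can serve as the VIP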
2. Base Environment Configuration (run the following on ALL nodes)
2.1 Configure hosts resolution
cat >> /etc/hosts << EOF
192.168.1.100 k8s-master01
192.168.1.101 k8s-master02
192.168.1.102 k8s-master03
 
192.168.1.246 k8s-master-lb # if this is not an HA cluster, use Master01's IP here
192.168.1.103 k8s-node01
192.168.1.104 k8s-node02
EOF
2.2 Replace the yum repositories
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
2.3 Install common tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git lrzsz -y
2.4 Disable firewalld, SELinux, dnsmasq, and swap
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
# Disable the swap partition
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
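To confirm swap is fully off, check that the Swap line reports zero (standard free output):
free -h | grep -i swap
# Expected: Swap:  0B  0B  0B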
2.5 Time synchronization
# Install ntpdate
rpm -ivh
yum install ntpdate -y
# Change the timezone
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
# Sync the time once now
ntpdate time2.aliyun.com
# Add a recurring sync job to crontab
crontab -e
*/5 * * * * /usr/sbin/ntpdate time1.aliyun.com >> /tmp/time.log 2>&1
2.6 Linux tuning
ulimit -SHn 65535
vim /etc/security/limits.conf
# Append the following at the end (hard limits must be at least as large as soft limits)
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
# Then reboot the machine
reboot
2.7 Upgrade the system on all nodes and reboot. This step does not upgrade the kernel; the kernel is upgraded separately in the next section:
# CentOS 7 needs this upgrade; on CentOS 8 upgrade as needed
yum update -y --exclude=kernel* && reboot
3. Kernel Upgrade
3.1 Configure passwordless SSH (on Master01)
Master01 uses SSH keys to log into the other nodes without a password. The configuration files and certificates generated during installation are all created on Master01, and the cluster is also managed from Master01. On Alibaba Cloud or AWS you need a separate kubectl server. Configure the keys as follows:
# Just press Enter at every prompt
ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
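To verify that key-based login works before continuing, a quick loop over the same host list (a sketch; it should print each hostname without prompting for a password):
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $i hostname;done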
3.2 Download all the installation source files (on Master01)
cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
3.3 Download the packages needed for the upgrade (on Master01)
CentOS 7 needs a kernel upgrade to 4.18+; this guide upgrades to 4.19.
# Download the kernel packages on the master01 node
cd /root
wget
wget
# Copy them from master01 to the other nodes:
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
3.4 Kernel upgrade (all nodes)
# Install the kernel
cd /root && yum localinstall -y kernel-ml*
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
# Check that the default kernel is 4.19
grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
# Reboot all nodes, then check that the running kernel is 4.19
reboot
[root@k8s-node02 ~]# uname -a
Linux k8s-node02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
3.5 Install ipvsadm (all nodes)
yum install ipvsadm ipset sysstat conntrack libseccomp -y
Configure the ipvs modules on all nodes. On kernel 4.19+, nf_conntrack_ipv4 was renamed to nf_conntrack; on kernels below 4.19 use nf_conntrack_ipv4:
# Add the following
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack # on kernels below 4.19, use nf_conntrack_ipv4 instead
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
# Then run
systemctl enable --now systemd-modules-load.service
3.6 Enable kernel parameters required by a Kubernetes cluster (all nodes)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
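These settings load automatically at boot; to apply them immediately as well (standard sysctl usage):
sysctl --system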
# After configuring the kernel parameters on all nodes, reboot and make sure the modules are still loaded afterwards
reboot
[root@k8s-master01 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0
nf_nat                 32768  1 ip_vs_ftp
ip_vs_sed              16384  0
ip_vs_nq               16384  0
ip_vs_fo               16384  0
ip_vs_sh               16384  0
ip_vs_dh               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs_wlc              16384  0
ip_vs_lc               16384  0
ip_vs                 151552  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          143360  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs
4. Docker Installation
4.1 Install Docker CE 19.03 (all nodes)
yum install docker-ce-19.03.* -y
4.1.1 Tip:
Newer versions of kubelet recommend systemd, so change Docker's CgroupDriver to systemd:
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
4.1.2 Enable Docker on boot (all nodes)
systemctl daemon-reload && systemctl enable --now docker
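Once Docker is up, you can confirm the cgroup driver change from 4.1.1 took effect (assuming Docker started cleanly):
docker info | grep -i "cgroup driver"
# Expected: Cgroup Driver: systemd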
5. Installing Kubernetes and etcd
5.1 Download the Kubernetes packages (on Master01)
# The version used here is 1.20.0; download the latest 1.20.x release when you install:
# https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/
[root@k8s-master01 ~]# wget https://dl.k8s.io/v1.20.0/kubernetes-server-linux-amd64.tar.gz
5.2 Download the etcd package (on Master01)
[root@k8s-master01 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
5.3 Extract the Kubernetes binaries (on Master01)
[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
5.4 Extract the etcd binaries (on Master01)
[root@k8s-master01 ~]# tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.13-linux-amd64/etcd{,ctl}
5.5 Check the versions (on Master01)
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.20.0
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.4.13
API version: 3.4
5.6 Send the components to the other nodes (on Master01)
[root@k8s-master01 ~]# MasterNodes='k8s-master02 k8s-master03'
[root@k8s-master01 ~]# WorkNodes='k8s-node01 k8s-node02'
[root@k8s-master01 ~]# for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
[root@k8s-master01 ~]# for NODE in $WorkNodes; do
scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
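As a quick sanity check that every node received its binaries (a sketch that reuses the variables above and the passwordless SSH from section 3.1):
for NODE in $MasterNodes $WorkNodes; do echo $NODE; ssh $NODE kubelet --version; done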
5.7 Create the /opt/cni/bin directory (all nodes)
mkdir -p /opt/cni/bin
5.8 Switch the branch (on Master01)
# Switch to the 1.20.x branch (for other versions, switch to the corresponding branch)
# List all branches
[root@k8s-master01 ~]# cd k8s-ha-install/
[root@k8s-master01 k8s-ha-install]# git branch -a
* master
remotes/origin/HEAD -> origin/master
remotes/origin/manual-installation
remotes/origin/manual-installation-v1.16.x
remotes/origin/manual-installation-v1.17.x
remotes/origin/manual-installation-v1.18.x
remotes/origin/manual-installation-v1.19.x
remotes/origin/manual-installation-v1.20.x
remotes/origin/master
# Switch the branch
git checkout manual-installation-v1.20.x
# The files needed for certificate generation are as follows
[root@k8s-master01 k8s-ha-install]# ls
bootstrap calico CoreDNS dashboard kube-proxy metrics-server-0.4.x metrics-server-0.4.x-kubeadm pki snapshotter
6. Generating Certificates
This is the most critical step of a binary installation; a single mistake ruins everything, so make sure every step is correct.
6.1 Download the certificate generation tools (Master01)
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
[root@k8s-master01 ~]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master01 ~]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master01 ~]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
6.2 Generate the etcd certificates
6.2.1 Create the etcd certificate directory (all Master nodes)
mkdir /etc/etcd/ssl -p
6.2.2 Create the kubernetes certificate directory (all nodes)
mkdir -p /etc/kubernetes/pki
6.2.3 Generate the etcd certificates on Master01 (they are copied to the other nodes in 6.2.4)
# Generate the CSR (certificate signing request) files; they configure the domains, organization, and unit
[root@k8s-master01 ~]# cd /root/k8s-ha-install/pki
# Generate the etcd CA certificate and its key
[root@k8s-master01 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
2020/12/21 01:58:02 [INFO] generating a new CA key and certificate from CSR
2020/12/21 01:58:02 [INFO] generate received request
2020/12/21 01:58:02 [INFO] received CSR
2020/12/21 01:58:02 [INFO] generating key: rsa-2048
2020/12/21 01:58:03 [INFO] encoded CSR
2020/12/21 01:58:03 [INFO] signed certificate with serial number 140198241947074029848239512164671290627608591138
# You can reserve a few extra IPs after the -hostname parameter to make future scaling easier
[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.100,192.168.1.101,192.168.1.102 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
# Output
2020/12/21 02:00:04 [INFO] generate received request
2020/12/21 02:00:04 [INFO] received CSR
2020/12/21 02:00:04 [INFO] generating key: rsa-2048
2020/12/21 02:00:05 [INFO] encoded CSR
2020/12/21 02:00:05 [INFO] signed certificate with serial number 470467884878418179395781489624244078991295464856
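Optionally, verify that the hostnames and IPs made it into the certificate's SAN list (standard openssl usage, not part of the original steps):
openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"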
6.2.4 Copy the certificates to the other nodes (from Master01)
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do
ssh $NODE "mkdir -p /etc/etcd/ssl"
for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
done
done
6.3 Kubernetes component certificates
6.3.1 Generate the Kubernetes CA certificate (Master01)
[root@k8s-master01 ~]# cd /root/k8s-ha-install/pki
[root@k8s-master01 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
# Output
2020/12/21 02:05:33 [INFO] generating a new CA key and certificate from CSR
2020/12/21 02:05:33 [INFO] generate received request
2020/12/21 02:05:33 [INFO] received CSR
2020/12/21 02:05:33 [INFO] generating key: rsa-2048
2020/12/21 02:05:34 [INFO] encoded CSR
2020/12/21 02:05:34 [INFO] signed certificate with serial number 41601140313910114593243737048758611445671732018
# 10.96.0.1 is the first IP of the Kubernetes service CIDR; if you change the service CIDR, change 10.96.0.1 accordingly.
# If this is not an HA cluster, 192.168.1.246 is Master01's IP
[root@k8s-master01 pki]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.1.246,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.1.100,192.168.1.101,192.168.1.102 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
# Output
2020/12/21 02:07:26 [INFO] generate received request
2020/12/21 02:07:26 [INFO] received CSR
2020/12/21 02:07:26 [INFO] generating key: rsa-2048
2020/12/21 02:07:26 [INFO] encoded CSR
2020/12/21 02:07:26 [INFO] signed certificate with serial number 538625498609814572541825087295197801303230523180
6.3.2 Generate the apiserver aggregation certificate (Master01)
[root@k8s-master01 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
# Output
2020/12/21 02:08:45 [INFO] generating a new CA key and certificate from CSR
2020/12/21 02:08:45 [INFO] generate received request
2020/12/21 02:08:45 [INFO] received CSR
2020/12/21 02:08:45 [INFO] generating key: rsa-2048
2020/12/21 02:08:46 [INFO] encoded CSR
2020/12/21 02:08:46 [INFO] signed certificate with serial number 614553480240998616305316696839282255811191572397
[root@k8s-master01 pki]# cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
# Output (ignore the warning)
2020/12/21 02:09:23 [INFO] generate received request
2020/12/21 02:09:23 [INFO] received CSR
2020/12/21 02:09:23 [INFO] generating key: rsa-2048
2020/12/21 02:09:23 [INFO] encoded CSR
2020/12/21 02:09:23 [INFO] signed certificate with serial number 525521597243375822253206665544676632452020336672
2020/12/21 02:09:23 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
6.3.3 Generate the controller-manager certificate (Master01)
[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
# Output
2020/12/21 02:10:59 [INFO] generate received request
2020/12/21 02:10:59 [INFO] received CSR
2020/12/21 02:10:59 [INFO] generating key: rsa-2048
2020/12/21 02:10:59 [INFO] encoded CSR
2020/12/21 02:10:59 [INFO] signed certificate with serial number 90004917734039884153079426464391358123145661914
2020/12/21 02:10:59 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
# Note: if this is not an HA cluster, change 192.168.1.246:8443 to master01's address, and change 8443 to the apiserver port (6443 by default)
# set-cluster: define a cluster entry; 192.168.1.246 is the VIP
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.246:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Output
Cluster "kubernetes" set.
# Define a context entry
[root@k8s-master01 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Output
Context "system:kube-controller-manager@kubernetes" created.
# set-credentials: define a user entry
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Output
User "system:kube-controller-manager" set.
# Use a context as the default
[root@k8s-master01 pki]# kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Output
Switched to context "system:kube-controller-manager@kubernetes".
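To sanity-check the resulting kubeconfig (standard kubectl usage; embedded certificates are shown as REDACTED/DATA+OMITTED):
kubectl config view --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig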
# Generate the kube-scheduler certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
# Output
2020/12/21 02:16:12 [INFO] generate received request
2020/12/21 02:16:12 [INFO] received CSR
2020/12/21 02:16:12 [INFO] generating key: rsa-2048
2020/12/21 02:16:12 [INFO] encoded CSR
2020/12/21 02:16:12 [INFO] signed certificate with serial number 74188665800103042050582037108256409976332653077
2020/12/21 02:16:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
# Note: if this is not an HA cluster, change 192.168.1.246:8443 to master01's address, and change 8443 to the apiserver port (6443 by default)
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.246:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
# Generate the admin certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
# Note: if this is not an HA cluster, change 192.168.1.246:8443 to master01's address, and change 8443 to the apiserver port (6443 by default)
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.246:8443 \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/pki/admin.pem \
--client-key=/etc/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
6.3.4 Create the ServiceAccount key → secret
[root@k8s-master01 pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
# Output
Generating RSA private key, 2048 bit long modulus
..............................+++
.............................................................+++
e is 65537 (0x10001)
[root@k8s-master01 pki]# openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
# Output
writing RSA key
6.3.5 Send the certificates to the other nodes
for NODE in k8s-master02 k8s-master03; do
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done;
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done;
6.3.6 Check the certificate files
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr           admin-key.pem           admin.pem           apiserver.csr           apiserver-key.pem           apiserver.pem
ca.csr              ca-key.pem              ca.pem              controller-manager.csr  controller-manager-key.pem  controller-manager.pem
front-proxy-ca.csr  front-proxy-ca-key.pem  front-proxy-ca.pem  front-proxy-client.csr  front-proxy-client-key.pem  front-proxy-client.pem
sa.key              sa.pub                  scheduler.csr       scheduler-key.pem       scheduler.pem
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ | wc -l
23
7. Kubernetes System Component Configuration
7.1 etcd configuration (all Master nodes)
The etcd configuration is largely identical across nodes; just change the hostname and IP addresses in each Master node's config.
7.1.1 On Master01
[root@k8s-master01 ~]# vim /etc/etcd/etcd.config.yml
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.100:2380'
listen-client-urls: 'https://192.168.1.100:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.100:2380'
advertise-client-urls: 'https://192.168.1.100:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.100:2380,k8s-master02=https://192.168.1.101:2380,k8s-master03=https://192.168.1.102:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
7.1.2 On Master02
[root@k8s-master02 ~]# vim /etc/etcd/etcd.config.yml
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.101:2380'
listen-client-urls: 'https://192.168.1.101:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.101:2380'
advertise-client-urls: 'https://192.168.1.101:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.100:2380,k8s-master02=https://192.168.1.101:2380,k8s-master03=https://192.168.1.102:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
7.1.3 On Master03
[root@k8s-master03 ~]# vim /etc/etcd/etcd.config.yml
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.102:2380'
listen-client-urls: 'https://192.168.1.102:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.102:2380'
advertise-client-urls: 'https://192.168.1.102:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.100:2380,k8s-master02=https://192.168.1.101:2380,k8s-master03=https://192.168.1.102:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
7.2 Create the systemd services
7.2.1 Create and start the etcd service (all Master nodes)
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
7.2.2 Create the etcd certificate directory and start etcd (all Master nodes)
cd /etc/etcd/ssl && scp * k8s-master02:`pwd` && scp * k8s-master03:`pwd`
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
7.2.3 Check the cluster status (on any master)
[root@k8s-master03 ~]# export ETCDCTL_API=3
[root@k8s-master03 ~]# etcdctl --endpoints="192.168.1.102:2379,192.168.1.101:2379,192.168.1.100:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.1.102:2379 | a77e5ca7bd4dc035 |  3.4.13 |   20 kB |     false |      false |       465 |          9 |                  9 |        |
| 192.168.1.101:2379 | e3adc675ac3b3dbd |  3.4.13 |   20 kB |     false |      false |       465 |          9 |                  9 |        |
| 192.168.1.100:2379 | 47a087175e3f17b3 |  3.4.13 |   20 kB |      true |      false |       465 |          9 |                  9 |        |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
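Besides the status table, the same flags can be used for a per-member health check (a standard etcdctl subcommand; timings will vary):
etcdctl --endpoints="192.168.1.102:2379,192.168.1.101:2379,192.168.1.100:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health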
8. High Availability Configuration
High availability configuration (note: if this is not an HA cluster, haproxy and keepalived do not need to be installed)
If you are installing in the cloud, you can also skip this chapter and use the cloud load balancer directly, e.g. Alibaba Cloud SLB or Tencent Cloud ELB
SLB -> haproxy -> apiserver
8.1 Install keepalived and haproxy (all Master nodes)
yum install keepalived haproxy -y
8.2 Configure HAProxy (identical on every Master node)
vim /etc/haproxy/haproxy.cfg
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 192.168.1.100:6443 check
  server k8s-master02 192.168.1.101:6443 check
  server k8s-master03 192.168.1.102:6443 check
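Before starting the service, the configuration can be syntax-checked (a standard haproxy flag):
haproxy -c -f /etc/haproxy/haproxy.cfg
# Expected: Configuration file is valid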
8.3 Configure Keepalived (Master nodes)