Building a K8S Test Cluster from Scratch (5)

After the node joins the cluster, running ip a shows our network interfaces: k8s has created the additional flannel.1, cni0, and veth devices for us.

[root@vm2 vagrant]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:4d:77:d3 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85901sec preferred_lft 85901sec
    inet6 fe80::5054:ff:fe4d:77d3/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:f1:20:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.205.11/24 brd 192.168.205.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef1:2055/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:6f:03:4a:68 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 6e:99:9d:7b:08:ec brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 brd 10.244.1.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::6c99:9dff:fe7b:8ec/64 scope link
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 4a:fe:72:33:fc:85 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.1/24 brd 10.244.1.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::48fe:72ff:fe33:fc85/64 scope link
       valid_lft forever preferred_lft forever
7: veth7981bae1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 1a:87:b6:82:c7:5c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::1887:b6ff:fe82:c75c/64 scope link
       valid_lft forever preferred_lft forever
8: veth54bbbfd5@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 46:36:5d:96:a0:69 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::4436:5dff:fe96:a069/64 scope link
       valid_lft forever preferred_lft forever
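These devices hint at how flannel's VXLAN backend moves traffic: pod subnets on other nodes are routed through flannel.1, while local pods hang off the cni0 bridge via the veth pairs. If you want to confirm this yourself, a quick look at the routing table and bridge membership is enough (a minimal sketch, assuming the iproute2 bridge tool is available; the routes you see will reflect your own pod CIDRs):

# Routes to other nodes' pod subnets (10.244.x.0/24) should point at flannel.1
[root@vm2 vagrant]# ip route | grep 10.244

# Each vethXXXXXXXX is the host end of a pod's interface pair, attached to the cni0 bridge
[root@vm2 vagrant]# bridge link show | grep cni0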

Now check the node status again on vm1; you should see two Ready nodes.

[root@vm1 vagrant]# kubectl get nodes
NAME   STATUS   ROLES                  AGE     VERSION
vm1    Ready    control-plane,master   5h58m   v1.20.0
vm2    Ready    <none>                 5h54m   v1.20.0
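Note that vm2's ROLES column reads <none>: kubeadm only labels control-plane nodes by default. This is purely cosmetic, but if you want the worker to show a role as well, you can add the label yourself (an optional step, not part of the original setup):

[root@vm1 vagrant]# kubectl label node vm2 node-role.kubernetes.io/worker=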

Deploying a Service

Now that we have a working k8s environment, let's deploy a simple nginx service to test it. First, prepare a file named nginx-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
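If you want to sanity-check the manifest before creating anything, kubectl can do a client-side dry run first (--dry-run=client is the syntax in v1.20; older releases used a bare --dry-run):

[root@vm1 vagrant]# kubectl apply -f nginx-deployment.yaml --dry-run=client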

Run kubectl apply to deploy it:

[root@vm1 vagrant]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

Check whether the Pods have come up:

[root@vm1 vagrant]# kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-585449566-bqls4   1/1     Running   0          20s
nginx-deployment-585449566-n8ssk   1/1     Running   0          20s
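To see which node each replica was scheduled onto, and to confirm the Pod IPs come out of the 10.244.x.x range flannel advertised above, add -o wide (the extra IP and NODE columns are standard kubectl output; the actual values will vary):

[root@vm1 vagrant]# kubectl get pods -o wide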

Then run kubectl expose to add a Service that exposes the Deployment:

[root@vm1 vagrant]# kubectl expose deployment nginx-deployment --port=80 --type=NodePort
service/nginx-deployment exposed
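For reference, kubectl expose is just a shortcut for creating a Service object. The command above produces roughly the following manifest (a sketch: expose copies the selector app: nginx from the Deployment, and the node port is picked at random from 30000-32767 unless you pin it explicitly):

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80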

Run kubectl get svc to check which port was mapped:

[root@vm1 vagrant]# kubectl get svc
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP        4m19s
nginx-deployment   NodePort    10.98.176.208   <none>        80:32033/TCP   8s
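The 80:32033/TCP column means the Service's port 80 has been mapped to NodePort 32033 on every node. To verify end to end, curl that port on a node's host-only IP (192.168.205.11 is vm2's eth1 address from the ip a output earlier; your NodePort will almost certainly differ):

[root@vm1 vagrant]# curl http://192.168.205.11:32033

If everything is wired up correctly, this returns the default nginx welcome page.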
