Using Multus to separate the management and service networks: running Calico and Flannel together (2)

As the CNI configuration directory shows, there are config files for calico, flannel, and multus. The kubelet uses the file whose name sorts first (lowest number prefix), so multus is selected here, and the multus configuration names calico as the default network plugin.
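The selection rule is plain lexical ordering of the file names. A minimal sketch, assuming a typical `/etc/cni/net.d` layout (the file names below are illustrative, not taken from this cluster):

```shell
# kubelet picks the lexically first config file in /etc/cni/net.d.
# The three file names are hypothetical examples of such a directory.
printf '%s\n' 10-calico.conflist 10-flannel.conflist 00-multus.conf | sort | head -n1
# -> 00-multus.conf, so multus would be chosen
```

This is why renaming a config file with a lower number is enough to change which plugin the kubelet uses.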

Creating a pod on the calico network

Because multus's default network is calico, a pod created in the usual way, with no changes to any file, uses the calico network plugin. Note that the pod's YAML must not request host networking ("hostNetwork: true").
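As a sketch, a minimal pod spec that picks up the default (calico) network simply omits any multus annotation; the pod name and namespace below are illustrative, while the image and command follow the examples later in this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: calicopod        # illustrative name
  namespace: default
spec:
  containers:
  - name: samplepod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: dougbtv/centos-network
```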

Creating a pod on the flannel network

First, create a NetworkAttachmentDefinition resource:

cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: flannel-conf
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }'
EOF

Use kubectl get networkattachmentdefinition.k8s.cni.cncf.io -A to view the resource.
Then add an annotation to the pod:

[root@k8s-master multus]# cat flannel-pod-128.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: flannelpod128
  namespace: net
  annotations:
    v1.multus-cni.io/default-network: default/flannel-conf
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - k8s-master
  containers:
  - name: samplepod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: dougbtv/centos-network
    ports:
    - containerPort: 80
[root@k8s-master multus]#

Applying the YAML above creates a pod attached to the flannel network.

Creating a dual-interface pod that uses both calico and flannel

The default network is calico; if an additional network is also specified, the pod is created with two interfaces. For example, create a pod from the following YAML:

[root@k8s-master multus]# cat example.yml
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: default/flannel-conf
spec:
  containers:
  - name: samplepod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: dougbtv/centos-network
[root@k8s-master multus]#

Once the pod is running, two interfaces are visible inside it: eth0, managed by calico, and net1, managed by flannel:

[root@k8s-master multus]# kubectl get pods -o wide -A | grep sample
default   samplepod   1/1   Running   0   14s   10.200.235.196   k8s-master   <none>   <none>
[root@k8s-master multus]# kubectl exec -ti -n default samplepod ip a
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP
    link/ether 1a:c9:1e:a8:67:b6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.200.235.196/32 brd 10.200.235.196 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::18c9:1eff:fea8:67b6/64 scope link
       valid_lft forever preferred_lft forever
6: net1@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
    link/ether 66:b7:a4:91:ec:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.244.0.4/24 brd 10.244.0.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::64b7:a4ff:fe91:ec50/64 scope link
       valid_lft forever preferred_lft forever
[root@k8s-master multus]#

Network connectivity

Calico and flannel take different data paths here, but pods on the two networks can still reach each other across nodes, as shown below:

[root@k8s-deploy4-master-1 ~]# kubectl get pods -o wide -A | grep pod
net   calicopod          1/1   Running   2   45h   10.8.19.213    k8s-deploy4-worker-1   <none>   <none>
net   calicopodmaster    1/1   Running   0   45h   10.8.18.133    k8s-deploy4-master-1   <none>   <none>
net   flannelpod         1/1   Running   1   45h   192.168.1.53   k8s-deploy4-worker-1   <none>   <none>
net   flannelpodmaster   1/1   Running   0   45h   192.168.0.24   k8s-deploy4-master-1   <none>   <none>
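In this listing, each pod's network can be told apart by its address prefix. A small sketch of that classification; the prefixes (calico 10.8.0.0/16, flannel 192.168.0.0/16) are assumptions read off the output above and are specific to this environment:

```shell
# Map a pod IP to the plugin that assigned it, using the address
# prefixes observed in the listing above (cluster-specific assumption).
classify() {
  case "$1" in
    10.8.*)    echo calico ;;
    192.168.*) echo flannel ;;
    *)         echo unknown ;;
  esac
}
classify 10.8.19.213    # calicopod  -> calico
classify 192.168.1.53   # flannelpod -> flannel
```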

These four pods can ping one another pairwise. However, in some environments we have seen cross-node packets stop at the node interface bound to flannel or to calico: according to the local routing table they should be forwarded on to the pod's interface, but they are not, which indicates an anomaly on that interface. To be safe, we recommend avoiding traffic between the calico and flannel networks, and using the two networks purely to separate traffic.
