We can now go back to the examples/rbd directory of ceph-csi and try out the volume snapshot feature. First, change the clusterID in snapshotclass.yaml to the ID of your Ceph cluster (which can be obtained with `ceph fsid`):
```yaml
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
driver: rbd.csi.ceph.com
parameters:
  # String representing a Ceph cluster to provision storage from.
  # Should be unique across all Ceph clusters in use for provisioning,
  # cannot be greater than 36 bytes in length, and should remain immutable for
  # the lifetime of the StorageClass in use.
  # Ensure to create an entry in the configmap named ceph-csi-config, based on
  # csi-config-map-sample.yaml, to accompany the string chosen to
  # represent the Ceph cluster in clusterID below
  clusterID: 154c3d17-a9af-4f52-b83e-0fddd5db6e1b
  # Prefix to use for naming RBD snapshots.
  # If omitted, defaults to "csi-snap-".
  # snapshotNamePrefix: "foo-bar-"
  csi.storage.k8s.io/snapshotter-secret-name: csi-rbd-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: ceph-csi
deletionPolicy: Delete
```

Then create the snapshot class:
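As the comments in the manifest point out, the clusterID must match an entry in the ceph-csi-config ConfigMap that was created when ceph-csi was deployed. For reference, such an entry looks roughly like the sketch below; the monitor address is a placeholder, not a value taken from this cluster.

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "154c3d17-a9af-4f52-b83e-0fddd5db6e1b",
        "monitors": [
          "10.0.0.1:6789"
        ]
      }
    ]
```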
```shell
$ kubectl create -f snapshotclass.yaml
```

Check whether the snapshot class was created successfully:
```shell
$ kubectl get volumesnapshotclass
NAME                      DRIVER             DELETIONPOLICY   AGE
csi-rbdplugin-snapclass   rbd.csi.ceph.com   Delete           2s
```

Remember the rbd-pvc we created in the previous section? We can now back it up by taking a snapshot of that PVC directly. The volume snapshot manifest is as follows:
```yaml
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: rbd-pvc
```

Create a snapshot of the PVC rbd-pvc from this manifest:
```shell
$ kubectl create -f snapshot.yaml
```

Verify that the snapshot was created successfully:
```shell
$ kubectl get volumesnapshot
NAME               READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS             SNAPSHOTCONTENT                                    CREATIONTIME   AGE
rbd-pvc-snapshot   false        rbd-pvc                                          csi-rbdplugin-snapclass   snapcontent-9011a05f-dc34-480d-854e-814b0b1b245d                  16s
```

In the Ceph cluster we can see the image name of the newly created snapshot:
```shell
$ rbd ls -p kubernetes
csi-snap-4da66c2e-f707-11ea-ba22-aaa4b0fc674d
csi-vol-d9d011f9-f669-11ea-a3fa-ee21730897e6
```

View the details of the newly created snapshot:
```shell
$ rbd snap ls csi-snap-4da66c2e-f707-11ea-ba22-aaa4b0fc674d -p kubernetes
SNAPID   NAME                                            SIZE    PROTECTED   TIMESTAMP
     9   csi-snap-4da66c2e-f707-11ea-ba22-aaa4b0fc674d   1 GiB               Tue Sep 15 03:55:34 2020
```

The snapshot is itself an image in the pool, so the usual commands can be used to inspect its details:
```shell
$ rbd info csi-snap-4da66c2e-f707-11ea-ba22-aaa4b0fc674d -p kubernetes
rbd image 'csi-snap-4da66c2e-f707-11ea-ba22-aaa4b0fc674d':
	size 1 GiB in 256 objects
	order 22 (4 MiB objects)
	snapshot_count: 1
	id: 66cdcd259693
	block_name_prefix: rbd_data.66cdcd259693
	format: 2
	features: layering, deep-flatten, operations
	op_features: clone-child
	flags:
	create_timestamp: Tue Sep 15 03:55:33 2020
	access_timestamp: Tue Sep 15 03:55:33 2020
	modify_timestamp: Tue Sep 15 03:55:33 2020
	parent: kubernetes/csi-vol-d9d011f9-f669-11ea-a3fa-ee21730897e6@33d02b70-bc82-4def-afd3-b7a40567a8db
	overlap: 1 GiB
```

To restore the snapshot, simply create a PVC based on it. The manifest is as follows:
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: csi-rbd-sc
  dataSource:
    name: rbd-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Create the PVC:
```shell
$ kubectl apply -f pvc-restore.yaml
```

Check the PVC and the successfully provisioned PV:
```shell
$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rbd-pvc           Bound    pvc-44b89f0e-4efd-4396-9316-10a04d289d7f   1Gi        RWO            csi-rbd-sc     22h
rbd-pvc-restore   Bound    pvc-e0ef4f6a-03dc-4c3b-a9c2-db03baf35ab0   1Gi        RWO            csi-rbd-sc     2m45s

$ kubectl get pv
pvc-44b89f0e-4efd-4396-9316-10a04d289d7f   1Gi   RWO   Delete   Bound   default/rbd-pvc           csi-rbd-sc   22h
pvc-e0ef4f6a-03dc-4c3b-a9c2-db03baf35ab0   1Gi   RWO   Delete   Bound   default/rbd-pvc-restore   csi-rbd-sc   2m14s
```

The PV was provisioned successfully, and correspondingly a new RBD image has appeared in Ceph:
```shell
$ rbd ls -p kubernetes
csi-snap-4da66c2e-f707-11ea-ba22-aaa4b0fc674d
csi-vol-d9d011f9-f669-11ea-a3fa-ee21730897e6
csi-vol-e32d46bd-f722-11ea-a3fa-ee21730897e6
```

Create a new Pod that uses this PV as persistent storage:
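The pod-restore.yaml manifest is not reproduced here; a minimal sketch of what it might contain is shown below. The Pod name, container image, and mount path are illustrative assumptions; only the claimName comes from the manifests above.

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-restore-demo-pod   # hypothetical name
spec:
  containers:
    - name: app
      image: busybox               # assumed image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data         # assumed mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rbd-pvc-restore # the PVC restored from the snapshot
```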
```shell
$ kubectl apply -f pod-restore.yaml
```

Once the Pod is running, go to the node where it is scheduled and use the rbd command to view the mapping:
```shell
$ rbd showmapped
id   pool         namespace   image                                          snap   device
0    kubernetes               csi-vol-d9d011f9-f669-11ea-a3fa-ee21730897e6   -      /dev/rbd0
1    kubernetes               csi-vol-e32d46bd-f722-11ea-a3fa-ee21730897e6   -      /dev/rbd1
```

5. Cleanup