A Quick Walkthrough of Deploying Rook-Ceph Distributed Storage on Kubernetes (Part 2)

The Rook Toolbox is a container that runs in the rook-ceph namespace and can be used to perform Ceph administration tasks. It is well worth installing. Create a yaml file:

# rook-toolbox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-tools
  namespace: rook-ceph
  labels:
    app: rook-ceph-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rook-ceph-tools
  template:
    metadata:
      labels:
        app: rook-ceph-tools
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: rook-ceph-tools
        image: rook/ceph:v1.5.3
        command: ["/tini"]
        args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
        imagePullPolicy: IfNotPresent
        env:
        - name: ROOK_CEPH_USERNAME
          valueFrom:
            secretKeyRef:
              name: rook-ceph-mon
              key: ceph-username
        - name: ROOK_CEPH_SECRET
          valueFrom:
            secretKeyRef:
              name: rook-ceph-mon
              key: ceph-secret
        volumeMounts:
        - mountPath: /etc/ceph
          name: ceph-config
        - name: mon-endpoint-volume
          mountPath: /etc/rook
      volumes:
      - name: mon-endpoint-volume
        configMap:
          name: rook-ceph-mon-endpoints
          items:
          - key: data
            path: mon-endpoints
      - name: ceph-config
        emptyDir: {}
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 5

Then:

kubectl create -f rook-toolbox.yaml
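
You can optionally confirm that the toolbox pod came up before using it (a small extra check, not part of the original steps; the label comes from the yaml above):

kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
kubectl -n rook-ceph get pod -l app=rook-ceph-tools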

You can then enter the Rook Toolbox container with:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
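
If you only need a single command rather than an interactive shell, you can also exec it directly, for example:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status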

Inside the container, use the ceph status command to check the cluster status. If everything is healthy, you should see output similar to the following:

$ ceph status
  cluster:
    id:     a0452c76-30d9-4c1a-a948-5d8405f19a7c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 3m)
    mgr: a(active, since 2m)
    osd: 3 osds: 3 up (since 1m), 3 in (since 1m)

Make sure health is HEALTH_OK; if it is not, track down and fix the cause before continuing. Troubleshooting guide: https://rook.io/docs/rook/v1.5/ceph-common-issues.html.
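
A few standard Ceph commands that are handy inside the toolbox when the cluster is not HEALTH_OK (generic Ceph CLI, not specific to this setup):

ceph health detail   # lists the individual health checks that are failing
ceph osd tree        # shows the host/OSD topology and which OSDs are down
ceph osd status      # per-OSD usage and state
ceph df              # raw and per-pool capacity usage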

Provisioning Block Storage

Use the following yaml:

# ceph-block-deploy.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Retain

Then:

kubectl create -f ceph-block-deploy.yaml
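
To confirm that the pool and the StorageClass were created (an optional check, not in the original steps):

kubectl -n rook-ceph get cephblockpool replicapool
kubectl get storageclass rook-ceph-block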

This yaml also defines a StorageClass named rook-ceph-block, which lets pods dynamically provision Ceph-backed block volumes at startup (backed by the pool selected via pool: replicapool). The StorageClass additionally carries the annotation storageclass.kubernetes.io/is-default-class: "true", so even when a PersistentVolumeClaim does not specify a storageClassName, Kubernetes will fall back to this Ceph block StorageClass to store the app's data.
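
For example, a minimal PersistentVolumeClaim sketch that relies on the default class (the claim name and size here are made up for illustration):

# example-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  # storageClassName is omitted on purpose: the default class rook-ceph-block is used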

For more detail on provisioning block storage, see: https://rook.io/docs/rook/v1.5/ceph-block.html.

Provisioning Object Storage

Use the following yaml:

# ceph-s3-deploy.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  preservePoolsOnDelete: true
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    # securePort: 443
    instances: 3
  healthCheck:
    bucket:
      disabled: false
      interval: 60s

Then:

kubectl create -f ceph-s3-deploy.yaml
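
You can watch the object store resource while Rook provisions it (an optional check):

kubectl -n rook-ceph get cephobjectstore my-store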

After waiting a few minutes, run the following command:

kubectl -n rook-ceph get pod -l app=rook-ceph-rgw

You should now see pods whose names contain rgw in the Running state in the pod list.
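
Rook also creates a Service in front of these gateways; its name follows Rook's rook-ceph-rgw-<store-name> convention, so for this store it should be rook-ceph-rgw-my-store (an assumption based on that naming convention):

kubectl -n rook-ceph get svc rook-ceph-rgw-my-store
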
The next step is to create a bucket in the object store. The official documentation describes a way to do this; here we use a different approach and create the bucket with the MinIO client (mc) instead. Use the following shell script:

# setup-s3-storage.sh
#! /bin/bash

echo "Creating Ceph User"
CREATE_USER_OUTPUT=`kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- radosgw-admin user create --uid=system-user --display-name=system-user --system`
ACCESS_KEY=$(echo $CREATE_USER_OUTPUT | jq -r ".keys[0].access_key")
SECRET_KEY=$(echo $CREATE_USER_OUTPUT | jq -r ".keys[0].secret_key")
echo "User was created successfully"
echo "S3 ACCESS KEY = $ACCESS_KEY"
echo "S3 SECRET KEY = $SECRET_KEY"

echo "Creating Ceph S3 Bucket"
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- curl https://dl.min.io/client/mc/release/linux-amd64/mc --output mc
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- chmod +x mc
# mc needs the RGW endpoint as well; rook-ceph-rgw-my-store is the Service Rook creates for the my-store object store (port 80, as configured above)
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ./mc config host add mys3 http://rook-ceph-rgw-my-store.rook-ceph.svc "$ACCESS_KEY" "$SECRET_KEY"
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ./mc mb mys3/data
echo "Ceph S3 Bucket created successfully"

echo "S3 ACCESS KEY = $ACCESS_KEY"
echo "S3 SECRET KEY = $SECRET_KEY"

Make sure jq is installed on the machine you are running this from, then execute:

chmod +x setup-s3-storage.sh
./setup-s3-storage.sh
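
After the script finishes, you can sanity-check the bucket from the toolbox pod with the mc client the script downloaded (an extra check, not part of the script):

kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ./mc ls mys3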
