# Persistent storage for Kubernetes with Ceph RBD

This guide deploys a Ceph (luminous) cluster with ceph-deploy, then wires Ceph's RBD into Kubernetes as dynamically provisioned persistent storage.

## Environment

| Hostname | IP           | Role     | OS         |
|----------|--------------|----------|------------|
| ceph-01  | 172.16.31.11 | mon, osd | CentOS 7.8 |
| ceph-02  | 172.16.31.12 | osd      | CentOS 7.8 |
| ceph-03  | 172.16.31.13 | osd      | CentOS 7.8 |

*Architecture: see the diagram in the official Ceph documentation.*

## Install Ceph

### Set hostnames

```bash
# ceph-01
hostnamectl set-hostname ceph-01
# ceph-02
hostnamectl set-hostname ceph-02
# ceph-03
hostnamectl set-hostname ceph-03
```

### Add host mappings

```bash
cat << EOF >> /etc/hosts
172.16.31.11 ceph-01
172.16.31.12 ceph-02
172.16.31.13 ceph-03
EOF
```

### Disable the firewall and SELinux

```bash
systemctl stop firewalld && systemctl disable firewalld
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
iptables -F && iptables -X && iptables -Z
```

### Synchronize time

```bash
yum install -y ntpdate
ntpdate pool.ntp.org   # any reachable NTP server works here
```

### Set up passwordless SSH

```bash
# Run on ceph-01
ssh-keygen
ssh-copy-id ceph-01
ssh-copy-id ceph-02
ssh-copy-id ceph-03
```

### Prepare the repo

```bash
yum install epel-release -y
cat << EOF > /etc/yum.repos.d/ceph-deploy.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF
```
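The hostname, hosts-file, firewall, and time steps all have to be repeated on every node. A convenience sketch (a hypothetical loop, not part of the original guide) that reuses the passwordless SSH above to fan the prep out from ceph-01:

```bash
# Push the shared hosts file and repeat the firewall/SELinux prep
# on the other nodes, using the passwordless SSH configured above
for node in ceph-02 ceph-03; do
  scp /etc/hosts ${node}:/etc/hosts
  ssh ${node} "setenforce 0 && systemctl stop firewalld && systemctl disable firewalld"
done
```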

Users in mainland China can use the Alibaba Cloud mirrors instead:

```bash
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
cat << EOF > /etc/yum.repos.d/ceph-deploy.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
enabled=1
gpgcheck=0
EOF
```

### Install the ceph-deploy package

On the ceph-01 node:

```bash
yum install ceph-deploy yum-plugin-priorities python2-pip bash-completion -y
```
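A quick sanity check that the tool installed and runs (the version shown is an assumption; any ceph-deploy 2.x release fits this luminous-era guide):

```bash
ceph-deploy --version   # e.g. 2.0.1
```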

On the other nodes:

```bash
yum install yum-plugin-priorities python2-pip bash-completion -y
```

### Create a working directory

On the ceph-01 node:

```bash
mkdir ceph-cluster
cd ceph-cluster
```

### Initialize the cluster

```bash
ceph-deploy new ceph-01
```

### (Optional) Separate the public and cluster networks

If each node has two NICs, management (public) and storage (cluster) traffic can be split with these ceph.conf settings:

```
public_network = 172.16.0.0/16
cluster_network = 192.168.31.0/24
```
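These two settings go in the `[global]` section of the ceph.conf that `ceph-deploy new` generated. A minimal sketch using this guide's example subnets:

```bash
# Run in the ceph-cluster working directory on ceph-01
cat << EOF >> ceph.conf
public_network = 172.16.0.0/16
cluster_network = 192.168.31.0/24
EOF
```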

### Install the Ceph packages

```bash
ceph-deploy install ceph-01 ceph-02 ceph-03
```

For faster downloads in mainland China you can point the install at the Aliyun mirror. First add this repo on all nodes:

```bash
cat << EOF > /etc/yum.repos.d/ceph-luminous.repo
[ceph]
name=Ceph packages for x86_64
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64
enabled=1
gpgcheck=0
EOF
```

Then run:

```bash
ceph-deploy install ceph-01 ceph-02 ceph-03 --no-adjust-repos
```

### Create the monitor

```bash
ceph-deploy mon create-initial
```

When this finishes, the `*.keyring` key rings are written to the working directory.
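For reference, the working directory should now contain roughly the following (file set per the ceph-deploy documentation; the exact list can vary by version):

```bash
# In the ceph-cluster working directory
ls -1
# ceph.bootstrap-mds.keyring
# ceph.bootstrap-mgr.keyring
# ceph.bootstrap-osd.keyring
# ceph.bootstrap-rgw.keyring
# ceph.client.admin.keyring
# ceph.conf
# ceph.mon.keyring
```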

### Push the config and admin key to each node

```bash
ceph-deploy admin ceph-01 ceph-02 ceph-03
```

### Deploy the mgr

```bash
ceph-deploy mgr create ceph-01
```

ceph-mgr is a new daemon introduced in Ceph 12.x (luminous).
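As an optional check that the mgr came up, you can enable the luminous dashboard module (a sketch; 7000 is the luminous default dashboard port):

```bash
ceph mgr module enable dashboard
ceph mgr services   # should list something like {"dashboard": "http://ceph-01:7000/"}
```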

### Deploy the OSDs

```bash
ceph-deploy osd create --data /dev/sdb ceph-01
ceph-deploy osd create --data /dev/sdb ceph-02
ceph-deploy osd create --data /dev/sdb ceph-03
```

### Check the cluster status

```bash
ceph health
ceph -s
```

## Test

### Create a pool

```bash
ceph osd pool create test 8 8
```

### Upload an object to the pool

```bash
echo `date` > date.txt
rados put test-object-1 date.txt --pool=test
```

### Inspect the pool and the object mapping

```bash
rados -p test ls
ceph osd map test test-object-1
```

### Clean up

```bash
rados rm test-object-1 --pool=test
ceph osd pool rm test test --yes-i-really-really-mean-it
```
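A quick sketch to confirm the cleanup took effect (standard ceph CLI commands):

```bash
ceph osd pool ls   # 'test' should no longer appear
ceph -s            # cluster should return to HEALTH_OK
```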

If the pool delete fails, add this setting to the ceph.conf in the working directory:

```
mon_allow_pool_delete = true
```

Then push the config and restart the mon:

```bash
ceph-deploy --overwrite-conf admin ceph-01 ceph-02 ceph-03
systemctl restart ceph-mon@ceph-01.service
# Retry the delete
ceph osd pool rm test test --yes-i-really-really-mean-it
```
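Alternatively, the flag can be injected at runtime so no mon restart is needed (a sketch using the standard `ceph tell` mechanism; the injected value does not persist across restarts):

```bash
ceph tell 'mon.*' injectargs '--mon-allow-pool-delete=true'
```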

## Ceph RBD with Kubernetes

### Create a pool

```bash
ceph osd pool create kube-pool 64 64
```

### Import the admin keyring

Get the admin key:

```bash
ceph auth get-key client.admin
```

Create the secret, replacing the key with the output of the previous step:

```bash
kubectl create secret generic ceph-secret -n kube-system \
  --type="kubernetes.io/rbd" \
  --from-literal=key='AQDYuPZfdjykCxAAXApI8weHFiZdEPcoc8EaRA=='
```

### Create the user secret

```bash
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube-pool'
ceph auth get-key client.kube
kubectl create secret generic ceph-secret-user -n kube-system \
  --from-literal=key='AQAH2vZfe8wWIhAA0w81hjSAoqmjayS5SmWuVQ==' \
  --type=kubernetes.io/rbd
```

### Create the StorageClass

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.16.31.11:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube-pool
  userId: kube
  userSecretName: ceph-secret-user
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
```

### Install ceph-common on the worker nodes

```bash
cat << EOF > /etc/yum.repos.d/ceph-luminous.repo
[ceph]
name=Ceph packages for x86_64
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64
enabled=1
gpgcheck=0
EOF
yum install -y ceph-common
```

### Create a PVC

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-1
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
```

### Create a Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-rbd
  name: test-rbd
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-rbd
  template:
    metadata:
      labels:
        app: test-rbd
    spec:
      containers:
      - image: zerchin/network
        imagePullPolicy: IfNotPresent
        name: test-rbd
        volumeMounts:
        - mountPath: /data
          name: rbd
      volumes:
      - name: rbd
        persistentVolumeClaim:
          claimName: rbd-1
```
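Once the Deployment is applied, a sketch to verify the provisioning chain end to end (`kubectl exec deploy/...` needs a reasonably recent kubectl; with older versions, use the pod name instead):

```bash
kubectl get pvc rbd-1                        # STATUS should be Bound
kubectl get pod -l app=test-rbd              # pod should be Running
rbd ls kube-pool                             # shows the dynamically created image
kubectl exec deploy/test-rbd -- df -h /data  # /data backed by a /dev/rbd* device
```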

## Common problems

### Problem 1: rbd kernel module not loaded

```
MountVolume.WaitForAttach failed for volume "pvc-8d8a8ed9-bcdb-4de8-a725-9121fcb89c84" : rbd: map failed exit status 2, rbd output: libkmod: ERROR ../libkmod/libkmod.c:586 kmod_search_moddep: could not open moddep file '/lib/modules/4.4.247-1.el7.elrepo.x86_64/modules.dep.bin'
modinfo: ERROR: Module alias rbd not found.
modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.247-1.el7.elrepo.x86_64/modules.dep.bin'
modprobe: FATAL: Module rbd not found in directory /lib/modules/4.4.247-1.el7.elrepo.x86_64
rbd: failed to load rbd kernel module (1)
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (2) No such file or directory
```

**Cause**

The rbd kernel module is not loaded. It must be loaded on every worker node.

**Fix**

```bash
modprobe rbd
```
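`modprobe` only loads the module for the current boot; to make it persist across reboots, the standard systemd mechanism is a modules-load.d entry (a sketch):

```bash
# Load the rbd module automatically on every boot
echo rbd > /etc/modules-load.d/rbd.conf
```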
