Ceph and OpenStack Integration Guide

1. Create the Pools

By default, Ceph block devices use the "rbd" pool, but it is recommended to create dedicated pools for Cinder and Glance.

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
ceph osd pool create vms 128
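An optional sanity check after creating the pools, a minimal sketch:

# List the pools and confirm the four new ones are present
ceph osd lspools

# Check the placement-group count of one of them
ceph osd pool get volumes pg_num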

 

2. Configure the OpenStack Ceph Clients

As preparation, passwordless SSH login from the Ceph admin node to each OpenStack service node must be set up in advance, and the login user needs sudo privileges.
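A minimal sketch of that preparation, using the same host placeholder style as the commands below:

# On the Ceph admin node: create a key pair and push it to each service node
ssh-keygen -t rsa
ssh-copy-id {your-openstack-server}

# Verify both the passwordless login and the sudo permission
ssh {your-openstack-server} sudo whoami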

 

Install the Ceph client packages:

On the glance-api node: sudo yum install python-rbd

On the nova-compute, cinder-backup, and cinder-volume nodes: sudo yum install ceph (this installs both the Python bindings and the client command-line tools)
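A quick verification sketch (the Python import assumes a node where the full ceph package, and hence the python-rados binding, is installed):

# Confirm the command-line tools are present
ceph --version

# Confirm the Python bindings are importable
python -c "import rados, rbd"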

 

The OpenStack hosts running the glance-api, cinder-volume, nova-compute, and cinder-backup services are all Ceph clients, and each of them needs the ceph.conf configuration file.

Copy ceph.conf to each Ceph client node with the following command:

ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
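With several client nodes the copy is easy to script; a sketch with hypothetical host names (substitute your own):

for host in glance-api cinder-volume cinder-backup nova-compute-01; do
    ssh $host sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
done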

 

3. Set Up Ceph Client Authentication

If cephx authentication is enabled, create new users for Nova/Cinder and Glance:

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
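The granted capabilities can be inspected afterwards:

# Show each user's key and capabilities
ceph auth get client.cinder
ceph auth get client.glance
ceph auth get client.cinder-backup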

 

Distribute the keyrings (client.cinder, client.glance, and client.cinder-backup) to the corresponding hosts and adjust their ownership:

ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
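To confirm that each keyring arrived with the right owner:

ssh {your-glance-api-server} ls -l /etc/ceph/ceph.client.glance.keyring
ssh {your-cinder-volume-server} ls -l /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-backup-server} ls -l /etc/ceph/ceph.client.cinder-backup.keyring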

 

Nodes running nova-compute also need the cinder keyring:

ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring

 

libvirt also needs the client.cinder key:

When attaching a block device from Cinder, the libvirt process uses this key to access the Ceph storage cluster.

First, create a temporary copy of the key on the nodes running nova-compute:

ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key

 

Then log in to the compute node, add the key to libvirt as a secret, and delete the temporary file created above:

$ uuidgen
22003ebb-0f32-400e-9584-fa90b6efd874

$ cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>22003ebb-0f32-400e-9584-fa90b6efd874</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

# virsh secret-define --file secret.xml
Secret 22003ebb-0f32-400e-9584-fa90b6efd874 created

# virsh secret-set-value --secret 22003ebb-0f32-400e-9584-fa90b6efd874 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Secret value set
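To verify that libvirt registered the secret and holds the key:

sudo virsh secret-list
sudo virsh secret-get-value 22003ebb-0f32-400e-9584-fa90b6efd874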

 

For ease of management, when there are multiple compute nodes it is recommended to use the same UUID in the steps above on all of them.
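A sketch of pushing the same secret to several compute nodes with one fixed UUID; the host names compute-01/compute-02 are hypothetical, root SSH access is assumed, and secret.xml (containing the fixed UUID, as above) is assumed to be present on the admin node:

SECRET_UUID=22003ebb-0f32-400e-9584-fa90b6efd874
for host in compute-01 compute-02; do
    # temporary key copy, as in the step above
    ceph auth get-key client.cinder | ssh $host tee client.cinder.key
    scp secret.xml $host:
    # define the secret with the shared UUID, set its value, then clean up
    ssh $host "virsh secret-define --file secret.xml && \
        virsh secret-set-value --secret $SECRET_UUID --base64 \$(cat client.cinder.key) && \
        rm client.cinder.key secret.xml"
done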

 

 

4. Configure Glance to Use Ceph

For Juno, edit /etc/glance/glance-api.conf:

[DEFAULT]
...
default_store = rbd
...
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
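After editing the file, restart the service and verify that a newly uploaded image lands in the images pool. A minimal sketch, assuming a CentOS-style service name and a hypothetical local image file cirros.img:

# Restart Glance so the new store takes effect
sudo service openstack-glance-api restart

# Upload a test image (raw format recommended) and list the pool as the glance user
glance image-create --name cirros-test --disk-format raw --container-format bare --file cirros.img
rbd ls images --id glance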

 

Enable copy-on-write cloning of images:

Add the following parameter to the [DEFAULT] section (note that copy-on-write cloning only works for images stored in raw format):

show_image_direct_url = True

 

To disable Glance's image cache, make sure the [paste_deploy] flavor is set to keystone only (i.e. without the +cachemanagement suffix):

[paste_deploy]
flavor = keystone

 

Other recommended Glance image properties (an example of applying them follows this list):

- hw_scsi_model=virtio-scsi: add the virtio-scsi controller for better performance and support for the discard operation

- hw_disk_bus=scsi: connect every Cinder block device to that controller

- hw_qemu_guest_agent=yes: enable the QEMU guest agent

- os_require_quiesce=yes: send fs-freeze/thaw calls through the QEMU guest agent
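A sketch of applying these properties with the Juno-era glance client; {image-id} is a placeholder for the target image's ID:

glance image-update {image-id} \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    --property hw_qemu_guest_agent=yes \
    --property os_require_quiesce=yes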

 

Further configuration options are documented in the configuration reference on the official OpenStack site.

 

5. Configure Cinder to Use Ceph
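A minimal sketch of the typical Juno-era RBD backend settings in /etc/cinder/cinder.conf, assuming the volumes pool, the cinder user, and the libvirt secret UUID created above:

[DEFAULT]
...
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 22003ebb-0f32-400e-9584-fa90b6efd874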
