It took me a long time to get Ceph installed, and I went down far too many dead ends along the way, so it is worth writing the process down and summarizing it. I consulted quite a few other people's write-ups, but in the end it was the help of some kind people on the official Ceph mailing list that got a working prototype up and running.
A Ceph system has four roles in total: client, mon, mds and osd; in the material I read, mon and mds were always installed together. I did my test setup on VMware. I originally built three machines, but after running into some problems along the way I put mon, mds and osd all on a single machine, in order to rule out osd connectivity issues. This does not affect the initial installation exploration.
Prerequisites: the hosts must be able to reach one another by hostname and log in to each other without a password prompt. For the concrete setup you can refer to
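Roughly speaking that boils down to something like the following (a sketch only; the hostname cephosd and the IP are the examples used later in this post, substitute your own):
$ ssh-keygen -t rsa                                  # generate a key pair, accept the defaults
$ ssh-copy-id root@cephosd                           # push the public key to each host
$ echo "192.168.178.160 cephosd" >> /etc/hosts       # make the hostname resolvable on every machine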
The steps are as follows:
1, First the client: it needs support from the ceph.ko kernel module. On older kernels you may have to compile the module yourself and load it by hand. To check whether the running OS already ships it, cd into /lib/modules/***/kernel/fs/ and look at the file system types the current kernel supports; if there is a ceph entry, the module is already included and a simple modprobe ceph will load it. You can verify with $ modprobe -l | grep ceph:
kernel/fs/ceph/ceph.ko    // output like this means the module is present and the client side is OK.
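Putting the checks together, a quick way to verify the client prerequisites (standard commands, nothing ceph-specific beyond the module name):
$ ls /lib/modules/$(uname -r)/kernel/fs/ | grep ceph   # is the module shipped with this kernel?
$ modprobe ceph                                        # load it
$ lsmod | grep ceph                                    # confirm it is actually loaded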
2, If step 1 is already OK you can skip this step. Systems such as RedHat ship a fairly old kernel and need it upgraded to at least 2.6.34, since newer kernels support ceph directly. There are two methods, both copied straight from other people's write-ups.
First method:
$git clone git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
$cd ceph-client
$make menuconfig
//search for ceph; you will find two ceph-related options, just enable them. I will not repeat how to build a kernel here, just the commands:
$make && make modules && make modules_install && make install && reboot
//the kernel build commands above have only been tried on RedHat-family systems; other distributions may differ slightly, google it yourself.
Second method:
# download the source code
$ git clone git://ceph.newdream.net/git/ceph-client-standalone.git
$ git branch master-backport origin/master-backport
$ git checkout master-backport
# build
$ make    (or: make KERNELDIR=/usr/src/…)    # the former builds against the running kernel, the latter against a kernel tree at another path
# a successful build produces ceph.ko
$ make install
$ modprobe ceph    (or: insmod ceph.ko)
3, Install the OSDs. These steps have to be repeated on every OSD node.
3.1, Download the latest source code from the official site.
3.2, Unpack and build.
$ tar -xzvf ceph-0.24.tar.gz
$ ./autogen.sh
$ ./configure
$ make
configure will usually complain about some missing packages; just install them as they come up. On Fedora 14 almost all of them can be installed directly with yum.
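The exact packages depend on what configure complains about on your machine; the list below is only a typical guess for a Fedora-style system, not something verified against ceph 0.24:
$ yum install gcc-c++ make autoconf automake libtool boost-devel libedit-devel openssl-devel fuse-devel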
3.3, Prepare the OSD space: carve out a dedicated partition with fdisk or a similar tool.
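For example, via fdisk (a sketch; it assumes the free space is on /dev/sda and that the new partition ends up as /dev/sda3, matching the device used in the rest of this post):
$ fdisk -l          # list disks and existing partitions
$ fdisk /dev/sda    # interactively: n (new), p (primary), accept the defaults, then w (write)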
3.4, Format the new partition as btrfs.
$yum install btrfs-progs.i686
$mkfs.btrfs /dev/sda3
3.5, Mount the partition.
$ mkdir -p /mnt/btrfs/osd0
$ mount -t btrfs /dev/sda3 /mnt/btrfs/osd0/
$ df -h
/dev/sda1 9.7G 4.6G 5.0G 48% /
/dev/sda3 9.3G 4.4M 9.3G 1% /mnt/btrfs/osd0
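If you want the mount to survive a reboot, an /etc/fstab entry along these lines should work (adjust the device and mount point to your own setup):
/dev/sda3   /mnt/btrfs/osd0   btrfs   defaults   0 0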
4, Install mon and mds
4.1, Repeat steps 3.1 and 3.2.
4.2, Write the ceph.conf and fetch_config configuration files.
ceph.conf:
; global
[global]
; enable secure authentication
;auth supported = cephx
; monitors
; You need at least one. You need at least three if you want to
; tolerate any node failures. Always create an odd number.
[mon]
mon data = /data/mon$id
; logging, for debugging monitor crashes, in order of
; their likelihood of being helpful :)
;debug ms = 1
;debug mon = 20
;debug paxos = 20
;debug auth = 20
[mon0]
host = cephosd
mon addr = 192.168.178.160:6789
; mds
; You need at least one. Define two to get a standby.
[mds]
; where the mds keeps its secret encryption keys
;keyring = /data/keyring.$name
; mds logging to debug issues.
;debug ms = 1
;debug mds = 20
[mds.alpha]
host = cephosd
; osd
; You need at least one. Two if you want data to be replicated.
; Define as many as you like.
[osd]
sudo = true
; This is where the btrfs volume will be mounted.
osd data = /mnt/btrfs/osd$id
; Ideally, make this a separate disk or partition. A few
; hundred MB should be enough; more if you have fast or many
; disks. You can use a file under the osd data dir if need be
; (e.g. /data/osd$id/journal), but it will be slower than a
; separate disk or partition.
; This is an example of a file-based journal.
osd journal = /mnt/btrfs/osd$id/journal
osd journal size = 1000 ; journal size, in megabytes
; osd logging to debug osd issues, in order of likelihood of being
; helpful
;debug ms = 1
;debug osd = 20
;debug filestore = 20
;debug journal = 20
[osd0]
host = cephosd
; if 'btrfs devs' is not specified, you're responsible for
; setting up the 'osd data' dir. if it is not btrfs, things
; will behave up until you try to recover from a crash (which is
; usually fine for basic testing).
btrfs devs = /dev/sda3
osd data = /mnt/btrfs/osd0
; access control
[group everyone]
; you probably want to limit this to a small subnet or a list of
; hosts. clients are fully trusted.
addr = 0.0.0.0/0
[mount /]
allow = %everyone
fetch_config:
#!/bin/sh
conf="$1"
## fetch ceph.conf from some remote location and save it to $conf.
##
## make sure this script is executable (chmod +x fetch_config)
##
## examples:
##
## from a locally accessible file
# cp /path/to/ceph.conf $conf
## from a URL:
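## for example with wget (hypothetical URL; replace it with wherever you host ceph.conf)
# wget -q -O $conf http://example.com/path/to/ceph.conf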
## via scp
# scp -i /path/to/id_dsa user@host:/path/to/ceph.conf $conf