IV. Installing and Configuring cLVM
1. Install cLVM
[root@target ~]# ha ssh node$I 'yum install -y lvm2-cluster'; done
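Note: the `ha ssh node$I '...'; done` lines here and below rely on a shell alias presumably defined earlier in this series, something like `alias ha='for I in {1..3}; do'`; together with the trailing `; done`, each line expands into a loop over all three nodes. If you do not have that alias, the equivalent explicit loop is:
[root@target ~]# for I in {1..3}; do ssh node$I 'yum install -y lvm2-cluster'; done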
2. Enable clustered LVM
[root@target ~]# ha ssh node$I 'lvmconf --enable-cluster'; done
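All lvmconf --enable-cluster does is rewrite /etc/lvm/lvm.conf, switching the locking type from the single-host default to the clustered, DLM-backed type (locking_type = 3), which the next step verifies. If you prefer to make the change by hand, a minimal sketch, assuming the stock value is locking_type = 1:
[root@target ~]# ha ssh node$I "sed -i 's/locking_type = 1/locking_type = 3/' /etc/lvm/lvm.conf"; done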
3. Verify that clustered locking is enabled
[root@target ~]# ha ssh node$I 'grep "locking_type = 3" /etc/lvm/lvm.conf'; done
locking_type = 3
locking_type = 3
locking_type = 3
Note: clustered locking is now enabled on all nodes.
4. Start the cLVM service
[root@target ~]# ha ssh node$I 'service clvmd start'; done
Starting clvmd:
Activating VG(s): No volume groups found
[  OK  ]
Starting clvmd:
Activating VG(s): No volume groups found
[  OK  ]
Starting clvmd:
Activating VG(s): No volume groups found
[  OK  ]
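Before moving on, it is worth confirming the daemon really is running on every node; a quick check with the standard init script (output omitted):
[root@target ~]# ha ssh node$I 'service clvmd status'; done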
5. Configure cman, rgmanager, and clvmd to start at boot on all nodes
[root@target ~]# ha ssh node$I 'chkconfig clvmd on'; done
[root@target ~]# ha ssh node$I 'chkconfig cman on'; done
[root@target ~]# ha ssh node$I 'chkconfig rgmanager on'; done
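To verify the boot-time settings took effect, chkconfig --list should show each service on for runlevels 2 through 5, e.g.:
[root@target ~]# ha ssh node$I 'chkconfig --list clvmd'; done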
6. Create an LVM volume on a cluster node
node1:
(1) Check the shared storage
[root@node1 ~]# fdisk -l # check the shared storage
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000dfceb
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26        1301    10240000   83  Linux
/dev/sda3            1301        1938     5120000   83  Linux
/dev/sda4            1938        2611     5405696    5  Extended
/dev/sda5            1939        2066     1024000   82  Linux swap / Solaris
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x5f3b697c
   Device Boot      Start         End      Blocks   Id  System
Disk /dev/sdd: 21.5 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0c68b5e3
   Device Boot      Start         End      Blocks   Id  System
(2).创建集群逻辑卷
[root@node1 ~]# pvcreate /dev/sdd # create the physical volume
Physical volume "/dev/sdd" successfully created
[root@node1 ~]# pvs
  PV         VG        Fmt  Attr PSize  PFree
  /dev/sdd             lvm2 a--  20.00g 20.00g
[root@node1 ~]# vgcreate clustervg /dev/sdd # create the volume group
Clustered volume group "clustervg" successfully created
[root@node1 ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  clustervg   1   0   0 wz--nc 20.00g 20.00g
[root@node1 ~]# lvcreate -L 10G -n clusterlv clustervg # create the logical volume
Logical volume "clusterlv" created
[root@node1 ~]# lvs
  LV        VG        Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  clusterlv clustervg -wi-a---- 10.00g
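Note the trailing c in the vgs Attr field (wz--nc): it marks clustervg as a clustered volume group, which vgcreate sets automatically when clvmd is running (it can also be forced with vgcreate -cy). A quick way to double-check, assuming vgdisplay's usual output format:
[root@node1 ~]# vgdisplay clustervg | grep -i clustered
  Clustered             yes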
7. Check the new logical volume on node2 and node3
node2:
[root@node2 ~]# lvs
  LV        VG        Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  clusterlv clustervg -wi-a---- 10.00g
node3:
[root@node3 ~]# lvs
  LV        VG        Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  clusterlv clustervg -wi-a---- 10.00g
That completes the cLVM configuration. Next we will format the new logical volume with the cluster file system (GFS2).
V. Installing and Configuring GFS2
1. Install gfs2
[root@target ~]# ha ssh node$I 'yum install -y gfs2-utils'; done
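A quick sanity check that the package landed on all three nodes (output omitted):
[root@target ~]# ha ssh node$I 'rpm -q gfs2-utils'; done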
2. Look at the mkfs.gfs2 help output
[root@node1 ~]# mkfs.gfs2 -h
Usage:
mkfs.gfs2 [options] <device> [ block-count ]
Options:
  -b <bytes>       Filesystem block size
  -c <MB>          Size of quota change file
  -D               Enable debugging code
  -h               Print this help, then exit
  -J <MB>          Size of journals
  -j <num>         Number of journals
  -K               Don't try to discard unused blocks
  -O               Don't ask for confirmation
  -p <name>        Name of the locking protocol
  -q               Don't print anything
  -r <MB>          Resource Group Size
  -t <name>        Name of the lock table
  -u <MB>          Size of unlinked file
  -V               Print program version information, then exit
Note: the options we use here deserve a word of explanation:
-j <num>: number of journals to create; the file system can be mounted by at most that many nodes at once
-J <MB>: size of each journal; the default is 128MB
-p {lock_dlm|lock_nolock}: locking protocol; lock_dlm gives cluster-wide locking, lock_nolock is for single-node use
-t <name>: name of the lock table, in the format clustername:locktablename
3. Format the volume as a cluster file system
[root@node1 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t testcluster:sharedstorage /dev/clustervg/clusterlv
This will destroy any data on /dev/clustervg/clusterlv.
It appears to contain: symbolic link to `../dm-0'
Are you sure you want to proceed? [y/n] y
Device:                    /dev/clustervg/clusterlv
Blocksize:                 4096
Device Size                10.00 GB (2621440 blocks)
Filesystem Size:           10.00 GB (2621438 blocks)
Journals:                  2
Resource Groups:           40
Locking Protocol:          "lock_dlm"
Lock Table:                "testcluster:sharedstorage"
UUID:                      60825032-b995-1970-2547-e95420bd1c7c
Note: testcluster is the cluster name and sharedstorage is the lock table name.
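The cluster name passed to -t must match the name configured in cman; if in doubt, cman_tool can confirm it (the grep pattern here assumes its usual status output):
[root@node1 ~]# cman_tool status | grep -i 'cluster name'
Cluster Name: testcluster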
4. Create a mount point and mount the file system
[root@node1 ~]# mkdir /mydata
[root@node1 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
[root@node1 ~]# cd /mydata/
[root@node1 mydata]# ll
total 0
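To make the mount persist across reboots, a minimal sketch using the gfs2 init script shipped with RHEL 6, which mounts gfs2 entries found in /etc/fstab at boot:
[root@node1 ~]# echo '/dev/clustervg/clusterlv /mydata gfs2 defaults 0 0' >> /etc/fstab
[root@node1 ~]# chkconfig gfs2 on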
5. Mount the file system on node2 and node3
node2:
[root@node2 ~]# mkdir /mydata
[root@node2 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
[root@node2 ~]# cd /mydata/
[root@node2 mydata]# ll
total 0
node3:
[root@node3 ~]# mkdir /mydata
[root@node3 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
Too many nodes mounting filesystem, no free journals
Note: as you can see, node2 mounted successfully but node3 did not: "Too many nodes mounting filesystem, no free journals" means there is no spare journal left for it. Because we created only 2 journals when formatting, node1 and node2 can mount the file system but node3 cannot; we will explain how to fix this below. For now, let's test the cluster file system.
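As a preview of the fix covered below: GFS2 journals can be added online with gfs2_jadd, run from any node that already has the file system mounted; a minimal sketch:
[root@node1 ~]# gfs2_jadd -j 1 /mydata # add one more journal so a third node can mount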