[root@test01 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t GFSmail:maildata /dev/mapper/mailcluster-maildata
This will destroy any data on /dev/mapper/mailcluster-maildata.
It appears to contain: symbolic link to `../dm-3'
Are you sure you want to proceed? [y/n] y
Device: /dev/mapper/mailcluster-maildata
Blocksize: 4096
Device Size 4096.00 GB (1073740800 blocks)
Filesystem Size: 4096.00 GB (1073740798 blocks)
Journals: 2
Resource Groups: 8192
Locking Protocol: "lock_dlm"
Lock Table: "GFSmail:maildata"
UUID: 50e12acf-6fb0-6881-3064-856c383b51dd
[root@test01 ~]#
The options passed to the mkfs.gfs2 command above do the following:
-p: specifies the GFS2 locking protocol; lock_dlm is the normal choice for a cluster;
-j: specifies the number of journals (one is needed for each node that may mount the filesystem); it is best to create a few spares here, otherwise you will have to add journals later;
To list journals: # gfs2_tool journals /home/coremail/var
To add a journal: # gfs2_jadd -j 1 /home/coremail/var ## adds one journal
-t: the format is ClusterName:FS_Path_Name
ClusterName: must match the cluster name defined earlier in cluster.conf (here: GFSmail);
FS_Path_Name: a name that uniquely identifies this filesystem within the cluster (here: maildata);
The final argument is the full path of the logical volume to be formatted;
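The ClusterName half of the -t lock table is easy to get wrong, and a mismatch with cluster.conf will prevent the filesystem from mounting. A minimal sketch of the check, where the sample cluster.conf line is an assumption standing in for the real /etc/cluster/cluster.conf:

```shell
#!/bin/sh
# Sketch: verify that the lock-table prefix passed to mkfs.gfs2 -t
# matches the cluster name in cluster.conf. The line below is a
# stand-in for the real /etc/cluster/cluster.conf.
conf_line='<cluster name="GFSmail" config_version="1">'
lock_table="GFSmail:maildata"

# Cluster name as defined in cluster.conf.
cluster_name=$(printf '%s\n' "$conf_line" | sed -n 's/.*<cluster name="\([^"]*\)".*/\1/p')

# Prefix of the lock table (the part before the colon).
table_prefix=${lock_table%%:*}

if [ "$cluster_name" = "$table_prefix" ]; then
    echo "OK: lock table matches cluster name ($cluster_name)"
else
    echo "MISMATCH: cluster.conf says '$cluster_name', lock table says '$table_prefix'" >&2
    exit 1
fi
```

On a live node you would extract `conf_line` from /etc/cluster/cluster.conf itself instead of hard-coding it.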
2.8 Mounting the GFS2 filesystem. Create the mount point:
[root@test01 ~]# mkdir /home/coremail/var
Add the newly created logical volume to /etc/fstab so that it is mounted automatically at boot:
[root@test01 ~]# echo "/dev/mapper/mailcluster-maildata /home/coremail/var gfs2 defaults,noatime,nodiratime,noquota 0 0" >> /etc/fstab
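Before relying on that entry at boot time, it is worth a quick sanity check: an fstab line has six whitespace-separated fields, the type here must be gfs2, and the final fsck pass number should be 0 (a clustered GFS2 volume should not be checked by fsck at boot). A minimal sketch over the line itself:

```shell
#!/bin/sh
# Sketch: sanity-check the fstab entry appended above.
entry="/dev/mapper/mailcluster-maildata /home/coremail/var gfs2 defaults,noatime,nodiratime,noquota 0 0"

set -- $entry          # split into the fstab fields
nfields=$#
fstype=$3
fsck_pass=$6

[ "$nfields" -eq 6 ]   || { echo "expected 6 fields, got $nfields" >&2; exit 1; }
[ "$fstype" = "gfs2" ] || { echo "fs type is $fstype, not gfs2" >&2; exit 1; }
[ "$fsck_pass" = "0" ] || { echo "fsck pass should be 0 for gfs2" >&2; exit 1; }
echo "fstab entry looks sane"
```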
Start the gfs2 service:
[root@test01 ~]# /etc/init.d/gfs2 start
On node 2:
Before proceeding, run pvs and lvs to confirm that the physical and logical volumes created on node 1 are visible (if they are not, try lvscan first). If they still do not show up, the nodes are not actually using shared storage, or the configuration is faulty; troubleshoot and resolve that before running the commands below.
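The visibility check above can be scripted: poll for the shared LV's device node before attempting to mount. A minimal sketch, assuming the device path created on node 1:

```shell
#!/bin/sh
# Sketch: before mounting on node 2, wait briefly for the shared LV's
# device node to appear. On a live node you might also run lvscan
# between attempts, as described above.
dev=/dev/mapper/mailcluster-maildata

found=no
for attempt in 1 2 3; do
    if [ -e "$dev" ]; then
        found=yes
        break
    fi
    # lvscan   # uncomment on a live node to rescan logical volumes
    sleep 1
done
echo "device $dev present: $found"
```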
[root@test02 ~]# mkdir /home/coremail/var
[root@test02 ~]# echo "/dev/mapper/mailcluster-maildata /home/coremail/var gfs2 defaults,noatime,nodiratime,noquota 0 0" >> /etc/fstab
[root@test02 ~]# /etc/init.d/gfs2 start
Running # clustat shows the status of each member node.
[root@test02 ~]# clustat
Cluster Status for GFSmail @ Thu Nov 3 23:17:24 2016
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
test01 11 Online
test02 12 Online, Local
[root@test02 ~]#
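For monitoring scripts, the same clustat output can be parsed rather than read by eye. A minimal sketch against the captured output above (the hard-coded text stands in for a live clustat run):

```shell
#!/bin/sh
# Sketch: count online members and check quorum by parsing clustat
# output. The text below reuses the captured output; on a live node
# you would pipe `clustat` instead.
clustat_out='Cluster Status for GFSmail @ Thu Nov 3 23:17:24 2016
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
test01 11 Online
test02 12 Online, Local'

# Lines mentioning "Online" correspond to online members.
online=$(printf '%s\n' "$clustat_out" | grep -c 'Online')
quorate=$(printf '%s\n' "$clustat_out" | grep -c 'Member Status: Quorate')

echo "online members: $online"
if [ "$quorate" -eq 1 ]; then
    echo "cluster is quorate"
fi
```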