Red Hat Cluster Suite (RHCS): Implementing an iSCSI Cluster with iSCSI and GFS (3)

[Implementing Network Storage with iSCSI and GFS]

1. Check the iSCSI status:
[root@desktop54 node1]# /etc/init.d/iscsi status    (status on node1)
iSCSI Transport Class version 2.0-870
version 2.0-872
Target: iqn.2012-03.com.example:kevin
    Current Portal: 192.168.0.24:3260,1
    Persistent Portal: 192.168.0.24:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1994-05.com.redhat:86d532367ca0
        Iface IPaddress: 192.168.0.54
        Iface HWaddress: <empty>
        Iface Netdev: <empty>
        SID: 2
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 8192
        FirstBurstLength: 65536
        MaxBurstLength: 262144
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 3    State: running
        scsi3 Channel 00 Id 0 Lun: 0
        scsi3 Channel 00 Id 0 Lun: 1
            Attached scsi disk sdb    (discovered here as sdb)    State: running

[root@desktop86 node2]# service iscsi status     (status on node2)
iSCSI Transport Class version 2.0-870
version 2.0-872
Target: iqn.2012-03.com.example:kevin
    Current Portal: 192.168.0.24:3260,1
    Persistent Portal: 192.168.0.24:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1994-05.com.redhat:12546582ea96
        Iface IPaddress: 192.168.0.85
        Iface HWaddress: <empty>
        Iface Netdev: <empty>
        SID: 2
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 8192
        FirstBurstLength: 65536
        MaxBurstLength: 262144
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 3    State: running
        scsi3 Channel 00 Id 0 Lun: 0
        scsi3 Channel 00 Id 0 Lun: 1
            Attached scsi disk sda    (discovered as sda; this node's local disk is vda)    State: running
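
The transcripts above assume the target was already discovered and logged in from both nodes (done in the earlier parts of this series). For reference, a minimal sketch of that step, reusing the portal IP and target IQN visible in the output above, would look roughly like this on each node:

# Discover the targets exported by the storage server
iscsiadm -m discovery -t sendtargets -p 192.168.0.24
# Log in to the discovered target
iscsiadm -m node -T iqn.2012-03.com.example:kevin -p 192.168.0.24 -l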

2. Configure on both node1 and node2
[root@desktop54 node1]# lvmconf --enable-cluster    (enable CLVM's integrated cluster locking)
[root@desktop54 node1]# chkconfig clvmd on
[root@desktop54 node1]# service clvmd start    (clvmd makes LVM cluster-aware)
Activating VG(s):   No volume groups found
                                                           [  OK  ]
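
To verify that lvmconf --enable-cluster took effect (the same three commands should also be run on node2), you can check the locking type it writes into /etc/lvm/lvm.conf:

grep locking_type /etc/lvm/lvm.conf
# locking_type = 3 means cluster-wide locking through clvmd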

3. Now, from either client node, partition the discovered disk to create sdb1, then format it with the GFS2 cluster file system.
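
The partitioning itself is not shown in the transcript; a minimal sketch with fdisk might look like the following (on node1 the shared iSCSI disk is /dev/sdb, while on node2 it shows up as /dev/sda, so node2 only needs to re-read the partition table):

# On node1: create a single partition spanning the shared disk
fdisk /dev/sdb        # interactive: n, p, 1, accept defaults, then w
partprobe /dev/sdb    # re-read the new partition table on node1
# On node2 the same disk is /dev/sda, so refresh it there as well
partprobe /dev/sda
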
[root@desktop54 node1]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
[root@desktop54 node1]# vgcreate vg1 /dev/sdb1
  Clustered volume group "vg1" successfully created
[root@desktop54 node1]# lvcreate -L 1G -n lv1 vg1
  Error locking on node desktop85.example.com: Volume group for uuid not found: e1CQKruwtLzT6dRc9wysYIDq1Df78V0hZDs9a1sf3duPexOyv115ETnOiM9C4P36
  Aborting. Failed to activate new LV to wipe the start of it.

(A problem appears: the LV cannot be created. Let's go to node2 and sync the VG metadata.)
[root@desktop86 node2]# pvcreate /dev/sda1
  Can't initialize physical volume "/dev/sda1" of volume group "vg1" without -ff    (This can be ignored; now go back to node1 and try again.)
[root@desktop54 node1]# lvcreate -L 1G -n lv1 vg1
  Logical volume "lv1" created    (能够创建lv了。)
[root@desktop54 node1]# /etc/init.d/clvmd start
Activating VG(s):   1 logical volume(s) in volume group "vg1" now active
                                                           [  OK  ]
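
With clvmd active on both nodes, vg1/lv1 should now be visible from either node, and it can be formatted with GFS2 as announced at the start of step 3. A minimal sketch, assuming the cluster name in /etc/cluster/cluster.conf is kevin_cluster and the file system label is gfs2data (both placeholder names), with one journal per node:

# Confirm the clustered VG and LV from either node
vgs vg1
lvs vg1
# Create the GFS2 file system: DLM locking, 2 journals for the 2 nodes;
# replace kevin_cluster with the real cluster name from cluster.conf
mkfs.gfs2 -p lock_dlm -t kevin_cluster:gfs2data -j 2 /dev/vg1/lv1
# Mount it on each node
mount -t gfs2 /dev/vg1/lv1 /mnt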
