HA High-Availability Cluster Deployment (ricci+luci+fence) (2)

After the cluster is created, a cluster.conf file is generated under /etc/cluster/ on server 1 and server 2. View it as follows:
[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# ls
cluster.conf  cman-notify.d
[root@server1 cluster]# cat cluster.conf    # view the file contents
<?xml version="1.0"?>
<cluster config_version="1">        # cluster name
    <clusternodes>
      <clusternode name="server1.example.com" nodeid="1"/>    # node 1
      <clusternode name="server2.example.com" nodeid="2"/>    # node 2
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices/>
    <rm/>
</cluster>
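Before going further it is worth confirming that both nodes actually joined the cluster. This check is not part of the original steps; it is a minimal sketch, assuming cman is already running on both nodes:

[root@server1 cluster]# clustat            # member status; both nodes should show Online
[root@server1 cluster]# cman_tool nodes    # node IDs and join state, matching cluster.conf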
 
2. Install fence_virtd and create the fence device
1. Install and start fence_virtd (management host 2)
[root@foundation29 Desktop]# yum install fence-virtd* -y
[root@foundation29 Desktop]# fence_virtd -c    # interactive setup
Module search path [/usr/lib64/fence-virt]: 

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.2
    serial 0.4

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:        # multicast

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:            # multicast IP

Using ipv4 as family.

Multicast IP Port [1229]:      # multicast port

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [br0]: 

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:    # path of the key file

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:

Configuration complete.

=== Begin Configuration ===
backends {
    libvirt {
      uri = "qemu:///system";
    }

}
listeners {
    multicast {
      port = "1229";
      family = "ipv4";
      interface = "br0";
      address = "225.0.0.12";
      key_file = "/etc/cluster/fence_xvm.key";
    }

}
fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@foundation29 Desktop]# mkdir /etc/cluster    # create the cluster directory
[root@foundation29 Desktop]# cd /etc/cluster/
[root@foundation29 cluster]# dd if=/dev/urandom of=fence_xvm.key bs=128 count=1    # generate the key file
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000167107 s, 766 kB/s
[root@foundation29 cluster]# scp fence_xvm.key root@172.25.29.1:/etc/cluster/    # copy the key to server 1
root@172.25.29.1's password:
fence_xvm.key                                100%  128    0.1KB/s   00:00 
 
Test:
[root@server1 cluster]# ls    # verify
cluster.conf  cman-notify.d  fence_xvm.key
Send the key file to server 2 in the same way, as sketched below.
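Copying the key by hand to each node works, but a small loop pushes it to both servers in one go. A minimal sketch, assuming the two nodes are reachable at the addresses used above:

[root@foundation29 cluster]# for h in 172.25.29.1 172.25.29.2; do
>     scp /etc/cluster/fence_xvm.key root@$h:/etc/cluster/    # key must be identical on the host and every node
> done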
[root@foundation29 Desktop]# systemctl start fence_virtd.service    # start fence_virtd (management host 2 runs a 7.1 system, so the start command differs; on a 6.5 system, use /etc/init.d/fence_virtd start instead)
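With fence_virtd running and the key distributed, the multicast channel can be checked from either cluster node. This verification is not in the original output; it is a sketch assuming the fence-virt client tools are installed on the node:

[root@server1 cluster]# fence_xvm -o list    # should list the host's virtual machines with their UUIDs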
 
2. Create the fence device

Select Fence Devices.

(screenshot: the Fence Devices page in luci)

Select Add. As shown in the figure below, Name is the name of the fence device being added; once it is filled in, select Submit.

(screenshot: the Add Fence Device dialog)

Result:

(screenshot: the resulting fence device list)

Select server1.example.com.

(screenshot: the node page for server1.example.com)

Click Add Fence Method and add a Method Name.

(screenshot: the Add Fence Method dialog)

As shown in the figure, select Add Fence Instance.

(screenshot: the Add Fence Instance dialog)

Fill in the Domain and select Submit.

(screenshot: the fence instance Domain field)

Once this is done, configure server 2 in the same way as server 1.
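Before relying on the cluster, it is worth testing that fencing really works end to end. A minimal sketch (not part of the original steps): ask the cluster to fence one node from the other and watch the host power-cycle the target VM.

[root@server1 ~]# fence_node server2.example.com    # server2 should be rebooted; confirm with clustat once it returns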
