2. Fence device (isolation device)
The fence device here is built on KVM: the physical host that runs the virtual machines can forcibly power-cycle a virtual cluster node.
Lab environment: RHEL6, with SELinux and iptables disabled
Hosts: KVM virtual machines
hostname          ip              kvm domain name
node1             192.168.2.137   vm1
node2             192.168.2.138   vm2
(physical host)   192.168.2.60    -
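The cluster stack addresses the nodes by hostname, so node1 and node2 should resolve consistently on both virtual machines. A minimal sketch of the /etc/hosts entries, assuming the addresses in the table above:
#vi /etc/hosts
192.168.2.137   node1
192.168.2.138   node2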
#yum install fence-virt fence-virtd fence-virtd-libvirt fence-virtd-multicast -y
# fence_virtd -c    # interactively configure the fence daemon on the physical host
Module search path [/usr/lib64/fence-virt]:
Available backends:
libvirt 0.1
Available listeners:
multicast 1.0
Listener modules are responsible for accepting requests
from fencing clients.
Listener module [multicast]:
The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.
The multicast address is the address that a client will use to
send fencing requests to fence_virtd.
Multicast IP Address [225.0.0.12]:
Using ipv4 as family.
Multicast IP Port [1229]:
Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.
Interface [none]: br0
The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.
Key File [/etc/cluster/fence_xvm.key]:
Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.
Backend module [libvirt]:
The libvirt backend module is designed for single desktops or
servers. Do not use in environments where virtual machines
may be migrated between hosts.
Libvirt URI [qemu:///system]:
Configuration complete.
=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }
}

listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        interface = "br0";
        port = "1229";
        address = "225.0.0.12";
        family = "ipv4";
    }
}

fence_virtd {
    backend = "libvirt";
    listener = "multicast";
    module_path = "/usr/lib64/fence-virt";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
Note: except for the "Interface" prompt, where you enter the bridge the virtual machines use for communication, every option can be kept at its default by pressing Enter.
#dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
#scp /etc/cluster/fence_xvm.key 192.168.2.138:/etc/cluster    (this directory does not exist by default on the nodes; create it first with mkdir)
#scp /etc/cluster/fence_xvm.key 192.168.2.137:/etc/cluster
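As a quick sanity check that the key was copied intact, compare its checksum on the physical host and on both nodes; the three sums should be identical:
#md5sum /etc/cluster/fence_xvm.key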
#service fence_virtd start
#chkconfig fence_virtd on
# netstat -anulp |grep fence
udp 0 0 0.0.0.0:1229 0.0.0.0:* 7519/fence_virtd
#cd /etc/cluster
Check that the fence_xvm.key file is present.
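The nodes should now be able to reach fence_virtd over multicast. An easy way to verify this from either node (fence-virt must be installed there) is to list the domains the host reports; vm1 and vm2 should appear with their UUIDs:
#fence_xvm -o list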
Now add the fence device in the web management interface (luci).
In the Domain field for server37 (node1), enter that virtual machine's UUID.
In the Domain field for server38 (node2), enter that virtual machine's UUID.
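Once the fence device and the per-node fence instances are saved, /etc/cluster/cluster.conf should contain entries roughly like the sketch below. The names vmfence and fence-1 are only examples chosen in the web interface, the entry for node2/vm2 is analogous, and the domain attribute may hold either the KVM domain name or its UUID:
<clusternode name="node1" nodeid="1">
    <fence>
        <method name="fence-1">
            <device domain="vm1" name="vmfence"/>
        </method>
    </fence>
</clusternode>
...
<fencedevices>
    <fencedevice agent="fence_xvm" name="vmfence"/>
</fencedevices>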
#fence_xvm -H vm1    # test that the host can control your virtual machine; if it works, vm1 is rebooted
and the other node takes over the service.
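With the fence device defined in cluster.conf, fencing can also be driven through the cluster stack itself. For example, from the surviving node (assuming the node names above):
#fence_node node2
This asks cman to fence node2 using the fence_xvm agent configured above, which should reboot vm2.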
Set the following services to start at boot on both nodes:
#chkconfig cman on
#chkconfig rgmanager on
#chkconfig modclusterd on
#chkconfig clvmd on
Configure the failover domain (Failover Domains); the two nodes are given different priorities, as sketched below.
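In cluster.conf the failover domain ends up looking roughly like this sketch; the name webfail and the priority values are examples (the lower the number, the higher the priority), chosen so that node1 is preferred:
<failoverdomains>
    <failoverdomain name="webfail" ordered="1" restricted="1" nofailback="0">
        <failoverdomainnode name="node1" priority="1"/>
        <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
</failoverdomains>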
Add resources (Resources).
(The IP address resource must not be an address used by any other host.)
After the resources have been added successfully,
add a service group (Service Groups); the resulting cluster.conf entries are sketched below.
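The resource and service definitions written to cluster.conf look roughly like this sketch. The service name www matches the clusvcadm commands that follow, the floating IP is the one used later in the browser test, and the httpd script resource and recovery policy are assumptions about how Apache was added:
<rm>
    <resources>
        <ip address="192.168.2.111" monitor_link="1"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
    </resources>
    <service domain="webfail" name="www" recovery="relocate">
        <ip ref="192.168.2.111"/>
        <script ref="httpd"/>
    </service>
</rm>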
Commands run on a cluster node:
#clusvcadm -d www    # disable (stop) the www service group
#clustat             # show the status of nodes and services
#clusvcadm -e www    # enable (start) the www service group
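For testing, the service can also be moved to a specific member by hand (assuming the node names above):
#clusvcadm -r www -m node2    # relocate the www service group to node2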
We just added an Apache service to the service group, so install Apache on every node:
#yum install -y httpd
To tell the two nodes' content apart, edit the default home page on each node:
#cd /var/www/html/
#echo `hostname` > index.html
Now the site can be opened in a browser, and the page shows the hostname of the node serving it.
The IP resource added above is a floating (virtual) IP.
Browse to 192.168.2.111 and you will see the page from the higher-priority node, which currently holds the service.
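A quick check from any machine on the network (the address is the floating IP configured above):
#curl http://192.168.2.111
The output should be the hostname of whichever node currently runs the service.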
If you stop httpd on that node, the lower-priority standby takes over the service immediately.
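To watch this failover happen, stop Apache on the active node and keep an eye on the cluster status:
#service httpd stop
#clustat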
If you instead take the node's network down with ifconfig eth0 down, the cluster first tries to recover it; if recovery fails, the node is fenced (forced off) and rebooted, after which it rejoins as the standby. If it has the higher priority, it takes the service back as soon as it is up. Next, a storage service is added to the cluster.