Heartbeat is a well-known HA project. After version 3.0 it was split into two independent projects, Heartbeat and Pacemaker. Pacemaker later adopted Corosync as its messaging layer and is tightly integrated with it, while still keeping Heartbeat as an optional messaging layer. Both heartbeat and corosync are the Cluster Messaging Layer of a high-availability cluster: they only carry cluster membership and heartbeat information and provide no resource management. Resource management is left to a CRM (Cluster Resource Manager) running on top of them, the best known of which is pacemaker. Today corosync + pacemaker has become the preferred combination for high-availability clusters.
This article starts with the heartbeat piece of the HA stack and uses it to make nginx highly available.
Environment:
vm3 172.16.1.203
vm4 172.16.1.204
VIP 172.16.1.200
heartbeat version: 3.0.4
nginx version: 1.6.3
All of the following steps must be performed on both HA nodes.
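heartbeat identifies cluster members by hostname, so the node names vm3 and vm4 used later in ha.cf must match the output of uname -n on each machine. It also helps if each node can resolve the other's name; a minimal sketch, assuming the addresses above and that /etc/hosts does not already contain these entries:
uname -n
cat >> /etc/hosts <<EOF
172.16.1.203 vm3
172.16.1.204 vm4
EOF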
1. Install heartbeat and nginx
yum install heartbeat
yum install nginx
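On CentOS, both heartbeat and nginx typically come from the EPEL repository rather than the base repositories; if yum cannot find the packages, enable EPEL first (assuming the epel-release package is available for your release):
yum install epel-release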
2. Configuration
1) /etc/ha.d/authkeys
auth 2
#1 crc
2 sha1 HI!
#3 md5 Hello!
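This file holds the shared secret that authenticates heartbeat traffic between the two nodes (auth 2 selects key 2, sha1 with the secret HI!). heartbeat refuses to start if the file is readable by anyone but root, so tighten its permissions on both nodes:
chmod 600 /etc/ha.d/authkeys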
2) /etc/ha.d/ha.cf
Note the ucast directive: it must point at the peer's IP address, i.e., in vm3's ha.cf it is set to vm4's address, and vice versa. The other directives, briefly: keepalive is the heartbeat interval in seconds; deadtime is how long to wait before declaring the peer dead; warntime is when a late-heartbeat warning is logged; initdead is an extended dead time applied right after startup; udpport is the UDP port used for heartbeat traffic; auto_failback off means resources stay where they are when a failed node comes back; node lists the cluster members (matching uname -n); ping defines a ping node monitored by the ipfail process, which respawn keeps running as the hacluster user. The ping node here is the VIP itself, which is why the log below first reports 172.16.1.200 as dead and only marks it up once the address has been brought online; a stable address such as the gateway is the more usual choice.
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 60
udpport 694
ucast eth0 172.16.1.204
auto_failback off
node vm3
node vm4
ping 172.16.1.200
respawn hacluster /usr/lib64/heartbeat/ipfail
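The ha.cf on vm4 is identical except for the ucast target, which points back at vm3:
ucast eth0 172.16.1.203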
3) /etc/ha.d/haresources
The first field is the hostname of the node that should own the resources by default; nginx refers to a script under /etc/ha.d/resource.d, created in the next step. This file must be identical on both nodes.
vm3 IPaddr::172.16.1.200/24/eth0:0 nginx
4) /etc/ha.d/resource.d/nginx
ln -s /etc/init.d/nginx /etc/ha.d/resource.d/nginx
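The symlink works because heartbeat simply invokes the scripts in /etc/ha.d/resource.d with start, stop and status arguments (the log below shows /etc/ha.d/resource.d/nginx start), and nginx's init script already follows that convention. For a service without a usable init script, a minimal resource script could be sketched as follows; the nginx binary path and commands are assumptions, adjust them for the actual service:
#!/bin/sh
# Minimal heartbeat resource script sketch, invoked as: <script> start|stop|status
case "$1" in
    start)
        /usr/sbin/nginx                    # assumed start command
        ;;
    stop)
        /usr/sbin/nginx -s stop            # assumed stop command
        ;;
    status)
        # LSB convention: exit 0 if running, 3 if stopped
        if pidof nginx >/dev/null 2>&1; then
            echo "running"; exit 0
        else
            echo "stopped"; exit 3
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|status}"; exit 1
        ;;
esac
Save such a script under /etc/ha.d/resource.d and make it executable with chmod +x.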
3. Startup
service heartbeat start
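If heartbeat should also come up after a reboot (a SysV-init system is assumed here, matching the service commands used throughout):
chkconfig heartbeat on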
heartbeat logs to /var/log/ha-log and /var/log/ha-debug; the same messages also appear in /var/log/messages.
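To follow the startup in real time, tail the log on the primary node:
tail -f /var/log/ha-log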
There can be a one- to two-minute delay between starting heartbeat and the cluster becoming active (related to the initdead setting in ha.cf, in seconds). Right after startup, the last few lines of /var/log/ha-log on vm3 look like this:
Aug 04 14:39:42 vm3 heartbeat: [113707]: info: Link vm4:eth0 up.
Aug 04 14:39:42 vm3 heartbeat: [113707]: info: Status update for node vm4: status up
harc(default)[113716]: 2015/08/04_14:39:42 info: Running /etc/ha.d//rc.d/status status
Once the cluster becomes active, the log continues as below, and from this point the floating IP 172.16.1.200 answers ping:
Aug 04 14:41:40 vm3 heartbeat: [113707]: WARN: node 172.16.1.200: is dead
harc(default)[113748]: 2015/08/04_14:41:40 info: Running /etc/ha.d//rc.d/status status
Aug 04 14:41:40 vm3 heartbeat: [113707]: info: Comm_now_up(): updating status to active
Aug 04 14:41:40 vm3 heartbeat: [113707]: info: Local status now set to: 'active'
Aug 04 14:41:40 vm3 heartbeat: [113707]: info: Starting child client "/usr/lib64/heartbeat/ipfail" (491,490)
Aug 04 14:41:40 vm3 heartbeat: [113774]: info: Starting "/usr/lib64/heartbeat/ipfail" as uid 491 gid 490 (pid 113774)
Aug 04 14:41:43 vm3 heartbeat: [113707]: info: Status update for node vm4: status active
harc(default)[113777]: 2015/08/04_14:41:43 info: Running /etc/ha.d//rc.d/status status
Aug 04 14:41:45 vm3 ipfail: [113774]: info: Status update: Node vm4 now has status active
Aug 04 14:41:47 vm3 ipfail: [113774]: info: Asking other side for ping node count.
Aug 04 14:41:50 vm3 ipfail: [113774]: info: No giveup timer to abort.
Aug 04 14:41:53 vm3 heartbeat: [113707]: info: remote resource transition completed.
Aug 04 14:41:53 vm3 heartbeat: [113707]: info: remote resource transition completed.
Aug 04 14:41:53 vm3 heartbeat: [113707]: info: Initial resource acquisition complete (T_RESOURCES(us))
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_172.16.1.200)[113830]: 2015/08/04_14:41:53 INFO: Resource is stopped
Aug 04 14:41:53 vm3 heartbeat: [113794]: info: Local Resource acquisition completed.
harc(default)[113913]: 2015/08/04_14:41:53 info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp
ip-request-resp(default)[113913]: 2015/08/04_14:41:53 received ip-request-resp IPaddr::172.16.1.200/24/eth0 OK yes
ResourceManager(default)[113936]: 2015/08/04_14:41:53 info: Acquiring resource group: vm3 IPaddr::172.16.1.200/24/eth0 nginx
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_172.16.1.200)[113964]: 2015/08/04_14:41:54 INFO: Resource is stopped
ResourceManager(default)[113936]: 2015/08/04_14:41:54 info: Running /etc/ha.d/resource.d/IPaddr 172.16.1.200/24/eth0 start
IPaddr(IPaddr_172.16.1.200)[114089]: 2015/08/04_14:41:54 INFO: Adding inet address 172.16.1.200/24 with broadcast address 172.16.1.255 to device eth0
IPaddr(IPaddr_172.16.1.200)[114089]: 2015/08/04_14:41:54 INFO: Bringing device eth0 up
IPaddr(IPaddr_172.16.1.200)[114089]: 2015/08/04_14:41:54 INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-172.16.1.200 eth0 172.16.1.200 auto not_used not_used
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_172.16.1.200)[114063]: 2015/08/04_14:41:54 INFO: Success
ResourceManager(default)[113936]: 2015/08/04_14:41:54 info: Running /etc/ha.d/resource.d/nginx start
Aug 04 14:41:54 vm3 heartbeat: [113707]: info: Link 172.16.1.200:172.16.1.200 up.
Aug 04 14:41:54 vm3 heartbeat: [113707]: WARN: Late heartbeat: Node 172.16.1.200: interval 134490 ms
Aug 04 14:41:54 vm3 ipfail: [113774]: info: Link Status update: Link 172.16.1.200/172.16.1.200 now has status up
Aug 04 14:41:54 vm3 heartbeat: [113707]: info: Status update for node 172.16.1.200: status ping
Aug 04 14:41:54 vm3 ipfail: [113774]: info: Status update: Node 172.16.1.200 now has status ping
Aug 04 14:41:54 vm3 ipfail: [113774]: info: A ping node just came up.
Aug 04 14:41:55 vm3 ipfail: [113774]: info: Asking other side for ping node count.
Aug 04 14:41:59 vm3 ipfail: [113774]: info: Ping node count is balanced.
Aug 04 14:41:59 vm3 ipfail: [113774]: info: No giveup timer to abort.
4. Verification
Access nginx through the floating IP with curl:
curl "http://172.16.1.200"
Watching nginx's access log /var/log/nginx/access.log shows that the request above was served by nginx on vm3. Now stop heartbeat on vm3:
service heartbeat stop
Access the floating IP again; the nginx access log now shows the request being served by vm4:
curl "http://172.16.1.200"
5. Summary