Keepalived Basics: A Detailed and Complete Summary (5)

172.16.60.205 serves as the web-master (primary) node and 172.16.60.206 as the web-slave (backup) node; nginx is deployed on both nodes.
Keepalived is now deployed on both nodes to provide only VIP failover when a node fails; it does not do any load balancing.
VIP: 172.16.60.129
 
1) Deploy nginx on the master and slave nodes (installation and configuration omitted). The configuration is identical, so both nodes serve the same content.
nginx was installed via yum; start it with: /etc/init.d/nginx start
Both http://172.16.60.205 and http://172.16.60.206 can be accessed normally.
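To make it easy to tell which backend is answering later on, you can optionally give each node a distinguishable test page and then check both nodes over HTTP. A quick sketch (the docroot /usr/share/nginx/html is the default for the yum-packaged nginx and may differ on your setup):
# on 172.16.60.205
echo "web-master 172.16.60.205" > /usr/share/nginx/html/index.html
# on 172.16.60.206
echo "web-slave 172.16.60.206" > /usr/share/nginx/html/index.html
# from any machine in the subnet
curl http://172.16.60.205
curl http://172.16.60.206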
 
2) Install keepalived on the master and slave nodes (it must be installed on both)
[root@web-master ~]# yum install -y openssl-devel
[root@web-master ~]# cd /usr/local/src/
[root@web-master src]# wget https://www.keepalived.org/software/keepalived-1.3.5.tar.gz
[root@web-master src]# tar -zvxf keepalived-1.3.5.tar.gz
[root@web-master src]# cd keepalived-1.3.5
[root@web-master keepalived-1.3.5]# ./configure --prefix=/usr/local/keepalived
[root@web-master keepalived-1.3.5]# make && make install
         
[root@web-master keepalived-1.3.5]# cp /usr/local/src/keepalived-1.3.5/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@web-master keepalived-1.3.5]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@web-master keepalived-1.3.5]# mkdir /etc/keepalived/
[root@web-master keepalived-1.3.5]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@web-master keepalived-1.3.5]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@web-master keepalived-1.3.5]# echo "/etc/init.d/keepalived start" >> /etc/rc.local
   
[root@web-master keepalived-1.3.5]# chmod +x /etc/rc.d/init.d/keepalived     
[root@web-master keepalived-1.3.5]# chkconfig keepalived on                 
         
3) Keepalived configuration
========== Configuration on the web-master (primary) node ==========
[root@web-master ~]# cd /etc/keepalived/
[root@web-master keepalived]# cp keepalived.conf keepalived.conf.bak
[root@web-master keepalived]# >keepalived.conf
[root@web-master keepalived]# vim keepalived.conf
! Configuration File for keepalived
   
global_defs {
  router_id LVS_Master
}
   
vrrp_instance VI_1 {
    state MASTER              # initial state of this instance; the actual role is decided by priority. Differs on the backup node
    interface eth0            # network interface the virtual IP is bound to
    virtual_router_id 51      # VRID; nodes with the same VRID form one group, and it determines the multicast MAC address
    priority 100              # priority; set to 90 on the other node. Differs on the backup node
    advert_int 1              # advertisement (check) interval, in seconds
    authentication {
        auth_type PASS        # authentication type, either PASS or AH
        auth_pass 1111        # authentication password
    }
    virtual_ipaddress {
        172.16.60.129        # VIP address
    }
}
 
========== Configuration on the web-slave (backup) node ==========
[root@web-slave ~]# cd /etc/keepalived/
[root@web-slave keepalived]# cp keepalived.conf keepalived.conf.bak
[root@web-slave keepalived]# >keepalived.conf
[root@web-slave keepalived]# vim keepalived.conf
! Configuration File for keepalived
   
global_defs {
  router_id LVS_Backup
}
   
vrrp_instance VI_1 {
    state BACKUP         
    interface eth0       
    virtual_router_id 51 
    priority 90         
    advert_int 1         
    authentication {
        auth_type PASS   
        auth_pass 1111   
    }
    virtual_ipaddress {
        172.16.60.129 
    }
}
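For reference, the two configuration files are otherwise identical; only these three fields differ between the master and the slave:
router_id    LVS_Master  ->  LVS_Backup
state        MASTER      ->  BACKUP
priority     100         ->  90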
 
4) Start the keepalived service on the master and slave nodes
 
Start keepalived on the master node:
[root@web-master keepalived]# /etc/init.d/keepalived start
Starting keepalived:                                      [  OK  ]
[root@web-master keepalived]# ps -ef|grep keepalived
root    13529    1  0 16:36 ?        00:00:00 keepalived -D
root    13530 13529  0 16:36 ?        00:00:00 keepalived -D
root    13532 13529  0 16:36 ?        00:00:00 keepalived -D
root    13536  9799  0 16:36 pts/1    00:00:00 grep keepalived
 
Start keepalived on the slave node:
[root@web-slave keepalived]# /etc/init.d/keepalived start
Starting keepalived:                                      [  OK  ]
[root@web-slave keepalived]# ps -ef|grep keepalived
root      3120    1  0 16:37 ?        00:00:00 keepalived -D
root      3121  3120  0 16:37 ?        00:00:00 keepalived -D
root      3123  3120  0 16:37 ?        00:00:00 keepalived -D
root      3128 27457  0 16:37 pts/2    00:00:00 grep keepalived
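If you want to confirm that the two instances are actually exchanging VRRP advertisements, you can watch for them on either node. VRRP is IP protocol 112 and by default keepalived sends the advertisements to the multicast address 224.0.0.18; a possible check:
tcpdump -i eth0 -nn 'ip proto 112'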
 
Check where the VIP lives (by default the VIP sits on the master node):
[root@web-master keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:88:43:f8 brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.205/24 brd 172.16.60.255 scope global eth0
    inet 172.16.60.129/32 scope global eth0
    inet6 fe80::250:56ff:fe88:43f8/64 scope link
      valid_lft forever preferred_lft forever
 
The slave node holds no VIP:
[root@web-slave keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:ac:50:9b brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.206/24 brd 172.16.60.255 scope global eth0
    inet6 fe80::250:56ff:feac:509b/64 scope link
      valid_lft forever preferred_lft forever
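At this point the VIP should answer with the master's content. A simple check from any machine in the subnet (assuming the distinguishable test pages suggested in step 1):
curl http://172.16.60.129      # expected to return the web-master page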
 
5) Keepalived failover (moving the VIP)
If the master node goes down, or its keepalived service dies, the VIP automatically moves to the slave node.
[root@web-master keepalived]# /etc/init.d/keepalived stop
Stopping keepalived:                                      [  OK  ]
[root@web-master keepalived]# ps -ef|grep keepalived
root    13566  9799  0 16:40 pts/1    00:00:00 grep keepalived
[root@web-master keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:88:43:f8 brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.205/24 brd 172.16.60.255 scope global eth0
    inet6 fe80::250:56ff:fe88:43f8/64 scope link
      valid_lft forever preferred_lft forever
 
The slave node then takes over the VIP:
[root@web-slave keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:ac:50:9b brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.206/24 brd 172.16.60.255 scope global eth0
    inet 172.16.60.129/32 scope global eth0
    inet6 fe80::250:56ff:feac:509b/64 scope link
      valid_lft forever preferred_lft forever
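From a client in the same subnet you can also verify that the VIP now resolves to the slave's MAC address (keepalived sends gratuitous ARP when it takes over the address); for example, with -I naming the client's own outgoing interface, assumed here to be eth0:
arping -c 3 -I eth0 172.16.60.129    # replies should now come from 00:50:56:ac:50:9b (web-slave)
ip neigh show 172.16.60.129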
 
Next, restart keepalived on the master node. Once the master has recovered from the failure, it reclaims the VIP (this is decided by the priority values in the configuration; see the note after the output below on how to disable this preemption).
[root@web-master keepalived]# /etc/init.d/keepalived start
Starting keepalived:                                      [  OK  ]
[root@web-master keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:88:43:f8 brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.205/24 brd 172.16.60.255 scope global eth0
    inet 172.16.60.129/32 scope global eth0
    inet6 fe80::250:56ff:fe88:43f8/64 scope link
      valid_lft forever preferred_lft forever
 
At this point the VIP disappears from the slave node:
[root@web-slave keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:ac:50:9b brd ff:ff:ff:ff:ff:ff
    inet 172.16.60.206/24 brd 172.16.60.255 scope global eth0
    inet6 fe80::250:56ff:feac:509b/64 scope link
      valid_lft forever preferred_lft forever
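This fail-back happens because the master has the higher priority. If you would rather the recovered node did not snatch the VIP back (avoiding one more switchover), keepalived supports a non-preemptive mode: both nodes are configured with state BACKUP, and the higher-priority node additionally gets nopreempt. A minimal sketch of the master-side instance in that mode:
vrrp_instance VI_1 {
    state BACKUP              # nopreempt only works when both nodes start as BACKUP
    nopreempt                 # the recovered node will not take the VIP back
    interface eth0
    virtual_router_id 51
    priority 100              # still keep different priorities (100 here, 90 on the peer)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.60.129
    }
}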
 
With the steps above, keepalived only provides VIP failover between the two machines, i.e. an active/standby pair that avoids a single point of failure.
In other words, this configuration only moves the VIP when a node goes down (or its keepalived service dies); it does not move the VIP when the monitored application itself fails.
For example, for the nginx instances on these two machines you could monitor nginx, restart it automatically when it dies, and move the VIP to the other node if the restart fails (see the sketch after the reference below).
Such a configuration is covered in another post: https://www.linuxidc.com/Linux/2017-12/149670.htm
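A minimal sketch of that idea (not the exact configuration from the referenced post): a small check script, for example /etc/keepalived/check_nginx.sh (hypothetical path), tries to restart nginx once and stops keepalived if that fails, so the VIP moves to the peer; keepalived runs the script via vrrp_script/track_script.

#!/bin/bash
# /etc/keepalived/check_nginx.sh (hypothetical path, make it executable with chmod +x)
if ! pgrep -x nginx >/dev/null; then
    /etc/init.d/nginx start              # try to bring nginx back up
    sleep 2
    if ! pgrep -x nginx >/dev/null; then
        /etc/init.d/keepalived stop      # give up the VIP so the peer takes over
    fi
fi

And the corresponding additions to keepalived.conf on both nodes:

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2                           # run the check every 2 seconds
}
vrrp_instance VI_1 {
    ...                                  # existing settings unchanged
    track_script {
        chk_nginx
    }
}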

============ Below is a case used in practice in the past: three nodes configured with three VIPs, acting as mutual masters and backups for one another (a "two-master, two-backup" pattern) =============
