MySQL High Availability with MMM (Part 2)

4. MySQL-MMM configuration:

Create the accounts on all four MySQL nodes.

Create the agent account:

mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.31.%' identified by '123456';

Create the monitor account:

mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.31.%' identified by '123456';

Note 1: Since master-master and master-slave replication are already set up and working, executing these statements on master1 is sufficient; they replicate to the other nodes.

Verify that the monitor and agent accounts exist on all of master2, slave1, and slave2:

mysql> select user,host from mysql.user where user in ('mmm_monitor','mmm_agent');
+-------------+--------------+
| user        | host         |
+-------------+--------------+
| mmm_agent   | 192.168.31.% |
| mmm_monitor | 192.168.31.% |
+-------------+--------------+

mysql> show grants for 'mmm_agent'@'192.168.31.%';
+-------------------------------------------------------------------------------+
| Grants for mmm_agent@192.168.31.%                                             |
+-------------------------------------------------------------------------------+
| GRANT PROCESS, SUPER, REPLICATION CLIENT ON *.* TO 'mmm_agent'@'192.168.31.%' |
+-------------------------------------------------------------------------------+

mysql> show grants for 'mmm_monitor'@'192.168.31.%';
+------------------------------------------------------------------+
| Grants for mmm_monitor@192.168.31.%                              |
+------------------------------------------------------------------+
| GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.31.%'  |
+------------------------------------------------------------------+

Note 2:

mmm_monitor: used by the MMM monitor for health checks on the MySQL servers.

mmm_agent: used by the MMM agent to toggle read-only mode, change the replication master, and so on.
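The privilege split between the two accounts can be sketched in Python, for example to generate the same GRANT statements for a different subnet or password. This is an illustrative helper, not part of mysql-mmm; the subnet and password mirror the ones used in this article.

```python
# Sketch: build the two MMM account GRANT statements from one place,
# so the subnet and password stay consistent across nodes.
SUBNET = "192.168.31.%"
PASSWORD = "123456"

# Privileges each account actually needs:
#  - mmm_agent toggles read_only and changes the replication master
#  - mmm_monitor only runs health checks
ACCOUNTS = {
    "mmm_agent": ["SUPER", "REPLICATION CLIENT", "PROCESS"],
    "mmm_monitor": ["REPLICATION CLIENT"],
}

def grant_sql(user, privileges):
    """Return a MySQL 5.x-style GRANT ... IDENTIFIED BY statement."""
    privs = ", ".join(privileges)
    return (f"GRANT {privs} ON *.* TO '{user}'@'{SUBNET}' "
            f"IDENTIFIED BY '{PASSWORD}';")

for user, privs in ACCOUNTS.items():
    print(grant_sql(user, privs))
```

Running this prints the two statements executed on master1 above.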

5. Installing mysql-mmm

Install the monitor program on the monitor host (192.168.31.106):

cd /tmp

wget Fedoraproject.org/repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz

tar -zxf mysql-mmm-2.2.1.tar.gz

cd mysql-mmm-2.2.1

make install

Install the agent on the database servers (master1, master2, slave1, slave2):

cd /tmp

wget

tar -zxf mysql-mmm-2.2.1.tar.gz

cd mysql-mmm-2.2.1

make install

6. Configuring MMM

Write the configuration files; they must be identical on all five hosts:

After installation, all configuration files are placed under /etc/mysql-mmm/. The monitor and the database servers share a common file, mmm_common.conf, with the following content:

active_master_role    writer    # the active master role; all DB servers must enable read_only, and for the writer host the agent automatically turns read_only off

<host default>
    cluster_interface       eno16777736                 # cluster network interface
    pid_path                /var/run/mmm_agentd.pid     # pid file path
    bin_path                /usr/lib/mysql-mmm/         # path to executables
    replication_user        rep                         # replication user
    replication_password    123456                      # replication user's password
    agent_user              mmm_agent                   # agent user
    agent_password          123456                      # agent user's password
</host>

<host master1>                  # host name of master1
    ip      192.168.31.83       # master1's IP
    mode    master              # role: master
    peer    master2             # host name of its peer server, i.e. master2
</host>

<host master2>                  # same idea as master1
    ip      192.168.31.141
    mode    master
    peer    master1
</host>

<host slave1>                   # host name of a slave; repeat this block for each additional slave
    ip      192.168.31.250      # the slave's IP
    mode    slave               # role: slave
</host>

<host slave2>                   # same idea as slave1
    ip      192.168.31.225
    mode    slave
</host>

<role writer>                   # writer role
    hosts   master1, master2    # hosts that may take the writer role. Listing only one master avoids writer switches caused by network latency, but then if that master fails MMM has no writer left and can only serve reads.
    ips     192.168.31.2        # virtual IP exposed for writes
    mode    exclusive           # exclusive: only one writer (one write VIP) at a time
</role>

<role reader>                   # reader role
    hosts   master2, slave1, slave2                     # hosts serving reads; master1 could be added here as well
    ips     192.168.31.3, 192.168.31.4, 192.168.31.5    # read VIPs; they are not mapped one-to-one to hosts, and the two counts may differ, in which case one host gets two VIPs
    mode    balanced            # balanced: load-balance the read VIPs
</role>

Copy this file to the other servers unchanged:

#for host in master1 master2 slave1 slave2 ; do scp /etc/mysql-mmm/mmm_common.conf $host:/etc/mysql-mmm/ ; done
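The invariants described above — the two masters naming each other as `peer`, and an exclusive writer role exposing exactly one VIP — can be sanity-checked with a small parser. This is an illustrative sketch, not part of mysql-mmm:

```python
# Sketch: minimal sanity check for an mmm_common.conf-style file.
import re

SAMPLE = """
<host master1>
    ip    192.168.31.83
    mode  master
    peer  master2
</host>
<host master2>
    ip    192.168.31.141
    mode  master
    peer  master1
</host>
<role writer>
    hosts master1, master2
    ips   192.168.31.2
    mode  exclusive
</role>
"""

def parse_blocks(text):
    """Return {(kind, name): {key: value}} for <host>/<role> blocks."""
    blocks = {}
    for kind, name, body in re.findall(
            r"<(host|role) (\S+)>(.*?)</\1>", text, re.S):
        entries = {}
        for line in body.strip().splitlines():
            key, _, value = line.strip().partition(" ")
            entries[key] = value.strip()
        blocks[(kind, name)] = entries
    return blocks

def check(blocks):
    problems = []
    # masters must be peered symmetrically
    masters = {n: b for (k, n), b in blocks.items()
               if k == "host" and b.get("mode") == "master"}
    for name, b in masters.items():
        peer = b.get("peer")
        if peer not in masters or masters[peer].get("peer") != name:
            problems.append(f"host {name}: peer not symmetric")
    # an exclusive writer role should have exactly one VIP
    writer = blocks.get(("role", "writer"), {})
    if writer.get("mode") == "exclusive":
        ips = [ip for ip in re.split(r"[,\s]+", writer.get("ips", "")) if ip]
        if len(ips) != 1:
            problems.append("role writer: exclusive mode needs exactly one ip")
    return problems

print(check(parse_blocks(SAMPLE)))   # -> []
```

An empty list means the sample configuration satisfies both invariants.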

Agent configuration

Edit /etc/mysql-mmm/mmm_agent.conf on each of the four MySQL nodes. Its content is:

include mmm_common.conf

this master1

Note: this file exists only on the database servers; the monitor does not need it. Change the host name after `this` to the current server's host name.
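Since only the `this` line differs between nodes, the four agent files can be generated rather than hand-edited. An illustrative helper (the function name is hypothetical):

```python
# Sketch: render the per-node mmm_agent.conf. Only the "this" line
# differs between nodes, so generating it avoids copy/paste mistakes.
NODES = ["master1", "master2", "slave1", "slave2"]

def render_agent_conf(hostname):
    """Return the mmm_agent.conf body for one database node."""
    return f"include mmm_common.conf\nthis {hostname}\n"

for node in NODES:
    # In a real deployment this text would be written to
    # /etc/mysql-mmm/mmm_agent.conf on the matching host.
    print(f"--- {node} ---")
    print(render_agent_conf(node))
```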

Start the agent daemon

In the /etc/init.d/mysql-mmm-agent init script, add the following line right below #!/bin/sh:

source /root/.bash_profile
 

Add it as a system service and enable it at boot:

#chkconfig --add mysql-mmm-agent

#chkconfig mysql-mmm-agent on

#/etc/init.d/mysql-mmm-agent start

Note: source /root/.bash_profile is added so that the mysql-mmm-agent service can start at boot.

The only difference between automatic and manual startup is the activated console. When started as a service, environment variables may be missing, which causes the startup to fail.

The service fails to start with the following error:

Daemon bin: '/usr/sbin/mmm_agentd'

Daemon pid: '/var/run/mmm_agentd.pid'

Starting MMM Agent daemon... Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_agentd line 7.

BEGIN failed--compilation aborted at /usr/sbin/mmm_agentd line 7.

failed

Fix: install the missing Perl modules:

# cpan Proc::Daemon
# cpan Log::Log4perl

# /etc/init.d/mysql-mmm-agent start

Daemon bin: '/usr/sbin/mmm_agentd'

Daemon pid: '/var/run/mmm_agentd.pid'

Starting MMM Agent daemon... Ok

# netstat -antp | grep mmm_agentd

tcp  0   0 192.168.31.83:9989    0.0.0.0:*   LISTEN      9693/mmm_agentd

Configure the firewall:

firewall-cmd --permanent --add-port=9989/tcp

firewall-cmd --reload

Edit /etc/mysql-mmm/mmm_mon.conf on the monitor host:

include mmm_common.conf

 

<monitor>
    ip              127.0.0.1       # for security, listen on localhost only; mmm_mond listens on port 9988 by default
    pid_path        /var/run/mmm_mond.pid
    bin_path        /usr/lib/mysql-mmm/
    status_path     /var/lib/misc/mmm_mond.status
    ping_ips        192.168.31.83,192.168.31.141,192.168.31.250,192.168.31.225    # IPs used to test network reachability; the network is considered up if any one of them answers. Do not list the monitor's own address.
    auto_set_online 0               # seconds after which a recovered host is set online automatically (default 60); 0 disables automatic onlining, so a recovered host stays in AWAITING_RECOVERY until set online manually
</monitor>

 

<check default>
    check_period    5
    trap_period     10
    timeout         2
    #restart_after  10000
    max_backlog     86400
</check>

check_period
Description: interval between health checks.
Default: 5s

trap_period
Description: a node must fail its checks continuously for trap_period seconds before it is considered failed.
Default: 10s

timeout
Description: timeout for a single check.
Default: 2s

restart_after
Description: restart the checker process after this many checks.
Default: 10000

max_backlog
Description: maximum replication backlog allowed by the rep_backlog check.
Default: 60
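To make the interaction of check_period and trap_period concrete, here is a simplified timing model, not mysql-mmm's actual scheduler: a check runs every check_period seconds, and the failure is only acted on once it has persisted for trap_period seconds.

```python
# Sketch: when does the monitor declare a failure, given the
# check_period / trap_period values configured above?
CHECK_PERIOD = 5   # seconds between checks
TRAP_PERIOD = 10   # failures must persist this long

def seconds_until_failure_declared(first_failure_at):
    """Time (s) of the check tick at which the failure is acted on,
    given that the first failing check happened at first_failure_at."""
    t = first_failure_at
    while t - first_failure_at < TRAP_PERIOD:
        t += CHECK_PERIOD          # advance to the next check tick
    return t

# First failing check at t=0 -> still failing at t=5 (only 5s elapsed),
# failure confirmed at the t=10 check.
print(seconds_until_failure_declared(0))   # -> 10
```

This matches the monitor log later in this article, where the 'mysql' check "has failed for 10 seconds" before master1 is taken offline.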

 

<host default>
    monitor_user        mmm_monitor     # user the monitor uses to check the DB servers
    monitor_password    123456          # password for that user
</host>

debug 0    # 0 = normal mode, 1 = debug mode

Start the monitor daemon:

In the /etc/init.d/mysql-mmm-monitor init script, add the following line right below #!/bin/sh:

source /root/.bash_profile
 

Add it as a system service and enable it at boot:

#chkconfig --add mysql-mmm-monitor

#chkconfig mysql-mmm-monitor on

#/etc/init.d/mysql-mmm-monitor start

Startup fails with:

Starting MMM Monitor daemon: Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_mond line 11.
BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 11.

failed

Fix: install the following Perl modules:

# cpan Proc::Daemon
# cpan Log::Log4perl

[root@monitor1 ~]# /etc/init.d/mysql-mmm-monitor start

Daemon bin: '/usr/sbin/mmm_mond'

Daemon pid: '/var/run/mmm_mond.pid'

Starting MMM Monitor daemon: Ok

[root@monitor1 ~]# netstat -anpt | grep 9988

tcp  0  0 127.0.0.1:9988   0.0.0.0:*      LISTEN      8546/mmm_mond

Note 1: whenever a configuration file is modified, on the DB side or on the monitor side, restart the agent and monitor processes.

Note 2: MMM startup order: start the monitor first, then the agents.

Check the cluster status:

[root@monitor1 ~]# mmm_control show

master1(192.168.31.83) master/ONLINE. Roles: writer(192.168.31.2)

master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5)

slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)

slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)

If a server's state is not ONLINE, bring it online with:

# mmm_control set_online <hostname>

For example: [root@monitor1 ~]# mmm_control set_online master1

The output above shows that the write VIP is on master1, and all slave nodes treat master1 as their master.
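The `mmm_control show` output lends itself to scripted monitoring, for example alerting when any host is not ONLINE. An illustrative Python parser, using the sample output above:

```python
# Sketch: parse mmm_control show output into structured records.
import re

SAMPLE = """\
master1(192.168.31.83) master/ONLINE. Roles: writer(192.168.31.2)
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)
"""

LINE = re.compile(
    r"(?P<host>\w+)\((?P<ip>[\d.]+)\) (?P<mode>\w+)/(?P<state>\w+)\."
    r" Roles:(?P<roles>.*)")

def parse_show(text):
    hosts = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if not m:
            continue
        rec = m.groupdict()
        # roles like "writer(192.168.31.2)" -> ("writer", "192.168.31.2")
        rec["roles"] = re.findall(r"(\w+)\(([\d.]+)\)", rec["roles"])
        hosts.append(rec)
    return hosts

hosts = parse_show(SAMPLE)
offline = [h["host"] for h in hosts if h["state"] != "ONLINE"]
writer = next(r for h in hosts for r in h["roles"] if r[0] == "writer")
print(writer)    # ('writer', '192.168.31.2')
print(offline)   # []
```

The same parser also handles states such as HARD_OFFLINE, since underscores match `\w`.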

Check whether the VIPs are active:

[root@master1 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:6d:2f:82 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.83/24 brd 192.168.31.255 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet 192.168.31.2/32 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6d:2f82/64 scope link
       valid_lft forever preferred_lft forever

[root@master2 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:75:1a:9c brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.141/24 brd 192.168.31.255 scope global dynamic eno16777736
       valid_lft 35850sec preferred_lft 35850sec
    inet 192.168.31.5/32 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe75:1a9c/64 scope link
       valid_lft forever preferred_lft forever

[root@slave1 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:02:21:19 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.250/24 brd 192.168.31.255 scope global dynamic eno16777736
       valid_lft 35719sec preferred_lft 35719sec
    inet 192.168.31.4/32 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe02:2119/64 scope link
       valid_lft forever preferred_lft forever

[root@slave2 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e2:c7:fa brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.225/24 brd 192.168.31.255 scope global dynamic eno16777736
       valid_lft 35930sec preferred_lft 35930sec
    inet 192.168.31.3/32 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee2:c7fa/64 scope link
       valid_lft forever preferred_lft forever
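The manual VIP check above can be scripted. An illustrative sketch that scans `ip addr show` output for the /32 VIP address:

```python
# Sketch: detect whether a VIP is bound on a node by looking for the
# /32 address in `ip addr show` output (abbreviated sample below).
import re

SAMPLE = """\
inet 192.168.31.83/24 brd 192.168.31.255 scope global eno16777736
inet 192.168.31.2/32 scope global eno16777736
"""

def has_vip(ip_addr_output, vip):
    """True if vip appears as a /32 address in the output."""
    return re.search(rf"inet {re.escape(vip)}/32\b", ip_addr_output) is not None

print(has_vip(SAMPLE, "192.168.31.2"))   # True  (write VIP is bound)
print(has_vip(SAMPLE, "192.168.31.5"))   # False (a read VIP on another host)
```

In practice the output would come from running `ip addr show dev eno16777736` via ssh or subprocess.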

On master2, slave1, and slave2, check which master MySQL points to:

mysql> show slave status\G

*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.31.83

Master_User: rep

Master_Port: 3306

Connect_Retry: 60
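This check can also be scripted by parsing the `SHOW SLAVE STATUS\G` output. An illustrative sketch:

```python
# Sketch: extract fields such as Master_Host from SHOW SLAVE STATUS\G
# output, to confirm which master each slave replicates from.
SAMPLE = """\
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.83
Master_User: rep
Master_Port: 3306
"""

def slave_fields(text):
    """Return {field: value} for every 'Key: value' line."""
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.strip().partition(": ")
        if sep:
            fields[key] = value
    return fields

print(slave_fields(SAMPLE)["Master_Host"])   # 192.168.31.83
```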

Testing MMM high availability:

Reads and writes go through the VIP addresses; when a node fails, its VIP floats to another node, which takes over the service.

First check the state of the whole cluster; everything is normal:

[root@monitor1 ~]# mmm_control show

master1(192.168.31.83) master/ONLINE. Roles: writer(192.168.31.2)

master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5)

slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)

slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)

Simulate a master1 failure by stopping its MySQL service manually, then watch the monitor log:

[root@monitor1 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log

2017/01/09 22:02:55  WARN Check 'rep_threads' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)

2017/01/09 22:02:55  WARN Check 'rep_backlog' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)

2017/01/09 22:03:05 ERROR Check 'mysql' on 'master1' has failed for 10 seconds! Message: ERROR: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)

2017/01/09 22:03:07 FATAL State of host 'master1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)

2017/01/09 22:03:07  INFO Removing all roles from host 'master1':

2017/01/09 22:03:07  INFO     Removed role 'writer(192.168.31.2)' from host 'master1'

2017/01/09 22:03:07  INFO Orphaned role 'writer(192.168.31.2)' has been assigned to 'master2'
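The role reassignment in the log above can be modelled as data: when the writer host is lost, all its roles are removed and the orphaned writer role is handed to its peer master. A simplified sketch, not mysql-mmm code (reader-role redistribution is not modelled here):

```python
# Sketch: writer failover as performed by the monitor above.
roles = {
    "master1": ["writer(192.168.31.2)"],
    "master2": ["reader(192.168.31.5)"],
    "slave1": ["reader(192.168.31.4)"],
    "slave2": ["reader(192.168.31.3)"],
}
peers = {"master1": "master2", "master2": "master1"}

def fail_host(roles, peers, host):
    """Strip all roles from a failed host; reassign its writer role
    to its peer master."""
    orphaned = roles[host]
    roles[host] = []
    for role in orphaned:
        if role.startswith("writer") and host in peers:
            roles[peers[host]].append(role)
    return roles

fail_host(roles, peers, "master1")
print(roles["master2"])   # ['reader(192.168.31.5)', 'writer(192.168.31.2)']
```

This reproduces the state shown by `mmm_control show` right after the failover.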

Check the latest cluster state:

[root@monitor1 ~]# mmm_control show

master1(192.168.31.83) master/HARD_OFFLINE. Roles:

master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5), writer(192.168.31.2)

slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)

slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)

The output shows that master1 has gone from ONLINE to HARD_OFFLINE, and the write VIP has moved to the master2 host.

Check the health checks of all DB servers in the cluster:

[root@monitor1 ~]# mmm_control checks all

master1  ping         [last change: 2017/01/09 21:31:47]  OK

master1  mysql        [last change: 2017/01/09 22:03:07]  ERROR: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)

master1  rep_threads  [last change: 2017/01/09 21:31:47]  OK

master1  rep_backlog  [last change: 2017/01/09 21:31:47]  OK: Backlog is null

slave1   ping         [last change: 2017/01/09 21:31:47]  OK

slave1   mysql        [last change: 2017/01/09 21:31:47]  OK

slave1   rep_threads  [last change: 2017/01/09 21:31:47]  OK

slave1   rep_backlog  [last change: 2017/01/09 21:31:47]  OK: Backlog is null

master2  ping         [last change: 2017/01/09 21:31:47]  OK

master2  mysql        [last change: 2017/01/09 21:57:32]  OK

master2  rep_threads  [last change: 2017/01/09 21:31:47]  OK

master2  rep_backlog  [last change: 2017/01/09 21:31:47]  OK: Backlog is null

slave2   ping         [last change: 2017/01/09 21:31:47]  OK

slave2   mysql        [last change: 2017/01/09 21:31:47]  OK

slave2   rep_threads  [last change: 2017/01/09 21:31:47]  OK

slave2   rep_backlog  [last change: 2017/01/09 21:31:47]  OK: Backlog is null

master1 still answers ping, so only the MySQL service is down.

Check master2's IP addresses:

[root@master2 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:75:1a:9c brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.141/24 brd 192.168.31.255 scope global dynamic eno16777736
       valid_lft 35519sec preferred_lft 35519sec
    inet 192.168.31.5/32 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet 192.168.31.2/32 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe75:1a9c/64 scope link
       valid_lft forever preferred_lft forever

On slave1:

mysql> show slave status\G

*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.31.141

Master_User: rep

Master_Port: 3306

On slave2:

mysql> show slave status\G

*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.31.141

Master_User: rep

Master_Port: 3306

Start the MySQL service on master1 again and watch the monitor log:

[root@monitor1 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log

2017/01/09 22:16:56  INFO Check 'mysql' on 'master1' is ok!

2017/01/09 22:16:56  INFO Check 'rep_backlog' on 'master1' is ok!

2017/01/09 22:16:56  INFO Check 'rep_threads' on 'master1' is ok!

2017/01/09 22:16:59 FATAL State of host 'master1' changed from HARD_OFFLINE to AWAITING_RECOVERY

master1's state has changed from HARD_OFFLINE to AWAITING_RECOVERY.

Bring the server online with:

[root@monitor1 ~]# mmm_control set_online master1

Check the latest cluster state:

[root@monitor1 ~]# mmm_control show

master1(192.168.31.83) master/ONLINE. Roles:

master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5), writer(192.168.31.2)

slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)

slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)

Notice that the recovered master does not take the writer role back; it will only become the writer again if the current writer fails.
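The host states observed in this test form a small transition table. A simplified model, not mysql-mmm code; with auto_set_online = 0 the last transition requires the operator running `mmm_control set_online`:

```python
# Sketch: monitor host states as seen in this walkthrough.
TRANSITIONS = {
    ("ONLINE", "checks_failed"): "HARD_OFFLINE",       # mysql check fails
    ("HARD_OFFLINE", "checks_ok"): "AWAITING_RECOVERY",  # service restored
    ("AWAITING_RECOVERY", "set_online"): "ONLINE",     # manual set_online
}

def step(state, event):
    """Apply one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "ONLINE"
for event in ["checks_failed", "checks_ok", "set_online"]:
    state = step(state, event)
print(state)   # ONLINE
```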

Summary

(1) If the standby master (master2) fails, the cluster state is unaffected; MMM simply removes master2 from the reader pool.
(2) If the primary master (master1) fails, the standby master master2 takes over the writer role, and slave1 and slave2 automatically CHANGE MASTER to master2 and replicate from it.
(3) If master1 fails while master2's replication lags behind it, master2 still becomes the writable master, and data consistency cannot be guaranteed.
If master2, slave1, and slave2 lag behind master1 when it fails, the slaves wait until they have applied everything they received, then re-point to the new master (master2) for replication; consistency still cannot be guaranteed.
(4) When using the MMM architecture, give the primary and standby masters identical hardware. In addition, enable semi-synchronous replication to further improve safety, or use MariaDB / MySQL 5.7 multi-threaded slave replication to improve replication performance.

Copyright notice: unless otherwise stated, articles are original to this site.

Source: https://www.heiqu.com/ce6ce69e2cf72c4ba44c260b6c9d03d5.html