Before Redis Cluster was released we had been running a Codis cluster the whole time; now let's deploy a Redis Cluster and see how it performs.
I. Architecture
CentOS 6.5, 64-bit
Server    Master instance    Its slave
redis1    redis1:6379        redis3:6380
redis2    redis2:6379        redis1:6380
redis3    redis3:6379        redis2:6380
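The commands later in this post use IP addresses; the hostname-to-IP mapping assumed here is redis1 → 192.168.188.18, redis2 → 192.168.188.28, redis3 → 192.168.188.38. A small /etc/hosts entry on every server (a sketch, assuming that mapping) keeps the names resolvable:
cat >> /etc/hosts <<'EOF'
192.168.188.18 redis1
192.168.188.28 redis2
192.168.188.38 redis3
EOF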
II. Deploying the Redis Instances
1. Install dependencies (tcl is required by make test)
yum -y install tcl-devel
2. Download the source
wget
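The URL was omitted above; the tarball used in this post is redis-3.0.7, which is normally fetched from the official release archive (path assumed), for example:
wget http://download.redis.io/releases/redis-3.0.7.tar.gz -P /usr/local/src/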
3. Build
cd /usr/local/src/
tar zxvf redis-3.0.7.tar.gz
cd redis-3.0.7
make
4. Run the test suite
make test
5. Install the binaries into /usr/local/bin/
make install
ls /usr/local/bin/ -lh
6. Install the service
sh ./utils/install_server.sh
Welcome to the redis service installer
This script will help you easily set up a running redis server
Please select the redis port for this instance: [6379]
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf]
Selected default - /etc/redis/6379.conf
Please select the redis log file name [/var/log/redis_6379.log] /data/redis/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379] /data/redis/6379
Please select the redis executable path [/usr/local/bin/redis-server]
Selected config:
Port : 6379
Config file : /etc/redis/6379.conf
Log file : /data/redis/redis_6379.log
Data dir : /data/redis/6379
Executable : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
Is this ok? Then press ENTER to go on or Ctrl-C to abort.
Copied /tmp/6379.conf => /etc/init.d/redis_6379
Installing service...
Successfully added to chkconfig!
Successfully added to runlevels 345!
Starting Redis server...
Installation successful!
7. Verify the service
/etc/init.d/redis_6379 status
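If the init script reports the server as running, a quick redis-cli ping (default port 6379 assumed) confirms the instance is actually answering:
redis-cli -p 6379 ping
PONG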
8. Install the other nodes in the same way.
III. Deploying the Cluster Environment
1. Target cluster master/slave layout
Server    Master instance    Its slave
redis1    redis1:6379        redis3:6380
redis2    redis2:6379        redis1:6380
redis3    redis3:6379        redis2:6380
2. Prepare the cluster environment (redis-trib.rb needs Ruby and the redis gem)
Ruby from source:
tar -xf ruby-2.2.3.tar.gz
cd ruby-2.2.3
./configure --disable-install-rdoc
make && make install
/usr/local/bin/ruby
/usr/local/bin/gem
Or via yum:
yum install ruby ruby-devel rubygems
Point the gem source at the Taobao mirror so gem installs are faster.
# gem sources --add https://ruby.taobao.org/ --remove https://rubygems.org/
Check the configured gem sources:
# gem sources -l
Install the Ruby redis module (redis-trib.rb depends on it):
# gem install redis    (note: pin a specific version here; this comes up again when troubleshooting)
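To avoid incompatibilities between the redis-trib.rb shipped with Redis 3.0.x and newer gem releases, older guides usually pin a 3.x version of the gem; the exact version below is only an example, not a requirement from this post:
# gem install redis -v 3.3.5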
3. Configuration file (adjust the port accordingly for the other instances)
cat /etc/redis/6379.conf |grep -Ev "^#|^$"
daemonize yes
pidfile /var/run/redis_6379.pid
port 6379
timeout 0
logfile /data/redis/redis_6379.log
dbfilename dump.rdb
dir /data/redis/6379
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
Uncomment the following directives so Redis runs in cluster mode:
cluster-enabled yes                      enable cluster mode
cluster-config-file nodes-6379.conf      cluster state file, maintained by Redis itself
cluster-node-timeout 5000                a node unreachable for 5 seconds is considered possibly failed
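The 6380 instance on each server needs the same three directives, with its own cluster-config-file name. Assuming it was also set up with install_server.sh so its config sits at /etc/redis/6380.conf (an assumption; adjust the path to your layout), one sketch is simply appending them, since uncommented directives at the end override the commented defaults:
cat >> /etc/redis/6380.conf <<'EOF'
cluster-enabled yes
cluster-config-file nodes-6380.conf
cluster-node-timeout 5000
EOF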
4. Restart Redis (adjust the port for the other instances)
/etc/init.d/redis_6379 restart
A new file, nodes-6379.conf, now appears under /data/redis/6379/.
5. Build the cluster with the redis-trib tool
First look at the help output to see what the redis-trib script can do.
Note that make install does not copy redis-trib.rb over, so copy it manually:
cp ./src/redis-trib.rb /usr/local/bin/
ruby redis-trib.rb help
create: create a cluster
check: check a cluster
info: show cluster information
fix: repair a cluster
reshard: migrate slots online
rebalance: even out the number of slots across nodes
add-node: add a new node to the cluster
del-node: remove a node from the cluster
set-timeout: set the heartbeat timeout between cluster nodes
call: run a command on every node in the cluster
import: import data from an external Redis into the cluster
Now create the cluster. The first three addresses become masters and the last three become slaves (the exact master/slave pairing cannot be specified here; it is adjusted inside the cluster afterwards, the goal being that a master and its slave never sit on the same server). The --replicas 1 option tells redis-trib to create one slave for every master.
/usr/local/bin/redis-trib.rb create --replicas 1 192.168.188.18:6379 192.168.188.28:6379 192.168.188.38:6379 192.168.188.18:6380 192.168.188.28:6380 192.168.188.38:6380
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.188.38:6379
192.168.188.28:6379
192.168.188.18:6379
Adding replica 192.168.188.28:6380 to 192.168.188.38:6379
Adding replica 192.168.188.38:6380 to 192.168.188.28:6379
Adding replica 192.168.188.18:6380 to 192.168.188.18:6379
M: 53c202561155d141a8e24d42bb8eb2907998fce9 192.168.188.18:6379
slots:10923-16383 (5461 slots) master
M: 654c0a7853724fe33683f54e8df665993ea704ed 192.168.188.28:6379
slots:5461-10922 (5462 slots) master
M: b4cc74e0ac6ec9b634d93e86fdb376b4d89dc318 192.168.188.38:6379
slots:0-5460 (5461 slots) master
S: ee3d5f85034e5a3c9d9369357b545d0a92751728 192.168.188.18:6380
replicates 53c202561155d141a8e24d42bb8eb2907998fce9
S: 10b558217d6618093b87f75a453ca39679bbcdf4 192.168.188.28:6380
replicates b4cc74e0ac6ec9b634d93e86fdb376b4d89dc318
S: 5dc55e90d3d3b56502a319ec9845e901102cfa6c 192.168.188.38:6380
replicates 654c0a7853724fe33683f54e8df665993ea704ed
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join......
>>> Performing Cluster Check (using node 192.168.188.18:6379)
M: 53c202561155d141a8e24d42bb8eb2907998fce9 192.168.188.18:6379
slots:10923-16383 (5461 slots) master
M: 654c0a7853724fe33683f54e8df665993ea704ed 192.168.188.28:6379
slots:5461-10922 (5462 slots) master
M: b4cc74e0ac6ec9b634d93e86fdb376b4d89dc318 192.168.188.38:6379
slots:0-5460 (5461 slots) master
M: ee3d5f85034e5a3c9d9369357b545d0a92751728 192.168.188.18:6380
slots: (0 slots) master
replicates 53c202561155d141a8e24d42bb8eb2907998fce9
M: 10b558217d6618093b87f75a453ca39679bbcdf4 192.168.188.28:6380
slots: (0 slots) master
replicates b4cc74e0ac6ec9b634d93e86fdb376b4d89dc318
M: 5dc55e90d3d3b56502a319ec9845e901102cfa6c 192.168.188.38:6380
slots: (0 slots) master
replicates 654c0a7853724fe33683f54e8df665993ea704ed
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
IV. Testing and Verification
Adjust the master/slave relationships.
Log in to the Redis cluster:
Make 192.168.188.18:6380 a slave of 192.168.188.28:6379 (the argument to cluster replicate is that master's node ID):
redis-cli -c -p 6380 -h 192.168.188.18
192.168.188.18:6380> cluster replicate 654c0a7853724fe33683f54e8df665993ea704ed
OK
Make 192.168.188.38:6380 a slave of 192.168.188.18:6379:
redis-cli -c -p 6380 -h 192.168.188.38
192.168.188.38:6380> cluster replicate 53c202561155d141a8e24d42bb8eb2907998fce9
OK
Finally, save the configuration to disk:
192.168.188.38:6380> cluster saveconfig
OK
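The other re-pointed slave (192.168.188.18:6380) can persist its cluster config the same way, if you want nodes.conf flushed on every node whose role changed:
redis-cli -c -h 192.168.188.18 -p 6380 cluster saveconfig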
Check the cluster state:
redis-trib.rb check 192.168.188.18:6379
>>> Performing Cluster Check (using node 192.168.188.18:6379)
M: 53c202561155d141a8e24d42bb8eb2907998fce9 192.168.188.18:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 5dc55e90d3d3b56502a319ec9845e901102cfa6c 192.168.188.38:6380
slots: (0 slots) slave
replicates 53c202561155d141a8e24d42bb8eb2907998fce9
M: 654c0a7853724fe33683f54e8df665993ea704ed 192.168.188.28:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 10b558217d6618093b87f75a453ca39679bbcdf4 192.168.188.28:6380
slots: (0 slots) slave
replicates b4cc74e0ac6ec9b634d93e86fdb376b4d89dc318
S: ee3d5f85034e5a3c9d9369357b545d0a92751728 192.168.188.18:6380
slots: (0 slots) slave
replicates 654c0a7853724fe33683f54e8df665993ea704ed
M: b4cc74e0ac6ec9b634d93e86fdb376b4d89dc318 192.168.188.38:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
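A quick read/write smoke test through any node shows the client following MOVED redirections; the slot number and target node below depend on the key, so treat them as an example only:
redis-cli -c -h 192.168.188.38 -p 6379
192.168.188.38:6379> set foo bar
-> Redirected to slot [12182] located at 192.168.188.18:6379
OK
192.168.188.18:6379> get foo
"bar"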
Usage
Cluster
CLUSTER INFO    Print information about the cluster.
CLUSTER NODES    List all nodes currently known to the cluster, along with their details.
Nodes
CLUSTER MEET <ip> <port>    Add the node at the given ip and port to the cluster, making it part of the cluster.
CLUSTER FORGET <node_id>    Remove the node identified by node_id from the cluster.
CLUSTER REPLICATE <node_id>    Turn the current node into a slave of the node identified by node_id.
CLUSTER SAVECONFIG    Save the node's cluster configuration file to disk.
Slots
CLUSTER ADDSLOTS <slot> [slot ...]    Assign one or more slots to the current node.
CLUSTER DELSLOTS <slot> [slot ...]    Remove the assignment of one or more slots from the current node.
CLUSTER FLUSHSLOTS    Remove all slots assigned to the current node, leaving it with no slots at all.
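For example, CLUSTER INFO against any node should report the cluster as healthy once everything above is in place; the output below is abbreviated and the counters will differ on your machines:
redis-cli -c -h 192.168.188.18 -p 6379 cluster info
cluster_enabled:1
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_known_nodes:6
cluster_size:3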