Download the desired version of ZooKeeper; version 3.4.14 is used here. Official download address: https://archive.apache.org/dist/zookeeper/
# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz

1.2 Extract

# tar -zxvf zookeeper-3.4.14.tar.gz

1.3 Configure environment variables

# vim /etc/profile

Add the environment variables:
export ZOOKEEPER_HOME=/usr/app/zookeeper-3.4.14
export PATH=$ZOOKEEPER_HOME/bin:$PATH

Make the environment variables take effect:
# source /etc/profile
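As a quick optional sanity check (the paths shown assume the install location used above), confirm that the variable is set and that the ZooKeeper scripts are now on the PATH:

# echo $ZOOKEEPER_HOME
/usr/app/zookeeper-3.4.14
# which zkServer.sh
/usr/app/zookeeper-3.4.14/bin/zkServer.sh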
1.4 Modify the configuration

Go into the conf/ directory under the installation directory, copy the sample configuration, and edit it:

# cp zoo_sample.cfg zoo.cfg

Specify the data storage directory and the log directory (neither needs to be created in advance; ZooKeeper creates them automatically). The complete configuration after editing is as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Configuration parameters explained:
tickTime: the basic time unit used in ZooKeeper's time calculations; for example, session timeouts are expressed as N * tickTime (see the worked example after this list);
initLimit: cluster setting; the time allowed for follower nodes to connect to and sync with the leader during initialization, expressed as a multiple of tickTime;
syncLimit: cluster setting; the maximum time allowed for a message exchange (request and acknowledgement, i.e. the heartbeat mechanism) between the leader and a follower, expressed as a multiple of tickTime;
dataDir: where data (snapshots) is stored;
dataLogDir: the log directory;
clientPort: the port clients connect to, 2181 by default.
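To make the tick-based settings concrete, the values above translate into wall-clock times as follows (the session-timeout bounds use ZooKeeper's defaults of 2 and 20 ticks; this is only an illustration, not additional configuration):

tickTime            = 2000 ms
initLimit           = 10 * tickTime = 20 s   (initial follower sync window)
syncLimit           =  5 * tickTime = 10 s   (leader/follower request-ack window)
min session timeout =  2 * tickTime =  4 s
max session timeout = 20 * tickTime = 40 s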
1.5 Start

Since the environment variables have already been configured, the service can be started directly with the following command:
zkServer.sh start

1.6 Verify

Use jps to check whether the process has started; the presence of QuorumPeerMain indicates that startup succeeded:

[root@hadoop001 bin]# jps
3814 QuorumPeerMain
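Two more quick checks, if desired. The output below is indicative and may differ slightly between versions; the ruok four-letter command is available out of the box on 3.4.x, and the second check assumes nc is installed:

# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/app/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: standalone

# echo ruok | nc localhost 2181
imok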
2. Cluster Environment Setup

To keep the cluster highly available, the number of nodes in a ZooKeeper cluster should preferably be odd, with a minimum of three nodes, so this walkthrough builds a three-node cluster. Three hosts are used, named hadoop001, hadoop002 and hadoop003.
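The odd-number recommendation follows from ZooKeeper's majority quorum: the ensemble can only serve requests while more than half of its servers are alive (quorum = floor(N/2) + 1), so an even-sized ensemble tolerates no more failures than the next smaller odd-sized one. A quick comparison:

N = 3  ->  quorum = 2  ->  tolerates 1 failure
N = 4  ->  quorum = 3  ->  tolerates 1 failure
N = 5  ->  quorum = 3  ->  tolerates 2 failures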
2.1 Modify the configuration

Extract one copy of the ZooKeeper installation package and edit its zoo.cfg configuration file as shown below. Afterwards, distribute the package to the three servers with scp (a sketch of that step follows the configuration):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-cluster/data/
dataLogDir=/usr/local/zookeeper-cluster/log/
clientPort=2181

# In server.1, the "1" is the server ID. It can be any valid integer identifying which
# node this is, and the same value must be written to the myid file under dataDir.
# The two ports are the inter-node communication port and the leader-election port.
server.1=hadoop001:2287:3387
server.2=hadoop002:2287:3387
server.3=hadoop003:2287:3387
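A minimal sketch of the scp distribution step mentioned above, assuming the configured package sits under /usr/app/zookeeper-cluster on hadoop001 (the path is inferred from the start command used later) and that SSH access to the other two hosts is available:

# Run on hadoop001; copies the whole package, including the edited zoo.cfg
scp -r /usr/app/zookeeper-cluster root@hadoop002:/usr/app/
scp -r /usr/app/zookeeper-cluster root@hadoop003:/usr/app/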
2.2 Identify the nodes

On each of the three hosts, create a myid file under the dataDir directory and write the corresponding node ID into it. ZooKeeper identifies cluster members through the myid file, and uses the communication and election ports configured above to talk between nodes and elect a Leader.

Create the storage directories:
# Run this command on all three hosts
mkdir -vp /usr/local/zookeeper-cluster/data/

Create the myid file and write the node ID into it:
# On hadoop001
echo "1" > /usr/local/zookeeper-cluster/data/myid
# On hadoop002
echo "2" > /usr/local/zookeeper-cluster/data/myid
# On hadoop003
echo "3" > /usr/local/zookeeper-cluster/data/myid
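Optionally verify on each host that the written ID matches its server.N line in zoo.cfg (hadoop001 shown here; expect 2 and 3 on the other two hosts):

# cat /usr/local/zookeeper-cluster/data/myid
1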
2.3 Start the cluster

Run the following command on each of the three hosts to start the service:

/usr/app/zookeeper-cluster/zookeeper/bin/zkServer.sh start

2.4 Verify the cluster

After startup, use zkServer.sh status to check the state of each node. In this setup all three processes started successfully, with hadoop002 as the leader node and hadoop001 and hadoop003 as follower nodes.
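For reference, the status output on a follower looks roughly like the block below (the leader reports "Mode: leader"), and a short zkCli.sh session against any node confirms that the ensemble is serving requests. The exact output may vary slightly by environment, and the /test znode name is just an illustration:

# /usr/app/zookeeper-cluster/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/app/zookeeper-cluster/zookeeper/bin/../conf/zoo.cfg
Mode: follower

# /usr/app/zookeeper-cluster/zookeeper/bin/zkCli.sh -server hadoop001:2181
[zk: hadoop001:2181(CONNECTED) 0] create /test hello
Created /test
[zk: hadoop001:2181(CONNECTED) 1] ls /
[zookeeper, test]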