Hadoop HA (High Availability) Environment Setup (2)

hdfs-site.xml

<property>
  <name>dfs.nameservices</name>
  <value>cluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.cluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster.nn1</name>
  <value>master01:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.cluster.nn1</name>
  <value>master01:50070</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster.nn2</name>
  <value>master02:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.cluster.nn2</name>
  <value>master02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://slave01:8485;slave02:8485;slave03:8485/cluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/usr/local/software/hadoop-2.7.0/journal</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.cluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>
    sshfence
    shell(/bin/true)
  </value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

yarn-site.xml

<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>rm-cluster</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master01</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>master02</value>
</property>
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>slave01:2181,slave02:2181,slave03:2181</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

6. ZooKeeper cluster configuration

On slave01, slave02, and slave03, edit zoo.cfg:

vi /usr/local/software/zookeeper-3.4.6/conf/zoo.cfg

Set dataDir=/usr/local/software/zookeeper-3.4.6/zk/data (it must point at the data directory created below).
Append the following at the end of the file:

server.1=slave01:2888:3888
server.2=slave02:2888:3888
server.3=slave03:2888:3888

Create the data directory:

mkdir -p /usr/local/software/zookeeper-3.4.6/zk/data

In the data directory, create a file named myid containing the value 1; likewise, create myid on slave02 and slave03 with the values 2 and 3, matching each host's server.N entry in zoo.cfg.
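A minimal sketch of creating the myid files (assuming the data directory created above):

echo 1 > /usr/local/software/zookeeper-3.4.6/zk/data/myid    # on slave01
echo 2 > /usr/local/software/zookeeper-3.4.6/zk/data/myid    # on slave02
echo 3 > /usr/local/software/zookeeper-3.4.6/zk/data/myid    # on slave03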

7. Starting the services

Start ZooKeeper:

bin/zkServer.sh start
bin/zkServer.sh status    // check status (at least two ZooKeeper instances must be running before status is reported)

First-time startup order (start the ZooKeeper cluster first):
Start the JournalNodes:

hadoop-daemon.sh start journalnode //slave01,slave02,slave03

Format the NameNode:

hdfs namenode -format    //master01

Copy the formatted NameNode metadata to master02:

scp -r tmp/ hadoop@master02:/usr/local/software/hadoop-2.7.0/    // copy tmp from master01 to master02
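As an aside (a hedged alternative, not what this write-up uses), Hadoop 2.x can sync the standby NameNode's metadata without scp; run this on master02 once the JournalNodes are up and master01 has been formatted:

hdfs namenode -bootstrapStandby    # on master02, instead of copying tmp/ by hand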

Initialize the HA state in ZooKeeper:

hdfs zkfc -formatZK
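Optionally (a verification step not in the original), you can confirm the result: formatZK registers the nameservice under /hadoop-ha in ZooKeeper, which you can inspect from any ZooKeeper node:

bin/zkCli.sh -server slave01:2181    # then run: ls /hadoop-ha  -> should list [cluster]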

Start HDFS:

sbin/start-dfs.sh

Start YARN:

sbin/start-yarn.sh //master01

Start the ResourceManager on master02:

yarn-daemon.sh start resourcemanager    //master02

To recap, once the one-time formatting steps are done, a routine startup (with the ZooKeeper cluster already running) only needs:

sbin/start-dfs.sh
sbin/start-yarn.sh    //master01
yarn-daemon.sh start resourcemanager    //master02

8. Results after startup
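As a quick sanity check (process names per stock Hadoop 2.x and ZooKeeper; adjust if your layout differs), jps on each host should show roughly the following:

jps    # master01, master02: NameNode, DFSZKFailoverController, ResourceManager
jps    # slave01-slave03:  DataNode, NodeManager, JournalNode, QuorumPeerMain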

After startup, the NameNode on master02 is in the active state:

"HadoopHA启动后效果图1"

while the NameNode on master01 is in the standby state.

"HadoopHA启动后效果图2"

If the NameNode process on master02 is killed, master01 automatically switches to the active state, which is what keeps the cluster highly available.
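The NameNode states can also be checked from the shell instead of the web UI; nn1 and nn2 are the IDs configured in hdfs-site.xml above:

hdfs haadmin -getServiceState nn1    # prints active or standby
hdfs haadmin -getServiceState nn2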


Meanwhile, the ResourceManager on master01 is in the active state:

[Screenshot: ResourceManager web UI in the active state]

When you browse the ResourceManager web UI on master02, it shows:

“This is standby RM,Redirecting to the current active RM::8088/”
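The ResourceManager states can likewise be queried on the command line, using the rm-ids defined in yarn-site.xml above:

yarn rmadmin -getServiceState rm1    # prints active or standby
yarn rmadmin -getServiceState rm2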


9. For easier cluster management, write a control script:
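The script itself is not included in this excerpt; what follows is only a minimal sketch of a possible start script, assuming passwordless SSH as the hadoop user and the install paths used above (the file name start-cluster.sh is illustrative):

#!/bin/bash
# start-cluster.sh -- illustrative sketch; run on master01, adjust hosts/paths to your cluster
ZK_HOME=/usr/local/software/zookeeper-3.4.6
HADOOP_HOME=/usr/local/software/hadoop-2.7.0

# 1. Start ZooKeeper on the slave nodes
for host in slave01 slave02 slave03; do
  ssh hadoop@"$host" "$ZK_HOME/bin/zkServer.sh start"
done

# 2. Start HDFS (NameNodes, DataNodes, JournalNodes, ZKFCs)
"$HADOOP_HOME"/sbin/start-dfs.sh

# 3. Start YARN on master01, then the second ResourceManager on master02
"$HADOOP_HOME"/sbin/start-yarn.sh
ssh hadoop@master02 "$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager"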
