5. Configure mapred-site.xml
[root@Master hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@Master hadoop]# vim mapred-site.xml
<configuration>
<!-- Specify the MapReduce runtime framework -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- JobHistory server RPC address -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>Master:10020</value>
</property>
<!-- JobHistory server web UI address -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>Master:19888</value>
</property>
</configuration>
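Note that neither start-dfs.sh nor start-yarn.sh starts the JobHistory server configured above. Once the cluster is up (see section III below), it can typically be started with the standard Hadoop 2.x daemon script, assuming Hadoop's sbin directory is on the PATH as the later start commands suggest:
[root@Master hadoop]# mr-jobhistory-daemon.sh start historyserver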
6. Configure yarn-site.xml
[root@Master hadoop]# vim yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<!-- Hostname of the server running the ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>Master</value>
</property>
<!-- Enable the MapReduce shuffle auxiliary service -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- ResourceManager internal RPC address -->
<property>
<name>yarn.resourcemanager.address</name>
<value>Master:8032</value>
</property>
<!-- Scheduler internal RPC address -->
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>Master:8030</value>
</property>
<!-- Resource-tracker internal RPC address -->
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>Master:8031</value>
</property>
<!-- ResourceManager admin RPC address -->
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>Master:8033</value>
</property>
<!-- ResourceManager web UI address -->
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>Master:8088</value>
</property>
</configuration>
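The four RPC ports above (8032, 8030, 8031, 8033) and the web UI port 8088 match the Hadoop 2.x defaults, so once yarn.resourcemanager.hostname is set they mainly serve to make the endpoints explicit. Before distributing the file, a quick well-formedness check can catch stray tags (this assumes xmllint is installed, which is not covered in these steps):
[root@Master hadoop]# xmllint --noout yarn-site.xml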
7. Configure the slaves file
[root@Master hadoop]# vim slaves
Master
Slaver-1
Slaver-2
Slaver-3
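start-dfs.sh and start-yarn.sh read this file and SSH into every listed host, so passwordless SSH from Master to each entry (including Master itself) must already be working. A quick sanity check:
[root@Master hadoop]# for h in Master Slaver-1 Slaver-2 Slaver-3; do ssh $h hostname; done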
8. Configure hadoop-env.sh and set JAVA_HOME
[root@Master hadoop]# vim hadoop-env.sh
Change the line to: export JAVA_HOME=/usr/local/java/jdk1.8.0_181
9. Configure yarn-env.sh and set JAVA_HOME
[root@Master hadoop]# vim yarn-env.sh
Change the line to: export JAVA_HOME=/usr/local/java/jdk1.8.0_181
10. Configure mapred-env.sh and set JAVA_HOME
[root@Master hadoop]# vim mapred-env.sh
Change the line to: export JAVA_HOME=/usr/local/java/jdk1.8.0_181
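All three env scripts point at the same JDK path, so it is worth confirming that the path is actually valid before distributing the files. A minimal check using the path configured above:
[root@Master hadoop]# /usr/local/java/jdk1.8.0_181/bin/java -version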
11. Distribute the Hadoop files to the other nodes
[root@Master hadoop]# scp -r hadoop/ Slaver-1:`pwd`
[root@Master hadoop]# scp -r hadoop/ Slaver-2:`pwd`
[root@Master hadoop]# scp -r hadoop/ Slaver-3:`pwd`
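To confirm the copy landed where expected, the configuration can be listed on one of the slaves; the path below assumes the install directory /opt/hadoop/hadoop-2.8.5 that appears in the startup logs in section III:
[root@Master hadoop]# ssh Slaver-1 ls /opt/hadoop/hadoop-2.8.5/etc/hadoop/yarn-site.xml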
III. Start and verify the Hadoop cluster
1. Start the cluster
When starting the cluster for the first time, the NameNode must be formatted:
[root@Master ~]# hdfs namenode -format
If the output reports that the storage directory has been successfully formatted, the format succeeded.
Start HDFS
After the format succeeds, HDFS can be started with the following command:
[root@Master hadoop]# start-dfs.sh
Starting namenodes on [Master]
Master: starting namenode, logging to /opt/hadoop/hadoop-2.8.5/logs/hadoop-root-namenode-Master.out
Slaver-3: starting datanode, logging to /opt/hadoop/hadoop-2.8.5/logs/hadoop-root-datanode-Slaver-3.out
Slaver-2: starting datanode, logging to /opt/hadoop/hadoop-2.8.5/logs/hadoop-root-datanode-Slaver-2.out
Slaver-1: starting datanode, logging to /opt/hadoop/hadoop-2.8.5/logs/hadoop-root-datanode-Slaver-1.out
Starting secondary namenodes [Master]
Master: starting secondarynamenode, logging to /opt/hadoop/hadoop-2.8.5/logs/hadoop-root-secondarynamenode-Master.out
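A quick way to confirm that the HDFS daemons actually came up is to run jps on Master (it should at least list NameNode and SecondaryNameNode) and to ask HDFS how many DataNodes have registered; the exact report output will vary:
[root@Master hadoop]# jps
[root@Master hadoop]# hdfs dfsadmin -report | grep -i 'live datanodes'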
Start YARN
Note that YARN should not simply be started on the NameNode host; it must be started on the host where the ResourceManager runs. Since the NameNode and ResourceManager are deployed on the same host in this setup, we start YARN directly on Master.
[root@Master hadoop]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/hadoop-2.8.5/logs/yarn-root-resourcemanager-Master.out
Slaver-2: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.5/logs/yarn-root-nodemanager-Slaver-2.out
Slaver-1: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.5/logs/yarn-root-nodemanager-Slaver-1.out
Slaver-3: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.5/logs/yarn-root-nodemanager-Slaver-3.out
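Likewise, the NodeManagers should register with the ResourceManager within a few seconds; this can be checked from Master with:
[root@Master hadoop]# yarn node -list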
2. Web verification
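Based on the configuration above and the Hadoop 2.x defaults, the web UIs should be reachable at the following addresses (assuming the hostname Master resolves from the browsing machine, otherwise substitute its IP; the NameNode port 50070 is the 2.x default, since it was not set explicitly):
HDFS NameNode UI: http://Master:50070
YARN ResourceManager UI: http://Master:8088
MapReduce JobHistory UI: http://Master:19888 (only once the history server has been started)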