Hadoop 2.2.0 Installation and Configuration (2)

(Continuing from part 1: the remainder of the yarn-site.xml configuration.)

<configuration>
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
<property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
        <name>yarn.resourcemanager.address</name>
        <value>cloud001:8032</value>
</property>
<property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>cloud001:8030</value>
</property>
<property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>cloud001:8031</value>
</property>
<property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>cloud001:8033</value>
</property>
<property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>cloud001:8088</value>
</property>
</configuration>
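A single malformed tag in any of these XML files will keep the daemons from starting, so it can be worth validating them before moving on. A minimal check, assuming xmllint (from libxml2) is installed:

xmllint --noout /opt/hadoop-2.2.0/etc/hadoop/yarn-site.xml

If the file is well-formed, the command prints nothing and exits with status 0.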

8. On 192.168.1.105, edit the slaves file and add one line:

cloud002
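For the cloud001/cloud002 hostnames used throughout these files to work, both machines must be able to resolve them. A sketch of the expected /etc/hosts entries on both hosts, assuming the IP-to-hostname mapping used in this article:

192.168.1.105 cloud001
192.168.1.106 cloud002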

9. On 192.168.1.105, create Sc2slave.sh (the name is arbitrary; vim Sc2slave.sh), whose purpose is to copy the configuration files over to 192.168.1.106:

scp /opt/hadoop-2.2.0/etc/hadoop/slaves root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/slaves
scp /opt/hadoop-2.2.0/etc/hadoop/core-site.xml root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/core-site.xml
scp /opt/hadoop-2.2.0/etc/hadoop/hdfs-site.xml root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/hdfs-site.xml
scp /opt/hadoop-2.2.0/etc/hadoop/mapred-site.xml root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/mapred-site.xml
scp /opt/hadoop-2.2.0/etc/hadoop/yarn-site.xml root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/yarn-site.xml
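The five scp lines can also be written as a loop; a hypothetical equivalent of Sc2slave.sh that copies the same files:

#!/bin/bash
# Copy each modified config file to the same path on cloud002.
for f in slaves core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
    scp /opt/hadoop-2.2.0/etc/hadoop/$f root@cloud002:/opt/hadoop-2.2.0/etc/hadoop/$f
done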

10. On 192.168.1.105, run Sc2slave.sh to copy the configuration files to 192.168.1.106.

11. On 192.168.1.105, edit /etc/profile so that hadoop and the other commands can be used directly from the command line:

export HADOOP_HOME=/opt/hadoop-2.2.0
export PATH=.:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH
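The new variables only take effect in new shells; to apply them to the current session and confirm the PATH change worked:

source /etc/profile
hadoop version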

12. Run hadoop namenode -format to format HDFS.
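In Hadoop 2.x this command still works but is deprecated; the current equivalent is:

hdfs namenode -format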

13. Run the start-all.sh script under /opt/hadoop-2.2.0/sbin.
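start-all.sh is likewise deprecated in Hadoop 2.x; it simply invokes the two scripts below, which can be run separately instead:

/opt/hadoop-2.2.0/sbin/start-dfs.sh
/opt/hadoop-2.2.0/sbin/start-yarn.sh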

14. When everything has started, check with jps on 192.168.1.105:

10531 Jps
9444 SecondaryNameNode
9579 ResourceManager
9282 NameNode

jps on 192.168.1.106 shows (the DataNode and NodeManager run only here, since slaves lists only cloud002):

4463 DataNode
4941 Jps
4535 NodeManager

15. On 192.168.1.105, run hdfs dfsadmin -report, which shows:
Configured Capacity: 13460701184 (12.54 GB)
Present Capacity: 5762686976 (5.37 GB)
DFS Remaining: 5762662400 (5.37 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Live datanodes:
Name: 192.168.1.106:50010 (cloud002)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 13460701184 (12.54 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 7698014208 (7.17 GB)
DFS Remaining: 5762662400 (5.37 GB)
DFS Used%: 0.00%
DFS Remaining%: 42.81%
Last contact: Mon Feb 17 05:52:18 PST 2014
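Besides dfsadmin -report, the two web UIs give a quick sanity check. A minimal probe, assuming curl is available (a browser works just as well); 8088 is the webapp port set in yarn-site.xml above, and 50070 is the default NameNode HTTP port in Hadoop 2.2.0:

curl -s http://cloud001:50070/ | head
curl -s http://cloud001:8088/ | head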

The whole configuration process went fairly smoothly. The main thing to watch is that the XML configuration files contain no mistakes; if something does go wrong, check the logs to locate the problem. With the steps above, the Hadoop logs are written to /opt/hadoop-2.2.0/logs.
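For example, if the NameNode fails to start, its log can be inspected directly; the exact file name embeds the user and hostname, hence the wildcards:

tail -n 50 /opt/hadoop-2.2.0/logs/hadoop-*-namenode-*.log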

It looks like Hadoop has been configured successfully, and the next stage of learning can begin!
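As a first smoke test, the bundled MapReduce examples can be run; a sketch, assuming the standard location of the examples jar in the 2.2.0 binary distribution:

hadoop jar /opt/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 10

If the pi job completes on YARN, then HDFS, the ResourceManager, and the NodeManager are all working end to end.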

