Before starting, make sure passwordless SSH login works, the JDK is installed, and the environment variables are configured (see the relevant tutorials for details).
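If passwordless SSH is not yet set up, a minimal sketch follows (run on master as root, matching the paths used later in this walkthrough; slave1 is a placeholder hostname):
# Generate a key pair (empty passphrase, default location)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# Copy the public key to every node, including master itself
ssh-copy-id root@master
ssh-copy-id root@slave1
# Verify: this should not prompt for a password
ssh root@slave1 hostname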
1. Download hadoop-2.2.0.tar.gz and extract it: tar -xzvf hadoop-2.2.0.tar.gz
2. Inside the hadoop directory, create the tmp directory and the dfs/data and dfs/name directories (commands below); these paths are referenced by core-site.xml and hdfs-site.xml later.
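For example, assuming the tarball was extracted under /root (matching the paths used in the config files below):
cd /root/hadoop-2.2.0
mkdir -p tmp dfs/name dfs/data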
3. Go to the configuration directory, ${HADOOP_HOME}/etc/hadoop.
3-1. In hadoop-env.sh and yarn-env.sh, set JAVA_HOME to match the system's JDK location.
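The line to change in both files looks like the following (the JDK path is only an example; substitute your own):
export JAVA_HOME=/usr/java/jdk1.7.0_45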
3-2. core-site.xml contents:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9101</value>
    <description>NameNode URI (default file system)</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/hadoop-2.2.0/tmp</value>
    <description>Base temporary directory</description>
  </property>
  <property>
    <name>io.native.lib.available</name>
    <value>true</value>
    <description>Whether to use the native Hadoop libraries</description>
  </property>
</configuration>
3-3. hdfs-site.xml contents:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/root/hadoop-2.2.0/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/root/hadoop-2.2.0/dfs/data</value>
  </property>
  <property>
    <!-- dfs.secondary.http.address is the deprecated 1.x key; 2.x uses: -->
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
3-4. mapred-site.xml contents (if only mapred-site.xml.template exists, copy it to mapred-site.xml first):
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
3-5. yarn-site.xml contents:
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
3-6. Edit the masters and slaves files to match your cluster (see the example below).
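For example, with one master and two slaves (hostnames are placeholders and must resolve, e.g. via /etc/hosts); the slaves file lists the DataNode/NodeManager hosts, and the masters file, where used, lists the SecondaryNameNode host:
# ${HADOOP_HOME}/etc/hadoop/slaves
slave1
slave2
# ${HADOOP_HOME}/etc/hadoop/masters
master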
4. Use scp to sync the entire hadoop directory to each node (example below).
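For instance (slave1 is a placeholder hostname; repeat for each node):
scp -r /root/hadoop-2.2.0 root@slave1:/root/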
5. Format the NameNode (run once, on master):
hdfs namenode -format (the older form hadoop namenode -format still works in 2.x but prints a deprecation warning)
6. Start the cluster:
start-dfs.sh && start-yarn.sh (or start-all.sh, which is deprecated in 2.x)
After startup, running jps on master should show the ResourceManager, SecondaryNameNode, and NameNode processes;
on the slaves it should show NodeManager and DataNode.
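On master, for example (the PIDs here are illustrative and will differ on your machine):
jps
2481 NameNode
2672 SecondaryNameNode
2830 ResourceManager
3100 Jps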
You can also check the cluster status in the web UI at http://master:8088/cluster (the yarn.resourcemanager.webapp.address configured above).
If a process fails to start, check the corresponding *.log file under ${HADOOP_HOME}/logs on the failing node (example below).
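For example, to inspect the NameNode log on master (daemon log files follow the hadoop-<user>-<daemon>-<hostname>.log naming pattern; the user root and host master match this walkthrough):
tail -n 100 ${HADOOP_HOME}/logs/hadoop-root-namenode-master.log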