Hadoop Cluster Deployment and Installation (3)

With the JDK and SSH already installed and configured, we can now deploy Hadoop.

Use WinSCP once more to upload hadoop-0.20.2.tar.gz to the server, then use scp to copy it to the other nodes.

[hadoop@n1 software]$ gunzip hadoop-0.20.2.tar.gz

[hadoop@n1 software]$ tar -xvf hadoop-0.20.2.tar

...

[hadoop@n1 software]$ mv hadoop-0.20.2 /home/hadoop/.

[hadoop@n1 ~]$ scp /software/hadoop-0.20.2.tar hadoop@192.168.10.102:/home/hadoop/.

hadoop-0.20.2.tar                            100%  129MB  6.2MB/s  00:21 

[hadoop@n1 ~]$ scp /software/hadoop-0.20.2.tar hadoop@192.168.10.103:/home/hadoop/.

hadoop-0.20.2.tar                            100%  129MB  5.6MB/s  00:23 
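The archive still has to be unpacked on d1 and d2 as well; one way is to do it remotely from n1 (a sketch, relying on the passwordless SSH set up earlier):

[hadoop@n1 ~]$ ssh d1 'tar -xf /home/hadoop/hadoop-0.20.2.tar -C /home/hadoop'

[hadoop@n1 ~]$ ssh d2 'tar -xf /home/hadoop/hadoop-0.20.2.tar -C /home/hadoop'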

Once that is done, Hadoop is extracted on all three nodes; next, make the following configuration changes.

1) Edit conf/hadoop-env.sh to set JAVA_HOME.

# The java implementation to use.  Required.

export JAVA_HOME=/software/jdk1.6.0_33
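Before moving on, it is worth confirming that this path actually points at a working JDK (a quick check; adjust the path if your JDK lives elsewhere):

[hadoop@n1 conf]$ /software/jdk1.6.0_33/bin/java -version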

2) Edit conf/core-site.xml. The mkdir here prepares the data directory that step 3 will reference; the property sets the default filesystem URI to the NameNode on n1.

[hadoop@n1 conf]$ mkdir -p /software/hdata

<property>

<name>fs.default.name</name>

<value>hdfs://n1:9000</value>

</property>

3) Edit conf/hdfs-site.xml to set the datanode storage directory and a replication factor of 2 (one copy on each of the two datanodes).

<property>

<name>dfs.data.dir</name>

<value>/software/hdata</value>

</property>

<property>

<name>dfs.replication</name>

<value>2</value>

</property>
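dfs.data.dir has to exist and be writable by the hadoop user on every datanode, but the mkdir in step 2 was run only on n1. Create the same directory on d1 and d2 as well, for example over SSH:

[hadoop@n1 conf]$ ssh d1 'mkdir -p /software/hdata'

[hadoop@n1 conf]$ ssh d2 'mkdir -p /software/hdata'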

4) Edit conf/mapred-site.xml to point the tasktrackers at the JobTracker on n1.

<property>

<name>mapred.job.tracker</name>

<value>n1:9001</value>

</property>
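Each of the XML fragments in steps 2) to 4) belongs inside the <configuration> root element of its file, not at the top level. As an illustration, the complete core-site.xml for this setup would look like:

<?xml version="1.0"?>

<configuration>

<property>

<name>fs.default.name</name>

<value>hdfs://n1:9000</value>

</property>

</configuration>

hdfs-site.xml and mapred-site.xml follow the same pattern with their respective properties.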

5) Edit conf/masters. Despite the name, in Hadoop 0.20 this file lists the host that runs the SecondaryNameNode.

n1

6) Edit conf/slaves to list the datanode/tasktracker hosts.

d1

d2

7) Copy the modified files under conf to the corresponding directory on the other two nodes. The glob * is used rather than *.*, since files such as masters and slaves contain no dot and would otherwise be skipped:

[hadoop@n1 conf]$ scp * d1:/home/hadoop/hadoop-0.20.2/conf/.

[hadoop@n1 conf]$ scp * d2:/home/hadoop/hadoop-0.20.2/conf/.

8) Format HDFS.

[hadoop@n1 hadoop-0.20.2]$ bin/hadoop namenode -format

12/10/09 04:12:40 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG:  host = n1/192.168.10.101

...

12/10/09 04:12:41 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.

12/10/09 04:12:41 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at n1/192.168.10.101

************************************************************/
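One thing worth noticing in the format output: the NameNode metadata landed in /tmp/hadoop-hadoop/dfs/name, the default when dfs.name.dir is unset. Since /tmp is typically cleaned on reboot, losing that directory means losing all HDFS metadata. For anything beyond a throwaway test cluster, consider pinning it in hdfs-site.xml; a sketch, where /software/hname is only an example path:

<property>

<name>dfs.name.dir</name>

<value>/software/hname</value>

</property>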

9) Start Hadoop.

[hadoop@n1 hadoop-0.20.2]$ bin/start-all.sh

starting namenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-n1.out

d1: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-d1.out

d2: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-d2.out

n1: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-n1.out

starting jobtracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-n1.out

d2: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-d2.out

d1: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-d1.out

[hadoop@n1 hadoop-0.20.2]$

All daemons started successfully, with no errors reported.
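To double-check, you can run jps on each node and ask the NameNode for a datanode report (a suggested verification, not from the original session):

[hadoop@n1 hadoop-0.20.2]$ jps

[hadoop@n1 hadoop-0.20.2]$ bin/hadoop dfsadmin -report

On n1, jps should list NameNode, SecondaryNameNode, and JobTracker; on d1 and d2, DataNode and TaskTracker. The dfsadmin report should show two datanodes available.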
