Standalone pseudo-distributed and fully distributed deployment (4)

Append the following at the end:
export HADOOP_HOME=/usr/local/hadoop-1.0.4

export PATH=$HADOOP_HOME/bin:$PATH
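Assuming the install path above, the two lines can be appended and checked in one go; the profile path here is a local stand-in for `~/.bashrc`:

```shell
# Append the Hadoop environment variables to a shell profile.
# PROFILE is a stand-in for demonstration; on a real node use ~/.bashrc.
PROFILE=./demo_profile
cat >> "$PROFILE" <<'EOF'
export HADOOP_HOME=/usr/local/hadoop-1.0.4
export PATH=$HADOOP_HOME/bin:$PATH
EOF
# On a real node, reload with: source "$PROFILE"
grep HADOOP_HOME "$PROFILE"
```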

5. Configure /etc/hosts

192.168.1.11 namenode0
#192.168.1.12 namenode1
192.168.1.13 datanode00
192.168.1.14 datanode01
6. Configure conf/masters and conf/slaves

conf/masters
192.168.1.11

conf/slaves
192.168.1.13
192.168.1.14
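The two files can also be written from the shell; a minimal sketch, assuming it is run from the Hadoop install directory (`conf` is the Hadoop 1.x layout):

```shell
# Write the master and slave lists used in this guide.
mkdir -p conf
echo '192.168.1.11' > conf/masters
printf '%s\n' 192.168.1.13 192.168.1.14 > conf/slaves
cat conf/masters conf/slaves
```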
7. Configure conf/hadoop-env.sh
Add:
export JAVA_HOME=/usr/java/jdk1.6.0_21

8. Configure conf/core-site.xml

Add:

<property>
<name>fs.default.name</name> 
<!--<value>hdfs://0.0.0.0:9000</value> -->
    <value>hdfs://namenode0:9000</value>
</property>
<property> 
<name>hadoop.tmp.dir</name> 
<value>/usr/local/hadoop/tmp</value>
</property>
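Note that these `<property>` blocks (here and in the steps below) belong inside the file's `<configuration>` root element, so the complete file has this shape:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- the <property> blocks above go here -->
</configuration>
```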


9. Configure conf/hdfs-site.xml
Add:

<property>
 <name>dfs.replication</name>
 <value>2</value>
</property>

<property> 
  <name>dfs.http.address</name>
 <!-- <value>0.0.0.0:50070</value>backup -->
  <value>192.168.1.11:50070</value>
</property>
<property> 
  <name>dfs.name.dir</name> 
 <value>/usr/local/hadoop/local/namenode</value>
</property>
<property>
  <name>dfs.name.edits.dir</name> 
  <value>/usr/local/hadoop/local/editlog</value>
</property>
<property> 
  <name>dfs.data.dir</name>
  <value>/usr/local/hadoop/block</value>
</property>


10. Create the related directories (Hadoop will also create them automatically based on the configuration above):
mkdir /usr/local/hadoop/tmp            # Hadoop temp directory
mkdir /usr/local/hadoop/local
mkdir /usr/local/hadoop/local/namenode # fsimage storage directory
mkdir /usr/local/hadoop/local/editlog  # edit-log storage directory
mkdir /usr/local/hadoop/block          # data-block storage directory

Hadoop only creates the directories a node actually needs. On namenode0, for example, the block directory goes unused even if you create it, and if it is not needed Hadoop will not create it; a datanode creates only tmp and block, never local.

Note: under Cygwin the directories are not created automatically with this configuration, and after creating them by hand you must change their permissions to 755, otherwise the TaskTracker fails to start (see the TaskTracker logs).
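The directory creation and the 755 permissions can be combined into one pass; a sketch, with BASE as a stand-in path for illustration (the guide uses /usr/local/hadoop):

```shell
# Create all directories from step 10 in one pass; mkdir -p also creates
# the intermediate "local" parent. BASE is a stand-in for demonstration.
BASE=./hadoop-demo
mkdir -p "$BASE/tmp" "$BASE/local/namenode" "$BASE/local/editlog" "$BASE/block"
# Required under Cygwin (harmless elsewhere): mode 755,
# or the TaskTracker fails to start.
chmod -R 755 "$BASE"
ls "$BASE"
```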

[user@namenode0 local]$ ls -sR hadoop
hadoop:
total 16
8 local 8 tmp

hadoop/local:
total 16
8 editlog 8 namenode

hadoop/local/editlog:
total 28
8 current 8 image 4 in_use.lock 8 previous.checkpoint

hadoop/local/editlog/current:
total 32
16 edits 8 fstime 8 VERSION

hadoop/local/editlog/image:
total 8
8 fsimage

hadoop/local/editlog/previous.checkpoint:
total 24
8 edits 8 fstime 8 VERSION

hadoop/local/namenode:
total 28
8 current 8 image 4 in_use.lock 8 previous.checkpoint

hadoop/local/namenode/current:
total 24
8 fsimage 8 fstime 8 VERSION

hadoop/local/namenode/image:
total 8
8 fsimage

hadoop/local/namenode/previous.checkpoint:
total 24
8 fsimage 8 fstime 8 VERSION

hadoop/tmp:
total 8
8 dfs

hadoop/tmp/dfs:
total 8
8 namesecondary

hadoop/tmp/dfs/namesecondary:
total 4
4 in_use.lock

11. Edit the masters file on namenode0


12. Edit the slaves file on namenode0

13. Edit conf/mapred-site.xml and add:

<property>
<name>mapred.job.tracker</name>
<value>namenode0:9001</value>
</property>

<property>
<name>mapred.child.tmp</name>
<value>/usr/local/hadoop/tmp</value>
</property>

14. Copy the Hadoop tree to the other nodes

scp -r hadoop-1.0.4 192.168.1.13:/usr/local/ (192.168.1.13 can also be written as datanode00; equivalently: scp -r hadoop-1.0.4 user@192.168.1.13:/usr/local/). Repeat for 192.168.1.14 (datanode01).
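To cover every slave at once, you can loop over conf/slaves. This is a dry-run sketch: it only prints the commands (drop the leading echo to actually copy, which requires the passwordless SSH set up earlier), and SSH_USER is a placeholder for your account:

```shell
# Dry run: print one scp command per slave listed in conf/slaves.
mkdir -p conf
printf '%s\n' 192.168.1.13 192.168.1.14 > conf/slaves   # from step 6
SSH_USER=user   # placeholder: substitute your cluster account
while read -r node; do
  echo scp -r /usr/local/hadoop-1.0.4 "$SSH_USER@$node:/usr/local/"
done < conf/slaves
```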

15. Format the active namenode (192.168.1.11)

bin/hadoop namenode -format

16. Start the cluster: bin/start-all.sh
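After startup, running jps on each node is a quick sanity check; a sketch assuming the cluster layout above (the daemon names are the stock Hadoop 1.x ones):

```shell
# On namenode0: start everything, then list the running Java daemons.
bin/start-all.sh
jps                    # expect: NameNode, SecondaryNameNode, JobTracker
ssh 192.168.1.13 jps   # on each datanode, expect: DataNode, TaskTracker
```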

Content copyright notice: unless otherwise stated, articles on this site are original.

Reprinted from: http://www.heiqu.com/035527e4443e0e3a3466c7d3cda12ead.html