Hadoop 1.2.1 for CentOS 6.3 64bit (2)

6. Install Hadoop on the Master
Upload the downloaded package to /soft:

cd /soft

gunzip hadoop-1.2.1.tar.gz

tar -xvf hadoop-1.2.1.tar
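
The two steps can also be combined, since tar handles gzip decompression itself:

tar -zxvf hadoop-1.2.1.tar.gz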

Move the extracted directory to /usr and rename it hadoop:

mv hadoop-1.2.1 /usr/hadoop


cd /usr/hadoop

mkdir /usr/hadoop/tmp

chown -R hadoop:hadoop /usr/hadoop
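
A quick sanity check that the ownership change took effect:

ls -ld /usr/hadoop /usr/hadoop/tmp
# both entries should now show hadoop:hadoop as owner and group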

Add the Hadoop environment variables to /etc/profile:

cat >> /etc/profile <<'EOF'

# set hadoop path
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
EOF

source /etc/profile
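
To confirm the variables are live in the current shell:

echo $HADOOP_HOME
hadoop version
# should print /usr/hadoop and "Hadoop 1.2.1"; note that the hadoop command
# needs JAVA_HOME, so this assumes the JDK export from part 1 is already in
# place (otherwise it will work after hadoop-env.sh is set in the next step)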

7. Configure Hadoop
1) Configure /usr/hadoop/conf/hadoop-env.sh

Append the following at the end of the file:

# set java environment
export JAVA_HOME=/usr/java/jdk1.7.0_40
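
It is worth verifying that this JDK path actually exists before moving on:

/usr/java/jdk1.7.0_40/bin/java -version
# should report java version "1.7.0_40"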

2) Configure /usr/hadoop/conf/core-site.xml

Template:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <!-- file system properties -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.102:9000</value>
  </property>
</configuration>
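
hadoop.tmp.dir points at the directory created earlier, and fs.default.name is the NameNode's RPC address (hostname or IP plus port). Assuming /etc/hosts maps Master.Hadoop to 192.168.1.102, as the hostnames used elsewhere in this guide suggest, the hostname form would be an equivalent alternative:

  <property>
    <name>fs.default.name</name>
    <value>hdfs://Master.Hadoop:9000</value>
  </property>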


3) Configure /usr/hadoop/conf/hdfs-site.xml

Template:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
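
dfs.replication sets how many copies HDFS keeps of each block (the Hadoop default is 3). A value of 1 saves space but gives no redundancy; with the three datanodes in this cluster it could be raised as high as 3:

  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>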


4) Configure /usr/hadoop/conf/mapred-site.xml

Template:


<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.1.102:9001</value>
  </property>
</configuration>
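
mapred.job.tracker is the JobTracker address in host:port form. As with fs.default.name, the hostname form works provided /etc/hosts resolves it (assumed here):

  <property>
    <name>mapred.job.tracker</name>
    <value>Master.Hadoop:9001</value>
  </property>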


5) Configure /usr/hadoop/conf/masters

Template:


192.168.1.102

6) Configure /usr/hadoop/conf/slaves

192.168.1.100

192.168.1.101

192.168.1.103
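
Before installing on the slaves, it is worth confirming that passwordless SSH (set up in part 1) still reaches every node listed above; a quick check, assuming the same root access used below:

for ip in 192.168.1.100 192.168.1.101 192.168.1.103; do
  ssh root@$ip hostname   # should print each slave's hostname without a password prompt
done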

8. Install Hadoop on the Slaves
1) Copy /usr/hadoop from the master straight into /usr/ on each slave (a single-loop version is sketched after the chown commands below):

scp -r /usr/hadoop root@Slave1.Hadoop:/usr/

scp -r /usr/hadoop root@Slave2.Hadoop:/usr/

scp -r /usr/hadoop root@Slave3.Hadoop:/usr/


Then fix the ownership on each slave:

Slave1.Hadoop@ chown -R hadoop:hadoop /usr/hadoop

Slave2.Hadoop@ chown -R hadoop:hadoop /usr/hadoop

Slave3.Hadoop@ chown -R hadoop:hadoop /usr/hadoop
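
The scp/chown pairs above can also be collapsed into a single loop run from the master; this is just a convenience sketch using the same hostnames and root access:

for h in Slave1.Hadoop Slave2.Hadoop Slave3.Hadoop; do
  scp -r /usr/hadoop root@$h:/usr/                    # copy the whole installation
  ssh root@$h 'chown -R hadoop:hadoop /usr/hadoop'    # fix ownership remotely
done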


2) Replicate the /etc/profile changes on each slave as well

Add the Hadoop environment variables to /etc/profile on each slave:

cat >> /etc/profile <<'EOF'

# set hadoop path
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
EOF

source /etc/profile
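
Instead of logging into each slave, the same change can be pushed from the master; a sketch assuming the root SSH access used earlier (the backslash-quoted EOF keeps $PATH and $HADOOP_HOME literal on the remote side too):

for h in Slave1.Hadoop Slave2.Hadoop Slave3.Hadoop; do
  ssh root@$h 'cat >> /etc/profile <<\EOF
# set hadoop path
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
EOF'
done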

9. Start and verify Hadoop
1) Format the HDFS file system

Master.Hadoop@ hadoop namenode -format
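
Formatting should only be done once; re-running it on a cluster that already holds data wipes the namenode metadata. If the format succeeds, the name directory should appear under hadoop.tmp.dir (Hadoop 1.x defaults dfs.name.dir to ${hadoop.tmp.dir}/dfs/name, an assumption used here):

ls /usr/hadoop/tmp/dfs/name
# a "current" subdirectory indicates a formatted namenode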

2) Start Hadoop

[root@Master ~]# start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /usr/hadoop/libexec/../logs/hadoop-root-namenode-Master.Hadoop.out
192.168.1.103: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-Slave3.Hadoop.out
192.168.1.101: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-Slave2.Hadoop.out
192.168.1.100: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-Slave1.Hadoop.out
192.168.1.102: starting secondarynamenode, logging to /usr/hadoop/libexec/../logs/hadoop-root-secondarynamenode-Master.Hadoop.out
starting jobtracker, logging to /usr/hadoop/libexec/../logs/hadoop-root-jobtracker-Master.Hadoop.out
192.168.1.101: starting tasktracker, logging to /usr/hadoop/libexec/../logs/hadoop-root-tasktracker-Slave2.Hadoop.out
192.168.1.103: starting tasktracker, logging to /usr/hadoop/libexec/../logs/hadoop-root-tasktracker-Slave3.Hadoop.out
192.168.1.100: starting tasktracker, logging to /usr/hadoop/libexec/../logs/hadoop-root-tasktracker-Slave1.Hadoop.out
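
If startup succeeded, jps (shipped with the JDK) should list the expected daemons on each node:

Master.Hadoop@ jps
# master: NameNode, SecondaryNameNode, JobTracker
# each slave: DataNode, TaskTracker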
