Hadoop Installation and Configuration: Using Cloudera (2)

6. Start the nodes
 
On the master, start the namenode:
 
service hadoop-0.20-namenode start
 
Start the secondarynamenode:
 
service hadoop-0.20-secondarynamenode start
 
Start the datanode service on each data node:
 
service hadoop-0.20-datanode start
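
To confirm that the daemons actually came up, you can ask the namenode for a cluster report (a quick sanity check on top of the Cloudera steps; it uses the same hdfs user as the commands in step 7):

# run on the master; every datanode started above should appear as a live node
sudo -u hdfs hadoop dfsadmin -report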
 
7. Create the HDFS /tmp directory and mapred.system.dir
 
sudo -u hdfs hadoop fs -mkdir /mapred/system
 sudo -u hdfs hadoop fs -chown mapred:hadoop /mapred/system
 sudo -u hdfs hadoop fs -chmod 700 /mapred/system
 
mapred.system.dir must be created before the jobtracker is started. Next, create the /tmp directory in HDFS:
 
sudo -u hdfs hadoop dfs -mkdir /tmp
 sudo -u hdfs hadoop dfs -chmod -R 1777 /tmp
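
A quick way to verify that both directories got the intended owner and permissions (just a verification sketch, not required by the setup):

# /mapred/system should be owned by mapred:hadoop with mode 700,
# and /tmp should show the sticky bit (drwxrwxrwt)
sudo -u hdfs hadoop fs -ls /
sudo -u hdfs hadoop fs -ls /mapred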
 
8. Start MapReduce
 
On each datanode, run:
 
service hadoop-0.20-tasktracker start
 
On the namenode, start the jobtracker:
 
service hadoop-0.20-jobtracker start
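
Once the jobtracker and the tasktrackers are up, a small job from the bundled examples jar confirms that MapReduce works end to end (the jar path below is an assumption and may differ between CDH releases):

# submit the pi estimator with 2 map tasks and 1000 samples per map;
# run it as a user that is allowed to write to HDFS
sudo -u hdfs hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar pi 2 1000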
 
9. Configure the services to start at boot
 
namenode node: only the namenode and jobtracker need to start at boot; turn the other services off
 
chkconfig hadoop-0.20-namenode on
 chkconfig hadoop-0.20-jobtracker on
 chkconfig hadoop-0.20-secondarynamenode off
 chkconfig hadoop-0.20-tasktracker off
 chkconfig hadoop-0.20-datanode off
 
datanode nodes: the datanode and tasktracker need to start
 
chkconfig hadoop-0.20-namenode off
 chkconfig hadoop-0.20-jobtracker off
 chkconfig hadoop-0.20-secondarynamenode off
 chkconfig hadoop-0.20-tasktracker on
 chkconfig hadoop-0.20-datanode on
 
secondarynamenode node: only the secondarynamenode needs to start
 
chkconfig hadoop-0.20-secondarynamenode on
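
To double-check the boot-time settings on each machine, list the Hadoop services registered with chkconfig:

# each hadoop-0.20-* service should show on/off exactly as configured above
chkconfig --list | grep hadoop-0.20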
 
Note:
 
These Hadoop packages run as independent services, so starting them does not require SSH. Alternatively, you can set up SSH and manage the services with start-all.sh and stop-all.sh, as sketched below.
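
For reference, once passwordless SSH from the master to every slave is set up and the conf/slaves file lists the data nodes, the whole cluster can be started and stopped from the master in one step (the script path is an assumption; CDH normally installs the Hadoop 0.20 scripts under /usr/lib/hadoop-0.20/bin):

# run on the master; the scripts ssh into every host listed in the slaves file
/usr/lib/hadoop-0.20/bin/start-all.sh
/usr/lib/hadoop-0.20/bin/stop-all.sh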
