Detailed Setup of a Small Three-Host Hadoop Cluster (3)

Modify mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>

mapred.job.tracker sets the host on which the JobTracker runs; here it listens on port 9001.

Modify masters

master

Although the masters file contains master, it does not actually designate the master node; it lists the host(s) on which the SecondaryNameNode is started.
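For illustration only (a hypothetical layout, not used in this setup): if the SecondaryNameNode were to run on slave1 instead, the masters file would simply contain that hostname, and start-dfs.sh would launch the SecondaryNameNode there:

slave1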

Modify slaves

master
slave1
slave2

The slaves file lists all slave nodes in the cluster; the start scripts read it to launch a DataNode and TaskTracker on each listed host. Here the master also acts as a slave.

④ Add the Hadoop environment variables, then run source ~/.bash_profile so they take effect

export JAVA_HOME=/root/bin/jdk1.6.0_32
export HADOOP_HOME=/root/bin/hadoop-0.20.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
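A quick sanity check (assuming the export lines above were appended to ~/.bash_profile):

source ~/.bash_profile
echo $HADOOP_HOME    # should print /root/bin/hadoop-0.20.2
hadoop version       # should report Hadoop 0.20.2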

⑤ Copy the configured hadoop-0.20.2 directory to the other two hosts and apply the same settings there (same JDK path and environment variables); a copy sketch follows below
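One way to do the copy (a sketch only; it assumes the same /root/bin layout on every host and that passwordless SSH for root was already set up earlier in this series):

scp -r /root/bin/hadoop-0.20.2 root@slave1:/root/bin/
scp -r /root/bin/hadoop-0.20.2 root@slave2:/root/bin/
# then append the same export lines to ~/.bash_profile on slave1 and slave2 and source it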

⑥ The Hadoop cluster configuration is now complete. Typing hadoop shows the available operations:

[root@master ~]# hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  mradmin              run a Map-Reduce admin client
  fsck                 run a DFS filesystem checking utility
  fs                   run a generic filesystem user client
  balancer             run a cluster balancing utility
  jobtracker           run the MapReduce job Tracker node
  pipes                run a Pipes job
  tasktracker          run a MapReduce task Tracker node
  job                  manipulate MapReduce jobs
  queue                get information regarding JobQueues
  version              print the version
  jar <jar>            run a jar file
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME <src>* <dest> create a hadoop archive
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME
Most commands print help when invoked w/o parameters.

⑦ Next, format HDFS for the first time

Run the following on the command line:

hadoop namenode -format
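After the format succeeds, the daemons are usually started from master; a minimal sketch (start-all.sh ships with hadoop-0.20.2 and is on the PATH set above, and jps comes with the JDK):

start-all.sh    # starts NameNode/JobTracker on master and DataNode/TaskTracker on the hosts listed in slaves
jps             # run on each host to verify which Hadoop daemons are up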
