Hadoop 2.7.2 Cluster Setup on CentOS 7.2 (2)

4.3 vi etc/hadoop/hdfs-site.xml  #local settings for the HDFS NameNode and DataNode
<configuration>
 
<property>
<name>dfs.namenode.name.dir</name>
<value>/data/hadoop/hdfs/name</value>
<!-- Directory where the NameNode keeps its metadata (fsimage/edits) -->
</property>
 
<property>
<name>dfs.datanode.data.dir</name>
<value>/data/hadoop/hdfs/data</value>
<!-- DataNode block storage path; several partitions/disks can be listed, separated by commas -->
</property>
 
<property>
<name>dfs.namenode.http-address</name>
<value>master:50070</value>
<!-- Host and port of the NameNode web UI -->
</property>
 
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node1:50090</value>
<!-- Host and port of the Secondary NameNode web UI -->
</property>
 
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
 
<property>
<name>dfs.replication</name>
<value>3</value>
<!-- Number of replicas HDFS keeps of each block; 3 is the usual choice -->
</property>
 
<property>
<name>dfs.datanode.du.reserved</name>
<value>1073741824</value>
<!-- Reserve 1 GB per disk for non-HDFS use rather than writing it full; in bytes -->
</property>
 
<property>
<name>dfs.blocksize</name>
<value>134217728</value>
<!-- HDFS block size, here 128 MB per block (dfs.blocksize supersedes the deprecated dfs.block.size) -->
</property>
 
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
<!-- Disable HDFS permission checking -->
</property>
 
</configuration>
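The name and data directories above must exist on every node before HDFS is started. A minimal sketch, with the paths taken from the config above:

mkdir -p /data/hadoop/hdfs/name /data/hadoop/hdfs/data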

4.4 cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml  #the file ships only as a template, so copy it first
vi etc/hadoop/mapred-site.xml  #run MapReduce on the YARN framework and set the JobHistory server and web addresses
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>master:50030</value>
<!-- JobTracker settings are MRv1 leftovers and are ignored when the framework is yarn -->
</property>
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>

4.5 vi etc/hadoop/yarn-site.xml  #configure the ResourceManager's RPC, scheduler, admin, and web addresses
<configuration>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
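One gap worth flagging: Hadoop 2.x NodeManagers need the MapReduce shuffle auxiliary service before MR jobs can run on YARN. The original does not show it, so treat this as a suggested addition to yarn-site.xml:

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>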

4.6 vi etc/hadoop/hadoop-env.sh and vi etc/hadoop/yarn-env.sh
In both files, replace the ${JAVA_HOME} reference with the explicit JDK path /usr/jdk1.7.0_79, as shown below.
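The resulting line in hadoop-env.sh (yarn-env.sh takes the same value); the path is the JDK installed earlier in this series:

export JAVA_HOME=/usr/jdk1.7.0_79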

5. Verify the single-node Hadoop setup

Test the HDFS NameNode and DataNode:
hadoop-daemon.sh start namenode
chmod go-w /data/hadoop/hdfs/data/   #the DataNode refuses a data dir that is group/other-writable
hadoop-daemon.sh start datanode
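Note: a NameNode will not start against an unformatted metadata directory; if formatting was not already done earlier in this series, run hdfs namenode -format once first. With both daemons up, the NameNode web UI from dfs.namenode.http-address should answer at http://master:50070.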

Test the ResourceManager:
yarn-daemon.sh start resourcemanager

Test the NodeManager:
yarn-daemon.sh start nodemanager

Test the JobHistory server:
mr-jobhistory-daemon.sh start historyserver
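If it started cleanly, the JobHistory web UI configured in mapred-site.xml should answer at http://master:19888.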

Run jps:
99297 Jps
99244 DataNode
98956 JobHistoryServer
98820 NodeManager
98118 NameNode
98555 ResourceManager

All six processes being present indicates the single-node Hadoop installation succeeded.

6. Cluster setup
Push the configured Hadoop directory to the other nodes:
scp -r $HADOOP_HOME/ node1:/usr/local/
scp -r $HADOOP_HOME/ node2:/usr/local/
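The excerpt stops here; in a typical 2.7.x setup the remaining steps are to list the worker hostnames in etc/hadoop/slaves on master and then start the cluster from there. A sketch under that assumption (add master to slaves too if it should keep running a DataNode, as in the single-node test above):

printf 'node1\nnode2\n' > $HADOOP_HOME/etc/hadoop/slaves
start-dfs.sh    #starts the NameNode, DataNodes, and SecondaryNameNode
start-yarn.sh   #starts the ResourceManager and NodeManagers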
