Installing Hadoop 3.0.0 on CentOS 7 (2)

  <!-- hdfs-site.xml -->
  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
      <description>Number of replicas; the default is 3, and it should not exceed the number of DataNode machines</description>
    </property>
  </configuration>

⑸、Configure mapred-site.xml:

  <!-- Run MapReduce on YARN -->
  <configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
  </configuration>

⑹、Configure yarn-site.xml:

  <!-- yarn-site.xml: how reducers fetch intermediate data (the shuffle auxiliary service) -->
  <configuration>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
  </configuration>

  Note: the configurations above are minimal; many other options can be added as needed.

       6)、Copy /usr/hadoop to the other servers:

          scp -r /usr/hadoop root@192.168.1.11:/usr/hadoop
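With more than one worker node, the copy can be scripted. A minimal sketch; the worker IPs here are hypothetical (only 192.168.1.11 appears in the article) — substitute your own. The `echo` makes this a dry run that only prints each command; remove it to actually copy:

```shell
#!/bin/sh
# Dry run: print one scp command per worker node.
# The IP list is a placeholder -- replace with your real hosts,
# and drop `echo` to perform the copy.
for host in 192.168.1.11 192.168.1.12; do
    echo scp -r /usr/hadoop "root@${host}:/usr/hadoop"
done
```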

 7)、Format the NameNode:

  # cd /usr/hadoop/hadoop-3.0.0
  # ./bin/hdfs namenode -format

  On success you will see "successfully formatted" and "Exitting with status 0" in the output; "Exitting with status 1" indicates an error.

  Note: only the NameNode needs to be formatted; DataNodes do not (if a DataNode was formatted by mistake, delete everything under its /usr/hadoop/tmp directory). This is why the installation directory is copied to the other servers before formatting.
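Rather than scanning the log for "Exitting with status 0", the same success/failure check can be automated from the command's exit status. A sketch of the pattern; `true` stands in for `./bin/hdfs namenode -format` here so the snippet runs anywhere:

```shell
#!/bin/sh
# Stand-in for: ./bin/hdfs namenode -format
# (run from /usr/hadoop/hadoop-3.0.0 on the real NameNode).
true
status=$?
if [ "$status" -eq 0 ]; then
    # Corresponds to "Exitting with status 0" in the log.
    echo "namenode formatted successfully"
else
    # Corresponds to "Exitting with status 1".
    echo "format failed with status $status" >&2
fi
```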

    4. Testing:

      1、Start HDFS:

      # cd /usr/hadoop/hadoop-3.0.0
      # sbin/start-dfs.sh

If the script fails with errors like the following:

      ERROR: Attempting to launch hdfs namenode as root
      ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting launch.
      Starting datanodes
      ERROR: Attempting to launch hdfs datanode as root
      ERROR: but there is no HDFS_DATANODE_USER defined. Aborting launch.
      Starting secondary namenodes [localhost.localdomain]
      ERROR: Attempting to launch hdfs secondarynamenode as root
      ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting launch.

Solution: the errors are caused by missing user definitions, so edit both the start and stop scripts:

      $ vim sbin/start-dfs.sh
      $ vim sbin/stop-dfs.sh

Add the following at the top of each file:

      HDFS_DATANODE_USER=root
      HADOOP_SECURE_DN_USER=hdfs
      HDFS_NAMENODE_USER=root
      HDFS_SECONDARYNAMENODE_USER=root

2)、Start the ResourceManager and NodeManager:

      # cd /usr/hadoop/hadoop-3.0.0
      # sbin/start-yarn.sh

      If startup fails with an error like the following:

      Starting resourcemanager
      ERROR: Attempting to launch yarn resourcemanager as root
      ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting launch.

      Solution: again caused by missing user definitions, so edit both the start and stop scripts:

      $ vim sbin/start-yarn.sh
      $ vim sbin/stop-yarn.sh

Add the following at the top of each file:

      YARN_RESOURCEMANAGER_USER=root
      HADOOP_SECURE_DN_USER=yarn
      YARN_NODEMANAGER_USER=root
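Instead of patching the four scripts individually, Hadoop 3 also honors these user variables when they are exported from etc/hadoop/hadoop-env.sh, which the launcher scripts all source. An alternative sketch (not the method the article uses), assuming the install path above:

```shell
# Appended to /usr/hadoop/hadoop-3.0.0/etc/hadoop/hadoop-env.sh
# so both start-dfs.sh/stop-dfs.sh and start-yarn.sh/stop-yarn.sh pick them up.
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```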

3)、Verify startup:

Run the jps command; if the expected Hadoop daemon processes are listed, the setup is essentially complete.
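The jps check can be scripted by grepping for each expected daemon. In this sketch, `sample` is hard-coded illustrative output (assuming all roles run on one node, as configured above); on a live machine replace it with `sample=$(jps)`:

```shell
#!/bin/sh
# Illustrative jps output; on a real node use: sample=$(jps)
sample="1234 NameNode
2345 DataNode
3456 SecondaryNameNode
4567 ResourceManager
5678 NodeManager
6789 Jps"

# Every expected daemon should appear; report any that are missing.
# -w matches whole words, so "NameNode" does not match "SecondaryNameNode".
for daemon in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    if echo "$sample" | grep -qw "$daemon"; then
        echo "$daemon is running"
    else
        echo "$daemon is MISSING" >&2
    fi
done
```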

Note: HDFS, the ResourceManager, and the NodeManager can also be started at once with:

      # cd /usr/hadoop/hadoop-3.0.0
      # sbin/start-all.sh


Content copyright notice: unless otherwise stated, articles on this site are original.

Reprints must cite the source: https://www.heiqu.com/33badf08f9875c20fd66e25626d246a8.html