Hadoop 2.7.1 HA Cluster Deployment (4)

<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
    shell(/bin/true)</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
</property>

</configuration>
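The two fencing methods listed above are tried in order: `sshfence` logs in to the failed NameNode over SSH and kills its process, and `shell(/bin/true)` is a fallback that always "succeeds" so failover can still proceed when the dead host is unreachable. A quick local sanity check for the key file named in `dfs.ha.fencing.ssh.private-key-files` (a sketch; the path is the one configured above, so adjust if yours differs):

```shell
# Sanity check for the sshfence private key (path taken from the config above).
KEY=/home/hadoop/.ssh/id_rsa
if [ -f "$KEY" ]; then
  KEY_STATUS=present
  # sshd rejects keys that are group/world readable, so mode 600 is expected.
  ls -l "$KEY"
else
  KEY_STATUS=missing
  echo "fencing key $KEY not found on this host"
fi
echo "key status: $KEY_STATUS"
```

Run this on both NameNode hosts; `sshfence` only works if each NameNode can SSH to the other with this key without a password.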

(6) Configure the mapred-site.xml file

-->> Add the MapReduce settings (run on the YARN framework; JobHistory address and web address)

[root@masternode hadoop]# mv mapred-site.xml.template mapred-site.xml

vi mapred-site.xml

<configuration>
    <!-- Run MapReduce on the YARN framework -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.job.maps</name>
        <value>12</value>
    </property>
    <property>
        <name>mapreduce.job.reduces</name>
        <value>12</value>
    </property>
</configuration>
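Since mapred-site.xml is only a few properties long, a quick grep can confirm the framework is really set to `yarn` before moving on. The sketch below writes an inline copy of the file so it can run anywhere; in practice point the same pipeline at `etc/hadoop/mapred-site.xml`:

```shell
# Sketch: extract the mapreduce.framework.name value from a mapred-site.xml.
# The XML is inlined here for illustration; substitute your real file path.
cat > /tmp/mapred-check.xml <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF
# grep -A1 pairs the <name> line with the <value> line that follows it.
FRAMEWORK=$(grep -A1 'mapreduce.framework.name' /tmp/mapred-check.xml \
  | sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p')
echo "framework: $FRAMEWORK"
```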

(7) Configure the yarn-site.xml file -->> add the YARN HA settings

<configuration>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>259200</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>slavenode1:2181,slavenode3:2181,slavenode2:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster-yarn</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>slavenode1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>masternode</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>slavenode1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>masternode:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>slavenode1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>masternode:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>slavenode1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>masternode:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>slavenode1:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>masternode:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>slavenode1:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>masternode:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>
        <value>/yarn-leader-election</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
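Once both ResourceManagers are running, their HA state can be queried with `yarn rmadmin`; `rm1` and `rm2` are the IDs declared in `yarn.resourcemanager.ha.rm-ids` above. The `command -v` guard is only so the sketch degrades gracefully on a machine that does not have Hadoop on the PATH:

```shell
# Check which ResourceManager is Active (run on a cluster node, e.g. slavenode1).
if command -v yarn >/dev/null 2>&1; then
  yarn rmadmin -getServiceState rm1   # reports "active" or "standby"
  yarn rmadmin -getServiceState rm2
  RM_CHECK=done
else
  echo "yarn not on PATH; run this on masternode or slavenode1"
  RM_CHECK=skipped
fi
```

Exactly one of the two should report `active`; if both report `standby`, check the ZooKeeper quorum listed in `yarn.resourcemanager.zk-address`.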

[root@slavenode1 hadoop-2.7.2]# vi etc/hadoop/log4j.properties

    log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR

This is added to suppress the NativeCodeLoader warning that would otherwise be reported every time a daemon process starts.

With this, the Hadoop configuration on the Master machine is complete; what remains is to configure Hadoop on the Slave machines.
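Rather than repeating every edit by hand, the finished configuration files can be pushed to the slaves in one pass. A sketch (the install path and the `hadoop` remote user are assumptions, not from this guide; the `scp` line is left commented so nothing is copied until the path is confirmed):

```shell
# Distribute the finished configuration from masternode to every slave.
# The install path below is an assumed example; adjust to your layout.
HADOOP_CONF=${HADOOP_HOME:-/usr/local/hadoop-2.7.1}/etc/hadoop
SYNCED=0
for host in slavenode1 slavenode2 slavenode3; do
  echo "would copy $HADOOP_CONF/*.xml to $host"
  # scp "$HADOOP_CONF"/*.xml hadoop@"$host":"$HADOOP_CONF"/
  SYNCED=$((SYNCED + 1))
done
echo "hosts covered: $SYNCED"
```

In an HA setup every node must see identical *-site.xml files, so copying the whole `etc/hadoop` directory is usually safer than copying individual files.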
