Hadoop 2.2.0 NN HA Detailed Configuration + Client Transparency Test [Complete Edition] (2)

# Important: next, create a myid file under each node's dataDir; its content is the corresponding A from the server.A entries in zoo.cfg. In other words, the myid file's content is different on every ZK node!
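For example, if zoo.cfg defines server.1, server.2, and server.3, the myid files would be created as follows. This is a sketch: the dataDir path /home/yarn/Zookeeper/zkdata is an assumption, so substitute whatever dataDir you set in zoo.cfg.

# On the node listed as server.1 in zoo.cfg (dataDir path is an assumption):
echo 1 > /home/yarn/Zookeeper/zkdata/myid
# On the node listed as server.2:
echo 2 > /home/yarn/Zookeeper/zkdata/myid
# On the node listed as server.3:
echo 3 > /home/yarn/Zookeeper/zkdata/myid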

3. Modify the environment variables on each node

Add the following to /etc/profile:

export ZOOKEEPER_HOME=/home/yarn/Zookeeper/zookeeper-3.4.6

and append the following to PATH:

$ZOOKEEPER_HOME/bin

Note: the export ZOOKEEPER_HOME line must come before the PATH line.
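Putting the two together, the tail of /etc/profile on every node should look like this, with the export line above the PATH line:

export ZOOKEEPER_HOME=/home/yarn/Zookeeper/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin

Run source /etc/profile (or log in again) for the change to take effect in the current shell.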

Next, modify the Hadoop configuration files.

4. Modify core-site.xml

<configuration>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://myhadoop</value>
    <description>Note: myhadoop is the logical name of the cluster and must match dfs.nameservices in hdfs-site.xml!</description>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/yarn/Hadoop/hdfs2.0/tmp</value>
  </property>

  <property>
    <name>ha.zookeeper.quorum</name>
    <value>master:2181,slave1:2181,slave2:2181</value>
    <description>The IP/host of each ZK node and the port clients use to connect to ZK; this port must match clientPort in zoo.cfg!</description>
  </property>

</configuration>
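Before moving on, it is worth sanity-checking that the quorum in ha.zookeeper.quorum is actually reachable on the client port. One way, using the standard ZooKeeper scripts:

# On each ZK node, confirm its role in the quorum (leader/follower):
zkServer.sh status
# Or connect through the same address list that core-site.xml will use:
zkCli.sh -server master:2181,slave1:2181,slave2:2181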

5. Modify hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

  <property>
    <name>dfs.nameservices</name>
    <value>myhadoop</value>
    <description>
      Comma-separated list of nameservices.
      Must match fs.defaultFS in core-site.xml.
    </description>
  </property>

  <property>
    <name>dfs.ha.namenodes.myhadoop</name>
    <value>nn1,nn2</value>
    <description>
      The prefix for a given nameservice, contains a comma-separated
      list of namenodes for a given nameservice (eg EXAMPLENAMESERVICE).
    </description>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.myhadoop.nn1</name>
    <value>master:8020</value>
    <description>
      RPC address for namenode1 of myhadoop
    </description>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.myhadoop.nn2</name>
    <value>slave1:8020</value>
    <description>
      RPC address for namenode2 of myhadoop
    </description>
  </property>

  <property>
    <name>dfs.namenode.http-address.myhadoop.nn1</name>
    <value>master:50070</value>
    <description>
      The address and the base port where the dfs namenode1 web ui will listen on.
    </description>
  </property>

  <property>
    <name>dfs.namenode.http-address.myhadoop.nn2</name>
    <value>slave1:50070</value>
    <description>
      The address and the base port where the dfs namenode2 web ui will listen on.
    </description>
  </property>

  <property>
    <name>dfs.namenode.servicerpc-address.myhadoop.nn1</name>
    <value>master:53310</value>
  </property>

  <property>
    <name>dfs.namenode.servicerpc-address.myhadoop.nn2</name>
    <value>slave1:53310</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/yarn/Hadoop/hdfs2.0/name</value>
    <description>Determines where on the local filesystem the DFS name node
      should store the name table (fsimage). If this is a comma-delimited list
      of directories then the name table is replicated in all of the
      directories, for redundancy.</description>
  </property>

  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://slave1:8485;slave2:8485;slave3:8485/hadoop-journal</value>
    <description>A directory on shared storage between the multiple namenodes
      in an HA cluster. This directory will be written by the active and read
      by the standby in order to keep the namespaces synchronized. This directory
      does not need to be listed in dfs.namenode.edits.dir above. It should be
      left empty in a non-HA cluster.</description>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/yarn/Hadoop/hdfs2.0/data</value>
    <description>Determines where on the local filesystem a DFS data node
      should store its blocks. If this is a comma-delimited list of
      directories, then data will be stored in all named directories,
      typically on different devices. Directories that do not exist
      are ignored.</description>
  </property>

  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
    <description>
      Whether automatic failover is enabled. See the HDFS High
      Availability documentation for details on automatic HA
      configuration.
    </description>
  </property>

  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/yarn/Hadoop/hdfs2.0/journal/</value>
  </property>


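For automatic failover to work end to end, two more things are normally required in hdfs-site.xml: a client failover proxy provider, which is how HDFS clients find the currently active NameNode (this is what makes the client transparency in this article's title possible), and a fencing method applied during failover. A minimal sketch that completes the file, assuming sshfence with the yarn user's SSH key (the key path is an assumption):

  <property>
    <name>dfs.client.failover.proxy.provider.myhadoop</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>

  <property>
    <!-- Assumption: the yarn user's private key; adjust to your environment. -->
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/yarn/.ssh/id_rsa</value>
  </property>

</configuration>

After the configuration is distributed to all nodes, the failover state in ZooKeeper is initialized once, on one of the NameNodes, with hdfs zkfc -formatZK.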