Installing and Configuring a Hadoop Cluster on FreeBSD (Part 4)

The previous parts covered configuring the NameNode and the DataNodes; this part covers the Secondary NameNode. Hadoop lets you configure a Secondary NameNode that periodically checkpoints the NameNode's metadata, so that if the NameNode dies, its data can be restored from the Secondary. You can think of it roughly as MySQL's master/slave setup, with one important difference: despite the name, the Secondary cannot serve as the NameNode directly (it is not a hot standby); it is mostly used to recover the NameNode's metadata.
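That recovery path can be sketched with Hadoop 1.x commands. The hostnames and paths are the ones used throughout this series; treat this as a rough sketch of the procedure, not a tested runbook:

```shell
# On the (re-installed) NameNode host, with dfs.name.dir empty:
# 1. Pull the Secondary's checkpoint over; the source path is the
#    fs.checkpoint.dir configured on the Secondary.
scp -r hadoopslave-189.tj:/opt/data/hadoop1/hdfs/namesecondary \
    /opt/data/hadoop1/hdfs/
# 2. Import the checkpoint into dfs.name.dir; this also starts the NameNode.
bin/hadoop namenode -importCheckpoint
```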

Related reading:

Installing and Configuring a Hadoop Cluster on FreeBSD (Part 1)

Installing and Configuring a Hadoop Cluster on FreeBSD (Part 2)

Installing and Configuring a Hadoop Cluster on FreeBSD (Part 3)

Installing and Configuring a Hadoop Cluster on FreeBSD (Part 4)


The Secondary NameNode's configuration is actually quite similar to a DataNode's.


Let's look at the configuration files.


core-site.xml


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoopmaster-177.tj:9000</value>
<!-- the NameNode (master) address -->
    </property>
    <property>
        <name>fs.checkpoint.dir</name>
        <value>/opt/data/hadoop1/hdfs/namesecondary,/opt/data/hadoop2/hdfs/namesecondary</value>
<!-- this is the important one: recovery depends entirely on these checkpoint directories -->
    </property>
    <property>
        <name>fs.checkpoint.period</name>
        <value>1800</value>
<!-- checkpoint every 30 minutes... -->
    </property>
    <property>
        <name>fs.checkpoint.size</name>
        <value>33554432</value>
<!-- ...or whenever the edits log reaches 32 MB, whichever comes first -->
    </property>
    <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
    </property>
    <property>
        <name>io.compression.codec.lzo.class</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>
</configuration>
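One step the series has not shown explicitly is telling Hadoop which host runs the Secondary. In the 1.x scripts this is the conf/masters file; the name is misleading, since it lists Secondary NameNode hosts, not the master. A minimal sketch, assuming the standard $HADOOP_HOME layout:

```shell
# Run from $HADOOP_HOME on the master (Hadoop 1.x layout assumed).
# Hosts listed in conf/masters get a SecondaryNameNode launched on them
# by start-dfs.sh.
mkdir -p conf
echo "hadoopslave-189.tj" > conf/masters
# Alternatively, start it by hand on that host:
#   bin/hadoop-daemon.sh start secondarynamenode
```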



hdfs-site.xml


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/opt/data/hadoop1/hdfs/name,/opt/data/hadoop2/hdfs/name</value>
        <description>Local directories where the NameNode keeps the fsimage and edits log (comma-separated copies for redundancy)</description>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/opt/data/hadoop1/hdfs/data,/opt/data/hadoop2/hdfs/data</value>
        <description>Local directories where the DataNodes store HDFS blocks</description>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>hadoopmaster-177.tj:50070</value>
    </property>
    <property>
        <name>dfs.secondary.http.address</name>
        <value>hadoopslave-189.tj:50090</value>
<!-- note: this points at the Secondary's host, not the master -->
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.datanode.du.reserved</name>
        <value>1073741824</value>
<!-- keep 1 GB free on each data volume -->
    </property>
    <property>
        <name>dfs.block.size</name>
        <value>134217728</value>
<!-- 128 MB HDFS block size -->
    </property>
</configuration>
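Once both files are in place and the daemon has been started, a few quick checks confirm the Secondary is doing its job. This is a hypothetical session on hadoopslave-189.tj; output depends on your cluster:

```shell
# The daemon should show up in the JVM process list
jps | grep SecondaryNameNode
# Its status page should answer on the port configured above
curl -s http://hadoopslave-189.tj:50090/ > /dev/null && echo "web UI up"
# After fs.checkpoint.period elapses, checkpoint data should appear here
ls /opt/data/hadoop1/hdfs/namesecondary/
```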
