Hadoop configuration: datanode cannot connect to the master

Setting up Hadoop on VMs for the first time, I created three virtual machines: one acting as the namenode and jobtracker, and the other two as datanodes and tasktrackers.

After finishing the configuration, I started the cluster

and checked the cluster status through the namenode web UI on port 50070.


There were no datanodes listed.

Checking the slave nodes, the datanode process was in fact running on each of them, so I looked at the log on one of the datanode machines:

2014-03-01 22:11:17,473 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master.hadoop/192.168.128.132:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-01 22:11:18,477 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master.hadoop/192.168.128.132:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-01 22:11:19,481 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master.hadoop/192.168.128.132:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-01 22:11:20,485 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master.hadoop/192.168.128.132:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-01 22:11:21,489 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master.hadoop/192.168.128.132:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

So the datanode could not connect to the master, yet the master could be pinged from the datanode just fine, and checking on the master showed port 9000 in a listening state. I was completely stumped.
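
In hindsight, one quick check on the master would have shown the problem immediately: look at which address port 9000 is actually bound to. A minimal sketch, assuming a typical Linux install where the net-tools netstat is available (ss -tlnp gives the same information):

# On the master: show which address the namenode RPC port is bound to.
# An entry like 127.0.0.1:9000 means only local processes can connect;
# for the datanodes to reach it, the port must be bound to the LAN address
# (here 192.168.128.132) or to 0.0.0.0.
netstat -tlnp | grep 9000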


Eventually I noticed this in core-site.xml:

<property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
  </property>
Only then did I realize that the namenode was listening on 127.0.0.1, which cannot be reached from other machines.
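
The namenode binds its RPC server to whatever address the host part of fs.default.name resolves to, and localhost resolves to the loopback address. A quick way to check what a name resolves to (the /etc/hosts entry below is only an illustration of the usual setup where Master.Hadoop is mapped to the master's LAN IP):

# localhost resolves to the loopback address (typically 127.0.0.1 or ::1),
# which is only reachable from the machine itself:
getent hosts localhost
# Master.Hadoop should resolve to the master's LAN address on every node,
# e.g. via an /etc/hosts line such as:
#   192.168.128.132   Master.Hadoop
getent hosts Master.Hadoop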


After changing it to the master's hostname, everything worked:

<property>
        <name>fs.default.name</name>
        <value>hdfs://Master.Hadoop:9000</value>
  </property>
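
As a final check after restarting the cluster, something along these lines should confirm the fix (a sketch, assuming the Hadoop 1.x command-line tools are on the PATH and telnet is installed on the datanodes):

# From a datanode: the namenode RPC port should now be reachable over the network.
telnet Master.Hadoop 9000
# On the master: the DFS report should now list the two datanodes as live nodes.
hadoop dfsadmin -report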

