Error when uploading a file to Hadoop

The following error appears when running:

[root@Hadoop1 ]# hadoop fs -put /home/hadoop/word.txt /tmp/wordcount/word5.txt

12/04/05 20:32:45 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/wordcount/word5.txt could only be replicated to 0 nodes, instead of 1

12/04/05 20:32:45 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
12/04/05 20:32:45 WARN hdfs.DFSClient: Could not get block locations. Source file "/tmp/wordcount/word5.txt" - Aborting...
put: java.io.IOException: File /tmp/wordcount/word5.txt could only be replicated to 0 nodes, instead of 1
12/04/05 20:32:45 ERROR hdfs.DFSClient: Exception closing file /tmp/wordcount/word5.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/wordcount/word5.txt could only be replicated to 0 nodes, instead of 1
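This error means the NameNode could not find a single live DataNode on which to place the block's replica. Before restarting anything, this can be confirmed from the NameNode side with the cluster report (the exact output format varies between Hadoop releases):

# hadoop dfsadmin -report

If the report lists zero available DataNodes, the DataNode process is either not running or has not registered with the NameNode, which is exactly what the solution below addresses.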

Solution:

The problem is that no DataNode has joined the cluster. The daemons must be started in order: first the namenode, then the datanode, and after that the jobtracker and tasktracker. Started in this order, the error does not occur. The immediate fix is to start each daemon individually with hadoop-daemon.sh, as shown in the two steps below.

1.   Restart the namenode

# hadoop-daemon.sh start namenode

starting namenode, logging to /usr/hadoop-0.21.0/bin/../logs/hadoop-root-namenode-

2.   Restart the datanode

# hadoop-daemon.sh start datanode

starting datanode, logging to /usr/hadoop-0.21.0/bin/../logs/hadoop-root-datanode-
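3.   Verify and retry the upload

With both daemons up, jps (a standard JDK tool) should list NameNode and DataNode among the running Java processes, and the original upload can be retried. A minimal check, reusing the file and target path from above; note that a freshly started datanode may need a few seconds to register with the namenode before writes succeed:

# jps

# hadoop fs -put /home/hadoop/word.txt /tmp/wordcount/word5.txt

If MapReduce jobs will also be run, start the jobtracker and tasktracker after HDFS is up, following the startup order described in the solution above.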
