Reference found online:
This is most likely caused by re-formatting the NameNode after the cluster had already been started. If this is only a test/learning setup, the following fix can be used:
1. First kill processes 26755, 21863 and 26654. If kill 26755 does not work, use kill -KILL 26755.
2. Manually delete the contents of the dfs.data.dir directory configured in conf/hdfs-site.xml.
3. Run $HADOOP_HOME/bin/hadoop namenode -format
4. Start the cluster: $HADOOP_HOME/bin/start-all.sh
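The four steps above can be sketched as a small shell function. This is a minimal sketch, not a tested recovery tool: the PIDs are from this particular cluster, the /opt/hadoop-datastore/dfs/data path is my guess at dfs.data.dir (the log only shows the name directory), HADOOP_HOME is assumed to be set, and the DRY_RUN switch and function name are my own additions so the destructive commands can be previewed first. Remember that this wipes all HDFS data.

```shell
# Sketch of the recovery steps above (DESTROYS all HDFS data).
# Assumptions: HADOOP_HOME is set, dfs.data.dir is
# /opt/hadoop-datastore/dfs/data, and the stale PIDs are as listed.

run() {
  # With DRY_RUN=1, print the command instead of executing it.
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

recover_namenode() {
  # 1. Kill the stale Hadoop daemons (force-kill if plain kill fails).
  for pid in 26755 21863 26654; do
    run kill "$pid" || run kill -KILL "$pid"
  done
  # 2. Wipe the dfs.data.dir contents configured in conf/hdfs-site.xml.
  run rm -rf /opt/hadoop-datastore/dfs/data/*
  # 3. Re-format the NameNode; the prompt only accepts an uppercase Y.
  printf 'Y\n' | run "$HADOOP_HOME/bin/hadoop" namenode -format
  # 4. Bring the cluster back up.
  run "$HADOOP_HOME/bin/start-all.sh"
}

# Preview first, then run for real:
#   DRY_RUN=1 recover_namenode
#   recover_namenode
```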
Consequences:
All data in HDFS will be lost.
Resolution: re-format the NameNode. Note that the confirmation prompt is case-sensitive: answering with a lowercase y aborts the format (as happens in the log below), so you must type an uppercase Y.
su hadoop
hadoop@Master:/opt/hadoop/bin$ ./hadoop namenode -format
12/07/24 10:43:29 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = Master.Hadoop/10.2.128.46
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /opt/hadoop-datastore/dfs/name ? (Y or N) y
Format aborted in /opt/hadoop-datastore/dfs/name
12/07/24 10:43:32 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Master.Hadoop/10.2.128.46
************************************************************/
hadoop@Master:/opt/hadoop/bin$ ./start-all.sh
starting namenode, logging to /opt/hadoop/bin/../logs/hadoop-hadoop-namenode-Master.Hadoop.out
Slave1.Hadoop: starting datanode, logging to /opt/hadoop/bin/../logs/hadoop-hadoop-datanode-Slave1.Hadoop.out
Slave2.Hadoop: starting datanode, logging to /opt/hadoop/bin/../logs/hadoop-hadoop-datanode-Slave2.Hadoop.out
Master.Hadoop: starting secondarynamenode, logging to /opt/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-Master.Hadoop.out
starting jobtracker, logging to /opt/hadoop/bin/../logs/hadoop-hadoop-jobtracker-Master.Hadoop.out
Slave2.Hadoop: starting tasktracker, logging to /opt/hadoop/bin/../logs/hadoop-hadoop-tasktracker-Slave2.Hadoop.out
Slave1.Hadoop: starting tasktracker, logging to /opt/hadoop/bin/../logs/hadoop-hadoop-tasktracker-Slave1.Hadoop.out
hadoop@Master:/opt/hadoop/bin$
--------------------------------------------------------------------------------------------------------------------------
The following verifies that the cluster came back up:
hadoop@Master:/opt/hadoop/bin$ ./hadoop dfsadmin -report
Safe mode is ON
Configured Capacity: 41137831936 (38.31 GB)
Present Capacity: 31127531520 (28.99 GB)
DFS Remaining: 31127482368 (28.99 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)
Name: 10.2.128.120:50010
Decommission Status : Normal
Configured Capacity: 20568915968 (19.16 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 4913000448 (4.58 GB)
DFS Remaining: 15655890944(14.58 GB)
DFS Used%: 0%
DFS Remaining%: 76.11%
Last contact: Tue Jul 24 10:50:43 CST 2012
Name: 10.2.128.20:50010
Decommission Status : Normal
Configured Capacity: 20568915968 (19.16 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 5097299968 (4.75 GB)
DFS Remaining: 15471591424(14.41 GB)
DFS Used%: 0%
DFS Remaining%: 75.22%
Last contact: Tue Jul 24 10:50:41 CST 2012
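Note that the report above still says "Safe mode is ON": right after startup the NameNode stays in safe mode until enough DataNode block reports arrive, and writes are refused until it leaves that state (hadoop dfsadmin -safemode get or wait can query or block on it). A tiny helper along these lines (a sketch; the function name is mine) detects this from the dfsadmin output:

```shell
# Returns 0 if the `hadoop dfsadmin -report` output on stdin
# says the NameNode is in safe mode. Function name is illustrative.
safemode_on() {
  grep -q '^Safe mode is ON'
}

# Typical use:
#   if ./hadoop dfsadmin -report | safemode_on; then
#     echo "still in safe mode; consider: ./hadoop dfsadmin -safemode wait"
#   fi
```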
View HDFS status in the browser at :50070/
View job information at :50030/jobtracker.jsp
To check whether the daemons are running, use the jps command (a ps-like utility for JVM processes), which lists the Hadoop daemons and their process IDs. Across the cluster there are five daemons in total; on the master only NameNode, SecondaryNameNode and JobTracker appear, since DataNode and TaskTracker run on the slaves.
hadoop@Master:/opt/hadoop/conf$ jps
2823 Jps
2508 JobTracker
2221 NameNode
2455 SecondaryNameNode
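Since only the three master-side daemons are expected in this jps listing, a quick sanity check in the same spirit (a sketch; the function name is mine) can flag any that are missing:

```shell
# Reads `jps` output on stdin and prints any expected master-node
# daemon that is missing; returns 0 only when all are present.
check_master_daemons() {
  jps_out=$(cat)
  missing=0
  for d in NameNode SecondaryNameNode JobTracker; do
    # -w avoids "NameNode" matching inside "SecondaryNameNode".
    if ! printf '%s\n' "$jps_out" | grep -qw "$d"; then
      echo "missing: $d"
      missing=1
    fi
  done
  return "$missing"
}

# Usage: jps | check_master_daemons
```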
netstat -nat
tcp 0 0 10.2.128.46:54311 0.0.0.0:* LISTEN
tcp 0 0 10.2.128.46:54310 10.2.128.46:44150 ESTABLISHED
tcp 267 0 10.2.128.46:54311 10.2.128.120:48958 ESTABLISHED
tcp 0 0 10.2.128.46:54310 10.2.128.20:41230 ESTABLISHED
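In the netstat output, port 54310 is typically the NameNode RPC port (fs.default.name) and 54311 the JobTracker port (mapred.job.tracker); the ESTABLISHED lines are the slaves' DataNodes and TaskTrackers checked in. A helper like the following (a sketch; the name is mine) confirms that a given local port is actually in LISTEN state:

```shell
# Reads `netstat -nat` output on stdin and succeeds if some socket
# on the given local port is in LISTEN state.
port_listening() {
  port=$1
  # $4 is the local address, $6 the socket state in netstat -nat output.
  awk -v p=":$port" '$4 ~ p "$" && $6 == "LISTEN" { found = 1 }
                     END { exit !found }'
}

# Usage:
#   netstat -nat | port_listening 54310 && echo "NameNode RPC up"
```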