Problems encountered while setting up a Hadoop cluster (and how to solve them)

First, a bit of background: I have only just started with Hadoop and am a complete beginner, but I am very interested in it and plan to study it long-term. I hope the experts here will go easy on me. Thanks!

I am building a Hadoop cluster on RHEL 5.4 x86_64, using three machines in total, running hadoop-0.21.0.
I followed the method described in this post (original link: ). I followed the post exactly, without any changes, except that my JDK version is different: I am using jdk1.6.0_18.
After finishing the setup I started Hadoop; the terminal showed the following:
[hadoop@hadoop1 hadoop]$ bin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-hadoop1.out
/home/hadoop/hadoop/bin/../bin/hadoop-daemon.sh: line 127: /tmp/hadoop-hadoop-namenode.pid: Permission denied
hadoop3: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop3.out
hadoop2: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop2.out
hadoop1: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-hadoop1.out
starting jobtracker, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-jobtracker-hadoop1.out
/home/hadoop/hadoop/bin/../bin/hadoop-daemon.sh: line 127: /tmp/hadoop-hadoop-jobtracker.pid: Permission denied
hadoop3: starting tasktracker, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-tasktracker-hadoop3.out
hadoop2: starting tasktracker, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-tasktracker-hadoop2.out

I have no idea why those two lines show up. I have searched every post I could find on Google and Baidu and found no solution to this problem, so I am asking the experts here for pointers.
My guess is that the cause might be in the three configuration files core-site.xml, hdfs-site.xml and mapred-site.xml? They were also modified exactly as described in the post.
As for permissions, the hadoop user was created with the usual command, and I have already modified its rights in /etc/sudoers:
## Allow root to run any commands anywhere
root    ALL=(ALL)     ALL

## Allow hadoop to run any commands anywhere
hadoop    ALL=(ALL)     ALL
          
-r--r-----   1 root root    3254 12-07 09:26 sudoers    (these are the permissions on the sudoers file)
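
From what I can tell, the "Permission denied" lines come from hadoop-daemon.sh trying to write its pid file under /tmp (the default pid directory), so my current guess is that stale pid files owned by root were left there by an earlier run. This is only a guess; these are the commands I plan to try:

# Check whether old pid files exist in /tmp and who owns them
# (if root created them in an earlier run, the hadoop user cannot overwrite them)
ls -l /tmp/hadoop-*.pid

# If they are owned by root, remove them and start again as the hadoop user
sudo rm -f /tmp/hadoop-hadoop-namenode.pid /tmp/hadoop-hadoop-jobtracker.pid
bin/start-all.sh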

Finally, here are the results of jps, bin/stop-all.sh and ./bin/hadoop dfsadmin -report:
[hadoop@hadoop1 hadoop]$ jps
4624 NameNode
4955 JobTracker
8793 SecondaryNameNode
9301 Jps
[hadoop@hadoop1 hadoop]$

[hadoop@hadoop1 hadoop]$ bin/stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-mapred.sh
no namenode to stop
hadoop3: stopping datanode
hadoop2: stopping datanode
hadoop1: stopping secondarynamenode
no jobtracker to stop
hadoop3: stopping tasktracker
hadoop2: stopping tasktracker
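
If I understand hadoop-daemon.sh correctly, the stop script looks for those same pid files under /tmp to find the process to kill, which would explain the "no namenode to stop" and "no jobtracker to stop" lines even though jps still shows NameNode and JobTracker running. As a stopgap I assume the leftover daemons can be killed by hand using the pids reported by jps (the pids below are simply the ones from my jps output above):

# Kill the daemons that stop-all.sh could not find, using the pids from jps
kill 4624    # NameNode
kill 4955    # JobTracker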

[hadoop@hadoop1 hadoop]$ ./bin/hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

10/12/10 20:22:12 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
10/12/10 20:22:12 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Configured Capacity: 48641990656 (45.3 GB)
Present Capacity: 34046033920 (31.71 GB)
DFS Remaining: 33895235584 (31.57 GB)
DFS Used: 150798336 (143.81 MB)
DFS Used%: 0.44%
Under replicated blocks: 3
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Live datanodes:
Name: 192.168.1.229:50010 (hadoop2)
Decommission Status : Normal
Configured Capacity: 30157537280 (28.09 GB)
DFS Used: 75399168 (71.91 MB)
Non DFS Used: 7595790336 (7.07 GB)
DFS Remaining: 22486347776 (20.94 GB)
DFS Used%: 0.25%
DFS Remaining%: 74.56%
Last contact: Fri Dec 10 20:22:10 CST 2010


Name: 192.168.1.230:50010 (hadoop3)
Decommission Status : Normal
Configured Capacity: 18484453376 (17.21 GB)
DFS Used: 75399168 (71.91 MB)
Non DFS Used: 7000166400 (6.52 GB)
DFS Remaining: 11408887808 (10.63 GB)
DFS Used%: 0.41%
DFS Remaining%: 61.72%
Last contact: Fri Dec 10 20:22:10 CST 2010


[hadoop@hadoop1 hadoop]$
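
One more thing I am considering trying, assuming the root cause really is the pid files in /tmp: hadoop-daemon.sh honours the HADOOP_PID_DIR variable (falling back to /tmp when it is unset), so pointing it at a directory owned by the hadoop user might avoid the permission problem altogether. A rough sketch of what I would add to conf/hadoop-env.sh on all three machines (the directory path below is just my own choice):

# Create a pid directory owned by the hadoop user
mkdir -p /home/hadoop/hadoop/pids

# In conf/hadoop-env.sh: store pid files there instead of the default /tmp
export HADOOP_PID_DIR=/home/hadoop/hadoop/pids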
