Installing HBase on a Hadoop Cluster: Experimental Environment Setup (3)

1) hduser@master:/usr/local/hadoop$ bin/hadoop namenode -format
13/04/09 06:50:03 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
13/04/09 06:50:04 INFO util.GSet: VM type = 64-bit
13/04/09 06:50:04 INFO util.GSet: 2% max memory = 19.33375 MB
13/04/09 06:50:04 INFO util.GSet: capacity = 2^21 = 2097152 entries
13/04/09 06:50:04 INFO util.GSet: recommended=2097152, actual=2097152
13/04/09 06:50:05 INFO namenode.FSNamesystem: fsOwner=hduser
13/04/09 06:50:05 INFO namenode.FSNamesystem: supergroup=supergroup
13/04/09 06:50:05 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/04/09 06:50:05 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/04/09 06:50:05 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/04/09 06:50:05 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/04/09 06:50:07 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/04/09 06:50:07 INFO common.Storage: Storage directory /tmp/hadoop-hduser/dfs/name has been successfully formatted.
13/04/09 06:50:07 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/127.0.1.1
************************************************************/

The HDFS name table (the fsimage) is stored on the NameNode's local filesystem, in the directory given by dfs.name.dir (here the default /tmp/hadoop-hduser/dfs/name, as shown in the log above). It records the metadata of the HDFS namespace (the directory tree and per-file block lists); the NameNode combines it with the block reports sent by the DataNodes to know where each block physically lives.
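
As a quick sanity check after formatting, the newly created storage directory can be inspected directly. The commands below are a sketch that assumes the default dfs.name.dir reported in the log (/tmp/hadoop-hduser/dfs/name); in Hadoop 1.x its current/ subdirectory holds the fsimage and edits files. Note that /tmp may be cleared on reboot, so a production cluster normally points dfs.name.dir at a persistent location.

# sketch: inspect the freshly formatted name directory (path taken from the format log above)
ls -l /tmp/hadoop-hduser/dfs/name/current
# Hadoop 1.x should have written VERSION, fsimage, edits and fstime here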

 

9. Testing

1. Start the cluster

1) Start the HDFS daemons: the NameNode on the master node and a DataNode on each slave node (in this setup the master is also listed as a slave, so it runs a DataNode as well, as the jps output below shows).

2) Then start the MapReduce daemons: the JobTracker on the master node and a TaskTracker on each slave node (again including the master). A combined one-step alternative is sketched just below.
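
In Hadoop 1.0.4 both layers can also be brought up in a single step: bin/start-all.sh simply calls start-dfs.sh followed by start-mapred.sh. A minimal sketch of the two equivalent approaches, run on the master node as hduser from /usr/local/hadoop:

# start HDFS and MapReduce separately (as done step by step below)
bin/start-dfs.sh
bin/start-mapred.sh

# or start both layers at once (equivalent in Hadoop 1.x)
bin/start-all.sh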

 

HDFS daemons:

Run bin/start-dfs.sh (on the master node):

hduser@master:/usr/local/hadoop$ bin/start-dfs.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-master.out
The authenticity of host 'master (127.0.1.1)' can't be established.
ECDSA key fingerprint is ec:c2:e2:5f:c7:72:de:4f:7a:c0:f1:e7:2b:eb:84:3f.
Are you sure you want to continue connecting (yes/no)? slave1: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-slave1.out
slave2: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-slave2.out

master: Host key verification failed.
The authenticity of host 'master (127.0.1.1)' can't be established.
ECDSA key fingerprint is ec:c2:e2:5f:c7:72:de:4f:7a:c0:f1:e7:2b:eb:84:3f.
Are you sure you want to continue connecting (yes/no)? yes
master: Warning: Permanently added 'master' (ECDSA) to the list of known hosts.
master: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-master.out
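
The "Host key verification failed" message above appears because the master's own ECDSA host key was not yet in ~/.ssh/known_hosts when start-dfs.sh opened SSH connections to the hosts listed in conf/masters and conf/slaves. A hedged way to avoid the interactive prompts is to accept all host keys once before running the start scripts (hostnames master, slave1 and slave2 as used in this setup):

# record the host keys of every cluster node up front
ssh-keyscan -H master slave1 slave2 >> ~/.ssh/known_hosts

# or simply log in to each host once and answer "yes" to the prompt
ssh master exit
ssh slave1 exit
ssh slave2 exit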

 

On the master node, the Java processes should now look like this:

hduser@master:/usr/local/hadoop$ jps
3217 NameNode
3526 SecondaryNameNode
4455 DataNode
4697 Jps

 

The slave nodes should look like this:

hduser@slave2:/usr/local/hadoop/conf$ jps
3105 DataNode
3743 Jps
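
Beyond jps, a quick way to confirm that the DataNodes have actually registered with the NameNode is the dfsadmin report. A sketch, run on the master node (the exact counts depend on your cluster; here three live DataNodes would be expected, since the master also runs one):

# ask the NameNode how many DataNodes it currently sees
bin/hadoop dfsadmin -report | grep -i "datanodes available"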

 

MapReduce daemons: on the master node, run:

bin/start-mapred.sh

Then check the TaskTracker log on each node; the example below is from the master, which also runs a TaskTracker:

hduser@master:/usr/local/hadoop/logs$ cat hadoop-hduser-tasktracker-master.log
2013-04-09 07:27:15,895 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting TaskTracker
STARTUP_MSG: host = master/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2013-04-09 07:27:17,558 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-04-09 07:27:17,681 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-04-09 07:27:17,683 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-04-09 07:27:17,684 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: TaskTracker metrics system started
2013-04-09 07:27:18,445 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-04-09 07:27:18,459 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-04-09 07:27:23,961 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-04-09 07:27:24,178 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-04-09 07:27:24,305 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-04-09 07:27:24,320 INFO org.apache.hadoop.mapred.TaskTracker: Starting tasktracker with owner as hduser
2013-04-09 07:27:24,327 INFO org.apache.hadoop.mapred.TaskTracker: Good mapred local directories are: /tmp/hadoop-hduser/mapred/local
2013-04-09 07:27:24,345 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2013-04-09 07:27:24,397 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-04-09 07:27:24,406 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source TaskTrackerMetrics registered.
2013-04-09 07:27:24,492 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort39355 registered.
2013-04-09 07:27:24,493 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort39355 registered.
2013-04-09 07:27:24,498 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-04-09 07:27:24,509 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-04-09 07:27:24,519 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 39355: starting
2013-04-09 07:27:24,519 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 39355: starting
2013-04-09 07:27:24,520 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 39355: starting
2013-04-09 07:27:24,521 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:39355
2013-04-09 07:27:24,521 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_master:localhost/127.0.0.1:39355
2013-04-09 07:27:24,532 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 39355: starting
2013-04-09 07:27:24,533 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 39355: starting
2013-04-09 07:27:24,941 INFO org.apache.hadoop.mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_master:localhost/127.0.0.1:39355
2013-04-09 07:27:24,998 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-04-09 07:27:25,028 INFO org.apache.hadoop.mapred.TaskTracker: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@74c6eff5
2013-04-09 07:27:25,030 WARN org.apache.hadoop.mapred.TaskTracker: TaskTracker's totalMemoryAllottedForTasks is -1. TaskMemoryManager is disabled.
2013-04-09 07:27:25,034 INFO org.apache.hadoop.mapred.IndexCache: IndexCache created with max memory = 10485760
2013-04-09 07:27:25,042 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ShuffleServerMetrics registered.
2013-04-09 07:27:25,045 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50060
2013-04-09 07:27:25,046 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50060 webServer.getConnectors()[0].getLocalPort() returned 50060
2013-04-09 07:27:25,046 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50060
2013-04-09 07:27:25,046 INFO org.mortbay.log: jetty-6.1.26
2013-04-09 07:27:25,670 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50060
2013-04-09 07:27:25,670 INFO org.apache.hadoop.mapred.TaskTracker: FILE_CACHE_SIZE for mapOutputServlet set to : 2000
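
The log shows the TaskTracker's embedded Jetty server listening on port 50060; in Hadoop 1.x the NameNode and JobTracker expose similar web UIs on ports 50070 and 50030. A small sketch to confirm the daemons are reachable over HTTP (hostnames as used above):

# Hadoop 1.x default web UI ports
curl -s -o /dev/null -w "%{http_code}\n" http://master:50070/   # NameNode UI
curl -s -o /dev/null -w "%{http_code}\n" http://master:50030/   # JobTracker UI
curl -s -o /dev/null -w "%{http_code}\n" http://master:50060/   # TaskTracker UI (per node)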

 

At this point the master node should show:

hduser@master:/usr/local/hadoop$ jps
3217 NameNode
3526 SecondaryNameNode
4455 DataNode
5130 Jps
4761 JobTracker
4988 TaskTracker
hduser@master:/usr/local/hadoop$

 

The slave nodes should show:

hduser@slave2:/usr/local/hadoop/conf$ jps
3901 TaskTracker
3105 DataNode
3958 Jps
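
With all daemons up, a short end-to-end test is to run one of the example jobs bundled with Hadoop. This is a sketch that assumes the examples jar shipped with Hadoop 1.0.4 sits in the install root under its default name; the pi estimator is convenient because it needs no input data:

# run the Monte Carlo pi estimator: 2 map tasks, 10 samples each
bin/hadoop jar hadoop-examples-1.0.4.jar pi 2 10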

 

To stop the MapReduce daemons, run on the master node:

hduser@master:/usr/local/hadoop$ bin/stop-mapred.sh
stopping jobtracker
slave2: stopping tasktracker
master: stopping tasktracker
slave1: stopping tasktracker

Afterwards, the Java processes on the master node:

hduser@master:/usr/local/hadoop$ jps
3217 NameNode
3526 SecondaryNameNode
4455 DataNode
5427 Jps

The JobTracker (4761) and TaskTracker (4988) processes are no longer listed.

And afterwards on the slave node:

hduser@slave2:/usr/local/hadoop/conf$ jps
3105 DataNode
4140 Jps

The TaskTracker (3901) process is no longer listed.

 

Stop the HDFS layer (on the master node):

hduser@master:/usr/local/hadoop$ bin/stop-dfs.sh
stopping namenode
slave1: stopping datanode
slave2: stopping datanode
master: stopping datanode
localhost: stopping secondarynamenode
hduser@master:/usr/local/hadoop$ jps
5871 Jps

 

The slave node at this point:

hduser@slave2:/usr/local/hadoop/conf$ jps
4305 Jps
