Hadoop Distributed Cluster Setup for Beginners (3)

11. Use scp to copy the configured Hadoop installation to the slave1 and slave2 nodes (see the ownership note after the commands)
[root@master ~]# scp -r /usr/local/hadoop root@slave1:/usr/local/
[root@master ~]# scp -r /usr/local/hadoop root@slave2:/usr/local/
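
Since the copy runs as root, the tree lands on the slaves owned by root. If the daemons will later run as the hadoop user (as in step 13 below), it is safer to fix ownership on each slave first; a minimal sketch, assuming a hadoop user and group already exist on both slaves:
[root@slave1 ~]# chown -R hadoop:hadoop /usr/local/hadoop
[root@slave2 ~]# chown -R hadoop:hadoop /usr/local/hadoop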

12. Configure the environment variables on slave1 and slave2 (same as step 3), then verify with hadoop version; a sketch follows below
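
A minimal sketch of this step, assuming step 3 exported HADOOP_HOME=/usr/local/hadoop and put its bin and sbin directories on the PATH (adjust to match whatever step 3 actually set); run the same commands on slave2:
[root@slave1 ~]# cat >> /etc/profile <<'EOF'
> export HADOOP_HOME=/usr/local/hadoop
> export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
> EOF
[root@slave1 ~]# source /etc/profile
[root@slave1 ~]# hadoop version    # should report Hadoop 2.7.3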

13. Format the NameNode with hdfs namenode -format
[root@master hadoop]# su hadoop
[hadoop@master hadoop]$ cd /usr/local/hadoop/
[hadoop@master hadoop]$ hdfs namenode -format    # must be run as the hadoop user
17/07/26 20:26:12 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:  host = master/192.168.230.130
STARTUP_MSG:  args = [-format]
STARTUP_MSG:  version = 2.7.3
.
.
.
17/07/26 20:26:15 INFO util.ExitUtil: Exiting with status 0    # only status 0 means the format succeeded
17/07/26 20:26:15 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.230.130
************************************************************/
[hadoop@master hadoop]$
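
As a quick sanity check, a successful format leaves a fresh fsimage in the NameNode metadata directory. Here the directory is assumed to be /usr/local/hadoop/dfs/name (consistent with the dfs working directory in the prompts below; check dfs.namenode.name.dir in hdfs-site.xml if yours differs):
[hadoop@master hadoop]$ ls dfs/name/current/
fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION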

V. Start the Hadoop services

1. Start all services
[hadoop@master dfs]$ start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
hadoop@master's password:    # enter the hadoop user's password on master
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password:    # enter the hadoop user's password on master again
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-master.out
slave1: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-slave1.out
slave2: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-slave2.out
[hadoop@master dfs]$
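
The password prompts above mean passwordless SSH is not yet configured for the hadoop user. A minimal sketch to eliminate them, run as hadoop on master (an optional convenience, assuming the hadoop user exists on all three nodes; ssh-copy-id will ask for each node's hadoop password one last time):
[hadoop@master ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[hadoop@master ~]$ ssh-copy-id hadoop@master    # also covers the 0.0.0.0 prompt (same machine)
[hadoop@master ~]$ ssh-copy-id hadoop@slave1
[hadoop@master ~]$ ssh-copy-id hadoop@slave2
After this, start-all.sh should run without password prompts; the very first connection to each host (including 0.0.0.0) may still ask you to confirm the host key once.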

2. Verify
[hadoop@master dfs]$ jps    # processes on master
7491 Jps
6820 NameNode
7014 SecondaryNameNode
7164 ResourceManager
[hadoop@master dfs]$

[root@slave1 name]# jps    # processes on slave1
3160 NodeManager
3050 DataNode
3307 Jps
[root@slave1 name]#

[root@slave2 name]# jps    # processes on slave2
3233 DataNode
3469 Jps
3343 NodeManager
[root@slave2 name]#
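
jps only shows the processes on each individual node. To confirm from master that both DataNodes actually registered with the NameNode, hdfs dfsadmin -report gives a cluster-wide view (output abridged; the "Live datanodes" line is the report format in Hadoop 2.7.x):
[hadoop@master dfs]$ hdfs dfsadmin -report | grep 'Live datanodes'
Live datanodes (2):    # both DataNodes are registered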

3. Manage via a web browser
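
With the services up, the cluster can also be inspected from a browser. Assuming the default ports were not changed in the configuration files, Hadoop 2.7.3 serves the NameNode web UI at http://master:50070 and the YARN ResourceManager UI at http://master:8088; from a machine without the hosts entry, use 192.168.230.130 in place of master.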
