Installing Hadoop 1.2.1 on Ubuntu 14.04 LTS (Pseudo-Distributed Mode) (2)


linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
 <property>
 <name>mapred.job.tracker</name>
 <value>localhost:9001</value>
 </property>
</configuration>
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$
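The mapred.job.tracker property points MapReduce clients and TaskTrackers at the JobTracker's RPC address. As a quick sanity check of any of these *-site.xml files, a small sed helper can pull a property's value out. This is only a sketch; it assumes each tag sits on its own line, as in the file above, and the helper name is made up for illustration:

```shell
# get_prop: print the <value> for a given <name> in a Hadoop *-site.xml.
# Sketch only; assumes <name> and <value> are on consecutive lines.
get_prop() {
  sed -n "/<name>$1<\/name>/{n;s:.*<value>\(.*\)</value>.*:\1:p}" "$2"
}
# get_prop mapred.job.tracker mapred-site.xml
```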

Next, configure the masters and slaves files. Since this is a pseudo-distributed setup, the namenode and the datanode are in fact the same machine.

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat masters
localhost
192.168.2.100

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat slaves
localhost
192.168.2.100
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$

Edit the /etc/hosts file to map hostnames to IP addresses:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ cat /etc/hosts
127.0.0.1    localhost
127.0.1.1    ubuntu

# The following lines are desirable for IPv6 capable hosts
::1    ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.2.100 master
192.168.2.100 slave
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$
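With master and slave both mapped to 192.168.2.100 (one machine plays every role in pseudo-distributed mode), it is worth confirming that the entries actually resolve. A throwaway helper for that check (the helper name and approach are my own, not part of Hadoop; the IP is matched loosely, without escaping its dots):

```shell
# has_mapping: succeed if an /etc/hosts-style file has a line starting with
# the IP $2 that also carries the hostname $1. Sketch only.
has_mapping() {
  grep -Eq "^[[:space:]]*$2[[:space:]].*\b$1\b" "$3"
}
# has_mapping master 192.168.2.100 /etc/hosts && echo "master mapping present"
```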

Create the directories referenced in the three configuration files core-site.xml, hdfs-site.xml, and mapred-site.xml:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ mkdir -p /hadoop/hadooptmp
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ mkdir -p /hadoop/hdfs/name
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/conf$ mkdir -p /hadoop/hdfs/data
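Note that on a stock Ubuntu install a regular user cannot create /hadoop at the filesystem root, so the mkdir calls above typically need sudo, followed by a chown back to the Hadoop user. A sketch that bundles the three directories into one helper (the helper name is my own; the 755 permission reflects the DataNode's default expectation for its data directories, dfs.datanode.data.dir.perm):

```shell
# make_hadoop_dirs: create the tmp, name and data directories under one base.
# Sketch only; for base=/hadoop, run with sudo and then chown the tree back
# to the Hadoop user, e.g.: sudo chown -R linuxidc:linuxidc /hadoop
make_hadoop_dirs() {
  local base=$1
  mkdir -p "$base/hadooptmp" "$base/hdfs/name" "$base/hdfs/data" &&
  chmod 755 "$base/hdfs/data"   # DataNode expects 755 on its data dirs
}
# make_hadoop_dirs /hadoop
```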

Format HDFS:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$ ./hadoop namenode -format

Start all the Hadoop daemons, including the NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker:

linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$ ./start-all.sh
starting namenode, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-namenode-ubuntu.out
192.168.68.130: starting datanode, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-datanode-ubuntu.out
localhost: starting datanode, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-datanode-ubuntu.out
localhost: ulimit -a for user linuxidc
localhost: core file size          (blocks, -c) 0
localhost: data seg size          (kbytes, -d) unlimited
localhost: scheduling priority            (-e) 0
localhost: file size              (blocks, -f) unlimited
localhost: pending signals                (-i) 7855
localhost: max locked memory      (kbytes, -l) 64
localhost: max memory size        (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: starting secondarynamenode, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-secondarynamenode-ubuntu.out
192.168.68.130: secondarynamenode running as process 10689. Stop it first.
starting jobtracker, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-jobtracker-ubuntu.out
192.168.68.130: starting tasktracker, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-tasktracker-ubuntu.out
localhost: starting tasktracker, logging to /home/linuxidc/hadoop_1.2.1/hadoop-1.2.1/libexec/../logs/hadoop-linuxidc-tasktracker-ubuntu.out
localhost: ulimit -a for user linuxidc
localhost: core file size          (blocks, -c) 0
localhost: data seg size          (kbytes, -d) unlimited
localhost: scheduling priority            (-e) 0
localhost: file size              (blocks, -f) unlimited
localhost: pending signals                (-i) 7855
localhost: max locked memory      (kbytes, -l) 64
localhost: max memory size        (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
linuxidc@ubuntu:~/hadoop_1.2.1/hadoop-1.2.1/bin$
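The `ulimit -a` dumps in the startup log show open files capped at 1024, the Ubuntu default, which is generally considered too low for Hadoop. A common fix is to add nofile entries for the Hadoop user to /etc/security/limits.conf. A small generator for those lines (the user name and the 65536 value are assumptions; adjust to your setup):

```shell
# nofile_limits: print limits.conf lines raising the open-file limit for a user.
nofile_limits() {
  printf '%s soft nofile 65536\n%s hard nofile 65536\n' "$1" "$1"
}
# nofile_limits linuxidc | sudo tee -a /etc/security/limits.conf
# (log out and back in for the new limit to take effect)
```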

Check whether the Hadoop daemons started successfully.
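The standard check is `jps` (bundled with the JDK): a healthy pseudo-distributed Hadoop 1.x node lists NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker. A small helper that flags anything missing from the `jps` output (the helper name is my own invention; the daemon names are the real ones):

```shell
# check_daemons: given jps-style output in $1, report any expected
# Hadoop 1.x daemon that is not running.
check_daemons() {
  local missing=""
  local d
  for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    printf '%s\n' "$1" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then
    echo "all daemons running"
  else
    echo "missing:$missing"
  fi
}
# check_daemons "$(jps)"
```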
