Hadoop node deployment steps

1. Install the operating system

a) RHEL 6.2 x64

i. Install from a burned installation disc (details omitted)

b) Installation options

i. The Basic Server profile is sufficient

ii. Set the hostname

iii. After installation, run system-config-network and save the configuration. If the network interface is named em1, edit /etc/sysconfig/network-scripts/ifcfg-em1 and set:

ONBOOT=yes

so that the network comes up automatically at boot (a fuller sample ifcfg file is sketched below)
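For reference, a minimal ifcfg-em1 might look like the following. The DEVICE name and the DHCP setting are assumptions; match them to your actual interface and addressing scheme:

# /etc/sysconfig/network-scripts/ifcfg-em1 (illustrative sketch)
DEVICE=em1
# start the interface automatically at boot
ONBOOT=yes
# assumption: DHCP; replace with static IPADDR/NETMASK settings if you use fixed addresses
BOOTPROTO=dhcp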

iv. If no DNS is available, edit /etc/hosts and add the IP addresses of this machine and all the other machines (example below)
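A sketch of such a hosts file, identical on every node; the 192.168.1.x addresses are placeholders, not values from this deployment:

# /etc/hosts
127.0.0.1      localhost
# do not map a node's own hostname to 127.0.0.1, or the Hadoop daemons may bind to the loopback address
192.168.1.101  crt-hadoop01
192.168.1.102  crt-hadoop02
192.168.1.103  crt-hadoop03
192.168.1.104  crt-hadoop04
192.168.1.105  crt-hadoop05
192.168.1.106  crt-hadoop06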

2. Configure passwordless SSH login on every node; ideally, every node should be able to log in to every other node without a password

a) Configure the root user first, to make the rest of the installation and deployment easier

i. On one machine, run the following against all machines (including itself); crt-hadoop[01~06] is shorthand for repeating the command for each of the six hosts (a loop sketch expanding this shorthand follows at the end of step a))

# ssh crt-hadoop[01~06] 'ssh-keygen -t rsa'   (press Enter through all prompts)

ii. Collect the id_rsa.pub keys from all machines into a single authorized_keys file

# ssh crt-hadoop[01~06] 'cat /root/.ssh/id_rsa.pub' >> /root/.ssh/authorized_keys

iii. Copy authorized_keys and known_hosts to all machines

# scp /root/.ssh/authorized_keys crt-hadoop[01~06]:/root/.ssh/

# scp /root/.ssh/known_hosts crt-hadoop[01~06]:/root/.ssh/

# ssh crt-hadoop[01~06] 'chmod 600 /root/.ssh/authorized_keys'
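A minimal sketch of how the crt-hadoop[01~06] shorthand might be expanded into real shell for the three steps above, assuming the six hostnames resolve and root's password can still be entered interactively on first contact:

#!/bin/bash
# expand the crt-hadoop[01~06] shorthand used in this guide
HOSTS="crt-hadoop01 crt-hadoop02 crt-hadoop03 crt-hadoop04 crt-hadoop05 crt-hadoop06"

# 1. generate an RSA key pair on each node (empty passphrase, default location)
for h in $HOSTS; do
    ssh "$h" "ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa"
done

# 2. collect every node's public key into one authorized_keys file
for h in $HOSTS; do
    ssh "$h" 'cat /root/.ssh/id_rsa.pub'
done >> /root/.ssh/authorized_keys

# 3. distribute the merged files and fix permissions
for h in $HOSTS; do
    scp /root/.ssh/authorized_keys /root/.ssh/known_hosts "$h":/root/.ssh/
    ssh "$h" 'chmod 600 /root/.ssh/authorized_keys'
done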

b) Configure the hadoop user, which will run Hadoop

i. From one machine, create the hadoop user on all machines

# ssh crt-hadoop[01~06] 'adduser hadoop; passwd hadoop'

ii. Log in as the hadoop user

iii. Enable passwordless SSH login for the hadoop user as well: generate a key pair on each node with ssh-keygen (as in the root step), then collect the public keys

$ ssh crt-hadoop[01~06] 'cat /home/hadoop/.ssh/id_rsa.pub' >> /home/hadoop/.ssh/authorized_keys

iv. Copy authorized_keys and known_hosts to all machines

$ scp ~/.ssh/authorized_keys crt-hadoop[01~06]:~/.ssh/

$ scp ~/.ssh/known_hosts crt-hadoop[01~06]:~/.ssh/

$ ssh crt-hadoop[01~06] 'chmod 600 ~/.ssh/authorized_keys'
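To verify passwordless login for the hadoop user, a quick check such as the following (same six hostnames assumed as above) should print each hostname without asking for a password:

$ for h in crt-hadoop01 crt-hadoop02 crt-hadoop03 crt-hadoop04 crt-hadoop05 crt-hadoop06; do ssh "$h" hostname; done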

3. Install the JDK

a) Download Sun's jdk-6u30-linux-x64-rpm.bin

b) Install the JDK on all machines (logged in as root)

# scp jdk-6u30-linux-x64-rpm.bin crt-hadoop[01~06]:/home/hadoop/

# ssh crt-hadoop[01~06] 'chmod +x /home/hadoop/jdk-6u30-linux-x64-rpm.bin; /home/hadoop/jdk-6u30-linux-x64-rpm.bin'
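A quick check that the JDK actually landed on every node; the package name jdk and the /usr/java/default path are what the Sun RPM typically provides, but verify them on your own systems:

# ssh crt-hadoop[01~06] 'rpm -q jdk; /usr/java/default/bin/java -version'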

4. Install Hadoop

a) Download hadoop-1.0.0-1.amd64.rpm

b) Install Hadoop on all machines (logged in as root)

# scp hadoop-1.0.0-1.amd64.rpm crt-hadoop[01~06]:/home/hadoop/

# ssh crt-hadoop[01~06] 'rpm -ivh /home/hadoop/hadoop-1.0.0-1.amd64.rpm'

c) Make the Hadoop scripts executable

# ssh crt-hadoop[01~06] 'chmod +x /usr/sbin/*.sh'

d) Edit the Hadoop configuration files

/etc/hadoop/hdfs-site.xml

<configuration>

<property>

<name>dfs.replication</name>

<value>3</value>

</property>

</configuration>

Set crt-hadoop01 as the master:

/etc/hadoop/mapred-site.xml

<configuration>

<property>

<name>mapred.job.tracker</name>

<value>crt-hadoop01:9001</value>

</property>

</configuration>

/etc/hadoop/core-site.xml

<configuration>

<property>

<name>fs.default.name</name>

<value>hdfs://crt-hadoop01:9000</value>

</property>

</configuration>

/etc/hadoop/slaves

crt-hadoop02

crt-hadoop03

crt-hadoop04

crt-hadoop05

crt-hadoop06

/etc/hadoop/masters

crt-hadoop01
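Depending on how the JDK registers itself on the PATH, you may also need to point JAVA_HOME at the JDK in /etc/hadoop/hadoop-env.sh before distributing the configuration; a sketch, assuming the Sun RPM installed under /usr/java/jdk1.6.0_30:

# /etc/hadoop/hadoop-env.sh (add or uncomment)
export JAVA_HOME=/usr/java/jdk1.6.0_30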

e) Copy the configuration files to all machines

# scp /etc/hadoop/* crt-hadoop[02~06]:/etc/hadoop/

f) Turn off the firewall

# ssh crt-hadoop[01~06] 'service iptables save; service iptables stop; chkconfig iptables off'

g) Run a Hadoop example (logged in as hadoop)

$ hadoop namenode -format
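The HDFS and MapReduce daemons have to be running before data can be loaded. With the RPM layout used above, the start script is one of the /usr/sbin scripts made executable in step c); run it on the master (crt-hadoop01) as the hadoop user:

$ /usr/sbin/start-all.sh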

$ hadoop fs -put /etc/hadoop test-data

$ hadoop jar /usr/share/hadoop/hadoop-examples-*.jar grep test-data out 'dfs[a-z.]+'
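When the job finishes, the matches can be inspected in the out directory; with a single reducer the result is usually a part-00000 file, but the exact name can be confirmed with the listing first:

$ hadoop fs -ls out

$ hadoop fs -cat out/part-00000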
