Adding a DataNode to a Hadoop Cluster

1. Create the user
groupadd analyzer -f
useradd analyzer -d /opt/analyzer -g analyzer -p searchanalyzer
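Note that useradd's -p option expects an already-encrypted password string (as produced by crypt), so the plaintext searchanalyzer above will not end up as a working login password. If password login for the account is actually needed, it can be set afterwards (not part of the original steps):
passwd analyzer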

2. Update the /etc/hosts file
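The entries themselves are not shown here; the point is that every node, including the new one, can resolve every other node by hostname. A minimal sketch, assuming the new node is 10.1.4.34 / linux434 (the address and prompt seen later in this post):
10.1.4.34    linux434      # new datanode
# ...plus one line per existing node in the cluster, on every node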

3. Set up passwordless SSH login (copy each machine's id_rsa.pub to the others)
Generate the key pair:
ssh-keygen -t rsa
Exchange id_rsa.pub files between the machines so that each can log in to the others without a password; for example, pull the remote node's public key into a local collection directory:
scp analyzer@10.1.4.34:/opt/analyzer/.ssh/id_rsa.pub id_rsa_pub_dir/id_rsa_xxx.pub
Assemble the authorized_keys file from the collected public keys:
cat id_rsa_179.pub id_rsa.pub >authorized_keys
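Because the exchange has to work in both directions, the assembled authorized_keys is then pushed out to the other node's ~/.ssh as well; a sketch of that step (the original post only shows the local side):
scp authorized_keys analyzer@10.1.4.34:/opt/analyzer/.ssh/authorized_keys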
Set the permissions; incorrect permissions are a common cause of passwordless login failing:
chmod 644 authorized_keys
-rw-r--r-- 1 analyzer analyzer  397 May 12 16:53 authorized_keys
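If a password is still requested, the ~/.ssh directory itself is the usual culprit: sshd refuses to use keys when the directory is group- or world-writable. A quick fix plus a test, assuming the node addresses above:
chmod 700 ~/.ssh
ssh analyzer@10.1.4.34 hostname    # should print the remote hostname without prompting for a password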

4. Set environment variables
[analyzer@linux434 ~]$ vi ~/.bash_profile
PATH=$PATH:$HOME/bin
export PATH
unset USERNAME
export JAVA_HOME=$HOME/jdk1.6.0_18
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=$HOME/hadoop
export HIVE_HOME=$HOME/hive
[analyzer@linux434 ~]$ source ~/.bash_profile
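A quick check that the variables took effect (a sketch, not part of the original post; output depends on the installed versions):
echo $JAVA_HOME $HADOOP_HOME
$HADOOP_HOME/bin/hadoop version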

5. Install hadoop and hive (copied over from an existing node)
scp -r /opt/analyzer/hadoop analyzer@10.1.4.34:/opt/analyzer/hadoop
scp -r /opt/analyzer/hive analyzer@10.1.4.34:/opt/analyzer/hive
scp -r /opt/analyzer/db-derby-10.6.1.0-bin analyzer@10.1.4.34:/opt/analyzer/db-derby-10.6.1.0-bin
scp -r /opt/analyzer/jdk1.6.0_18 analyzer@10.1.4.34:/opt/analyzer/jdk1.6.0_18
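If the master maintains a conf/slaves file (used by start-all.sh/stop-all.sh), the new node's hostname should be added there too so it is included in future cluster-wide starts. This is an assumption about the cluster's setup, with linux434 taken as the new node's hostname:
echo "linux434" >> /opt/analyzer/hadoop/conf/slaves    # run on the master node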

6. Start the datanode and tasktracker on the new node
/opt/analyzer/hadoop/bin/hadoop-daemon.sh start datanode
/opt/analyzer/hadoop/bin/hadoop-daemon.sh start tasktracker
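To verify the node actually joined the cluster, check the Java processes on the new node and the datanode report from the namenode (standard Hadoop commands, not shown in the original post):
jps                                               # should list DataNode and TaskTracker
/opt/analyzer/hadoop/bin/hadoop dfsadmin -report  # the new node should appear in the datanode list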

7. Rebalance the blocks
In hdfs-site.xml, raise the bandwidth available to the balancer; the default is only 1 MB/s:
<property>
    <name>dfs.balance.bandwidthPerSec</name>
    <value>10485760</value>
    <description>
        Specifies the maximum bandwidth that each datanode can utilize for the balancing purpose in term of the number of bytes per second.
    </description>
</property>

Then run the following command:

/opt/analyzer/hadoop/bin/start-balancer.sh -threshold 5

Balancing 10 nodes and moving roughly 400 GB of data took about 3 hours:

The cluster is balanced. Exiting…
Balancing took 2.9950980555555557 hours
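For reference, -threshold 5 tells the balancer to keep moving blocks until every datanode's utilization is within 5 percentage points of the cluster average. The balancer can also be stopped safely at any point with the companion script (not shown in the original post):
/opt/analyzer/hadoop/bin/stop-balancer.sh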
