Hadoop Enterprise Cluster Architecture - NFS Installation
Server address: 192.168.1.230
Install the NFS software
Check whether NFS is installed:
rpm -qa | grep nfs
Check the rpcbind and nfs services:
systemctl list-unit-files | grep "nfs"
systemctl list-unit-files | grep "rpcbind"
systemctl enable nfs-server.service
systemctl enable rpcbind.service
systemctl list-unit-files | grep -E "rpcbind.service|nfs-server.service"
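The check-then-enable steps above can be combined into one idempotent snippet (a sketch, assuming a systemd-based distribution such as CentOS 7; the unit names `rpcbind.service` and `nfs-server.service` are the ones grepped for above):

```shell
# Enable and start both services only if their unit files exist
for svc in rpcbind nfs-server; do
    if systemctl list-unit-files | grep -q "^${svc}\.service"; then
        systemctl enable "$svc"   # start at every boot
        systemctl start "$svc"    # start immediately
    else
        echo "unit ${svc}.service not found; install nfs-utils first" >&2
    fi
done
```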
Check the NFS and RPC service status:
service rpcbind status
service nfs status
Create the service account:
groupadd grid
useradd -m -s /bin/bash -g grid grid
passwd grid
<grid123>
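The `passwd` prompt above is interactive; for a script, the same account setup can be done non-interactively with `chpasswd` (a sketch; `grid123` is the password noted above):

```shell
# Create the grid group and user idempotently, then set the password without a prompt
groupadd -f grid                                   # -f: no error if the group exists
id -u grid >/dev/null 2>&1 || useradd -m -s /bin/bash -g grid grid
echo 'grid:grid123' | chpasswd                     # same password as the interactive step
```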
Edit /etc/exports and add:
/home/grid *(rw,sync,no_root_squash)
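As a sketch of applying this entry, the line can be appended once and the export table reloaded with `exportfs` instead of restarting the services (the restart below also works):

```shell
# Append the export entry once, then re-read /etc/exports
grep -q '^/home/grid ' /etc/exports 2>/dev/null || \
    echo '/home/grid *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra   # reload the export table
exportfs -v    # list active exports for verification
```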
Restart rpcbind and nfs:
systemctl restart rpcbind
systemctl restart nfs
Verify:
showmount -e 192.168.1.230
Mount the shared directory on each node
If the NFS mount fails with the error "wrong fs type, bad option", the client-side NFS tools are missing. Solution:
yum -y install nfs-utils
mount -t nfs 192.168.1.230:/home/grid /nfs_share/
Check the mount:
mount
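The bare `mount` listing is long; one way to narrow it to the NFS share (a sketch, assuming `findmnt` from util-linux is available, as it is on CentOS 7):

```shell
# Narrow the full mount table down to the NFS entry
mount | grep ':/home/grid'
# util-linux alternative with a structured column view:
findmnt -t nfs,nfs4
```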
Set up automatic mounting at boot
vi /etc/fstab
Add the following line:
192.168.1.230:/home/grid /nfs_share nfs defaults 0 0
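The entry has the usual six fstab fields: source, mount point, filesystem type, options, dump flag, and fsck order. One way to verify it without rebooting (a sketch):

```shell
# Test the new fstab entry without rebooting
umount /nfs_share 2>/dev/null   # drop any manual mount first
mount -a                        # mount everything listed in /etc/fstab
findmnt /nfs_share              # confirm the share is now mounted
```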
Configure SSH on the NFS server and on each node
Append each node's public key to the NFS server's authorized_keys:
ssh h1.hadoop.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh h2.hadoop.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh h3.hadoop.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
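The three commands above can be written as one loop; a `chmod` at the end is worth adding, since sshd rejects key files with loose permissions (a sketch using the hostnames from the source):

```shell
# Gather every node's public key into the server's authorized_keys
for host in h1.hadoop.com h2.hadoop.com h3.hadoop.com; do
    ssh "$host" cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
done
chmod 600 ~/.ssh/authorized_keys   # sshd ignores group/world-readable key files
```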
Create a symlink on each node to the shared authorized_keys file
Install on each node:
yum -y install nfs-utils
mkdir /nfs_share
mount -t nfs 192.168.1.230:/home/grid /nfs_share/
ln -s /nfs_share/.ssh/authorized_keys ~/.ssh/authorized_keys
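The per-node steps above can be made re-runnable with `-p`, `-f`/`-n`, and a mount check (a sketch; `mountpoint` is from util-linux, and the plain commands above work equally well on a fresh node):

```shell
# Per-node client setup: install, mount, and link the shared authorized_keys
yum -y install nfs-utils
mkdir -p /nfs_share                       # -p: no error if it already exists
mountpoint -q /nfs_share || mount -t nfs 192.168.1.230:/home/grid /nfs_share/
mkdir -p ~/.ssh
ln -sfn /nfs_share/.ssh/authorized_keys ~/.ssh/authorized_keys   # -f replaces an existing file
```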