<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
</configuration>
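Before moving on, it is worth checking that each edited file is still well-formed XML; a stray or unclosed `<property>` tag will make Hadoop fail at startup with a parse error. A minimal check using Python's standard library (the path assumes this guide's /usr/hadoop install location):

```shell
# Parse core-site.xml; ET.parse raises an error if the XML is malformed.
# The path assumes the /usr/hadoop layout used in this guide.
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/usr/hadoop/etc/hadoop/core-site.xml')" \
  && echo "core-site.xml is well-formed"
```

The same one-liner can be pointed at hdfs-site.xml, mapred-site.xml, and yarn-site.xml after editing them.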
Configure hdfs-site.xml // add the HDFS settings (NameNode and DataNode ports and directory locations)
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>Master.Hadoop:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
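The `dfs.namenode.name.dir` and `dfs.datanode.data.dir` directories referenced above are not always created automatically, so it is safest to create them before formatting HDFS; a sketch, assuming the /usr/hadoop layout used throughout this guide:

```shell
# Create the NameNode metadata and DataNode block directories
# referenced in hdfs-site.xml (paths assume this guide's layout).
HADOOP_HOME="${HADOOP_HOME:-/usr/hadoop}"
mkdir -p "$HADOOP_HOME/dfs/name" "$HADOOP_HOME/dfs/data" "$HADOOP_HOME/tmp"
# Then hand the tree to the user running Hadoop, e.g.:
#   sudo chown -R ysu:ysu "$HADOOP_HOME"
```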
Configure the mapred-site.xml file // add the MapReduce settings (use the YARN framework; JobHistory address and web UI address)
Note: the /usr/hadoop/etc/hadoop/ directory contains a mapred-site.xml.template file, which must be copied and renamed:
cp mapred-site.xml.template mapred-site.xml
sudo gedit mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>Master.Hadoop:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>Master.Hadoop:19888</value>
</property>
</configuration>
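The JobHistory addresses configured above only take effect once the history server is actually running; it is not started by start-dfs.sh or start-yarn.sh. In Hadoop 2.x it is launched separately on the master (path assumes this guide's install location):

```shell
# Start the MapReduce JobHistory server on Master.Hadoop
# (Hadoop 2.x; requires a configured cluster, so run this on the master)
/usr/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver
```

Without it, the web UI at Master.Hadoop:19888 will not respond.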
Configure yarn-site.xml // enable the YARN features
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>Master.Hadoop:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>Master.Hadoop:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>Master.Hadoop:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>Master.Hadoop:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>Master.Hadoop:8088</value>
</property>
</configuration>
The Master's Hadoop configuration is now complete; next, configure Hadoop on the slaves.
Simply copy the hadoop directory from the master to each slave.
This can be done as a normal user or as root; mind the difference between sudo and su.
Command: scp -r /usr/hadoop root@Slave1.Hadoop:/usr/
On the slave, give the ysu user access to the hadoop directory:
chown -R ysu:ysu hadoop
Edit /etc/profile to add the Hadoop path:
sudo gedit /etc/profile
Append to the file:
# set hadoop path
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
With that, Hadoop is configured on the slave machines as well.
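After reloading /etc/profile (with `source /etc/profile` or by logging in again), you can confirm the Hadoop binaries are reachable; a quick sketch that recreates the exports above and checks the PATH entry:

```shell
# Recreate the profile exports and verify the hadoop bin dir is on PATH
export HADOOP_HOME=/usr/hadoop
export PATH="$PATH:$HADOOP_HOME/bin"
echo "$PATH" | tr ':' '\n' | grep -qx "$HADOOP_HOME/bin" && echo "hadoop bin on PATH"
```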
3.8 Start Hadoop
Format HDFS (run once on the master, before the first startup):
hdfs namenode -format
Start Hadoop:
/usr/hadoop/sbin/start-dfs.sh
/usr/hadoop/sbin/start-yarn.sh
3.9 Check the Cluster and Processes
jps // the jps command lists the running Java daemons
/usr/hadoop/bin/hdfs dfsadmin -report // check the cluster (no output yet; to be fixed)
Open in Firefox:
Master.Hadoop:50070
Master.Hadoop:8088 (no output yet; to be fixed)
Slave1.Hadoop:8042