Distributed Environment Setup and Configuration (2)

3. Configuring the Hadoop Environment

3.1 Configure the JDK environment (this was done earlier, so it is not repeated here):

export JAVA_HOME=/opt/jdk1.7.0_21
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
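
To confirm the JDK is picked up, a quick check (assuming the exports above are in effect in the current shell):

java -version
echo $JAVA_HOME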

3.2 Download Hadoop from the official website and extract it to /opt/ (hadoop-2.0.4-alpha is used here); a sketch of the extraction follows.
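
A minimal sketch, assuming the release tarball has already been downloaded into /opt:

cd /opt
tar -xzf hadoop-2.0.4-alpha.tar.gz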

Then go to the directory /opt/hadoop-2.0.4-alpha/etc/hadoop and edit the Hadoop configuration files.

Edit hadoop-env.sh:

export HADOOP_PREFIX=/opt/hadoop-2.0.4-alpha
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export JAVA_HOME=/opt/jdk1.7.0_21
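
A quick sanity check that the binaries resolve (in a new shell that has picked up the PATH additions above):

/opt/hadoop-2.0.4-alpha/bin/hadoop version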

Edit hdfs-site.xml:

<configuration>
 <property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/opt/hadoop-2.0.4-alpha/workspace/name</value>
  <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy.</description>
  <final>true</final>
 </property>
 <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/opt/hadoop-2.0.4-alpha/workspace/data</value>
  <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>
  <final>true</final>
 </property>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
 </property>
 <property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
 </property>
</configuration>
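
The name and data directories referenced above may need to exist before the NameNode is formatted; creating them up front on each node is a harmless, safe step (the paths match the values above):

mkdir -p /opt/hadoop-2.0.4-alpha/workspace/name
mkdir -p /opt/hadoop-2.0.4-alpha/workspace/data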

Edit mapred-site.xml. (Some 2.x tarballs ship only mapred-site.xml.template; if so, create the file from the template first, as shown below, then add the configuration.)
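
A sketch of that copy step (the path assumes the install location used throughout):

cp /opt/hadoop-2.0.4-alpha/etc/hadoop/mapred-site.xml.template /opt/hadoop-2.0.4-alpha/etc/hadoop/mapred-site.xml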

<configuration>
 <property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
 </property>
 <property>
  <name>mapreduce.job.tracker</name>
  <value>hdfs://yan-Server:9001</value>
  <final>true</final>
 </property>
 <property>
  <name>mapreduce.map.memory.mb</name>
  <value>1536</value>
 </property>
 <property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024M</value>
 </property>
 <property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>3072</value>
 </property>
 <property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2560M</value>
 </property>
 <property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>512</value>
 </property>
 <property>
  <name>mapreduce.task.io.sort.factor</name>
  <value>100</value>
 </property>
 <property>
  <name>mapreduce.reduce.shuffle.parallelcopies</name>
  <value>50</value>
 </property>
 <property>
  <name>mapred.system.dir</name>
  <value>file:/opt/hadoop-2.0.4-alpha/workspace/systemdir</value>
  <final>true</final>
 </property>
 <property>
  <name>mapred.local.dir</name>
  <value>file:/opt/hadoop-2.0.4-alpha/workspace/localdir</value>
  <final>true</final>
 </property>
</configuration>
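
Note on the memory settings above: each java.opts heap (-Xmx1024M for map, -Xmx2560M for reduce) is deliberately smaller than its container size (1536 MB and 3072 MB respectively); the gap leaves headroom for the JVM's non-heap memory, since YARN kills containers that exceed their allocated size.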

Edit yarn-env.sh:

export HADOOP_PREFIX=/opt/hadoop-2.0.4-alpha
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export JAVA_HOME=/opt/jdk1.7.0_21

Edit yarn-site.xml:

<configuration>
 <property>
  <name>yarn.resourcemanager.address</name>
  <value>yan-Server:8080</value>
 </property>
 <property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>yan-Server:8081</value>
 </property>
 <property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>yan-Server:8082</value>
 </property>
 <property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce.shuffle</value>
 </property>
 <property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
 </property>
</configuration>

Copy the configured Hadoop tree to each DataNode, as sketched below. (The DataNodes' JDK configuration is identical to the master's, so the JDK settings do not need to be changed again.)
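
A sketch of the copy, using the DataNode IP addresses configured in section 3.3 (until the SSH keys from section 3.5 are in place, each scp will prompt for the root password):

for ip in 192.168.137.201 192.168.137.202 192.168.137.203; do
  scp -r /opt/hadoop-2.0.4-alpha root@$ip:/opt/
done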

3.3 Edit the master's /etc/hosts, adding entries for the NameNode and all DataNodes:

192.168.137.100 yan-Server
192.168.137.201 node1
192.168.137.202 node2
192.168.137.203 node3

3.4 Edit each DataNode's /etc/hosts file, adding the same entries:

192.168.137.100 yan-Server
192.168.137.201 node1
192.168.137.202 node2
192.168.137.203 node3

3.5 Configure passwordless SSH login (all hosts are logged in as the root user).

Run the following command on the master:

ssh-keygen -t rsa

Press Enter at each prompt, then copy .ssh/id_rsa.pub into the .ssh/authorized_keys file under the root home directory on each DataNode (one way to do this is sketched below).
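
A sketch of the key distribution, assuming ssh-copy-id is available (it appends the public key to the remote authorized_keys file and sets the file permissions; each run prompts for that node's root password once):

for node in node1 node2 node3; do
  ssh-copy-id root@$node
done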

Then log in remotely from the master once:

ssh root@node1

The first login may require entering a password; after that it is no longer needed. (Log in to each of the other DataNodes once as well to make sure passwordless login works.)
