The latest Hadoop 2.5 release reorganized its installation directory layout somewhat, and installation has become slightly simpler.
First, install the prerequisite tools:
$ sudo apt-get install ssh
$ sudo apt-get install rsync
Configure SSH:
$ ssh localhost
If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
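If ssh localhost still prompts for a password after this, file permissions are a common culprit: sshd refuses keys stored in group- or world-writable locations. Tightening them usually fixes it:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys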
Edit etc/hadoop/hadoop-env.sh to configure the runtime environment:
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest
# Assuming your installation directory is /usr/local/hadoop
export HADOOP_PREFIX=/usr/local/hadoop
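As a quick sanity check, run the following from the installation root; once JAVA_HOME resolves correctly it prints the Hadoop version and build info:
$ bin/hadoop version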
Configure the HDFS port and the replication factor:
etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <!-- user permitted to call getBlockLocalPathInfo over ClientDatanodeProtocol
       (used for short-circuit local reads) -->
  <property>
    <name>dfs.block.local-path-access.user</name>
    <value>infomorrow</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/infomorrow/hadoop-tmp</value>
  </property>
</configuration>
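It may be worth creating the hadoop.tmp.dir directory up front so the namenode format step below has a writable target (path as configured above):
$ mkdir -p /home/infomorrow/hadoop-tmp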
etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
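Once the config files are in place, hdfs getconf offers a quick way to confirm that Hadoop sees the values you set:
$ bin/hdfs getconf -confKey fs.defaultFS
$ bin/hdfs getconf -confKey dfs.replication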
Configure MapReduce to run on YARN:
etc/hadoop/mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
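Note that the stock Hadoop 2.x tarball ships only a template for this file; if etc/hadoop/mapred-site.xml does not exist yet, copy it from the template first:
$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml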
etc/hadoop/yarn-site.xml:
At startup the NodeManager loads a shuffle server (in practice a Jetty/Netty server); Reduce Tasks use this server to remotely copy the intermediate results produced by Map Tasks on each NodeManager.
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
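One way to confirm the aux service actually loaded: once the NodeManager is running (see the startup steps below), the ShuffleHandler listens on mapreduce.shuffle.port, which defaults to 13562 in Hadoop 2.x:
$ netstat -tlnp 2>/dev/null | grep 13562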
Startup procedure:
HDFS:
$ bin/hdfs namenode -format    (first use only)
$ sbin/start-dfs.sh
Open the monitoring page at http://localhost:50070/
Create directories on HDFS:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
List the directories created on HDFS:
$ bin/hadoop fs -ls /
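At this point you can smoke-test HDFS and MapReduce with one of the bundled example jobs (the jar filename below assumes a stock 2.5.0 download):
$ bin/hdfs dfs -put etc/hadoop input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar grep input output 'dfs[a-z.]+'
$ bin/hdfs dfs -cat output/*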
YARN:
$ sbin/start-yarn.sh
Open the monitoring page at http://localhost:8088/
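A quick way to verify that everything came up: jps (shipped with the JDK) should list NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager processes:
$ jps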
Shutdown:
$ sbin/stop-dfs.sh
$ sbin/stop-yarn.sh
If HDFS is stuck in safe mode, leave it with:
$ bin/hdfs dfsadmin -safemode leave
(bin/hadoop dfsadmin still works, but in Hadoop 2.x it is deprecated in favor of bin/hdfs dfsadmin.)
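To just check the current state without changing it:
$ bin/hdfs dfsadmin -safemode get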
To use Spark, just install Scala and Spark on the cluster nodes and add the following to spark-env.sh:
export SCALA_HOME=/home/juxinli/scala-2.11.5
export JAVA_HOME=/usr/lib/jvm/java-8-sun
export HADOOP_HOME=/home/juxinli/hadoop-2.5.0
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_JAR=/home/juxinli/spark-1.2.0-bin-hadoop2.4/lib/spark-assembly-1.2.0-hadoop2.4.0.jar
Finally, add the hostnames of the worker nodes to the slaves file.
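As a smoke test, the SparkPi example bundled with the Spark distribution can be submitted to YARN (the paths match the versions assumed in spark-env.sh above):
$ /home/juxinli/spark-1.2.0-bin-hadoop2.4/bin/spark-submit \
    --master yarn-cluster \
    --class org.apache.spark.examples.SparkPi \
    /home/juxinli/spark-1.2.0-bin-hadoop2.4/lib/spark-examples-1.2.0-hadoop2.4.0.jar 10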