1. Install the JDK. Being lazy, I install it straight from yum, which saves configuring the environment variables by hand:
yum install java-1.6.0-openjdk java-1.6.0-openjdk-devel -y
2. Set up passwordless SSH login to the local machine:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Afterwards, run ssh localhost; if it logs in without prompting for a password, the setup succeeded.
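If ssh localhost still asks for a password, the usual culprit is the permission bits on authorized_keys. The steps above, plus that fix, can be sketched as below. This is a dry-run against a throwaway directory so it is safe to execute as-is; point DEMO at your real ~/.ssh to apply it for real. Note the key type is swapped to rsa here, since recent OpenSSH releases no longer generate DSA keys:

```shell
# Dry-run of the key setup against a throwaway directory; point DEMO at
# your real ~/.ssh (and re-run the commands) to apply it for real.
DEMO=$(mktemp -d)

# The article uses -t dsa; rsa is used here because recent OpenSSH
# releases no longer generate DSA keys.
ssh-keygen -q -t rsa -P '' -f "$DEMO/id_rsa"

# Authorize the public key and tighten permissions -- sshd ignores
# authorized_keys files that are group- or world-accessible.
cat "$DEMO/id_rsa.pub" >> "$DEMO/authorized_keys"
chmod 700 "$DEMO"
chmod 600 "$DEMO/authorized_keys"
```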
3. Download and install Hadoop. I am using hadoop-1.2.1, installed directly from the rpm package. After downloading it from the official site, run:
rpm -Uvh hadoop-1.2.1-1.x86_64.rpm
4. Edit /etc/hadoop/hadoop-env.sh and point JAVA_HOME at the real JDK location. Because the JDK was installed via yum, the file still holds its default value; change it to:
export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/
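Rather than hard-coding the path, you can derive it from the yum-installed java binary: readlink -f resolves the alternatives symlinks, and stripping the trailing bin/java (or jre/bin/java) segment leaves JAVA_HOME. The sample path below just illustrates the string manipulation; on a live system you would feed in the readlink output instead:

```shell
# On a live system, obtain the resolved path with:
#   java_bin=$(readlink -f "$(which java)")
# Here a sample path stands in, to show the transformation.
java_bin=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/bin/java

# Strip the trailing .../jre/bin/java (or .../bin/java) to get JAVA_HOME
java_home=$(echo "$java_bin" | sed 's|/jre/bin/java$||; s|/bin/java$||')
echo "$java_home"   # prints /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64
```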
5. Go to /etc/hadoop/ and edit mapred-site.xml, core-site.xml and hdfs-site.xml:
core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop1.linuxidc.com:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop1.linuxidc.com:9001</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
Configuration complete.
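Before the cluster is usable, HDFS still has to be formatted and the daemons started. A typical first-run sequence for a Hadoop 1.x install is sketched below; script locations can differ between the rpm and tarball layouts, so adjust paths if these are not already on your PATH:

```shell
# Format HDFS once, on first use only -- reformatting wipes the namespace
hadoop namenode -format

# Start the HDFS and MapReduce daemons
start-all.sh

# jps should now list NameNode, DataNode, SecondaryNameNode,
# JobTracker and TaskTracker
jps
```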