Hadoop Standalone and Distributed Cluster Deployment on Ubuntu (4)


3. Configuration
4) For convenience when using the hadoop command, start-all.sh, and similar scripts, edit /etc/profile on the Master and append the following:
export JAVA_HOME=/usr/lib/jdk1.6.0_33
export JRE_HOME=/usr/lib/jdk1.6.0_33/jre
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_HOME=/opt/hadoop
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH:$HADOOP_HOME/bin
After making the changes, run source /etc/profile to apply them.
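Once the profile has been reloaded, a quick sanity check (a minimal sketch; the paths are the ones configured above) confirms the variables are in effect:

# The variables exported in /etc/profile should now be visible
echo $JAVA_HOME     # expected: /usr/lib/jdk1.6.0_33
echo $HADOOP_HOME   # expected: /opt/hadoop
java -version       # expected: the 1.6.0_33 JDK
hadoop version      # expected: Hadoop 0.20.2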

Configure the files under conf:
vim hadoop-env.sh
export JAVA_HOME=/usr/lib/jdk1.6.0_33

vim core-site.xml
----------------------------------
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-datastore/</value>
    <description>A base for other temporary directories.</description>
  </property>

<property>
    <name>fs.default.name</name>
    <value>hdfs://Master.Hadoop:54310</value>
    <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>
-----------------------------------------
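hadoop.tmp.dir is the base directory Hadoop uses for its local data; the NameNode keeps its image under dfs/name inside it (as the format output further below shows), so this directory must exist on the Master and be writable by the hadoop user. A minimal sketch, using the same path as configured above:

# On the Master, as root: create the data directory and hand it to the hadoop user
mkdir -p /opt/hadoop-datastore/
chown -R hadoop:hadoop /opt/hadoop-datastore/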
vim hdfs-site.xml
------------------------------------------
<configuration>
<property>
  <name>dfs.replication</name>
  <value>3</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
</configuration>
-------------------------------------
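dfs.replication only sets the cluster-wide default; individual files can use a different factor, either at creation time or afterwards. A small example once the cluster is running (the path /user/hadoop/test.txt is purely illustrative):

# Change the replication factor of one file and wait for the copies to be made
hadoop fs -setrep -w 2 /user/hadoop/test.txt
# The second column of hadoop fs -ls shows the replication factor in effect
hadoop fs -ls /user/hadoop/test.txt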
vim mapred-site.xml
------------------------------------
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>Master.Hadoop:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>
-------------------------------------
vim masters
Master.Hadoop
root@Master:/opt/hadoop/conf# vim slaves
Slave1.Hadoop
Slave2.Hadoop


Using method 3, copy the Hadoop installation from the Master to each Slave.
Switch to the root user:
su root
Run scp -r hadoop Slave1.Hadoop:/opt/
On Slave1.Hadoop:
su root
chown -R hadoop:hadoop /opt/hadoop/
Create the data directory:
mkdir /opt/hadoop-datastore/
chown -R hadoop:hadoop /opt/hadoop-datastore/
Do the same on each of the other Slaves (a scripted version is sketched below).
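With several Slaves, the copy and ownership steps are easier to script. A rough sketch of the same steps, assuming root SSH access to the hostnames listed in conf/slaves:

# Run on the Master as root: push the installation and prepare the data directory on every Slave
for slave in Slave1.Hadoop Slave2.Hadoop; do
  scp -r /opt/hadoop ${slave}:/opt/
  ssh ${slave} "chown -R hadoop:hadoop /opt/hadoop/ && mkdir -p /opt/hadoop-datastore/ && chown -R hadoop:hadoop /opt/hadoop-datastore/"
done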

On the NameNode, format the Hadoop filesystem:
root@Master:/opt/hadoop/bin# hadoop namenode -format
Output:
12/07/23 18:54:36 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Master.Hadoop/10.2.128.46
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /opt/hadoop-datastore/dfs/name ? (Y or N) y
Format aborted in /opt/hadoop-datastore/dfs/name
12/07/23 18:54:45 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Master.Hadoop/10.2.128.46
************************************************************/
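Note the last two lines of this output: the re-format prompt was answered with a lowercase y, and this Hadoop release only accepts an uppercase Y, so the format was actually aborted. Re-run the command and answer Y at the prompt; on success the NameNode logs that the storage directory under /opt/hadoop-datastore/dfs/name has been successfully formatted.

# Re-run the format; type an uppercase Y when asked to re-format the filesystem
hadoop namenode -format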

Start Hadoop
./start-all.sh
root@Master:/opt# chown -R hadoop:hadoop /opt/hadoop/
root@Master:/opt# chown -R hadoop:hadoop /opt/hadoop-datastore/
root@Master:/opt# su hadoop
hadoop@Master:/opt$ cd hadoop/bin/

hadoop@Master:/opt/hadoop/bin$ ./start-all.sh

Problems encountered:
starting namenode, logging to /opt/hadoop/bin/../logs/hadoop-hadoop-namenode-Master.Hadoop.out
Slave1.Hadoop: datanode running as process 7309. Stop it first.
Slave2.Hadoop: datanode running as process 4920. Stop it first.
Master.Hadoop: starting secondarynamenode, logging to /opt/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-Master.Hadoop.out
starting jobtracker, logging to /opt/hadoop/bin/../logs/hadoop-hadoop-jobtracker-Master.Hadoop.out
Slave1.Hadoop: tasktracker running as process 7477. Stop it first.
Slave2.Hadoop: tasktracker running as process 5088. Stop it first.
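The leftover DataNode and TaskTracker processes come from an earlier start; stopping everything and starting again clears them, and jps then shows which daemons actually came up. A rough recovery sketch, run as the hadoop user on the Master:

# Stop any daemons still running from the previous attempt (also reaches the Slaves)
/opt/hadoop/bin/stop-all.sh
# Start the whole cluster again
/opt/hadoop/bin/start-all.sh
# jps on the Master should now list NameNode, SecondaryNameNode and JobTracker;
# jps on each Slave should list DataNode and TaskTracker
jps

The NameNode and JobTracker web interfaces (ports 50070 and 50030 by default in this release) give the same picture in a browser.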
