HBase Installation and Deployment
1. Download a stable release from the official website; the HBase version must be compatible with your Hadoop version. This walkthrough uses hbase-0.20.6.
2. Edit hbase-env.sh and add:
export JAVA_HOME=/hadoop/jdk1.6.0_22
export HBASE_HOME=/hadoop/hbase-0.20.6
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_HOME/hbase-0.20.6.jar:$HBASE_HOME/hbase-0.20.6-test.jar:$HBASE_HOME/conf:${HBASE_HOME}/lib/zookeeper-3.3.2.jar
export HBASE_CLASSPATH=/hadoop/hadoop-0.20.2/conf
export HBASE_MANAGES_ZK=false
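A jar name typo in HADOOP_CLASSPATH (e.g. "habse" for "hbase") fails silently until something needs the class. A quick sketch of a sanity check you can run after editing hbase-env.sh; the scratch directory here stands in for /hadoop/hbase-0.20.6 so the check runs anywhere:

```shell
# Verify every classpath entry named in hbase-env.sh exists on disk.
# HBASE_HOME is a scratch stand-in populated with the expected names.
HBASE_HOME=$(mktemp -d)
mkdir -p "$HBASE_HOME/conf" "$HBASE_HOME/lib"
touch "$HBASE_HOME/hbase-0.20.6.jar" \
      "$HBASE_HOME/hbase-0.20.6-test.jar" \
      "$HBASE_HOME/lib/zookeeper-3.3.2.jar"
missing=0
for entry in "$HBASE_HOME/hbase-0.20.6.jar" \
             "$HBASE_HOME/hbase-0.20.6-test.jar" \
             "$HBASE_HOME/conf" \
             "$HBASE_HOME/lib/zookeeper-3.3.2.jar"; do
  [ -e "$entry" ] || { echo "missing: $entry"; missing=$((missing + 1)); }
done
echo "missing entries: $missing"
```

On a real node, point HBASE_HOME at /hadoop/hbase-0.20.6 and list the entries exactly as they appear in hbase-env.sh.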
3. Edit the regionservers file and add the following list, which gives the IP address of each region server:
10.232.56.78
10.232.56.85
10.232.56.97
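The start scripts consume this file one host per line, launching a region server daemon on each. A minimal sketch of that loop, using a scratch copy of the file (the real one lives in $HBASE_HOME/conf):

```shell
# Emulate how start-hbase.sh-style scripts iterate over conf/regionservers.
RS_FILE=$(mktemp)
cat > "$RS_FILE" <<'EOF'
10.232.56.78
10.232.56.85
10.232.56.97
EOF
count=0
while read -r host; do
  echo "would start region server on $host"  # real script: ssh "$host" ...
  count=$((count + 1))
done < "$RS_FILE"
echo "region servers: $count"
```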
4. Edit hbase-site.xml and add the following properties:
<property>
<name>hbase.master</name>
<value>HQxTVL-HADOOP-A01.site:60000</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://HQxTVL-HADOOP-A01.site:9000/hbase</value>
<description>The directory shared by region servers.
Should be fully-qualified to include the filesystem to use.
E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>HQxTVL-HADOOP-A01.site,HQxTVL-HADOOP-A02.site,HQxTVL-HADOOP-A03.site</value>
<description>An odd number of ZooKeeper quorum servers is recommended; you do not need to list every machine in the cluster.
</description>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/hadoop/lcd-home/zkdata/data</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
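Malformed XML in hbase-site.xml makes the master fail at startup, so it is worth checking that the tags pair up before restarting. A rough sketch of such a check on a scratch copy of one property from above (a real XML validator would be stricter; this only counts tag pairs):

```shell
# Count opening vs closing <property> tags in a config fragment.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
EOF
open=$(grep -c '<property>' "$CONF")
close=$(grep -c '</property>' "$CONF")
[ "$open" -eq "$close" ] && echo "balanced: $open property block(s)"
```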
Notes:
1. Before starting, add the hostname/IP mappings to /etc/hosts on every machine.
2. Update the environment variables in .profile to add the ZooKeeper jars to the classpath.
3. When following material found online, check the version number and do not copy configuration items blindly; configuration keys differ between versions, especially port numbers.
4. Problems encountered:
1) The ZooKeeper port number was wrong; it is set via hbase.zookeeper.property.clientPort in hbase-site.xml.
2010-12-17 17:20:46,863 WARN org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper: Failed to create /hbase -- check quorum servers, currently=HQxTVL-HADOOP-A03.site:22222,HQxTVL-HADOOP-A02.site:22222,HQxTVL-HADOOP-A01.site:22222
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:780)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:808)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureExists(ZooKeeperWrapper.java:405)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureParentExists(ZooKeeperWrapper.java:432)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.writeMasterAddress(ZooKeeperWrapper.java:520)
at org.apache.hadoop.hbase.master.HMaster.writeAddressToZooKeeper(HMaster.java:263)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:245)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1233)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
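The ConnectionLoss above means HBase tried the quorum on a port (22222 in the log) that ZooKeeper was not listening on. A hedged sketch of the fix: hbase.zookeeper.property.clientPort (the HBase 0.20.x property that mirrors clientPort in zoo.cfg) must match the quorum's real client port. The fragment is written to a scratch file here; in practice it goes into $HBASE_HOME/conf/hbase-site.xml on every node:

```shell
# Write the clientPort property fragment; 22222 is the port from the log.
FRAGMENT=$(mktemp)
cat > "$FRAGMENT" <<'EOF'
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>22222</value>
</property>
EOF
grep -q 'clientPort' "$FRAGMENT" && echo "fragment ready"
```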
2) hbase.rootdir was not set correctly. Add the corresponding mapping to /etc/hosts and use the address that the error message reports as expected.
2010-12-14 17:41:37,673 INFO org.apache.hadoop.hbase.master.HMaster: vmName=Java HotSpot(TM) Server VM, vmVendor=Sun Microsystems Inc., vmVersion=17.1-b03
2010-12-14 17:41:37,674 INFO org.apache.hadoop.hbase.master.HMaster: vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -Dhbase.log.dir=/hadoop/hbase-0.20.6/logs, -Dhbase.log.file=hbase-hadoop-master-HQxTVL-HADOOP-A01.log, -Dhbase.home.dir=/hadoop/hbase-0.20.6, -Dhbase.id.str=hadoop, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/hadoop/hbase-0.20.6/lib/native/Linux-i386-32]
2010-12-14 17:41:37,736 INFO org.apache.hadoop.hbase.master.HMaster: My address is HQxTVL-HADOOP-A01.site:60000
2010-12-14 17:41:38,068 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
java.lang.IllegalArgumentException: Wrong FS: hdfs://10.232.56.78:9000/hbase, expected: hdfs://HQxTVL-HADOOP-A01.site:9000
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
at org.apache.hadoop.hdfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:99)
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:155)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:453)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:648)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:204)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
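The "Wrong FS" check compares the scheme://host:port of hbase.rootdir against fs.default.name literally, so an IP and the hostname it resolves to are still *different* filesystems to Hadoop. The fix is to use the hostname form everywhere and map 10.232.56.78 to HQxTVL-HADOOP-A01.site in /etc/hosts on every node (the A02/A03 mappings are analogous but assumed here). A sketch of the comparison the master effectively performs:

```shell
# Compare hbase.rootdir's authority against fs.default.name, as a prefix
# match; values are the ones from the log above.
FS_DEFAULT="hdfs://HQxTVL-HADOOP-A01.site:9000"
BAD_ROOTDIR="hdfs://10.232.56.78:9000/hbase"
GOOD_ROOTDIR="hdfs://HQxTVL-HADOOP-A01.site:9000/hbase"
check() {
  case "$1" in
    "$FS_DEFAULT"/*) echo "ok: $1" ;;
    *)               echo "Wrong FS: $1, expected: $FS_DEFAULT" ;;
  esac
}
check "$BAD_ROOTDIR"
check "$GOOD_ROOTDIR"
```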
3) HMaster would not start. The log shows the failure in getVersion: the hbase.version file was corrupted, so the HBase directory in HDFS had to be removed and hbase.version recreated.
2010-12-16 15:20:10,627 INFO org.apache.hadoop.hbase.master.HMaster: vmName=Java HotSpot(TM) Server VM, vmVendor=Sun Microsystems Inc., vmVersion=17.1-b03
2010-12-16 15:20:10,628 INFO org.apache.hadoop.hbase.master.HMaster: vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -Dhbase.log.dir=/hadoop/hbase-0.20.6/logs, -Dhbase.log.file=hbase-hadoop-master-HQxTVL-HADOOP-A01.log, -Dhbase.home.dir=/hadoop/hbase-0.20.6, -Dhbase.id.str=hadoop, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/hadoop/hbase-0.20.6/lib/native/Linux-i386-32]
2010-12-16 15:20:10,685 INFO org.apache.hadoop.hbase.master.HMaster: My address is HQxTVL-HADOOP-A01.site:60000
2010-12-16 15:20:10,969 FATAL org.apache.hadoop.hbase.master.HMaster: Not starting HMaster because:
java.io.EOFException
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:323)
at java.io.DataInputStream.readUTF(DataInputStream.java:572)
at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:186)
at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:205)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:208)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1233)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
2010-12-16 15:20:10,971 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
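The EOFException fires because FSUtils.getVersion reads /hbase/hbase.version with DataInputStream.readUTF, which cannot handle an empty or truncated file. A sketch of the condition, emulated on a scratch file; the real (destructive!) recovery is to remove the HBase root in HDFS and let the master recreate it on the next start:

```shell
# Simulate a truncated hbase.version and detect it by zero size.
VFILE=$(mktemp)
: > "$VFILE"            # truncate, as if the write was interrupted
if [ ! -s "$VFILE" ]; then
  echo "hbase.version is empty/truncated -- recreate the HBase root:"
  echo "  hadoop fs -rmr /hbase   # WARNING: deletes all HBase data"
  echo "  start-hbase.sh          # master rewrites /hbase/hbase.version"
fi
```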