Parallel map processing of LZO files in Hadoop

After LZO is enabled on a Hadoop cluster, some additional configuration is still needed before the cluster can run parallel map tasks over a single LZO file and thereby speed up job execution.

First, an index must be created for each LZO file. The following command creates indexes for the LZO files in a given directory:

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/lib/hadoop-lzo-0.4.10.jar com.hadoop.compression.lzo.LzoIndexer /log/source/cd/

Creating the index with this command takes some time; for one of my files, 7.5 GB in size, it took about 2 minutes 30 seconds. There is also an alternative indexer, com.hadoop.compression.lzo.DistributedLzoIndexer. Both options are described at https://github.com/kevinweil/hadoop-lzo; the explanation there of the difference was not entirely clear to me, but the latter runs the indexing as a MapReduce job rather than in-process, it reduced the time spent creating the index, and it had no effect on how later MapReduce jobs executed. It is invoked as follows:

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/lib/hadoop-lzo-0.4.10.jar com.hadoop.compression.lzo.DistributedLzoIndexer /log/source/cd/    
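As a quick sanity check (this step is not in the original write-up), the directory can be listed to confirm that the indexer produced an index alongside each data file; the file naming below is illustrative:

# each data file foo.lzo should now have a matching foo.lzo.index next to it
$HADOOP_HOME/bin/hadoop fs -ls /log/source/cd/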

Next, when creating a table in Hive, the INPUTFORMAT and OUTPUTFORMAT must be specified explicitly; otherwise the cluster still will not run parallel map tasks over the LZO data. Include the following clause in the CREATE TABLE statement (for an existing table, the same formats can be applied with ALTER TABLE ... SET FILEFORMAT):

STORED AS
INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"

After these two steps, the improvement in Hive execution speed is quite noticeable. In our test, a moderately complex Hive query over a 7.5 GB LZO file took only 34 seconds with the configuration above, compared with 180 seconds before.

README.md
Hadoop-LZO
Hadoop-LZO is a project to bring splittable LZO compression to Hadoop. LZO is an ideal compression format for Hadoop due to its combination of speed and compression size. However, LZO files are not natively splittable, meaning the parallelism that is the core of Hadoop is gone. This project re-enables that parallelism with LZO compressed files, and also comes with standard utilities (input/output streams, etc) for working with LZO files.

Origins
This project builds off the great work done in the hadoop-gpl-compression project. As of issue 41, the differences in this codebase are the following.

- It fixes a few bugs in hadoop-gpl-compression -- notably, it allows the decompressor to read small or uncompressible lzo files, fixes the compressor to follow the lzo standard when compressing small or uncompressible chunks, and fixes a number of inconsistently caught and thrown exception cases that can occur when the lzo writer gets killed mid-stream, plus some other smaller issues (see the commit log).
- It adds the ability to work with Hadoop streaming via the com.hadoop.mapred.DeprecatedLzoTextInputFormat class (see the sketch after this list).
- It adds an easier way to index lzo files (com.hadoop.compression.lzo.LzoIndexer).
- It adds an even easier way to index lzo files in a distributed manner (com.hadoop.compression.lzo.DistributedLzoIndexer).
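For example, a Hadoop streaming job can select this input format explicitly. The sketch below is illustrative rather than taken from the README; the streaming jar location, mapper, reducer, and paths are placeholders to adapt to your installation:

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    -inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat \
    -input /log/source/cd/ \
    -output /tmp/lzo_streaming_out \
    -mapper /bin/cat \
    -reducer /usr/bin/wc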
Hadoop and LZO, Together at Last
LZO is a wonderful compression scheme to use with Hadoop because it's incredibly fast, and (with a bit of work) it's splittable. Gzip is decently fast, but cannot take advantage of Hadoop's natural map splits because it's impossible to start decompressing a gzip stream starting at a random offset in the file. LZO's block format makes it possible to start decompressing at certain specific offsets of the file -- those that start new LZO block boundaries. In addition to providing LZO decompression support, these classes provide an in-process indexer (com.hadoop.compression.lzo.LzoIndexer) and a map-reduce style indexer which will read a set of LZO files and output the offsets of LZO block boundaries that occur near the natural Hadoop block boundaries. This enables a large LZO file to be split into multiple mappers and processed in parallel. Because it is compressed, less data is read off disk, minimizing the number of IOPS required. And LZO decompression is so fast that the CPU stays ahead of the disk read, so there is no performance impact from having to decompress data as it's read off disk.

Building and Configuring
To get started, see the hadoop-gpl-compression project's FAQ. This project is built exactly the same way; please follow the answer to "How do I configure Hadoop to use these classes?" on that page.
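That answer is not reproduced here, but the registration typically amounts to adding the LZO codecs to core-site.xml along the lines of the sketch below; the property values assume the com.hadoop.compression.lzo classes shipped by this project and may need adjusting for your Hadoop version:

<!-- core-site.xml: register the LZO codecs (illustrative sketch) -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
</property>
<property>
  <name>io.compression.codec.lzo.class</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>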

You can read more about Hadoop, LZO, and how we're using it at Twitter.

Once the libs are built and installed, you may want to add them to the class paths and library paths. That is, in hadoop-env.sh, set

    export HADOOP_CLASSPATH=/path/to/your/hadoop-lzo-lib.jar
    export JAVA_LIBRARY_PATH=/path/to/hadoop-lzo-native-libs:/path/to/standard-hadoop-native-libs
Note that there seems to be a bug in /path/to/hadoop/bin/hadoop; comment out the line there that resets JAVA_LIBRARY_PATH, since it can clobber the value set in hadoop-env.sh above.
