Setting Up a Hadoop MapReduce Development Environment in Eclipse

Eclipse version: MyEclipse 6.5.1

Hadoop version: hadoop-1.2.1

1. After installing MyEclipse, create a Java project

File -> New -> Java Project

Enter a project name and confirm.

image001.png



2. Import all of the Hadoop jars

Extract hadoop-1.2.1.tar to E:\software\share\hadoop-1.2.1.

Import all of the jar files under E:\software\share\hadoop-1.2.1 and E:\software\share\hadoop-1.2.1\lib into the project.

The steps are as follows:

Right-click the project root -> Properties -> Java Build Path -> Libraries -> Add External JARs

image003.png
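As a sanity check (not part of the original steps), a short snippet can list the jars that need importing. The directory below is the extraction path used in this article; adjust it to your own layout:

```java
import java.io.File;
import java.io.FilenameFilter;

public class ListHadoopJars {
    public static void main(String[] args) {
        // Directory from this article's setup; change to match your machine
        File libDir = new File("E:\\software\\share\\hadoop-1.2.1\\lib");
        // Keep only files ending in .jar (anonymous class, Java 5/6 compatible)
        File[] jars = libDir.listFiles(new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.endsWith(".jar");
            }
        });
        if (jars == null) {
            System.out.println("Directory not found: " + libDir);
            return;
        }
        for (File jar : jars) {
            System.out.println(jar.getName());
        }
    }
}
```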



3. Make sure the JRE is version 6.0 or later

MyEclipse 6.5.1 defaults to JRE 5.0. Because hadoop-1.2.1 requires JRE 6.0 or later, running a program fails with:

Bad version number in .class file (unable to load class ***)

To change the JRE version:

Window -> Preferences -> Java -> Installed JREs -> Add

image005.png
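To confirm which JRE a run configuration actually uses, you can print the version properties from a small program (an illustrative snippet, not from the original article):

```java
public class JreCheck {
    public static void main(String[] args) {
        // hadoop-1.2.1 needs Java 6+, i.e. class file format 50.0 or newer;
        // an older JRE here explains the "Bad version number" error above.
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.class.version = " + System.getProperty("java.class.version"));
    }
}
```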



4. Modify FileUtil.java

When I then created a test WordCount MapReduce program, I ran into the following problem:

13/12/13 22:58:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

13/12/13 22:58:49 ERROR security.UserGroupInformation: PriviledgedActionException as:licz cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-licz\mapred\staging\licz1853164772\.staging to 0700

Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-licz\mapred\staging\licz1853164772\.staging to

......

Solution:

Edit the file E:\software\share\hadoop-1.2.1\src\core\org\apache\hadoop\fs\FileUtil.java

Comment out the body of checkReturnValue (around lines 685-693):

  private static void checkReturnValue(boolean rv, File p,
                                       FsPermission permission
                                       ) throws IOException {
    /*if (!rv) {
      throw new IOException("Failed to set permissions of path: " + p +
                            " to " +
                            String.format("%04o", permission.toShort()));
    }*/
  }

Then create a new package org.apache.hadoop.fs under Mapreduce1/src and copy the modified FileUtil.java into it (you can paste it directly in Eclipse).

image007.png
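This works because a class in the project's own source folder takes precedence over the same class inside hadoop-core-1.2.1.jar on the build path. If in doubt, you can check where a class was really loaded from with getResource. The snippet below uses java.lang.String as a stand-in, since Hadoop is not on this snippet's classpath; in your project, substitute "org/apache/hadoop/fs/FileUtil.class":

```java
public class WhichClass {
    public static void main(String[] args) {
        // Stand-in resource name; use "org/apache/hadoop/fs/FileUtil.class"
        // inside the Eclipse project to see whether the patched copy wins.
        String name = "java/lang/String.class";
        // Returns a URL pointing at the jar, directory, or runtime image
        // the class file would be loaded from.
        java.net.URL where = ClassLoader.getSystemResource(name);
        System.out.println(name + " loaded from: " + where);
    }
}
```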



After that, the WordCount.java program compiles with no errors:

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {

    public static class WordCountMapper extends MapReduceBase implements Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                output.collect(word, one);
            }
        }
    }

    // (The original listing was truncated after the mapper; the reducer and
    // driver below follow the standard WordCount example for the old mapred API.)
    public static class WordCountReducer extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            result.set(sum);
            output.collect(key, result);
        }
    }

    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(WordCountMapper.class);
        conf.setReducerClass(WordCountReducer.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
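To see what the mapper and reducer compute without a cluster, here is a minimal local sketch of the same word-count logic (plain JDK, no Hadoop dependency; WordCountLocal is an illustrative name, not part of the Hadoop API):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class WordCountLocal {
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new LinkedHashMap<String, Integer>();
        // Same tokenization as the mapper: whitespace-delimited tokens
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            String word = itr.nextToken();
            Integer prev = counts.get(word);
            // The "reduce" step: sum the 1s emitted for each occurrence
            counts.put(word, prev == null ? 1 : prev + 1);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("hello hadoop hello world"));
    }
}
```

The real job does the same thing, except the framework groups the (word, 1) pairs by key between the map and reduce phases.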

Copyright notice: unless stated otherwise, this is an original article of this site.

Reprints must credit the source: http://www.heiqu.com/a979dfd84dff0f337531c41fcb074fe7.html