First, extract Scala; this walkthrough uses version scala-2.11.1:
[hadoop@centos software]$ tar -xzvf scala-2.11.1.tgz
[hadoop@centos software]$ su -
[root@centos ~]# vi /etc/profile
Add the following lines:
SCALA_HOME=/home/hadoop/software/scala-2.11.1
PATH=$PATH:$SCALA_HOME/bin
export SCALA_HOME PATH
[root@centos ~]# source /etc/profile
[root@centos ~]# scala -version
Scala code runner version 2.11.1 -- Copyright 2002-2013, LAMP/EPFL
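As an extra sanity check (not in the original writeup), the scala command on the updated PATH should drop straight into a working REPL:
[hadoop@centos software]$ scala
scala> 1 + 1
res0: Int = 2
scala> :quit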
Next, extract Spark; this walkthrough uses spark-1.0.0-bin-hadoop1.tgz, the build for Hadoop 1, running against Hadoop 1.0.4:
[hadoop@centos software]$ tar -xzvf spark-1.0.0-bin-hadoop1.tgz
Go into Spark's conf directory and create spark-env.sh from the template:
[hadoop@centos conf]$ cp spark-env.sh.template spark-env.sh
[hadoop@centos conf]$ vi spark-env.sh
Add the following lines:
export SCALA_HOME=/home/hadoop/software/scala-2.11.1
export SPARK_MASTER_IP=centos.host1
export SPARK_WORKER_MEMORY=5G
export JAVA_HOME=/usr/software/jdk
Start the master:
[hadoop@centos spark-1.0.0-bin-hadoop1]$ sbin/start-master.sh
The master web UI is then available at http://centos.host1:8080/.
[hadoop@centos spark-1.0.0-bin-hadoop1]$ sbin/start-slaves.sh spark://centos.host1:7077
The worker web UI is available at http://centos.host1:8081/.
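Both daemons run as ordinary JVM processes, so as a quick check (this step is not in the original writeup) the JDK's jps tool should now list a Master and a Worker:
[hadoop@centos spark-1.0.0-bin-hadoop1]$ jps
The PIDs will differ; what matters is that Master and Worker both appear in the output.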
Now run a first example on Spark: a WordCount that reads its input from Hadoop.
First upload a word.txt file to HDFS; here the path is hdfs://centos.host1:9000/user/hadoop/data/wordcount/001/word.txt.
Enter interactive mode (note that spark-shell reads the master URL from the uppercase MASTER environment variable):
[hadoop@centos spark-1.0.0-bin-hadoop1]$ MASTER=spark://centos.host1:7077 ./bin/spark-shell
scala> val file = sc.textFile("hdfs://centos.host1:9000/user/hadoop/data/wordcount/001/word.txt")
scala> val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
scala> count.collect()
The console shows a result like this:
res0: Array[(String, Int)] = Array((hive,2), (zookeeper,1), (pig,1), (spark,1), (hadoop,4), (hbase,2))
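To scan the result more easily, the pairs can be sorted by count before collecting; here is a small sketch using only core RDD operations (this step is not in the original walkthrough):
scala> count.map(_.swap).sortByKey(ascending = false).map(_.swap).collect()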
The result can also be saved back to HDFS. Note that saveAsTextFile treats the given path as a directory and writes part files into it, so result.txt below ends up as a directory, not a single file:
scala>count.saveAsTextFile("hdfs://centos.host1:9000/user/hadoop/data/wordcount/001/result.txt")
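The same job can also run as a standalone application instead of in the shell. The following is a minimal Scala sketch, not part of the original walkthrough; the object name ScalaWordCount and the jar name wordcount.jar are placeholders, and the jar must be built against the Scala version that matches the Spark build:

import org.apache.spark.{SparkConf, SparkContext}

object ScalaWordCount {
  def main(args: Array[String]) {
    // Connect to the standalone master started above
    val conf = new SparkConf().setAppName("ScalaWordCount").setMaster("spark://centos.host1:7077")
    val sc = new SparkContext(conf)
    // Same pipeline as in the shell session
    val count = sc.textFile(args(0)).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    count.collect().foreach(println)
    sc.stop()
  }
}

Once packaged, it can be submitted with the spark-submit script that ships with Spark 1.0:
[hadoop@centos spark-1.0.0-bin-hadoop1]$ bin/spark-submit --class ScalaWordCount wordcount.jar hdfs://centos.host1:9000/user/hadoop/data/wordcount/001/word.txt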
Next, look at how to run the Java version of WordCount.
This needs one jar on the classpath: spark-assembly-1.0.0-hadoop1.0.4.jar.
The WordCount code is as follows:
import java.util.Arrays;
import java.util.regex.Pattern;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class WordCount {
    private static final Pattern SPACE = Pattern.compile(" ");

    @SuppressWarnings("serial")
    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.err.println("Usage: WordCount <file>");
            System.exit(1);
        }

        SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        JavaRDD<String> lines = ctx.textFile(args[0], 1);

        // Split each line on spaces to get individual words
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterable<String> call(String s) {
                return Arrays.asList(SPACE.split(s));
            }
        });

        // Pair each word with an initial count of 1
        JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Sum the counts for each distinct word
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer i1, Integer i2) {
                return i1 + i2;
            }
        });