Hadoop Pseudo-Distributed Setup on CentOS 6.3 (2)

[hadoop@root@linuxidc.com ~]$ ssh localhost
ssh: connect to host localhost port 22: Connection refused
[hadoop@root@linuxidc.com ~]$ ssh -p322 localhost
Last login: Tue Nov 1 20:40:23 2016 from 10.1.2.108

If you see the following error:

-bash: /usr/bin/ssh: Permission denied

then run the following command as root:

chmod a+x /usr/bin/ssh
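
Because sshd here listens on port 322 rather than the default 22, plain "ssh localhost" will keep failing. Optionally, you can record the port in the hadoop user's SSH client config so the -p flag is no longer needed. A minimal sketch, assuming port 322 as used above:

# As the hadoop user: make "ssh localhost" default to port 322
cat >> ~/.ssh/config <<'EOF'
Host localhost
    Port 322
EOF
chmod 600 ~/.ssh/config    # ssh rejects a config file that is writable by other users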

  5. Installing Hadoop

Since the package I downloaded is a pre-built (already compiled) release, it can be extracted and run directly.

[root@root@linuxidc.com tar]# cd /home/hadoop/tar/
[root@root@linuxidc.com tar]# tar -xzf hadoop-2.7.3.tar.gz
[root@root@linuxidc.com tar]# chown -R hadoop: /home/hadoop/
[root@root@linuxidc.com tar]# mv hadoop-2.7.3 /usr/local/hadoop
[root@root@linuxidc.com tar]# chown -R hadoop: /usr/local/hadoop/

[hadoop@root@linuxidc.com ~]$ cd /usr/local/hadoop/
[hadoop@root@linuxidc.com hadoop]$ ./bin/hadoop version
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.3.
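
Optionally, you can also put the Hadoop commands on the hadoop user's PATH so they can be run from any directory. A minimal sketch, assuming the install location above; adding these lines to ~/.bashrc is an optional convenience, not required by the steps that follow:

# As the hadoop user
cat >> ~/.bashrc <<'EOF'
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
source ~/.bashrc
hadoop version    # should print the same version information as above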

Running a hadoop command at this point will usually produce the error "Error: JAVA_HOME is not set and could not be found." That happens because the Hadoop configuration files have not been set up yet. Edit /usr/local/hadoop/etc/hadoop/hadoop-env.sh to point it at the Java environment and to configure the SSH port:

export JAVA_HOME=/usr/local/java    # originally ${JAVA_HOME}
export HADOOP_SSH_OPTS="-p 322"     # add this line at the end
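
If you prefer to make these changes from the shell instead of a text editor, the same two edits can be applied with sed and echo. A minimal sketch, assuming the JDK lives at /usr/local/java as configured earlier in this series:

cd /usr/local/hadoop
# Point JAVA_HOME at the real JDK path instead of the ${JAVA_HOME} placeholder
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/java|' etc/hadoop/hadoop-env.sh
# Tell Hadoop's ssh-based scripts to use the non-default port 322
echo 'export HADOOP_SSH_OPTS="-p 322"' >> etc/hadoop/hadoop-env.sh
# Verify both settings took effect
grep -E 'JAVA_HOME=|HADOOP_SSH_OPTS' etc/hadoop/hadoop-env.sh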

  6. Running Hadoop in Standalone Mode

Hadoop runs in standalone (local) mode by default, as a single Java process, which is convenient for debugging. We can use the bundled examples to check whether Hadoop was installed correctly; running the command below lists all of the examples that ship with Hadoop.

If you see the message "WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable", this WARN can be safely ignored and does not affect normal Hadoop operation (it can be eliminated by compiling Hadoop from source).

[hadoop@root@linuxidc.com hadoop]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar
An example program must be given as the first argument. Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.

Run the commands below to test grep, the word-filtering example.
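
A sketch of a typical run, using the input/output directory names and the regular expression from the official Hadoop single-node setup guide (these specifics are assumptions on my part):

# In /usr/local/hadoop, as the hadoop user
mkdir input
cp etc/hadoop/*.xml input
# Count matches of the regex across the copied configuration files
./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'
# Inspect the results (the output directory must not exist before the job runs)
cat output/*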
