Chapter 5: Fully Distributed Deployment of Hadoop 3.3.1 on CentOS (5)
Benefit of log aggregation: you can easily view the details of a program run in one place, which makes development and debugging much more convenient.
Note: enabling log aggregation requires restarting the NodeManager, ResourceManager, and HistoryServer.
The steps to enable log aggregation are as follows:
1. Configure yarn-site.xml
[delopy@hadoop102 /opt/module/hadoop/etc/hadoop]$ vim yarn-site.xml
<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>Specify the shuffle auxiliary service for MapReduce</description>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop103</value>
<description>Specify the ResourceManager host</description>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
<description>Environment variables inherited by containers</description>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
<description>Enable log aggregation</description>
</property>
<property>
<name>yarn.log.server.url</name>
<value>http://hadoop102:19888/jobhistory/logs</value>
<description>URL of the log server (the JobHistoryServer on hadoop102)</description>
</property>
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
<description>Keep aggregated logs for 7 days (604800 seconds)</description>
</property>
</configuration>
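Before distributing the file, it can be worth a quick sanity check. The following is a small Python sketch (not part of the Hadoop tooling, just an illustration) that parses a Hadoop-style configuration into a dict and confirms the two log-aggregation settings above, including that 604800 seconds really is 7 days:

```python
# Sanity-check sketch for the yarn-site.xml written above: parse the
# <property> entries and verify the log-aggregation settings.
import xml.etree.ElementTree as ET

def load_props(xml_text: str) -> dict:
    """Return {name: value} for every <property> in a Hadoop config file."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

# Inline sample with just the two properties being checked.
SAMPLE = """<configuration>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>"""

props = load_props(SAMPLE)
assert props["yarn.log-aggregation-enable"] == "true"
assert int(props["yarn.log-aggregation.retain-seconds"]) == 7 * 24 * 3600
print("log aggregation config OK")
```

In a real check you would pass the contents of `/opt/module/hadoop/etc/hadoop/yarn-site.xml` to `load_props` instead of the inline sample.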
2. Distribute the configuration
[delopy@hadoop102 /opt/module/hadoop/etc/hadoop]$ xsync /opt/module/
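`xsync` is the cluster-sync helper script set up in an earlier chapter. Roughly, it rsyncs a path to the same location on every other node. A Python sketch of that idea (the host list here, including hadoop104, is an assumption based on the typical three-node layout in this guide, not something this section defines):

```python
# Rough Python equivalent of the xsync helper: rsync a path to the same
# location on every node in the cluster. Hostnames are assumptions.
import subprocess

HOSTS = ["hadoop102", "hadoop103", "hadoop104"]  # assumed cluster nodes

def xsync(path, hosts=HOSTS, run=subprocess.run):
    """Push `path` to the same location on each host; return the commands."""
    calls = []
    for host in hosts:
        cmd = ["rsync", "-av", path, f"{host}:{path}"]
        run(cmd, check=True)  # injectable runner so this can be dry-run
        calls.append(cmd)
    return calls

# Dry run: capture the commands instead of executing them.
calls = xsync("/opt/module/", run=lambda cmd, check: None)
print(len(calls), "hosts synced")
```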
3. Stop the NodeManager, ResourceManager, and HistoryServer
#1. Run on hadoop103
[delopy@hadoop103 ~]$ stop-yarn.sh
Stopping nodemanagers
Stopping resourcemanager
#2. Run on hadoop102
[delopy@hadoop102 ~]$ mapred --daemon stop historyserver
#3. Check the running Hadoop processes
[delopy@hadoop103 ~]$ jps
16681 DataNode
21466 Jps
4. Start the NodeManager, ResourceManager, and HistoryServer
#1. Run on hadoop103 to start the ResourceManager and NodeManagers
[delopy@hadoop103 ~]$ start-yarn.sh
Starting resourcemanager
Starting nodemanagers
#2. Run on hadoop102
[delopy@hadoop102 ~]$ mapred --daemon start historyserver
#3. Check the running Hadoop processes
[delopy@hadoop103 ~]$ jps
21584 ResourceManager
16681 DataNode
21849 JobHistoryServer
21692 NodeManager
21903 Jps
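The stop/start sequence above always runs in the same order on the same hosts, so it can be captured in a small helper. A hypothetical Python sketch (the real steps are just the four shell commands above; the injectable `run` parameter is only there so the sequence can be dry-run without a cluster):

```python
# Replay the restart sequence from this section: stop YARN and the
# history server, then start them again, each on its own host via ssh.
import subprocess

RESTART_PLAN = [
    ("hadoop103", "stop-yarn.sh"),
    ("hadoop102", "mapred --daemon stop historyserver"),
    ("hadoop103", "start-yarn.sh"),
    ("hadoop102", "mapred --daemon start historyserver"),
]

def restart_yarn_and_history(run=None):
    """Execute each step; return the commands issued, in order."""
    if run is None:
        run = lambda host, cmd: subprocess.run(["ssh", host, cmd], check=True)
    issued = []
    for host, cmd in RESTART_PLAN:
        run(host, cmd)
        issued.append(f"{host}: {cmd}")
    return issued

# Dry run: record the steps instead of executing them.
steps = restart_yarn_and_history(run=lambda h, c: None)
print(len(steps), "steps issued")
```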
10. Cluster Functional Test
1. Create a small test file
[delopy@hadoop102 ~]$ vim /data/software/1.txt
c 是世界上最好的语言!
java 是世界上最好的语言!
python 是世界上最好的语言!
go 是世界上最好的语言!
2. Upload the file to HDFS
[delopy@hadoop102 ~]$ hadoop fs -mkdir /input
[delopy@hadoop102 ~]$ hadoop fs -put /data/software/1.txt /input
3. Run the wordcount example
[delopy@hadoop102 ~]$ hadoop jar /opt/module/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar wordcount /input /output
2021-09-02 11:20:50,127 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at hadoop103/10.0.0.103:8032
2021-09-02 11:20:51,862 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/delopy/.staging/job_1630552034170_0001
2021-09-02 11:20:53,199 INFO input.FileInputFormat: Total input files to process : 1
2021-09-02 11:20:53,675 INFO mapreduce.JobSubmitter: number of splits:1
2021-09-02 11:20:54,787 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1630552034170_0001
2021-09-02 11:20:54,788 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-09-02 11:20:56,510 INFO conf.Configuration: resource-types.xml not found
2021-09-02 11:20:56,510 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2021-09-02 11:20:57,688 INFO impl.YarnClientImpl: Submitted application application_1630552034170_0001
2021-09-02 11:20:57,969 INFO mapreduce.Job: The url to track the job: http://hadoop103:8088/proxy/application_1630552034170_0001/
2021-09-02 11:20:57,970 INFO mapreduce.Job: Running job: job_1630552034170_0001
2021-09-02 11:21:47,852 INFO mapreduce.Job: Job job_1630552034170_0001 running in uber mode : false
2021-09-02 11:21:47,854 INFO mapreduce.Job: map 0% reduce 0%
2021-09-02 11:22:12,655 INFO mapreduce.Job: map 100% reduce 0%
2021-09-02 11:22:54,276 INFO mapreduce.Job: map 100% reduce 100%
2021-09-02 11:22:56,374 INFO mapreduce.Job: Job job_1630552034170_0001 completed successfully
2021-09-02 11:22:56,735 INFO mapreduce.Job: Counters: 54
File System Counters
FILE: Number of bytes read=84
FILE: Number of bytes written=545011
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=239
HDFS: Number of bytes written=58
HDFS: Number of read operations=8
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
HDFS: Number of bytes read erasure-coded=0
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=19108
Total time spent by all reduces in occupied slots (ms)=39504
Total time spent by all map tasks (ms)=19108
Total time spent by all reduce tasks (ms)=39504
Total vcore-milliseconds taken by all map tasks=19108
Total vcore-milliseconds taken by all reduce tasks=39504
Total megabyte-milliseconds taken by all map tasks=19566592
Total megabyte-milliseconds taken by all reduce tasks=40452096
Map-Reduce Framework
Map input records=4
Map output records=8
Map output bytes=173
Map output materialized bytes=84
Input split bytes=98
Combine input records=8
Combine output records=5
Reduce input groups=5
Reduce shuffle bytes=84
Reduce input records=5
Reduce output records=5
Spilled Records=10
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=1328
CPU time spent (ms)=4710
Physical memory (bytes) snapshot=347209728
Virtual memory (bytes) snapshot=5058207744
Total committed heap usage (bytes)=230821888
Peak Map Physical memory (bytes)=223215616
Peak Map Virtual memory (bytes)=2524590080
Peak Reduce Physical memory (bytes)=123994112
Peak Reduce Virtual memory (bytes)=2533617664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=141
File Output Format Counters
Bytes Written=58
[delopy@hadoop102 ~]$
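The counters in the log above follow directly from the contents of 1.txt. A local Python sketch of what the wordcount job computes (this is not how MapReduce runs it, just the same logic in one process): each of the 4 input lines splits into 2 tokens, giving 8 map output records, and after the combine/reduce stage 5 distinct words remain — matching Map input records=4, Map output records=8, and Reduce output records=5.

```python
# Local re-computation of the wordcount result to explain the job counters.
from collections import Counter

lines = [
    "c 是世界上最好的语言!",
    "java 是世界上最好的语言!",
    "python 是世界上最好的语言!",
    "go 是世界上最好的语言!",
]

tokens = [w for line in lines for w in line.split()]  # map phase: one record per token
counts = Counter(tokens)                              # combine/reduce: sum per word

assert len(lines) == 4    # Map input records=4
assert len(tokens) == 8   # Map output records=8
assert len(counts) == 5   # Reduce output records=5
print(counts)
```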
4. View the test results in the web UI
#1. Open the HDFS web UI (the NameNode UI, port 9870 by default in Hadoop 3.x), browse the file system, and confirm that the /output directory contains the wordcount result.