Hadoop MapReduce Task Logs: syslog Cannot Be Viewed
Symptom:
Because multiple map tasks share a single JVM (JVM reuse), only one set of log files is actually written; the other attempt directories contain just a log.index (see the configuration sketch after the listing below):
datanode01:/data/hadoop-x.x.x/logs/userlogs$ ls -R
.:
attempt_201211220735_0001_m_000000_0 attempt_201211220735_0001_m_000002_0 attempt_201211220735_0001_m_000005_0
attempt_201211220735_0001_m_000001_0 attempt_201211220735_0001_m_000003_0
./attempt_201211220735_0001_m_000000_0:
log.index
./attempt_201211220735_0001_m_000001_0:
log.index
./attempt_201211220735_0001_m_000002_0:
log.index stderr stdout syslog
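For reference, the reuse behaviour comes from mapred.job.reuse.jvm.num.tasks. Below is a minimal sketch of enabling it on a classic JobConf-based job; the class name is only a placeholder and not from the original post:

import org.apache.hadoop.mapred.JobConf;

public class JvmReuseConfigSketch {
  public static void main(String[] args) {
    JobConf conf = new JobConf(JvmReuseConfigSketch.class);
    // -1 lets an unlimited number of tasks run in the same JVM.
    // With reuse enabled, tasks sharing a JVM log into the first attempt's
    // directory, and the later attempts only get a log.index that records
    // where the real stdout/stderr/syslog live.
    conf.setNumTasksToExecutePerJvm(-1);
    // Equivalent property: conf.setInt("mapred.job.reuse.jvm.num.tasks", -1);
  }
}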
When fetching the task's logs through :50060/tasklog?attemptid=attempt_201211220735_0001_m_000000_0, the syslog part cannot be retrieved.
Cause:
1. The TaskLogServlet.doGet() method:
if (filter == null) {
  printTaskLog(response, out, attemptId, start, end, plainText,
               TaskLog.LogName.STDOUT, isCleanup);
  printTaskLog(response, out, attemptId, start, end, plainText,
               TaskLog.LogName.STDERR, isCleanup);
  // Unlike STDOUT/STDERR, SYSLOG is only printed if this existence check passes.
  if (haveTaskLog(attemptId, isCleanup, TaskLog.LogName.SYSLOG)) {
    printTaskLog(response, out, attemptId, start, end, plainText,
                 TaskLog.LogName.SYSLOG, isCleanup);
  }
  if (haveTaskLog(attemptId, isCleanup, TaskLog.LogName.DEBUGOUT)) {
    printTaskLog(response, out, attemptId, start, end, plainText,
                 TaskLog.LogName.DEBUGOUT, isCleanup);
  }
  if (haveTaskLog(attemptId, isCleanup, TaskLog.LogName.PROFILE)) {
    printTaskLog(response, out, attemptId, start, end, plainText,
                 TaskLog.LogName.PROFILE, isCleanup);
  }
} else {
  printTaskLog(response, out, attemptId, start, end, plainText, filter,
               isCleanup);
}
Adding the filter=SYSLOG parameter to the request makes syslog accessible; remove it and it fails again.
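In other words, a request like the following (hostname and port are just this cluster's TaskTracker HTTP address) takes the else branch above and prints the log directly, skipping the existence check:
http://datanode01:50060/tasklog?attemptid=attempt_201211220735_0001_m_000000_0&filter=SYSLOG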
Reading the code, the SYSLOG branch has one extra check:
haveTaskLog(attemptId, isCleanup, TaskLog.LogName.SYSLOG)
Stepping into it shows that it tests whether a syslog file physically exists under the original
attempt_201211220735_0001_m_000000_0 directory,
rather than reading the real log location from log.index and checking there. That is the bug!
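A possible direction for a fix, only as a sketch under my own assumptions rather than the actual Hadoop patch: resolve the real log directory from log.index before testing for the file. The LOG_DIR: header and the fallback behaviour are assumptions about the index layout, and haveTaskLogViaIndex is a hypothetical helper, not an existing Hadoop method:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class TaskLogIndexCheck {
  // Assumed header line written into log.index, pointing at the real log dir.
  static final String LOG_DIR_PREFIX = "LOG_DIR:";

  // Hypothetical replacement for the existence check: follow log.index to the
  // directory that really holds the logs (the first attempt's directory when
  // JVM reuse is on), then look for the requested file there.
  static boolean haveTaskLogViaIndex(File attemptLogDir, String logName)
      throws IOException {
    File index = new File(attemptLogDir, "log.index");
    if (!index.exists()) {
      // No index file: keep the old behaviour and check the attempt dir itself.
      return new File(attemptLogDir, logName).exists();
    }
    BufferedReader reader = new BufferedReader(new FileReader(index));
    try {
      String first = reader.readLine();
      if (first != null && first.startsWith(LOG_DIR_PREFIX)) {
        File realDir = new File(first.substring(LOG_DIR_PREFIX.length()).trim());
        return new File(realDir, logName).exists();
      }
      return false;
    } finally {
      reader.close();
    }
  }
}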