Flume writing to Hadoop HDFS fails with "Too many open files" (2)

[hadoop@dtydb6 logs]$ ulimit -a
core file size          (blocks, -c) 0
data seg size          (kbytes, -d) unlimited
scheduling priority            (-e) 0
file size              (blocks, -f) unlimited
pending signals                (-i) 1064960
max locked memory      (kbytes, -l) 32
max memory size        (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues    (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time              (seconds, -t) unlimited
max user processes              (-u) 1064960
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
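The only line that matters here is open files, which is capped at 1024. Note that ulimit -a reports the limits of the current shell only; the soft and hard open-file limits can also be queried directly with the -S/-H flags (not part of the original post, shown for reference, and the hard limit is system dependent):

[hadoop@dtydb6 logs]$ ulimit -Sn    # soft limit on open files, the 1024 shown above
[hadoop@dtydb6 logs]$ ulimit -Hn    # hard limit for this shell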

Yet jps and lsof show that the flume process (the Application process, PID 29828) has 2932 open files, which is strange: how can it exceed 1024?

12988 Jps
26903 JobTracker
29828 Application
26545 DataNode
27100 TaskTracker
26719 SecondaryNameNode
26374 NameNode


[root@dtydb6 ~]# lsof -p 29828|wc -l
2932

[root@dtydb6 ~]# ps -ef|grep 29828
root    13133 12914  0 14:05 pts/3    00:00:00 grep 29828
hadoop  29828    1 32 Jan22 ?        8-10:51:15 /usr/java/jdk1.7.0_07/bin/java -Xmx2048m -cp /monitor/flume-1.3/conf:/monitor/flume-1.3/lib/*:/hadoop/hadoop-1.0.4/libexec/../conf:/usr/java/jdk1.7.0_07/lib/tools.jar:/hadoop/hadoop-1.0.4/libexec/..:/hadoop/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/hadoop/hadoop-1.0.4/libexec/
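One plausible explanation for the 2932 figure, offered here as an assumption rather than something stated above: lsof -p lists more than plain file descriptors (memory-mapped .jar and .so files, the working directory, and on some kernels one entry per thread), so its line count can exceed the nofile limit without the limit actually being hit. On kernels that expose /proc/<pid>/limits, the real descriptor count and the limit the running process inherited can be checked directly:

# Count only the real file descriptors held by the flume process (PID 29828)
[root@dtydb6 ~]# ls /proc/29828/fd | wc -l

# Show the limits the running process actually inherited at start-up
[root@dtydb6 ~]# grep -i 'open files' /proc/29828/limits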

Solution:

1. Edit /etc/security/limits.conf and manually raise the nofile limit:
vi /etc/security/limits.conf
*    soft    nofile    12580
*    hard    nofile    65536
2. Restart the flume process (PID 29828). The problem is resolved (see the verification sketch below).
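A minimal verification sketch, assuming the agent is restarted from a fresh login session (the limits.conf change only applies to sessions opened after the edit); <new_pid> is a placeholder for whatever PID jps reports for the restarted Application process:

# A new login session should pick up the raised soft limit (12580)
[hadoop@dtydb6 ~]$ ulimit -n

# After restarting the flume agent, confirm the limit the new process inherited
[hadoop@dtydb6 ~]$ grep -i 'open files' /proc/<new_pid>/limits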
