In the early days of a project everyone is rushing to ship, so logging rarely gets much thought, and the volume is small anyway, which means log4net is enough. But as the applications multiply, logs end up scattered across the logs folder of every server, and that gets inconvenient. The next idea was to configure a MySQL data source in log4net, but there is a trap here: as anyone familiar with log4net knows, writing to MySQL goes through a batch threshold, e.g. the appender only flushes once its buffer holds 100 entries. That means the logs arrive with a delay, and while the buffer holds fewer than 100 entries, the newest records are simply not visible in MySQL yet. On top of that, a centralized MySQL means every write crosses TCP, with the performance cost you would expect, and MySQL has no decent log UI, so you would have to build one yourself. So the search for a better solution continued, and it ended at the subject of this post: ELK.
I: What ELK Stands For
ELK is ElasticSearch + Logstash + Kibana, and the three make a genuinely good combination. Their division of labor:
1. Logstash
It can be deployed out to each server to collect the local logs, and after parsing them it ships the events into ES through its built-in elasticsearch output plugin.
2. ElasticSearch
A Lucene-based distributed full-text search engine that gives the logs distributed storage, a bit like HDFS.
3. Kibana
Once all the logs have landed in Elasticsearch, we still need to see them, right? That is where Kibana steps in: it can slice and visualize the data in ES along multiple dimensions, which also fixes the visualization problem that storing logs in MySQL left us with.
II: Quick Setup
The above was just terminology. For this demo, I set everything up on a single CentOS machine.
1. Official downloads: https://www.elastic.co/cn/products — find the three products on that page and download them.
[root@slave1 myapp]# ls
elasticsearch kafka_2.11-1.0.0.tgz nginx-1.13.6.tar.gz
elasticsearch-5.6.4.tar.gz kibana node
elasticsearch-head kibana-5.2.0-linux-x86_64.tar.gz node-v8.9.1-linux-x64.tar.xz
images logstash portal
Java logstash-5.6.3.tar.gz service
jdk1.8 logstash-tutorial-dataset sql
jdk-8u144-linux-x64.tar.gz nginx
kafka nginx-1.13.6
[root@slave1 myapp]#
Here I downloaded elasticsearch 5.6.4, kibana 5.2.0 and logstash 5.6.3, then unpacked each with tar -xzvf.
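Assuming the tarballs were downloaded into /usr/myapp as in the listing above, unpacking and renaming them looks roughly like this (the short directory names are just my own convention):
cd /usr/myapp
tar -xzvf elasticsearch-5.6.4.tar.gz
tar -xzvf logstash-5.6.3.tar.gz
tar -xzvf kibana-5.2.0-linux-x86_64.tar.gz
mv elasticsearch-5.6.4 elasticsearch
mv logstash-5.6.3 logstash
mv kibana-5.2.0-linux-x86_64 kibana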
2. Logstash configuration
After unpacking, create a new logstash.conf in the config directory.
[root@slave1 config]# ls
jvm.options log4j2.properties logstash.conf logstash.yml startup.options
[root@slave1 config]# pwd
/usr/myapp/logstash/config
[root@slave1 config]# vim logstash.conf
Then fill in the three blocks: input, filter and output. Here input picks up every .log file under the /logs folder, filter is a processing stage that we leave unconfigured for now (a sample grok filter is sketched after the config), and output writes the events into the Elasticsearch at 127.0.0.1:9200, with one index per day.
input {
  file {
    type => "log"
    path => "/logs/*.log"
    start_position => "beginning"
  }
}
output {
  stdout {
    codec => rubydebug { }
  }
  elasticsearch {
    hosts => "127.0.0.1"
    index => "log-%{+YYYY.MM.dd}"
  }
}
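For reference, if you later want the filter stage to parse raw lines into structured fields rather than shipping them verbatim, a grok filter is the usual tool. A minimal sketch (the log format and field names below are made up for illustration, not taken from this setup):
filter {
  grok {
    # Illustrative only: parses lines like "2017-11-28 17:11:53 ERROR something broke"
    match => { "message" => "%{TIMESTAMP_ISO8601:logtime} %{LOGLEVEL:level} %{GREEDYDATA:content}" }
  }
}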
Once the config is in place we can start Logstash from the bin directory, pointing it at ../config/logstash.conf. As the startup log below shows, it listens on port 9600.
[root@slave1 bin]# ls
cpdump logstash logstash.lib.sh logstash-plugin.bat setup.bat
ingest-convert.sh logstash.bat logstash-plugin ruby system-install
[root@slave1 bin]# ./logstash -f ../config/logstash.conf
Sending Logstash's logs to /usr/myapp/logstash/logs which is now configured via log4j2.properties
[2017-11-28T17:11:53,411][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/myapp/logstash/modules/fb_apache/configuration"}
[2017-11-28T17:11:53,414][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/myapp/logstash/modules/netflow/configuration"}
[2017-11-28T17:11:54,063][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[:9200/]}}
[2017-11-28T17:11:54,066][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>:9200/, :path=>"/"}
[2017-11-28T17:11:54,199][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2017-11-28T17:11:54,244][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-11-28T17:11:54,247][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-11-28T17:11:54,265][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1"]}
[2017-11-28T17:11:54,266][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-11-28T17:11:54,427][INFO ][logstash.pipeline ] Pipeline main started
[2017-11-28T17:11:54,493][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
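To confirm everything is wired up, you can hit Logstash's monitoring API on port 9600 and ask Elasticsearch for its indices; once a line from /logs has been picked up, a log-YYYY.MM.dd index should appear:
curl 127.0.0.1:9600
curl 127.0.0.1:9200/_cat/indices?v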
3. Elasticsearch
This is really the core of ELK. Be careful when starting it: ES refuses to run under the root account, so you first need to create a separate user, e.g. elsearch.
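A rough sketch of those steps, assuming the same /usr/myapp layout as in the listing from step 1:
groupadd elsearch
useradd -g elsearch elsearch
chown -R elsearch:elsearch /usr/myapp/elasticsearch
su elsearch
cd /usr/myapp/elasticsearch
./bin/elasticsearch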