[2016-08-02 17:47:23,285][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
[2016-08-02 17:47:23,579][INFO ][node ] [node0] version[2.3.4], pid[21176], build[e455fd0/2016-06-30T11:24:31Z]
[2016-08-02 17:47:23,586][INFO ][node ] [node0] initializing ...
[2016-08-02 17:47:24,213][INFO ][plugins ] [node0] modules [reindex, lang-expression, lang-groovy], plugins [head], sites [head]
[2016-08-02 17:47:24,235][INFO ][env ] [node0] using [1] data paths, mounts [[/home (/dev/mapper/vg_dbmlslave1-lv_home)]], net usable_space [542.1gb], net total_space [1017.2gb], spins? [possibly], types [ext4]
[2016-08-02 17:47:24,235][INFO ][env ] [node0] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-08-02 17:47:24,235][WARN ][env ] [node0] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-08-02 17:47:25,828][INFO ][node ] [node0] initialized
[2016-08-02 17:47:25,828][INFO ][node ] [node0] starting ...
[2016-08-02 17:47:25,939][INFO ][transport ] [node0] publish_address {192.168.121.62:9300}, bound_addresses {192.168.121.62:9300}
[2016-08-02 17:47:25,944][INFO ][discovery ] [node0] es_cluster/626_Pu5sQzy96m7P0EaU4g
[2016-08-02 17:47:29,028][INFO ][cluster.service ] [node0] new_master {node0}{626_Pu5sQzy96m7P0EaU4g}{192.168.121.62}{192.168.121.62:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-08-02 17:47:29,116][INFO ][http ] [node0] publish_address {192.168.121.62:9200}, bound_addresses {192.168.121.62:9200}
[2016-08-02 17:47:29,117][INFO ][node ] [node0] started
[2016-08-02 17:47:29,149][INFO ][gateway ] [node0] recovered [0] indices into cluster_state
[elk@hch_test_dbm1_121_62 elasticsearch-2.3.4]$
Open http://192.168.121.62:9200/ in a browser (figure: E:\u\elk\pic\01_3.png).
The response shows the configured cluster_name, the node name, and the version of the installed software. The head plugin installed earlier is a browser-based tool for interacting with the ES cluster: it can display cluster state, browse the documents in the cluster, run searches, and issue plain REST requests. Its web UI is available at http://192.168.121.62:9200/_plugin/head/, as shown in figure E:\u\elk\pic\01_4.png:
As the UI shows, the Elasticsearch cluster currently contains no index and no type, so everything is empty.
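The same checks can also be made from the command line with curl (a quick sketch; the host and port come from the startup log above):
# Cluster name, node name and version information (same as opening port 9200 in the browser)
curl http://192.168.121.62:9200/
# List all indices; at this point the listing is empty
curl http://192.168.121.62:9200/_cat/indices?v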
4.3. Installing Logstash
Logstash is essentially just a collector: it needs to be given an input and an output (and both may be multiple). Since we want to send the Log4j logs produced by our Java code into Elasticsearch, the input here is Log4j and the output is Elasticsearch.
The architecture is shown in figure E:\u\elk\pic\02.png:
Installation and configuration:
# Extract the archive
tar -xvf logstash-2.3.4.tar.gz
cd logstash-2.3.4
# Keep the configuration file in a config directory
mkdir config
vim config/log4j_to_es.conf
# For the detailed structure of this file,
# see: https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html
input {
  # For the detailed configuration of the log4j input,
  # see: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-log4j.html
  log4j {
    mode => "server"
    host => "192.168.121.62"
    port => 4567
  }
}
filter {
  # Only matched data is sent to the output.
}
output {
  # For the detailed configuration of the elasticsearch output,
  # see: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
  elasticsearch {
    action => "index"                # The operation to perform on ES
    hosts  => "192.168.121.62:9200"  # Elasticsearch host(s); can be an array
    index  => "applog"               # The index to write data to
  }
}
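Before starting Logstash, the configuration can be syntax-checked first (a quick sketch; --configtest is the validation flag in Logstash 2.x and only parses the file without starting a pipeline):
# Validate config/log4j_to_es.conf and exit
./bin/logstash agent -f config/log4j_to_es.conf --configtest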
Start Logstash with two arguments: agent and the configuration file:
[elk@hch_test_dbm1_121_62 logstash-2.3.4]$ ./bin/logstash agent -f config/log4j_to_es.conf
Settings: Default pipeline workers: 32
log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAuthCache).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See #noconfig for more info.
Pipeline main started
Next, Logstash can be used to collect logs and store them in ES; a small Java program is used to demonstrate this.
4.4. The elk3 test project
The project is developed in Eclipse; its rough layout is shown in figure E:\u\elk\pic\04.png: one Java class Application.java, one logging configuration file log4j.properties, and one Maven build file pom.xml:
(1) Application.java
package com.demo.elk;

import org.apache.log4j.Logger;

public class Application {

    private static final Logger LOGGER = Logger.getLogger(Application.class);

    public Application() {
        // TODO Auto-generated constructor stub
    }

    public static void main(String[] args) {
        // Emit ten log events, one every 500 ms
        for (int i = 0; i < 10; i++) {
            LOGGER.error("Info log [" + i + "].");
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
}
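For these log events to reach Logstash, log4j has to be configured with a SocketAppender that points at the log4j input defined above (host 192.168.121.62, port 4567). A minimal sketch of such a log4j.properties follows; the appender name "socket" is arbitrary and the project's actual file may differ:
# Log to the local console and to the remote Logstash log4j input
log4j.rootLogger=INFO,console,socket
# Local console output
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
# SocketAppender ships serialized log4j events to Logstash (server mode, port 4567)
log4j.appender.socket=org.apache.log4j.net.SocketAppender
log4j.appender.socket.RemoteHost=192.168.121.62
log4j.appender.socket.Port=4567
log4j.appender.socket.ReconnectionDelay=10000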
(2) pom.xml: