When the first node starts, it uses multicast to discover other nodes; when it finds nodes with the same cluster name, it joins the cluster automatically. You can connect to any node, not only the master: the node you connect to merely aggregates cluster information and presents it.
The number of shards can be chosen when an index is created, but once set it cannot be changed. If both a primary shard and all of its replicas are lost, the data is lost and cannot be recovered, so indexes that are no longer useful can simply be deleted. Old or rarely used indexes should be deleted periodically; otherwise Elasticsearch runs low on resources, disk usage grows, and searches slow down. If you do not want to delete certain indexes yet, you can close them in the plugin so they no longer occupy memory.
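Deleting and closing indexes can also be done directly against the Elasticsearch REST API instead of through a plugin — a sketch, where the index name old-index-2016.10.01 is an assumption and 192.168.3.17:9200 is the host used elsewhere in these notes:

```shell
# Delete an index that is no longer needed (frees disk space; irreversible)
curl -XDELETE 'http://192.168.3.17:9200/old-index-2016.10.01'

# Close an index instead: it stops using memory but stays on disk
# and can be reopened later with _open
curl -XPOST 'http://192.168.3.17:9200/old-index-2016.10.01/_close'
```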
5. Configuring Logstash
5.1 Learning Logstash step by step
Start a Logstash instance. -e: run the configuration given on the command line. input is where events enter, and stdin (standard input) is one input plugin; output is where events leave, and stdout (standard output) is one output plugin.
# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
Settings: Default pipeline workers: 2
Pipeline main started
chuck ==> input
2016-10-28T03:10:52.276Z node1.chinasoft.com chuck ==> output
==> input (an empty line)
2016-10-28T03:11:03.169Z node1.chinasoft.com ==> output
Display verbose output with the rubydebug codec (a codec is an encoder/decoder)
# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
Settings: Default pipeline workers: 2
Pipeline main started
chunck ==> input
{
"message" => "chunck",
"@version" => "1",
"@timestamp" => "2016-10-28T03:15:02.824Z",
"host" => "node1.chinasoft.com"
} ==> output rendered by rubydebug
Each piece of output above is called an event. Multiple consecutive lines that belong together can also be merged into a single event (for example, the consecutive lines of one multi-line log entry can be treated as one event).
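Merging consecutive lines into one event is usually done with the multiline codec — a minimal sketch, assuming continuation lines start with whitespace (adjust the pattern to your log format):

```
input {
  stdin {
    codec => multiline {
      pattern => "^\s"     # lines starting with whitespace...
      what => "previous"   # ...are appended to the previous event
    }
  }
}
output { stdout { codec => rubydebug } }
```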
Write events to Elasticsearch with Logstash
# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.3.17:9200"] } }'
Settings: Default pipeline workers: 2
Pipeline main started
jack
chunck
Write a copy to Elasticsearch and output another copy locally at the same time, i.e. keep a local plain text file, so there is no need to periodically back Elasticsearch up to a remote site. Keeping a plain text copy has three big advantages: 1) text is the simplest format; 2) text can be reprocessed later; 3) text has the best compression ratio.
# /opt/logstash/bin/logstash -e 'input { stdin{} } output {elasticsearch {hosts => ["192.168.3.17:9200"] } stdout{ codec => rubydebug } }'
Settings: Default pipeline workers: 2
Pipeline main started
{
"message" => "www.baidu.com",
"@version" => "1",
"@timestamp" => "2016-10-28T03:26:18.736Z",
"host" => "node1.chinasoft.com"
}
{
"message" => "www.elastic.co",
"@version" => "1",
"@timestamp" => "2016-10-28T03:26:32.609Z",
"host" => "node1.chinasoft.com"
}
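The command above only echoes the second copy to the terminal; to actually keep a local text copy alongside Elasticsearch, the file output plugin can be used — a sketch, where the path is an assumption:

```
output {
  elasticsearch { hosts => ["192.168.3.17:9200"] }
  file {
    path => "/var/log/logstash/backup-%{+YYYY.MM.dd}.log"
    gzip => true   # text compresses well, as noted above
  }
}
```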
Start Logstash from a configuration file; events are again written to Elasticsearch
# vim normal.conf
input { stdin { } }
output {
elasticsearch { hosts => ["192.168.3.17:9200"] }
stdout { codec => rubydebug }
}
# /opt/logstash/bin/logstash -f normal.conf
Settings: Default pipeline workers: 2
Pipeline main started
123
{
"message" => "123",
"@version" => "1",
"@timestamp" => "2016-10-28T03:33:35.899Z",
"host" => "node1.chinasoft.com"
}
chinasoft
{
"message" => "chinasoft",
"@version" => "1",
"@timestamp" => "2016-10-28T03:33:44.641Z",
"host" => "node1.chinasoft.com"
}
5.2 Learning to write the conf format
Input plugin configuration, using file as the example here; several can be set
input {
file {
path => "/var/log/messages"
type => "syslog"
}
file {
path => "/var/log/nginx/access.log"
type => "nginx"
}
}
Several ways to collect files: use an array, match with * wildcards, or write multiple path entries
path => ["/var/log/messages","/var/log/*.log"]
path => ["/data/mysql/mysql.log"]
Set a boolean value
ssl_enable => true
File size units
my_bytes => "1113" # 1113 bytes
my_bytes => "10MiB" # 10485760 bytes
my_bytes => "100kib" # 102400 bytes
my_bytes => "180 mb" # 180000000 bytes
Collecting JSON
codec => "json"
Collecting a hash
match => {
"field1" => "value1"
"field2" => "value2"
...
}
Port
port => 21
Password
my_password => "password"
5.3 Learning to write the file input plugin
5.3.1 Options of the file input plugin
sincedb_path: path of the file in which Logstash records how far it has read
start_position: beginning or end; where to start collecting; the default is end, i.e. from the tail of the file
add_field: add an extra field to each event
discover_interval: how often to check the path for new files; the default is 15 seconds
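Putting the options above together, a file input might look like this — a sketch in which the sincedb path and the added field are assumptions:

```
input {
  file {
    path => "/var/log/messages"
    type => "syslog"
    start_position => "beginning"                       # read from the head on first run
    sincedb_path => "/var/lib/logstash/sincedb-syslog"  # where the read position is recorded
    add_field => { "env" => "test" }                    # attach an extra field to every event
    discover_interval => 15                             # look for new files every 15 seconds
  }
}
```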
5.4 Learning to write the file output plugin
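A minimal sketch of the file output plugin, writing one file per day as plain lines rather than JSON; the path and line format are assumptions:

```
output {
  file {
    path => "/tmp/logstash-%{+YYYY.MM.dd}.log"
    codec => line { format => "%{host} %{message}" }  # write plain text lines, not JSON
  }
}
```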
5.5 Writing a conf file with input and output plugins
5.5.1 A conf for collecting nginx logs
------------------------------------------------
# vim nginx.conf
input {
file {
path => "/var/log/nginx/access.log"
type => "nginx"
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["192.168.3.17:9200"]
index => "nginx-%{+YYYY.MM.dd}"
}
}
# /opt/logstash/bin/logstash -f nginx.conf
------------------------------------------------
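A parallel conf for collecting the system log can follow the same pattern — a sketch reusing the /var/log/messages input from section 5.2; the index name is an assumption:

```
input {
  file {
    path => "/var/log/messages"
    type => "syslog"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.3.17:9200"]
    index => "system-%{+YYYY.MM.dd}"
  }
}
```

Save it as, say, syslog.conf and start it the same way: # /opt/logstash/bin/logstash -f syslog.conf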