We hit quite a few pitfalls during the upgrade. In the old ES versions, index settings could be written directly into the config file; in the new version that is no longer allowed and they must be set through the API. Also, the process-count tuning that mattered in ES 2.x made no observable difference in ES 5.x. The config file is as follows:
cluster.name: yz-5search
path.data: /data1/LogData5/
path.logs: /data1/LogData5/logs
bootstrap.memory_lock: false # the CentOS 6 kernel does not support this, so it must be disabled
bootstrap.system_call_filter: false
network.host: 10.39.40.94
http.port: 9220
transport.tcp.port: 9330
discovery.zen.ping.unicast.hosts: ["10.39.40.94:9330","10.39.40.95:9330","10.39.40.96:9330","10.39.40.97:9330"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
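As an aside, the minimum_master_nodes value can be sanity-checked against the quorum formula from the ES docs; a minimal sketch, assuming all 4 nodes in unicast.hosts are master-eligible:

```shell
# Quorum formula recommended by the ES docs: (master-eligible nodes / 2) + 1.
# With the 4 nodes listed in discovery.zen.ping.unicast.hosts this gives 3,
# so a setting of 2 leaves a split-brain window.
NODES=4
QUORUM=$(( NODES / 2 + 1 ))
echo "$QUORUM"   # 3
```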
To speed up indexing, we wrote an index template (index settings are no longer allowed in the config file) and PUT the parameters into ES. The template could also be set from logstash on the front end, but changing logstash felt like too much trouble. The template script is below; note that the inline # comments are explanations only and must be stripped before running, since JSON does not allow comments:
#!/bin/sh
#writer:gaolixu
#index template
curl -XPUT 'http://10.39.40.94:9220/_template/cms_logs?pretty' -d '{
"order": 6, # priority; the higher order wins when templates overlap
"template": "logstash-cms*", # pattern matched against index names
"settings": {
"index.refresh_interval" : "60s", # index refresh interval
"index.number_of_replicas" : "0", # replicas set to 0
"index.number_of_shards" : "8", # 8 shards across the 4 servers
"index.translog.flush_threshold_size" : "768m", # translog size that triggers a flush
"index.store.throttle.max_bytes_per_sec" : "500m", # store throttling threshold
"index.translog.durability": "async", # fsync the translog asynchronously, favoring performance over durability
"index.merge.policy.segments_per_tier": "25", # segments allowed per tier in a merge round, default 10
"index.merge.policy.floor_segment": "100mb", # segments smaller than this are rounded up to this size, so tiny segments get merged away
"index.merge.scheduler.max_thread_count": "1", # use 1 for spinning disks
"index.routing.allocation.total_shards_per_node": "2" # at most 2 shards of this index per node
}
}'
Note: to modify an existing template, change PUT to POST.
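Because JSON has no comment syntax, a template annotated with # comments has to be cleaned before it is sent. A minimal sketch; the file names template.json and template.clean.json, and the abridged settings, are placeholders:

```shell
# Write an annotated template (abridged to one setting), strip the "#"
# comments, and validate the result locally before PUTting it to ES.
cat > template.json <<'EOF'
{
"order": 6, # priority
"template": "logstash-cms*",
"settings": { "index.refresh_interval" : "60s" }
}
EOF
sed 's/ *#.*$//' template.json > template.clean.json
python3 -m json.tool < template.clean.json > /dev/null && echo "valid JSON"
# then: curl -XPUT 'http://10.39.40.94:9220/_template/cms_logs?pretty' -d @template.clean.json
```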
Logs are kept for 7 days; the cleanup script below is run from a cron job:
#!/bin/bash
#writer:gaolixu
DATA2=`date +%Y.%m.%d -d'-7 day'`
curl -XDELETE "http://10.39.40.94:9220/logstash-*-${DATA2}*?pretty"
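The retention arithmetic can be checked in isolation; a small sketch with a fixed reference date (2017-06-15 is arbitrary):

```shell
# For a given day, -7 day yields the index-name suffix that the DELETE
# pattern will match; GNU date handles the relative offset.
DATA2=$(date +%Y.%m.%d -d '2017-06-15 -7 day')
echo "logstash-*-${DATA2}*"   # logstash-*-2017.06.08*
```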
Since single indexes were reaching 35 GB or even 40 GB and above, we changed index creation on the logstash side from 12 indexes per day to 24 indexes per day:
The logstash change is as follows:
index => "logstash-cms-front-nginx-%{+YYYY.MM.dd.hh}"
changed to
index => "logstash-cms-front-nginx-%{+YYYY.MM.dd.HH}"
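Why the change works: in logstash's Joda-style date pattern, hh is the 12-hour clock and HH the 24-hour clock, mirroring strftime's %I and %H. A quick sketch of the collision, using GNU date:

```shell
# With the 12-hour clock, 01:00 and 13:00 map to the same index name;
# the 24-hour clock keeps all 24 hourly indexes distinct.
date -d '2017-06-15 01:00' +%Y.%m.%d.%I   # 2017.06.15.01
date -d '2017-06-15 13:00' +%Y.%m.%d.%I   # 2017.06.15.01 (collision)
date -d '2017-06-15 13:00' +%Y.%m.%d.%H   # 2017.06.15.13 (distinct)
```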
Note: installing supervisor
easy_install meld3
easy_install pip
easy_install supervisor
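Once installed, supervisor needs a program section to keep the process running. A minimal sketch; the program name, command path, and log path below are assumptions to adapt to the actual install:

```ini
[program:logstash]
; command path is an assumption - point it at the real logstash binary and config
command=/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/cms.conf
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/logstash.log
```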