Setting Up a Graylog Cluster on CentOS: A Detailed Guide

  Graylog is a simple, easy-to-use log management tool with a fairly complete feature set. Compared with the ELK stack, its advantages are:

Deployment and maintenance are simple: it is an all-in-one solution rather than an integration of three independent systems like ELK.

Compared with Elasticsearch's JSON query syntax, the search syntax is much simpler, e.g. source:mongo AND response_time_ms:>5000.

Basic alerting is built in.

Search conditions can be exported as JSON text, which is convenient when writing scripts that call the Elasticsearch REST search API.

You can write your own log collection scripts and send the logs to the Graylog server with curl/nc. The message format is GELF, which you assemble yourself; Fluentd and Logstash both have plugins for emitting GELF messages. Rolling your own collector gives a lot of freedom: in practice it is enough to watch the log file's modify events with inotifywait and ship the newly appended lines to the Graylog server with curl/netcat (see the sketch after this list).

The UI is friendly, and search results are highlighted.
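  A minimal sketch of that collection approach, assuming a Graylog GELF HTTP input is already listening on port 12201 of the Graylog server; the log path, server address 10.2.2.41, and script name gelf-ship.sh are placeholders:

#!/bin/bash
# gelf-ship.sh: watch a log file for modify events and forward newly appended
# lines to a Graylog GELF HTTP input. Path, address and port are placeholders.
LOGFILE=/var/log/app/app.log
GELF_URL=http://10.2.2.41:12201/gelf

OFFSET=$(wc -l < "$LOGFILE")          # lines already present when the script starts
while inotifywait -qq -e modify "$LOGFILE"; do
    TOTAL=$(wc -l < "$LOGFILE")
    tail -n +"$((OFFSET + 1))" "$LOGFILE" | while read -r line; do
        # NOTE: assumes log lines contain no double quotes; a real script should JSON-escape them
        curl -s -X POST "$GELF_URL" \
             -d "{\"version\": \"1.1\", \"host\": \"$(hostname)\", \"short_message\": \"$line\"}"
    done
    OFFSET=$TOTAL
done

  The same line could instead be sent with netcat to a GELF TCP/UDP input; note that GELF messages over TCP must be terminated with a null byte.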

  That said, Graylog is still not as extensible as ELK.

  Graylog consists of the following components:

Graylog: provides the externally facing graylog interface; CPU-intensive.

Elasticsearch: persistent storage and retrieval of the log data; I/O-intensive.

MongoDB: stores Graylog's configuration data.

2. Graylog Architecture

  Single-server architecture:

  

(Figure: single-server architecture)

  Graylog cluster architecture:

  

(Figure: Graylog cluster architecture)

3. Installing Graylog

  The setup here is the cluster scheme, but Elasticsearch, Graylog, and MongoDB are deployed together on the same servers.

  ① Prerequisites:

$ sudo yum install java-1.8.0-openjdk-headless.x86_64
# Disable SELinux: permanently in its config file (takes effect after reboot) and for the current session
$ sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
$ sudo setenforce 0

# Install pwgen (available from the EPEL repository)
$ sudo yum install epel-release
$ sudo yum install pwgen  
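  A quick way to confirm the prerequisites took effect (these checks are just a suggestion; pwgen will later be used to generate secrets such as password_secret in Graylog's server.conf):

$ java -version      # should report openjdk version "1.8.0_..."
$ getenforce         # should print Permissive or Disabled
$ pwgen -N 1 -s 96   # prints one 96-character random string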

  ② Installing MongoDB:

  Create /etc/yum.repos.d/mongodb-org-3.2.repo with the following contents:

[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/RedHat/$releasever/mongodb-org/3.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://

  Install MongoDB:

sudo yum install mongodb-org

  Start the service:

$ sudo chkconfig --add mongod
$ sudo systemctl daemon-reload
$ sudo systemctl enable mongod.service
$ sudo systemctl start mongod.service
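  To verify that mongod is running (a quick sanity check, not a required step):

$ sudo systemctl status mongod.service
$ mongo --eval 'db.runCommand({ connectionStatus: 1 })'   # should end with "ok" : 1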

  ③ Installing Elasticsearch:

  Graylog 2.3.x supports Elasticsearch 5.x.

  First import the Elastic GPG key and add the repository file, then install with yum:

$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
$ cat /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

$ sudo yum install elasticsearch

  Edit the Elasticsearch configuration file /etc/elasticsearch/elasticsearch.yml and add the cluster information:

# cat /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: graylog
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: shop-log-02
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elasticsearch/db
#
# Path to log files:
#
path.logs: /data/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.2.2.42
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# The addresses of the other two nodes go here:
discovery.zen.ping.unicast.hosts: ["10.2.2.41", "10.2.2.43"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 2
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"
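  The Memory section above only notes that the heap should be about half of the available RAM; in Elasticsearch 5.x the heap size itself is set in /etc/elasticsearch/jvm.options. A sketch, assuming a node with 8 GB of RAM:

# /etc/elasticsearch/jvm.options (excerpt)
-Xms4g
-Xmx4g

  The minimum and maximum heap values should be identical so the heap is not resized at runtime.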


  Start the Elasticsearch service:
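  On a systemd-based CentOS 7 host this works the same way as for mongod above; once Elasticsearch has been started on all three nodes, the cluster state can be queried over the HTTP port configured earlier:

$ sudo systemctl daemon-reload
$ sudo systemctl enable elasticsearch.service
$ sudo systemctl start elasticsearch.service

# After all three nodes are up, number_of_nodes should be 3 and status should be green:
$ curl 'http://10.2.2.42:9200/_cluster/health?pretty'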
