What is Logstash?
Logstash is an open-source tool for managing events and logs. It provides a real-time pipeline for data collection: it collects your log data, converts it into JSON documents, and stores it in Elasticsearch.
The goal of this tutorial is to use Logstash to collect the server's syslog and to visualize the collected logs in Kibana.
The components we will use this time:
Logstash
: Server component that processes incoming logs.
Elasticsearch
: Stores all logs.
Kibana
: Web interface for searching and visualizing logs.
Filebeat
: Installed on the client server; it ships log data to Logstash, acting as a log forwarder.
Before installing the components above, make sure Elasticsearch and Kibana are already installed. If not, please refer to the following tutorials:
《 Install and configure the Elasticsearch search engine cluster 》
《 ElasticSearch Kibana Binary Installation Configuration 》
《 Elasticsearch Kibana cluster installation configuration X-pack expansion pack 》
Environment:
Server: CentOS 7 (IP: 10.28.204.65), 16 GB RAM – Logstash/Kibana/Elasticsearch
Client: CentOS 7 (IP: 10.28.204.66), 8 GB RAM – Filebeat
Prerequisites:
《 Linux JAVA JDK JRE environment variable installation and configuration 》
Since Logstash is Java-based, please make sure OpenJDK or Oracle JDK is installed on your server (Java 9 is not supported at this time).
Installation instructions:
The ELK official website provides installation packages in several formats (zip/tar/rpm/deb) for each component. Taking CentOS 7 as an example: if you download the RPM, you can install it as a system service with rpm -ivh path_of_your_rpm_file, then manage it with systemctl, e.g. systemctl start/stop/status logstash.service. This is very simple, but the drawbacks are also obvious: the installation directory cannot be customized, and the related files are scattered, making centralized management difficult.
Install Logstash
Download address: https://www.elastic.co/downloads/logstash
Note: I will describe two installation methods below. Perform the whole process as the root user.
Binary compressed package installation mode
1. Unzip and move it to the appropriate path:
$ cd /tmp
$ tar zxvf logstash-5.6.3.tar.gz
$ mv logstash-5.6.3 /usr/local
$ cd /usr/local
$ ln -s logstash-5.6.3 logstash
$ mkdir -p /usr/local/logstash/config/conf.d
2. Create users and groups and grant permissions
$ groupadd logstash
$ useradd -g logstash logstash
$ chown -R logstash:logstash logstash*
3. Create systemctl system unit file
$ vim /etc/systemd/system/logstash.service

[Unit]
Description=logstash

[Service]
Type=simple
User=logstash
Group=logstash
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/local/logstash/bin/logstash "--path.settings" "/usr/local/logstash/config"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target
Installation is complete. After creating the unit file, run systemctl daemon-reload so systemd picks it up. The advantage of this installation method is that the files are centralized and easy to manage; the drawback is that installation takes more work.
YUM installation mode
This installation method is simple, fast, and saves time.
1. Download and install the public signature key:
$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
If the download fails, change the server's DNS to 8.8.8.8 and restart the network service.
2. Add the logstash image source
$ vim /etc/yum.repos.d/logstash.repo

[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
3. Start installation
$ yum install logstash -y
After installation, the logstash.service unit file is generated automatically; there is no need to create it manually.
Note: 5.x means the latest 5.x version of Logstash is installed by default. You can change the x to a specific version number, for example: 5.6.
Related configuration directory after YUM installation:
/usr/share/logstash – main program
/etc/logstash – configuration files
/var/log/logstash – logs
/var/lib/logstash – data store
Start configuring Logstash
Note: for the following operations, I use the first (binary) installation method.
Filebeat (the successor to Logstash Forwarder) is usually installed on the client server and uses an SSL certificate to verify the identity of the Logstash server for secure communication.
1. Generate a self-signed SSL certificate valid for 365 days, using either the hostname or an IP SAN.
Method 1 (host name):
$ cd /etc/pki/tls/
Now create the SSL certificate. Replace "server.renwolecom.local" with your Logstash server's hostname.
$ openssl req -x509 -nodes -newkey rsa:2048 -days 365 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt -subj /CN=server.renwolecom.local
2. If you plan to use an IP address instead of a host name, please follow the steps below to create an SSL certificate for the IP SAN.
Method 2 (IP address):
To create an IP SAN certificate, you need to add the IP address of the Logstash server to subjectAltName in the OpenSSL configuration file.
$ vim /etc/pki/tls/openssl.cnf
Find the "[ v3_ca ]" section and add the IP address of the Logstash server below it, for example:
subjectAltName = IP:10.28.204.65

$ cd /etc/pki/tls/
$ openssl req -x509 -days 365 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
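Before pointing Logstash at the real certificate, it can help to confirm the SAN actually made it into the file. The following is a sketch under stated assumptions (a throwaway temp directory and a minimal hypothetical config file), generating and inspecting an IP-SAN certificate with the same openssl options as above:

```shell
# Sketch: generate a throwaway IP-SAN certificate in a temp directory and
# verify the SAN was embedded. Paths and the config file are illustrative.
workdir=$(mktemp -d)
cat > "$workdir/san.cnf" <<'EOF'
[req]
distinguished_name = dn
x509_extensions = v3_ca
prompt = no
[dn]
CN = logstash-test
[v3_ca]
subjectAltName = IP:10.28.204.65
EOF
openssl req -x509 -days 365 -batch -nodes -newkey rsa:2048 \
  -config "$workdir/san.cnf" \
  -keyout "$workdir/logstash-forwarder.key" \
  -out "$workdir/logstash-forwarder.crt" 2>/dev/null
# The SAN section should list "IP Address:10.28.204.65".
openssl x509 -in "$workdir/logstash-forwarder.crt" -noout -text |
  grep -A1 'Subject Alternative Name'
```

If the grep prints nothing, the extension was not embedded and Filebeat's certificate verification will fail later.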
This tutorial uses the latter method (IP SAN).
Configure logstash.conf
For a YUM installation, Logstash is configured under /etc/logstash. Since I installed from the binary package, I created the conf.d folder under the /usr/local/logstash/config directory (done above).
The Logstash configuration consists of three parts: input, filter, and output. You can create three separate configuration files under /usr/local/logstash/config/conf.d, or put all three parts in a single configuration file.
1. I suggest using a single file containing the input, filter, and output sections:
$ vim /usr/local/logstash/config/conf.d/logstash.conf
2. In the input section, configure the Logstash listening port and add the SSL certificate for secure communication:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
3. In the filter section, we use grok to parse the logs before sending them to Elasticsearch. The following grok filter looks for logs labeled "syslog" and tries to parse them into a structured index:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
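Grok patterns are essentially named regular expressions. As a rough shell illustration (this is not Logstash itself, just a comparable decomposition of a made-up sample line), here is what SYSLOGTIMESTAMP, SYSLOGHOST, and DATA would capture from a typical /var/log/messages entry:

```shell
# Illustrative only: decompose a sample syslog line the way the grok
# pattern above would (timestamp, hostname, program name).
line='Nov 11 06:16:34 server204 sshd[1234]: Accepted password for root'
ts=$(echo "$line" | sed -E 's/^([A-Z][a-z]{2} +[0-9]+ [0-9:]{8}) .*/\1/')
host=$(echo "$line" | awk '{print $4}')
prog=$(echo "$line" | sed -E 's/^[A-Z][a-z]{2} +[0-9]+ [0-9:]{8} [^ ]+ ([a-zA-Z0-9_-]+).*/\1/')
echo "$ts | $host | $prog"
# prints: Nov 11 06:16:34 | server204 | sshd
```

In Logstash, each captured piece becomes its own field (syslog_timestamp, syslog_hostname, syslog_program), which is what makes the index searchable by field in Kibana.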
4. In the output section, define where the logs are stored; this is of course the Elasticsearch server:
output {
  elasticsearch {
    hosts => [ "10.28.204.65:9200" ]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    user => elastic
    password => changeme
  }
  stdout {
    codec => rubydebug
  }
}
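In the index setting above, %{[@metadata][beat]} resolves to the shipper's name (filebeat) and %{+YYYY.MM.dd} to the event's date, so each day's events land in their own index. A quick shell sketch of the resulting name for today's date (illustrative, not Logstash code):

```shell
# Illustrative: the daily index name that Filebeat events would be
# written to, following the %{[@metadata][beat]}-%{+YYYY.MM.dd} pattern.
echo "filebeat-$(date +%Y.%m.%d)"
```

This daily split is why the later search and the Kibana index pattern both use the filebeat-* wildcard.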
5. The complete configuration is as follows:
$ cat /usr/local/logstash/config/conf.d/logstash.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => [ "10.28.204.65:9200" ]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    user => elastic
    password => changeme
  }
  stdout {
    codec => rubydebug
  }
}
Modify the logstash.yml main configuration file
The contents after modification are as follows:
$ egrep -v "(^#|^$)" /usr/local/logstash/config/logstash.yml
path.config: /usr/local/logstash/config/conf.d
path.logs: /usr/local/logstash/logs
The key changes are the two paths above.
Start logstash and add it to boot automatically
$ systemctl start logstash
$ systemctl enable logstash
After startup, you can check the logs with the following commands to troubleshoot any problems.
YUM installation view log:
$ cat /var/log/logstash/logstash-plain.log
Binary installation view log:
$ cat /usr/local/logstash/logs/logstash-plain.log
Check the logs according to your actual installation path.
Install Filebeat on the client server
1. There are five Beats clients available, namely:
Filebeat
– Real time insight into log data.
Packetbeat
– Analyze network packet data.
Metricbeat
– Collect various performance indicators of the service.
Winlogbeat
– Lightweight shipper for Windows event logs.
Heartbeat
– Proactively detect services to monitor their availability.
2. To analyze the system logs of the client machine (for example: 10.28.204.66), we need to install Filebeat. Install it with the following commands:
$ curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.3-x86_64.rpm
$ rpm -vi filebeat-5.6.3-x86_64.rpm
Configure Filebeat
Now it is time to connect Filebeat to Logstash. Follow the steps below to configure Filebeat for the ELK Stack.
1. Filebeat (Beats) uses the SSL certificate to verify the identity of the Logstash server, so copy logstash-forwarder.crt from the Logstash server to the client:
$ scp -pr root@10.28.204.65:/etc/pki/tls/certs/logstash-forwarder.crt /etc/ssl/certs/
2. Open the filebeat configuration file
$ vim /etc/filebeat/filebeat.yml
3. We will configure Filebeat to send the contents of /var/log/messages to the Logstash server, so modify the existing configuration in the paths section. Comment out "- /var/log/*.log" to avoid sending every .log file in that directory to Logstash.
...
  paths:
    - /var/log/messages
    # - /var/log/*.log
...
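To see why the wildcard prospector is worth commenting out, consider how many files /var/log/*.log would match. A small sketch with a throwaway directory (the file names are hypothetical):

```shell
# Illustrative: a *.log glob picks up every matching file, so leaving it
# enabled alongside /var/log/messages would ship far more than intended.
logdir=$(mktemp -d)
touch "$logdir/messages" "$logdir/secure.log" "$logdir/cron.log"
ls "$logdir"/*.log   # matches secure.log and cron.log, but not messages
```

On a real server /var/log typically contains many more .log files, each of which would become an extra prospector stream.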
4. Comment out the "output.elasticsearch" section, because we will not store logs directly in Elasticsearch:

...
# output.elasticsearch:
...
5. Now find the "output.logstash" section and modify it as follows. This tells Filebeat to send logs to the Logstash server at "10.28.204.65:5044". Also adjust the path to where the SSL certificate is located.

...
output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["10.28.204.65:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/ssl/certs/logstash-forwarder.crt"]
...
Save and exit.
Important: the Filebeat configuration file is in YAML format, which means indentation matters! Be sure to use the same number of spaces as shown in these instructions.
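A quick pre-flight check (a generic sketch, not a Filebeat feature) is to scan the YAML file for tab characters, which YAML forbids in indentation:

```shell
# Sketch: write a known-good fragment to a temp file, then report whether
# any tab characters appear in it. The file path here is illustrative.
conf=$(mktemp)
printf 'output.logstash:\n  hosts: ["10.28.204.65:5044"]\n' > "$conf"
if grep -q "$(printf '\t')" "$conf"; then
  echo "tabs found - fix indentation"
else
  echo "indentation ok"
fi
```

Run the same grep against your real /etc/filebeat/filebeat.yml before restarting the service.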
Restart the Filebeat service
$ systemctl restart filebeat
$ cat /var/log/filebeat/filebeat
Firewall Settings
$ firewall-cmd --permanent --zone=public --add-port=5044/tcp
$ firewall-cmd --reload
Test whether the data is stored normally
On your Elasticsearch server, verify that Elasticsearch is receiving the Filebeat > Logstash data with the following command:
$ curl -u elastic -XGET 'http://10.28.204.65:9200/filebeat-*/_search?pretty'
Enter your authentication password, and you should see the following output:
...
{
  "_index" : "filebeat-2017.11.1",
  "_type" : "log",
  "_id" : "AV8Zh29HaTuC0RmgtzyM",
  "_score" : 1.0,
  "_source" : {
    "@timestamp" : "2017-11-1T06:16:34.719Z",
    "offset" : 39467692,
    "@version" : "1",
    "beat" : {
      "name" : "204",
      "hostname" : "204",
      "version" : "5.6.3"
    },
    "input_type" : "log",
    "host" : "204",
    "source" : "/var/log/messages",
    "message" : "Nov 11 19:06:37 204 logstash: at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182)",
    "type" : "log",
    "tags" : [ "beats_input_codec_plain_applied" ]
  }
...
If your output shows 0 hits, check whether communication between Logstash and Elasticsearch is working; the startup logs will usually tell you why. If you get the expected output, continue to the next step.
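The fastest health signal in a _search response is the hit count. As a rough illustration using a canned response string (a made-up sample, not output from a live cluster), the total can be pulled out with sed:

```shell
# Illustrative: extract hits.total from a canned _search response. The
# JSON here is a fabricated sample for demonstration purposes only.
response='{"took":3,"hits":{"total":42,"max_score":1.0}}'
total=$(echo "$response" | sed -E 's/.*"total":([0-9]+).*/\1/')
echo "documents indexed: $total"
```

Against the real cluster you would pipe the curl output into the same sed expression; anything greater than 0 means the Filebeat > Logstash > Elasticsearch chain is working.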
Connect to Kibana
1. Use the following URL to access Kibana:
http://10.28.204.65:5601/
2. When you log in for the first time, you must create an index pattern for the Filebeat indices.
Type the following in the "Index name or pattern" box:
filebeat-*
Select @timestamp as the time filter field, then click Create.
3. You can also create the index pattern later by logging in to Kibana and navigating to:
Management >> Index Patterns >> Create Index Pattern
Enter:
filebeat-*
Select:
@timestamp
Leave the other defaults and click Create.
4. After creation, click:
Discover >> filebeat-*
Now you can view the system logs of the client (10.28.204.66) on the right.
This completes the installation, configuration, and combined use of the ELK Stack with Logstash. Kibana offers far more powerful features than shown here and is well worth exploring further.