
Logstash and Filebeat Installation and Configuration: Using Kibana to Analyze Log Data

What is Logstash?

Logstash is an open-source tool for managing events and logs. It provides a real-time pipeline for data collection: Logstash collects your log data, converts it into JSON documents, and stores it in Elasticsearch.

The goal of this tutorial is to use Logstash to collect the server's syslog and to set up Kibana to visualize the collected logs.

Components used in this tutorial:

Logstash : server component that processes incoming logs.
Elasticsearch : stores all the logs.
Kibana : web interface for searching and visualizing logs.
Filebeat : installed on the client server; it ships log data to Logstash, acting as a log shipper.

Of the components above, Elasticsearch and Kibana must already be installed. If they are not, refer to the following tutorials:

Install and configure the Elasticsearch search engine cluster
ElasticSearch Kibana Binary Installation Configuration
Elasticsearch Kibana cluster installation and configuration of the X-Pack extension pack

Environment:

Server: CentOS 7 (IP: 10.28.204.65), 16 GB RAM, running Logstash/Kibana/Elasticsearch

Client: CentOS 7 (IP: 10.28.204.66), 8 GB RAM, running Filebeat

Prerequisites:

Linux JAVA JDK JRE environment variable installation and configuration

Since Logstash runs on Java, make sure OpenJDK or Oracle JDK is installed on the server (Java 9 is not supported at the time of writing).

Installation instructions:

The ELK official site provides installation packages in several formats (zip/tar/rpm/deb) for each component. Taking CentOS 7 as an example, if you download the RPM you can install the software directly as a system service with rpm -ivh path_of_your_rpm_file, and then start and stop it with systemctl, for example systemctl start/stop/status logstash.service. This is very simple, but the disadvantages are also obvious: the installation directory cannot be customized, and the related files are scattered, which makes centralized management difficult.
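For example, the whole install-and-run cycle with the RPM looks roughly like this (a sketch; the exact file name depends on the version you downloaded):

 $ rpm -ivh logstash-5.6.3.rpm
 $ systemctl start logstash.service
 $ systemctl status logstash.service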

Install Logstash

Download address: https://www.elastic.co/downloads/logstash

Note: I describe two installation methods below. Perform the whole process as the root user.

Binary compressed package installation mode

1. Unzip and move it to the appropriate path:

 $ cd /tmp
 $ tar zxvf logstash-5.6.3.tar.gz
 $ mv logstash-5.6.3 /usr/local
 $ cd /usr/local
 $ ln -s logstash-5.6.3 logstash
 $ mkdir -p /usr/local/logstash/config/conf.d

2. Create users and groups and grant permissions

 $ groupadd logstash
 $ useradd -g logstash logstash
 $ chown -R logstash.logstash logstash*

3. Create systemctl system unit file

 $ vim /etc/systemd/system/logstash.service

 [Unit]
 Description=logstash

 [Service]
 Type=simple
 User=logstash
 Group=logstash
 # Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
 # Prefixing the path with '-' makes it try to load, but if the file doesn't
 # exist, it continues onward.
 EnvironmentFile=-/etc/default/logstash
 EnvironmentFile=-/etc/sysconfig/logstash
 ExecStart=/usr/local/logstash/bin/logstash "--path.settings" "/usr/local/logstash/config"
 Restart=always
 WorkingDirectory=/
 Nice=19
 LimitNOFILE=16384

 [Install]
 WantedBy=multi-user.target
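After creating or changing a unit file, reload systemd so it picks up the new definition (standard systemd practice):

 $ systemctl daemon-reload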

Installation is complete. The advantage of this method is that the configuration files are centralized and easy to manage; the drawback is the extra installation work.

YUM installation mode

This installation method is simple, fast, and saves time.

1. Download and install the public signature key:

 $ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

If the download fails, change the server's DNS to 8.8.8.8 and restart the network interface.

2. Add the logstash image source

 $ vim /etc/yum.repos.d/logstash.repo

 [logstash-5.x]
 name=Elastic repository for 5.x packages
 baseurl=https://artifacts.elastic.co/packages/5.x/yum
 gpgcheck=1
 gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
 enabled=1
 autorefresh=1
 type=rpm-md

3. Start installation

 $ yum install logstash -y

After installation, the logstash.service unit file is generated automatically; there is no need to create it manually.

Note: 5.x means that the latest 5.x release of Logstash is installed by default. You can change the x to a specific version number, for example 5.6.

Related configuration directory after YUM installation:

 /usr/share/logstash - main program
 /etc/logstash - configuration files
 /var/log/logstash - logs
 /var/lib/logstash - data store

Start configuring Logstash

Note: for the following operations I use the first (binary) installation method.

Filebeat (Logstash Forwarder) is usually installed on the client server and uses SSL certificates to verify the identity of the Logstash server for secure communication.

1. Generate a self-signed SSL certificate valid for 365 days. The certificate can be created with either the hostname or an IP SAN.

Method 1 (host name):

 $ cd /etc/pki/tls/

Now create the SSL certificate. Replace "server.renwolecom.local" with the hostname of your Logstash server.

 $ openssl req -x509 -nodes -newkey rsa:2048 -days 365 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt -subj /CN=server.renwolecom.local

2. If you plan to use an IP address instead of a host name, please follow the steps below to create an SSL certificate for the IP SAN.

Method 2 (IP address):

To create an IP SAN certificate, you need to add the Logstash server's IP address to the subjectAltName field in the OpenSSL configuration file.

 $ vim /etc/pki/tls/openssl.cnf

Find the "[v3_ca]" section and add the Logstash server's IP address below it, for example:

 subjectAltName = IP:10.28.204.65

 $ cd /etc/pki/tls/
 $ openssl req -x509 -days 365 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

This tutorial uses the latter method.
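Either way, you can verify that the generated certificate contains what you expect (for method 2, the Subject Alternative Name should list the server IP); this is a standard openssl inspection command:

 $ openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'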

Configure logstash.conf

With the YUM installation, Logstash is configured under /etc/logstash. Since I used the binary installation, the conf.d folder must be created under the /usr/local/logstash/config directory (already done above).

A Logstash configuration consists of three parts: input, filter, and output. You can create a separate file for each part under /usr/local/logstash/config/conf.d, or put all three parts in one configuration file.

1. I suggest using a single file containing the input, filter, and output sections.

 $ vim /usr/local/logstash/config/conf.d/logstash.conf

2. In the input section, configure the Logstash listening port and add the SSL certificate for secure communication.

 input {
   beats {
     port => 5044
     ssl => true
     ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
     ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
   }
 }

3. In the filter section, we use grok to parse the logs before sending them to Elasticsearch. The following grok filter looks for logs tagged as "syslog" and tries to parse them into structured fields for indexing.

 filter {
   if [type] == "syslog" {
     grok {
       match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
       add_field => [ "received_at", "%{@timestamp}" ]
       add_field => [ "received_from", "%{host}" ]
     }
     syslog_pri { }
     date {
       match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
     }
   }
 }

4. In the output section, we define where the logs are stored; this is, of course, the Elasticsearch server.

 output {
   elasticsearch {
     hosts => [ "10.28.204.65:9200" ]
     index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
     user => elastic
     password => changeme
   }
   stdout { codec => rubydebug }
 }

5. The complete configuration is as follows:

 $ cat /usr/local/logstash/config/conf.d/logstash.conf
 input {
   beats {
     port => 5044
     ssl => true
     ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
     ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
   }
 }
 filter {
   if [type] == "syslog" {
     grok {
       match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
       add_field => [ "received_at", "%{@timestamp}" ]
       add_field => [ "received_from", "%{host}" ]
     }
     syslog_pri { }
     date {
       match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
     }
   }
 }
 output {
   elasticsearch {
     hosts => [ "10.28.204.65:9200" ]
     index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
     user => elastic
     password => changeme
   }
   stdout { codec => rubydebug }
 }
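Before starting the service, it is worth validating the configuration syntax. Logstash 5.x supports a --config.test_and_exit flag for this (the path below assumes the binary installation used in this tutorial):

 $ /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/conf.d/logstash.conf --config.test_and_exit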

Modify the logstash.yml main configuration file

The contents after modification are as follows:

 $ egrep -v "(^#|^$)" /usr/local/logstash/config/logstash.yml
 path.config: /usr/local/logstash/config/conf.d
 path.logs: /usr/local/logstash/logs

The important settings are the two paths above.

Start Logstash and enable it at boot

 $ systemctl start logstash
 $ systemctl enable logstash

After startup, check the logs with the following commands to diagnose any problems.

Log location for the YUM installation:

 $ cat /var/log/logstash/logstash-plain.log

Log location for the binary installation:

 $ cat /usr/local/logstash/logs/logstash-plain.log

Check using the path that matches your installation.

Install Filebeat on the client server

1. There are five Beats clients available:

Filebeat – ships log file data in real time.
Packetbeat – analyzes network packet data.
Metricbeat – collects service performance metrics.
Winlogbeat – lightweight shipper for Windows event logs.
Heartbeat – actively probes services to monitor their availability.

2. To analyze the system logs of the client machine (here: 10.28.204.66), install Filebeat with the following commands:

 $ curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.3-x86_64.rpm
 $ rpm -vi filebeat-5.6.3-x86_64.rpm

Configure Filebeat

Now it is time to connect Filebeat to Logstash. Follow the steps below to configure Filebeat for the ELK Stack.

1. Filebeat (Beats) uses the SSL certificate to verify the identity of the Logstash server, so copy logstash-forwarder.crt from the Logstash server to the client:

 $ scp -pr root@10.28.204.65:/etc/pki/tls/certs/logstash-forwarder.crt /etc/ssl/certs/

2. Open the filebeat configuration file

 $ vim /etc/filebeat/filebeat.yml

3. We will configure Filebeat to send the contents of /var/log/messages to the Logstash server. Modify the existing configuration under the paths section, and comment out - /var/log/*.log to avoid sending every .log file in that directory to Logstash.

 ...
 paths:
   - /var/log/messages
   # - /var/log/*.log
 ...

4. Comment out the "output.elasticsearch" section, because we will not store logs directly in Elasticsearch.

 ...
 # output.elasticsearch:
 ...

5. Now find the "output.logstash" line and modify its contents as follows.

This section configures Filebeat to send logs to the Logstash server at "10.28.204.65:5044". Also adjust the path to the SSL certificate copied earlier.

 ...
 output.logstash:
   # The Logstash hosts
   #hosts: ["localhost:5044"]
   hosts: ["10.28.204.65:5044"]
   # Optional SSL. By default is off.
   # List of root certificates for HTTPS server verifications
   ssl.certificate_authorities: ["/etc/ssl/certs/logstash-forwarder.crt"]
 ...

Save and exit.
Important: the Filebeat configuration file is in YAML format, which means indentation matters! Be sure to use the same number of spaces as in these instructions.
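If you want to sanity-check the YAML before restarting, Filebeat 5.x ships a -configtest flag (the binary path below assumes the RPM layout; adjust it if your installation differs):

 $ /usr/share/filebeat/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml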

Restart the Filebeat service

 $ systemctl restart filebeat
 $ cat /var/log/filebeat/filebeat

Firewall Settings

Open the Beats port 5044 on the Logstash server:

 $ firewall-cmd --permanent --zone=public --add-port=5044/tcp
 $ firewall-cmd --reload

Test whether the data is stored normally

On your Elasticsearch server, verify that Elasticsearch is receiving the Filebeat > Logstash data with the following command:

 $ curl -u elastic -XGET 'http://10.28.204.65:9200/filebeat-*/_search?pretty'

Enter your authentication password, and you should see the following output:

 ...
 {
   "_index" : "filebeat-2017.11.1",
   "_type" : "log",
   "_id" : "AV8Zh29HaTuC0RmgtzyM",
   "_score" : 1.0,
   "_source" : {
     "@timestamp" : "2017-11-1T06:16:34.719Z",
     "offset" : 39467692,
     "@version" : "1",
     "beat" : {
       "name" : "204",
       "hostname" : "204",
       "version" : "5.6.3"
     },
     "input_type" : "log",
     "host" : "204",
     "source" : "/var/log/messages",
     "message" : "Nov 11 19:06:37 204 logstash: at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182)",
     "type" : "log",
     "tags" : [ "beats_input_codec_plain_applied" ]
   }
 }
 ...

If the output shows 0 hits, check whether Logstash and Elasticsearch are communicating properly; the startup logs will usually tell you. If you see output like the above, proceed to the next step.
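A quicker check is the _count API, which simply reports how many Filebeat events have been indexed:

 $ curl -u elastic 'http://10.28.204.65:9200/filebeat-*/_count?pretty'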

Connect to Kibana

1. Access Kibana at the following URL:

 http://10.28.204.65:5601/

2. When you log in for the first time, you must map the Filebeat index.

Type the following in the Index name or pattern box:

 filebeat-*

Choose @timestamp and then click Create.

3. You can also create the index pattern from the management UI.
Open in sequence:

 Management >> Index Patterns >> Create Index Pattern

Input:

 filebeat-*

Choose:

 @timestamp

Leave the other defaults and click Create.

4. After creation, click:

 Discover >> filebeat-*

You can now see the system logs of the client 10.28.204.66 on the right.

That completes the installation, configuration, and combined use of Logstash in the ELK Stack. Kibana does not stop here; it has many more powerful features that are worth exploring further.

Elasticsearch analysis-ik Chinese Word Segmentation Plugin Installation

Elasticsearch has many built-in analyzers, but they handle Chinese poorly, so we need to install the third-party IK plugin to meet production requirements.

What is IK Analyzer?

IK Analyzer is an open-source, lightweight Chinese word segmentation toolkit developed in Java. Since version 1.0 was released in December 2006, IK Analyzer has gone through four major versions. It started as a Chinese word segmentation component for the open-source Lucene project, combining dictionary-based segmentation with grammar analysis algorithms. Since version 3.0, IK has grown into a general-purpose Java word segmentation component, independent of Lucene, while still providing an optimized default implementation for Lucene. The 2012 release implemented a simple disambiguation algorithm, marking IK's evolution from plain dictionary-based segmentation toward semantics-aware segmentation.

Prerequisites:

Install and configure the Elasticsearch search engine cluster

1. Install IK

Log in to the ES server and enter the bin directory to start the installation:

 $ cd /usr/local/elasticsearch
 $ bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v5.6.3/elasticsearch-analysis-ik-5.6.3.zip
 -> Downloading https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v5.6.3/elasticsearch-analysis-ik-5.6.3.zip
 [=================================================] 100%
 -> Installed analysis-ik

Installation succeeded.

2. To uninstall, execute the following command

 $ bin/elasticsearch-plugin remove analysis-ik

3. Restart ES service

 $ systemctl restart elasticsearch

4. View the startup log

 $ cat /usr/local/elasticsearch/logs/my-apprenwole.log

During startup, Elasticsearch writes the following log entry when it loads the module, indicating that the analysis-ik Chinese word segmentation plugin is available.

 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded plugin [analysis-ik]
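Before creating any index, you can smoke-test the analyzer directly with the _analyze API (the sample text is arbitrary; any Chinese sentence will do):

 $ curl -u elastic -XPOST 'http://10.28.204.65:9200/_analyze?pretty' -d'
 {"analyzer": "ik_max_word", "text": "中华人民共和国国歌"}'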

5. Create an index named index

 $ curl -u elastic -XPUT http://10.28.204.65:9200/index
 Enter host password for user 'elastic':
 {"acknowledged":true,"shards_acknowledged":true,"index":"index"}

Explanation:

-u specifies the user name; you will then be prompted for the password. The default user name is elastic and the default password is changeme.

6. Create a mapping

 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/_mapping -d'
 {
   "properties": {
     "content": {
       "type": "text",
       "analyzer": "ik_max_word",
       "search_analyzer": "ik_max_word"
     }
   }
 }'

7. Create test data

 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/1 -d'
 {"content": "Yuan Fang and you have followed me successively for more than ten years. I have not given you anything except danger"}'
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/2 -d'
 {"content": "Remember, Yuan Fang always jokes that it is not easy to eat me"}'
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/3 -d'
 {"content": "But now I really want to give all my money to him for a meal"}'
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/4 -d'
 {"content": "Yuan Fang always calls me an adult, but I know that he actually thinks of me as his father"}'
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/5 -d'
 {"content": "But what did my father do for him? I always let him choose the latter between life and death"}'
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/6 -d'
 {"content": "This is the case in Youzhou, Huzhou and Chongzhou. This time, he has not returned. What can I say? What can I say"}'
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/7 -d'
 {"content": "Yuan Fang gave his life for the country, for the country and for the people of Dawn"}'
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/8 -d'
 {"content": "If there is any comfort in my heart at the moment, it is to be proud of Yuanfang"}'
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/9 -d'
 {"content": "If there is anything stronger than sadness in my heart at the moment, it is hatred"}'

8. Query data

 curl -u elastic -XPOST http://10.28.204.65:9200/index/fulltext/_search -d'
 {
   "query": {"match": {"content": "if"}},
   "highlight": {
     "pre_tags": ["<tag1>", "<tag2>"],
     "post_tags": ["</tag1>", "</tag2>"],
     "fields": {
       "content": {}
     }
   }
 }'

9. The data returned by the query

 { "took": 21, "timed_out": false, "_shards": { "total": 5, "successful": 5, "skipped": 0, "failed": 0 }, "hits": { "total": 2, "max_score": 0.43353066, "hits": [ { "_index": "index", "_type": "fulltext", "_id": "8", "_score": 0.43353066, "_source": { "Content": "If I still feel a sense of comfort at the moment, I am proud of Yuanfang." }, "highlight": { "content": [ "<tag1>If</tag1>says that at the moment I still have a little comfort in my heart, it is to be proud of Yuanfang." ] } }, { "_index": "index", "_type": "fulltext", "_id": "9", "_score": 0.43353066, "_source": { "Content": "If there is anything stronger than sadness in my heart at the moment, it is hatred" }, "highlight": { "content": [ "<tag1>If</tag1>says that there is something more powerful than sadness in my heart at the moment, it is hatred" ] } } ] } }

Conclusion:

I recommend running these requests through the Dev Tools console at http://10.28.204.65:5601; Kibana's visual management is genuinely convenient.

References:

 https://github.com/medcl/elasticsearch-analysis-ik

Elasticsearch Kibana cluster installation and configuration of the X-Pack extension pack

X-pack overview:

X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, and graph capabilities into one easy-to-install package. X-Pack works seamlessly with Elasticsearch and Kibana, and you can enable or disable the individual features you want.

Before Elasticsearch 5.0, you had to install the Shield, Watcher, and Marvel plugins separately to get all of the functionality now in X-Pack. With X-Pack you no longer need to worry about having the correct version of each plugin; just install the X-Pack matching the Elasticsearch and Kibana versions you are running.

X-pack installation is relatively simple. I will introduce it in detail below.

Prerequisites:

Install and configure the Elasticsearch search engine cluster
ElasticSearch Kibana Binary Installation Configuration

Note: you must run the X-Pack version that matches your Elasticsearch and Kibana versions.

Explanation:

Since my deployment is a distributed cluster, X-Pack must be installed on every Kibana and Elasticsearch server in the cluster. In standalone mode you only need to install it on a single server. I am installing over the network.

1. Install X-Pack on Kibana server

The installation is fully automatic and needs no configuration. Installing X-Pack into Kibana takes quite a while, so be patient.

Directly enter the kibana installation directory and execute the following commands:

 $ cd /usr/local/kibana
 $ bin/kibana-plugin install x-pack
 Found previous install attempt. Deleting...
 Attempting to transfer from x-pack
 Attempting to transfer from https://artifacts.elastic.co/downloads/kibana-plugins/x-pack/x-pack-5.6.3.zip
 Transferring 119526941 bytes....................
 Transfer complete
 Retrieving metadata from plugin archive
 Extracting plugin archive
 Extraction complete
 Optimizing and caching browser bundles...
 Plugin installation complete

Installation is complete.

2. If uninstalling, execute the following command

 $ bin/kibana-plugin remove x-pack

3. Install X-Pack on the Elasticsearch server

The installation completes automatically without any configuration and is very fast. Enter the elasticsearch installation directory and execute the following commands to start the installation:

 $ cd /usr/local/elasticsearch
 $ bin/elasticsearch-plugin install x-pack
 -> Downloading x-pack from elastic
 [=================================================] 100%
 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 @     WARNING: plugin requires additional permissions     @
 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 * java.io.FilePermission \\.\pipe\* read,write
 * java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
 * java.lang.RuntimePermission getClassLoader
 * java.lang.RuntimePermission setContextClassLoader
 * java.lang.RuntimePermission setFactory
 * java.security.SecurityPermission createPolicy.JavaPolicy
 * java.security.SecurityPermission getPolicy
 * java.security.SecurityPermission putProviderProperty.BC
 * java.security.SecurityPermission setPolicy
 * java.util.PropertyPermission * read,write
 * java.util.PropertyPermission sun.nio.ch.bugLevel write
 * javax.net.ssl.SSLPermission setHostnameVerifier
 See https://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
 for descriptions of what these permissions allow and the associated risks.

 Continue with installation? [y/N] y
 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 @        WARNING: plugin forks a native controller        @
 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 This plugin launches a native controller that is not subject to the Java
 security manager nor to system call filters.

 Continue with installation? [y/N] y
 -> Installed x-pack

Note: you will be asked to enter y twice to confirm the installation.

Finally, the installation is completed.

4. If uninstalling, execute the following command

 $ bin/elasticsearch-plugin remove x-pack

5. Restart the service

 $ systemctl restart elasticsearch
 $ systemctl restart kibana

After restarting, you can manage the stack at http://10.28.204.65:5601. The default login account is elastic and the password is changeme.

After login, click Monitoring to view Elasticsearch and Kibana cluster information, and drill into the monitoring charts from there. You can also manage accounts; I won't include screenshots here.

6. License management

The initial X-Pack installation comes with a 30-day trial license that enables all X-Pack features. After the trial period these features are disabled, so you need to apply for a free license (or purchase an enterprise license).

7. Update License

After you apply for the license, it is emailed to you as a downloadable JSON file. The license is installed through the API; upload the file to the server's /tmp directory.

Then execute the following command:

 $ curl -XPUT -u elastic 'http://10.28.204.65:9200/_xpack/license?acknowledge=true' -H "Content-Type: application/json" -d @/tmp/license.json

After a successful update, the license is valid for one year, as explained during the subscription process.
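You can confirm the active license at any time; the _xpack/license endpoint reports the license type, status, and expiry date:

 $ curl -u elastic 'http://10.28.204.65:9200/_xpack/license'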

With that done, you can enjoy using it.

ElasticSearch Kibana Binary Installation Configuration

Introduction:

Kibana is an open-source analytics and data visualization platform. You can use Kibana to search, visualize, and analyze data efficiently, and to interact with the data stored in the Elasticsearch engine.

Kibana makes large volumes of data easy to work with. Its browser-based interface lets you quickly create and share dynamic dashboards and monitor Elasticsearch queries and changes in real time.

Prerequisites:

Install and configure the Elasticsearch search engine cluster

Requirement: the Kibana version must match the Elasticsearch version.

1. Create users and groups

 $ groupadd kibana
 $ useradd -g kibana kibana

2. Install Kibana

Download address: https://www.elastic.co/products

Extract it and create a symlink:

 $ cd /tmp
 $ sha1sum kibana-5.6.3-linux-x86_64.tar.gz
 $ tar zxvf kibana-5.6.3-linux-x86_64.tar.gz
 $ mv kibana-5.6.3-linux-x86_64 /usr/local
 $ cd /usr/local
 $ ln -s kibana-5.6.3-linux-x86_64 kibana

3. Configure kibana

The contents after configuration are as follows:

 $ egrep -v "^$|^#|^;" /usr/local/kibana/config/kibana.yml
 server.port: 5601
 server.host: "10.28.204.65"
 server.name: "10.28.204.65"
 elasticsearch.url: "http://10.28.204.65:9200"
 elasticsearch.preserveHost: true
 elasticsearch.pingTimeout: 1500
 elasticsearch.requestTimeout: 30000
 pid.file: /usr/local/kibana/kibana.pid

For more configuration options, see the official Configuring Kibana documentation.

4. Grant permissions on the Kibana directories

 $ cd /usr/local
 $ chown -R kibana.kibana kibana*

5. Start Kibana

 $ cd /usr/local/kibana/bin
 $ ./kibana
 log [02:01:19.285] [info][status][plugin:kibana@5.6.3] Status changed from uninitialized to green - Ready
 log [02:01:19.819] [info][status][plugin:elasticsearch@5.6.3] Status changed from uninitialized to yellow - Waiting for Elasticsearch
 log [02:01:20.078] [info][status][plugin:console@5.6.3] Status changed from uninitialized to green - Ready
 log [02:01:20.288] [info][status][plugin:metrics@5.6.3] Status changed from uninitialized to green - Ready
 log [02:01:21.263] [info][status][plugin:timelion@5.6.3] Status changed from uninitialized to green - Ready
 log [02:01:21.306] [info][listening] Server running at http://10.28.204.65:5601
 log [02:01:21.315] [info][status][ui settings] Status changed from uninitialized to yellow - Elasticsearch plugin is yellow
 log [02:01:25.304] [info][status][plugin:elasticsearch@5.6.3] Status changed from yellow to yellow - No existing Kibana index found
 log [02:01:29.992] [info][status][plugin:elasticsearch@5.6.3] Status changed from yellow to green - Kibana index ready
 log [02:01:30.008] [info][status][ui settings] Status changed from yellow to green - Ready

The startup log shows that Kibana has started successfully.

You can now access Kibana at http://10.28.204.65:5601 and configure it.

6. Create the systemd unit file for Kibana

We need to create a unit service file to facilitate management:

 $ vim /usr/lib/systemd/system/kibana.service

Add the following:

 [Unit]
 Description=Kibana

 [Service]
 Type=simple
 User=kibana
 Group=kibana
 # Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
 # Prefixing the path with '-' makes it try to load, but if the file doesn't
 # exist, it continues onward.
 EnvironmentFile=-/etc/default/kibana
 EnvironmentFile=-/etc/sysconfig/kibana
 ExecStart=/usr/local/kibana/bin/kibana "-c /usr/local/kibana/config/kibana.yml"
 Restart=always
 WorkingDirectory=/

 [Install]
 WantedBy=multi-user.target

7. Start Kibana and enable it at boot

 $ systemctl restart kibana
 $ systemctl enable kibana

8. Set firewall

 $ firewall-cmd --permanent --zone=public --add-port=5601/tcp
 $ firewall-cmd --reload

So far, Kibana has been installed and can be used normally.

Install and configure the Elasticsearch search engine cluster

Elasticsearch is a highly scalable open-source full-text search and analytics engine. It lets you store, search, and analyze large volumes of data quickly and in near real time. It is often used as the underlying engine for applications with complex search features and requirements.

Like Tomcat, Elasticsearch works out of the box; there are no complex dependency packages to install.

Prerequisites:

Elasticsearch requires at least Java 8. Installation is covered in the following document, so it is not repeated here:

Linux JAVA JDK JRE environment variable installation and configuration

Cluster deployment environment and equipment configuration:

 Per server: 16 GB RAM / 8 cores / 500 GB disk
 10.28.204.62
 10.28.204.63
 10.28.204.64
 10.28.204.65
 Elasticsearch 5.6.3
 CentOS Linux release 7.4.1708 (Core)
 Kernel: Linux 3.10.0-693.2.2.el7.x86_64

Note: I perform the following installation steps on the 10.28.204.65 server; the other machines are identical. I will call out the parts that are cluster-specific.

1. Create users and groups and set passwords

 $ groupadd es
 $ useradd -g es es
 $ passwd es

2. Install Elasticsearch

Download address: https://www.elastic.co/downloads/elasticsearch (GA release)

Unzip:

 $ cd /tmp
 $ tar zxvf elasticsearch-5.6.3.tar.gz

Move the directory and create a symlink:

 $ mv elasticsearch-5.6.3 /usr/local
 $ cd /usr/local
 $ ln -s elasticsearch-5.6.3 elasticsearch

Set directory user permissions:

 $ chown -R es.es elasticsearch*

3. Configure jvm.options

The default Elasticsearch heap is 2 GB, which does not meet our requirements. Change Xms and Xmx in the following file to 8g and keep the other defaults.

 $ vim /usr/local/elasticsearch/config/jvm.options

 ...
 -Xms8g
 -Xmx8g
 ...

Note: it is recommended to allocate half of the machine's physical memory, and no more than 32 GB.

4. Configure elasticsearch.yml

The configured contents are as follows:

 $ egrep -v "(^#|^$)" /usr/local/elasticsearch/config/elasticsearch.yml
 cluster.name: my-apprenwole  # Cluster name, arbitrary. After ES starts, nodes with the same cluster name join the same cluster.
 node.name: renwolenode-1  # Node name, any unique value.
 bootstrap.memory_lock: false  # Do not lock memory.
 network.host: 10.28.204.65  # The local IP address; must be changed on each node.
 http.port: 9200  # The HTTP port; changing it is recommended for security.
 discovery.zen.ping.unicast.hosts: ["10.28.204.62","10.28.204.63","10.28.204.64","10.28.204.65"]  # Initial host list used for discovery when a new node starts; append the port if it is not the default.
 discovery.zen.minimum_master_nodes: 3  # Minimum number of master-eligible nodes that must be visible to form a cluster; with four nodes, (4/2)+1 = 3.
 client.transport.ping_timeout: 120s  # Time to wait for a ping response from a node; the default is 60s.
 discovery.zen.ping_timeout: 120s  # Allows slower elections when nodes are busy or the network is congested (a higher value means fewer spurious failures).
 http.cors.enabled: true  # Enable or disable cross-origin resource sharing, i.e. whether a browser on another origin may send requests to Elasticsearch.
 http.cors.allow-origin: "*"  # No origins are allowed by default. A value wrapped in / is treated as a regular expression, e.g. /https?:\/\/localhost(:[0-9]+)?/. "*" is a valid value but is considered a security risk, since it exposes your Elasticsearch instance to requests from anywhere.

Note: Elasticsearch ships with sensible defaults and requires little configuration; after the few settings above it can be used in production.

For more configuration information, see Elasticsearch modules

Note: the other three machines use the same configuration except for the following parameters:

 node.name
 network.host

5. Memlock Settings

Add the following to the file:

 $ vim /etc/security/limits.conf

 es soft memlock unlimited
 es hard memlock unlimited
 es - nofile 65536

If it is not added, a warning message will be reported during startup:

 Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
 This can result in part of the JVM being swapped out.
 Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
 These can be adjusted by modifying /etc/security/limits.conf, for example:
     # allow user 'es' mlockall
     es soft memlock unlimited
     es hard memlock unlimited

The above error messages also provide solutions.
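After adding the limits, you can verify that they apply to the es user on a fresh login (memlock should report unlimited and open files 65536):

 $ su - es -c 'ulimit -l -n'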

6. Server Memory Settings

 $ vim /etc/sysctl.conf

 vm.max_map_count=262144

 $ sysctl -p

7. Start Elasticsearch

Because ES by default refuses to start as root (for security reasons), switch to the es account to start it:

 [root@102820465 ~]# su es
 [es@102820465 ~]$ cd /usr/local/elasticsearch/bin
 [es@102820465 bin]$ ./elasticsearch
 [INFO ][o.e.n.Node ] [renwolenode-1] initializing ...
 [INFO ][o.e.e.NodeEnvironment ] [renwolenode-1] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [4021.3mb], net total_space [15.9gb], spins? [unknown], types [rootfs]
 [INFO ][o.e.e.NodeEnvironment ] [renwolenode-1] heap size [7.9gb], compressed ordinary object pointers [true]
 [INFO ][o.e.n.Node ] [renwolenode-1] node name [renwolenode-1], node ID [vkixu3LZTPq82SAWWXyNcg]
 [INFO ][o.e.n.Node ] [renwolenode-1] version[5.6.3], pid[21425], build[667b497/2017-10-18T19:22:05.189Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]
 [INFO ][o.e.n.Node ] [renwolenode-1] JVM arguments [-Xms8g, -Xmx8g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/local/elasticsearch]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [aggs-matrix-stats]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [ingest-common]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [lang-expression]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [lang-groovy]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [lang-mustache]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [lang-painless]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [parent-join]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [percolator]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [reindex]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [transport-netty3]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [transport-netty4]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] no plugins loaded
 [INFO ][o.e.d.DiscoveryModule ] [renwolenode-1] using discovery type [zen]
 [INFO ][o.e.n.Node ] [renwolenode-1] initialized
 [INFO ][o.e.n.Node ] [renwolenode-1] starting ...
 [INFO ][o.e.t.TransportService ] [renwolenode-1] publish_address {10.28.204.65:9300}, bound_addresses {10.28.204.65:9300}
 [INFO ][o.e.b.BootstrapChecks ] [renwolenode-1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
 [WARN ][o.e.n.Node ] [renwolenode-1] timed out while waiting for initial discovery state - timeout: 30s
 [INFO ][o.e.h.n.Netty4HttpServerTransport] [renwolenode-1] publish_address {10.28.204.65:9200}, bound_addresses {10.28.204.65:9200}
 [INFO ][o.e.n.Node ] [renwolenode-1] started

The node started successfully (status: started). While running in the foreground, the terminal keeps printing Elasticsearch status information.

If the startup fails, a detailed error description will be displayed, which can be solved according to the error report.

To exit, press Ctrl+C; Elasticsearch will stop at the same time.

8. Open another terminal and access ES

 $ curl http://10.28.204.65:9200/
 {
   "name" : "renwolenode-1",
   "cluster_name" : "my-apprenwole",
   "cluster_uuid" : "Xf_ZdW0XQum4rycQA40PfQ",
   "version" : {
     "number" : "5.6.3",
     "build_hash" : "667b497",
     "build_date" : "2017-10-18T19:22:05.189Z",
     "build_snapshot" : false,
     "lucene_version" : "6.6.1"
   },
   "tagline" : "You Know, for Search"
 }

ES returns its basic information, indicating that it is working normally.

9. Create systemd unit service file

In practice you cannot manage ES in production by switching accounts and running ./elasticsearch by hand. If the server goes down and ES cannot start automatically on recovery, it creates unnecessary trouble for the operations staff.

Therefore, create a unit file so ES starts at boot:

 $ vim /usr/lib/systemd/system/elasticsearch.service

Add the following:

 [Service]
 Environment=ES_HOME=/usr/local/elasticsearch
 Environment=CONF_DIR=/usr/local/elasticsearch/config
 Environment=DATA_DIR=/usr/local/elasticsearch/data
 Environment=LOG_DIR=/usr/local/elasticsearch/logs
 Environment=PID_DIR=/usr/local/elasticsearch
 EnvironmentFile=-/usr/local/elasticsearch/config
 WorkingDirectory=/usr/local/elasticsearch
 User=es
 Group=es
 ExecStartPre=/usr/local/elasticsearch/bin/elasticsearch-systemd-pre-exec
 ExecStart=/usr/local/elasticsearch/bin/elasticsearch \
   -p ${PID_DIR}/elasticsearch.pid \
   --quiet \
   -Edefault.path.logs=${LOG_DIR} \
   -Edefault.path.data=${DATA_DIR} \
   -Edefault.path.conf=${CONF_DIR}
 # StandardOutput is configured to redirect to journalctl since
 # some error messages may be logged in standard output before
 # elasticsearch logging system is initialized. Elasticsearch
 # stores its logs in /var/log/elasticsearch and does not use
 # journalctl by default. If you also want to enable journalctl
 # logging, you can simply remove the "quiet" option from ExecStart.
 StandardOutput=journal
 StandardError=inherit
 # Specifies the maximum file descriptor number that can be opened by this process
 LimitNOFILE=65536
 # Specifies the maximum number of processes
 LimitNPROC=2048
 # Specifies the maximum size of virtual memory
 LimitAS=infinity
 # Specifies the maximum file size
 LimitFSIZE=infinity
 # Disable timeout logic and wait until process is stopped
 TimeoutStopSec=0
 # SIGTERM signal is used to stop the Java process
 KillSignal=SIGTERM
 # Send the signal only to the JVM rather than its control group
 KillMode=process
 # Java process is never killed
 SendSIGKILL=no
 # When a JVM receives a SIGTERM signal it exits with code 143
 SuccessExitStatus=143

 [Install]
 WantedBy=multi-user.target

 # Built for distribution-5.6.3 (distribution)

10. Restart elasticsearch

 $ systemctl restart elasticsearch

Note: after a restart ES does not serve requests immediately; startup takes about a minute. Check whether ports 9200 and 9300 are listening with ss -ntlp. Once they are up, you can check the cluster status.
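A concrete check for the two ports mentioned above:

 $ ss -ntlp | grep -E '9200|9300'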

11. Set Firewalld firewall

 $ firewall-cmd --permanent --add-port={9200/tcp,9300/tcp}
 $ firewall-cmd --reload
 $ firewall-cmd --list-all

12. View cluster status

Request the following URL from any node in the cluster to get the cluster health status:

 $ curl http://10.28.204.65:9200/_cluster/health?pretty
 {
   "cluster_name" : "my-apprenwole",          // cluster name
   "status" : "green",                        // cluster status: green = healthy, yellow = degraded, red = unhealthy
   "timed_out" : false,
   "number_of_nodes" : 4,                     // number of nodes
   "number_of_data_nodes" : 4,                // number of data nodes
   "active_primary_shards" : 6,               // total primary shards
   "active_shards" : 22,                      // total shards across all indexes in the cluster
   "relocating_shards" : 0,                   // shards currently being relocated
   "initializing_shards" : 0,                 // shards currently initializing
   "unassigned_shards" : 0,                   // shards not assigned to any node
   "delayed_unassigned_shards" : 0,
   "number_of_pending_tasks" : 0,
   "number_of_in_flight_fetch" : 0,
   "task_max_waiting_in_queue_millis" : 0,
   "active_shards_percent_as_number" : 100.0  // percentage of active shards
 }

We are running 4 ES instances, and the cluster reports data for all of them, indicating that the cluster is running normally.
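Another quick view is the _cat API, which lists every node in the cluster and marks the elected master:

 $ curl 'http://10.28.204.65:9200/_cat/nodes?v'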

The ES cluster is now installed. This article is original and can be used directly in production. ES has many plugins; later I will write related documents on Kibana, Logstash, and X-Pack, which are official plugins and quite practical.

Using an Nginx reverse proxy to implement Kibana login authentication

Since Kibana 5.5, built-in authentication is no longer provided, which means exposing the page directly for management is unsafe. The official alternative is X-Pack authentication, but it is time-limited; X-Pack is, after all, a commercial product.

Below, I will show how to use an Nginx reverse proxy to implement authentication for Kibana.

Prerequisites:

Centos 7 source code compilation and installation Nginx

1. Install the Apache httpd password generation tool

 $ yum install httpd-tools -y

2. Generate Kibana authentication password

 $ mkdir -p /usr/local/nginx/conf/passwd
 $ htpasswd -c -b /usr/local/nginx/conf/passwd/kibana.passwd Userrenwolecom GN5SKorJ
 Adding password for user Userrenwolecom

3. Configure Nginx reverse proxy

Add the following content to the Nginx configuration file (or create a new configuration file to include it):

 $ vim /usr/local/nginx/conf/nginx.conf

 server {
     listen 10.28.204.65:5601;
     auth_basic "Restricted Access";
     auth_basic_user_file /usr/local/nginx/conf/passwd/kibana.passwd;
     location / {
         proxy_pass http://localhost:5601;
         proxy_http_version 1.1;
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection 'upgrade';
         proxy_set_header Host $host;
         proxy_cache_bypass $http_upgrade;
     }
 }

4. Configure Kibana

Bind Kibana to localhost only, so that Nginx (listening on 10.28.204.65:5601) does not collide with Kibana on the same address, and all external access has to pass through the authenticated proxy:

 $ vim /usr/local/kibana/config/kibana.yml

 server.host: "localhost"

5. Restart Kibana and Nginx services to make the configuration effective

 $ systemctl restart kibana.service
 $ systemctl restart nginx.service

Now browse to http://10.28.204.65:5601/. You will see an authentication prompt; enter the user name and password generated above to log in.
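You can also verify the proxy from the command line: without credentials Nginx should return 401, and with the user created above it should pass the request through to Kibana:

 $ curl -I http://10.28.204.65:5601/
 $ curl -I -u Userrenwolecom:GN5SKorJ http://10.28.204.65:5601/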