Monthly archive: October 2017

Elasticsearch analysis-ik Chinese Word Segmentation Plugin Installation

Elasticsearch ships with many built-in analyzers, but they handle Chinese text poorly, so we need to install the third-party IK plugin to meet production requirements.

What is IK Analyzer?

IK Analyzer is an open source, lightweight Chinese word segmentation toolkit developed in Java. Since the release of version 1.0 in December 2006, IK Analyzer has gone through four major versions. Initially it was a Chinese word segmentation component for the open source project Lucene, combining dictionary-based segmentation with grammar analysis algorithms. Since version 3.0, IK has grown into a general-purpose Java word segmentation component, independent of the Lucene project, while still providing an optimized default implementation for Lucene. The 2012 release implemented a simple algorithm for resolving segmentation ambiguity, marking IK's evolution from plain dictionary-based segmentation toward simulated semantic segmentation.

Prerequisites:

Install and configure the Elasticsearch search engine cluster

1. Install IK

Log in to the ES server, enter the installation directory, and start the installation:

 $ cd /usr/local/elasticsearch
 $ bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v5.6.3/elasticsearch-analysis-ik-5.6.3.zip
 -> Downloading https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v5.6.3/elasticsearch-analysis-ik-5.6.3.zip
 [=================================================] 100%
 -> Installed analysis-ik

Installation succeeded.

2. To uninstall, execute the following command

 $ bin/elasticsearch-plugin remove analysis-ik

3. Restart ES service

 $ systemctl restart elasticsearch

4. View the startup log

 $ cat /usr/local/elasticsearch/logs/my-apprenwole.log

The following log record is generated when Elasticsearch loads the module during startup, indicating that the analysis-ik Chinese word segmentation plugin is available.

 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded plugin [analysis-ik]

5. Create an index named index

 $ curl -u elastic -XPUT http://10.28.204.65:9200/index
 Enter host password for user 'elastic':
 {"acknowledged":true,"shards_acknowledged":true,"index":"index"}

Explanation:

-u specifies the user name.
After running the command, you will be prompted for the password. The default user name is elastic and the default password is changeme.
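If you prefer to skip the interactive prompt, curl also accepts the password inline (shown here with the default credentials; substitute your own):

 $ curl -u elastic:changeme -XPUT http://10.28.204.65:9200/index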

6. Create a mapping

 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/_mapping -d'
 {
     "properties": {
         "content": {
             "type": "text",
             "analyzer": "ik_max_word",
             "search_analyzer": "ik_max_word"
         }
     }
 }'
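To see what ik_max_word actually does with a piece of text, you can run it through the _analyze API; a quick sketch, not from the original article (the sample text is arbitrary):

 curl -u elastic -XPOST http://10.28.204.65:9200/index/_analyze -d'
 {
     "analyzer": "ik_max_word",
     "text": "Chinese word segmentation test"
 }'

The response lists the tokens the analyzer produced, which is handy for checking that the plugin is segmenting as expected.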

7. Create test data

 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/1 -d'
 {"content": "Yuan Fang, you have followed me for more than ten years, and I have given you nothing but danger"}
 '
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/2 -d'
 {"content": "Remember, Yuan Fang always joked that it is not easy to earn a meal following me"}
 '
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/3 -d'
 {"content": "But now I really want to give all my money to him for one meal"}
 '
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/4 -d'
 {"content": "Yuan Fang always calls me my lord, but I know that he actually regards me as his father"}
 '
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/5 -d'
 {"content": "But what did this father do for him? I always made him choose the latter between life and death"}
 '
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/6 -d'
 {"content": "It was so in Youzhou, Huzhou and Chongzhou. This time, he has not returned. What can I say? What can I say"}
 '
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/7 -d'
 {"content": "Yuan Fang gave his life for the country and for the common people"}
 '
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/8 -d'
 {"content": "If there is any comfort in my heart at this moment, it is that I am proud of Yuan Fang"}
 '
 curl -u elastic -XPUT http://10.28.204.65:9200/index/fulltext/9 -d'
 {"content": "If there is anything stronger than sadness in my heart at this moment, it is hatred"}
 '

8. Query data

 curl -u elastic -XPOST http://10.28.204.65:9200/index/fulltext/_search -d'
 {
     "query": {"match": {"content": "if"}},
     "highlight": {
         "pre_tags": ["<tag1>", "<tag2>"],
         "post_tags": ["</tag1>", "</tag2>"],
         "fields": {
             "content": {}
         }
     }
 }'

9. The data returned by the query

 {
     "took": 21,
     "timed_out": false,
     "_shards": {
         "total": 5,
         "successful": 5,
         "skipped": 0,
         "failed": 0
     },
     "hits": {
         "total": 2,
         "max_score": 0.43353066,
         "hits": [
             {
                 "_index": "index",
                 "_type": "fulltext",
                 "_id": "8",
                 "_score": 0.43353066,
                 "_source": {
                     "content": "If there is any comfort in my heart at this moment, it is that I am proud of Yuan Fang"
                 },
                 "highlight": {
                     "content": [
                         "<tag1>If</tag1> there is any comfort in my heart at this moment, it is that I am proud of Yuan Fang"
                     ]
                 }
             },
             {
                 "_index": "index",
                 "_type": "fulltext",
                 "_id": "9",
                 "_score": 0.43353066,
                 "_source": {
                     "content": "If there is anything stronger than sadness in my heart at this moment, it is hatred"
                 },
                 "highlight": {
                     "content": [
                         "<tag1>If</tag1> there is anything stronger than sadness in my heart at this moment, it is hatred"
                     ]
                 }
             }
         ]
     }
 }

Conclusion:

I recommend managing ES through the Dev Tools console at http://10.28.204.65:5601; Kibana's visual management is really convenient.

References:

https://github.com/medcl/elasticsearch-analysis-ik

Elasticsearch Kibana Cluster Installation and Configuration of the X-Pack Extension

X-pack overview:

X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, and graph capabilities into one easy-to-install package. X-Pack works seamlessly with Elasticsearch and Kibana, and you can enable or disable the features you want to use.

Before Elasticsearch 5.0, you had to install the Shield, Watcher, and Marvel plugins separately to get all of the functionality now in X-Pack. With X-Pack you no longer need to worry about having the correct version of each plugin; just install the X-Pack that matches the Elasticsearch and Kibana versions you are running.

X-Pack installation is relatively simple. I will walk through it in detail below.

Prerequisites:

Install and configure the Elasticsearch search engine cluster
ElasticSearch Kibana Binary Installation Configuration

Note: You must run the X-Pack version that matches your Elasticsearch and Kibana versions.

Explanation:

Since my deployment is a distributed cluster, X-Pack must be installed on every Kibana and Elasticsearch server. If you run in standalone mode, you only need to install X-Pack on a single server. In addition, I use the online (network) installation method.

1. Install X-Pack on Kibana server

The installation is fully automatic and requires no configuration. Installing X-Pack into Kibana takes a while, so be patient.

Directly enter the kibana installation directory and execute the following commands:

 $ cd /usr/local/kibana
 $ bin/kibana-plugin install x-pack
 Found previous install attempt. Deleting...
 Attempting to transfer from x-pack
 Attempting to transfer from https://artifacts.elastic.co/downloads/kibana-plugins/x-pack/x-pack-5.6.3.zip
 Transferring 119526941 bytes....................
 Transfer complete
 Retrieving metadata from plugin archive
 Extracting plugin archive
 Extraction complete
 Optimizing and caching browser bundles...
 Plugin installation complete

Installation is complete.

2. If uninstalling, execute the following command

 $ bin/kibana-plugin remove x-pack

3. Install X-Pack on the Elasticsearch server

The installation is fully automatic, requires no configuration, and is very fast. Enter the elasticsearch installation directory and run the following command to start the installation:

 $ cd /usr/local/elasticsearch
 $ bin/elasticsearch-plugin install x-pack
 -> Downloading x-pack from elastic
 [=================================================] 100%
 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 @     WARNING: plugin requires additional permissions     @
 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 * java.io.FilePermission \\.\pipe\* read,write
 * java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
 * java.lang.RuntimePermission getClassLoader
 * java.lang.RuntimePermission setContextClassLoader
 * java.lang.RuntimePermission setFactory
 * java.security.SecurityPermission createPolicy.JavaPolicy
 * java.security.SecurityPermission getPolicy
 * java.security.SecurityPermission putProviderProperty.BC
 * java.security.SecurityPermission setPolicy
 * java.util.PropertyPermission * read,write
 * java.util.PropertyPermission sun.nio.ch.bugLevel write
 * javax.net.ssl.SSLPermission setHostnameVerifier
 See https://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
 for descriptions of what these permissions allow and the associated risks.

 Continue with installation? [y/N] y
 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 @        WARNING: plugin forks a native controller        @
 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 This plugin launches a native controller that is not subject to the Java
 security manager nor to system call filters.

 Continue with installation? [y/N] y
 -> Installed x-pack

Note: You will be asked to enter y twice to confirm the installation.

Finally, the installation is completed.

4. If uninstalling, execute the following command

 $ bin/elasticsearch-plugin remove x-pack

5. Restart the service

 $ systemctl restart elasticsearch
 $ systemctl restart kibana

After restarting, you can manage the stack at http://10.28.204.65:5601. The default login account is elastic and the password is changeme.

After logging in, click Monitoring to view the Elasticsearch and Kibana cluster information, and drill down into the monitoring charts. You can also manage accounts there. I won't include screenshots here.
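Since the default elastic password is publicly known, you will probably want to change it right away. A minimal sketch using the X-Pack change-password API (the new password here is a placeholder):

 $ curl -XPOST -u elastic 'http://10.28.204.65:9200/_xpack/security/user/elastic/_password' -H "Content-Type: application/json" -d '{"password": "YourNewPassword"}'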

6. License management

A fresh X-Pack installation comes with a 30-day trial license that enables all X-Pack features. After the trial period all of these features are disabled, so we need to apply for a free license (or, of course, purchase an enterprise license).

7. Update License

After you apply for a license, an email is sent to you from which you can download the corresponding JSON file. The license is installed through the API as that JSON file; upload it to the server's /tmp directory.

Then execute the following command:

 $ curl -XPUT -u elastic 'http://10.28.204.65:9200/_xpack/license?acknowledge=true' -H "Content-Type: application/json" -d @/tmp/license.json

After a successful update, the free license is valid for one year, as explained when you subscribe.
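To confirm the new license took effect, you can query the license API (a quick check, not part of the original steps):

 $ curl -XGET -u elastic 'http://10.28.204.65:9200/_xpack/license'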

Now you can enjoy using it.

Detailed Usage of the Vim Command to Edit the Hosts File

Open the hosts file with vim:

 $ vim /etc/hosts

Press i "Key, enter editing mode and press" Esc "Key, exit editing mode

Press Backspace "Key, delete the character before the cursor
Press Delete "Key, delete the character after the cursor
Press dd "Key, delete the current line

Press yy "Key, copy the current line
Press p "Key, paste the copied content to the next line

Press :wq ", Save Exit
Press :x ", Save Exit
Press :q! ", forced exit without saving

Up key: move the cursor to the previous line
"Down" key: move the cursor to the next line
"Left" key: move the cursor one character to the left
"Right" key: move the cursor one character to the right

Home "Key, the cursor moves to the beginning of the current line
End "Key, the cursor is always at the end of the current line

PgUp ", page up
PgDn ", page down

Elasticsearch Kibana Binary Installation and Configuration

Introduction:

Kibana is an open source analytics and data visualization platform. You can use Kibana to search, visualize, and analyze data efficiently, and to interact with the data stored in the Elasticsearch engine.

Kibana makes it easy to get a handle on large amounts of data. Its browser-based interface lets you quickly create and share dynamic dashboards and monitor Elasticsearch queries and changes in real time.

Prerequisites:

Install and configure the Elasticsearch search engine cluster

Requirement: the Kibana and Elasticsearch versions must match.

1. Create users and groups

 $ groupadd kibana
 $ useradd -g kibana kibana

2. Install Kibana

Download address: https://www.elastic.co/products

Extract it and create a symbolic link:

 $ cd /tmp
 $ sha1sum kibana-5.6.3-linux-x86_64.tar.gz
 $ tar zxvf kibana-5.6.3-linux-x86_64.tar.gz
 $ mv kibana-5.6.3-linux-x86_64 /usr/local
 $ cd /usr/local
 $ ln -s kibana-5.6.3-linux-x86_64 kibana

3. Configure kibana

The contents after configuration are as follows:

 $ egrep -v "^$|^#|^;" /usr/local/kibana/config/kibana.yml
 server.port: 5601
 server.host: "10.28.204.65"
 server.name: "10.28.204.65"
 elasticsearch.url: "http://10.28.204.65:9200"
 elasticsearch.preserveHost: true
 elasticsearch.pingTimeout: 1500
 elasticsearch.requestTimeout: 30000
 pid.file: /usr/local/kibana/kibana.pid

For more configuration options, see Configuring Kibana.

4. Grant Kibana directory permissions

 $ cd /usr/local
 $ chown -R kibana.kibana kibana*

5. Start Kibana

 $ cd /usr/local/kibana/bin
 $ ./kibana
 log [02:01:19.285] [info][status][plugin:kibana@5.6.3] Status changed from uninitialized to green - Ready
 log [02:01:19.819] [info][status][plugin:elasticsearch@5.6.3] Status changed from uninitialized to yellow - Waiting for Elasticsearch
 log [02:01:20.078] [info][status][plugin:console@5.6.3] Status changed from uninitialized to green - Ready
 log [02:01:20.288] [info][status][plugin:metrics@5.6.3] Status changed from uninitialized to green - Ready
 log [02:01:21.263] [info][status][plugin:timelion@5.6.3] Status changed from uninitialized to green - Ready
 log [02:01:21.306] [info][listening] Server running at http://10.28.204.65:5601
 log [02:01:21.315] [info][status][ui settings] Status changed from uninitialized to yellow - Elasticsearch plugin is yellow
 log [02:01:25.304] [info][status][plugin:elasticsearch@5.6.3] Status changed from yellow to yellow - No existing Kibana index found
 log [02:01:29.992] [info][status][plugin:elasticsearch@5.6.3] Status changed from yellow to green - Kibana index ready
 log [02:01:30.008] [info][status][ui settings] Status changed from yellow to green - Ready

The startup log shows that Kibana started successfully.

Now you can access it at http://10.28.204.65:5601 and configure it as needed.

6. Create a systemd unit file for Kibana

We need to create a unit service file to facilitate management:

 $ vim /usr/lib/systemd/system/kibana.service

Add the following:

 [Unit]
 Description=Kibana

 [Service]
 Type=simple
 User=kibana
 Group=kibana
 # Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
 # Prefixing the path with '-' makes it try to load, but if the file doesn't
 # exist, it continues onward.
 EnvironmentFile=-/usr/local/kibana/config
 EnvironmentFile=-/etc/sysconfig/kibana
 ExecStart=/usr/local/kibana/bin/kibana "-c /usr/local/kibana/config/kibana.yml"
 Restart=always
 WorkingDirectory=/

 [Install]
 WantedBy=multi-user.target
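After creating or editing a unit file, let systemd pick it up before starting the service:

 $ systemctl daemon-reload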

7. Start Kibana and enable it at boot

 $ systemctl restart kibana
 $ systemctl enable kibana

8. Set firewall

 $ firewall-cmd --permanent --zone=public --add-port=5601/tcp
 $ firewall-cmd --reload

So far, Kibana has been installed and can be used normally.

Install and configure the Elasticsearch search engine cluster

Elasticsearch is a highly scalable open source full-text search and analytics engine. It lets you store, search, and analyze large volumes of data quickly and in near real time. It is commonly used as the underlying engine for applications with complex search features and requirements.

Like Tomcat, Elasticsearch works out of the box and does not require installing complex dependencies.

Prerequisites:

Elasticsearch requires at least Java 8. The installation document is linked below, so I will not repeat it here.

Linux JAVA JDK JRE environment variable installation and configuration

Cluster deployment environment and equipment configuration:

 Per node: 16G RAM / 8-core CPU / 500G disk
 10.28.204.62
 10.28.204.63
 10.28.204.64
 10.28.204.65
 Elasticsearch 5.6.3
 CentOS Linux release 7.4.1708 (Core)
 Kernel: Linux 3.10.0-693.2.2.el7.x86_64

Explanation: I perform the following installation steps on the 10.28.204.65 server; the other machines are the same. I will call out the cluster-specific parts.

1. Create users and groups and set passwords

 $ groupadd es
 $ useradd -g es es
 $ passwd es

2. Install Elasticsearch

Download address: https://www.elastic.co/downloads/elasticsearch (GA release)

Unzip:

 $ cd /tmp
 $ tar zxvf elasticsearch-5.6.3.tar.gz

Move the directory and create a symbolic link:

 $ mv elasticsearch-5.6.3 /usr/local
 $ cd /usr/local
 $ ln -s elasticsearch-5.6.3 elasticsearch

Set directory user permissions:

 $ chown -R es.es elasticsearch*

3. Configure jvm.options

The default heap size of Elasticsearch is 2 GB, which is not enough here. Change the Xms and Xmx values in the following file to 8g and leave the other defaults.

 $ vim /usr/local/elasticsearch/config/jvm.options
 ...
 -Xms8g
 -Xmx8g
 ...

Note: it is recommended to allocate half of the machine's physical memory, and no more than 32 GB.

4. Configure elasticsearch.yml

The configured contents are as follows:

 $ egrep -v "(^#|^$)" /usr/local/elasticsearch/config/elasticsearch.yml
 cluster.name: my-apprenwole   # Cluster name, arbitrary. After ES starts, nodes with the same cluster name join the same cluster.
 node.name: renwolenode-1   # Node name, any unique value.
 bootstrap.memory_lock: false   # Disable memory locking.
 network.host: 10.28.204.65   # The local IP address; must be changed on each node.
 http.port: 9200   # The HTTP port; changing it is recommended for security.
 discovery.zen.ping.unicast.hosts: ["10.28.204.62","10.28.204.63","10.28.204.64","10.28.204.65"]   # Initial host list used for discovery when a new node starts. Append the port if it is not the default.
 discovery.zen.minimum_master_nodes: 3   # How many master-eligible nodes must be present in the cluster; for a cluster of three or more nodes, 3 can be used.
 client.transport.ping_timeout: 120s   # Time to wait for a ping response from a node. The default is 60s.
 discovery.zen.ping_timeout: 120s   # Allows stretching the election time when nodes are slow or the network is congested (a higher value means fewer failures).
 http.cors.enabled: true   # Enable or disable cross-origin resource sharing, i.e. whether a browser on another origin may send requests to Elasticsearch.
 http.cors.allow-origin: "*"   # No origins are allowed by default. A value wrapped in / is treated as a regular expression, e.g. /https?:\/\/localhost(:[0-9]+)?/ to support both HTTP and HTTPS. "*" is valid but considered a security risk, because it exposes your Elasticsearch instance to requests from anywhere.

Note: Elasticsearch ships with sensible defaults and requires little configuration; after a few settings it can be used in production.

For more configuration information, see Elasticsearch modules

Note: the other three machines use the same configuration except for the following parameters:

 node.name
 network.host
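For example, on the 10.28.204.62 node the two values might look like this (the node name is hypothetical, following the same naming pattern):

 node.name: renwolenode-2
 network.host: 10.28.204.62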

5. Memlock Settings

Add the following to the file:

 $ vim /etc/security/limits.conf
 es soft memlock unlimited
 es hard memlock unlimited
 es - nofile 65536

If it is not added, a warning message will be reported during startup:

 Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
 This can result in part of the JVM being swapped out.
 Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
 These can be adjusted by modifying /etc/security/limits.conf, for example:
 # allow user 'es' mlockall
 es soft memlock unlimited
 es hard memlock unlimited

The above error messages also provide solutions.

6. Server Memory Settings

 $ vim /etc/sysctl.conf
 vm.max_map_count=262144
 $ sysctl -p

7. Start Elasticsearch

Because ES by default refuses to start as root (for security reasons), switch to the es account to start it:

 [root@102820465 ~]# su es
 [es@102820465 ~]$ cd /usr/local/elasticsearch/bin
 [es@102820465 bin]$ ./elasticsearch
 [INFO ][o.e.n.Node ] [renwolenode-1] initializing ...
 [INFO ][o.e.e.NodeEnvironment ] [renwolenode-1] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [4021.3mb], net total_space [15.9gb], spins? [unknown], types [rootfs]
 [INFO ][o.e.e.NodeEnvironment ] [renwolenode-1] heap size [7.9gb], compressed ordinary object pointers [true]
 [INFO ][o.e.n.Node ] [renwolenode-1] node name [renwolenode-1], node ID [vkixu3LZTPq82SAWWXyNcg]
 [INFO ][o.e.n.Node ] [renwolenode-1] version[5.6.3], pid[21425], build[667b497/2017-10-18T19:22:05.189Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]
 [INFO ][o.e.n.Node ] [renwolenode-1] JVM arguments [-Xms8g, -Xmx8g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/local/elasticsearch]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [aggs-matrix-stats]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [ingest-common]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [lang-expression]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [lang-groovy]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [lang-mustache]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [lang-painless]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [parent-join]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [percolator]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [reindex]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [transport-netty3]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] loaded module [transport-netty4]
 [INFO ][o.e.p.PluginsService ] [renwolenode-1] no plugins loaded
 [INFO ][o.e.d.DiscoveryModule ] [renwolenode-1] using discovery type [zen]
 [INFO ][o.e.n.Node ] [renwolenode-1] initialized
 [INFO ][o.e.n.Node ] [renwolenode-1] starting ...
 [INFO ][o.e.t.TransportService ] [renwolenode-1] publish_address {10.28.204.65:9300}, bound_addresses {10.28.204.65:9300}
 [INFO ][o.e.b.BootstrapChecks ] [renwolenode-1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
 [WARN ][o.e.n.Node ] [renwolenode-1] timed out while waiting for initial discovery state - timeout: 30s
 [INFO ][o.e.h.n.Netty4HttpServerTransport] [renwolenode-1] publish_address {10.28.204.65:9200}, bound_addresses {10.28.204.65:9200}
 [INFO ][o.e.n.Node ] [renwolenode-1] started

The node started successfully (status: started). After startup, the current terminal keeps displaying Elasticsearch status information.

If startup fails, a detailed error description is printed, which you can use to resolve the problem.

To exit, press Ctrl+C; Elasticsearch will stop at the same time.

8. Open a new terminal and access ES

 $ curl http://10.28.204.65:9200/
 {
   "name" : "renwolenode-1",
   "cluster_name" : "my-apprenwole",
   "cluster_uuid" : "Xf_ZdW0XQum4rycQA40PfQ",
   "version" : {
     "number" : "5.6.3",
     "build_hash" : "667b497",
     "build_date" : "2017-10-18T19:22:05.189Z",
     "build_snapshot" : false,
     "lucene_version" : "6.6.1"
   },
   "tagline" : "You Know, for Search"
 }

ES returns its basic information, indicating that it is working normally.

9. Create systemd unit service file

In practice, when managing ES in production you cannot keep switching accounts and starting it by hand with ./elasticsearch. If the server goes down and ES cannot start automatically on recovery, it causes unnecessary trouble for the operations staff.

Therefore, create a systemd unit file so ES starts at boot:

 $ vim /usr/lib/systemd/system/elasticsearch.service

Add the following:

 [Service]
 Environment=ES_HOME=/usr/local/elasticsearch
 Environment=CONF_DIR=/usr/local/elasticsearch/config
 Environment=DATA_DIR=/usr/local/elasticsearch/data
 Environment=LOG_DIR=/usr/local/elasticsearch/logs
 Environment=PID_DIR=/usr/local/elasticsearch
 EnvironmentFile=-/usr/local/elasticsearch/config

 WorkingDirectory=/usr/local/elasticsearch

 User=es
 Group=es

 ExecStartPre=/usr/local/elasticsearch/bin/elasticsearch-systemd-pre-exec

 ExecStart=/usr/local/elasticsearch/bin/elasticsearch \
   -p ${PID_DIR}/elasticsearch.pid \
   --quiet \
   -Edefault.path.logs=${LOG_DIR} \
   -Edefault.path.data=${DATA_DIR} \
   -Edefault.path.conf=${CONF_DIR}

 # StandardOutput is configured to redirect to journalctl since
 # some error messages may be logged in standard output before
 # elasticsearch logging system is initialized. Elasticsearch
 # stores its logs in /var/log/elasticsearch and does not use
 # journalctl by default. If you also want to enable journalctl
 # logging, you can simply remove the "quiet" option from ExecStart.
 StandardOutput=journal
 StandardError=inherit

 # Specifies the maximum file descriptor number that can be opened by this process
 LimitNOFILE=65536

 # Specifies the maximum number of processes
 LimitNPROC=2048

 # Specifies the maximum size of virtual memory
 LimitAS=infinity

 # Specifies the maximum file size
 LimitFSIZE=infinity

 # Disable timeout logic and wait until process is stopped
 TimeoutStopSec=0

 # SIGTERM signal is used to stop the Java process
 KillSignal=SIGTERM

 # Send the signal only to the JVM rather than its control group
 KillMode=process

 # Java process is never killed
 SendSIGKILL=no

 # When a JVM receives a SIGTERM signal it exits with code 143
 SuccessExitStatus=143

 [Install]
 WantedBy=multi-user.target

 # Built for distribution-5.6.3 (distribution)
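As with any new unit file, reload systemd before the first start:

 $ systemctl daemon-reload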

10. Restart elasticsearch

 $ systemctl restart elasticsearch

Note: after restarting, ES does not become available immediately; startup takes about a minute. You can check whether ports 9200 and 9300 are listening with ss -ntlp. Once they are up, you can check the cluster status.
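For example (the grep pattern simply filters for the two ES ports):

 $ ss -ntlp | grep -E '9200|9300'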

11. Set Firewalld firewall

 $ firewall-cmd --permanent --add-port={9200/tcp,9300/tcp}
 $ firewall-cmd --reload
 $ firewall-cmd --list-all

12. View cluster status

Request the following URL on any cluster node to get the cluster health information:

 $ curl http://10.28.204.65:9200/_cluster/health?pretty
 {
   "cluster_name" : "my-apprenwole",          // Cluster name
   "status" : "green",                        // Cluster status: green = healthy, yellow = sub-healthy, red = unhealthy
   "timed_out" : false,
   "number_of_nodes" : 4,                     // Number of nodes
   "number_of_data_nodes" : 4,                // Number of data nodes
   "active_primary_shards" : 6,               // Total number of primary shards
   "active_shards" : 22,                      // Total number of shards across all indexes in the cluster
   "relocating_shards" : 0,                   // Number of shards being relocated
   "initializing_shards" : 0,                 // Number of shards being initialized
   "unassigned_shards" : 0,                   // Number of shards not yet assigned to a node
   "delayed_unassigned_shards" : 0,
   "number_of_pending_tasks" : 0,
   "number_of_in_flight_fetch" : 0,
   "task_max_waiting_in_queue_millis" : 0,
   "active_shards_percent_as_number" : 100.0  // Percentage of active shards
 }

We run 4 ES instances, and the cluster reports the corresponding figures, indicating it is running normally.
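For a per-node view, the _cat APIs are also handy; a quick sketch:

 $ curl http://10.28.204.65:9200/_cat/nodes?v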

The ES cluster is now installed. This article is original and can be used directly in production. ES has many plugins; later I will write related documents for Kibana, Logstash, and X-Pack, which are official components and quite practical.

Disable Transparent Huge Pages (THP)

Transparent Huge Pages (THP) is a Linux memory management feature that reduces Translation Lookaside Buffer (TLB) overhead on machines with large amounts of memory by using larger memory pages.

However, database workloads usually perform poorly with THP because they tend to have sparse rather than contiguous memory access patterns. You should therefore disable THP on Linux machines to ensure the best performance for Redis, Oracle, MariaDB, MongoDB, and other databases.

Transparent Huge Pages is an optimization introduced in CentOS/RedHat 6.0. Starting with CentOS 7 the feature is enabled by default to reduce overhead on systems with large amounts of memory. However, because of the way some databases use memory, the feature often does more harm than good, since their memory access is rarely contiguous.

The following describes how to disable Transparent Huge Pages on RedHat/CentOS 6/7. For other systems, consult the vendor's documentation.

Production environment:

 $ hostnamectl
 ...
 Operating System: CentOS Linux 7 (Core)
 CPE OS Name: cpe:/o:centos:centos:7.4.1708
 Kernel: Linux 3.10.0-693.2.2.el7.x86_64
 Architecture: x86-64

Check the THP status first:

 $ cat /sys/kernel/mm/transparent_hugepage/enabled
 [always] madvise never
 $ cat /sys/kernel/mm/transparent_hugepage/defrag
 [always] madvise never

The status shows that THP is enabled.
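You can switch THP off immediately for the current boot by writing to the same sysfs files (this does not survive a reboot, which is why the init script below is needed):

 $ echo never > /sys/kernel/mm/transparent_hugepage/enabled
 $ echo never > /sys/kernel/mm/transparent_hugepage/defrag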

Create script:

 $ vim /etc/init.d/disable-transparent-hugepages

Add the following:

 #!/bin/bash
 ### BEGIN INIT INFO
 # Provides:          disable-transparent-hugepages
 # Required-Start:    $local_fs
 # Required-Stop:
 # X-Start-Before:    mongod mongodb-mms-automation-agent
 # Default-Start:     2 3 4 5
 # Default-Stop:      0 1 6
 # Short-Description: Disable Linux transparent huge pages
 # Description:       Disable Linux transparent huge pages, to improve
 #                    database performance.
 ### END INIT INFO

 case $1 in
 start)
   if [ -d /sys/kernel/mm/transparent_hugepage ]; then
     thp_path=/sys/kernel/mm/transparent_hugepage
   elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
     thp_path=/sys/kernel/mm/redhat_transparent_hugepage
   else
     return 0
   fi

   echo 'never' > ${thp_path}/enabled
   echo 'never' > ${thp_path}/defrag

   re='^[0-1]+$'
   if [[ $(cat ${thp_path}/khugepaged/defrag) =~ $re ]]
   then
     # RHEL 7
     echo 0 > ${thp_path}/khugepaged/defrag
   else
     # RHEL 6
     echo 'no' > ${thp_path}/khugepaged/defrag
   fi

   unset re
   unset thp_path
   ;;
 esac

Save and exit!

Give the file executable permission. The command is as follows:

 $ chmod 755 /etc/init.d/disable-transparent-hugepages

Enable it at boot, start it, and reboot the system:

 $ systemctl enable disable-transparent-hugepages
 $ systemctl start disable-transparent-hugepages
 $ sudo reboot

To view the THP status again:

 $ cat /sys/kernel/mm/transparent_hugepage/enabled
 always madvise [never]
 $ cat /sys/kernel/mm/transparent_hugepage/defrag
 always madvise [never]

The status shows that THP is disabled.

With that, the goal of disabling Transparent Huge Pages has been achieved.

Note: this tutorial does not apply to Debian/Ubuntu or to CentOS/RedHat 5 and earlier, for the reasons explained above.

Nginx Reverse Proxy to Implement Kibana Login Authentication

Since Kibana 5.5, login authentication is no longer supported out of the box; in other words, exposing the management page directly is unsafe. X-Pack does provide official authentication, but it is time-limited; after all, X-Pack is a commercial product.

Below I will show how to use an Nginx reverse proxy to add login authentication to Kibana.

Prerequisites:

CentOS 7 Nginx source compilation and installation

1. Install the Apache httpd password generation tool

 $ yum install httpd-tools -y

2. Generate Kibana authentication password

 $ mkdir -p /usr/local/nginx/conf/passwd
 $ htpasswd -c -b /usr/local/nginx/conf/passwd/kibana.passwd Userrenwolecom GN5SKorJ
 Adding password for user Userrenwolecom

3. Configure Nginx reverse proxy

Add the following content to the Nginx configuration file (or create a new configuration file to include it):

 $ vim /usr/local/nginx/conf/nginx.conf
 server {
     listen 10.28.204.65:5601;
     auth_basic "Restricted Access";
     auth_basic_user_file /usr/local/nginx/conf/passwd/kibana.passwd;
     location / {
         proxy_pass http://10.28.204.65:5601;
         proxy_http_version 1.1;
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection 'upgrade';
         proxy_set_header Host $host;
         proxy_cache_bypass $http_upgrade;
     }
 }
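Before restarting, it is worth validating the configuration syntax (assuming the source-compiled layout used in this series):

 $ /usr/local/nginx/sbin/nginx -t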

4. Configure Kibana

Uncomment the following line:

 $ vim /usr/local/kibana/config/kibana.yml
 server.host: "10.28.204.65"

5. Restart Kibana and Nginx services to make the configuration effective

 $ systemctl restart kibana.service
 $ systemctl restart nginx.service

Now open http://103.28.204.65:5601/ in a browser; an authentication pop-up appears, and you can log in with the user name and password generated above.

Zabbix monitors Redis database performance

Note: All the following operations are performed on the Zabbix Agent client.

Deployment environment:

 OS: CentOS Linux release 7.4.1708 (Core) x64
 Zabbix Server: 3.4
 Redis Server: 4.0

Prerequisites:

Linux CentOS 7 Redis source compilation, installation and configuration

1. Modify the hosts file

Add the following at the end of the file:

 $ vim /etc/hosts
 10.28.204.65 s102820465

2. Install Python dependency package

 $ yum -y install python-pip
 $ pip install argparse
 $ pip install redis

3. Download the Redis monitoring template provided for Zabbix

Transfer the local compressed package to the tmp directory and decompress it:

https://github.com/adubkov/zbx_redis_template

 $ cd /tmp
 $ unzip zbx_redis_template-master.zip
 $ cd zbx_redis_template-master

Copy the following 2 configuration files to the relevant directory:

 $ cp zbx_redis_stats.py /usr/local/zabbix/bin
 $ cp zbx_redis.conf /usr/local/zabbix/etc/zabbix_agentd.conf.d/

Note: apart from the two files above, the rest can be ignored, because the template provides two monitoring schemes for Redis: node.js and Python. This tutorial uses the latter.

4. Configure zbx_redis_stats.py

Set the following parameters in the file to the host IP and port of the Zabbix Server:

 $ cd /usr/local/zabbix/bin
 $ vim zbx_redis_stats.py
 ...
 zabbix_host = '10.28.204.62'   # Zabbix Server IP
 zabbix_port = 10051            # Zabbix Server Port
 ...

Give the file executable permissions:

 $ chmod +x zbx_redis_stats.py

5. Configure zbx_redis.conf

 $ cd /usr/local/zabbix/etc/zabbix_agentd.conf.d/

Modify the file as follows:

 $ vim zbx_redis.conf
 UserParameter=redis[*],/usr/local/zabbix/bin/zbx_redis_stats.py -p 6379 -a RenwoleQxl5qpKHrh $1 $2 $3

6. Test whether zbx_redis_stats.py can connect to the Redis database

 $ cd /usr/local/zabbix/bin
 $ ./zbx_redis_stats.py -h 127.0.0.1 -p 6379 -a RenwoleQxl5qpKHrh
 usage: zbx_redis_stats.py [-h] [-p REDIS_PORT] [-a REDIS_PASS]
                           [redis_hostname] [metric] [db]

 Zabbix Redis status script

 positional arguments:
   redis_hostname
   metric
   db

 optional arguments:
   -h, --help            show this help message and exit
   -p REDIS_PORT, --port REDIS_PORT
                         Redis server port
   -a REDIS_PASS, --auth REDIS_PASS
                         Redis server pass

The above information indicates that the connection is normal.

Parameter description:

-h Redis bind address
-p Redis port
-a Redis password

7. Test whether data is obtained

 $ ./zbx_redis_stats.py -p 6379 -a RenwoleQxl5qpKHrh s102820465 used_cpu_user_children none
 0.71

It returns 0.71. The value is not fixed; if data is returned, the script is working normally.

Parameter description:

This is where the $1 $2 $3 in the zbx_redis.conf file come into play:

$1 corresponds to the host s102820465
$2 corresponds to the metric used_cpu_user_children
$3 corresponds to the db argument none
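If zabbix-get is installed on the Zabbix server, you can also exercise the item key end to end; the key arguments mirror $1 $2 $3 (a sketch, assuming the agent listens on 10.28.204.65):

 $ zabbix_get -s 10.28.204.65 -k 'redis[s102820465,used_cpu_user_children,none]'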

Finally, import the zbx_redis_templates.xml template into the Zabbix Server UI and link it to the host to be monitored.

If no data appears in the Zabbix UI, run zbx_redis_stats.py directly on the Zabbix Agent client; if the script is misconfigured, it will report an error that you can then act on.

Conclusion:

In fact, there are plenty of templates for monitoring Redis status, and they are all much the same; choose one that suits you. This template has many monitoring items, so you can use it to learn more about Redis performance metrics.