
Let's Encrypt SSL certificate renewal failure: 'ascii' codec can't encode characters

Today, while reviewing the server's SSL certificates, I found that the Let's Encrypt certificate was about to expire. Checking the crontab scheduled-task log showed the renewal job executing normally. For example:

 $ cat /var/log/cron
 ...
 CROND[31471]: (root) CMD ( /usr/bin/certbot renew --quiet && /bin/systemctl restart nginx )
 CROND[31470]: (root) MAIL (mailed 375 bytes of output but got status 0x004b#012)
 CROND[31482]: (root) CMD (run-parts /etc/cron.hourly)
 ...

Strangely, the certificate was not renewed normally. Why? Later, the certificate was updated manually:

 $ /usr/bin/certbot renew --quiet
 Attempting to renew cert from /etc/letsencrypt/renewal/renwole.com.conf produced an unexpected error:
 'ascii' codec can't encode characters in position 247-248: ordinal not in range(128). Skipping.
 All renewal attempts failed. The following certs could not be renewed:
   /etc/letsencrypt/live/renwole.com.conf/fullchain.pem (failure)
 1 renew failure(s), 0 parse failure(s)

The renewal failed with the message that the 'ascii' codec cannot encode certain characters.

After some investigation, it turned out that a developer had changed the website's root directory, so Let's Encrypt could no longer find the path recorded in its renewal configuration.
PS: Alas, whenever something goes wrong, it lands on operations and maintenance.

Solution

Modify the site root directory in the following configuration file:

 $ vim /etc/letsencrypt/renewal/renwole.com.conf
 ...
 # Options used in the renewal process
 [renewalparams]
 authenticator = webroot
 installer = None
 account = a07a7160ea489g586aeaada1368ce0d6
 [[webroot_map]]
 renwole.com = /apps/data/www/renwolecom
 ...

Update the webroot path on the `renwole.com = ...` line to the directory Nginx actually serves, then save the file.

The certificate was updated again successfully.

Use the following command to view the renewal status:

 $ certbot certificates
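The failure above came down to a webroot mismatch between the renewal config and the live site. A minimal sketch (paths are made-up for the demo) that extracts the webroot from a renewal config and checks it exists before attempting renewal:

```shell
#!/bin/sh
# Sketch: verify the webroot recorded in a Let's Encrypt renewal config
# actually exists before running certbot renew. Paths are hypothetical;
# the real config lives under /etc/letsencrypt/renewal/.
conf=/tmp/renwole.com.conf

# Simulate a renewal config for the demo.
mkdir -p /tmp/demo-webroot
cat > "$conf" <<'EOF'
[[webroot_map]]
renwole.com = /tmp/demo-webroot
EOF

# Extract the webroot path for the domain.
webroot=$(sed -n 's/^renwole\.com = //p' "$conf")
if [ -d "$webroot" ]; then
    echo "webroot ok: $webroot"
else
    echo "webroot missing: $webroot" >&2
fi
```

If the printed path does not match the directory Nginx serves, fix the renewal config first, then run the renewal.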

Nginx creates password authentication to protect the website directory

In production there are many scenarios where access to a website must be authorized, for example database management tools such as phpMyAdmin or MySQL backup panels, or private directories and files that need protection. To achieve this we use Nginx location matching rules, as explained below.

1. Create an htpasswd file

 $ vim /usr/local/nginx/conf/htpasswd

Add the following:

 renwole:xEmWnqjTJipoE

The format of this file is:

username:password

Note: one user/password pair per line. The password is not stored in plain text; it is the string produced by encrypting the password with crypt(3).

2. Password generation

You can use the following website to generate the password hash from your user information:

 //tool.oschina.net/htpasswd
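If you prefer not to paste credentials into a website, the entry can also be generated locally. A sketch using `openssl passwd` with the Apache MD5 (apr1) scheme, which nginx's auth_basic accepts; the username, salt, and password here are made-up examples:

```shell
# Generate an htpasswd entry locally instead of using a web tool.
# The apr1 (Apache MD5) scheme is understood by nginx's auth_basic.
# Username, salt, and password below are examples only.
hash=$(openssl passwd -apr1 -salt x8s2Fg3a secret123)
printf '%s:%s\n' renwole "$hash"
```

Append the printed line to /usr/local/nginx/conf/htpasswd.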

3. Nginx encrypted directory configuration

Add the following content in the appropriate area of the Nginx virtual host configuration file:

If the tools directory is protected:

 location ^~ /tools/ {
     auth_basic            "Restricted Area";
     auth_basic_user_file  conf/htpasswd;
 }

Note: without `^~`, only requests for the directory itself trigger the authentication prompt; files under the directory can still be accessed without verification.

To protect the entire site root:

 location / {
     auth_basic            "Restricted Area";
     auth_basic_user_file  conf/htpasswd;
 }

After adding the directories to be protected, reload the Nginx configuration, otherwise the change will not take effect.

Apache Nginx prohibits directory execution of PHP script files

When building a website, we may need to set permissions on certain directories separately to achieve the desired level of security. The following examples show how to configure a directory under Apache or Nginx so that PHP files in it cannot be executed.

1. Apache configuration

 <Directory /apps/web/renwole/wp-content/uploads>
     php_flag engine off
 </Directory>
 <Directory ~ "^/apps/web/renwole/wp-content/uploads">
     <Files ~ ".php">
         Order allow,deny
         Deny from all
     </Files>
 </Directory>

2. Nginx configuration

 location /wp-content/uploads {
     location ~ .*\.(php)?$ {
         deny all;
     }
 }

Nginx prohibits multiple directories from executing PHP:

 location ~* ^/(css|uploads)/.*\.(php)$ {
     deny all;
 }

After the configuration is complete, reload the configuration file or restart the Apache or Nginx service. From then on, any PHP file accessed through uploads will return 403, greatly improving the security of the web directory.

Redisson: Tomcat Redis Session Manager shared session variables

When deploying load balancing, one Nginx serves as the front-end server and multiple Tomcat instances serve as back-end servers. Nginx distributes requests to the Tomcat servers according to the load policy. By default, a Tomcat session cannot cross servers: if different requests from the same user are distributed to different Tomcat servers, the session variables are lost and the user must log in again. Before explaining Redisson, I will first analyze the following two session-sharing schemes.

Scheme 1: Nginx Native Upstream ip_hash

ip_hash distributes requests by client IP address: requests from the same IP are always forwarded to the same Tomcat server.

1. The Nginx configuration is as follows (see also 《Nginx 1.3 cluster load balancer reverse proxy installation configuration optimization》):

 upstream webservertomcat {
     ip_hash;
     server 10.10.204.63:8023 weight=1 max_fails=2 fail_timeout=2;
     server 10.10.204.64:8023 weight=1 max_fails=2 fail_timeout=2;
     #server 127.0.0.1:8080 backup;
 }
 server {
     listen 80;
     server_name www.myname.com myname.com;
     location / {
         proxy_pass http://webservertomcat;
         proxy_redirect off;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         client_max_body_size 10m;
         client_body_buffer_size 128k;
         proxy_connect_timeout 90;
         proxy_send_timeout 90;
         proxy_read_timeout 90;
         proxy_buffer_size 4k;
         proxy_buffers 4 32k;
         proxy_busy_buffers_size 64k;
         proxy_temp_file_write_size 64k;
         add_header Access-Control-Allow-Origin *;
     }
 }

The above production environment is not recommended for two reasons:

1. Nginx may not be the front-end server. For example, if Squid or a CDN is used as a front-end cache, Nginx actually sees the IP address of the Squid or CDN server, so traffic cannot be split by the real client IP.
2. Some organizations use dynamic virtual IPs or have multiple egress IPs; a user's IP can change between requests, so their requests cannot be pinned to the same Tomcat server.

Scheme 2: Nginx_Upstream_jvm_route

Nginx_upstream_jvm_route is a third-party Nginx extension module that implements session stickiness via the session cookie. If there is no session identifier in the cookie or URL, it falls back to simple round-robin load balancing.

How it works: the user's first request is distributed to a back-end server; that server's ID is appended to the JSESSIONID cookie in the response; when jvm_route later sees that server ID in the session, it routes the request to the corresponding server. Module address: //code.google.com/archive/p/nginx-upstream-jvm-route/

1. Nginx configuration is as follows:

 upstream tomcats_jvm_route {
     server 10.10.204.63:8023 srun_id=tomcat1;
     server 10.10.204.64:8023 srun_id=tomcat2;
     jvm_route $cookie_JSESSIONID|sessionid reverse;
 }

2. Add the following configuration to server.xml of multiple Tomcat servers:

 <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
 <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">

3. After configuration, the server ID will be added at the end of the requested cookie with the name of JSESSIONID, as follows:

 JSESSIONID=33775B80D8E0DB5AEAD61F86D6666C45.tomcat2;
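The routing decision can be sketched in plain shell: jvm_route simply compares the suffix after the last dot of the cookie value with each backend's `srun_id`. This is an illustration of the idea, not the module's actual code:

```shell
# Sketch: how the server id is recovered from a JSESSIONID cookie value.
# The cookie below is the example from step 3.
cookie="33775B80D8E0DB5AEAD61F86D6666C45.tomcat2"

# Everything after the last '.' is the srun_id of the backend
# that created the session.
route="${cookie##*.}"
echo "$route"          # prints: tomcat2
```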

4. This scheme is not recommended in production either, because:

4.1 By Tomcat's design, when a jvmRoute value is added to server.xml, the session ID is suffixed with that value. nginx-upstream-jvm-route uses this to match the session ID in each request to the corresponding server, so every visit goes to the same Tomcat and the session survives across requests. The problem: when that Tomcat server goes down, the load balancer distributes the user to another server, the session changes, and the user must log in again.

Scheme 3: Tomcat Redis Session Manager

Use Redis to share sessions; this is the focus of this section. The two schemes above store the session inside the Tomcat container, so when a request is forwarded from one Tomcat to another, the session becomes invalid because sessions are not shared between Tomcat instances. If a cache database such as Redis stores the sessions instead, they can be shared across Tomcat instances.

The Java framework Redisson implements a Tomcat Session Manager supporting Apache Tomcat 6.x, 7.x, and 8.x. The configuration is as follows:

1. Edit the TOMCAT_BASE/conf/context.xml file node and add the following contents:

 <Manager className="org.redisson.tomcat.RedissonSessionManager"
          configPath="${catalina.base}/redisson-config.json" />

Note: configPath is the path to Redisson's configuration file in JSON or YAML format (see the Redisson configuration documentation).

2. Copy the corresponding two JAR packages into the TOMCAT_BASE/lib directory:

Applicable to JDK 1.8+:

 redisson-all-3.5.0.jar

For Tomcat 6.x:

 redisson-tomcat-6-3.5.0.jar

For Tomcat 7.x:

 redisson-tomcat-7-3.5.0.jar

For Tomcat 8.x:

 redisson-tomcat-8-3.5.0.jar

3. Following the single-instance-mode instructions, create a redisson-config.json file in JSON format with the following content, upload it to TOMCAT_BASE, and then reload the Tomcat service.

 {
   "singleServerConfig": {
     "idleConnectionTimeout": 10000,
     "pingTimeout": 1000,
     "connectTimeout": 10000,
     "timeout": 3000,
     "retryAttempts": 3,
     "retryInterval": 1500,
     "reconnectionTimeout": 3000,
     "failedAttempts": 3,
     "password": null,
     "subscriptionsPerConnection": 5,
     "clientName": null,
     "address": "redis://127.0.0.1:6379",
     "subscriptionConnectionMinimumIdleSize": 1,
     "subscriptionConnectionPoolSize": 50,
     "connectionMinimumIdleSize": 10,
     "connectionPoolSize": 64,
     "database": 0,
     "dnsMonitoring": false,
     "dnsMonitoringInterval": 5000
   },
   "threads": 0,
   "nettyThreads": 0,
   "codec": null,
   "useLinuxNativeEpoll": false
 }

4. Test the session. Create session.jsp with the following code and upload it to TOMCAT_BASE/webapps/ROOT/.

 <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
 <!DOCTYPE html>
 <html>
 <head>
 <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
 <title>shared session</title>
 </head>
 <body>
 <br>session id=<%=session.getId()%>
 </body>
 </html>

5. Accessing the page in a browser shows session information such as:

 session id=1D349A69E8F5A27E1C12DEEFC304F0DC

6. Now check whether the value is stored in Redis. If there is a session value, the session value will not change after you restart Tomcat.

 # redis-cli
 127.0.0.1:6379> keys *
 1) "redisson_tomcat_session:1D349A69E8F5A27E1C12DEEFC304F0DC"

The session has been stored successfully.

For Redis cluster or multi instance mode, please refer to the following additional information for configuration:

//github.com/redisson/redisson/wiki/14.-Integration%20with%20frameworks#144-tomcat-redis-session-manager
//github.com/redisson/redisson/wiki/2.-Configuration
//github.com/redisson/redisson/wiki/2.-%E9%85%8D%E7%BD%AE%E6%96%B9%E6%B3%95 (Chinese document)

An optimized Nginx configuration file scheme

The following is an Nginx configuration tuned for this site's production environment. You can adjust it to your own needs.

 user www www;                    #user & group
 worker_processes auto;           #usually the number of CPU cores; "auto" detects it automatically
 error_log /usr/local/nginx/logs/error.log crit;
 pid /usr/local/nginx/logs/nginx.pid;   #pid file location; the default is fine
 worker_rlimit_nofile 65535;      #raise the open-file limit for worker processes
 events {
     use epoll;
     multi_accept on;             #after a new-connection notification, accept() as many connections as possible
     worker_connections 65535;    #maximum number of clients; must not exceed worker_rlimit_nofile
 }
 http {
     include mime.types;
     default_type application/octet-stream;   #default MIME type
     log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';   #log format
     charset UTF-8;               #default character set in response headers
     server_tokens off;           #hide the version number on error pages
     access_log off;
     sendfile on;
     tcp_nopush on;               #send all header files in one packet instead of one by one
     tcp_nodelay on;              #disable Nagle's algorithm; do not buffer data
     sendfile_max_chunk 512k;     #per-call transfer limit for each process; the default 0 means no limit
     keepalive_timeout 65;        #keep-alive duration; larger values keep more idle threads; 0 disables it; the default is 75
     client_header_timeout 10;
     client_body_timeout 10;      #timeouts for the request header and body
     reset_timedout_connection on;   #close unresponsive client connections
     send_timeout 30;             #client response timeout; release stale connections if the client stops reading; the default is 60s
     limit_conn_zone $binary_remote_addr zone=addr:5m;   #shared memory zone (5 MB) for per-key state such as connection counts
     limit_conn addr 100;         #at most 100 simultaneous connections per IP address
     server_names_hash_bucket_size 128;   #increase this if nginx fails to start with "could not build the server_names_hash"
     client_body_buffer_size 10K;
     client_header_buffer_size 32k;   #client request header buffer; can be set according to your system's page size
     large_client_header_buffers 4 32k;
     client_max_body_size 8m;     #upload size limit, typically for dynamic applications

     #Thread pool tuning; requires compiling with the --with-threads configure option
     #aio threads;
     #thread_pool default threads=32 max_queue=65536;
     #aio threads=default;

     #FastCGI performance tuning
     fastcgi_connect_timeout 300;    #timeout for connecting to the backend FastCGI
     fastcgi_send_timeout 300;       #disconnect if no data is sent to FastCGI for this long
     fastcgi_read_timeout 300;       #timeout for receiving the FastCGI response
     fastcgi_buffers 4 64k;          #set to the size of most FastCGI replies; larger replies are buffered to disk
     fastcgi_buffer_size 64k;        #buffer for the first part of the FastCGI response
     fastcgi_busy_buffers_size 128k; #busy buffers; may be twice fastcgi_buffer_size
     fastcgi_temp_file_write_size 128k;   #block size when writing to fastcgi_temp_path; default is twice fastcgi_buffers; too small a value can cause 502 Bad Gateway
     fastcgi_intercept_errors on;    #let nginx handle 4xx/5xx errors via error_page instead of passing them to the client

     #fastcgi_cache tuning (for multi-site virtual hosts, everything except fastcgi_cache_path goes into the PHP block; note keys_zone=name)
     fastcgi_cache fastcgi_cache;    #enable the FastCGI cache with a zone name; reduces CPU load and helps prevent 502 errors
     fastcgi_cache_valid 200 302 301 1h;   #which response codes to cache, and for how long
     fastcgi_cache_min_uses 1;       #number of requests before a URL is cached
     fastcgi_cache_use_stale error timeout invalid_header http_500;   #when to serve stale cache entries
     #fastcgi_temp_path /usr/local/nginx/fastcgi_temp;
     fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=fastcgi_cache:15m inactive=1d max_size=1g;
     #keys_zone=name:memory used, inactive=default expiry, max_size=maximum disk usage; levels=1:2 spreads the cache over subdirectories
     fastcgi_cache_key $scheme$request_method$host$request_uri;   #cache key
     #fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

     #Response headers
     add_header X-Cache $upstream_cache_status;   #cache hit status
     add_header X-Frame-Options SAMEORIGIN;       #mitigates clickjacking
     add_header X-Content-Type-Options nosniff;

     #GZIP performance tuning
     gzip on;
     gzip_min_length 1100;        #do not compress responses smaller than this; compressing tiny data slows down the request
     gzip_buffers 4 16k;
     gzip_proxied any;            #compress all proxied responses
     gzip_http_version 1.0;
     gzip_comp_level 9;           #compression level 0-9; a higher ratio costs more CPU
     gzip_types text/plain text/css application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript application/json image/jpeg image/gif image/png;   #types to compress
     gzip_vary on;                #Vary header support so front-end caches can identify compressed files
     include /usr/local/nginx/conf/vhosts/*.conf;   #include virtual host configuration files

     #Static file cache tuning
     open_file_cache max=65535 inactive=20s;   #cache for open file handles; max should match the open-file limit; inactive is how long an unrequested entry survives
     open_file_cache_valid 30s;   #how often to revalidate cached entries
     open_file_cache_min_uses 2;  #minimum uses within the inactive window to keep an entry cached
     open_file_cache_errors on;   #also cache file lookup errors

     #Resource cache tuning
     server {
         #Hotlink protection
         location ~* \.(jpg|gif|png|swf|flv|wma|asf|mp3|mmf|zip|rar)$ {
             valid_referers none blocked *.renwole.com renwole.com;   #none and blocked are optional; list the domains allowed to reference these files
             if ($invalid_referer) {
                 return 403;
                 #rewrite ^/ //renwole.com   #alternatively redirect instead of returning 403/404
             }
         }
         location ~ .*\.(js|css)$ {
             access_log off;      #JS/CSS requests need no logging; PV is counted per page, and frequent log writes cost IO
             expires 180d;
         }
         location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|swf|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
             access_log off;
             log_not_found off;
             expires 180d;        #these assets rarely change; caching them locally for 180 days saves bandwidth and speeds up repeat visits
         }
     }
     server {
         listen 80 default_server;
         server_name .renwole.com;
         rewrite ^ https://renwole.com$request_uri?;
     }
     server {
         listen 443 ssl http2 default_server;
         listen [::]:443 ssl http2;
         server_name .renwole.com;
         root /home/web/renwole;
         index index.html index.php;
         ssl_certificate /etc/letsencrypt/live/renwole.com/fullchain.pem;
         ssl_certificate_key /etc/letsencrypt/live/renwole.com/privkey.pem;
         ssl_dhparam /etc/nginx/ssl/dhparam.pem;
         ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
         ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
         ssl_session_cache shared:SSL:50m;
         ssl_session_timeout 1d;
         ssl_session_tickets off;
         ssl_prefer_server_ciphers on;
         add_header Strict-Transport-Security max-age=15768000;
         ssl_stapling on;
         ssl_stapling_verify on;
         include /usr/local/nginx/conf/rewrite/wordpress.conf;
         access_log /usr/local/nginx/logs/renwole.log;
         location ~ \.php$ {
             root /home/web/renwole;
             #fastcgi_pass 127.0.0.1:9000;
             fastcgi_pass unix:/var/run/www/php-cgi.sock;
             fastcgi_index index.php;
             fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
             include fastcgi_params;
         }
     }
 }

The above is the configuration used by this site's environment. If you have a better scheme, feel free to share it; corrections are welcome.

For more modules, see //nginx.org/en/docs/

Nginx cluster load balancer: NFS file storage share installation configuration optimization

The previous article, 《Nginx 1.3 Cluster Load Balancing Reverse Proxy Installation Configuration Optimization》, covered the load-balancing tier; this article focuses on the file storage part.

10.10.204.62 Load Balancing
10.10.204.63 Nginx Web server
10.10.204.64 Nginx Web server
10.10.204.65 File Storage

1. File Storage Server Installation

 yum -y install nfs-utils

2. Configure NFS and create a shared directory

 # mkdir -p /Data/webapp
 # vim /etc/exports
 /Data/webapp 10.10.204.0/24(rw,sync,no_subtree_check,no_root_squash)

3. Enable self start

 # systemctl enable rpcbind
 # systemctl enable nfs-server
 # systemctl start rpcbind
 # systemctl start nfs

4. Relevant parameters:

rw: read-write; ro: read-only.
no_root_squash: a visiting root user keeps root privileges; enabling this is obviously unsafe.
root_squash: root users are mapped to an anonymous user or group, usually nobody or nfsnobody.
all_squash: all users are mapped to an anonymous user or group.
anonuid / anongid: the UID/GID to use for anonymous users.
sync: data is written to the memory buffer and disk synchronously; less efficient, but guarantees data consistency.
async: data is held in memory first rather than written straight to disk.
no_subtree_check: even if the exported directory is a subdirectory, the NFS server skips permission checks on its parent directory, which improves efficiency.
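As a quick sanity check, the pieces of an exports entry can be pulled apart with plain shell parameter expansion (sample line taken from step 2):

```shell
# Sketch: split an /etc/exports entry into path, client network and options.
# The sample line is the one configured in step 2.
line='/Data/webapp 10.10.204.0/24(rw,sync,no_subtree_check,no_root_squash)'

path=${line%% *}                 # exported directory
rest=${line#* }                  # client spec with options
client=${rest%%(*}               # client network
opts=${rest#*(}; opts=${opts%)}  # comma-separated option list
echo "$path"
echo "$client"
echo "$opts"
```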

5. File Storage Server Firewall Configuration

 # firewall-cmd --permanent --add-service=rpc-bind
 # firewall-cmd --permanent --add-service=nfs
 # firewall-cmd --reload

6. Nginx Web server installation and mounting

 # yum -y install nfs-utils
 # mkdir -p /Data/webapp
 # mount -t nfs 10.10.204.65:/Data/webapp /Data/webapp

7. To mount automatically at boot, add one line at the bottom of /etc/fstab:

 # vim /etc/fstab
 10.10.204.65:/Data/webapp /Data/webapp nfs auto,rw,vers=3,hard,intr,tcp,rsize=32768,wsize=32768 0 0
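An fstab entry has six whitespace-separated fields (device, mount point, fs type, options, dump, pass). A small sketch that checks the field layout and pulls the rsize value out of the entry above:

```shell
# Sketch: validate the fstab entry from step 7 and extract one mount option.
entry='10.10.204.65:/Data/webapp /Data/webapp nfs auto,rw,vers=3,hard,intr,tcp,rsize=32768,wsize=32768 0 0'

# Split on whitespace: $1=device $2=mountpoint $3=fstype $4=options $5=dump $6=pass
set -- $entry
echo "$3"                     # prints: nfs

# Extract the rsize value from the comma-separated option string.
rsize=$(printf '%s\n' "$4" | tr ',' '\n' | sed -n 's/^rsize=//p')
echo "$rsize"                 # prints: 32768
```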

 

8. Nginx Web server test

Write 16384 16 KB blocks consecutively to a test file in the NFS directory:

 # time dd if=/dev/zero of=/Data/webapp/testfile bs=16k count=16384
 16384+0 records in
 16384+0 records out
 268435456 bytes (268 MB) copied, 2.89525 s, 92.7 MB/s
 real 0m2.944s
 user 0m0.015s
 sys 0m0.579s

Test the read performance

 # time dd if=/nfsfolder/testfile of=/dev/null bs=16k
 16384+0 records in
 16384+0 records out
 268435456 bytes (268 MB) copied, 0.132925 s, 2.0 GB/s
 real 0m0.138s
 user 0m0.003s
 sys 0m0.127s
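The throughput dd reports is just bytes divided by elapsed seconds (in decimal megabytes), which you can verify from the figures of the write test above:

```shell
# Sketch: recompute dd's reported throughput from its output figures.
# 268435456 bytes were copied in 2.89525 s in the write test above.
awk 'BEGIN { printf "%.1f MB/s\n", 268435456 / 2.89525 / 1000000 }'
# prints: 92.7 MB/s
```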

Overall, NFS speed is acceptable. If it feels slow, tune the mount parameters, remount, and re-run the read/write tests until you find a suitable configuration.

Nginx 1.3 cluster load balancer reverse proxy installation configuration optimization

What is load balancing? In short, load balancing distributes incoming network traffic across a group of back-end servers (also known as a server cluster or server pool). Like a traffic controller, it maximizes speed and capacity utilization, ensures no single server is overloaded, and routes client requests across all servers. If a server shuts down or fails, the load balancer redirects traffic to the remaining online servers; when a new server joins the group, it automatically starts sending requests to it.

Environment:
OS:CentOS Linux release 7.3.1611 (Core) x86_64
Web server:nginx version: nginx/1.13.3

10.10.204.62 Load Balancing
10.10.204.63 Nginx Web server
10.10.204.64 Nginx Web server
10.10.204.65 File Storage

1. Nginx Web server installation is not described here; please refer to 《Nginx Installation》.

2. Modify the host names of the four servers (typically to their IP addresses) and restart the servers afterwards.

 [root@localhost ~]# vim /etc/hostname

3. The complete Nginx configuration file for the 10.10.204.62 load balancer. Pay particular attention to the upstream and proxy_pass sections.

 [root@10-10-204-62 ~]# cat /usr/local/nginx/conf/nginx.conf
 user www www;
 worker_processes 1;
 #error_log logs/error.log;
 #error_log logs/error.log notice;
 #error_log logs/error.log info;
 pid /usr/local/nginx/logs/nginx.pid;
 worker_rlimit_nofile 65535;
 events {
     use epoll;
     worker_connections 65535;
 }
 http {
     include mime.types;
     default_type application/octet-stream;
     server_tokens off;
     #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
     #                '$status $body_bytes_sent "$http_referer" '
     #                '"$http_user_agent" "$http_x_forwarded_for"';
     #access_log logs/access.log main;
     sendfile on;
     tcp_nopush on;
     server_names_hash_bucket_size 128;
     client_header_buffer_size 32k;
     large_client_header_buffers 4 32k;
     client_max_body_size 8m;
     tcp_nodelay on;
     keepalive_timeout 65;
     fastcgi_connect_timeout 300;
     fastcgi_send_timeout 300;
     fastcgi_read_timeout 300;
     fastcgi_buffer_size 64k;
     fastcgi_buffers 4 64k;
     fastcgi_busy_buffers_size 128k;
     fastcgi_temp_file_write_size 128k;
     gzip on;
     gzip_min_length 1100;
     gzip_buffers 4 16k;
     gzip_http_version 1.0;
     #gzip compression level 0-9; a larger number means a higher compression ratio but more CPU
     gzip_comp_level 9;
     gzip_types text/plain text/html text/css application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
     gzip_vary on;

 upstream webserverapps {
 ip_hash;
 server 10.10.204.63:8023 weight=1 max_fails=2 fail_timeout=2;
 server 10.10.204.64:8023 weight=1 max_fails=2 fail_timeout=2;
 #server 127.0.0.1:8080 backup;
  }
  server {
 listen 80;
 server_name www.myname.com myname.com;
 location / {
 proxy_pass http://webserverapps;
 proxy_redirect off;
 proxy_set_header Host $host;
 proxy_set_header X-Real-IP $remote_addr;
 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
 client_max_body_size 10m;
 client_body_buffer_size 128k;
 proxy_connect_timeout 90;
 proxy_send_timeout 90;
 proxy_read_timeout 90;
 proxy_buffer_size 4k;
 proxy_buffers 4 32k;
 proxy_busy_buffers_size 64k;
 proxy_temp_file_write_size 64k;
 add_header Access-Control-Allow-Origin *;
 }
 } }

4. Nginx Web server configuration: 10.10.204.63&10.10.204.64.

 server {
     listen 8989;
     server_name <IP address>;    # use this web server's IP address
     location / {
         root /home/web/wwwroot/;
         index index.html index.htm index.php;
         # obtain the real client IP via the http_realip_module
         set_real_ip_from 10.10.204.0/24;
         real_ip_header X-Real-IP;
     }
 }

5. Load balancing in Nginx is implemented by the HTTP Upstream module.

Nginx's load balancing upstream module currently supports six scheduling algorithms, which are described below. The last two belong to third-party scheduling algorithms.

Round-robin (default): each request is allocated to the back-end servers one by one in order. If a back-end server goes down, it is automatically removed so user access is unaffected. weight specifies the polling weight; the larger the value, the higher the probability of being selected.
ip_hash: each request is allocated according to the hash of the client IP, so visitors from the same IP always reach the same back-end server, effectively solving the session-sharing problem of dynamic pages.
fair: a smarter algorithm than the two above; it allocates requests according to the back-end servers' response times, giving priority to the server that responds fastest.
url_hash: allocates requests according to the hash of the accessed URL, directing each URL to the same back-end server, which improves the efficiency of back-end cache servers.
least_conn: selects the back end with the fewest current connections (the counts are not shared; each worker keeps its own array recording connections to the back-end servers).
hash: this module supports two modes, plain hash and consistent hash.
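The idea behind ip_hash and url_hash can be sketched with a checksum: hash the key (client IP or URL) and take it modulo the number of back ends, so the same key always maps to the same server. This is an illustration of the principle only, not nginx's actual hash function:

```shell
# Sketch: deterministic backend selection by hashing a key, the idea
# behind ip_hash/url_hash. Not nginx's actual hash function.
pick_backend() {
    key=$1
    n=$2                                  # number of backends
    sum=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
    echo $(( sum % n ))
}

# The same URL always maps to the same backend index:
pick_backend "/static/app.css" 2
pick_backend "/static/app.css" 2
```

Because the mapping depends only on the key, no session state needs to be shared between back ends; the trade-off, as noted above, is that the mapping breaks when the key (e.g. the client IP) changes.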

6. In the HTTP Upstream module, the server directive specifies each back-end server's IP address and port, and can also set each server's status in load-balancing scheduling. Common states are:

down: the server temporarily does not participate in load balancing.
backup: a reserved backup machine; it receives requests only when all other non-backup machines fail or are busy, so it carries the least load.
max_fails: the number of failed requests allowed, 1 by default. When exceeded, the error defined by proxy_next_upstream is returned.
fail_timeout: how long to suspend the server after max_fails failures; used together with max_fails.

Note: When the load scheduling algorithm is ip_hash, the status of the backend server in the load balancing scheduling cannot be backup.

Note: upstream is defined outside server {} blocks, not inside them. After defining an upstream, reference it with proxy_pass. Also, every time you modify the nginx.conf configuration file, you must reload the Nginx service.

7. For more solutions to the session-sharing problem, see "Redis Cache PHP 7.2 Session Variable Sharing".

8. File server

Note: You may wonder where, with so many servers doing distributed work, the files they need come from. You must ensure both the consistency and the security of the data, and this is where a file storage server comes in. You only need to share the data files on the 10.10.204.65 file server with 10.10.204.63 and 10.10.204.64. File sharing is usually handled by the NFS network file system, which I have already installed here. If you have not installed it yet, see "Nginx cluster load balancing: NFS file storage share installation, configuration and optimization".

Linux Nginx website: Certbot installation and configuration of Let's Encrypt free SSL HTTPS encryption certificates

Experimental environment: CentOS Linux release 7.3.1611 (Core)
Kernel version: Linux version 3.10.0-514.el7.x86_64
Nginx version: Nginx-1.13.0

Let's Encrypt is a free, automated and open certificate authority. Initiated by Mozilla, Cisco, Chrome, Facebook, Akamai and many other companies and institutions, it is safe, stable and reliable. For details, visit the official Let's Encrypt website.

Today we will use Let's Encrypt to serve your website over HTTPS.

Official website: https://letsencrypt.org/

1. Install Certbot and the EPEL repository

 $ yum install -y epel-release

Certbot is an officially designated and recommended client of Let's Encrypt. With Certbot, you can automatically deploy Let's Encrypt SSL certificates to add HTTPS encryption support to websites.

 $ yum install certbot
 $ certbot certonly
 Saving debug log to /var/log/letsencrypt/letsencrypt.log
 How would you like to authenticate with the ACME CA?
 -------------------------------------------------------------------------------
 1: Place files in webroot directory (webroot)
 2: Spin up a temporary webserver (standalone)
 -------------------------------------------------------------------------------
 Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1
 Enter email address (used for urgent renewal and security notices)
 (Enter 'c' to cancel): su@renwole.com
 Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
 -------------------------------------------------------------------------------
 Please read the Terms of Service at
 https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf. You must
 agree in order to register with the ACME server at
 https://acme-v01.api.letsencrypt.org/directory
 -------------------------------------------------------------------------------
 (A)gree/(C)ancel: A    [A agrees to the terms of service, C rejects them]
 -------------------------------------------------------------------------------
 Would you be willing to share your email address with the Electronic Frontier
 Foundation, a founding partner of the Let's Encrypt project and the non-profit
 organization that develops Certbot? We'd like to send you email about EFF and
 our work to encrypt the web, protect its users and defend digital rights.
 -------------------------------------------------------------------------------
 (Y)es/(N)o: Y    [Y is recommended]
 Please enter in your domain name(s) (comma and/or space separated)
 (Enter 'c' to cancel): blog.renwole.com
 Obtaining a new certificate
 Performing the following challenges:
 http-01 challenge for blog.renwole.com
 Select the webroot for blog.renwole.com:
 -------------------------------------------------------------------------------
 1: Enter a new webroot
 -------------------------------------------------------------------------------
 Press 1 [enter] to confirm the selection (press 'c' to cancel): 1
 Input the webroot for blog.renwole.com: (Enter 'c' to cancel): /home/www/blog.renwole.com
 Waiting for verification...
 Cleaning up challenges
 Generating key (2048 bits): /etc/letsencrypt/keys/0001_key-certbot.pem
 Creating CSR: /etc/letsencrypt/csr/0001_csr-certbot.pem
 IMPORTANT NOTES:
  - Congratulations! Your certificate and chain have been saved at
    /etc/letsencrypt/live/blog.renwole.com/fullchain.pem. Your cert will expire
    on 2017-08-09. To obtain a new or tweaked version of this certificate in
    the future, simply run certbot again. To non-interactively renew *all* of
    your certificates, run "certbot renew"
  - If you like Certbot, please consider supporting our work by:
    Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
    Donating to EFF:

Congratulations! Your SSL certificate and key have been saved, and the certificate will expire on August 9, 2017.

Note: Before generating a certificate, you must ensure that Nginx is running and the domain is reachable over HTTP (the http-01 challenge is answered on port 80), otherwise certificate generation will fail.

2. Automatic renewal

Certbot can automatically renew certificates before they expire. Since Let's Encrypt SSL certificates are valid for 90 days, it is recommended that you take advantage of this. You can test automatic renewal with the following command:

 $ sudo certbot renew --dry-run

If the above works normally, you can schedule automatic renewal by adding a cron or systemd scheduled task that runs:

 certbot renew
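If you prefer systemd over cron, a timer pair along these lines could drive the same renewal. This is a sketch, not part of the original setup; the unit names are hypothetical and the paths match the commands used elsewhere in this article:

```ini
# /etc/systemd/system/certbot-renew.service  (hypothetical unit name)
[Unit]
Description=Renew Let's Encrypt certificates

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet
ExecStartPost=/bin/systemctl restart nginx

# /etc/systemd/system/certbot-renew.timer  (enable with: systemctl enable --now certbot-renew.timer)
[Unit]
Description=Run certbot renew every six hours

[Timer]
OnCalendar=0/6:00:00
Persistent=true

[Install]
WantedBy=timers.target
```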

Now add the scheduled job itself; running it every six hours is more than enough:

 $ sudo crontab -e

Add the following:

 0 */6 * * * /usr/bin/certbot renew --quiet && /bin/systemctl restart nginx

Save and exit!

Check whether the addition is successful through the command:

 $ crontab -l
 0 */6 * * * /usr/bin/certbot renew --quiet && /bin/systemctl restart nginx

Restart the crond service:

 $ systemctl status crond.service
 $ systemctl restart crond.service

Confirm that the job runs by watching the cron log:

 $ tail -f /var/log/cron

To check whether the certificate was renewed successfully, view the certificate information with the following command:

 $ certbot certificates
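Independently of Certbot, you can also check expiry directly with openssl. A minimal sketch; the self-signed certificate below is generated only for demonstration, so that the check has something to inspect:

```shell
# Generate a throwaway self-signed certificate valid for 90 days (demo only).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo.example" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 90 2>/dev/null

# openssl x509 -checkend N exits 0 if the cert is still valid N seconds from now.
if openssl x509 -checkend $((30*24*3600)) -noout -in /tmp/demo.crt >/dev/null; then
  echo "certificate valid for more than 30 days"
else
  echo "certificate expires within 30 days - renew now"
fi
# prints: certificate valid for more than 30 days
```

Point `-in` at your real certificate (for example /etc/letsencrypt/live/blog.renwole.com/fullchain.pem) to monitor the live site.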

For more Certbot commands, please refer to the official documentation: https://certbot.eff.org/docs/

3. Configure nginx.conf
Next, modify the Nginx configuration file: in the server block, remove the corresponding comments, set ssl_certificate to the generated certificate and ssl_certificate_key to the generated key, then save and restart the Nginx server.

 # vi /usr/local/nginx/conf/nginx.conf
 server {
     listen 443 ssl;
     ssl_certificate /etc/letsencrypt/live/blog.renwole.com/fullchain.pem;
     ssl_certificate_key /etc/letsencrypt/live/blog.renwole.com/privkey.pem;
     ssl_session_cache shared:SSL:1m;
     ssl_session_timeout 5m;
     ssl_ciphers HIGH:!aNULL:!MD5;
     ssl_prefer_server_ciphers on;
     location / {
         root html;
         index index.html index.htm;
     }
 }

Open https://blog.renwole.com/ in Google Chrome. If you see the small green padlock icon, the website has been successfully encrypted with HTTPS.
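To send plain-HTTP visitors to the encrypted site, a redirect server block is commonly added alongside the one above (a sketch; adapt the domain to your own):

```nginx
server {
    listen 80;
    server_name blog.renwole.com;
    # permanently redirect all HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}
```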

Linux Nginx 1.12.0 smoothly upgraded to the new version nginx-1.13.3

This article demonstrates on a production environment: CentOS 7.3 64-bit minimal installation.

1. View the current Nginx version information. The commands are as follows:

 # /usr/local/nginx/sbin/nginx -V
 nginx version: nginx/1.12.0
 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
 built with OpenSSL 1.0.1e-fips 11 Feb 2013
 TLS SNI support enabled
 configure arguments: --prefix=/usr/local/nginx --user=www --group=www --with-pcre --with-http_v2_module --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-http_image_filter_module --with-mail --with-threads --with-mail_ssl_module --with-stream_ssl_module

2. Download nginx-1.13.3 to /usr/local/, extract it, and enter the extracted directory. The commands are as follows:

 # cd /usr/local/
 # wget https://nginx.org/download/nginx-1.13.3.tar.gz
 # wget https://www.openssl.org/source/openssl-1.1.0e.tar.gz   # download the latest OpenSSL
 # tar zxvf openssl-1.1.0e.tar.gz
 # tar zxvf nginx-1.13.3.tar.gz
 # cd nginx-1.13.3

3. When you view the Nginx version, configure is followed by a series of modules. These are the modules that were specified when Nginx was first installed; specify them again when upgrading, and you can also add new modules. The commands are as follows:

 ./configure \
 --prefix=/usr/local/nginx \
 --user=www \
 --group=www \
 --with-pcre \
 --with-openssl=/usr/local/openssl-1.1.0e \
 --with-http_ssl_module \
 --with-http_v2_module \
 --with-http_realip_module \
 --with-http_addition_module \
 --with-http_sub_module \
 --with-http_dav_module \
 --with-http_flv_module \
 --with-http_mp4_module \
 --with-http_gunzip_module \
 --with-http_gzip_static_module \
 --with-http_random_index_module \
 --with-http_secure_link_module \
 --with-http_stub_status_module \
 --with-http_auth_request_module \
 --with-http_image_filter_module \
 --with-mail \
 --with-threads \
 --with-mail_ssl_module \
 --with-stream_ssl_module
 # make

4. Note: After make completes, run only the steps below. Do not run make install, otherwise the new installation will overwrite the old one and the Nginx service will run into all kinds of problems.

Keeping the Nginx web server running without interruption during the upgrade is called a smooth upgrade. First rename the old nginx binary. The commands are as follows:

 # mv /usr/local/nginx/sbin/nginx /usr/local/nginx/sbin/nginx.bak

Copy the newly compiled Nginx binary into the /usr/local/nginx/sbin/ directory. The commands are as follows:

 # cp /usr/local/nginx-1.13.3/objs/nginx /usr/local/nginx/sbin/

5. Run the upgrade. make upgrade starts the new binary by sending USR2 to the old master process, then gracefully stops the old master with QUIT. The commands are as follows:

 # cd /usr/local/nginx-1.13.3
 # make upgrade
 /usr/local/nginx/sbin/nginx -t
 nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
 nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
 kill -USR2 `cat /usr/local/nginx/logs/nginx.pid`
 sleep 1
 test -f /usr/local/nginx/logs/nginx.pid.oldbin
 kill -QUIT `cat /usr/local/nginx/logs/nginx.pid.oldbin`

6. Check the upgraded Nginx version information again. The commands are as follows:

 # /usr/local/nginx/sbin/nginx -V
 nginx version: nginx/1.13.3
 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
 built with OpenSSL 1.1.0e 16 Feb 2017
 TLS SNI support enabled

You can see that Nginx has been successfully upgraded to 1.13.3. This procedure can be used in production environments, and I hope it helps. Please credit the source when reposting!

CentOS 7 source code compilation and installation of Nginx 1.13

I won't say much about Nginx itself. Since you chose Nginx as your web server, you must already have your own understanding of it, so let's go straight to the installation.

1. Prerequisites:

I use the 64-bit Core edition of CentOS 7.3. The Nginx dependency packages must be installed before installing and configuring Nginx; see "Compiling and Installing PHP 7.1 in CentOS 7" and install the dependency packages listed at the beginning of that article. Those dependencies apply to any version of Nginx.

Create a new web user and group:

 $ /usr/sbin/groupadd www
 $ /usr/sbin/useradd -g www www
 $ ulimit -SHn 65535   # raise the open-file limit for high load
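Note that ulimit -SHn only affects the current shell session. To make the limit persistent across logins, you would typically also add it to /etc/security/limits.conf (a sketch; the values mirror the command above):

```
# /etc/security/limits.conf - raise open-file limits for all users
*    soft    nofile    65535
*    hard    nofile    65535
```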

2. Download Nginx and OpenSSL from the official websites

Nginx is offered in two versions, a mainline (development) version and a stable version. For production use, download the stable version from https://nginx.org/en/download.html (preferably the latest stable release, which carries bug fixes and new features). I download the latest version, nginx-1.13.5.

 $ cd /tmp
 $ wget https://www.openssl.org/source/openssl-1.1.0e.tar.gz
 $ tar zxvf openssl-1.1.0e.tar.gz
 $ wget https://nginx.org/download/nginx-1.13.5.tar.gz
 $ tar zxvf nginx-1.13.5.tar.gz
 $ cd nginx-1.13.5

3. Install Nginx

You may notice that some tutorials do not specify this many modules when installing Nginx (the list looks long), and some specify no modules or users at all. In reality, modules should be chosen according to your own needs. If you want to avoid trouble later, use the list below, which covers most use cases; otherwise you will have to recompile later to add what you need, which is not hard, but not convenient either. As for the user and group, always assign them: they affect the availability, security and stability of your Nginx configuration.

 $ ./configure \
 --prefix=/usr/local/nginx \
 --user=www \
 --group=www \
 --with-pcre \
 --with-openssl=/tmp/openssl-1.1.0e \
 --with-http_ssl_module \
 --with-http_v2_module \
 --with-http_realip_module \
 --with-http_addition_module \
 --with-http_sub_module \
 --with-http_dav_module \
 --with-http_flv_module \
 --with-http_mp4_module \
 --with-http_gunzip_module \
 --with-http_gzip_static_module \
 --with-http_random_index_module \
 --with-http_secure_link_module \
 --with-http_stub_status_module \
 --with-http_auth_request_module \
 --with-http_image_filter_module \
 --with-http_slice_module \
 --with-mail \
 --with-threads \
 --with-file-aio \
 --with-stream \
 --with-mail_ssl_module \
 --with-stream_ssl_module
 $ make -j8 && make install   # compile and install

4. Create a systemd unit file for Nginx

After the installation is complete, you will want Nginx to start automatically at boot; otherwise you have to start it manually every time, which is too troublesome.

 $ vim /usr/lib/systemd/system/nginx.service
 [Unit]
 Description=The nginx HTTP and reverse proxy server
 After=syslog.target network.target remote-fs.target nss-lookup.target

 [Service]
 Type=forking
 PIDFile=/usr/local/nginx/logs/nginx.pid
 ExecStartPre=/usr/local/nginx/sbin/nginx -t
 ExecStart=/usr/local/nginx/sbin/nginx
 ExecReload=/bin/kill -s HUP $MAINPID
 ExecStop=/bin/kill -s QUIT $MAINPID
 PrivateTmp=true

 [Install]
 WantedBy=multi-user.target

Save and exit.

5. Add boot auto start and start Nginx

 $ systemctl enable nginx.service
 $ systemctl restart nginx.service

6. Set Firewalld firewall

 $ firewall-cmd --zone=public --add-port=80/tcp --permanent
 $ firewall-cmd --zone=public --add-port=443/tcp --permanent   # also open 443 if you will serve HTTPS
 $ firewall-cmd --reload

7. Check whether Nginx starts successfully

 $ ss -ntlp

You can see that the nginx process is running, so the installation of Nginx is complete. You may still wonder how Nginx parses and serves PHP programs. Don't worry, I will cover that in the next article.