The most complete guide to nginx installation, upgrade, and security configuration

2022/11/04 10:42

Background

Nginx is widely used as proxy software. Because the proxy layer sits closest to users, its security matters a great deal, and hardening and upgrading it is part of our routine work.

Here we choose to deploy OpenResty, a web platform built around Nginx that can parse and execute Lua scripts, which makes it easy to later build web features or a self-developed WAF on top of nginx.

1. Download OpenResty

Visit the official website https://openresty.org/cn/ and download the latest version of OpenResty.

 cd /root/
 wget https://openresty.org/download/openresty-1.21.4.1.tar.gz

2. Nginx Compilation Security Configuration

 tar xvf openresty-1.21.4.1.tar.gz
 cd /root/openresty-1.21.4.1/bundle/nginx-1.21.4/

 # 1. Hide the version
 vim src/core/nginx.h
 #define NGINX_VERSION      "6666"
 #define NGINX_VER          "FW/" NGINX_VERSION ".6"
 #define NGINX_VAR          "FW"

 # 2. Modify the Server header
 vim src/http/ngx_http_header_filter_module.c
 # line 49
 static u_char ngx_http_server_string[] = "Server: FW" CRLF;

 # 3. Modify the footer of the built-in error pages
 vim src/http/ngx_http_special_response.c
 # line 22
 "<hr><center>FW</center>" CRLF
 # ...
 # line 29
 "<hr><center>FW</center>" CRLF
 # ...
 # line 36
 "<hr><center>FW</center>" CRLF

3. Add third-party modules

3.1 Add the dynamic upstream configuration module ngx_http_dyups_module

Clone the code from GitHub:

 cd /root
 git clone https://github.com/yzprofile/ngx_http_dyups_module.git
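This module makes it possible to add, update, or delete upstreams at runtime over an HTTP interface without reloading nginx. As a hedged illustration of the module's documented usage (not part of the original article; the port and upstream name are assumptions):

 # Expose the dyups management interface on a dedicated port (illustrative)
 server {
     listen 8081;
     location / {
         dyups_interface;
     }
 }

 # Example call: create or update an upstream named "dyhost" at runtime
 curl -d "server 127.0.0.1:8088;" 127.0.0.1:8081/upstream/dyhost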

3.2 Add the upstream health-check module nginx_upstream_check_module

Clone the code from GitHub:

 git clone https://github.com/yaoweibin/nginx_upstream_check_module.git
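Once built in, the module's active health checks are configured inside an upstream block. The snippet below is only a sketch based on the module's documented directives; the backend addresses are placeholders:

 upstream backend {
     server 10.0.0.11:8080;
     server 10.0.0.12:8080;
     # probe every 3s; mark a peer up after 2 successes, down after 5 failures
     check interval=3000 rise=2 fall=5 timeout=1000 type=http;
     check_http_send "HEAD / HTTP/1.0\r\n\r\n";
     check_http_expect_alive http_2xx http_3xx;
 }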

3.3 Add the nginx monitoring module nginx-module-vts

Clone the code from GitHub:

 git clone https://github.com/vozlt/nginx-module-vts.git
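After the build, the vts module is enabled by declaring a shared status zone and a display location. This is only a minimal sketch of the module's documented basic usage; the listen port and path are assumptions:

 http {
     vhost_traffic_status_zone;
     server {
         listen 8080;
         location /status {
             vhost_traffic_status_display;
             vhost_traffic_status_display_format html;
         }
     }
 }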

4. Compile secure nginx

Before compiling, nginx must be patched, because the nginx-module-vts monitoring module depends on the upstream check patch in order to report health-check status.

 # Switch to the nginx source directory
 cd /root/openresty-1.21.4.1/bundle/nginx-1.21.4/
 # Apply the patch
 patch -p1 < /root/nginx_upstream_check_module/check_1.20.1+.patch

Compile secure nginx

 cd /root/openresty-1.21.4.1/
 ./configure --prefix=/apps/nginx \
   --with-http_realip_module \
   --with-http_v2_module \
   --with-http_image_filter_module \
   --with-http_iconv_module \
   --with-stream_realip_module \
   --with-stream \
   --with-stream_ssl_module \
   --with-stream_geoip_module \
   --with-http_slice_module \
   --with-http_sub_module \
   --add-module=/root/ngx_http_dyups_module \
   --add-module=/root/nginx_upstream_check_module \
   --with-http_stub_status_module \
   --with-http_geoip_module \
   --with-http_gzip_static_module \
   --add-module=/root/nginx-module-vts
 make
 make install

If the following error is reported, check whether the patch was actually applied:

 /root/nginx-module-vts/src/ngx_http_vhost_traffic_status_display_json.c: In function ‘ngx_http_vhost_traffic_status_display_set_upstream_group’:
 /root/nginx-module-vts/src/ngx_http_vhost_traffic_status_display_json.c:604:61: error: ‘ngx_http_upstream_rr_peer_t’ {aka ‘struct ngx_http_upstream_rr_peer_s’} has no member named ‘check_index’; did you mean ‘checked’?
     if (ngx_http_upstream_check_peer_down(peer->check_index)) {
                                                 ^~~~~~~~~~~
                                                 checked
 make[2]: *** [objs/Makefile:3330: objs/addon/src/ngx_http_vhost_traffic_status_display_json.o] Error 1
 make[2]: Leaving directory '/root/openresty-1.21.4.1/build/nginx-1.21.4'
 make[1]: *** [Makefile:10: build] Error 2
 make[1]: Leaving directory '/root/openresty-1.21.4.1/build/nginx-1.21.4'
 make: *** [Makefile:9: all] Error 2

Solution:

 yum install patch
 cd /root/openresty-1.21.4.1/bundle/nginx-1.21.4/
 patch -p1 < /root/nginx_upstream_check_module/check_1.20.1+.patch

Start nginx

 # Start:
 /apps/nginx/nginx/sbin/nginx -c /apps/nginx/nginx/conf/nginx.conf
 # Reload:
 /apps/nginx/nginx/sbin/nginx -s reload -c /apps/nginx/nginx/conf/nginx.conf
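Before reloading, it is usually worth validating the configuration first, for example:

 /apps/nginx/nginx/sbin/nginx -t -c /apps/nginx/nginx/conf/nginx.conf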

5. Nginx upgrade

In practice we run into nginx vulnerabilities (for example OpenSSL vulnerabilities) or need new nginx features, so nginx has to be upgraded from time to time. There are two ways to do it (this article assumes nginx is deployed on virtual machines; containers can simply be rebuilt from a new image). The first is to bring up a new virtual machine with the new nginx version, copy over the nginx configuration, start it, mount it behind the LB after verification, and gradually replace the old nginx instances. The second is to upgrade in place on the original machine. Here we mainly cover the second way. Upgrade steps:

 Prerequisites:
 1. There are multiple nginx instances, so removing one from the LB will not affect the service
 2. The pid path is /data/data/nginx/conf/nginx.pid
 3. The conf directory path is independent: /data/data/nginx/conf/

 Upgrade steps:
 1. Remove the nginx to be upgraded from the LB, watch the nginx log, and make sure there is no traffic before taking the next step
 2. Build and install the new version with a new prefix: ./configure --prefix=/apps/nginx_new
 3. After installation, point the conf in the nginx_new directory to /data/data/nginx/conf/
 4. Reload nginx: /apps/nginx_new/nginx/sbin/nginx -s reload -c /data/data/nginx/conf/nginx.conf
 5. Verify the upgraded nginx. If there is no problem, mount it back onto the LB, and repeat the steps above for the remaining nginx instances

6. Nginx security configuration

6.1 Information disclosure: turn off the nginx version number

 http {
     server_tokens off;
     # ...
 }

6.2 Disabling Unwanted Nginx Modules

A default Nginx build ships with many built-in modules, and not all of them are needed. Unnecessary modules, such as the autoindex module, can be disabled at build time as follows:

 ./configure --without-http_autoindex_module
 make
 make install

6.3 Control resources and restrictions

To mitigate potential DoS attacks against Nginx, you can set buffer size limits for all clients. The configuration is as follows:

 ## Start: Size Limits & Buffer Overflows ##
 client_body_buffer_size   1k;
 client_header_buffer_size 1k;
 client_max_body_size      1k;
 large_client_header_buffers 2 1k;
 ## End: Size Limits & Buffer Overflows ##

client_body_buffer_size 1k;: (default 8k or 16k) sets the buffer size for the request body. If the body is larger than the buffer, all or part of it is written to a temporary file.

client_header_buffer_size 1k;: sets the buffer size for the client request header. In most cases a request header is no larger than 1k, but if a client (for example a WAP client) sends a large cookie it may exceed 1k; Nginx then allocates a larger buffer, whose size is controlled by large_client_header_buffers.

client_max_body_size 1k;: sets the maximum request body size allowed for a client request, as declared in the Content-Length request header field. If the request is larger than this value, the client receives a "Request Entity Too Large" (413) error. Keep in mind that browsers do not know how to display this error nicely.

large_client_header_buffers 2 1k;: sets the number and size of buffers used for large client request headers. The request line must not exceed one buffer, otherwise nginx returns "Request-URI Too Large" (414); likewise, no single header field may exceed one buffer, otherwise the server returns "Bad Request" (400). Buffers are allocated only on demand. The default buffer size equals the operating system page size, usually 4k or 8k. When a connection transitions to keep-alive, the buffers it occupies are released.

You also need to control timeouts to improve server performance and cut off idle client connections. The configuration is as follows:

 ## Start: Timeouts ##
 client_body_timeout   10;
 client_header_timeout 10;
 keepalive_timeout     5 5;
 send_timeout          10;
 ## End: Timeouts ##

client_body_timeout 10;: sets the timeout for reading the request body. The timeout applies when the body has not yet entered the read step; if the client does not respond within this time, Nginx returns a "Request time out" (408) error.

client_header_timeout 10;: sets the timeout for reading the client request header, with the same semantics: if the header is not received within this time, Nginx returns a "Request time out" (408) error.

keepalive_timeout 5 5;: the first value sets how long a keep-alive connection between the client and the server stays open; after that time the server closes the connection. The second (optional) value is sent to the client in the "Keep-Alive: timeout=time" response header, which lets some browsers know when to close the connection themselves so the server does not have to. If it is not specified, nginx does not send Keep-Alive information in the response header. The two values may differ.

send_timeout 10;: sets the timeout for transmitting the response to the client. The timeout applies between two successive write operations, not to the whole response; if the client does not accept anything within this time, nginx closes the connection.

6.4 Disable all unnecessary HTTP methods

Disable all unnecessary HTTP methods. The following setting allows only the GET, HEAD, and POST methods and rejects methods such as DELETE and TRACE.

 location / {
     limit_except GET HEAD POST {
         deny all;
     }
 }

Another way is to set it in the server block, but then it applies globally, so evaluate the impact carefully:

 if ($request_method !~ ^(GET|HEAD|POST)$) {
     return 444;
 }

6.5 Preventing Host Header Attacks

Add a default server. When the Host header is tampered with and does not match any configured server, the request falls through to the default server, which simply returns a 403 error.

 server {
     listen 80 default_server;
     server_name _;
     location / {
         return 403;
     }
 }

6.6 Configuring SSL and cipher suites

By default, Nginx configurations often still allow old, insecure SSL/TLS protocol versions (ssl_protocols TLSv1 TLSv1.1 TLSv1.2). The following change is recommended:

 ssl_protocols TLSv1.2 TLSv1.3;

In addition, you should specify the cipher suites and make sure the server's preferences are used during the TLS handshake to enhance security.

 ssl_prefer_server_ciphers on;
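For illustration, a cipher list restricted to modern AEAD suites might look like the following; the exact list is an assumption and should be tuned to your clients' compatibility requirements:

 # Illustrative cipher list (assumption; adjust to your compatibility needs)
 ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;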

6.7 Preventing image hotlinking

Hotlinking means that someone embeds images from your website directly on their own site, so you end up paying for the extra bandwidth. This usually happens on forums and blogs, and it is strongly recommended to block it.

 location /images/ {
     valid_referers none blocked www.example.com example.com;
     if ($invalid_referer) {
         return 403;
     }
 }

Alternatively, rewrite hotlinked requests to a designated image:

 valid_referers blocked www.example.com example.com;
 if ($invalid_referer) {
     rewrite ^/images/uploads.*\.(gif|jpg|jpeg|png)$ http://www.example.com/banned.jpg last;
 }

6.8 Directory restrictions

You can set access permissions on specific directories. Every website directory should be configured individually, allowing access only where necessary.

You can restrict access to directories by IP address:

 location /docs/ {
     ## block one workstation
     deny    192.168.1.1;
     ## allow anyone in 192.168.1.0/24
     allow   192.168.1.0/24;
     ## drop rest of the world
     deny    all;
 }

You can also protect a directory with a password: first create a password file and add a user (here named "user"):

 mkdir /app/nginx/nginx/conf/.htpasswd/
 htpasswd -c /app/nginx/nginx/conf/.htpasswd/passwd user

Edit nginx.conf and add the directories to be protected:

 location ~ /(personal-images/.*|delta/.*) {
     auth_basic "Restricted";
     auth_basic_user_file /usr/local/nginx/conf/.htpasswd/passwd;
 }

Once the password file has been generated, you can add further users that are allowed access with the following command:

 htpasswd -s /usr/local/nginx/conf/.htpasswd/passwd userName

6.9 Reject some User Agents

You can easily block abusive User-Agents, such as scanners, bots, and spammers that abuse your server.

 ## Block download agents ##
 if ($http_user_agent ~* LWP::Simple|BBBike|wget) {
     return 403;
 }

6.10 Do not expose nginx directly on a public IP

If nginx has a vulnerability that allows remote code execution, an attacker who can reach nginx directly over a public IP can download attack tools onto the nginx machine and use it as a pivot for further attacks. Put nginx behind an LB so that traffic flows through the LB first and then to nginx, and do not expose nginx directly on a public internet IP.
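As a complementary measure, host firewall rules can restrict who is able to reach nginx at all. A minimal sketch, assuming the LB sits in 10.0.0.0/24 (the address range is an assumption):

 # Allow HTTP/HTTPS only from the LB subnet, drop everything else (illustrative)
 iptables -A INPUT -p tcp -m multiport --dports 80,443 -s 10.0.0.0/24 -j ACCEPT
 iptables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP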

6.11 Configure reasonable response headers

To further harden the Nginx web service, you can add several response headers. X-Frame-Options: the X-Frame-Options HTTP response header tells the browser whether the page may be rendered inside a \<frame\> or \<iframe\>, which helps prevent clickjacking attacks. Add:

 add_header X-Frame-Options "SAMEORIGIN";

Strict-Transport-Security: HTTP Strict Transport Security, HSTS for short, tells browsers that an HTTPS website must always be accessed over HTTPS and that plain HTTP requests should be refused. Configure it as follows:

 add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";

CSP: a Content Security Policy (CSP) helps protect your website against attacks such as XSS and other content-injection attacks. Configure it as follows:

 add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;

When serving user-provided content, include the X-Content-Type-Options: nosniff header, which, together with the Content-Type header, disables content-type sniffing in some browsers:

 add_header X-Content-Type-Options nosniff;

X-XSS-Protection: "1" enables the browser's XSS filter (set it to "0" to disable filtering), and mode=block means that if an XSS attack is detected, the browser stops rendering the page:

 add_header X-XSS-Protection "1; mode=block";

6.12 Full-site HTTPS

Redirect all HTTP traffic to HTTPS:

 server {
     listen 80 default_server;
     listen [::]:80 default_server;
     server_name .example.com;
     return 301 https://$host$request_uri;
 }
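For completeness, the HTTPS side is a separate server block listening on 443. The following is only a sketch; the certificate paths and web root are assumptions and must be replaced with your own:

 server {
     listen 443 ssl;
     listen [::]:443 ssl;
     server_name .example.com;
     ssl_certificate     /etc/nginx/ssl/example.com.crt;   # assumed path
     ssl_certificate_key /etc/nginx/ssl/example.com.key;   # assumed path
     ssl_protocols TLSv1.2 TLSv1.3;
     ssl_prefer_server_ciphers on;
     location / {
         root  /apps/project/webapp;
         index index.html;
     }
 }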

6.13 Control the number of concurrent connections

You can limit the number of concurrent connections from a single IP through the ngx_http_limit_conn_module module:

 http {
     limit_conn_zone $binary_remote_addr zone=limit1:10m;
     server {
         listen 80;
         server_name example.com;
         root /apps/project/webapp;
         index index.html;
         location / {
             limit_conn limit1 10;
         }
         access_log /data/log/nginx/nginx_access.log main;
     }
 }

limit_conn_zone: defines a shared memory zone that holds the state for each key (for example $binary_remote_addr). zone=name:size names the zone and sets its size; the memory used per key depends on the variable. For example, $binary_remote_addr is a fixed 4 bytes for an IPv4 address and 16 bytes for an IPv6 address; a stored state occupies 32 or 64 bytes on 32-bit platforms and 64 bytes on 64-bit platforms, so 1m of shared memory can hold roughly 32,000 32-bit states or 16,000 64-bit states.

limit_conn: references a configured shared memory zone (for example the zone named limit1) and sets the maximum number of connections allowed for each key value.

The above example shows that only 10 connections are allowed for the same IP at the same time

When multiple limit_conn directives are configured, all of the configured connection limits apply:

 http {
     limit_conn_zone $binary_remote_addr zone=limit1:10m;
     limit_conn_zone $server_name zone=limit2:10m;
     server {
         listen 80;
         server_name example.com;
         root /data/project/webapp;
         index index.html;
         location / {
             limit_conn limit1 10;
             limit_conn limit2 2000;
         }
     }
 }

The above configuration not only limits the number of connections to a single IP source to 10, but also limits the total number of connections to a single virtual server to 2000

6.14 Connection control

In fact, the maximum number of connections nginx can handle is worker_processes multiplied by worker_connections.

With the configuration below that is 4 x 65535. We usually insist that worker_processes equal the number of CPU cores, while worker_connections is often left very large. That actually leaves room for an attacker, who can open that many connections simultaneously and exhaust your server, so both parameters should be configured more reasonably.

 user  www;
 worker_processes  4;
 error_log  /data/log/nginx/nginx_error.log  crit;
 pid        /data/data/nginx/conf/nginx.pid;
 events {
     use epoll;
     worker_connections 65535;
 }

However, that alone does not limit individual clients. Since nginx 0.7, two modules have been available for this:

 HttpLimitReqModule:   limits the number of requests per second from a single IP
 HttpLimitZoneModule:  limits the number of connections from a single IP

Both modules are defined at the http level first and then applied in the http, server, or location contexts. They use the leaky-bucket algorithm to limit access per IP: once the defined limit is exceeded, a 503 error is returned, so bursts of CC attacks are contained. Of course, sometimes dozens of people in one company share a single outbound IP and may be blocked by mistake, so it is worth setting up a proper 503 error page.
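A hedged sketch of such a 503 fallback (the page path and root are assumptions) could serve a friendly page instead of the bare error:

 # Serve a custom page when the rate/connection limit returns 503 (illustrative)
 error_page 503 /503.html;
 location = /503.html {
     root /apps/nginx/html;   # assumed location of the static error page
     internal;
 }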

First look at HttpLimitReqModule:

 http {
     limit_req_zone $binary_remote_addr zone=test_req:10m rate=20r/s;
     server {
         location /download/ {
             limit_req zone=test_req burst=5 nodelay;
         }
     }
 }

The http-level line is the definition: it creates a limit_req_zone named test_req for storing session state, 10 MB of memory in size; roughly 16,000 states fit in 1 MB, depending on traffic. The key is $binary_remote_addr, the client IP, but it could be changed to $server_name or another variable. The average request rate is limited to 20 requests per second; written as 20r/m it would be per minute, again depending on your traffic volume.

The location block applies the limit defined above: for requests to the download directory, each IP is limited to 20 requests per second, with a leaky-bucket burst of 5. Burst means that if there are 19 requests in each of the first four seconds, up to 25 requests are allowed in the fifth second; but if 25 requests arrive in the first second, the requests beyond 20 in the second second receive a 503. Without nodelay, 25 requests in the first second means 5 of them are processed in the second second; with nodelay set, all 25 are processed in the first second.

Because this limit is applied per IP, the effect against massive CC request attacks is obvious; limiting to 1r/s is stricter still. But as mentioned above, for large companies where many people share one outbound IP, accidental blocking is hard to avoid, so weigh the limit carefully.

Then look at HttpLimitZoneModule:

 http {
     limit_conn_zone $binary_remote_addr zone=test_zone:10m;
     server {
         location /download/ {
             limit_conn test_zone 10;
             limit_rate 500k;
         }
     }
 }

Similar to the above, the http-level line is the definition: a limit_conn_zone named test_zone, also 10 MB, again keyed by the client IP. There is no request-count limit here; the actual restriction is applied below.

The location block does the actual limiting. Because the key is the client IP, limit_conn test_zone 10 means at most 10 connections per IP; with $server_name as the key it would be 10 connections per domain name. limit_rate then caps the bandwidth of a single connection, so an IP with two connections can use 2 x 500k; with the limit of 10 connections here, one IP can reach at most 5000k.

6.15 Regular upgrade

Over time, nginx itself and the third-party libraries it uses may develop serious vulnerabilities. As the owners of nginx-related services, we need to keep an eye on nginx version updates and related vulnerability disclosures, and upgrade selectively.
