Practical suggestions on performance optimization of 11 Nginx configuration parameters

Optimize the number of Nginx processes

The configuration parameters are as follows:

 worker_processes 1; # number of worker processes to start; can also be set to auto

This parameter sets the number of worker processes for the Nginx service. Nginx processes are divided into a master process and worker processes: the master process manages the workers, and the worker processes are the ones that actually handle client requests.

Process-count policy: set the number of worker processes equal to the number of CPU cores. Under high traffic and high concurrency you can consider raising it to twice the number of CPU cores. Besides the core count, the right value also depends on disk I/O and system load, but matching the number of CPU cores is a good starting configuration and the official recommendation.

If you want to save trouble, you can configure worker_processes auto; and Nginx will set the number of workers to the number of CPU cores it detects at startup. Note that Nginx does not fork extra workers on demand when traffic spikes; the worker count stays fixed until the configuration is reloaded.

Bind different processes to different CPUs

By default, multiple Nginx worker processes may run on the same CPU core, leading to uneven use of hardware resources. Binding each process to a specific CPU core makes fuller and more even use of the hardware. The configuration parameters are as follows:

 worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000;

Here worker_cpu_affinity configures the affinity between Nginx worker processes and CPUs, that is, it distributes different processes across different CPU cores. The values 0001 0010 0100 1000 are bitmasks representing the 1st, 2nd, 3rd, and 4th CPU cores respectively, so the configuration above binds each worker process to its own core.

Again, if you want to save trouble, you can configure worker_cpu_affinity auto; and Nginx will assign the affinity automatically.

Nginx event processing model optimization

Nginx's connection-processing mechanism uses different I/O models on different operating systems: on Linux, Nginx uses the epoll I/O multiplexing model; FreeBSD uses kqueue; Solaris uses /dev/poll; Windows uses IOCP; and so on.

The configuration is as follows:

 events { use epoll; }

The events block sets Nginx's working mode and the connection limit. The use directive specifies the event-processing model; Nginx supports select, poll, kqueue, epoll, rtsig, and /dev/poll. You can also omit this directive, in which case Nginx automatically selects the best event-processing model available on the platform.

Maximum number of client connections allowed for a single process

The maximum number of client connections allowed by a single Nginx worker process is controlled by the following parameter:

 events { worker_connections 20480; }

worker_connections is also an events-module directive; it defines the maximum number of connections per worker process. The default is 1024.

The maximum number of connections is calculated as follows:

max_clients = worker_processes * worker_connections;

When Nginx is used as a reverse proxy, browsers open up to two connections to the server by default, and Nginx also uses file descriptors from the same pool to open connections to the upstream backend, so each client costs roughly four descriptors. The maximum number of clients is then:

max_clients = worker_processes * worker_connections / 4;
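As a quick sanity check, the reverse-proxy formula can be evaluated with the worker_connections value used earlier in this article (a sketch; the numbers are just the article's examples):

```python
# Effective max_clients for a reverse-proxy setup: each browser opens
# up to 2 connections, and each proxied request also consumes a file
# descriptor to the upstream backend, i.e. roughly 4 fds per client.
worker_processes = 4        # example value from this article
worker_connections = 20480  # example value from this article
max_clients = worker_processes * worker_connections // 4
print(max_clients)  # 20480
```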

In addition, the per-process connection limit is capped by the Linux limit on open files per process. Run the operating system command ulimit -HSn 65535 (or configure the corresponding limits file) so that the worker_connections setting can take effect.
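To make the higher limit survive reboots, it can be set persistently. A sketch for /etc/security/limits.conf, assuming the Nginx workers run as the nginx user (adjust the user and value to your environment):

```
# /etc/security/limits.conf -- persist the open-file limit
nginx  soft  nofile  65535
nginx  hard  nofile  65535
```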

Configure to get more connections

By default, an Nginx worker accepts only one new connection at a time. Setting multi_accept to on lets a worker accept multiple new connections at once, improving processing efficiency. This parameter defaults to off; enabling it is recommended.

 events { multi_accept on; }

Configure the maximum number of open files for worker processes

The maximum number of open files for an Nginx worker process is controlled by the worker_rlimit_nofile directive. The actual configuration is as follows:

 worker_rlimit_nofile 65535;

This can be set to the value obtained after tuning with ulimit -HSn.

Optimize the hash table size of domain names

 http { server_names_hash_bucket_size 128; }

Parameter function: sets the bucket size of the hash table that stores server names. The default value depends on the size of the CPU cache line.

The value of server_names_hash_bucket_size cannot carry units. It must be set when configuring virtual hosts, otherwise Nginx may fail to start or to pass the configuration test. Together with server_names_hash_max_size it controls the hash table that stores server names. The bucket size is aligned to a multiple of the processor cache line size; when the bucket size matches the processor cache line, the worst-case number of memory accesses during a key lookup is two: one to determine the address of the bucket, and one to find the key within it. If Nginx reports that hash max size or hash bucket size should be increased, raise the value of server_names_hash_max_size.
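When Nginx complains that the hash table is too small (common with many virtual hosts), both directives can be raised together. A sketch with example values:

```nginx
http {
    server_names_hash_max_size    1024;
    server_names_hash_bucket_size 128;
}
```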

TCP optimization

 http { sendfile on; tcp_nopush on; keepalive_timeout 120; tcp_nodelay on; }

The sendfile directive on the first line improves the efficiency of serving static resources with Nginx. sendfile is a system call that sends a file directly from kernel space, avoiding the read-then-write copy through user space and its context-switching overhead.

TCP_NOPUSH is a FreeBSD socket option corresponding to TCP_CORK on Linux; Nginx controls both uniformly through tcp_nopush, which only takes effect when sendfile is enabled. When on, data packets accumulate to a certain size before being sent, reducing per-packet overhead and improving network efficiency.

TCP_NODELAY is also a socket option. Enabling it disables the Nagle algorithm so data is sent as soon as possible, which can save up to 200 ms in some cases. (The Nagle algorithm buffers newly generated small pieces of data until a full MSS can be sent or an acknowledgment is received.) Nginx enables tcp_nodelay only on TCP connections in the keep-alive state.
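At the socket level, tcp_nodelay on; maps to the TCP_NODELAY option. A minimal Python sketch, just to illustrate the underlying flag (this is not Nginx code):

```python
import socket

# TCP_NODELAY disables the Nagle algorithm on a TCP socket, which is
# what Nginx's `tcp_nodelay on;` does for keep-alive connections.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # True
s.close()
```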

Optimize connection parameters

 http { client_header_buffer_size 32k; large_client_header_buffers 4 32k; client_max_body_size 1024m; client_body_buffer_size 10m; }

This part depends largely on the business scenario. For example, client_max_body_size determines the maximum request body size, which limits the size of uploaded files. The parameters listed above can be used as starting values.

Configure compression to optimize performance

Gzip compression

Before going live, code (JS, CSS, and HTML) is minified, and images are optimized (PNGOUT, Pngcrush, JpegOptim, Gifsicle, etc.). For text files, it is equally important to apply gzip compression before the server sends the response; compressed text is typically 1/4 to 1/3 of the original size.

 http { gzip on; gzip_buffers 16 8k; gzip_comp_level 6; gzip_http_version 1.0; gzip_min_length 1000; gzip_proxied any; gzip_vary on; gzip_types text/xml application/xml application/atom+xml application/rss+xml application/xhtml+xml image/svg+xml text/javascript application/javascript application/x-javascript text/x-json application/json application/x-web-app-manifest+json text/css text/plain text/x-component font/opentype application/x-font-ttf application/vnd.ms-fontobject image/x-icon; gzip_disable "MSIE [1-6]\. (?!.*SV1)"; }

This part is relatively simple, with only two points to be explained:

gzip_vary outputs the Vary response header, which solves a caching problem with some cache services. For details, see my earlier blog post on Vary in the HTTP protocol.

The gzip_disable directive accepts a regular expression; when the User-Agent header in the request matches it, the response will not be gzipped. This works around problems that gzip triggers in some browsers.

By default, Nginx enables gzip only for HTTP/1.1 and later requests, because some early HTTP/1.0 clients had bugs when handling gzip. That situation can basically be ignored now, so you can specify gzip_http_version 1.0 to enable gzip for HTTP/1.0 requests as well.
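The 1/4 to 1/3 figure is easy to sanity-check at gzip level 6, the same level as the configuration above. A sketch using Python's standard library with made-up sample text:

```python
import gzip

# Repetitive HTML compresses very well; real pages typically land
# around 1/4 to 1/3 of their original size.
text = b"<html><body>" + b"<p>hello, nginx tutorial</p>" * 200 + b"</body></html>"
compressed = gzip.compress(text, compresslevel=6)
print(len(compressed) < len(text) // 3)  # True
```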

Brotli compression

Brotli is a modern compression algorithm combining an LZ77 variant, Huffman coding, and second-order context modeling. In September 2015, Google engineers released a version of Brotli for general lossless data compression with a special focus on HTTP compression: the encoder was partially rewritten to improve the compression ratio, both encoder and decoder were made faster, the streaming API was improved, and more compression quality levels were added.

Installation requires libbrotli and the ngx_brotli module; recompile Nginx with --add-module=/path/to/ngx_brotli, then configure as follows:

 http { brotli on; brotli_comp_level 6; brotli_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript image/svg+xml; }

Brotli can share a configuration file with gzip.

Static resource optimization

Static resource optimization reduces the number of connection requests, and access logging for these resource requests can be disabled. The side effect is that clients may not pick up updated resources promptly.

 server {
     # Pictures, videos
     location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|flv|mp4|ico)$ {
         expires 30d;
         access_log off;
     }
     # Fonts
     location ~ .*\.(eot|ttf|otf|woff|svg)$ {
         expires 30d;
         access_log off;
     }
     # JS and CSS
     location ~ .*\.(js|css)?$ {
         expires 7d;
         access_log off;
     }
 }

Closing~
