- Documentation
- NGINX http_gzip module
- NGINX http_gunzip module
- Enable gzip. By default, NGINX won't compress responses coming from proxied servers, or any content that isn't HTML.
- But if one of our proxied servers happens to send a pre-compressed response, we probably want to decompress it for clients that can't handle gzip. That's what the gunzip module is for.
```
$ vim /etc/nginx/nginx.conf
```

```nginx
# uncomment gzip module
gzip on;
gzip_disable msie6;
gzip_proxied no-cache no-store private expired auth;
gzip_types text/plain text/css application/x-javascript application/javascript
           text/xml application/xml application/xml+rss text/javascript
           image/x-icon image/bmp image/svg+xml;
gzip_min_length 1024;
gzip_vary on;
gunzip on;
```
- Explanation:
- gzip_disable - Ensure that we’re not sending compressed responses to older versions of Internet Explorer (hopefully no one is actually using these browsers)
- gzip_proxied - Specify that we only want to compress responses from proxied servers if we normally wouldn’t cache them
- gzip_types - The Content-Types that we will compress before sending the response
- gzip_min_length - Adjusts the minimum size of a response that we'll compress. The default is 20 bytes, but we're not going to compress resources smaller than one kilobyte.
- gzip_vary - Adds a Vary: Accept-Encoding header. This tells intermediate caches (like CDNs) to treat the compressed and uncompressed version of the same resource as 2 separate entities.
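To see why a one-kilobyte floor like gzip_min_length 1024 makes sense, here's a quick local sketch (assumes the standard gzip CLI; the file names are made up): a 5-byte payload actually grows after compression because of gzip's header and trailer overhead, while a larger repetitive payload shrinks dramatically.

```shell
# Tiny payload: gzip's ~20 bytes of header/trailer outweigh any savings
printf 'hello' > tiny.txt
# Larger payload: 4 KB of repeated text compresses very well
head -c 4096 /dev/zero | tr '\0' 'a' > big.txt

gzip -k tiny.txt big.txt          # -k keeps the originals alongside the .gz files
wc -c tiny.txt tiny.txt.gz big.txt big.txt.gz
```

The byte counts show the crossover: below some small size, sending the response uncompressed is strictly cheaper.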
- Documentation
- NGINX worker_processes directive
- NGINX worker_connections directive
- The default worker_processes value is 1.
- To optimize, set it to the number of CPU cores on the server. If you have no idea how many cores your server has, set it to auto.

```
$ vim /etc/nginx/nginx.conf
```

```nginx
worker_processes auto;
```
- Adjusting worker connections
- Default worker_connections: 512
- The above number is almost guaranteed to be too low for modern day servers.
- Having too low of a limit is bad because, during a spike in traffic, a four-core server could only handle about 2k clients even with worker_processes set to auto.
- NGINX was designed to handle 10k connections over a decade ago
```
$ vim /etc/nginx/nginx.conf
```

```nginx
events {
    worker_connections 2048;
}
```
- For a four-core CPU, this would allow 8192 simultaneous connections.
- If NGINX ever hits this limit, it will log an error in /var/log/nginx/error.log
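The arithmetic behind that figure is just workers times connections per worker; a quick shell sanity check (the core count here is an assumption, matching the four-core example):

```shell
# worker_processes auto on a four-core machine -> 4 worker processes
cores=4
worker_connections=2048

# Upper bound on simultaneous connections across all workers
echo $(( cores * worker_connections ))   # prints 8192
```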
- Knowing that this will be logged, we should have some log monitoring in place so that we can see if we’ve set our limit too low.
- When running a server that is getting a lot of concurrent requests, it is possible to run into operating system limits on open file descriptors.
```
$ nginx -t
# Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: [warn] 2048 worker_connections exceed open file resource limit: 1024
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
- To see the ulimits for the nginx user:
```
$ su -s /bin/sh nginx -c "ulimit -Hn"
4096
$ su -s /bin/sh nginx -c "ulimit -Sn"
1024
```
- The 1024 that we’re seeing is the “soft” limit.
- As long as we stay within the "hard" limit of 4096, we can raise the value that NGINX uses with the worker_rlimit_nofile directive near the top of our configuration:
```
$ vim /etc/nginx/nginx.conf
```

```nginx
user nginx;
worker_processes auto;

# worker_rlimit_nofile directive
worker_rlimit_nofile 4096;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

# Load ModSecurity dynamic module
load_module /etc/nginx/modules/ngx_http_modsecurity_module.so;

events {
    worker_connections 2048;
}
```
- In the event that you run into a “Too Many Open Files” error at some point, you can adjust the hard and soft file limits for the nginx user and then change this value to match.
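As a sketch of one way to raise those limits on a systemd-managed host (the drop-in path and value below are illustrative assumptions, not from the original text), a unit drop-in can lift the open-file limit for the nginx service:

```ini
# /etc/systemd/system/nginx.service.d/limits.conf  (hypothetical drop-in)
[Service]
LimitNOFILE=8192
```

After adding a drop-in like this you'd run `systemctl daemon-reload`, restart nginx, and bump worker_rlimit_nofile to match. On hosts that don't use systemd, nofile entries in /etc/security/limits.conf play a similar role for the nginx user.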
- Utilizing keepalives
- To get the most out of open connections, we need to configure keepalives, which allow a client to hold an open TCP connection for more than a single request.
- The same connection can then be used for multiple requests without being closed and reopened. There are two spots in our configuration where we should consider using keepalives:
- Between the Client and NGINX.
- Between NGINX and an upstream server/group.
- The first type of keepalive that we're going to discuss is between the web client and the NGINX server. These keepalives are enabled by default through the keepalive_timeout directive in nginx.conf and the keepalive_requests default value of 100.
- These values can be set for an entire http context or a specific server. There's no set-in-stone rule for choosing keepalive_timeout, and the default is used quite often. The keepalive_requests directive is interesting because it's not the number of connections to keep open, but rather the number of requests a single keepalive connection can serve.
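As a sketch, setting both client-side directives explicitly at the http level might look like this (the values shown simply restate the defaults mentioned above; they are not tuned recommendations):

```nginx
http {
    # How long an idle client connection is kept open (NGINX default: 75s)
    keepalive_timeout 75s;

    # How many requests a single keepalive connection may serve before closing
    keepalive_requests 100;
}
```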
```
$ vim /etc/nginx/conf.d/photos.example.com.conf
```

```nginx
upstream photos {
    server 127.0.0.1:3000 weight=2;
    server 127.0.0.1:3100 max_fails=3 fail_timeout=20s;
    server 127.0.0.1:3101 max_fails=3 fail_timeout=20s;
    keepalive 32;
}

server {
    listen 80;
    server_name photos.example.com;
    client_max_body_size 5m;

    location / {
        proxy_pass http://photos;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location ~* \.(js|css|png|jpe?g|gif) {
        root /var/www/photos.example.com;
    }
}
```