https://gist.github.com/denji/8359866
Moved to git repository: https://github.com/denji/nginx-tuning
NGINX Tuning For Best Performance
For this configuration you can use whichever web server you like; I chose nginx because it is what I work with most.
Generally, properly configured nginx can handle up to 400K-500K requests per second clustered; the most I have seen is 50K-80K requests per second non-clustered at about 30% CPU load. That was on 2 x Intel Xeon CPUs with Hyper-Threading enabled, but it works without problems on slower machines too.
Keep in mind that this config was used in a testing environment, not in production, so you will need to adapt most of these settings to whatever works best on your own servers.
First, you will need to install nginx
yum install nginx
apt install nginx
Back up your original config before you start reconfiguring. Then open your nginx.conf at
/etc/nginx/nginx.conf
with your favorite editor.
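For example, to keep a copy of the original before editing (the path is the usual default; adjust it to your layout):
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak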
# you must set worker processes based on your CPU cores, nginx does not benefit from setting more than that
worker_processes auto; # recent versions calculate this automatically
# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FDs then the OS settings will be used, which is 2000 by default
worker_rlimit_nofile 100000;
# only log critical errors
error_log /var/log/nginx/error.log crit;
# provides the configuration file context in which the directives that affect connection processing are specified.
events {
# determines how many clients will be served per worker
# max clients = worker_connections * worker_processes
# max clients is also limited by the number of socket connections available on the system (~64k)
worker_connections 4000;
# optimized to serve many clients with each thread, essential for linux -- for testing environment
use epoll;
# accept as many connections as possible, may flood worker connections if set too low -- for testing environment
multi_accept on;
}
http {
# cache informations about FDs, frequently accessed files
# can boost performance, but you need to test those values
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# to boost I/O on HDD we can disable access logs
access_log off;
# copies data between one FD and other from within the kernel
# faster than read() + write()
sendfile on;
# send headers in one piece, it is better than sending them one by one
tcp_nopush on;
# don't buffer data sent, good for small data bursts in real time
tcp_nodelay on;
# reduce the data that needs to be sent over network -- for testing environment
gzip on;
# gzip_static on;
gzip_min_length 10240;
gzip_comp_level 1;
gzip_vary on;
gzip_disable msie6;
gzip_proxied expired no-cache no-store private auth;
gzip_types
# text/html is always compressed by HttpGzipModule
text/css
text/javascript
text/xml
text/plain
text/x-component
application/javascript
application/x-javascript
application/json
application/xml
application/rss+xml
application/atom+xml
font/truetype
font/opentype
application/vnd.ms-fontobject
image/svg+xml;
# allow the server to close the connection on a non-responding client, this will free up memory
reset_timedout_connection on;
# request timed out -- default 60
client_body_timeout 10;
# if the client stops responding, free up memory -- default 60
send_timeout 2;
# server will close connection after this time -- default 75
keepalive_timeout 30;
# number of requests client can make over keep-alive -- for testing environment
keepalive_requests 100000;
}
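The access_log off above trades request logs for less disk I/O. If you still want access logs in production, buffered logging is a middle ground; the path, format and sizes below are assumptions, not values from the original:
# inside the http block: flush log lines in 32k batches or every 5 seconds, whichever comes first
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;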
Now you can save the config and run the command below.
nginx -s reload
/etc/init.d/nginx start|restart
If you wish to test the config first, you can run
nginx -t
/etc/init.d/nginx configtest
Just For Security Reasons
# hide the nginx version in the Server header and on error pages
server_tokens off;
NGINX Simple DDoS Defense
This is far from a complete DDoS defense, but it can slow down some small attacks. These values are also from a testing environment; you should tune them for your own traffic.
# limit the number of connections per single IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
# limit the number of requests for a given session
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;
# apply the zones defined above; here we limit the whole server
server {
limit_conn conn_limit_per_ip 10;
limit_req zone=req_limit_per_ip burst=10 nodelay;
}
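Both modules answer rejected requests with 503 by default. If you prefer the more accurate 429 Too Many Requests, the status codes can be overridden (limit_conn_status and limit_req_status, available since nginx 1.3.15); a small sketch for the same context as the directives above:
# reply with 429 instead of 503 when a client exceeds the limits
limit_conn_status 429;
limit_req_status 429;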
# if the request body size is more than the buffer size, then the entire (or partial)
# request body is written into a temporary file
client_body_buffer_size 128k;
# buffer size for reading client request header -- for testing environment
client_header_buffer_size 3m;
# maximum number and size of buffers for large headers to read from client request
large_client_header_buffers 4 256k;
# read timeout for the request body from client -- for testing environment
client_body_timeout 3m;
# how long to wait for the client to send a request header -- for testing environment
client_header_timeout 3m;
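One related knob that is not in the list above: client_max_body_size caps how large a request body nginx will accept at all (default 1m; anything larger is answered with 413). A sketch with an assumed 8 MB cap:
# reject request bodies larger than 8 MB with 413 Request Entity Too Large (default is 1m)
client_max_body_size 8m;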
Now you can test the config again
nginx -t # /etc/init.d/nginx configtest
And then reload or restart your nginx
nginx -s reload
/etc/init.d/nginx reload|restart
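On distributions that use systemd instead of SysV init scripts, the equivalents are:
systemctl reload nginx
systemctl restart nginx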
You can load-test this configuration with tsung; when you are satisfied with the results, hit Ctrl+C, because it can run for hours.
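If you just want a quick smoke test without setting up tsung, ApacheBench (not used in the original) gives a rough number; the host and counts below are placeholders:
# 10,000 requests, 100 concurrent, against the local server
ab -n 10000 -c 100 http://127.0.0.1/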
Increase The Maximum Number Of Open Files (nofile limit) – Linux
There are two ways to raise the nofile limit (max open files / file descriptors / file handles) for NGINX on RHEL/CentOS 7+. With NGINX running, check the current limit on the master process:
$ cat /proc/$(cat /var/run/nginx.pid)/limits | grep open.files
Max open files 1024 4096 files
And on the worker processes:
ps --ppid $(cat /var/run/nginx.pid) -o %p|sed '1d'|xargs -I{} cat /proc/{}/limits|grep open.files
Max open files 1024 4096 files
Max open files 1024 4096 files
Setting the worker_rlimit_nofile directive in {,/usr/local}/etc/nginx/nginx.conf alone fails, because the SELinux policy doesn't allow setrlimit. This shows up in /var/log/nginx/error.log:
2015/07/24 12:46:40 [alert] 12066#0: setrlimit(RLIMIT_NOFILE, 2342) failed (13: Permission denied)
And in /var/log/audit/audit.log:
type=AVC msg=audit(1437731200.211:366): avc: denied { setrlimit } for pid=12066 comm="nginx" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:system_r:httpd_t:s0 tclass=process
nofile limit without Systemd
# /etc/security/limits.conf
# /etc/default/nginx (ULIMIT)
$ nano /etc/security/limits.d/nginx.conf
nginx soft nofile 65536
nginx hard nofile 65536
# the new limits apply the next time the nginx service is started (sysctl -p reloads /etc/sysctl.conf and does not affect limits.conf)
$ /etc/init.d/nginx restart
nofile limit with Systemd
$ mkdir -p /etc/systemd/system/nginx.service.d
$ nano /etc/systemd/system/nginx.service.d/nginx.conf
[Service]
LimitNOFILE=30000
$ systemctl daemon-reload
$ systemctl restart nginx.service
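After the restart you can confirm the override took effect:
# the configured limit as seen by systemd
systemctl show nginx.service | grep LimitNOFILE
# and the limit on the running master process
cat /proc/$(cat /var/run/nginx.pid)/limits | grep open.files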
Set the SELinux boolean httpd_setrlimit to true (1)
This will set FD limits for the worker processes. Leave the worker_rlimit_nofile directive in {,/usr/local}/etc/nginx/nginx.conf and run the following as root:
setsebool -P httpd_setrlimit 1
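You can verify the boolean afterwards:
getsebool httpd_setrlimit
# expected output: httpd_setrlimit --> on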
DoS HTTP/1.1 and above: Range Requests
RFC 7233 - Hypertext Transfer Protocol (HTTP/1.1): Range Requests
By default max_ranges is not limited. A DoS attack can issue many Range requests per response, which impacts I/O stability.
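For example, to allow at most one byte range per request (a value of 0 disables byte-range support completely):
# http, server or location context
max_ranges 1;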
See: Module ngx_http_core_module (nginx.org)
Socket Sharding in NGINX 1.9.1+ (DragonFly BSD and Linux 3.9+)
Socket type | Latency (ms) | Latency stdev (ms) | CPU Load
Default | 15.65 | 26.59 | 0.3
accept_mutex off | 15.59 | 26.48 | 10
reuseport | 12.35 | 3.15 | 0.3
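To enable sharding, add the reuseport parameter to the listen directive; the port and server_name below are placeholders:
server {
    # each worker gets its own listening socket (SO_REUSEPORT); the kernel balances new connections
    listen 80 reuseport;
    server_name example.com;
}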
Thread Pools in NGINX Boost Performance 9x! (Linux)
See: Core functionality (nginx.org)
Multi-threaded sending of files is currently supported only on Linux. Without a sendfile_max_chunk limit, one fast connection may seize the worker process entirely.
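A minimal sketch that enables the thread pool for file sending (nginx must be built with --with-threads; the pool name, sizes and chunk value here are assumptions):
# main context: a pool of 32 threads with a task queue of 65536
thread_pool default threads=32 max_queue=65536;

http {
    sendfile on;
    # offload blocking file operations to the thread pool
    aio threads=default;
    # cap how much one fast connection can send in a single sendfile() call
    sendfile_max_chunk 512k;
}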
Selecting an upstream based on SSL protocol version
map $ssl_preread_protocol $upstream {
"" ssh.example.com:22;
"TLSv1.2" new.example.com:443;
default tls.example.com:443;
}
# ssh and https on the same port
server {
listen 192.168.0.1:443;
proxy_pass $upstream;
ssl_preread on;
}
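Note that $ssl_preread_protocol comes from ngx_stream_ssl_preread_module, so the map and server blocks above belong inside the stream {} context at the top level of nginx.conf, roughly:
stream {
    # place the map $ssl_preread_protocol ... block and the server block from above here
}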