As with any web server, tuning the worker processes is a sort of voodoo art. Ask a hundred system administrators and you will probably get a hundred different opinions. Before we go deep into the topic, let's understand what an nginx worker is. An nginx worker is a process in memory which "takes care" of clients' requests. There is a minimum of one worker process in an nginx environment. Deciding how many workers you actually need is part of the job of a good system administrator.
NGINX offers the ability to configure how many worker processes must be spawned through the core configuration parameter "worker_processes" (http://nginx.org/en/docs/ngx_core_module.html#worker_processes).
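As a minimal sketch, the directive sits at the top level of nginx.conf (the "main" context); the 4-core figure below is just an illustrative assumption, and recent nginx versions (1.3.8 and later) also accept the value "auto", which spawns one worker per detected core:

```nginx
# nginx.conf (main context)
worker_processes 4;      # e.g. one worker per core on a 4-core machine
# worker_processes auto; # let nginx detect the core count (nginx >= 1.3.8)
```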
Now, the temptation for many newcomers is either to set a very high number of worker processes or to ignore the parameter altogether.
The result is either over-complicated process management (without enough CPU power to back it) or failing to get the best out of the available hardware.
This is much like the endless story of how many Apache workers you need to configure in a typical mpm_prefork environment. The reality is, there is no magic recipe for it.
The number of workers you should configure on your NGINX server depends on different factors:
- The role of the server
- The number of CPUs available
- The storage system landscape
- The disk I/O requirements (pay attention to caching which is disk I/O intensive)
- SSL encryption support
- Data compression support (gzip)
Why should all these factors be considered?
Let me try to explain, point by point.
The role of the server
The role of the server and the architecture of your web server solution are very important when counting the number of workers you should configure. For instance, a stack where NGINX runs on the same machine that serves your Django application (through WSGI, for instance) is very different from a stack where NGINX runs on a separate machine and proxies to the Django application. In the first case there will be competition for the cores; in the second case, a typical reverse proxy scenario, the cores are free and can be allocated pretty much entirely to NGINX.
It would be very inefficient to run a full LEMP (Linux, nginx, MySQL, Perl/Python/PHP) stack on a server with 4 cores and allocate 4 nginx workers. What about MySQL's needs? What about your PHP/Python needs?
The number of CPUs available
This parameter is very important, since it does not make much sense to overload your system with more "loaded" processes than you have CPUs. Nginx is very efficient at using CPUs: if your workers don't each saturate a full CPU, you most probably don't need more of them. Remember that nginx workers are single-threaded processes.
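A quick way to check how many cores you actually have to work with (a sketch using standard Linux tools; `nproc` comes from GNU coreutils, with a POSIX `getconf` fallback):

```shell
# Count the CPU cores visible to the OS; fall back to getconf if nproc is absent.
cores=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN)
echo "cores available: $cores"
```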
The storage system landscape
Parallelism is not only about CPUs. If your web server uses a lot of cached items, it will surely do heavy I/O on your storage system. What about having the full content of your web server, OS included, on the same physical volume? Would you benefit from having many processes all demanding pretty much 100% of that single physical volume's I/O bandwidth? Think about it. Try to split the I/O load across multiple disk systems before you actually think about increasing the number of workers. If I/O is not a problem (e.g. for some proxy services), then ignore this point.
The disk I/O requirements
This is linked very much to the previous points. Disk I/O is one of the few situations where nginx can find itself in a lock/wait situation. There are possible ways to work around this situation, but none of them is a magic recipe (see AIO and sendfile).
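Those mitigations are configured with the sendfile and aio directives. A minimal sketch (the location path and the 4 MB threshold are illustrative assumptions; "aio threads" requires nginx built with thread-pool support, 1.7.11 or later):

```nginx
location /downloads/ {
    sendfile  on;       # kernel-side file transmission, avoids copying through userspace
    aio       threads;  # offload blocking disk reads to a thread pool (nginx >= 1.7.11)
    directio  4m;       # bypass the page cache for files larger than 4 MB
}
```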
SSL encryption support
Is your web server making heavy use of SSL? If so, remember that SSL requires additional CPU power compared to plain HTTP. Why? Look at the SSL encryption algorithms supported nowadays: you will see a lot of math in there, and you will realize why we talk about CPU consumption. To make a long story short, if you use SSL, expect the CPU usage to be higher than without it. How much higher? That depends very much on your application's usage pattern, the number of GET/POST operations performed, the average size of your responses, and so on.
Data compression support
Compressing responses is a very good way to limit the bandwidth used by your environment. It is nice and good, but it costs CPU cycles.
Every response needs to be compressed and, depending on the compression level you set, the algorithm behind it will cost you in terms of CPU consumption.
The computational cost of gzip support is governed by the configuration parameter "gzip_comp_level" of the ngx_http_gzip_module module (http://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_comp_level).
It accepts a value from 1 to 9: the higher you go, the better the compression results (probably), and the higher the CPU load on your average transactions.
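As a sketch, a mid-range level is usually a reasonable trade-off between CPU cost and bandwidth savings; the specific level, minimum length, and MIME types below are illustrative choices, not universal defaults:

```nginx
gzip            on;
gzip_comp_level 5;     # 1 = fastest, 9 = smallest; mid-range balances CPU vs. size
gzip_min_length 1024;  # don't waste CPU cycles compressing tiny responses
gzip_types      text/css application/javascript application/json;
```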
Having covered the points above, it should be quite evident that there is no magic recipe.
Let me say that a very common and easy approach is to allocate one worker per CPU. This is the basic setup, and I am pretty sure it works flawlessly in most environments.
But, as already said, this is not always the best approach. It works, but it is not optimal.
For instance, on a typical reverse proxy, people tend to allocate one and a half workers per core, or even two workers per core. Let me say that if you don't do much SSL, 2 workers per core works well. Why? Because most of the time NGINX will be waiting for the back-end systems to generate the response, and because the machine running nginx is not busy running your application's logic, so its CPU load is at a minimum.
If you do extensive caching, considering the way NGINX does it (very efficiently, but I/O-intensively), I would take a more conservative approach: start with 1 worker per core and monitor the situation; if the machine behaves well, try moving to a 1.5 workers-per-core ratio.
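To make the ratios concrete: on a hypothetical 4-core machine dedicated to reverse proxying (little SSL, little caching), the 1.5x rule of thumb above would give:

```nginx
# 4 cores x 1.5 workers/core = 6 workers (round up if the math is fractional)
worker_processes 6;
events {
    worker_connections 1024;  # per-worker connection cap; total capacity ~ 6 x 1024
}
```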
If you would like more information about this topic, or you would like advice on the number of workers you need, leave a comment on this page and I will do my best to support you.