Frequently Asked Questions
Please review this entire FAQ before querying IRC or sending questions to the mailing list.
How do you pronounce "Nginx"?
The correct pronunciation sounds like: "engine-ex". (Next question: "What does that mean?" - We don't know, exactly.)
Is it safe to use the development branch in production?
In general, all releases (development or otherwise) are quite stable. This site runs the latest development version at all times. Nginx users as a whole tend to represent an "early adopter" crowd, so a large segment is using the bleeding edge version at any given point.
That said, if stability is crucial it is best to briefly hold off on deployment after a development release; critical bugs tend to show up within the first couple days (which often results in another release immediately afterwards). If no new release shows up in two or three days, then it's likely no one has found any critical bugs. In the event that you discover a bug, capture a debug log and submit a descriptive bug report!
How do I generate an .htpasswd file without having Apache tools installed?
- On Linux (or any POSIX system): given users named John and Mary and passwords V3Ry and SEcRe7, to generate a password file named .htpasswd, you would issue:
- printf "John:%s\n" "$(openssl passwd -crypt V3Ry)" >> .htpasswd # this example uses crypt encryption
- printf "Mary:%s\n" "$(openssl passwd -apr1 SEcRe7)" >> .htpasswd # this example uses apr1 (Apache MD5) encryption
- TODO: example for SSHA (Salted Secure Hash Algorithm).
- Or, you may use the htpasswd.py Python script.
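Once the file exists, it can be referenced from the Nginx configuration via the auth_basic directives. A minimal sketch (the realm string, server name, and file path here are illustrative):

```nginx
server {
    listen 80;
    server_name example.com;

    location /protected/ {
        # Prompt for credentials; the realm string is arbitrary
        auth_basic           "Restricted Area";
        # Path to the file generated with the commands above
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
```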
Why isn't my $foo (e.g. rewrite, proxy, location, unix:/$PATH, etc.) configuration working for me?
Start by investigating possible problem causes. Review NginxDebugging and carefully look LINE BY LINE through the error log.
If you can't determine the problem cause through testing, experimentation, searches on the 'net, etc., then gather all relevant details and clearly explain your problem on IRC or in a note to the mailing list. (If you are new to interacting with FOSS support communities, please read: How To Ask Questions The Smart Way.)
Are there other, similar webservers?
What most people mean by "similar" in this context is: "lightweight" or "not Apache". You can find many comparisons using Google, but most web servers fall into two categories: process-based (forking or threaded) and asynchronous. Nginx and Lighttpd are probably the two best-known asynchronous servers and Apache is undoubtedly the best known process-based server. Cherokee is a lesser-known process-based server (but with very high performance).
The main advantage of the asynchronous approach is scalability. In a process-based server, each simultaneous connection requires a dedicated process or thread, which incurs significant overhead. An asynchronous server, by contrast, is event-driven and handles all requests in a single thread (or at most a handful).
While process-based servers can often perform on par with asynchronous servers under light load, under heavier loads they usually consume far too much RAM, which significantly degrades performance. They also degrade much faster on less powerful hardware or in resource-restricted environments such as a VPS.
Pulling numbers from thin air for illustrative purposes: serving 10,000 simultaneous connections would probably only cause Nginx to use a few megabytes of RAM, while Apache would likely consume hundreds of megabytes (if it could do it at all).
Is support for chroot planned?
Unknown at this time. Unless/until that changes, you can achieve a similar - or better - effect by using OS-level features (e.g. BSD Jails, OpenVZ w/ proxyarp on Linux, etc.).
What about support for something like mod_suexec?
mod_suexec is a solution to a problem that Nginx does not have. When running servers such as Apache, each instance consumes a significant amount of RAM, so it becomes important to have only a single, monolithic instance that handles all of one's needs. With Nginx, memory and CPU utilization are so low that running dozens of instances of it is not an issue.
A setup comparable to Apache + mod_suexec is to run a separate instance of Nginx as the CGI script user (i.e. the user that would have been specified as the suexec user under Apache), and then proxy to it from the main Nginx instance.
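As a sketch of that approach, the main instance could forward a user's URLs to a second instance started under that user's account; the port number, path, and user name below are illustrative assumptions:

```nginx
# Main instance (runs as the usual nginx user)
server {
    listen 80;
    server_name example.com;

    location /~john/ {
        # Forward to a second Nginx instance started as user "john",
        # e.g. with: nginx -c /home/john/nginx.conf
        # (that instance listens on 127.0.0.1:8081)
        proxy_pass http://127.0.0.1:8081;
    }
}
```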
Alternatively, PHP could simply be executed through FastCGI, which itself would be running under a CGI script user account. (Note that mod_php - the module suexec is normally utilized to defend against - does not exist with Nginx.)
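The FastCGI variant might look like the following sketch; the socket address, document root, and the way php-cgi is launched are assumptions, not a prescribed setup:

```nginx
location ~ \.php$ {
    # php-cgi started separately under the script user's account, e.g.:
    #   su -s /bin/sh scriptuser -c "php-cgi -b 127.0.0.1:9000"
    fastcgi_pass  127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
    include       fastcgi_params;
}
```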
What's this @ thing mean?
@location is a named location. Named locations preserve $uri as it was before entering the location. They were introduced in 0.6.6 and can be reached only via error_page, post_action (since 0.6.26) and try_files (since 0.7.27, backported to 0.6.36).
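A minimal sketch of a named location reached via try_files (the root path and backend address are illustrative):

```nginx
server {
    listen 80;
    root /var/www;

    location / {
        # Serve the file if it exists; otherwise hand off
        # to the named location (try_files requires 0.7.27+)
        try_files $uri @fallback;
    }

    location @fallback {
        # $uri here is still the original request URI
        proxy_pass http://127.0.0.1:8080;
    }
}
```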
For which general use cases is Nginx more appropriate than Squid? (And vice versa...)
Nginx is generally deployed as a reverse proxy, not as a caching proxy (like Squid). The key advantage with Nginx is its minimal RAM and CPU usage under heavy load. Squid is best applied to cache dynamic content for applications that cannot do it themselves.
The proxy module offers configurations for caching upstream servers.
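As a sketch of upstream caching with the proxy module (proxy_cache is available from 0.7.44; the cache path, zone name, sizes, and backend address below are illustrative):

```nginx
http {
    # Define where cached responses are stored and a shared
    # memory zone for cache keys
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m;

    server {
        listen 80;

        location / {
            proxy_pass        http://127.0.0.1:8080;
            proxy_cache       mycache;
            # Cache successful responses for 10 minutes
            proxy_cache_valid 200 10m;
        }
    }
}
```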
Can I disable the buffering for upload progress? // How can I display upload progress on the client side?
These are both very frequently asked questions. Currently the only solution is the third-party module Nginx Upload Progress. (This functionality is planned for a future release of Nginx.)
Could someone explain how to configure and test the IMAP module (with a complete .conf example)?
- Using a PHP script on Apache server as the auth backend
- Using Nginx embedded Perl module on the same server as the POP/IMAP proxy as the auth backend
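A minimal mail-proxy sketch for either setup might look like this; the auth backend URL is an assumption (the script at that address must return the real server via the Auth-Server and Auth-Port response headers):

```nginx
mail {
    # HTTP auth backend; could be the PHP script on Apache or the
    # embedded-Perl handler mentioned above
    auth_http 127.0.0.1:80/auth.php;

    server {
        listen   110;
        protocol pop3;
        proxy    on;
    }

    server {
        listen   143;
        protocol imap;
        proxy    on;
    }
}
```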
How can Nginx be deployed as an SMTP proxy, with a Postfix backend?
(Have you solved this problem? Please consider giving back by documenting your solution here.)
What algorithm does Nginx use to load balance? Can it balance based on connection load?
Currently, Nginx uses a simple round robin algorithm. It cannot balance based on connection load. (This may change in a future release.)
There is a third-party module, Http Upstream Hash, that uses a form of hash load balancing.
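The built-in round robin balancer is configured with an upstream block; weights and failure handling are supported even though connection-load balancing is not. The addresses and values here are illustrative:

```nginx
upstream backend {
    # Round robin; weight=2 means this server receives
    # twice as many requests as the others
    server 10.0.0.1 weight=2;
    server 10.0.0.2;
    # Mark this server as failed after 3 errors within 30s
    server 10.0.0.3 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
```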
Note: Many users have requested that Nginx implement a feature in the load balancer to limit the number of requests per backend (usually to one). While support for this is planned, it's worth mentioning that demand for this feature is rooted in misbehaviour on the part of the application being proxied to (Ruby on Rails seems to be one example). This is not an Nginx issue. In an ideal world, this fix request would be directed toward the backend application and its ability to handle simultaneous requests.