Background
The Ogre3D website has been running on a dedicated server for about 7 years now, which is relatively expensive. When we moved away from the shared hosting that Sourceforge generously provided (and which we had outgrown), our initial foray with a VPS (at the time, lighttpd on Linode) proved inadequate for our needs, so after a month of futile tuning we gave up and went fully dedicated.
Time has moved on of course, and virtualisation technology is considerably better than it was in 2005. I’d intended to try again soon anyway to reduce Ogre’s overheads, but our Adsense revenue was still covering the cost and I hadn’t got around to it yet. Then suddenly Google pulled our ads after a mistaken (I believe automated) conclusion that we were hosting copyrighted material - a few users had posted test binaries of their own work on MediaFire and similar ‘red flag’ download sites - and all of a sudden we were leaking money. The misunderstanding was sorted out with Google within a few days, but even so it illustrated that we should probably move to a cheaper solution if we can, so we have less exposure.
The Ogre site’s main performance issue is Apache’s memory usage under load, so given that a VPS is more constrained, I wanted to address that. Enter Nginx, stage right.
Choosing a VPS & platform
After sounding out the community, I discovered that Linode was still very highly respected, and their prices were great too. After signing up (again) I discovered their support was fantastic too - my initial setup questions were answered in minutes, even on a weekend. Very impressive.
For me, a web server means Linux, and my preference is for systems based on APT. I’ve run both Debian and Ubuntu servers before, and I felt that Ubuntu LTS gave me the best blend of stability and recent package access, so that’s what I decided to use. Unfortunately Ubuntu was on the cusp of releasing a new LTS (12.04), so the version Linode offered was 10.04, which is getting old. Nginx has evolved considerably in the last couple of years, and so has PHP-FPM, the best separate PHP daemon available now, meaning I’d have had to build a lot of things from source on 10.04. But I knew that 12.04 was going to be stable soon (and indeed, it is now), and Linode support said they’d be supporting it after that, subject to testing, so I chose to install their stock 10.04 and immediately upgrade to the (then still tagged as development) 12.04, which went very smoothly. That made things considerably easier for me, because I could install everything straight from apt-get.
The setup
Nginx: high performance web server with a focus on low memory usage and high concurrency
PHP-FPM: FastCGI implementation of PHP which uses pools to adapt resource usage dynamically depending on load
APC: PHP opcode cache to avoid unnecessary recompilation
All these packages were installed simply with ‘apt-get install nginx php5-fpm php-apc’. I also installed the usual suite of packages you’d normally expect, such as MySQL, ImageMagick, sendmail etc.
Configuration
Once you have everything installed, you should have two services operating your web server: the nginx service (the front end) and the php5-fpm service, which serves up PHP content.
PHP configuration
I set PHP-FPM to run on a unix socket rather than a network port, because that seemed simpler and more secure given that I had no intention of the PHP daemon executing scripts from any other host. All the settings for a given pool of connections are configured in /etc/php5/fpm/pool.d/www.conf on Ubuntu, where you set the ‘listen’ option to a socket address, e.g. ‘listen = /var/run/php5-fpm.sock’. I also increased the pool settings so the pool ranges between 10 and 20 server processes, and set the maximum number of requests each server handles before being recycled to 200, to mitigate any memory leaks in PHP apps. I’ve yet to test the actual numbers under load, so I may refine these over time; see this optimisation article for more details.
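For reference, the relevant pool settings in /etc/php5/fpm/pool.d/www.conf look something like the sketch below; the exact numbers are my initial guesses pending load testing, and your defaults may differ:

```ini
; Listen on a unix socket instead of a TCP port
listen = /var/run/php5-fpm.sock

; Dynamically sized pool, floating between 10 and 20 worker processes
pm = dynamic
pm.max_children = 20
pm.start_servers = 10
pm.min_spare_servers = 10
pm.max_spare_servers = 20

; Recycle each worker after 200 requests to mitigate PHP memory leaks
pm.max_requests = 200
```

With ‘pm = dynamic’, FPM spawns and reaps workers between the spare-server bounds as load varies, which is exactly the adaptive behaviour that makes it a good fit for a memory-constrained VPS.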
The other thing is that for Nginx, the following php.ini setting is needed: ‘cgi.fix_pathinfo = 0;’, which I set in /etc/php5/fpm/php.ini. This makes sure that you’re not susceptible to a nasty hack like this (note that the WordPress setup for Nginx linked below does work with this change, and includes the second hack-prevention option on that list too).
Nginx configuration
Configuring Nginx to talk to the PHP-FPM daemon is pretty simple and has been covered elsewhere; if you’re running WordPress, I advise you just use their own Nginx configuration page. If you’re running anything else, read this article, which covers some important security issues that some other tutorials do not. It’s worth noting that once you’ve got one block for matching PHP files (like the one provided for WordPress, which is also nice and secure) you probably don’t need another one, unless your block is specific to a subdirectory.
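As a rough sketch (assuming the unix socket path from the FPM pool configuration, and PHP files living under the site root), the PHP-matching block in the Nginx site configuration looks something like this:

```nginx
location ~ \.php$ {
    # Refuse requests for PHP files that don't exist on disk; together
    # with cgi.fix_pathinfo = 0 this blocks the pathinfo exploit where
    # /uploads/image.jpg/fake.php would otherwise execute image.jpg as PHP
    try_files $uri =404;

    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    # Hand the request to PHP-FPM over the unix socket configured earlier
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
```

Treat this as illustrative rather than definitive; the WordPress and security articles mentioned above are the authoritative versions.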
One of the most important things to note about Nginx is that it doesn’t support .htaccess files. All the rules that would usually appear in those files have to be converted into entries in the Nginx site configuration files. Luckily, many common web apps already have Nginx rules written for them, such as the aforementioned WordPress Nginx page, and many don’t require special rules at all (such as phpBB). For the rest, you have to convert the .htaccess contents yourself, although in practice I didn’t find this too onerous for simple rewrite rules.
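To give a flavour of how mechanical the conversion usually is, here’s a made-up example (the paths and parameter names are purely illustrative, not from any real app):

```nginx
# Apache .htaccess original, for comparison:
#   RewriteEngine On
#   RewriteRule ^article/(.+)$ index.php?page=$1 [L,QSA]

# The equivalent in the Nginx site configuration:
location /article/ {
    # 'last' stops rewrite processing, similar to Apache's [L] flag;
    # query-string args are appended automatically, like [QSA]
    rewrite ^/article/(.+)$ /index.php?page=$1 last;
}
```

The main mental shift is that rules live centrally in the server configuration rather than per-directory, so a reload is needed after changes.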
APC configuration
This didn’t actually require any work at all: once installed, it’s automatically registered in /etc/php5/fpm/conf.d/, which means it’s enabled. Check phpinfo() to make sure.
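For the curious, the file the package drops in place is essentially a one-liner (the exact filename may vary by package version); if phpinfo() shows an ‘apc’ section, the opcode cache is active:

```ini
; /etc/php5/fpm/conf.d/apc.ini - installed automatically by php-apc
extension=apc.so
```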
Nginx rewrite rule problems
Although converting rewrite rules was generally quite easy, I did have one major issue with Nginx that was specific to Tiki. Its wiki encodes space characters in page names as ‘+’ and uses these in links, which is fine when they appear in a query string, e.g. http://something.org/tiki/tiki-index.php?page=Some+Page. However, when you tell it to generate short page links, Tiki simply drops the ‘tiki-index.php?page=’ part when outputting links and expects a rewrite rule to put it back later, so your URLs become something like http://something.org/tiki/Some+Page. On Apache (and lighttpd), rewriting this back to http://something.org/tiki/tiki-index.php?page=Some+Page works fine, but on Nginx it doesn’t. The reason is that while Apache takes the URI as-is when rewriting it, Nginx decodes the URI, rewrites it, then re-encodes it when passing it on. According to RFC 3986, using ‘+’ outside of the scheme or query string of a URI, as the Tiki short links do, is invalid, so Nginx re-encodes the ‘+’ as %2B when it passes the result on, breaking the page name.
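To make the failure concrete, here’s roughly what the short-link rewrite looks like (a sketch, not Tiki’s official rule set), with the behaviour described above annotated:

```nginx
location /tiki/ {
    # Map /tiki/Some+Page back to /tiki/tiki-index.php?page=Some+Page
    rewrite ^/tiki/([^/]+)$ /tiki/tiki-index.php?page=$1 last;
}

# On Apache the '+' in the path survives the rewrite untouched. Nginx,
# however, decodes the URI before matching and re-encodes it afterwards,
# and since '+' is only meaningful inside a query string per RFC 3986,
# it comes back out as %2B - so the app sees the page name 'Some%2BPage'.
```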
I’ve used an imperfect hack in the Tiki code to ‘fix’ this for now; Tiki is clearly at fault for generating non-compliant URIs, which we’ll be taking up with them. However, it’s an important lesson that apps may rely on undocumented or overly tolerant behaviour in Apache, which can cause problems when moving to Nginx.
A quick aside - lighttpd
When I hit the problem with Tiki / Nginx, I decided to give lighttpd a try instead, to check whether it behaved like Apache and to compare how it ran against Nginx, in case it might be an easy workaround. I configured it to run through the same PHP-FPM back-end. It also needs rewrite rules to be converted, but I was getting to be a dab hand at that by then! I discovered a few things:
- Lighttpd behaves like Apache when rewriting so tolerated Tiki’s dodgy short links in the same way
- Lighttpd felt a bit slower (subjectively from a client perspective) while using the same or more memory
- For some reason, the PHP-FPM daemon kept falling over when lighttpd was driving it, making the site unavailable after about 30 minutes of use. I have no idea why, since it’s the same FPM daemon Nginx was using, and the errors were very vague. All I know is that the daemon never seems to fail when Nginx is the front end.
Conclusions (so far)
Nginx seems like a very efficient web server, and coupled with PHP-FPM it seems to be a great way to serve most common dynamic content apps. Apache processes can quickly get bogged down, and getting positive results from tuning can be very awkward and time-consuming. I’m sure Apache would benefit from externalising PHP via FPM too (I’ve always run the regular mod_php in the past), but once you’ve done that, it makes little sense to have a heavyweight server just dealing with incoming connections; a lightweight, highly concurrent front end makes much more sense.
I won’t know for sure how well this system copes until I put it live, but it’s looking promising. I’ll post an update after we deploy it to let you know if the potential was actually fulfilled.