Continuing on from our recent server posts, I'm going to discuss how we have set up our tiny little DigitalOcean droplet to serve thousands of page requests efficiently.
Originally I decided on a setup where Nginx handled SSL certificates and static resources, proxying through to Apache with PHP 7 for dynamic PHP requests. The thinking was that we'd benefit from Nginx's proven advantage in serving SSL and static resources, while keeping the power (and familiarity) of Apache and .htaccess files.
So we went about our business getting all of that set up: Nginx handling most things, and Apache on an internal port handling everything else.
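For reference, that original setup looked roughly like this. This is a minimal sketch rather than our exact config — the server name, paths, and internal port (8080) are illustrative:

```nginx
# Hypothetical sketch of the Nginx-over-Apache setup.
# Nginx terminates SSL and serves static files; everything else
# is proxied to Apache listening on an internal port.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    root /var/www/example.com/public;

    # Static assets served directly by Nginx
    location ~* \.(css|js|jpg|jpeg|png|gif|svg|woff2?)$ {
        expires 30d;
        access_log off;
    }

    # Everything else goes to Apache on the internal port
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```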
The first oversight
The first thing I noticed was that the "benefit" I originally predicted ("being easier to manage") wasn't happening at all. First we hit a few issues with proxied request URIs in WordPress, then I realised (an oversight on my part) that not only do we have to manage an Nginx config, but an Apache one too, plus all the .htaccess files as well.
Debugging issues with this setup isn't the easiest thing in the world. Not a deal breaker, but something to consider when choosing a setup.
We had read across the web that Nginx over Apache could be quite performant, as Apache is only called when it's needed; however, what we found didn't support this.
Here is a New Relic graph of our little droplet over the past 24 hours.
Everything before the outage (the red line a third of the way in from the left) was with the Nginx over Apache setup. During this time we performed 3-4 load tests using Loader.io. The site tested was http://nohalfpixels.com, a simple Laravel site with no database calls. Every test we performed was pretty disappointing. Simply testing 1000 clients over 1 minute resulted in an average response time of nearly 8 seconds!
You can also see from the graph above that, when under load, the server went into swap and took a long time to recover. I put the poor performance down to Apache after running similar tests against a static resource and having no issues at all (it doesn't even register on the graph).
At this point, feeling a little disappointed, and thinking about how many files I'd need to edit to make changes, I decided to just drop Apache and try serving the sites through Nginx and PHP-FPM. This was pretty simple to test, as all I needed to do was turn off Apache and send PHP requests to FPM.
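The change itself is small. Roughly speaking it looks like the sketch below — again illustrative rather than our exact config, and assuming a typical PHP-FPM install listening on a Unix socket (the socket path varies by distribution and PHP version):

```nginx
# Hypothetical sketch: PHP requests handed straight to PHP-FPM
# instead of being proxied to Apache.
server {
    listen 443 ssl;
    server_name example.com;

    root /var/www/example.com/public;
    index index.php;

    # Laravel-style front controller routing
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Send PHP to FPM over a Unix socket
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```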
So after a quick config change and turning Apache off, I ran the tests again. The 3 little bumps after the red line are the Nginx tests, and the last test is 5000 clients over 1 minute — five times as many users! These tests gave me average response times of just under 3 seconds when under significant load. Not bad.
Apart from the obvious performance gains, you can also see this setup gives a significant reduction in memory usage (even more important on a little VPS). I think this is partly down to not having an active Apache instance running, but mostly down to PHP 7 running via FPM rather than being embedded in each Apache process via mod_php.
So there you have it: for my use case, Nginx is simply the better choice. You could argue we should have tried Apache with MPM event and PHP-FPM, but by the time you have FPM set up (not hard), you have to wonder why you would proxy requests from Nginx through Apache at all, when it's just as easy to send them directly to FPM and save a running process on the server.
*The load bump in the graph at 6am on April 19th isn't a load test; it's our server backup script running (coming to the blog soon).