I run a 256MB Cloud Server from Rackspace. As far as web servers go, you’re not going to find anything with less RAM. As long as you don’t have to deal with a ton of traffic, you can make this work pretty well. It’s taken me quite a bit of testing and research to get to where I am right now, so I wanted to share some of the things I’ve done in the never-ending process of optimizing servers and increasing speed.

I work mainly with WordPress, which puts somewhat of a limitation on what we can do. For example, if you start looking for ways to speed up Apache, one of the first things you’ll find is to turn off .htaccess. While we could do that, it would take more modification of core WordPress than I’m really willing to handle. If I’m going to be modifying anything, I’d rather have it be my web server configuration.

The main problem is that Apache really isn’t made for such harsh conditions. If you’re running a server with 4GB of RAM, you have some room for bloated software. The first tip can help speed up any web server, though: install Nginx as a front-end to Apache. You could also use lighttpd, but I’ve read more about Nginx, and since that’s the solution I went with, that’s what I’ll document here.
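Assuming a Debian or Ubuntu image (which matches the www-data user and sites-enabled layout in the configs below), installing Nginx is a single command; adjust for your distribution:

sudo apt-get install nginx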

Nginx is far more efficient at serving static files than Apache is. The reason we’re not moving away from Apache entirely is that it’s easier for most people to work with when serving dynamic requests. So we want any static resources to be served by Nginx, and everything else gets passed through to Apache. We’ll also move Apache to port 81 and make sure it can’t be accessed from outside the server. That way, static requests are served straight from Nginx back across the web to our user, and only the dynamic ones take the extra hop to Apache.

One thing you’ll want to do before you install Nginx is turn off KeepAlive in the Apache configuration. Since Nginx will be the front-end, we’ll let that handle the KeepAlive requests.
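On Debian-style layouts the KeepAlive directive usually lives in /etc/apache2/apache2.conf; wherever yours is, the change is a single line:

KeepAlive Off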

I have a pretty simple Nginx configuration in nginx.conf:

user www-data;
worker_processes 4;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    # multi_accept on;
}

http {
    include /etc/nginx/mime.types;

    access_log /var/log/nginx/access.log;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;
    tcp_nodelay on;

    gzip on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

And then the virtual host for this site looks like this:

server {
    listen 80;
    access_log /var/log/nginx/joshbetz.com.access.log;
    server_name joshbetz.com;
    root /var/www/joshbetz.com;

    location / {
        #root /var/www/joshbetz.com;
        index index.html index.php;
        error_page 404 = @dynamic;
    }

    location ~ \.php$ {
        proxy_pass http://localhost:81;
        proxy_buffering on;
        proxy_buffers 12 12k;
        proxy_redirect off;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location @dynamic {
        proxy_pass http://localhost:81;
        proxy_buffering on;
        proxy_buffers 12 12k;
        proxy_redirect off;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ /\.ht {
        deny all;
    }
}

There was a lot of research and trial and error involved in building this, but the basic idea is that we want to serve any static files directly, while anything that ends in .php gets forwarded on to Apache. You’ll notice the proxy_pass http://localhost:81; directive in both proxied locations.
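It’s worth checking the syntax before reloading whenever you touch the Nginx files; on a Debian-style install that looks something like this:

sudo nginx -t
sudo /etc/init.d/nginx reload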

Since Nginx is now proxying to port 81, we’ll want to make sure Apache is actually listening there. In ports.conf, change the NameVirtualHost line and configure Apache to listen only on the local interface with these lines:

NameVirtualHost *:81
Listen 127.0.0.1:81

Make sure you also change any named virtual host that’s defined for port 80 over to port 81.
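For example, if a site in sites-available opens with <VirtualHost *:80>, the opening tag should now read:

<VirtualHost *:81>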

Restarting Apache and Nginx should already give a speed increase, but there’s one more thing to do to make sure the site stays fast. First, make sure the MPM module you’re running is prefork. Then change the prefork configuration in apache2.conf: look for the block that references mpm_prefork_module and change it to something like the following:

<IfModule mpm_prefork_module>
    StartServers 1
    MinSpareServers 1
    MaxSpareServers 5
    MaxClients 50
    MaxRequestsPerChild 500
</IfModule>

Apache processes are big, and if you let them run forever they’ll become more and more bloated. What we’re doing here is limiting the number of Apache processes that can run and making sure they don’t last too long. With MaxRequestsPerChild set to 0, Apache processes can live forever. Setting it closer to something like 500 is good for an application like WordPress that may or may not be super awesome at managing memory.
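If you want to sanity-check how much each process weighs on your own box, something like this (assuming the processes are named apache2, as on Debian-style installs) lists the resident memory of each worker in kilobytes, so you can judge how many will actually fit in RAM:

ps -o rss,cmd -C apache2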

Test your configuration

Sometimes it’s hard to tell whether changes to your configuration had any effect or, more importantly, whether they’ll hold up. Running Apache Bench can give you some insight into both questions.

After I make a change, I like to restart Apache and Nginx and load a few pages to see what the experience is like. Then I’ll run Apache Bench. I don’t always run it. Something that’s important to learn about the web and user experience in general is that if it feels right, it’s probably right. The more you deal with interfaces and user interactions, the more you’ll learn what’s good and what’s not. If it feels fast enough for you, it’s probably fast enough for most people. That being said, Apache Bench can expose problems you might not discover otherwise, so it’s good to run it from time to time.

I usually run it with the same settings (100 requests at a concurrency of 10), but it doesn’t hurt to experiment.

ab -n 100 -c 10 http://site.to.test/
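The headline figure in the output is Requests per second; Failed requests and the Time per request breakdown are worth a glance too.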

With the configuration I’ve gone over in this article, I get ~54 requests/second on a clean WordPress install with the default Twenty Eleven theme. With my custom theme, which is pretty heavy, occasionally fetches my Twitter feed, and has actual posts to load, I get ~18 requests/second.

My configuration isn’t perfect, so if you see something that looks wrong or have an idea about something I should do differently, let me know; I’d love to talk.

Resources