Pow with Laravel

In the past1, I’ve been a big fan of running a virtualized web server locally instead of installing PHP and Apache (or Nginx) directly on my development machine. I think I just changed my mind.

A little over a month ago, I read Elliot Jay Stocks’ article about the stuff he missed on vacation. One of those things was the release of Anvil. Anvil is really just a nice-looking front end for Pow, a local Rack web server built on Node.js. It adds some cool stuff under /etc/resolver to handle DNS for development domains, and then you just symlink a directory into ~/.pow to get a server at a domain like .dev.

For some reason I’ve never really been a fan of MAMP. I think the “zero-config” thing stuck with me when I saw Pow, though. So I started looking around2 to see if I could use Pow with PHP apps.

It turns out to be possible. There’s a gem called rack-legacy that exists for exactly this purpose. Unfortunately, rack-legacy uses the CGI version of PHP, which doesn’t ship with Mountain Lion, so you have to install PHP from source, which in turn requires installing mcrypt from source as well.

  1. Install mcrypt from source.
  2. Download the PHP source from the PHP Downloads page.
  3. `cd` into the PHP source directory and configure it: `./configure --enable-cgi --enable-mbstring --with-curl --with-xmlrpc --with-mcrypt=/usr`
  4. `make` and `make install` PHP.

So, now that we have PHP set up, install Pow, Rack, and the other dependencies.

brew install node sqlite
gem install rack rack-legacy rack-rewrite
curl get.pow.cx | sh

I think the only dependency for Pow is Node, which I installed with Homebrew. I also installed SQLite because it’s easier to use locally than MySQL, and Laravel makes it simple to switch over to a SQLite database.
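For reference, switching Laravel over to SQLite is just a config change. This is a sketch of what that looks like; the file path and key names here follow the Laravel config layout of the time, so double-check them against your version’s docs:

```php
// application/config/database.php (path and keys from memory; verify against
// your Laravel version). Laravel looks for the SQLite file under
// storage/database/.
return array(
    'default' => 'sqlite',

    'connections' => array(
        'sqlite' => array(
            'driver'   => 'sqlite',
            'database' => 'application',
        ),
    ),
);
```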

It’s finally time to set up our Laravel app. Rack apps need a config.ru file in the root, which is just a Ruby script that configures the server for that particular app. Put the following config.ru in Laravel’s public folder and point Pow at the same folder.

require 'rack'
require 'rack-legacy'
require 'rack-rewrite'

use Rack::Rewrite do
  # Serve files that actually exist on disk as-is...
  send_file %r{^([^?]+)$}, Dir.getwd + '$1', :if => Proc.new { |env|
    path = File.expand_path(Dir.getwd + env['PATH_INFO'])
    File.file?(path)
  }
  # ...and route everything else through Laravel's front controller.
  rewrite %r{^(.*)$}, '/index.php$1'
end

use Rack::Legacy::Php, Dir.getwd
run Rack::File.new Dir.getwd
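The routing decision in that config.ru boils down to a few lines of plain Ruby. This helper is purely illustrative (it isn’t part of rack-legacy or rack-rewrite): requests for files that exist on disk get served directly, and everything else is rewritten to Laravel’s front controller.

```ruby
# Illustrative only: mimics the rewrite logic above, not part of any gem.
# Returns the path Rack should actually serve for a given request path.
def resolve(docroot, path_info)
  path = File.expand_path(File.join(docroot, path_info))
  if File.file?(path)
    path_info                  # a real file on disk: serve it as-is
  else
    "/index.php#{path_info}"   # everything else goes through Laravel
  end
end
```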

That’s it.


Two main resources helped me fully understand what was going on here and how to set this up. If you’re going to be a PHP developer using Ruby tools, get used to seeing PHP referred to as “legacy development,” I guess.

  1. Legacy Development with Pow
  2. Configuration file to use PHP with rack

  1. I’ve written a few articles about setting up a VM in VirtualBox for PHP development locally, including the original, part 2, the one about Nginx, and the video.

  2. For a minute, I even considered writing my own “zero-configuration” web server that would work with PHP.

Develop for WordPress Locally: Server Setup

The first of a series where I talk about setting up a local environment to develop for WordPress.

I don’t use MAMP like many other people do, but instead run everything in an Ubuntu virtual machine with VirtualBox. There are a few things we need to do to make this setup practical, like give ourselves a local address to connect on and set up a shared folder between the host and guest OS. Then we’ll set up Nginx with PHP-FPM and MySQL so the server is ready to go for WordPress.

In the next video, I’ll download and install a development version of WordPress along with some plugins that make development much easier.


WordPress and LEMP

In the past I’ve written about running WordPress more efficiently [on Apache](http://joshbetz.com/2012/01/wordpress-low-memory/). Part of that was using Nginx to serve static files and only relying on Apache to interact with PHP. But we don’t need Apache for that. We can achieve a similar result with PHP-FPM and never have to install Apache.

Recently I rebuilt my server on the [Rackspace Cloud](http://www.rackspace.com/cloud/public/servers/). Here I’ll detail the steps involved in setting up the server from the start.

Note: I’m using Ubuntu 12.04.

#1. Initial Setup
When your server first starts up, the only user you can access is root. Rackspace will send you an email with the root password. The first thing you need to do is change that. This should be something that you have to open 1Password for every time, and definitely not “123456”.
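If you need inspiration, OpenSSL, which ships with OS X and most Linux distributions, can generate a random password for you. Eighteen random bytes base64-encode to exactly 24 characters:

```shell
# Generate a random 24-character password to paste into `passwd`
openssl rand -base64 18
```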


Then I like to add a non-root user for myself. You can add this user to the admin group, which automatically gives you sudoers access, but I’d recommend against it.[^sudosecurity] Anything that needs root privilege can be done with the root user. In fact, you probably want to stay logged in as root while we finish installing the services.

useradd -d /home/myuser -m myuser
passwd myuser

If you do want to add your user to the admin group:

groupadd admin
usermod -aG admin myuser

Next, you probably want to install any updates that are available.

apt-get update && apt-get upgrade

As a final step in the setup, I like to grab [my dotfiles](http://github.com/joshbetz/dotfiles) from GitHub so my shell is usable.

#2. Uncomplicated Firewall
Also in the name of security, set up a software firewall. Realistically, you probably only need to allow the outside world to talk to your server on port 22 (for SSH), port 80 (for HTTP), and *maybe* port 443 (for SSL).

ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow www
ufw allow 443
ufw enable

#3. Nginx, PHP-FPM, and MySQL
Let’s install a web server on this web server. I found [a great article](http://www.howtoforge.com/installing-nginx-with-php5-and-php-fpm-and-mysql-support-lemp-on-ubuntu-12.04-lts) that walks through setting up Nginx on Ubuntu 12.04. Honestly, after you do it a couple times, this becomes really easy. Nginx config files are generally pretty easy to read.

apt-get install mysql-server mysql-client
apt-get install nginx
service nginx start
apt-get install php5-fpm

I’m going to set up a default virtual host as a wild card to grab any request that isn’t to my domain and redirect it.

vim /etc/nginx/sites-available/default

server {
    server_name _;
    rewrite ^ $scheme://joshbetz.com$request_uri redirect;

    location = /50x.html {
        root /usr/share/nginx/www;
    }
}

Then we can set up the virtual host for our specific domain. The article I linked to does this a little differently. I’m going to do some WordPress-specific things right away, but you could set it up generically and come back to this later. The idea is to keep the rules specific to this site in its virtual host, and put generic rules in files we can include in case we want to set up another WordPress site on this server. Honestly, if you’re only ever going to run one site, you could simplify this into a single file.

First open `/etc/nginx/sites-available/mysite.com` with vim and paste the following, editing the relevant bits.

server {
    listen 80; ## listen for ipv4; this line is default and implied
    listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /var/www/joshbetz.com;
    index index.php index.html index.htm;

    # Make site accessible from http://joshbetz.com/
    server_name joshbetz.com jbe.me;

    include global/restrictions.conf;

    # More rules here

    include global/wordpress.conf;
}

Next up is `/etc/nginx/global/restrictions.conf`.

# Global restrictions configuration file.
# Designed to be included in any server {} block.
location = /favicon.ico {
    log_not_found off;
    access_log off;
}

location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
}

# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~ /\. {
    deny all;
}

# Deny access to any files with a .php extension in the uploads directory
# Works in sub-directory installs and also in multisite network
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~* /(?:uploads|files)/.*\.php$ {
    deny all;
}

And finally, `/etc/nginx/global/wordpress.conf`.

# WordPress single blog rules
# Designed to be included in any server {} block.

# This order might seem weird - this is attempted to match last if rules below fail.
# http://wiki.nginx.org/HttpCoreModule
location / {
    try_files $uri $uri/ /index.php?$args;
}

# Add trailing slash to */wp-admin requests.
rewrite /wp-admin$ $scheme://$host$uri/ permanent;

# Directives to send expires headers and turn off 404 error logging.
location ~* \.(?:js|css|png|jpg|jpeg|gif|ico)$ {
    expires max;
    log_not_found off;
}

# Uncomment one of the lines below for the appropriate caching plugin (if used).
#include global/wordpress-wp-super-cache.conf;
#include global/wordpress-w3-total-cache.conf;

# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/tmp/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}

Now we need to configure PHP-FPM to match the Nginx configuration. Open `/etc/php5/fpm/pool.d/www.conf` and find the line starting with `listen =`. Either comment it out and add a new directive like the one below, or just replace the localhost address with `/tmp/php5-fpm.sock`.

;listen =
listen = /tmp/php5-fpm.sock

Then restart PHP-FPM with `service php5-fpm reload`. Time to run the WordPress install!

#4. Postfix
WordPress needs to be able to send email for certain things. We’ll use Postfix for this.

apt-get install postfix

I was having a problem because the hostname of my server is joshbetz.com, but it isn’t actually a mail server. In `/etc/postfix/main.cf`, there’s a `mydestination` line. It was automatically set to “joshbetz.com, localhost.com, localhost”, which caused Postfix to intercept messages to joshbetz.com before they ever left the server. I removed “joshbetz.com” from that list and restarted Postfix to fix the problem.
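In other words, the fix is a one-line edit (the values here are from my setup; yours will differ):

```
# /etc/postfix/main.cf
# Before: mail addressed to joshbetz.com never left the box
#mydestination = joshbetz.com, localhost.com, localhost
mydestination = localhost.com, localhost
```

Restart Postfix afterward with `service postfix restart`.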

#5. Alternative PHP Cache
APC is a PHP opcode cache — it makes PHP faster. But it also has a key-value store that can be used for the [WordPress Object Cache](http://codex.wordpress.org/Class_Reference/WP_Object_Cache). Mark Jaquith has [a nice plugin](http://wordpress.org/extend/plugins/apc/) for this.

apt-get install php-apc

#6. iStat
You might know iStat from the Mac dashboard widget or the [iPhone app](http://bjango.com/iphone/istat/). There’s also a version of [iStat server for Linux](https://github.com/tiwilliam/istatd). The setup is a bit outside the scope of this article, but there are pretty good instructions in the readme on GitHub. Once it’s set up, you can access information about your server like CPU load, free memory, and disk utilization, all from your iPhone.

#7. CDN
So, I have a love/hate relationship with Rackspace’s Cloud Files. This is the only part I haven’t actually finished yet, because I can’t decide whether I want to use Cloud Files, Amazon’s CloudFront, or just skip the CDN altogether for now.

Origin pull would be really nice — if a file doesn’t exist on one of the edges, it just gets pulled from your server. Both Rackspace and Amazon claim to have origin pull. Amazon’s origin pull works exactly the way it’s supposed to. You have to think a little differently to understand Rackspace’s approach though.

First, you have to consider Cloud Files not as a CDN[^notacdn], but just a place to store your files — a file server if you like. You can then tell Cloud Files to turn on the CDN, which hooks it into Akamai’s CDN. From the store on Cloud Files, there is indeed origin pull, but Cloud Files has no way of asking your Cloud Server for a file if it doesn’t exist.

I used this as an opportunity to rebuild my virtual environment as well. I’ve written a couple of posts about virtualizing your local development environment in the past. Turns out, VirtualBox does some weird stuff to shared files sometimes.

The new server was mangling some of my JavaScript, adding null bytes to the end of files. I found [an article on serverfault](http://serverfault.com/questions/401081/nginx-serves-broken-characters-nginx-on-linux-as-guest-system-in-vbox) that addressed this issue: turning sendfile off in Nginx fixes the problem. Apparently it can happen with Apache too, but this is the first time I’ve seen it.

#Further Reading
* [Set up VirtualBox for Web Development](http://joshbetz.com/2012/01/set-up-virtualbox-for-web-development/) – An article I wrote about virtualizing your development environment in VirtualBox.
* [Virtualized Development, Part 2](http://joshbetz.com/2012/03/virtualized-development-part-2/) – A follow up article I wrote about sharing files between a VirtualBox host and client. With a similar setup, I can edit all my files in OSX, but use Ubuntu to serve them on a local domain.
* [http://codex.wordpress.org/Nginx](http://codex.wordpress.org/Nginx) – The codex entry on WordPress.org has some good stuff.

[^sudosecurity]: My recommendations about security and what you actually do in practice might not be the same. Be aware of the risks though — especially if your account has a password that might be easily cracked.

[^notacdn]: Because Cloud Files isn’t a CDN. It’s a file server that happens to have the ability to hook into Akamai’s CDN if you want it to.

Virtualized Development, Part 2

This is a follow up to my post, Set up VirtualBox for Web Development, where I describe how to configure a VirtualBox VM with two NICs so that you can develop on a local VM wherever you happen to be. I’m going to describe how to enhance that with a shared folder between your guest and host operating systems, so that changes are immediately reflected without needing to “upload” them to the virtual server.

One of the advantages to working with virtual machines for development is having a sandbox to throw stuff in. Not having to install PHP or MySQL on your local machine is nice, and if something goes wrong, just wipe it and start over (or boot from a snapshot). But wouldn’t it be nice if you could save your files and have the changes instantly reflected on the VM without going through an app like Transmit? I’ve been working with CodeKit recently, and one of the nice features is that it will refresh the browser for you automatically when you save your changes, but if you have to “upload” the files to a VM every time you save, this feature isn’t quite as useful. So, let’s fix that.

A couple of things came together in an interesting way leading up to this post. I’ve been working with CodeKit for a few weeks and it had started to become obvious that my workflow was a little flawed. Uploading to the VM after every save was getting old, even with Dropsend from TextMate. I knew about shared folders between VirtualBox host and guest, but gave up after briefly looking into it because I didn’t know how to install Guest Additions via the command line.

Then, someone showed me Vagrant. It’s an awesome app for automatically provisioning “lightweight, reproducible, and portable development environments.” I had a problem with it though; my MacBook is old and slow. Part of the automatic process is using Chef to essentially set up all the apps you need to run your environment – PHP, Apache, MySQL, etc. Everything went smoothly until Chef started doing its thing, then the CPU would jump to 100% and everything would lock up. I could limit this to 75% of the host’s CPU if I wanted, but that wasn’t the real issue. I never really gave it a chance to finish, but it took long enough that it would be completely impractical for me to wait that long every time I needed to provision a VM for a new project. So, back to plain VirtualBox.

Let’s do this already!

As I mentioned, there is a way to set up a shared folder between the guest and host – Vagrant does this automatically for you, which is the main feature I was interested in anyway. Once you have the guest OS configured the way you want it, it’s only a few steps to create the shared folder:

First, install the prerequisites for Guest Additions.

sudo apt-get install dkms
sudo apt-get install build-essential
sudo apt-get install linux-headers-$(uname -r)
sudo reboot

After the machine reboots, go to the “Devices” menu and click “Install Guest Additions”. This essentially attaches Guest Additions as a CD-ROM, just like if you were using a desktop OS, but you have to mount it manually. Then, run the Guest Additions setup.

cd /media
sudo mkdir cdrom
sudo mount /dev/cdrom /media/cdrom
cd /media/cdrom
sudo ./VBoxLinuxAdditions-x86.run

If you haven’t already, you’ll need to shut down the guest to add a shared folder in the VirtualBox settings. From VirtualBox, click on the VM you want to set up, go to “Settings”, “Shared Folders”, and add a new share. Then, mount it with the following, where “yourshare” is the name you gave the shared folder.

mkdir /path/to/mountdir
sudo mount -t vboxsf yourshare /path/to/mountdir
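If you’d rather not re-mount the share after every reboot, an `/etc/fstab` entry can do it automatically. A sketch, using the same placeholder names as above; the uid/gid of 1000 is an assumption for the first user created on Ubuntu:

```
# /etc/fstab
yourshare  /path/to/mountdir  vboxsf  defaults,uid=1000,gid=1000  0  0
```

Note that this only works once the Guest Additions kernel module is available at boot.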

Now you can just save your files into that folder and they will be shared on the guest as well. Eventually, when I get a new machine, I’ll try Vagrant again. This works great for me right now, though.


  1. The commands to install Guest Additions came straight from Michael Halls-Moore’s blog.


Instead of mounting the shared folder directly to the path you need it at, you may want to use a symbolic link to take advantage of VirtualBox’s auto-mount feature. When auto-mount is turned on, the folder will be mounted in /media under the name you gave it in VirtualBox, prefixed with ‘sf_’. To create the symbolic link, run the following command.

ln -s /media/sf_yourshare /path/to/mount

Set up VirtualBox for Web Development

I’ve used many different development styles through the years. For a while I was a big fan of using Coda to develop live on the server. Recently I’ve been using TextMate and Transmit1 more. And, while I still like how fast I can move while writing changes straight to the server, I don’t necessarily want to be doing anything that could potentially break a client’s website — even for a short time. The solution to this is pretty simple: run a virtual machine that acts just like the remote server. You can make your changes and, when you’re sure they’re right, make one push up to the live site. This probably seems pretty obvious, but I’m going to talk about how I have my VM configured so that it works even on a strange network.


We want a virtual machine that has a static address that will work even if we pick up and decide to work out of a coffee shop for a day or, say, a warehouse in Northern California. It needs to act just like a production server, which means it also needs to be connected to the web.


  1. In Preferences, set up a VirtualBox network. Since this particular network is going to be for servers, we won’t worry about turning on a DHCP server. Screenshot
  2. Create a new VM in VirtualBox. I like Ubuntu, but you can use any distro you want. Use whatever specs work for you. It would be a good idea to mirror the production machine you’ll be working with as closely as possible. Don’t worry about networking on the machine yet — that’s next.
  3. Make sure the VM is not running and open its settings. Go to the network tab and verify that adapter 1 is still set for NAT. This is how the VM will access the internet.
  4. Click over to adapter 2. Check the box to enable the adapter and set it to use the network that we set up in step 1.
  5. Start the VM and open the network configuration file. In the current version of Ubuntu this is at /etc/network/interfaces. Make sure you leave eth0 as a dynamic interface and set eth1 as a static interface for your VirtualBox network. Screenshot
  6. Restart the machine and verify that it can access the internet. Also verify that you can access the machine from your host OS on the IP address you gave it.
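For reference, the edit in step 5 ends up looking something like this; the 192.168.56.x address is just an example, so use whatever range you gave your VirtualBox network:

```
# /etc/network/interfaces
auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static
    address 192.168.56.10
    netmask 255.255.255.0
```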

Something I like to do, just to make this a little easier to work with, is to map that address to a local domain in my hosts file. Something like local.dev seems to be popular, but you can name it whatever you want. Obviously if you name it google.com you’ll need to start using Bing or something. 😉
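The hosts entry is a single line; the address is whatever you assigned to the static interface in the previous step:

```
# /etc/hosts (on the host OS)
192.168.56.10   local.dev
```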

I realize this was a pretty quick overview of the process, so if there’s anything that was unclear, let me know in the comments or shoot me an email.

  1. TextMate with the Transmit bundle really makes Dropsend amazing. If you don’t know what I’m talking about, look for the Transmit bundle for TextMate and check out the “secrets of Transmit” blog post that Panic wrote a while back.