About a month ago I wrote about how cool it was to migrate to HHVM on OpenShift. The custom cartridge with Nginx + HHVM and MariaDB was running fast and I was really excited about the new stack. I was landing in a new, beautiful world.

About a week later I ran into disk space problems. My files took up only 560 MB of the 1 GB quota, but the OpenShift shell reported 100% disk usage (and a nice blank page on the home page, because the cache couldn't be written). I couldn't figure out why it was giving me that error; it was probably caused by log files the custom cartridge wrote somewhere else in the filesystem. No idea. Anyway, I had no time to dig deeper, so I bought 1 more GB of storage.

The day after I bought the storage, the blog's speed went down. It was almost impossible to open the blog, and CloudFlare returned a timeout for half of the requests. Blog visits started to fall and I had no idea how to fix that. Some weeks later I discovered some troubles with My Coderwall Badges and Simple Sharer Button Adder but, in the OpenShift environment, I had no external caching system to handle that kind of problem.

I didn’t want to go back to MySQL and Apache, but trashing all my articles wasn’t fun either, so I chose something I had rejected 3 years ago: a standalone server.


My first choice was Scaleway. It’s trendy and bare metal: 3.5€ for a 4-core ARM server (a very hipster choice) with 2 GB RAM and a 50 GB SSD. The new interface is cool, better than Linode’s and DigitalOcean’s, and servers and resources are managed easily. Unfortunately HHVM is still experimental on ARM, and the SSDs are on a SAN and aren’t that fast (100 MB/s).

My next choice was OVH. The new 2016 VPS SSD plans (available in Canadian datacenters) are cheap enough ($3.50) and offer a virtual Xeon core with 2 GB RAM and a 10 GB SSD. Multicore performance is lower and you get a lot less storage, but it’s an x86-64 architecture and the SSD is faster (250 MB/s). I took this one!

Unfortunately my preferences haven’t changed since my first post. I’m still a developer, not a sysadmin. I’m not a master of Linux configuration, my stack has several moving parts, and my blog was still unavailable. My beautiful migration to cutting-edge technologies became an emergency landing.

Luckily I found several online tutorials explaining how to master the WordPress stack. Over the next few days I completed the migration, and my new stack now runs on Pound, Varnish, Nginx, HHVM, PHP-FPM and MariaDB. I hope to have enough time in the coming days to publish all the useful stuff I used for the configuration.

For the moment, I’m proud to share the average response time of the home page: 342ms 🙂

Almost every public website relies on Google Analytics for statistics. CloudFlare offers an app to manage the tracking system without editing your own code. It is a very useful feature but has some limitations.

Recently I worked on a network of WordPress-based blogs available under different (and strange, I know) URLs:

  1. http://example.com
  2. http://en.example.com
  3. http://example.com/path/to/blog

Each URL is a different WordPress installation, with different content management teams from around the world sharing the root domain. An hour after I activated CloudFlare, the content management teams of blogs 2 and 3 reported a drop in visits. The websites were online, and after 30 minutes of searching (and a little headache) we found the problem: the Google Analytics UA codes all pointed to blog 1’s.

The CloudFlare app is very useful but doesn’t let you manage custom paths and subdomains: it overwrites every occurrence of the Analytics code with its own! Use it with caution 😉

If you run a commercial web app, you probably need to track access.

CloudFlare helps you handle more connections but hides a lot of information about the client. If you try to log the IP address, you always get CloudFlare’s addresses.

The common headers nginx uses to forward the original IP (X-Forwarded-For and X-Real-IP) contain CloudFlare’s IP. The correct header to look at is CF-Connecting-IP, exposed to applications as HTTP_CF_CONNECTING_IP.
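For example, in a Rack application you can read that header in a tiny middleware. This is a minimal sketch under my own assumptions (the class name and the fallback to REMOTE_ADDR are mine, not from any official CloudFlare library):

```ruby
# Rack middleware that restores the real client IP when the app
# sits behind CloudFlare. CloudFlare sends the original client IP
# in the CF-Connecting-IP header, which Rack exposes in the env
# hash as HTTP_CF_CONNECTING_IP.
class CloudflareRealIp
  def initialize(app)
    @app = app
  end

  def call(env)
    # Prefer CloudFlare's header; fall back to the usual REMOTE_ADDR
    # when the request didn't come through CloudFlare.
    real_ip = env['HTTP_CF_CONNECTING_IP'] || env['REMOTE_ADDR']
    env['REMOTE_ADDR'] = real_ip if real_ip
    @app.call(env)
  end
end
```

In PHP the same value is available as `$_SERVER['HTTP_CF_CONNECTING_IP']`.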
