It has been a long time since I last wrote on this blog. Many things have changed in my life since then. My journey at Curcuma wasn’t as happy as I had hoped, and after 6 months of hard work I left and joined the amazing team at Ernest.ai.

Ernest: Your financial coach

We are building a smart chatbot to help people manage their personal finances. Currently we are in closed beta (here you can sign up to the waiting list). The team is distributed between London and Milan. Here is a beautiful photo taken during our last meeting in London a couple of months ago.

The Ernest team, WeWork Old Street, London, December 2016.

During the last year, a career-path switch and parenting took all my time, and my chances to write vanished. Anyhow, those experiences allowed me to learn a lot about Machine Learning, Artificial Intelligence, Conversational Interfaces, Chatbots, Functional and Reactive Programming and many other exciting topics, and now, at the beginning of 2017, could be the right time to start giving back to the community again.

See you on this feed 😉

Every time I attend a tech conference I meet interesting people and find awesome new technologies. It’s great. This year I attended 4 conferences in a row (36 talks in 10 days) and I started a new job a few days earlier. During May 2016 I discovered dozens of new technologies and I’d like to do my part and “give back to the community” by talking about them in a series of posts.

Here is what I discovered attending JSDay 2016 in Verona.


Progressive Web Apps

Progressive Web Apps “take advantage of new technologies to bring the best of mobile sites and native applications to users. They’re reliable, fast, and engaging.” Google says. They are web apps with a great offline experience. Here is a great list of examples.

Service Workers and Web Workers are the modern way to run background services in the browser using JavaScript. Web Workers have better support, Service Workers have better features.
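Service Workers are browser-only, so registration should be behind a feature check. A minimal sketch (the `navigatorLike` parameter is my own device to keep the function testable outside a browser; real page code would pass `window.navigator`):

```javascript
// Registers a service worker when the environment supports it,
// and degrades gracefully (returns null) when it doesn't.
function registerServiceWorker(navigatorLike, scriptUrl) {
  if (navigatorLike && 'serviceWorker' in navigatorLike) {
    // In a real browser this returns a Promise<ServiceWorkerRegistration>.
    return navigatorLike.serviceWorker.register(scriptUrl);
  }
  return null; // unsupported browser: the site still works, just without offline features
}

// In a page: registerServiceWorker(window.navigator, '/sw.js');
console.log(registerServiceWorker({}, '/sw.js')); // no support here → null
```

The “progressive” part is exactly this: browsers without the API get the plain site, nothing breaks.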

The Physical Web is about the ability of the browser to interact with beacons and other external devices without requiring a native app. The Web Bluetooth API gives browsers even more flexibility (the speaker drove a robot using JavaScript in the browser).

UpUp helps you make your progressive app “Offline First”. The idea is to make sure your users can always access your site’s content, even when they’re on a plane, in an elevator, or 20,000 leagues under the sea.


Reactive Programming

The Reactive Programming paradigm is trendy right now. Cycle.js and ReactiveX bring Observers and Iterators into functional programming. Speakers talked a lot about them.
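To give an idea of the pattern, here is a toy Observable sketch (my own simplified version, not the actual Cycle.js or RxJS API): a producer pushes values to a subscribed observer, which reacts to each one.

```javascript
// A toy Observable: subscribe() pushes each value to the observer,
// then signals completion. Real libraries add operators, laziness,
// cancellation and error handling on top of this basic idea.
function fromArray(values) {
  return {
    subscribe(observer) {
      values.forEach(value => observer.next(value));
      observer.complete();
    }
  };
}

const doubled = [];
fromArray([1, 2, 3]).subscribe({
  next: value => doubled.push(value * 2),
  complete: () => console.log(doubled.join(',')) // prints "2,4,6"
});
```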

Traditional frameworks are going reactive thanks to extensions like Ngrx.

Model View Intent is (probably) going to replace the previous MVC, MVP, MVVM, …

While JSX gains traction thanks to React.js, other solutions, like Hyperscript, are springing up.


Javascript for cross platform apps

Electron, NW.js and many other platforms make it possible to use JavaScript to build cross-platform apps like Slack, Atom and Visual Studio Code.

Async Javascript

Asynchronous programming is hard. JavaScript never made it easy, but now things are getting better and better thanks to many new libraries. Fluorine can be thought of as an abstraction or a DSL: a code structure in which you can manage complex asynchronous code with ease. Co does almost the same thing on Node.js.
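The core idea behind co, sketched here in a deliberately simplified form (this is not the real library, which also handles errors, thunks, arrays and more): a runner drives a generator, resolving each yielded promise before resuming the generator with its value.

```javascript
// A minimal co-style runner: each yielded promise is awaited, and its
// resolved value is fed back into the generator at the yield point.
function run(makeGenerator) {
  const iterator = makeGenerator();
  function step(previousValue) {
    const { value, done } = iterator.next(previousValue);
    if (done) return Promise.resolve(value);
    return Promise.resolve(value).then(step);
  }
  return step(undefined);
}

// Asynchronous steps read like synchronous code inside the generator.
run(function* () {
  const a = yield Promise.resolve(2);
  const b = yield Promise.resolve(3);
  return a * b;
}).then(result => console.log(result)); // prints 6
```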

Task.js takes the concepts of Generators and Promises to another level by defining the concept of a Task.

ES7 will close the circle with async and await and the pleasure of a native implementation.
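For comparison, the same style with native async/await (eventually standardized in ES2017; at the time it was usually labeled an ES7 proposal) needs no runner library at all:

```javascript
// With async/await the engine itself suspends and resumes the function,
// so the generator-runner machinery disappears.
async function total() {
  const a = await Promise.resolve(2);
  const b = await Promise.resolve(3);
  return a + b; // an async function always returns a promise
}

total().then(sum => console.log(sum)); // prints 5
```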


Debugging in Chrome Canary

The bleeding-edge version of Chrome, Canary, offers several beautiful beta features: an integrated layout editor able to edit SASS and work with Rails, powerful animation inspection, and the ability to emulate network connectivity and speed.

Most of these features are available inside Chrome Workspaces.

Node.js code can also be debugged using the Chrome engine thanks to Iron-node. Another option worth a look is Node-inspector.


I also discovered Modernizr and its stunning new website; Can I Use tells you how widely a technology is available across browsers; ZeroClipboard and Clipboard.js make copy and paste easier; Hapi.js is an interesting framework and Highland a powerful stream library for Node.js. I also discovered that I can use WebGL for data processing in the browser.

In the end, a lot of new discoveries at JSDay 2016 😉

After almost 4 years as CTO at The Fool, it’s time for me to search for new adventures. Starting from May 1st 2016 I’ll join the Curcuma team. Below, a picture from the new office:


Just joking 😛 It will be a big challenge for me because I’ll move from a specific field, web and social network analysis, to general-purpose development where projects are really varied: custom CMSes, integrations with IoT devices, mobile applications and many others. In the end, a good challenge!

eCommerce solutions are quite popular in Curcuma’s portfolio and my last experience with them was in 2008, with an early version of Magento. I have worked on similar products but I’m quite “rusty” on this topic. Starting from the Ruby ecosystem, the default at Curcuma, only two realistic options are available: Spree (acquired by First Data and no longer supported) and Solidus (a Spree fork, quite young but already interesting).

I searched for tutorials about Solidus, but version 1.0.0 shipped last August (and is based on Spree 2.4) and the community is still young. I found only beginner’s tutorials, so I decided to follow the GitHub README instructions on the master branch.


Start with a fresh installation of Rails 4.2 (the Rails 5.0 beta doesn’t seem to be supported yet), add the gems and run bundle install:

gem 'solidus'
gem 'solidus_auth_devise'

Inspecting Gemfile.lock you can find solidus dependencies:

solidus (1.2.2)
  solidus_api (= 1.2.2)
  solidus_backend (= 1.2.2)
  solidus_core (= 1.2.2)
  solidus_frontend (= 1.2.2)
  solidus_sample (= 1.2.2)
solidus_auth_devise (1.3.0)

The solidus package seems to be a container for these modules. I really like this approach: it’s clean, encourages isolation and masks complexity. The gemspec is also the cleanest I’ve seen yet.

# encoding: UTF-8
require_relative 'core/lib/spree/core/version.rb'

Gem::Specification.new do |s|
  s.platform    = Gem::Platform::RUBY
  s.name        = 'solidus'
  s.version     = Spree.solidus_version
  s.summary     = 'Full-stack e-commerce framework for Ruby on Rails.'
  s.description = 'Solidus is an open source e-commerce framework for Ruby on Rails.'

  s.files        = Dir['README.md', 'lib/**/*']
  s.require_path = 'lib'
  s.requirements << 'none'
  s.required_ruby_version = '>= 2.1.0'
  s.required_rubygems_version = '>= 1.8.23'

  s.author       = 'Solidus Team'
  s.email        = 'contact@solidus.io'
  s.homepage     = 'http://solidus.io'
  s.license      = 'BSD-3'

  s.add_dependency 'solidus_core', s.version
  s.add_dependency 'solidus_api', s.version
  s.add_dependency 'solidus_backend', s.version
  s.add_dependency 'solidus_frontend', s.version
  s.add_dependency 'solidus_sample', s.version
end

Setup and config

Anyway, the next step in the README is to run the following commands:

bundle exec rails g spree:install
bundle exec rake railties:install:migrations

The first one gives me a warning:

[WARNING] You are not setting Devise.secret_key within your application!
You must set this in config/initializers/devise.rb. Here's an example:

Devise.secret_key = "7eaa914b11299876c503eca74af..."

It then fires some actions related to assets, migrations and seeds, and asks me for a username and password. A standard install.

About the warning, I found another post that recommends running this task:

rails g solidus:auth:install

It’s not clear to me what it does, but it seems to work: after running it, the warning is gone.

The migration task (bundle exec rake railties:install:migrations) gives no output. I suppose the migrations were already installed by the first step. No idea.

Anyway, the last step listed in the README is to run the migrations (bundle exec rake db:migrate); it gives no output either, so everything seems OK.

Now we can fire up rails s and enjoy our brand new store 😀


A bit more control

These steps are cool but they do a lot of things we probably don’t want, like installing demo products and demo users. Following the README, the installation can be run without any automatic step:

rails g spree:install --migrate=false --sample=false --seed=false

and then you are free to run any of the available steps with your own customizations:

bundle exec rake railties:install:migrations
bundle exec rake db:migrate
bundle exec rake db:seed
bundle exec rake spree_sample:load

Now our new store is ready. It’s time to dig deeper into the Solidus structure. See ya in another post, bro 😉

The answer to the question “what’s the best thing to do now?” has always been really important to me.

I have a really busy life and prioritisation is critical in order to accomplish everything and save some free time. Handling the huge amount of personal data (mails, messages, chats, documents, ideas, …) I receive every day and converting it into useful, usable information is essential but hard. Choosing the right tools is a great starting point. Here are mine and why I’m using them.



Many people think of personal and work data as two different environments. I don’t.

I don’t think separate environments are a good idea, because everything we do can be modeled as a task: working on a piece of software is a task, but so is playing with your son, and so are sleeping, going to work, going out with your partner and reading a book.

Each of these tasks requires a timeframe (which may or may not be scheduled) and some data. For instance, you need a date, an hour and a restaurant name to go out with your partner. You can decide to allocate a given timeframe for personal life and another for work, but you are simply scheduling. The activities are always the same: use data and accomplish tasks. This data usually arrives from someone else or from another task.

When you receive any communication (verbal or by message/chat) or find some new information while doing something else, you can sort what you get into three simple categories:

  • Useless: it isn’t useful (spam mails, TV spots, boring messages, boring people…) so you can ignore/trash it.
  • Now: it is useful and can be managed within 2 minutes (a mail with a simple response, messages asking about something you already know, a colleague asking you something face to face) so you can just do it.
  • Later: it is useful but you don’t need it at the moment (phone numbers, interesting information, ideas) or it can’t be managed within 2 minutes (tasks, projects, structured questions, …) so you need to store it.

Storing this information is critical, because if you store it in the right way you will be faster in task execution and better at prioritization.

Faster execution means less time per task, more tasks accomplished and more free time for you. Optimal prioritization means you accomplish the right tasks at the right time, within deadlines.

If you think of how many tasks you do every day, you can understand why doing this the right way is a good idea.


Starting from this idea, I split the information to store into 3 categories:

  • To-do: something you can do
  • Appointments: something you can do at a specific date and time
  • Information: something useful to you

Each of these categories can be handled with the best tool for the job.


To-do list

A place to store the list of tasks you need to do. When the list is small, a paper note or a text file is enough. When you realize your life is really busy, something with projects, prioritization, notes, reminders and deadlines becomes useful.

In the past I used Things for many years, but now I use Todoist and I won’t go back.


Tons of products are available, but Todoist is the best. It has great features and is available everywhere (web, OS X, Windows, iPhone, Android, Chrome, Firefox and more). It handles synchronization and backups gracefully, and I love its interface.


A place to store your scheduled appointments. A few years ago a paper personal organizer inside your backpack was everything you needed. Now we usually use the tools offered by our favorite OS provider (Apple iCloud, Google Apps, Microsoft Outlook). Each one can integrate calendars from other providers, and I used Apple Calendar for years. A few days ago I switched to Fantastical 2.


Fantastical 2 (not to be confused with Fantastical, the previous version with fewer features) is really similar to Apple Calendar but has a few relevant additions that are worth the price:

  • An appointments recap in the left sidebar.
  • Adjustable font size and flexible hour heights in the weekly view (as opposed to all-day events).
  • Calendar sets.

Before the switch I tested Sunrise Calendar, which has a great online interface but doesn’t offer anything really better than Apple Calendar.


A place to store every piece of information you could need now or in the future. Evernote is my choice for everyday use but, in the past, I experienced several problems: sync was slow and conflicts were frequent, the GUI wasn’t easy to use, and the web interface was a mess. Now, after a couple of years of active development, everything seems better.

The ability to integrate documents and edit them in place is still limited, so I also use Google Docs for the documents and PDFs I want to store.


Todoist, Fantastical 2 and Evernote help me accomplish almost all the management work required by my everyday life. Anyway, a couple of other pieces of software are really useful in addition to these:

1Password is the best place to store your passwords and related information, and it can be synced across any of your devices (OS X, iOS, Android, Windows).

Pocket acts as a funnel for every interesting article I find on Feedly or social networks, and it has great text-to-speech functionality.



I don’t know if the expression “Digital Data Hypochondriac” is clear enough. These are people always worried about losing all their digital data. Every time any component of their digital ecosystem doesn’t work as expected, they’re scared.

I’m one of them. In 2000, when I was 15, my huge 14 GB hard disk contained almost all my digital life. One day the HD controller burned out (literally, because of a short circuit) and I lost everything. EVERYTHING 🙁


Since that day I have been a “hypochondriac” about backups. Now, working on multiple huge projects and living as a digital commuter, I need a safe backup strategy able to handle almost any problem. Here is what I do.

My digital ecosystem is made up of different “objects”: a MacBook, an iPhone, an iPad and several online services (iCloud, Google Documents, GMail, Dropbox, Evernote, Todoist, 1Password, …).

For mobile devices and online services, an online backup is available directly from the provider (iCloud for Apple devices and applications, Google Drive for Google services, …). Anyway, to be safe, I take a snapshot of these services and devices twice a month.

My aggregated digital footprint is about 550 GB.

Every single byte is on my MacBook. Obviously, doing so, my notebook becomes a fucking single point of failure, and this is not a good choice. My notebook backup strategy works on three different levels.

Apple Time Machine


Every day, during normal work, I run Time Machine to back up data to a WD My Passport Ultra 1TB. I chose this disk because it uses a single platter, so hardware failures are less frequent. Backups happen incrementally about every hour, and you can restore a single file from versioned snapshots.




Unfortunately, external hard disks are safe but they are a piece of hardware I usually take with me while commuting, and anything could happen: I could lose my bag, someone could steal it, or I could break the disk by hitting something. Online backup is a great complement, and Backblaze is a great provider. You just need to install their agent and generate an encryption key, and the backup starts. The backup is encrypted client side, so your data isn’t vulnerable to man-in-the-middle attacks and is safe on their US-based servers. If you lose your data you can download it as a zip or buy a hard drive containing your snapshot, shipped to you. 30 days of history of your files are available.

Carbon Copy Cloner

You could assume you’re safe with Time Machine and Backblaze, but they are both incremental backups: after 30 days, or when the hard disk is full, old data is deleted. This is fine almost every time, except when you absolutely need that data. To be safe, it’s better to take a monthly snapshot using Carbon Copy Cloner. It creates a mountable snapshot of the disk that you can easily access. A 5TB hard drive can retain 9 or 10 months of historical data, and you can archive the snapshots in an easy way.

Amazon Glacier

3 levels of backup are definitely enough. However, some kinds of data are really important to me: for my personal photos and the documents related to my family, health and house, it’s better to have a long-term copy. Amazon Glacier is the right place for that. I pack these files by month, TAR them, calculate the checksum, then upload them to a given bucket on S3 and configure the lifecycle to archive to Glacier after 1 day. Pricing is between $0.70 and $1.10 a month for 100 GB.
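The archive-after-1-day step can be expressed as a standard S3 lifecycle configuration. A sketch (the rule ID and the `backups/` prefix are made up for the example; it can be applied with `aws s3api put-bucket-lifecycle-configuration`):

```json
{
  "Rules": [
    {
      "ID": "archive-monthly-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [
        { "Days": 1, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

This way the upload goes to plain S3 and Amazon moves the objects to Glacier storage on its own the next day.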

Today’s challenge: write my first program using Haskell. Let’s start!


Searching for “Hello World Haskell” on Google gives me the following tutorial: Haskell in 5 steps.

Install Haskell

The first step is to install the Haskell Platform. The main components are GHC (the Glasgow Haskell Compiler) and Cabal (Common Architecture for Building Applications and Libraries). I decided to use brew for simplicity.

brew update
brew install ghc
brew install cabal-install

Using the REPL

You can run the Haskell REPL using the ghci command:

$ ghci
GHCi, version 7.10.1: http://www.haskell.org/ghc/  :? for help

Here you can run any expression:

Prelude> "Hello, World!"
"Hello, World!"
Prelude> putStrLn "Hello World"
Hello World
Prelude> 3 ^ 5
243
Prelude> :quit
Leaving GHCi.


Create hello.hs.

main = putStrLn "Hello, World!"

Then run ghc compiler.

$ ghc hello.hs
[1 of 1] Compiling Main             ( hello.hs, hello.o )
Linking hello ...

The output executable is named hello. You can run it like any other executable.

$ ./hello
Hello, World!

Real code

Now a bit more code: a factorial calculator. The first step is to define the factorial function. You can do it in a single line:

let fac n = if n == 0 then 1 else n * fac (n-1)

Or split the definition over multiple lines:

fac 0 = 1
fac n = n * fac (n-1)

And put everything into factorial.hs. Now you can load the function inside the console:

Prelude> :load factorial.hs
[1 of 1] Compiling Main             ( factorial.hs, interpreted )
Ok, modules loaded: Main.
*Main> fac 42
1405006117752879898543142606244511569936384000000000

Or write a main function, then compile and run your executable:

fac 0 = 1
fac n = n * fac (n-1)
main :: IO ()
main = print (fac 42)

N.B. The first time I compiled the source code I needed to add main :: IO () to avoid the compile error: The IO action 'main' is not defined in module 'Main'.

Now your executable runs well:


Going parallel

Haskell is cool because of its pure functional nature and its parallel/multicore vocation, so this beginner’s tutorial adds some tips about that.

First of all you need to get the parallel library:

$ cabal update
Downloading the latest package list from hackage.haskell.org
$ cabal install parallel
Resolving dependencies...
Downloading parallel-
Configuring parallel-
Building parallel-
Installed parallel-

Then you can write your parallel software using the `par` expression.

import Control.Parallel

main = a `par` b `par` c `pseq` print (a + b + c)

a = ack 3 10
b = fac 42
c = fib 34

fac 0 = 1
fac n = n * fac (n-1)

ack 0 n = n+1
ack m 0 = ack (m-1) 1
ack m n = ack (m-1) (ack m (n-1))

fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

Compile it with the -threaded flag.

$ ghc -O2 --make parallel.hs -threaded -rtsopts
[1 of 1] Compiling Main             ( parallel.hs, parallel.o )
Linking parallel ...

And run it:

./parallel +RTS -N2

Now, many details aren’t clear to me yet: from the meaning of `pseq` to the level of parallelism actually used. Moreover, I have no idea what the options passed to the compiler and to the executable do. There is definitely more here than “Hello, World!”.

The next step would be Haskell in 10 minutes, another beautiful introduction to the language with links to more complex topics, or Real World Haskell (by Bryan O’Sullivan, Don Stewart, and John Goerzen), or any other useful tutorial listed in the 5 steps guide.

See you at the next language 🙂


Deep Learning is a trending buzzword in the Machine Learning environment. All the major players in Silicon Valley are investing heavily in these topics, and US universities are improving their course offerings.

I’m really interested in artificial intelligence, both for fun and for work, and I spent a few hours over the last weeks searching for the best MOOCs on this topic. I found only a few courses, but they are from the most notable figures in the Deep Learning and Neural Networks field.

Machine Learning
Stanford University on Coursera, Andrew Ng

Andrew Ng has been Chief Scientist at Baidu Research since 2015; he is a founder of Coursera and a Machine Learning lecturer at Stanford University. He also founded the Google Brain project in 2011. His Machine Learning course (CS229a) at Stanford is quite legendary and, obviously, was my starting point.


Machine Learning, Coursera

Neural Networks for Machine Learning
University of Toronto on Coursera, Geoffrey Hinton

Geoffrey Hinton has been working at Google (probably on Google Brain) since 2013, when Google acquired his company DNNResearch Inc. He is a cognitive psychologist best known for his work on artificial neural networks. His Coursera course on Neural Networks dates back to 2012 but seems to be one of the best resources on these topics.


Neural Networks for Machine Learning, Coursera

Deep Learning (2015)
New York University on TechTalks, Yann LeCun (videos on techtalks.tv)

In 2013 LeCun became the first director of Facebook AI Research. He is well known for his work on optical character recognition and computer vision using convolutional neural networks (CNNs), and is a founding father of convolutional nets. The 2015 Deep Learning course at NYU is the last course he held on this topic.

Yann LeCun. CIFAR NCAP pre-NIPS' Workshop. Photo: Josh Valcarcel/WIRED


Big Data, Large Scale Machine Learning
New York University on TechTalks, John Langford and Yann LeCun

Another interesting course about Machine Learning, held by LeCun and John Langford, a researcher at Yahoo Research, Microsoft Research and IBM’s Watson Research Center.


John Langford, NYU

Deep Learning Courses
NVIDIA Accelerated Computing

This is not a college course. NVIDIA was one of the most important graphics board manufacturers in the early 2000s and now, with its experience in massively parallel computing on GPUs, is investing heavily in Deep Learning. This course focuses on the use of GPUs with the most common deep learning frameworks: DIGITS, Caffe, Theano and Torch.


Deep Learning Courses, NVIDIA

Mastering Apache Spark
Mike Frampton, Packt Publishing

Last summer I had the opportunity to collaborate on the review of this title. The chapter about MLlib contains a useful introduction to Artificial Neural Networks on Spark. The implementation still seems young, but it is already possible to distribute a network over a Spark cluster.


Mastering Apache Spark

[UPDATE 2016-01-31]

Deep Learning 
Vincent Vanhoucke, Google, Udacity

A few days ago, Google released on Udacity a Deep Learning course focused on TensorFlow, its deep learning tool. It’s the first course officially sponsored by a big company, it’s free, and it seems a great introduction. Thanks to Piotr Chromiec for pointing it out 🙂



This blog started in October 2012 as a place to write about cool tech stuff I found online. Now, after 3+ years of tech problems, a son, 3 infrastructure migrations (Heroku, OpenShift, OVH) and a lot of articles, I have decided to put all this stuff under a new domain: UsefulStuff.io.

A new server configuration (HHVM, PageSpeed, SPDY, Varnish and a ton of new technologies), a new theme, new topics, all new 🙂

Learn it, then use it (again!)



When I was young (yes, now I’m old 😀) the only messaging feature available on my phone was SMS texting: 160 chars per message, 10 slots on my SIM card.

15 years later I’m impressed by the number of messaging apps I use on a daily basis. Below is the list of messaging apps I have used at least once in the last week.

Skype
Old but gold. Now it’s backed by Microsoft and has more problems than years before, but it’s still the most used video call app. Most of my customers ask for a Skype handle for chatting, and some of my friends use it at work too.

Skype Screenshot

WhatsApp
Most of my non-geek friends still use this app. Facebook paid $22B to acquire the company. Only half of them moved to Telegram when WhatsApp became paid for Android users. The web interface, now also available for iOS, OS X and as a Chrome extension, is really useful.


Telegram
Most of my geek friends use this app. It’s really similar to WhatsApp but uses an open source protocol, and its encryption is claimed to be better than competitors’. You can also use it server side. Meh…



Slack
My boss wanted it to replace Skype last spring. I actually don’t like the UI and the price is exaggerated but, after some months of testing, it works quite well.


HipChat
One of our external collaborators created a private HipChat channel because he doesn’t have an internal email address and wasn’t able to join the company’s Slack. After a few days of use, it seems really similar to Slack but is almost free. Crew.co tested both and chose Slack.


Google Talk/Hangout
The best alternative to Skype for video calls. Currently I use it only to set up video calls.


Facebook Messages
For people I want to contact only on Facebook. Really useful in combination with other Facebook features (events, birthdays, …), and it supports a lot of external integrations (with more coming). Probably one of the best players of the coming years.



Twitter Direct Messages
Limited to 140 chars until June 2015, they are now a valid alternative to Facebook Messages (Twitter extended the limit a few weeks ago). I use them rarely, only with a couple of contacts who aren’t connected on Facebook.


Linkedin Messages
With a recent update (last week for my account) they are more similar to a messaging app than to an email client. Now recruiters want to chat with you, and this is quite noisy.


Apple Messages
The modern alternative to SMS. Only for Apple users.


Which is the best one? I have no idea! Anyway, there is a funny thing to notice: I can use all of them in an easy and transparent way both on my smartphone and on my notebook, even better than I could use SMS 15 years ago.

Many people are worried by the use of several different channels for communication. “NO NO, don’t add me on Telegram, we are already connected on WhatsApp! I can’t handle this!” they say.

IMHO there is no pain in using a lot of different channels, because the user experience has improved and we effectively use only one channel: our “digital identity”, logged in on every device. The complexity is hidden by designers.

This is the interconnected world; it is here, and we are already part of it.

About a month ago I wrote about how cool it was to migrate to HHVM on OpenShift. The custom cartridge for Nginx + HHVM and MariaDB was running fast and I was really excited about the new stack. I was landing in a new, beautiful world.

About a week later I faced some problems because of disk space. My files took only 560MB out of 1GB, but the OpenShift shell gave me an error for 100% disk usage (and a nice blank page on the home page, because the cache couldn’t be written). I wasn’t able to understand why it was giving me that error. It probably depends on log files written by the custom cartridge elsewhere in the filesystem. No idea. Anyway, I had no time to dig deeper, so I bought 1 more GB of storage.

The day after I bought the storage, the blog’s speed went down. It was almost impossible to open the blog, and CloudFlare gave me timeouts for half of the requests. Blog visits started to fall and I had no idea how to fix that. Some weeks later I discovered some troubles with My Corderwall Badges and Simple Sharer Button Adder but, in the OpenShift environment, I had no external caching system to handle this kind of problem.

I didn’t want to go back to MySQL and Apache, but trashing all my articles wasn’t fun either, so I chose something I had rejected 3 years ago: a standalone server.


My first choice was Scaleway. It’s trendy and it’s bare metal: €3.5 for a 4-core ARM server (a very hipster choice) with 2 GB RAM and a 50 GB SSD. The new interface is cool, better than Linode’s and DigitalOcean’s, and servers and resources are managed easily. Unfortunately HHVM is still experimental on ARM, and the SSDs are on a SAN and aren’t so fast (100 MB/s).

My next choice was OVH. The new 2016 VPS SSD (available in Canadian datacenters) is cheap enough ($3.5) and offers a virtual Xeon core with 2 GB RAM and a 10 GB SSD. Multicore performance is lower and you get a lot less storage, but it’s an x86-64 architecture and the SSD is faster (250 MB/s). I took this one!

Unfortunately, my preferences haven’t changed since my first post. I’m still a developer, not a sysadmin. I’m not a master of Linux configuration, my stack has several moving parts, and my blog was still unavailable. My beautiful migration to cutting-edge technologies became an emergency landing.

Luckily I found several online tutorials explaining how to master the WordPress stack. Over the following days I completed the migration, and my new stack now runs on Pound, Varnish, Nginx, HHVM, PHP-FPM and MariaDB. I hope to have enough time in the coming days to publish all the useful stuff I used for the configuration.

For the moment I’m proud to share the average response time of the home page: 342ms 🙂