
Everything started on Heroku in October 2012, running on their dynos with Heroku Postgres, and continued on OpenShift in August 2013 on a LAMP stack based on Apache 2.4, PHP 5.3 and MySQL 5.1.

Now it’s time to move my little blog to a modern stack. The best offer on OpenShift is a variation of the standard LEMP stack (we can call it LEMP-HH) with HHVM 3.8 and MariaDB 5.5 behind NGINX 1.7.


Actually, the biggest performance improvement was achieved by adding a good cache plugin a few months ago. I have always used W3 Total Cache and WP Super Cache but, in this specific case, they are both complex to configure because of the structure of the OpenShift stack. The best solution I found is the WP Fastest Cache plugin, one of the latest cache plugins I tested. Here is the stunning header of their website showing two beautiful cheetahs (are they cheetahs?).

[Image: the WP Fastest Cache website header]

Anyway, coming back to the new stack: there is no official bundle yet, but you can create a new application using tengyifei’s HHVM 3.8 cartridge and adding the OpenShift MariaDB 5.5 cartridge. I wasn’t able to run them on different gears (with the scaling option activated and HAProxy), but it seems fast enough on a single gear.

The filesystem structure is similar to the standard PHP bundle, except that the application directory is named www/ instead of php/. I used the last backup from UpdraftPlus to migrate the database to MariaDB. On non-scalable applications you need to forward ports in order to access the DB from your local machine. The RHC command is:

rhc port-forward -a application-name

Source here: Getting Started with Port Forwarding on OpenShift
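
Once the port is forwarded, the gear’s MariaDB instance is reachable from localhost. Here is a minimal Ruby sketch using the mysql2 gem, just to verify the connection; host, port and credentials are placeholders, so use the values printed by rhc and the ones stored in the gear’s environment variables:

require 'mysql2'

# Placeholder settings: take the local address/port from the `rhc port-forward`
# output and the credentials from the MariaDB cartridge environment variables.
client = Mysql2::Client.new(
  host:     "127.0.0.1",
  port:     3306,
  username: "adminUser",
  password: "secret",
  database: "wordpress"
)

puts client.query("SELECT COUNT(*) AS posts FROM wp_posts").first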

Moving to NGINX also causes problems with permalinks because .htaccess doesn’t work anymore. The Nginx Helper plugin fixes the problem, but you can simply add a couple of rows to the NGINX configuration located in /config/nginx.d/default.conf.erb.

# Handle any other URI
location / {
    try_files $uri $uri/ /index.php?q=$request_uri;
}

Discussion on WordPress support forum: WordPress Permalinks on NGINX

Refactoring the previous filesystem, migrating the database and fixing permalinks and other details took about 2 hours and, at the end, everything seems to be working fine. I’m quite confident this is a future-proof solution, but I’m going to keep testing it until the next major update 🙂

[UPDATE 2015-09-06 21:56 CEST]

After the migration, sitemap_index.xml and robots.txt weren’t reachable: some rewrite rules were missing. I took the opportunity to switch to Yoast SEO for the sitemap, Facebook Open Graph and Twitter cards. These rules fix the SEO problems:

# Rewrites for WordPress SEO XML Sitemap
rewrite ^/sitemap_index.xml$ /index.php?sitemap=1 last;
rewrite ^/([^/]+?)-sitemap([0-9]+)?.xml$ /index.php?sitemap=$1&sitemap_n=$2 last;
# Rewrites for robots.txt
rewrite ^/robots\.txt$ /index.php?robots=1 last;

I have always liked The Setup. Discovering what kind of technologies, hardware and software other skilled people are using is extremely useful and really fun for me. This time I’d like to share some tips from the complete reboot of my personal ecosystem after switching to my new MacBook.


On the hardware side, it is a high-end 2015 MacBook Pro 13″ Retina with a dual-core Intel Core i7 Haswell at 3.4GHz, 16GB of RAM and a 1TB PCI Express 3.0 SSD. It is fast, solid, lightweight and flexible. The only required accessory is the Be.eZ LArobe Second Skin.

On the software side, I decided to avoid a Time Machine restore in order to set up a completely new environment. I started from a fresh OS X 10.10 Yosemite installation.

As a polyglot developer I usually deal with a lot of different applications, programming languages and tools. In order to decide what to install, a list of what I had on the previous machine and what else I needed was really useful.

Here is a list of useful software and some tips about the installation process.

Applications


Paid software worth having: Evernote (with a Premium subscription and Skitch) and Todoist (with a Premium subscription), both available on the Mac App Store; 1Password, Fantastical 2, OmniGraffle, Carbon Copy Cloner, Backblaze and ExpanDrive, available on their own websites.

Free software worth having: Google Chrome and Mozilla Firefox as browsers, Apache OpenOffice, Skype and Slack for chat, VLC for multimedia and Transmission for torrents.


Suites (or parts of them): Adobe Photoshop CC, Adobe Illustrator CC and Adobe Acrobat Pro DC are part of Adobe Creative Cloud. Microsoft Word 2016 and Microsoft Excel 2016 are part of Microsoft Office 2016 for Mac (now in free preview). Apple Pages and Apple Keynote come preinstalled as part of the Apple iWork suite, as do Apple Calendar and Apple Contacts.

Development tools

Utilities for Power Users: Caffeine, Growl and HardwareGrowler, iStat Menus Pro, Disk Inventory X, Tor Browser and TrueCrypt 7.1a (you need to fix a little installation bug on OS X 10.10), Kitematic and Boot2Docker for Docker, Sublime Text 3 (with some additions: the Spacegray and Soda themes, a new icon, the Source Code Pro font), Tower, Visual Studio Code, Android SDK (for the Android emulator) and Xcode (for the iOS simulator), VirtualBox (with some useful Linux virtual images), iTerm 2.

CLI: Oh My Zsh, Homebrew, GPG (installed using brew), Xcode Command Line Tools (from the Apple Developer website), Git (with git-flow, installed using brew), AWS CLI (installed via pip), PhantomJS, s3cmd and the faster s4cmd, Heroku Toolbelt and OpenShift Client Tools (installed via gem).


Servers: MariaDB 10.0 (brew), MongoDB 3.0 (brew), Redis 3.0 (brew), Elasticsearch 1.6 (brew), Nginx 1.8.0 (brew), PostgreSQL 9.4.2 (via Postgres.app), Hadoop 2.7.0 (brew), Spark 1.4 (download from official website), Neo4j 2.2 (brew), Accumulo 1.7.0 (download from official website), Crate 0.49 (download from official website), Mesos 0.22 (download from official website), Riak 2.1.1 (brew), Storm 0.9.5 (download from official website), Zookeeper 3.4.6 (brew), Sphinx 2.2 (brew), Cassandra 2.1.5 (brew).


Programming languages: RVM, Ruby (MRI 2.2, 2.1, 2.0, 1.9.3, 1.8.7, REE 2012.02 and JRuby 1.7.19, installed using RVM), PHP 5.6 with PHP-FPM (installed using brew), HHVM 3.7.2 (installed using brew after adding an additional repo; it has some issues on 10.10), Python 2.7 (brew python) and Python 3.4 (brew python3), Pip 7.1 (shipped with Python), NVM, Node.js 0.12 and io.js 2.3 (both installed using NVM), Go 1.4.2 (from the Golang website), Java 8 JVM (from the Oracle website), Java 8 SE JDK (from the Oracle website), Scala 2.11 (from the Scala website), Clojure 1.6 (from the Clojure website), Erlang 17.0 (brew), Haskell GHC 7.10 (brew), Haskell Cabal 1.22 (brew), OCaml 4.02.1 (brew), R 3.2.1 (from the R for Mac OS X website), .NET Core and ASP.NET (installed via brew and DNVM), GPU Ocelot (compiled with a lot of libraries).

The full reboot took about 2 days. Some software is still missing, but I was able to resume my work almost completely. I hope this list will be helpful to someone 🙂

A few weeks ago I faced a performance issue with a lookup query on a 5GB MySQL database. About 10% of these requests took 100x more time than the others. The syntax was really simple:

SELECT  `authors`.* FROM `authors`  WHERE `authors`.`remote_id` = 415418856 LIMIT 1;

Actually it wasn’t a real “slow query”. The table is quite large (3M+ records) and a response time of 300ms was reasonable. Unfortunately my application runs this query thousands of times each minute, and performance was degraded. The remote_id column is a standard varchar field indexed by a unique hash index, and I couldn’t understand why these queries took so much time.

After a long search I found a note in the MySQL reference manual related to type conversion:
MySQL 5.7 Reference Manual > 12.2 Type Conversion in Expression Evaluation


For comparisons of a string column with a number, MySQL cannot use an index on the column to look up the value quickly. If str_col is an indexed string column, the index cannot be used when performing the lookup in the following statement:

SELECT * FROM tbl_name WHERE str_col=1;

Everything was clear: my Ruby application doesn’t implicitly cast remote_id to a string when the value is an integer. This is quite strange behavior, because ActiveRecord, the application’s ORM, usually handles casting transparently. Usually.

The fixed version of the query runs in 3ms:

SELECT  `authors`.* FROM `authors`  WHERE `authors`.`remote_id` = '415418856' LIMIT 1;
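
On the application side the fix is simply making sure the value is a string before it reaches the database. A minimal sketch of the idea in Ruby/ActiveRecord (the Author model and the call are illustrative, not the exact code of my application):

# The value arrives as an integer (e.g. from an external API).
remote_id = 415418856

# Casting it to a string lets MySQL use the unique hash index on remote_id.
author = Author.where(remote_id: remote_id.to_s).first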

Small changes, big headaches, great results 🙂

A few days ago I was at Codemotion in Milan and had the opportunity to gather some insights about the technologies used by two of our main competitors in Italy: BlogMeter and Datalytics. It’s quite interesting because, even if the technical challenges are almost the same, each company uses a different approach with a different stack.


Datalytics is a relatively new company, founded 4 months ago. They had a desk at Codemotion to show their products and recruit new people. I chatted with Marco Caruso, the CTO (who probably didn’t know who I am; sorry Marco, I just wanted to avoid hostility 😉 ), about the technologies they use and the developer profile they were looking for. The required skills were:

Their tech team is composed of 4 developers (including the CTO) and their main products are Datalytics Monitoring™ (a sort of statistical dashboard that shows buzz stats in real time) and Datalytics Engage™ (a real-time analytics dashboard for live events). I have no technical insight into how their systems work, but I can guess some details by inferring them from the buzzwords they use.

Supported sources are Twitter, Facebook (public data only), Instagram, YouTube, Vine (the logos are on their website) and probably Pinterest.

They use DataSift as a data source in addition to the standard APIs. I suppose their processing pipeline uses Storm to manage the streaming input, maybe with an importing layer in front of it. Data is crunched using Hadoop and Java, and results are stored in MongoDB (Massimo Brignoli, the Italian MongoDB evangelist, advertised their company during his presentation, so I suppose they use it extensively).

Node.js is probably used for the frontend. It is fast enough for near real-time applications (also using WebSockets) and plays really well with both Angular.js and MongoDB (the MEAN stack). D3.js is obviously the only choice for complex dynamic charts.

I’m never happy to discover a new competitor in our market segment. Competition gets harder and this is not fun. Anyway, the guys at Datalytics seem smart (and nice); competing with them will be a pleasure and will push me to do my best.

Now I’m curious to know whether Datalytics is monitoring the buzz on the web around its company name. I’m going to tweet about this article using the #Datalytics hashtag. If you find this article, please tweet me “Yes, we found it bwahaha” 😛

[UPDATE 2014-12-27 21:18 CET]

@DatalyticsIT favorited my tweet on December 1st. This probably means they found my article but didn’t read it! 😀


During the last year I refined my RSS collection about big data, data science and analytics. I usually check it every day in order to discover a ton of cool new technologies and have fun. Here is the updated list.

Bloggers

News about emerging technologies, scalability and data

Data companies, social networks and search engines

Companies supporting and distributing big-data processing products

Recently I discovered the awesome data science list, which contains a list of interesting bloggers I haven’t had time to check yet. You can surely find something more in it. I’ll try to publish an update when I’ve gone through it.

[UPDATE 2014-09-22 11:35]

Thanks to @onurakpolat for correcting my link to the awesome data science list. The previous link pointed to his fork; the original repo is https://github.com/okulbilisim/awesome-datascience by @okulbilisim.

After GFS and MapReduce, Google solved big-data problems once again by designing Bigtable: a compressed, high-performance, proprietary data storage system that forms the basis for most of its projects. HBase and Cassandra are inspired by it.



Title: Bigtable: A Distributed Storage System for Structured Data (PDF), November 2006
Authors: Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber

Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers.

Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products.

In this paper we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.


Check out the list of interesting papers and projects (Github).

For several years I thought MapReduce was the only paradigm for distributed data processing. Only a few months ago, while watching “Clash of the Titans Beyond MapReduce” (a really interesting talk at The Hive meetup), I heard Matei Zaharia (@matei_zaharia), CTO of Databricks and co-creator of Spark, cite Dryad, a programming paradigm developed by Microsoft. I don’t know of any open-source project that implements it, but it was widely used by Microsoft Research.



Title: Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks (PDF), March 2007
Authors: Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, Dennis Fetterly

Abstract

Dryad is a general-purpose distributed execution engine for coarse-grain data-parallel applications. A Dryad application combines computational “vertices” with communication “channels” to form a dataflow graph.

Dryad runs the application by executing the vertices of this graph on a set of available computers, communicating as appropriate through files, TCP pipes, and shared-memory FIFOs. The vertices provided by the application developer are quite simple and are usually written as sequential programs with no thread creation or locking. Concurrency arises from Dryad scheduling vertices to run simultaneously on multiple computers, or on multiple CPU cores within a computer. The application can discover the size and placement of data at run time, and modify the graph as the computation progresses to make efficient use of the available resources.

Dryad is designed to scale from powerful multi-core single computers, through small clusters of computers, to data centers with thousands of computers. The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.


Check out the list of interesting papers and projects (Github).

Hadoop would not exist without this paper. MapReduce is the most famous and still the most used processing paradigm for big data. It is not suitable for everything and there are several improvements on it (Dryad, Spark, …), but Google, Facebook, Twitter and many others have millions of lines of code deployed in their systems.


Title: MapReduce: Simplified Data Processing on Large Clusters (PDF), December 2004
Authors: Jeffrey Dean and Sanjay Ghemawat

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper.

Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program’s execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google’s clusters every day.


Check out the list of interesting papers and projects (Github).


I usually don’t trust cutting-edge datastores. They promise a lot of stunning features (and use a lot of superlatives to describe them), but almost every time they are too young and have so many problems running in production that they are useless. I thought the same about Crate Data.

“Massively scalable data store. It requires zero administration”

The first time I read these words (taken from the home page of Crate Data) I wasn’t impressed. I simply didn’t think they were true. Some months later I read some articles and the project overview, and I found something more interesting:

It includes solid established open source components (Presto, Elasticsearch, Lucene, Netty)

I have used both Lucene and Elasticsearch in production for several years and I really like Presto. Combining solid production-ready components can definitely be a smart way to create something great. I decided to give it a try.

They offer a quick way to test it:

bash -c "$(curl -L try.crate.io)"

But I don’t like self-install scripts, so I decided to download it and run it from bin. It simply requires a JVM. I unpacked it on my OS X desktop and launched ./bin/crate. The process binds port 4200 (or the first available port between 4200 and 4300) and if you go to http://127.0.0.1:4200/admin you find the admin interface (there is no authentication). You also have a command-line interface: ./bin/crash. It is similar to the MySQL client, and if you are familiar with any other SQL client you will be familiar with crash too.

I created a simple table with semi-standard SQL (data types are a bit different):

create table items (id integer, title string)

Then I searched for a Ruby client and found crate_ruby, the official Ruby client. I started to fill the table using a Ruby script with a million-record CSV as input. Inserts went at about 5K per second and, in the meantime, I ran some aggregation queries on the database using standard SQL (GROUP BY, ORDER BY and so on) to test performance; responses were quite fast.

require 'csv'
require 'crate_ruby'
client = CrateRuby::Client.new # connects to localhost:4200 by default
CSV.foreach("data.csv", col_sep: ";") do |row|
  client.execute("INSERT INTO items (id, title) VALUES ($1, $2)", [row[0], row[9]])
end
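
The aggregations were nothing fancy, something along these lines run through the same client while the import was in progress (the grouping column is only illustrative):

# Hypothetical aggregation query, used just to check response times during the import.
result = client.execute("SELECT title, COUNT(*) AS cnt FROM items GROUP BY title ORDER BY cnt DESC LIMIT 10")
result.each { |row| puts row.inspect }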

Finally I decided to inspect the cluster features by running another process on the same machine. After a couple of seconds the admin interface showed a new node, and after a dozen more it informed me that the data was fully replicated. I also tried to shut down both processes to see what happened, and the data seems fine. I was impressed.


I still have many doubts about Crate. I don’t know how to manage users and privileges, I don’t know how to create a custom cluster topology, and I don’t know how difficult it is to use advanced features (like full-text search or blob upload). But at the moment I’m impressed, because administration seems really easy and scalability seems easy too.

The next step will be to test it in production under a Rails application (I found an interesting activerecord-crate-adapter) and to try the advanced features to implement a real-time search. I don’t know if I’ll end up using it, but the beginning looks very good.

Next week O’Reilly will host a webcast about Crate. I’m really looking forward to discovering more about the project.

HipHop was one of the most notable things to come out of Facebook’s labs for PHP development. PHP is slow and limited. They couldn’t rewrite their entire codebase, so they decided to make PHP better. HipHop is simply a PHP-to-C++ compiler (HPHPc). Converted code is compiled into a binary, and performance improvements are about 6x.

Unfortunately HipHop has several downsides. For all the performance gains that HPHPc provided, the curve for further improvements had flattened. HPHPc did not fully support the PHP language, including the create_function() and eval() constructs. And HPHPc required a very different push process: a binary bigger than 1 GB had to be compiled and distributed to many machines in short order.

To overcome these problems Facebook started developing, in early 2010, HHVM: a PHP virtual machine. HHVM builds on top of HPHPc, using the same runtime and extension function implementations. HHVM converts PHP code into high-level bytecode, which is then translated into x64 machine code dynamically at runtime by a just-in-time (JIT) compiler, similarly to C#/CLR or Java/JVM.

Facebook also released Hack, a programming language for HHVM that can be seen as a new version of PHP which allows programmers to use both dynamic and static typing.

HHVM supports major PHP open-source projects like WordPress. Running this project on it seems really easy. A little modification used to be needed, but the latest WordPress release (3.9) no longer requires it. HHVM can also run on Heroku using a custom buildpack available here: https://github.com/hhvm/heroku-buildpack-hhvm.

My first experiment was to run WordPress on Heroku using HHVM. The first step is to create a Heroku app using the HHVM buildpack:

heroku create --buildpack https://github.com/hhvm/heroku-buildpack-hhvm

Then you can deploy a standard WordPress installation, adding the following config.hdf (the HHVM configuration file):

Server {
  DefaultDocument = index.php
}
Eval {
  Jit = true
}
VirtualHost {
  * {
    Pattern = .*
    RewriteRules {
      dirindex {
        pattern = ^/(.*)/$
        to = $1/index.php
        qsa = true
      }
    }
  }
}
StaticFile {
  FilesMatch {
    * {
      pattern = .*.(dll|exe)
      headers {
        * = Content-Disposition: attachment
      }
    }
  }
  Extensions {
    css = text/css
    gif = image/gif
    html = text/html
    jpe = image/jpeg
    jpeg = image/jpeg
    jpg = image/jpeg
    png = image/png
    tif = image/tiff
    tiff = image/tiff
    txt = text/plain
  }
}

Warning: don’t miss the newline character on the last line, or the linter will fail and you are going to hate this project 😉

Everything works fine. You can add your favorite hosted MySQL service and run the famous WordPress 5-minute installation. Almost every plugin seems 100% compatible; I tested the most popular ones with no problems. Performance is better, and you also have the opportunity to use Hack to develop new custom plugins.

Now I’m curious about how much HHVM can improve my production installations of WordPress. For this I’m looking for an OpenShift cartridge for HHVM, or for someone who wants to collaborate on creating a new one (the only one I found on GitHub seems “young”). Anyone interested? Let me know!