A few days ago @lastknight gave me a strange cardboard package. I had no idea what it was. “It’s a Google Cardboard,” he said. Not helpful. Still no idea.

knox-v2

After a couple of Google searches everything was clear. The strange package he gave me is a Knox V2: one of the implementations of the Google Cardboard project.

Google Cardboard Logo

The Google Cardboard project logo

Virtual reality (or “VR”) is one of the most interesting topics in the tech business. Since Oculus VR was acquired by Facebook in March 2014 for about $2B, everybody thinks VR will be one of the next big things in many fields, from gaming to surgery. I had the opportunity to test the Oculus Rift only once, at Codemotion 2014 in Milan (a surgery training simulator). The experience seemed really realistic, but I have no idea how it would feel in extensive use.

My iPhone 5 doesn’t support the cooler apps (like Vrse, which seems really interesting) but I was able to test Roller Coaster VR, a simple roller coaster demo:

roller-coaster-vr

and Vanguard V for Google Cardboard, a third-person on-rails space shooter designed for virtual reality.

vanguard-v-1 vanguard-v-2

More and more applications are available; here is a list of the best apps selected by the Huffington Post. Anyway, my thoughts about VR are not about quality. Applications are still poor: they don’t give you a headache like a few years ago, but aliasing, clipping and other 3D-related glitches are still there. Quality can improve a lot more, and it is improving. I’m more interested in real-world uses of a VR kit.

So I asked my family to test the Knox.

vr-test

The first impression is of a new, cool and fun way to enjoy media. Gaming is hard but really immersive, and the roller coaster demo is addictive. My parents aren’t familiar with cutting-edge technologies, but they were happily involved in the new experience. Gaming and documentaries can benefit from this new way to interact.

Another industry can benefit from this new medium: the porn industry. Right now a Google search shows three producers who advertise VR movies: BadoinkVR, VirtualRealPorn and NaughtyAmericaVR. Moreover, BadoinkVR sponsors the new experience by offering a VR Google Cardboard for free. In the past, the porn industry was able to drive technology adoption; it’s easy to think they could do it again.

In the end, Google Cardboard isn’t comfortable: the smartphone’s position inside the box is critical and the sharp corners hurt my big nose. But I think it is the first step toward the next big thing. We are ready, and I really look forward to finding out what the future holds.

infrastructure-update

Everything started on Heroku in October 2012, running on their dynos with Heroku Postgres, and continued on OpenShift in August 2013 on a LAMP stack based on Apache 2.4, PHP 5.3 and MySQL 5.1.

Now it’s time to move my little blog to a modern stack. The best offer on OpenShift is a variation of the standard LEMP stack (we can call it LEMP-HH) with HHVM 3.8 and MariaDB 5.5 over NGINX 1.7.

lemphh-stack

Actually, the biggest performance improvement was achieved by adding a good cache plugin a few months ago. I have always used W3 Total Cache or WP Super Cache but, in this specific case, they are both complex to set up because of the structure of the OpenShift stack. The best solution I found is the WP Fastest Cache plugin, one of the latest cache plugins I tested. Here is the stunning header of their website showing two beautiful cheetahs (are they cheetahs?).

wp-fastes-cache

Anyway, coming back to the new stack: there is no official bundle yet, but you can create a new application using tengyifei’s HHVM 3.8 cartridge and then add the OpenShift MariaDB 5.5 cartridge. I wasn’t able to run them on different gears (with the scaling option activated and HAProxy), but it seems fast enough on a single gear.
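Since there is no one-click setup, everything goes through the rhc client. A minimal sketch of the idea, where the app name and the two manifest URLs are placeholders (use the actual manifests of the cartridges linked above):

# create the app from the downloadable HHVM cartridge manifest (URL is a placeholder)
rhc app create myblog https://example.com/hhvm-3.8-cartridge/metadata/manifest.yml
# then add the downloadable MariaDB 5.5 cartridge to the same application
rhc cartridge add https://example.com/mariadb-5.5-cartridge/metadata/manifest.yml -a myblog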

The filesystem structure is similar to the standard PHP bundle, except that the application dir is named www/ instead of php/. I used the latest backup from UpdraftPlus to migrate the database to MariaDB. On non-scalable applications you need to forward ports in order to access the DB from your local machine. The RHC command is:

rhc port-forward -a application-name

Source here: Getting Started with Port Forwarding on OpenShift
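Once the port is forwarded, the gear’s database is reachable from localhost with the standard client. A minimal sketch of the import, where the port, credentials, database name and dump file are all placeholders (rhc prints the real local address and port):

# import the UpdraftPlus dump through the forwarded port (all values are placeholders)
mysql -h 127.0.0.1 -P 53306 -u adminUser -p appDbName < updraftplus-db-backup.sql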

Moving to NGINX also causes problems with permalinks, because .htaccess doesn’t work anymore. The Nginx Helper plugin fixes the problem, but you can simply add a couple of rows to the NGINX configuration located in /config/nginx.d/default.conf.erb.

# Handle any other URI
location / {
    try_files $uri $uri/ /index.php?q=$request_uri;
}

Discussion on WordPress support forum: WordPress Permalinks on NGINX

Refactoring the previous filesystem, migrating the database and fixing permalinks and other stuff took about 2 hours and, in the end, everything seems to be working fine. I’m quite confident this is a future-proof solution, but I’m going to keep testing it until the next major update 🙂

[UPDATE 2015-09-06 21:56 CEST]

After the migration, sitemap_index.xml and robots.txt weren’t reachable: some rewrite rules were missing. I took the opportunity to switch to Yoast SEO for the sitemap, Facebook Open Graph and Twitter cards. These rules fix the SEO problems:

# Rewrites for WordPress SEO XML Sitemap
rewrite ^/sitemap_index.xml$ /index.php?sitemap=1 last;
rewrite ^/([^/]+?)-sitemap([0-9]+)?.xml$ /index.php?sitemap=$1&sitemap_n=$2 last;
# Rewrites for robots.txt
rewrite ^/robots\.txt$ /index.php?robots=1 last;

I have always liked The Setup. Discovering what kind of technologies, hardware and software other skilled people are using is extremely useful and really fun for me. This time I’d like to share some tips from the complete reboot of my personal ecosystem I did after switching to my new MacBook.

macbook_pro_13_retina

On the hardware side it is a high-end 2015 MacBook Pro 13″ Retina with a dual-core Intel Core i7 Haswell at 3.4GHz, 16GB of RAM and a 1TB PCI Express 3.0 SSD. It is fast, solid, lightweight and flexible. The only required accessory is the Be.eZ LArobe Second Skin.

On the software side, I decided to avoid a Time Machine restore in order to set up a completely new environment. I started from a fresh OS X 10.10 Yosemite installation.

As a polyglot developer I usually deal with a lot of different applications, programming languages and tools. In order to decide what to install, a list of what I had on the previous machine, plus what else I needed, was really useful.

Here is a list of useful software and some tips about the installation process.

Applications

paid_apps

Paid software worth having: Evernote (with a Premium subscription, plus Skitch) and Todoist (with a Premium subscription), both available on the Mac App Store; 1Password, Fantastical 2, OmniGraffle, Carbon Copy Cloner, Backblaze and ExpanDrive, available on their own websites.

Free software worth having: Google Chrome and Mozilla Firefox as browsers, Apache OpenOffice, Skype and Slack for chat, VLC for multimedia and Transmission for torrents.

app_from_suites

Suites (or parts of them): Adobe Photoshop CC, Adobe Illustrator CC and Adobe Acrobat Pro DC are part of the Adobe Creative Cloud. Microsoft Word 2016 and Microsoft Excel 2016 are part of Microsoft Office 2016 for Mac (now in free preview). Apple Pages and Apple Keynote come preinstalled as part of the Apple iWork suite, as do Apple Calendar and Apple Contacts.

Development tools

Utilities for power users: Caffeine, Growl and HardwareGrowler, iStat Menus Pro, Disk Inventory X, Tor Browser and TrueCrypt 7.1a (you need to fix a little installation bug on OS X 10.10), Kitematic and Boot2Docker for Docker, Sublime Text 3 (with some additions like the Spacegray theme, the Soda theme, a new icon and the Source Code Pro font), Tower, Visual Studio Code, the Android SDK (for the Android emulator) and Xcode (for the iOS simulator), VirtualBox (with some useful Linux virtual images), iTerm 2.

CLI: Oh My Zsh, Homebrew, GPG (installed using brew), the Xcode Command Line Tools (from the Apple Developer website), Git (with git-flow, installed using brew), the AWS CLI (installed via pip), PhantomJS, s3cmd and the faster s4cmd, the Heroku Toolbelt and the OpenShift Client Tools (installed via gem).

daemons

Servers: MariaDB 10.0 (brew), MongoDB 3.0 (brew), Redis 3.0 (brew), Elasticsearch 1.6 (brew), Nginx 1.8.0 (brew), PostgreSQL 9.4.2 (via Postgres.app), Hadoop 2.7.0 (brew), Spark 1.4 (download from official website), Neo4j 2.2 (brew), Accumulo 1.7.0 (download from official website), Crate 0.49 (download from official website), Mesos 0.22 (download from official website), Riak 2.1.1 (brew), Storm 0.9.5 (download from official website), Zookeeper 3.4.6 (brew), Sphinx 2.2 (brew), Cassandra 2.1.5 (brew).

languages

Programming languages: RVM, Ruby (MRI 2.2, 2.1, 2.0, 1.9.3, 1.8.7, REE 2012.02 and JRuby 1.7.19, all installed using RVM), PHP 5.6 with PHP-FPM (installed using brew), HHVM 3.7.2 (installed using brew after adding an additional repo; it has some issues on 10.10), Python 2.7 (brew python) and Python 3.4 (brew python3), pip 7.1 (shipped with Python), NVM, Node.js 0.12 and io.js 2.3 (both installed using NVM), Go 1.4.2 (from the Golang website), the Java 8 JVM and Java 8 SE JDK (from the Oracle website), Scala 2.11 (from the Scala website), Clojure 1.6 (from the Clojure website), Erlang 17.0 (brew), Haskell GHC 7.10 (brew), Haskell Cabal 1.22 (brew), OCaml 4.02.1 (brew), R 3.2.1 (from the R for Mac OS X website), .NET Core and ASP.NET (via DNVM, installed using brew), GPU Ocelot (compiled with a lot of libraries).
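Most of the version managers above install with one-liners. A minimal sketch of the RVM and NVM part (check the installer URLs against the official docs; the versions are just the ones listed above):

# RVM plus a couple of the Ruby versions listed above
\curl -sSL https://get.rvm.io | bash -s stable
rvm install 2.2
rvm install jruby-1.7.19
# NVM, then Node.js 0.12 and io.js
curl -o- https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
nvm install 0.12
nvm install iojs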

The full reboot took about 2 days. Some software is still missing, but I was able to resume my work almost completely. I hope this list will be helpful to someone 🙂

bedrock_big_logo

During the last couple of weeks I had to work on a PHP project with a custom WordPress stack I had never used before: Bedrock.

The home page says: “Bedrock is a modern WordPress stack that gets you started with the best development tools, practices, and project structure.”

What Bedrock really is

It is a regular WordPress installation with a different folder structure, integrated with Composer for dependency management and Capistrano for deployment. The structure is reminiscent of Rails and similar frameworks, but it contains the usual WordPress components and runs on the same web stack.

├── composer.json
├── config
│   ├── application.php
│   └── environments
│       ├── development.php
│       ├── staging.php
│       └── production.php
├── vendor
└── web
    ├── app
    │   ├── mu-plugins
    │   ├── plugins
    │   ├── themes
    │   └── uploads
    ├── wp-config.php
    ├── index.php
    └── wp

Server configuration

The project used to run on Apache with mod_php, but I personally don’t like this stack. I’d like to test it on HHVM, but for the moment I preferred to run it on nginx with PHP-FPM. Starting from an empty Ubuntu 14.04 installation, I set up a LEMP stack with memcached and Redis using apt-get:

apt-get update
apt-get install build-essential tcl8.5 curl screen bootchart git mailutils munin-node vim nmap tcpdump nginx mysql-server mysql-client memcached redis-server php5-fpm php5-curl php5-mysql php5-mcrypt php5-memcache php5-redis php5-gd

Everything works fine except the Redis extension (used for custom functions unrelated to WordPress). I don’t know why, but its config file wasn’t copied into the configuration directory /etc/php5/fpm/conf.d/. You can find it among the available mods in /etc/php5/mods-available/.
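A minimal sketch of the fix, assuming the module file is /etc/php5/mods-available/redis.ini (either enable it with php5enmod or symlink it by hand):

# enable the redis module and reload PHP-FPM
sudo php5enmod redis
sudo service php5-fpm restart
# or, equivalently, link it manually into the FPM conf.d directory
sudo ln -s /etc/php5/mods-available/redis.ini /etc/php5/fpm/conf.d/20-redis.ini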

PHP-FPM uses a standard pool configuration placed in /etc/php5/fpm/pool.d/example.conf. It listens on 127.0.0.1:9000 or on a unix socket at /var/run/php5-fpm-example.sock (I assume the configured pool name is “example”).
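A minimal sketch of such a pool file, where the pool name, user and socket path are placeholders based on the names above:

; /etc/php5/fpm/pool.d/example.conf ("example" and the paths are placeholders)
[example]
user = www-data
group = www-data
; listen = 127.0.0.1:9000
listen = /var/run/php5-fpm-example.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3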

Memcached should be configured for session sharing among multiple servers. To activate it, you need to edit the php.ini configuration file, setting the following parameters in /etc/php5/fpm/php.ini:

session.save_handler = memcache
session.save_path = 'tcp://192.168.0.1:11211,tcp://192.168.0.2:11211'

The nginx configuration is placed in /etc/nginx/sites-available/ and linked into /etc/nginx/sites-enabled/ as usual, and it forwards requests for PHP files to PHP-FPM.

server {
    listen 80 default deferred;
    root /var/www/example/htdocs/current/web/;
    index index.html index.htm index.php;
    server_name www.example.com;
    access_log /var/www/example/logs/access.log;
    error_log /var/www/example/logs/error.log;
    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }
    location ~ \.php$ {
        try_files $uri =404;
        # fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/var/run/php5-fpm-example.sock;
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
    }
    location ~ /\.ht {
        deny all;
    }
}

The root directory will be Bedrock’s web/ dir, prefixed with current/ to support the Capistrano directory structure shown below.

├── current -> /var/www/example/htdocs/releases/20150120114500/
├── releases
│   ├── 20150080072500
│   ├── 20150090083000
│   ├── 20150100093500
│   ├── 20150110104000
│   └── 20150120114500
├── repo
│   └── <VCS related data>
├── revisions.log
└── shared
    └── <linked_files and linked_dirs>

Local configuration

I’m quite familiar with Capistrano because of my recent Ruby background. You need a Ruby version greater than 1.9.3 to run it (RVM helps). The first step is to download the dependencies; Ruby uses Bundler.

# run it to install bundler gem the first time
gem install bundler
# run it to install dependencies
bundle install

Bundler reads the Gemfile (and Gemfile.lock) and downloads all the required gems (Ruby libraries).
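For reference, a minimal sketch of what the Gemfile looks like for this kind of Capistrano setup (the exact gems and version constraints depend on the Bedrock version):

source 'https://rubygems.org'

group :development do
  # deploy tooling only, it never ships to the server
  gem 'capistrano', '~> 3.2'
  gem 'capistrano-composer'
end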

Now the technology stack is ready both locally and on the server 🙂
I’ll probably describe how to run a LEMP stack on OS X in a future post. For the moment I’m assuming you are able to run it locally. Here are useful guides by Jonas Friedmann and rtCamp.

Anyway, Bedrock can run on any LAMP/LEMP stack. The only “special” feature is the Composer integration. Composer is to PHP what Bundler is to Ruby: it helps developers manage the dependencies of a project. Here it is used to manage plugins, themes and WordPress core updates.

You can run composer install to install the libraries. If you update the library configuration, or you want to force them to be downloaded again (maybe after a fresh install), run composer update.
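A minimal sketch of how this looks in Bedrock’s composer.json: the WordPress core comes from the johnpbloch/wordpress package, while plugins and themes come from the wpackagist repository (the packages and versions below are only examples):

{
  "repositories": [
    { "type": "composer", "url": "https://wpackagist.org" }
  ],
  "require": {
    "johnpbloch/wordpress": "4.2.2",
    "wpackagist-plugin/wordpress-seo": "~2.3",
    "wpackagist-theme/twentyfifteen": "~1.2"
  }
}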

Deploy

Capistrano lets you set up different deploy environments. A global configuration is defined, and you only need to specify the custom settings for each environment. An example /config/deploy/production.rb:

set :application, 'example'
set :stage, :production
set :branch, "master"
server '192.168.0.1', user: 'user', roles: %w{web app db}

Everything else is inherited from the global config, where all the other deploy properties are defined. It’s important to note that Capistrano’s deploy script on Bedrock only downloads the source code from the Git repo and runs composer install for the main project. If you need to run it for any plugin, you need to define a custom Capistrano task and run it at the end of the deploy. For instance, you can add the following lines to the global configuration in order to install the dependencies of a specific plugin:

namespace :deploy do
  desc 'Rebuild Plugin Libraries'
  task :updateplugin do
    on roles(:app), in: :sequence, wait: 5 do
      execute "cd /var/www/#{fetch(:application)}/htdocs/current/web/app/plugins/anything/ && composer install"
    end
  end
end

after 'deploy:publishing', 'deploy:updateplugin'

Now you are ready to deploy your Bedrock installation to the server!
Simply run cap production deploy, restart PHP-FPM (service php5-fpm restart) and enjoy it 😀

Many thanks to Giuseppe, great sysadmin and friend, for his support during the development and deployment of this @#@?!?@# application.

phpday_logo

I referred to myself as a “PHP Developer” for a long time, more or less from the beginning of my career until 2011, when I moved to Ruby as my main language. The last time I really worked with PHP, the tools were still uncomfortable and the community was huge but still messy. Four years later everything seems to have changed. Several important companies (including Facebook) rely on it, and both the core language and the most popular tools have improved a lot.

Things like HHVM, Hack, the FIG, Composer, PHP 7 and many more are rapidly evolving the PHP landscape, so I decided to attend PHPDay 2015 to meet the Italian community and refresh my knowledge.

It was a really interesting event. I had the opportunity to meet several awesome people and chat with them about almost everything (quite often the chats happened on the grass of the beautiful venue in Verona 🙂).

phpday_attendees

Davey Shafik (@dshafik), a funny guy from Engine Yard, talked about PHP 7 and HHVM as major improvements to the core of PHP. Enrico Zimuel (@ezimuel), core developer of Zend Framework, talked about ZF3. Steve Maraspin (@maraspin) talked about his experience with async PHP and parallel processing. Bernhard Schussek (@webmozart), core developer of Symfony, talked about Puli: a framework-agnostic tool for resource sharing.

I strongly believe PHP has changed. It has incorporated many “good parts” from other languages and is now ready to become, with Facebook’s endorsement, a first-class language.

See you next year at PHPDay 2016 😀

A few weeks ago I faced a performance issue with a lookup query on a 5GB MySQL database. About 10% of these requests took 100x longer than the others. The syntax was really simple:

SELECT  `authors`.* FROM `authors`  WHERE `authors`.`remote_id` = 415418856 LIMIT 1;

Actually, it wasn’t a real “slow query”: the table is quite large (3M+ records) and a response time of 300ms was reasonable. Unfortunately, my application runs this query thousands of times each minute, and performance was degraded. The remote_id column is a standard varchar field with a unique hash index, and I couldn’t understand why these queries took so much time.

After a long search I found a note in the MySQL reference manual about type conversion:
MySQL 5.7 Reference Manual > 12.2 Type Conversion in Expression Evaluation


For comparisons of a string column with a number, MySQL cannot use an index on the column to look up the value quickly. If str_col is an indexed string column, the index cannot be used when performing the lookup in the following statement:

SELECT * FROM tbl_name WHERE str_col=1;

Everything was clear: my Ruby application doesn’t implicitly cast the remote_id to a string when the value is an integer. This is quite strange behavior, because ActiveRecord, the application’s ORM, usually performs the cast transparently. Usually.

The fixed version of the query runs in 3ms:

SELECT  `authors`.* FROM `authors`  WHERE `authors`.`remote_id` = '415418856' LIMIT 1;
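On the application side the fix is simply to make the cast explicit before running the lookup. A minimal sketch (the Author model mirrors the authors table above; everything else is hypothetical):

# casting to string lets MySQL use the index on the varchar column
remote_id = 415418856
author = Author.where(remote_id: remote_id.to_s).first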

Small changes, big headaches, great results 🙂

python_logo

Yesterday, after a beautiful (and gluttonous) Christmas, I decided to start learning the basics of some new tech stuff. The first choice was the Python programming language, because it seems quite similar to Ruby, several friends of mine are skilled in it, and it is widely used at Google and for Spark, so it is probably one of the best languages to learn at the moment.

I’m quite fluent in Ruby, PHP and JavaScript, but I have no Python skills. I chose Codecademy for a basic introduction and I’m at about 50% of the course. Here is what I have learned so far.

Python is dynamically typed (you don’t have to declare types) and the types are almost the same as in Ruby and PHP:

my_int = 42
my_float = 108.0
my_string = "The answer to life, the universe and everything"

There are no parentheses and no begin/end structures: everything is based on indentation (I absolutely love it). Functions are defined as follows:

def my_function(argument):  # the colon starts a new indentation level
    return argument * 2

Control structures are always the same. Conditionals:

if condition1:
    return 1
# elif is like elsif in Ruby and elseif in PHP
elif condition2:
    return 2
else:
    return 3

Loops:

for item in my_list:
    print item

Instead of Hashes and Arrays, Python uses Dictionaries and Lists, but they are almost the same:

my_list = ["daniele", "luca", "michele"]
my_dictionary = {"marco": 1, "matteo": 7, "michele": 4}
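Accessing them also feels familiar: lists are indexed by position and dictionaries by key (continuing the variables above):

print my_list[0]              # "daniele"
print my_dictionary["marco"]  # 1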

I started a few hours ago and I’m just a newbie. I hope the rest of the course on Codecademy will teach me about more complex topics and, talking about real-world usage, I probably need to learn how to handle version management and discover the most powerful libraries. Anyway, I can already say it is pretty cool to program in Python 🙂

BlogMeter is our main competitor in the social reputation segment in Italy. It was founded in 2007, has offices in Milan, Turin, Rome and Madrid, and its developer team consists of about 10 coders. At Codemotion, Vittorio Di Tomaso (CEO at BlogMeter) and Roberto Franchini (Chief Architect at BlogMeter) talked about the infrastructure behind the company. The talks were very interesting and, as I said for Datalytics, it’s fun to discover how different the approaches can be when you face the same problem.

blogmeter_logo

The first talk, by Roberto Franchini, was about GlusterFS. I have never used this distributed filesystem, but it seems really interesting and completely different from HDFS.

They use it to store a daily production of more than 10GB of Lucene inverted indexes (more than 200GB/month). Their platform searches the stored indexes to extract different sets of documents for every customer. It seems crazy, but they open the indexes directly on the storage. The hardware grew from 4TB on 8 non-dedicated servers in 2010 to 28TB on 2 dedicated servers in 2014, and they plan to grow further. Outages were caused by misconfiguration of storage limits, but there was no data loss.

Here are the slides:

The second talk, by Vittorio Di Tomaso, was about BlogMeter’s infrastructure (with a bit of advertising and marketing for his company). Here is the overall schema (taken from his presentation and cropped to remove the Italian title):

blogmeter_infrastructure

The platform leverages PostgreSQL, Java and GlusterFS. Stream data comes mostly from Twitter (they use both the Streaming API and Gnip as data providers) and is processed on a Hazelcast data grid, using Kestrel to manage incoming data, Redis to deduplicate it and Drools for routing (to avoid unnecessary processing). They optimized their process by moving from batch to near real time, avoiding processing of duplicated content and reshaping the processing pipeline from a linear flow into a DAG (directed acyclic graph).

To process the data they use a Spring-based application that relies on Apache UIMA and their closed-source Sophia semantic engine, and they store the data using Lucene. A few more products are used: Ubuntu as the operating system, Jenkins for continuous integration and Jasig for authentication and security.

The visualization layer uses de-facto standard libraries like D3.js, jQuery and FusionCharts. Some numbers about the hardware: 300 cores, 1.2TB of RAM and 29TB of storage.

Here are the slides:

In the end, their process is not that different from ours: they handle incoming data, process it, store it and visualize it. Their systems are probably more oriented toward quantity than quality, but the logic is similar and everything seems cool 🙂

As with Datalytics, I’m really interested to know whether they monitor their name on Twitter, so I’m going to tweet about this article using the #BlogMeter hashtag. If you find this article, please tweet me “Yes, we found it bwahaha” 😛

[UPDATE 2014-12-27 21:22 CET]

About 4 weeks after I tweeted about this post, I still haven’t received any answer. As I said before, they probably focus on quantity over quality 😉

c2_logo

A few weeks ago I had the privilege to attend the C2 Spark conference in Milan. It is a “business conference somewhere between genius and insanity by Sid Lee, Cirque du Soleil, Fast Company and Microsoft” that tries to mix commerce and creativity. It started in Montreal a few years ago, and this was its first time in Europe (Zurich and Milan).

The talks were awesome and the people were awesome too. It is quite unusual for me to meet non-tech people, and I really enjoyed the day. I learned a lot of things attending the conference. Here are the most fascinating ones.

Technology will change everything, again.

David Rose, scientist at the MIT Media Lab and author of “Enchanted Objects”, talked about the way we imagined the future. The Internet of Things will be a huge opportunity, and a lot of products are already here. Here is the “periodic table” of Enchanted Objects that David showed us.

enchanted_objects_poster

Microsoft has enough money to reboot its business

Ten years ago Microsoft made billions on Windows and Office. Now Windows and Office are worth nothing, and Microsoft is trying to reboot its business with cloud (Windows Azure), mobile (Nokia and Surface) and wearables (Microsoft Band). Carlo Purassanta, CEO of Microsoft Italia, was clear: Microsoft is changing. He had even gone for a run before the conference wearing a Microsoft Band.

carlo_purassanta

Non tech people are awesome

I never had great respect for non-technical people. “I can change the world, they can’t”: so much for modesty and respect. I grew up inside a really closed environment made of nerds and geeks, and I always thought non-technical people had nothing to give me. I was wrong. I was absolutely wrong. At the event people came from different fields: diplomacy, medicine, sales, biology, advertising, marketing, law and more. Each of them enriched me in some way. Non-tech people are awesome 🙂

non-tech-people

Creativity could be an analytical process

Sid Lee, the creative firm behind C2 Spark, described the process behind its most successful advertising campaigns. A lot of myths about creativity are just that, myths, and, with the right process, anyone can express their creativity.

creativity_process

Jumping on tables to move them is definitely cool

The time between talks and workshops, when the staff moves tables around, is usually boring. At C2 Spark the tables were moved by a parkour crew jumping and dancing around the room. It was absolutely useless but definitely cool! 😀

c2_parkour-2 c2_parkour-4 c2_parkour-1 c2_parkour-3

[UPDATE 2014-12-27 21:10 CET]

It seems a lot of people at Microsoft liked my article 🙂 Carlo Purassanta (CEO at Microsoft Italia), Carlo Rinaldi (Digital Marketing Group Leader at Microsoft Italia) and Chiara Mizzi (CMO at Microsoft Italia) shared it:

A few days ago I was at Codemotion in Milan and I had the opportunity to discover some insights about the technologies used by two of our main competitors in Italy: BlogMeter and Datalytics. It’s quite interesting because, even if the technical challenges are almost the same, each company uses a different approach with a different stack.

datalytics_logo

Datalytics is a relatively new company, founded 4 months ago. They had a desk at Codemotion to show their products and recruit new people. I chatted with Marco Caruso, the CTO (who probably didn’t know who I am; sorry Marco, I just wanted to avoid hostility 😉), about the technologies they use and the developer profile they were looking for. The required skills were:

Their tech team is composed of 4 developers (including the CTO) and the main products are Datalytics Monitoring™ (a sort of statistical dashboard that shows buzz stats in real time) and Datalytics Engage™ (a real-time analytics dashboard for live events). I have no technical insight into how their systems work, but I can guess some details by inferring them from the buzzwords they use.

The supported sources are Twitter, Facebook (only public data), Instagram, YouTube, Vine (their logos are on the website) and probably Pinterest.

They use DataSift as a data source in addition to the standard APIs. I suppose their processing pipeline uses Storm to manage the streaming input, maybe with an importing layer in front of it. Data is crunched using Hadoop and Java, and results are stored in MongoDB (Massimo Brignoli, the Italian MongoDB evangelist, advertised their company during his presentation, so I suppose they use it extensively).

Node.js is probably used for the frontend: it is fast enough for near real-time applications (also using websockets) and plays really well with both Angular.js and MongoDB (the MEAN stack). D3.js is obviously the only choice for complex dynamic charts.

I’m not so happy when I discover a new competitor in our market segment: competition gets harder, and that’s not fun. Anyway, the guys at Datalytics seem smart (and nice); competing with them will be a pleasure and will push me to do my best.

Now I’m curious to know whether Datalytics is monitoring the buzz on the web around its company name. I’m going to tweet about this article using the #Datalytics hashtag. If you find this article, please tweet me “Yes, we found it bwahaha” 😛

[UPDATE 2014-12-27 21:18 CET]

@DatalyticsIT favorited my tweet on December 1st. This probably means they found my article, but they didn’t read it! 😀