Facebook runs one of the largest hardware infrastructures in the world, with more than 60,000 servers. But servers are worthless if you can't use them efficiently. So how does Facebook work?
Facebook was born in 2004 on a LAMP stack: a PHP website with a MySQL datastore. Today it is powered by a complex set of interconnected systems that grew on top of that original stack. The main components are:
- Main persistence layer
- Data warehouse layer
- Facebook Images
- Facebook Messages and Chat
- Facebook Search
The frontend is written in PHP and compiled using HipHop (open sourced on GitHub) and g++. HipHop transforms PHP into C++ that can then be compiled. The Facebook frontend is a single ~1.5 GB binary deployed using BitTorrent, and it provides both the logic and the template system. Some parts live outside the binary and use the HipHop Interpreter (for development and debugging) and the HipHop Virtual Machine to run HipHop bytecode.
It is not clear which web server they use. Apache was used in the beginning and probably still serves the main frontend. In 2009 they acquired FriendFeed and with it Tornado, which they open sourced and now use for the real-time update features of the frontend. Static content (such as frontend images) is served through Akamai.
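Tornado is the easiest piece of that stack to illustrate. Below is a minimal sketch of a long-polling setup of the kind that could back real-time updates; it is not Facebook's code, and the handlers, URLs and in-memory state are all assumptions.

```python
# Minimal Tornado long-polling sketch (illustrative only, not Facebook's code).
import tornado.ioloop
import tornado.web
from tornado.locks import Condition

condition = Condition()
latest_update = None


class UpdatesHandler(tornado.web.RequestHandler):
    async def get(self):
        # Long polling: park the request until someone publishes an update.
        await condition.wait()
        self.write({"update": latest_update})


class PublishHandler(tornado.web.RequestHandler):
    async def post(self):
        global latest_update
        latest_update = self.get_argument("message")
        # Wake up every waiting long-poll request.
        condition.notify_all()
        self.write({"status": "ok"})


def make_app():
    return tornado.web.Application([
        (r"/updates", UpdatesHandler),
        (r"/publish", PublishHandler),
    ])


if __name__ == "__main__":
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()
```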
Everything that is not in the main binary (or in its bytecode sources) is exposed as a service over the Thrift protocol. Java services run without Tomcat or Jetty: the overhead those application servers introduce is not worth it for Facebook's architecture. Varnish is used to accelerate responses to HTTP requests.
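To show what a Thrift call looks like from the client side, here is a minimal Python sketch. The `user_service` module and its `getUser` method are hypothetical, standing in for whatever the Thrift compiler would generate from a real service definition.

```python
# Hypothetical Thrift client sketch; the generated UserService module is assumed.
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol

# 'user_service' would be generated by the Thrift compiler from a .thrift IDL file.
from user_service import UserService  # hypothetical generated code


def fetch_user(user_id):
    # Buffered socket transport to the remote service (host/port are assumptions).
    transport = TTransport.TBufferedTransport(TSocket.TSocket("service-host", 9090))
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = UserService.Client(protocol)

    transport.open()
    try:
        # The remote call is serialized with Thrift's binary protocol.
        return client.getUser(user_id)
    finally:
        transport.close()
```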
The Memcached infrastructure is built on more than 800 servers holding 28 TB of memory. They use a modified version of Memcached that reimplements the connection pool buffers (and relies on some patches to the Linux network layer) to better handle hundreds of thousands of connections.
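The pattern running on top of those servers is the classic cache-aside lookup. A minimal sketch with the pymemcache client follows; the key scheme and the `load_user_from_mysql` helper are assumptions, not Facebook's code.

```python
# Cache-aside sketch using pymemcache (key scheme and loader are assumptions).
import json
from pymemcache.client.base import Client

cache = Client(("memcached-host", 11211))


def load_user_from_mysql(user_id):
    # Placeholder for the real MySQL query.
    return {"id": user_id, "name": "example"}


def get_user(user_id):
    key = "user:%d" % user_id
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    # Cache miss: read from MySQL, then populate memcached with a TTL.
    user = load_user_from_mysql(user_id)
    cache.set(key, json.dumps(user), expire=300)
    return user
```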
Insights and Sources:
- [ArsTechnica] Exclusive: a behind-the-scenes look at Facebook release engineering
- [Facebook Developers] HipHop for PHP: Move Fast
- [Facebook Note] Making HPHPi Faster
- [Facebook Note] The HipHop Virtual Machine
- [Facebook Engineering] BigPipe: Pipelining web pages for high performance
- [Facebook Note] Scaling memcached at Facebook
- [Facebook Developers] Tornado: Facebook’s Real-Time Web Framework for Python
2. Main persistence layer
Facebook relies on MySQL as its main datastore. I know, I can't believe it either…
They run one of the largest MySQL deployments in the world and use the standard InnoDB engine. Like many others, they designed the system to handle MySQL failures easily: they simply enforce backups. Recently Facebook engineers published a post about their three-stage backup stack:
- Stage 1: each node of the cluster (the whole replica set) has two rack backups: one holds the binary logs (to speed up slave promotion) and one holds a mysqldump snapshot taken every night (to handle node failure).
- Stage 2: binlogs and backups are copied to a larger Hadoop cluster backed by HDFS. This layer provides a fast, distributed source of data that can be used to restore nodes even in different locations (a rough sketch of this copy step follows the list).
- Stage 3: provides long-term storage by copying data from the Hadoop cluster to discrete storage in a separate region.
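As referenced in Stage 2 above, shipping dumps and binlogs into HDFS could look roughly like the sketch below. The paths and the use of the `hadoop fs` CLI are assumptions, not details from Facebook's post.

```python
# Hypothetical Stage 2 sketch: ship nightly dumps and binlogs to HDFS.
import glob
import subprocess

LOCAL_BACKUP_DIR = "/backups/mysql"           # assumed local path
HDFS_BACKUP_DIR = "/warehouse/mysql_backups"  # assumed HDFS path


def copy_to_hdfs(local_path, hdfs_dir):
    # Upload a file with the standard 'hadoop fs -put' command (-f overwrites).
    subprocess.check_call(["hadoop", "fs", "-put", "-f", local_path, hdfs_dir])


def ship_backups():
    for path in glob.glob(LOCAL_BACKUP_DIR + "/*.sql.gz"):
        copy_to_hdfs(path, HDFS_BACKUP_DIR + "/dumps/")
    for path in glob.glob(LOCAL_BACKUP_DIR + "/binlog.*"):
        copy_to_hdfs(path, HDFS_BACKUP_DIR + "/binlogs/")


if __name__ == "__main__":
    ship_backups()
```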
Facebook collects several statistics about failures, traffic and backups. These statistics contribute to a "score" for each node. If a node fails, or its score drops too low, an automatic system restores it from Stage 1 or Stage 2. They don't try to avoid problems; they simply handle them as fast as possible.
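Purely as an illustration of that scoring idea (the real metrics, weights and thresholds are not public, so every number below is invented), a node-health check might look like this:

```python
# Hypothetical node-health scoring sketch; metrics, weights and thresholds are invented.
RESTORE_THRESHOLD = 0.5


def node_score(failure_rate, replication_lag_s, backup_age_h):
    # Fewer failures, less lag and fresher backups give a higher score.
    score = 1.0
    score -= min(failure_rate * 2.0, 0.5)
    score -= min(replication_lag_s / 600.0, 0.3)
    score -= min(backup_age_h / 48.0, 0.2)
    return max(score, 0.0)


def pick_restore_source(node_is_up):
    # Prefer the fast rack-local backups (Stage 1), fall back to HDFS (Stage 2).
    return "stage1" if node_is_up else "stage2"


if __name__ == "__main__":
    score = node_score(failure_rate=0.3, replication_lag_s=400, backup_age_h=30)
    if score < RESTORE_THRESHOLD:
        print("restore from", pick_restore_source(node_is_up=False))
```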
Insights and Sources:
- [Facebook Engineering] Under the Hood: Automated backups
- [Gigaom] Facebook shares some secrets on making MySQL scale
3. Data warehouse layer
MySQL data is also moved to "cold" storage to be analyzed. This storage is based on Hadoop HDFS (with HBase layered on top) and is queried using Hive. Data such as logs, clicks and feeds travel through Scribe and are aggregated and stored in Hadoop HDFS using Scribe-HDFS.
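To give an idea of how such a warehouse is queried, here is a small sketch using the PyHive client; the table, columns and connection details are hypothetical, and PyHive is just one of several ways to talk to a Hive server.

```python
# Hypothetical Hive query sketch using PyHive (table and columns are assumptions).
from pyhive import hive


def clicks_per_day(start_date, end_date):
    conn = hive.Connection(host="hive-server", port=10000, database="warehouse")
    cursor = conn.cursor()
    # Aggregate click events imported from Scribe logs (schema is invented).
    query = (
        "SELECT ds, COUNT(*) AS clicks FROM click_events "
        "WHERE ds BETWEEN '{}' AND '{}' GROUP BY ds".format(start_date, end_date)
    )
    cursor.execute(query)
    return cursor.fetchall()


if __name__ == "__main__":
    for day, clicks in clicks_per_day("2012-11-01", "2012-11-07"):
        print(day, clicks)
```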
The standard Hadoop HDFS implementation has some limitations. Facebook added a different kind of NameNode, called the AvatarNode, which uses Apache ZooKeeper to handle failover. They also added a RaidNode that uses XOR-parity files to decrease storage usage.
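The XOR-parity idea behind the RaidNode is easy to show on its own: keep one parity block per group of data blocks, so a single missing block can be rebuilt instead of being stored as a full extra replica. A toy sketch, not the real HDFS RAID code:

```python
# Toy XOR-parity sketch illustrating the idea behind HDFS RAID (not the real RaidNode).
def xor_blocks(blocks):
    # Parity block: byte-wise XOR of all data blocks (assumed equal length).
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)


def recover_block(surviving_blocks, parity):
    # XOR of the parity with the surviving blocks gives back the missing one.
    return xor_blocks(list(surviving_blocks) + [parity])


if __name__ == "__main__":
    blocks = [b"aaaa", b"bbbb", b"cccc"]
    parity = xor_blocks(blocks)
    # Pretend the second block was lost and rebuild it.
    rebuilt = recover_block([blocks[0], blocks[2]], parity)
    assert rebuilt == blocks[1]
    print("recovered:", rebuilt)
```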
Analyses are done using MapReduce. However, analyzing several petabytes of data is hard, and the standard Hadoop MapReduce framework became a bottleneck in 2011. So they developed Corona, a new scheduling framework that separates cluster resource management from job coordination. Results are stored in Oracle RAC (Real Application Cluster).
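To make the MapReduce part concrete, here is a minimal Hadoop Streaming style job in Python that counts events per day; the input format is an assumption, and this example has nothing to do with Corona itself.

```python
# Minimal Hadoop Streaming style map/reduce sketch (input format is an assumption).
from itertools import groupby


def mapper(lines):
    # Each input line is assumed to be "<date>\t<event payload>".
    for line in lines:
        date = line.split("\t", 1)[0]
        yield date, 1


def reducer(pairs):
    # Pairs arrive sorted by key, as Hadoop's shuffle phase guarantees.
    for date, group in groupby(pairs, key=lambda kv: kv[0]):
        yield date, sum(count for _, count in group)


if __name__ == "__main__":
    # Local demo: map, sort (standing in for the shuffle), then reduce.
    sample = ["2012-11-01\tclick", "2012-11-02\tlike", "2012-11-01\tclick"]
    mapped = sorted(mapper(sample))
    for date, total in reducer(mapped):
        print(date, total)
```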
Insights and Sources:
- [Facebook Note] Hadoop
- [Facebook Note] Looking at the code behind our three uses of Apache Hadoop
- [Facebook Note] Hive – A Petabyte Scale Data Warehouse using Hadoop
- [Hadoop Blog] HDFS Scribe Integration
- [Axon Flux] How Facebook uses Scribe, Hadoop, and Hive for Analytics, Ad hoc analysis, Spam detection and Ad Optimization
- [Facebook Engineering] Under the Hood: Scheduling MapReduce jobs more efficiently with Corona
- [Facebook Engineering] Under the Hood: Hadoop Distributed Filesystem reliability with Namenode and Avatarnode
- [Gigaom] How Facebook keeps 100 petabytes of Hadoop data online
- Hadoop and Hive Development at Facebook (PDF)
In the next post I'm going to analyze the other Facebook services: Images, Messages and Search.