I always underestimated Google's contribution to the evolution of big-data processing. I used to think that Google only managed and displayed some search results. Not that much data. Not as much as Facebook or Twitter, at least…
Obviously I was wrong. Google has to manage a HUGE amount of data, and big-data processing was already a problem back in 2002! Its contribution to current processing technologies such as Hadoop, its filesystem HDFS, and HBase was fundamental.
We can split this contribution into two periods. The first (from 2003 to 2008) influenced the technologies we are using today. The second (from 2009 until today) is influencing the products we are going to use in the near future.
The first period gave us:
- GFS, the Google File System (PDF paper), a scalable distributed file system for large distributed data-intensive applications, which later inspired HDFS
- BigTable (PDF paper), a column-oriented database designed to store petabytes of data across large clusters, which later inspired HBase
- the concept of MapReduce (PDF paper), a programming model for processing large datasets distributed across large clusters. Hadoop implements this programming model on top of HDFS or similar filesystems.
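To make the model above concrete, here is a minimal single-process sketch of MapReduce counting words. It is a hypothetical illustration only: the function names (`map_phase`, `reduce_phase`, `map_reduce`) are mine, and a real framework like Hadoop would distribute these phases across a cluster and handle shuffling, partitioning, and fault tolerance.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit intermediate (key, value) pairs, here ("word", 1) per word.
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(key, values):
    # Reduce: aggregate every value emitted for the same key.
    return (key, sum(values))

def map_reduce(documents):
    # Shuffle: group intermediate pairs by key before reducing.
    # In a real cluster this grouping happens across machines.
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_phase(doc):
            groups[key].append(value)
    return dict(reduce_phase(k, v) for k, v in groups.items())

counts = map_reduce(["big data is big", "data everywhere"])
# counts["big"] == 2, counts["data"] == 2
```

The appeal of the model is that the programmer only writes the two small functions; the framework takes care of everything distributed.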
This series of papers revolutionized data-warehouse strategies, and now all of the largest companies use products, inspired by these papers, that we all know.
The second period is less well known at the moment. Google ran into many limits of its previous infrastructure and worked to overcome them and move ahead. This effort gave us many other technologies, some of them not yet completely public:
- Caffeine, a new search infrastructure that uses GFS2, a next-generation MapReduce, and a next-generation BigTable.
- Colossus, formerly known as Google File System 2, the next-generation GFS.
- Spanner (PDF paper), a scalable, multi-version, globally-distributed, and synchronously-replicated database, the NewSQL evolution of BigTable.
- Dremel (PDF paper), a scalable, near-real-time ad-hoc query system for the analysis of read-only nested data, and its implementation for GAE: BigQuery.
- Percolator (PDF paper), a platform for incremental processing that continually updates the search index.
- Pregel (PDF paper), a system for large-scale graph processing, doing for graphs what MapReduce does for columnar data.
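Pregel's "think like a vertex" model is worth a quick sketch. The toy example below propagates the maximum value through a graph in supersteps: each vertex reads its incoming messages, updates its value if it learned something new, and messages its neighbors; the run halts when no messages remain. This is a hypothetical single-process illustration (the `pregel_max` name and the dict-based graph encoding are mine), not Pregel's actual API, which runs vertices in parallel across a cluster.

```python
def pregel_max(graph, values):
    # graph: {vertex: [out-neighbors]}; values: {vertex: number}.
    values = dict(values)
    # Superstep 0: every vertex announces its value to its neighbors.
    messages = {v: [] for v in graph}
    for vertex in graph:
        for neighbor in graph[vertex]:
            messages[neighbor].append(values[vertex])
    # Run supersteps until no vertex has anything left to say
    # (Pregel's "vote to halt" condition).
    while any(messages.values()):
        outbox = {v: [] for v in graph}
        for vertex, inbox in messages.items():
            if inbox and max(inbox) > values[vertex]:
                # Learned a larger value: adopt it and tell the neighbors.
                values[vertex] = max(inbox)
                for neighbor in graph[vertex]:
                    outbox[neighbor].append(values[vertex])
        messages = outbox  # the next superstep's inboxes
    return values

graph = {"a": ["b"], "b": ["c"], "c": ["a"]}
result = pregel_max(graph, {"a": 1, "b": 5, "c": 3})
# every vertex in the cycle converges to the maximum, 5
```

The same superstep-plus-messages pattern expresses PageRank, shortest paths, and connected components, which is why the model caught on for graph workloads where MapReduce is awkward.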
The market today is different from 2002. Many companies such as Cloudera and MapR are working hard on big data, and the Apache Foundation as well. Still, Google has a 10-year head start, and its technologies are still stunning.
Probably many of these papers will influence the next 10 years. The first results are already here: Apache Drill and Cloudera Impala implement the Dremel paper's specification, Apache Giraph implements the Pregel one, and HBase Coprocessors the Percolator one.
And these are just some examples; a Google search can show you more 😉