Paper: The Google File System

This is “the paper” that started everything in the big data world. By 2003, Google was already facing problems of data size and availability that most of us still haven’t encountered. They developed a proprietary distributed file system called GFS. A couple of years later, Yahoo built HDFS, the distributed file system that is part of the Hadoop framework, inspired by this paper. As Hadoop co-creator Doug Cutting (@cutting) put it: “Google is living a few years in the future and sending the rest of us messages”.



Title: The Google File System (PDF), October 2003
Authors: Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung

We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients.

While sharing many of the same goals as previous distributed file systems, our design has been driven by observations of our application workloads and technological environment, both current and anticipated, that reflect a marked departure from some earlier file system assumptions. This has led us to reexamine traditional choices and explore radically different design points.

The file system has successfully met our storage needs. It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets. The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients.

In this paper, we present file system interface extensions designed to support distributed applications, discuss many aspects of our design, and report measurements from both micro-benchmarks and real world use.
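The “interface extensions” mentioned in the abstract are snapshot (a low-cost copy of a file or directory tree) and record append (an atomic append at an offset chosen by GFS). Below is a minimal sketch of what a GFS-style client interface might look like; the Java names and signatures are hypothetical, invented for illustration, not Google’s actual API.

```java
import java.io.IOException;

// Hypothetical GFS-style client interface; names and signatures are
// invented for illustration, based on operations described in the paper.
public interface GfsClient {
    // Standard operations, analogous to a conventional file system.
    byte[] read(String path, long offset, int length) throws IOException;
    void write(String path, long offset, byte[] data) throws IOException;

    // Snapshot: creates a copy of a file or directory tree at low cost.
    void snapshot(String sourcePath, String targetPath) throws IOException;

    // Record append: GFS appends the data atomically at least once, at an
    // offset of its own choosing, and returns that offset to the client.
    // Many clients can append to the same file concurrently without extra
    // locking, which suits use cases like multi-way merge results and
    // producer-consumer queues.
    long recordAppend(String path, byte[] data) throws IOException;
}
```

Record append is the key departure from a traditional write: because GFS picks the offset, concurrent appenders never have to coordinate among themselves.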


Check out the list of interesting papers and projects (GitHub).