Over the last year I refined my RSS collection about big data, data science and analytics. I usually check it every day to discover a ton of cool new technologies and have fun. Here is the updated list.
News about emerging technologies, scalability and data
Data companies, social networks and search engines
Companies supporting and distributing big-data processing products
Recently I discovered the awesome data science list, which contains a list of interesting bloggers I haven’t had time to check yet. You can surely find something more in it. I’ll try to publish an update when I’ve checked it.
[UPDATE 2014-09-22 11:35]
Thanks to @onurakpolat for correcting my link to the awesome data science list. The previous link was to his fork; the original repo is https://github.com/okulbilisim/awesome-datascience by @okulbilisim
The big-data environment at the moment is really “collaborative”: each project is ready to run on almost every available platform, and this is good. However, two factions have recently been forming: people who use the Hadoop 2.0 stack and people who use the BDAS.
Hadoop 2.0 Stack
The most important difference between Hadoop 2.0 and previous versions is YARN, the new cluster resource manager and next-generation MapReduce. It can run almost every kind of big-data project:
- Traditional MapReduce (the new version is backward compatible), with Hive, Pig or Cascading for querying
- Interactive, near-real-time MapReduce (using Tez)
- HBase and Accumulo
- Storm and S4 for stream processing
- Giraph for graph processing
- OpenMPI for message passing
- Spark as In-memory Map Reduce
- HDFS as distributed filesystem
The most interesting companies here are Intel, Cloudera, MapR and Hortonworks.
BDAS (Berkeley Data Analytics Stack)
In the BDAS everything is built around Mesos, the cluster resource manager. It’s a relatively new project but it is already widely used. Traditional HDFS is accelerated by Tachyon (an in-memory file system). The main integration point is Spark, which serves as the base for most of the higher-level components of the stack.
Mesos can also run a traditional Hadoop environment and other projects (such as Storm and OpenMPI), and you can run traditional applications (even Rails apps) using Marathon.
The most interesting companies here are Databricks and Mesosphere.
Who will win? 😀
Last week I found this diagram on @al3xandru‘s MyNoSQL blog and I was surprised by how many pieces of software I had never heard of before.
The diagram is missing many other products, such as NuoDB (NewSQL), Aerospike (Key-Value), Titan (Graph), FoundationDB (Key-Value), Apache Accumulo (Key-Value), Apache Giraph (Graph) and more, and it includes some companies (like Cloudera, MapR and Xeround) even when they didn’t develop a custom version but just fork and maintain the main one.
Anyway, it seems one of the best visual representations of the current database world, and I’m going to use it as the base for an updated and more detailed version 😉
I always underestimated Google’s contribution to the evolution of big-data processing. I used to think that Google only manages and shows some search results. Not so much data. Not as much as Facebook or Twitter, at least…
Obviously I was wrong. Google has to manage a HUGE amount of data, and big-data processing was already a problem in 2002! Its contribution to current processing technologies such as Hadoop, its filesystem HDFS, and HBase was fundamental.
We can split this contribution into two periods. The first (from 2003 to 2008) influenced the technologies we are using today. The second (from 2009 until today) is influencing the products we are going to use in the near future.
The first period gave us:
- GFS, the Google FileSystem (PDF paper), a scalable distributed file system for large distributed data-intensive applications, which later inspired HDFS
- BigTable (PDF paper), a column-oriented database designed to store petabytes of data across large clusters, which later inspired HBase
- the concept of MapReduce (PDF paper), a programming model to process large datasets distributed across large clusters. Hadoop implements this programming model on top of HDFS or similar filesystems.
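The MapReduce programming model is easy to sketch in miniature. Below is a toy, single-process Python simulation (nothing like Hadoop’s actual API; all function names are mine): the user supplies a map function and a reduce function, and the “framework” handles the shuffle in between, grouping intermediate values by key.

```python
from collections import defaultdict

def map_fn(document):
    # Map: emit an intermediate (word, 1) pair for every word in the record.
    for word in document.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Reduce: combine all partial counts for a single word.
    return word, sum(counts)

def map_reduce(records, map_fn, reduce_fn):
    groups = defaultdict(list)
    for record in records:                 # map phase
        for key, value in map_fn(record):
            groups[key].append(value)      # shuffle: group values by key
    # reduce phase: one reduce call per distinct key
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

if __name__ == "__main__":
    docs = ["big data is big", "data science"]
    print(map_reduce(docs, map_fn, reduce_fn))
    # {'big': 2, 'data': 2, 'is': 1, 'science': 1}
```

On a real cluster the map calls, the shuffle and the reduce calls are each distributed across many machines and made fault-tolerant; the programming model the user sees, however, is exactly this simple.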
This series of papers revolutionized data-warehouse strategies, and now all the largest companies use products, inspired by these papers, that we all know.
The second period is less well known at the moment. Google faced many limits in its previous infrastructure and tried to fix them and move ahead. This effort gave us many other technologies, some of them not yet completely public:
- Caffeine, a new search infrastructure which uses GFS2, next-generation MapReduce and next-generation BigTable.
- Colossus, formerly known as Google FileSystem 2, the next-generation GFS.
- Spanner (PDF paper), a scalable, multi-version, globally-distributed, and synchronously-replicated database, the NewSQL evolution of BigTable.
- Dremel (PDF paper), a scalable, near-real-time ad-hoc query system for the analysis of read-only nested data, and its implementation for GAE: BigQuery.
- Percolator (PDF paper), a platform for incremental processing which continually updates the search index.
- Pregel (PDF paper), a system for large-scale graph processing, doing for graph data what MapReduce does for columnar data.
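To give an idea of Pregel’s vertex-centric “think like a vertex” model, here is a toy, single-machine Python sketch (the real system is distributed and runs supersteps in bulk-synchronous fashion; all names here are mine, for illustration). In each superstep every vertex reads its incoming messages, possibly updates its value, and messages its neighbors; computation halts when no messages are in flight. Propagating the maximum vertex id, as below, labels the connected components of the graph.

```python
def send_all(graph, values, active):
    # Deliver each active vertex's current value to all of its neighbors.
    msgs = {v: [] for v in graph}
    for v in active:
        for n in graph[v]:
            msgs[n].append(values[v])
    return msgs

def pregel_components(graph):
    # graph: undirected adjacency dict {vertex: [neighbors]}
    values = {v: v for v in graph}               # each vertex starts with its own id
    msgs = send_all(graph, values, set(graph))   # superstep 0: all vertices active
    while any(msgs.values()):
        changed = set()
        for v, inbox in msgs.items():
            if inbox and max(inbox) > values[v]:
                values[v] = max(inbox)           # adopt the largest id seen so far
                changed.add(v)
        # only vertices whose value changed send messages in the next superstep
        msgs = send_all(graph, values, changed)
    return values

if __name__ == "__main__":
    g = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
    print(pregel_components(g))
    # {1: 3, 2: 3, 3: 3, 4: 5, 5: 5}  (two components, labelled 3 and 5)
```

The key design idea, compared to running graph algorithms as chained MapReduce jobs, is that the graph stays in place across supersteps and only small messages move, instead of rewriting the whole dataset at every iteration.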
Now the market is different from 2002. Many companies, such as Cloudera and MapR, are working hard on big data, and the Apache Foundation as well. Anyway, Google has a 10-year head start and its technologies are still stunning.
Probably many of these papers are going to influence the next 10 years. The first results are already here: Apache Drill and Cloudera Impala implement the Dremel paper specification, Apache Giraph implements the Pregel one, and HBase Coprocessors the Percolator one.
And these are just a few examples; a Google search can show you more 😉