- MySQL vs. PostgreSQL, Part 1: Table Organization
- Scalable System Design Patterns
- Map Reduce and Stream Processing
- Adopting Apache Cassandra
- Scaling Up by Scaling Down: Successful Agile Adoption in the Large by Focusing on the Individual
- NoCAP – Part III – GigaSpaces clustering explained
- Robert Greene on “New and Old Data stores”
I haven’t posted at all in the last few months, so it’s time to share some useful links about Cloud Computing and the technologies around it.
Useful Cloud Computing blogs from the High Scalability website
And last but not least, one interesting post discussing Google’s Cloud approach from x86Virtualization.
For research-minded readers, I recommend a lecture offered by Ashraf Aboulnaga from the University of Waterloo:
Update (22.09.2008) – links:
Google is developing its own distributed storage system with a really interesting structure. It is described in the paper:
“Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this paper we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.”
Bigtable: A Distributed Storage System for Structured Data and Google presentation at University of Washington:
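The paper describes Bigtable’s data model as a sparse, sorted, multidimensional map indexed by row key, column key, and timestamp. A toy in-memory sketch of that indexing scheme is below; the class and method names are my own illustration, not Google’s actual API:

```python
from collections import defaultdict

class ToyBigtable:
    """Toy sketch of Bigtable's data model:
    (row, column, timestamp) -> value, with rows kept in sorted order."""

    def __init__(self):
        # row -> column -> {timestamp: value}
        self._rows = defaultdict(lambda: defaultdict(dict))

    def put(self, row, column, timestamp, value):
        self._rows[row][column][timestamp] = value

    def get(self, row, column):
        # Return the most recent version of the cell, if any.
        versions = self._rows[row][column]
        return versions[max(versions)] if versions else None

    def scan_rows(self, start_row, end_row):
        # Range scan over the sorted row space (half-open interval).
        return [r for r in sorted(self._rows) if start_row <= r < end_row]

t = ToyBigtable()
# Row keys in the paper's web-indexing example are reversed URLs.
t.put("com.cnn.www", "contents:", 3, "<html>v3</html>")
t.put("com.cnn.www", "contents:", 5, "<html>v5</html>")
print(t.get("com.cnn.www", "contents:"))  # newest version: <html>v5</html>
```

The per-cell timestamp map is what lets Bigtable keep multiple versions of the same cell, and the sorted row order is what makes row-range scans cheap.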
Update: Thanks to a friend, I got to know Google’s MapReduce model for simplified data processing on large clusters.
“MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google’s clusters every day, processing a total of more than twenty petabytes of data per day. “
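The model quoted above can be sketched in a few lines of Python using the canonical word-count example. This is only a single-machine simulation of the map, shuffle, and reduce phases; the real system parallelizes them across a cluster:

```python
from collections import defaultdict

# Map: emit (word, 1) for every word in a document.
def map_fn(doc):
    for word in doc.split():
        yield (word, 1)

# Reduce: sum all counts emitted for one word.
def reduce_fn(word, counts):
    return (word, sum(counts))

def mapreduce(docs, map_fn, reduce_fn):
    # Map phase + shuffle: group intermediate values by key.
    groups = defaultdict(list)
    for doc in docs:
        for key, value in map_fn(doc):
            groups[key].append(value)
    # Reduce phase: one reduce call per distinct key.
    return dict(reduce_fn(k, v) for k, v in groups.items())

docs = ["the quick brown fox", "the lazy dog"]
print(mapreduce(docs, map_fn, reduce_fn))
# {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

The point of the abstraction is that the user only writes `map_fn` and `reduce_fn`; the runtime owns partitioning, scheduling, and fault tolerance.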
Update: Overview of Google’s system architecture from the Seattle Conference on Scalability 2007.
Jeff Dean, Google, Inc.:
Update (April 2008 presentation & slides): Behind The Scenes of Google Scalability