Wednesday, May 27, 2009

Why you won't be building your killer app on a distributed hash table

I ran across A case study in building layered DHT applications while doing some research on implementing load-balancing in Cassandra. The question addressed is, "Are DHTs a general-purpose tool that you can build more sophisticated services on?"

Short version: no. A few specialized applications can be, and have been, built on a plain DHT, but most applications built on DHTs have ended up customizing the DHT's internals to meet their functional or performance goals.

This paper describes the results of attempting to build a relatively complex data structure (prefix hash trees, for range queries) on top of OpenDHT. The result was mostly failure:

A simple put-get interface was not quite enough. In particular, OpenDHT relies on timeouts to invalidate entries and has no support for atomicity primitives... In return for ease of implementation and deployment, we sacrificed performance. With the OpenDHT implementation, a PHT query operation took a median of 2–4 seconds. This is due to the fact that layering entirely on top of a DHT service inherently implies that applications must perform a sequence of put-get operations to implement higher level semantics with limited opportunity for optimization within the DHT.
In other words, there are two primary problems with the DHT approach:
  • Most DHTs will require a second locking layer to achieve correctness when implementing a more complex data structure on top of the DHT semantics. In particular, this will certainly apply to eventually-consistent systems in the Dynamo mold.
  • Advanced functionality like range queries needs to be supported natively to be at all efficient.
While they spin this in a positive manner -- "hey, at least it didn't take much code" -- the reality is that for most of us, query latency of two to four seconds is several orders of magnitude away from acceptable.
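To make that latency argument concrete, here's a toy sketch of what "a sequence of put-get operations" ends up looking like. The bucket-per-prefix index and the FakeDHT client below are my own stand-ins (not OpenDHT's API or the paper's actual prefix hash tree), but the shape of the problem is the same: every bucket that might overlap the range costs a dependent round trip that the DHT has no opportunity to optimize.

    import time

    class FakeDHT:
        """Stand-in for a put/get-only DHT client; each get() is one round trip."""
        def __init__(self, latency=0.05):
            self.store, self.latency, self.gets = {}, latency, 0
        def put(self, key, value):
            self.store[key] = value
        def get(self, key):
            self.gets += 1
            time.sleep(self.latency)              # simulated network round trip
            return self.store.get(key)

    def range_query(dht, low, high):
        """Return (key, value) pairs with low <= key <= high; keys look like 'kNNNN'."""
        out = []
        # One bucket per block of ten keys; every bucket that might overlap the
        # range is a separate, dependent get() the DHT cannot batch or optimize.
        for bucket in range(int(low[1:4]), int(high[1:4]) + 1):
            rows = dht.get("bucket:%03d" % bucket) or []
            out.extend((k, v) for k, v in rows if low <= k <= high)
        return sorted(out)

    if __name__ == "__main__":
        dht = FakeDHT()
        for i in range(1000):                     # load 1000 keys, ten per bucket
            key = "k%04d" % i
            bucket_key = "bucket:" + key[1:4]
            dht.put(bucket_key, (dht.store.get(bucket_key) or []) + [(key, i)])
        start = time.time()
        rows = range_query(dht, "k0230", "k0470")
        print("%d rows via %d gets in %.2fs" % (len(rows), dht.gets, time.time() - start))

With 25 buckets at 50ms each, even this toy query takes over a second -- and a real prefix hash tree walk does more lookups than this, which is how you end up at 2-4 seconds.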

This is one reason why I think Cassandra is the most promising of the open-source distributed databases -- you get a relatively rich data model and a distribution model that supports efficient range queries. These are not things that can be grafted on top of a simpler DHT foundation, so Cassandra will be useful for a wider variety of applications.

Monday, May 18, 2009

Belated 2009 Introduction to SQLAlchemy slides

I was asked to put my slides up again -- sorry it took so long. The slides and code samples are now up here. Video of the tutorial is also up (three parts; the first is linked). There are definitely audio problems in places, but at least some of it is watchable.

Wednesday, May 13, 2009

Cassandra 0.3 release candidate and progress

We have a release candidate out for Cassandra 0.3. Grab the download and check out how to get started. The Facebook presentation from almost a year ago is still a good intro to some of the features and the data model.

Cassandra in a nutshell:

  • Scales writes very, very well: just add more nodes!
  • Has a much richer data model than vanilla key/value stores -- closer to what you'd be used to in a relational db.
  • Is pretty bleeding edge -- to my knowledge, Facebook is the only group running Cassandra in production. (Their largest cluster is 120 machines and 40TB of data.) At Rackspace we are working on a Cassandra-based app now that 0.3 has the extra features we need.
  • Moved to the Apache Incubator about 40 days ago, at which point development greatly accelerated.
Changes in 0.3 include:
  • Range queries on keys, including user-defined key collation.
  • Remove support, which is nontrivial in an eventually consistent world.
  • Workaround for a weird bug in JDK select/register that seems particularly common in virtualized environments. Cassandra should deploy fine on EC2 now. (Oddly, it never had problems on Slicehost / Cloud Servers, which are also Xen-based.)
  • Much improved infrastructure: the beginnings of a decent test suite ("ant test" for unit tests; "nosetests" for system tests), code coverage reporting, etc.
  • Expanded node status reporting via JMX
  • Improved error reporting/logging on both server and client
  • Reduced memory footprint in default configuration
  • and plenty of bug fixes.
For those of you just joining us, Cassandra already had:
  • An advanced on-disk storage engine that never does random writes (sketched below, after this list)
  • Transaction log-based data integrity
  • P2P gossip failure detection
  • Read repair
  • Hinted handoff
  • Bootstrap (adding new nodes to a running cluster)
(Read repair and hinted handoff are discussed in more detail in the Dynamo paper.)
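For those curious what "never does random writes" means in practice, here's a toy sketch of the general log-structured approach (in Python rather than Cassandra's Java, and greatly simplified): every write is a sequential append to a commit log plus an in-memory update, and the in-memory table is periodically flushed to an immutable, sorted file.

    # Toy sketch of a log-structured write path: appends only, no random I/O.
    # This illustrates the general approach (commit log + in-memory table
    # flushed to sorted files), not Cassandra's actual implementation.

    import os

    class ToyStore:
        def __init__(self, data_dir, flush_threshold=1000):
            os.makedirs(data_dir, exist_ok=True)
            self.data_dir = data_dir
            self.flush_threshold = flush_threshold
            self.memtable = {}                    # in-memory table, sorted on flush
            self.log = open(os.path.join(data_dir, "commit.log"), "a")
            self.sstable_count = 0

        def write(self, key, value):
            # 1. Append to the commit log for durability (sequential I/O only).
            self.log.write("%s\t%s\n" % (key, value))
            self.log.flush()
            os.fsync(self.log.fileno())
            # 2. Update the in-memory table.
            self.memtable[key] = value
            # 3. When the memtable is big enough, flush it as an immutable,
            #    sorted file -- again written sequentially, never updated in place.
            if len(self.memtable) >= self.flush_threshold:
                self._flush()

        def _flush(self):
            path = os.path.join(self.data_dir, "data-%d.sstable" % self.sstable_count)
            with open(path, "w") as f:
                for key in sorted(self.memtable):
                    f.write("%s\t%s\n" % (key, self.memtable[key]))
            self.sstable_count += 1
            self.memtable.clear()

    if __name__ == "__main__":
        store = ToyStore("/tmp/toystore", flush_threshold=100)
        for i in range(250):
            store.write("key%04d" % i, "value%d" % i)

Reads then consult the memtable plus the sorted files; the important property for write throughput is that nothing above ever seeks back into an existing file.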

The Cassandra development and user community is also growing at an exciting pace. Besides the original two developers from Facebook, we now have five developers regularly contributing improvements and fixes, and many others contributing on a more ad hoc basis.

How fast is it?

In a nutshell, Cassandra is much faster than relational databases, and much slower than memory-only systems or systems that don't sync each update to disk. Actual benchmarks are in the works. We plan to start performance tuning with the next release, but if you want to benchmark it, here are some suggestions to get numbers closer to what you'll see in the wild (and about 10x more throughput than if you don't do these):

  • Do enough warm-up runs of your benchmark first that each operation tested by your suite executes 20k times before you time it for real (a sketch of such a harness follows this list). This gives the JVM JIT time to compile the hot paths down to machine code; otherwise you'll just be measuring the interpreted version.
  • Change the root logger level in conf/log4j.properties from DEBUG to INFO; we do a LOT of logging for debuggability, and for small column values the logging has more overhead than the actual workload. (It would be faster still if the debug statements were removed entirely, but that didn't make this release.)
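To illustrate the first suggestion, here's roughly what a warmed-up benchmark harness looks like. The insert_one function is a placeholder for whatever operation you're measuring through your client, not a real API call.

    # Sketch of a benchmark harness that warms up the JVM before timing.
    # 'insert_one' is a placeholder for the operation under test (e.g. a
    # single insert through your Thrift client); substitute your own.

    import time

    WARMUP_OPS = 20000      # enough iterations for the JIT to compile the hot path

    def benchmark(insert_one, timed_ops=10000):
        for i in range(WARMUP_OPS):            # warm-up: results are discarded
            insert_one(i)
        start = time.time()
        for i in range(timed_ops):             # the runs you actually report
            insert_one(i)
        elapsed = time.time() - start
        print("%.0f ops/sec" % (timed_ops / elapsed))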

Friday, May 01, 2009

A better analysis of Cassandra than most

Vladimir Sedach wrote a three-part dive into Cassandra. (Almost two months ago now. Guess I need to set up Google Alerts. Trouble is, there's a surprising amount of noise around the word `cassandra`.) A few notes:
  • We now have an order-preserving partitioner as well as the hash-based one
  • Yes, if you tell Cassandra to wait for all replicas to be ack'd before calling a write a success, then you get traditional consistency (as opposed to "eventual"), but you also get no tolerance for hardware failures, which is one of the main points of this kind of system (see the sketch after this list).
  • Zookeeper is not currently used by Cassandra, although we have plans to use it in the future.
  • Load balancing is not implemented yet.
  • The move to Apache is finished and development is active there now.
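To expand on the consistency point: in the N/R/W terms of the Dynamo paper (this is the model's arithmetic, not a Cassandra API), you get traditional consistency whenever the read set and write set must overlap, i.e. R + W > N -- but waiting for all N replicas means a single dead node fails your writes.

    # Sketch of the replica-count arithmetic behind "wait for all replicas".
    # N/R/W terminology is from the Dynamo paper, not a Cassandra API.

    def is_strongly_consistent(n, w, r):
        """Reads see the latest write when the read and write sets must overlap."""
        return r + w > n

    def write_failures_tolerated(n, w):
        """How many dead replicas a write can tolerate and still succeed."""
        return n - w

    for n, w, r in [(3, 3, 1),   # wait for all replicas: consistent, but fragile
                    (3, 2, 2),   # quorum reads and writes: consistent, 1 failure ok
                    (3, 1, 1)]:  # fastest, but only eventually consistent
        print("N=%d W=%d R=%d consistent=%s tolerates=%d failed node(s) on write"
              % (n, w, r, is_strongly_consistent(n, w, r), write_failures_tolerated(n, w)))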

Consistent hashing vs order-preserving partitioning in distributed databases

The Cassandra distributed database now supports two partitioning schemes: traditional consistent hashing, and an order-preserving partitioner.

The reason that almost all similar systems use consistent hashing (the Dynamo paper has the best description; see sections 4.1-4.3) is that it provides a kind of brain-dead load balancing for free: the hash function spreads keys across the ring more or less evenly. But the Dynamo authors go into some detail about how this by itself doesn't actually give good results in practice; their solution was to assign multiple tokens to each node in the cluster, and they describe several approaches to doing that. Cassandra's original designer considers this a hack and prefers real load balancing.
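If you haven't run into consistent hashing before, here's a minimal sketch of the idea -- the single-token-per-node ring and the MD5-based hash below are illustrative, not Cassandra's actual implementation: each node owns a token on a ring, and a key lives on the first node whose token is at or after the key's hash, wrapping around at the end.

    # Minimal consistent-hashing sketch: nodes own tokens on a ring, and a key
    # goes to the first node at or after its hash. Illustrative only; the hash
    # function and token assignment are not Cassandra's actual ones.

    import bisect
    import hashlib

    class Ring:
        def __init__(self, node_tokens):
            # node_tokens: {node_name: token}; one token per node for simplicity
            self.tokens = sorted(node_tokens.values())
            self.node_for_token = {t: n for n, t in node_tokens.items()}

        def _hash(self, key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16) % 2**32

        def node_for_key(self, key):
            h = self._hash(key)
            i = bisect.bisect_left(self.tokens, h) % len(self.tokens)  # wrap around
            return self.node_for_token[self.tokens[i]]

    ring = Ring({"node1": 0, "node2": 2**32 // 3, "node3": 2 * 2**32 // 3})
    print(ring.node_for_key("some row key"))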

An order-preserving partitioner, where keys are distributed to nodes in their natural order, has huge advantages over consistent hashing, particularly the ability to do range queries across the keys in the system (which has also been committed to Cassandra now). This matters because the corollary of "the partitioner uses the key to determine which node the data is on" is "each key should only have an amount of data associated with it (see the data model explanation) that is small relative to a node's capacity." Cassandra column families will often have more columns in them than you'd see in a traditional database, but "millions" is pushing it (depending on column size) and "billions" is a bad idea. So you'll want to model things such that you spread data across multiple keys; if you then pick an appropriate key naming convention, range queries will let you slice and dice that data as needed.
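As a hypothetical example of such a key naming convention (the key format and the get_key_range call mentioned below are made up for illustration): if you log events under composite, zero-padded keys like user_id:timestamp, an order-preserving partitioner turns "one user's events for one hour" into a simple key range.

    # Hypothetical key naming convention for use with an order-preserving
    # partitioner: zero-padded, composite keys sort in a useful order, so a
    # range query over keys doubles as a "slice" of one user's data.

    def event_key(user_id, timestamp_ms):
        # Zero-pad both parts so lexicographic order matches numeric order.
        return "%010d:%013d" % (user_id, timestamp_ms)

    # A range query for user 42's events during one hour becomes a key range:
    start_key = event_key(42, 1243400000000)
    end_key   = event_key(42, 1243403600000)
    # e.g. a key-range query from start_key to end_key against an
    # order-preserving cluster returns only the keys in that window;
    # the call name you'd use depends on your client and is not shown here.
    print(start_key, "...", end_key)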

Cassandra is still in the process of implementing load balancing, but in the meantime order-preserving partitioning is still useful without it if you know in advance what your key distribution will look like and can pick your node tokens accordingly. Otherwise, there's always the old-school hash-based partitioner until we get load balancing done (planned for the release after the one due in the next week or so).
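For instance, one way to pick tokens from a known distribution -- a sketch that assumes you can sample representative keys ahead of time -- is to take evenly spaced quantiles of a sorted sample, so each node starts out owning roughly the same number of keys:

    # Sketch: choosing node tokens for an order-preserving partitioner from a
    # sample of the expected keys, so each node owns roughly the same share.
    # Assumes you can sample representative keys ahead of time.

    def tokens_from_sample(sample_keys, num_nodes):
        ordered = sorted(sample_keys)
        step = len(ordered) // num_nodes
        # Each node's token is the key at the start of its slice of the sample.
        return [ordered[i * step] for i in range(num_nodes)]

    sample = ["apple", "banana", "cherry", "grape", "kiwi",
              "lemon", "mango", "peach", "pear", "plum", "quince", "zucchini"]
    print(tokens_from_sample(sample, 3))   # -> ['apple', 'kiwi', 'pear']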

See the introduction and getting started pages of the Cassandra wiki for more on Cassandra, and drop us a line on the mailing list or in IRC if you have questions; we're actively trying to improve our docs.