Tuesday, December 15, 2009

Cassandra reading list

I put together this list for a co-worker who wants to learn more about Cassandra: (0.5 beta 2 is out now!) If you want to know more about the internals, also see these:

Wednesday, July 29, 2009

Cassandra hackfest and OSCON report

The best part of OSCON for me wasn't actually part of OSCON. The guys at Twitter put together a Cassandra hackfest on Wednesday night, with much awesomeness resulting. Thanks to Evan for organizing!

Stu Hood flew up from Rackspace's Virginia offices just for the night, which normally probably wouldn't have been worth it, but Cliff Moon, author of dynomite, showed up (thanks, Cliff!) and was able to give Stu a lot of pointers on implementing merkle trees. Cliff and I also had a good discussion with Jun Rao about hinted handoff--Cliff and Jun are not fans, and I tend to agree with them--and eventual consistency.

I also met David Pollack and got to talk a little about persistence for Goat Rodeo, and talked to a ton of people from Twitter and Digg. I think those two, with Rackspace and IBM Research, constituted the companies with more than one engineer attending. The rest was "long tail."

Back at OSCON, my Cassandra talk was standing room only. Slides:

My second talk is the one I would have preferred to give first, on "What Every Developer Should Know About Database Scalability". (I would have preferred to give it first so that I could have just said "come to my Cassandra talk for more details" instead of trying to cram that in at the end. But, it was in my proposal outline!) Slides:

Other OSCON talks I liked (that have slides available):

Monday, July 06, 2009

Cassandra 0.3 update

Two months after the first release candidate, Cassandra 0.3 is still not out. But, we're close!

We had two more bug-fix release candidates, and it's virtually certain that 0.3-final will be the same exact code as 0.3-rc3. (If you're using rc1, you do want to upgrade; see CHANGES.txt.) But, we got stuck in the ASF bureaucracy and it's going to take at least one more round-trip before the crack Release Prevention Team grudgingly lets us call it official.

In the meantime, work continues apace on trunk for 0.4.

Tuesday, June 23, 2009

Patch-oriented development made sane with git-svn

One of the drawbacks to working on Cassandra is that unlike every other OSS project I have ever worked on, we are using a patch-oriented development process rather than post-commit review. It's really quite painfully slow. Somehow this became sort of the default for ASF projects, but there is precedent for switching to post-commit review eventually.

In the meantime, there is git-svn.

(The ASF does have a git mirror set up, but I'm going to ignore that because (a) its reliability has been questionable and (b) sticking with git-svn hopefully makes this more useful for non-ASF projects.)

Disclaimer: I am not a git expert, and probably some of this will make you cringe if you are. Still, I hope it will be useful for some others fumbling their way towards enlightenment. As background, I suggest the git crash course for svn users. Just the parts up to the Remote section.

Checkout:

  1. git-svn init https://svn.apache.org/repos/asf/cassandra/trunk cassandra
  2. git-svn fetch (init only sets up the metadata; fetch pulls down the history. git-svn clone does both in one step.)
Once that's done, the only git-svn commands you need to know about are dcommit, to push the changes in the current git branch back to svn, and rebase, to pull changes from svn and re-apply your not-yet-dcommitted local commits on top of them (basically exactly like svn up).

Creating new code:

  1. git checkout -b [ticket number]
  2. [edit stuff, maybe git add or git rm new or obsolete files]
  3. git commit -a -m 'commit'
  4. repeat 2-3 as necessary
  5. git-jira-attacher [revision] (usually some variant of HEAD^^^)
[after review]
  1. git log (just to make sure I'm about to commit what I think I'm about to commit)
  2. git-svn dcommit
  3. git checkout master
  4. git-svn rebase -l (this will put the changes you just committed into master)
  5. git branch -d [ticket number]
When I'm reviewing code it looks similar:
  1. git checkout -b [ticket number]
  2. wget patches and git-apply, or jira-apply CASSANDRA-[ticket-number]
  3. review in gitk/qgit and/or IDE (the intellij git plugin is quite decent)
  4. commit .. branch -d as above
The last operation is "see who I need to bug to get reviews moving." This is just a list of the branches I haven't merged into master and deleted yet:
  1. git branch
Git-svn takes a lot of the pain out of the ASF's patch-and-jira workflow. In particular, you can easily break changes for a ticket up into multiple patches that are easily reviewed, and the latency of waiting for patch review doesn't kill your throughput so badly since you can just leave that branch alone and start a new one for your next piece of functionality. And of course you get git commit --amend and git rebase -i for massaging patches during the review process.

One fairly common complication is if you finish a ticket A, then start on ticket B (that depends on A) while waiting for A to be reviewed. So you checkout -b from your branch A rather than master and build some patches on that. As sometimes happens, the reviewer finds something you need to improve in your patch set for A, so you make those changes. Now you need to rebase your patches to B on top of the changes you made to A. The best way to do this is to branch A to B-2, then git cherry-pick from B and resolve conflicts as necessary.
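
Concretely, the dance looks something like this (the ticket branch names are just illustrative):
  1. git checkout -b B-2 A (start the new branch from A, which now includes the review fixes)
  2. git cherry-pick [each commit from B, oldest first]
  3. resolve any conflicts as they come up, then continue with the next commit
  4. git branch -D B once B-2 has replaced it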

Final note: I often like to create lots of small commits as I am exploring a solution and combine them into larger units with git rebase -i for patch submission. (It's easier to combine small patches than to pull apart large ones.) So my early commit messages are often terse and need editing. You can change commit messages with edit mode in rebase, then commit --amend and rebase --continue, but that is tedious. I complained about this to my friend Zach Wily and he made this git amend-message command (place it in [alias] in your .gitconfig):

   amend-message = "!bash -c ' \
       c=$0; \
       if [ $c == \"bash\" ]; then echo \"Usage: git amend-message <commit>\"; exit 1; fi; \
       saved_head=$(git rev-parse HEAD); \
       commit=$(git rev-parse $c); \
       commits=$(git log --reverse --pretty=format:%H $commit..HEAD); \
       echo \"Rewinding to $commit...\"; \
       git reset --hard $commit; \
       git commit --amend; \
       for X in $commits; do \
           echo \"Applying $X...\"; \
           git cherry-pick $X >> /dev/null; \
           if [ $? -ne 0 ]; then \
               echo \"  apply failed (is this a merge?), rolling back all changes\"; \
               git reset --hard $saved_head; \
               echo \" ** AMEND-MESSAGE FAILED, sorry\"; \
               exit 1; \
           fi; \
       done; \
       echo \"Done\"'"
(Zach would like the record to show that he knows this is pretty hacky. "For instance, it won't work if one of the commits after the one you're changing is a merge, since cherry-pick can't handle those." But it's quite useful, all the same.)

For what it's worth, the rest of my aliases are:

 st = status
 ci = commit
 co = checkout
 br = branch
 cp = cherry-pick

Wednesday, May 27, 2009

Why you won't be building your killer app on a distributed hash table

I ran across A case study in building layered DHT applications while doing some research on implementing load-balancing in Cassandra. The question addressed is, "Are DHTs a general-purpose tool that you can build more sophisticated services on?"

Short version: no. A few specialized applications can and have been built on a plain DHT, but most applications built on DHTs have ended up having to customize the DHT's internals to achieve their functional or performance goals.

This paper describes the results of attempting to build a relatively complex data structure (prefix hash trees, for range queries) on top of OpenDHT. The result was mostly failure:

A simple put-get interface was not quite enough. In particular, OpenDHT relies on timeouts to invalidate entries and has no support for atomicity primitives... In return for ease of implementation and deployment, we sacrificed performance. With the OpenDHT implementation, a PHT query operation took a median of 2–4 seconds. This is due to the fact that layering entirely on top of a DHT service inherently implies that applications must perform a sequence of put-get operations to implement higher level semantics with limited opportunity for optimization within the DHT.
In other words, there are two primary problems with the DHT approach:
  • Most DHTs will require a second locking layer to achieve correctness when implementing a more complex data structure on top of the DHT semantics (see the sketch after this list). In particular, this will certainly apply to eventually-consistent systems in the Dynamo mold.
  • Advanced functionality like range queries needs to be supported natively to be at all efficient.
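To make the atomicity problem concrete, here is a toy sketch in Python (a dict standing in for the DHT; the put/get names are illustrative, not OpenDHT's actual API) of the read-modify-write race you get with nothing but put and get:

    # A toy "DHT" exposing only put/get, to show why building a richer
    # structure on top of it needs a separate locking layer.
    dht = {}

    def put(key, value):
        dht[key] = value

    def get(key):
        return dht.get(key)

    put("node", ["x"])
    a = get("node")         # client A reads ["x"]
    b = get("node")         # client B reads ["x"] concurrently
    put("node", a + ["a"])  # A writes ["x", "a"]
    put("node", b + ["b"])  # B writes ["x", "b"] -- A's update is silently lost
    print get("node")       # prints ["x", "b"]
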
While they spin this in a positive manner -- "hey, at least it didn't take much code" -- the reality is that for most of us, query latency of two to four seconds is several orders of magnitude away from acceptable.

This is one reason why I think Cassandra is the most promising of the open-source distributed databases -- you get a relatively rich data model and a distribution model that supports efficient range queries. These are not things that can be grafted on top of a simpler DHT foundation, so Cassandra will be useful for a wider variety of applications.

Monday, May 18, 2009

Belated 2009 Introduction to SQLAlchemy slides

I was asked to put my slides up again -- sorry it took so long. The slides and code samples are now up here. Video of the tutorial is also up (3 parts; the first is linked). There are definitely audio problems in parts, but at least some of it is watchable.

Wednesday, May 13, 2009

Cassandra 0.3 release candidate and progress

We have a release candidate out for Cassandra 0.3. Grab the download and check out how to get started. The facebook presentation from almost a year ago now is also still a good intro to some of the features and data model.

Cassandra in a nutshell:

  • Scales writes very, very well: just add more nodes!
  • Has a much richer data model than vanilla key/value stores -- closer to what you'd be used to in a relational db.
  • Is pretty bleeding edge -- to my knowledge, Facebook is the only group running Cassandra in production. (Their largest cluster is 120 machines and 40TB of data.) At Rackspace we are working on a Cassandra-based app now that 0.3 has the extra features we need.
  • Moved to the Apache Incubator about 40 days ago, at which point development greatly accelerated.
Changes in 0.3 include
  • Range queries on keys, including user-defined key collation.
  • Remove support, which is nontrivial in an eventually consistent world.
  • Workaround for a weird bug in JDK select/register that seems particularly common on VM environments. Cassandra should deploy fine on EC2 now. (Oddly, it never had problems on Slicehost / Cloud Servers, which is also Xen-based.)
  • Much improved infrastructure: the beginnings of a decent test suite ("ant test" for unit tests; "nosetests" for system tests), code coverage reporting, etc.
  • Expanded node status reporting via JMX
  • Improved error reporting/logging on both server and client
  • Reduced memory footprint in default configuration
  • and plenty of bug fixes.
For those of you just joining us, Cassandra already had
  • An advanced on-disk storage engine that never does random writes
  • Transaction log-based data integrity
  • P2P gossip failure detection
  • Read repair
  • Hinted handoff
  • Bootstrap (adding new nodes to a running cluster)
(Read repair and hinted handoff are discussed in more detail in the Dynamo paper.)

The Cassandra development and user community is also growing at an exciting pace. Besides the original two developers from Facebook, we now have five developers regularly contributing improvements and fixes, and many others on a more ad-hoc basis.

How fast is it?

In a nutshell, Cassandra is much faster than relational databases, and much slower than memory-only systems or systems that don't sync each update to disk. Actual benchmarks are in the works. We plan to start performance tuning with the next release, but if you want to benchmark it now, here are some suggestions to get numbers closer to what you'll see in the wild (and roughly 10x the throughput you'd get without them):

  • Do enough runs of your benchmark first that each operation tested by your suite runs 20k times before timing it for real. This allows the JVM JIT to compile the hot paths down to machine code; otherwise you'll just be measuring the interpreted version.
  • Change the root logger level in conf/log4j.properties from DEBUG to INFO; we do a LOT of logging for debuggability, and for small column values the logging has more overhead than the actual workload. (It would be even faster to remove the log statements entirely, but that didn't make this release.)

Friday, May 01, 2009

A better analysis of Cassandra than most

Vladimir Sedach wrote a three-part dive into Cassandra. (Almost two months ago now. Guess I need to set up Google Alerts. Trouble is, there's a surprising amount of noise around the word `cassandra`.) A few notes:
  • We now have an order-preserving partitioner as well as the hash-based one
  • Yes, if you tell Cassandra to wait for all replicas to be ack'd before calling a write a success, then you would have traditional consistency (as opposed to "eventual"), but you'd also have no tolerance for hardware failures, which is a main point of this kind of system.
  • Zookeeper is not currently used by Cassandra, although we have plans to use it in the future.
  • Load balancing is not implemented yet.
  • The move to Apache is finished and development is active there now.

Consistent hashing vs order-preserving partitioning in distributed databases

The Cassandra distributed database supports two partitioning schemes now: the traditional consistent hashing scheme, and an order-preserving partitioner.

The reason that almost all similar systems use consistent hashing (the Dynamo paper has the best description; see sections 4.1-4.3) is that it provides a kind of brain-dead load balancing, courtesy of the hash algorithm spreading keys across the ring. But the Dynamo authors go into some detail about how this by itself doesn't actually give good results in practice; their solution was to assign multiple tokens to each node in the cluster, and they describe several approaches to that. But Cassandra's original designer considers this a hack and prefers real load balancing.
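
For reference, the mechanics are simple; here is a toy ring in Python (md5 standing in for the real token function, one token per node, replication ignored):

    import bisect, hashlib

    # Toy consistent-hash ring: a key is routed to the first node whose
    # token is >= the key's hash, wrapping around past the largest token.
    def token(s):
        return int(hashlib.md5(s).hexdigest(), 16)

    nodes = ["node1", "node2", "node3"]
    ring = sorted((token(n), n) for n in nodes)
    tokens = [t for t, n in ring]

    def node_for(key):
        i = bisect.bisect(tokens, token(key)) % len(ring)
        return ring[i][1]

    print node_for("some key")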

An order-preserving partitioner, where keys are distributed to nodes in their natural order, has huge advantages over consistent hashing, particularly the ability to do range queries across the keys in the system (which has also been committed to Cassandra now). This is important, because the corollary of "the partitioner uses the key to determine what node the data is on" is, "each key should only have an amount of data associated with it (see the data model explanation) that is relatively small compared to a node's capacity." Cassandra column families will often have more columns in them than you'd see in a traditional database, but "millions" is pushing it (depending on column size) and "billions" is a bad idea. So you'll want to model things such that you spread data across multiple keys and if you then pick an appropriate key naming convention, range queries will let you slice and dice that data as needed.
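
As a concrete illustration (this naming scheme is hypothetical, not something Cassandra mandates), you might give each user one key per day:

    from datetime import date

    # Hypothetical convention: one key per user per day, so no single key
    # accumulates millions of columns.
    def event_key(user_id, day):
        return "%s:%s" % (user_id, day.isoformat())

    print event_key("123", date(2009, 5, 1))  # "123:2009-05-01"

With the order-preserving partitioner, keys "123:2009-05-01" through "123:2009-05-31" sort contiguously, so a single range query covers the month.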

Cassandra is still in the process of implementing load balancing, but in the meantime order-preserving partitioning is still useful without it if you know what your key distribution will look like in advance and can pick your node tokens accordingly. Otherwise, there's always the old-school hash-based partitioner until we get that done (planned for the release after the one we'll have in the next week or so).

See the introduction and getting started pages of the Cassandra wiki for more on Cassandra, and drop us a line on the mailing list or in IRC if you have questions; we're actively trying to improve our docs.

Thursday, April 30, 2009

Automatic project structure inference

David MacIver has an interesting blog entry up about determining logical project structure via commit logs. I was very interested because one of Cassandra's oldest issues is creating categories for our JIRA instance. (I've never been a big fan of JIRA, but you work with the tools you have. Or the ones the ASF inflicts on you, in this case.)

The desire to add extra work to issue reporting for a young project like Cassandra strikes me as slightly misguided in the first place. I have what may be an excessive aversion to overengineering, and I like to see a very clear benefit before adding complexity to anything, even an issue tracker. Still, I was curious to see what David's clustering algorithm made of things. And after pestering him to show me how to run his code I figure I owe it to him to show my results.

In general it did a pretty good job, particularly with the mid-sized groups of files. The large groups are just noise; the small groups, well, it's not exactly a revelation that Filter and FilterTest go together. I'd be tempted to play with it more but with only about two months and 250 commits in the apache repo there's not really all that much data there. (Cassandra's first two years were in an internal Facebook repository.) Working with data that exists as a side effect of natural activity is fascinating.

Saturday, April 11, 2009

The best PyCon talk you didn't see

There were a lot of good talks at PyCon but I humbly submit that the best one you haven't seen yet is Robert Brewer's talk on DejaVu. Robert describes how his Geniusql layer disassembles and parses python bytecode to let his ORM turn python lambdas into SQL. Microsoft got a lot of press for doing something similar for .NET with LINQ, but Bob was there first.
  box = store.new_sandbox()
  print [c.Title for c in box.recall(
      Comic, lambda c: 'Hob' in c.Title or c.Views > 0)]
This is cool as hell. The Geniusql part starts about 15 minutes in.

Tuesday, April 07, 2009

Credit where credit is due

I'm starting to conclude that git just doesn't fit my brain. Several months in, I'm still confused when things don't work the way they "should." My co-worker says I should start a wiki for weird-ass things to do with git: "You keep coming up with use cases that would never occur to me." But, I have to give the git community credit: I've never gone in to #git on freenode and gotten less than fantastic help. Even with git-svn.

Saturday, March 28, 2009

Distributed Databases and Cassandra at PyCon

I'll be leading an open-spaces discussion about distributed database architecture, implementation, and use today at 5:00 PM in the Lambert room. Specifically, we will cover Bigtable, Dynamo, and Cassandra, and how to port a typical relational schema to Cassandra's ColumnFamily model.

I wrote a little background information yesterday about why I think Cassandra in particular is compelling.

Friday, March 27, 2009

Why I like the Cassandra distributed database

I need a distributed database. A real distributed database; replication doesn't count because under a replication-oriented db, each node still needs to be able to handle the full write volume, and you can only throw hardware at that for so long.

So, I'm working on the Cassandra distributed database. I gave a lightning talk on it at PyCon this morning. Cassandra is written in Java and implements a sort of hybrid between Dynamo and Bigtable. (Both papers are worth reading.) It takes its distribution algorithm from Dynamo and its data model from Bigtable -- sort of the best of both worlds. Avinash Lakshman, Cassandra's architect, is one of the authors of the Dynamo paper.

There is a video about Cassandra here. The first 1/4 is about using Cassandra and then the rest is mostly about the internals.

Cassandra is very bleeding edge. Facebook runs several Cassandra clusters in production (the largest is 120 machines and 40TB of data), but there are sharp edges that will cut you. If you want something that Just Works out of the box Cassandra is a poor fit right now and will be for several months.

Cassandra was open-sourced by Facebook last summer. There was some initial buzz, but the Facebook developers had trouble dealing with the community and the project looked moribund -- FB was doing development against an internal repository and throwing code over the wall every few months, which is no way to run an OSS project. Now the code is being developed in the open at Apache, and I was voted in as a committer, so things are starting to move again.

There are other distributed databases that are worth considering. Here is why those don't fit my needs (and I am not saying that these are bad choices if your requirements are different, just that they don't work for me):

HBase:

  • Follows the bigtable model, so it's more complicated than it needs to be. (300+kloc vs 50 for Cassandra; many more components). This means it's that much harder for me to troubleshoot. HBase is more bug-free than Cassandra but not so bug-free that troubleshooting would not be required.
  • Does not have any non-java clients. I need CPython support.
  • Sits on top of HDFS, which is optimized for streaming reads, not random accesses. So HBase is fine for batch processing but not so good for online apps.
Hypertable:
  • See HBase, except it's written in C++.
Project Voldemort:
  • Voldemort is probably the key / value store in the pattern of Dynamo that is farthest along. If you only need key / value I would definitely recommend Voldemort. Next closest is probably Dynomite. Then there are a whole bunch of "me too" key value stores that made fatal architecture decisions or are writing a "memcached with persistence" without really thinking it through.
Running Cassandra [updated 04-22-09 for hacker news visitors]:
  1. prereqs: jdk6 and ant
  2. Check out the code from http://svn.apache.org/repos/asf/incubator/cassandra/trunk (did I mention this was early-adopter only?)
  3. run ant [optional: ant test]. If you get an error like "class file has wrong version 50.0, should be 49.0" then ant is using an old jdk version instead of 6.
  4. For non-java clients, install Thrift. (For java, trunk includes libthrift.jar.) This is a major undertaking in its own right. See this page for a list of dependencies, although most of the rest of that page is now outdated -- for instance, Cassandra no longer depends on the fb303 interface. Python users will have to hand-edit the generated Cassandra.py in three obvious places until this bug is fixed -- just replace the broken argument with None.
  5. run bin/cassandra [optionally -f for foreground]
  6. Connect to the server. By default it listens on localhost:9160. Look at config/server-conf.xml for the columnfamily definitions.
  7. Insert and query some data. Here is an introduction.
  8. Ask on the mailing list or #cassandra on freenode if you have questions.
(Cassandra has a new website up to replace the google code one. We're actively working on the docs, so let us know what needs work.) Good luck!

Monday, February 16, 2009

Impressed by KDE 4.2

I'm running Linux on my desktop at work after a year of OS X, and Gnome as shipped by Ubuntu 8.10 has just been a world of hurt. The panel looks and works like ass when moved to the left side of the screen (the only sane place to put it in today's world of widescreen monitors), network-manager just decided to quit working one day (I got by with wicd after that), alt-tab behavior sucks both ways you can configure it, etc.

I installed KDE 4.2 over the weekend to see if I was missing anything there.

Wow.

It's like daylight after being in a cave for two months. I didn't realize how hard it has been to use a butt-ugly environment until I wasn't anymore. (Yes, I tried all the gnome themes I could find. Even Nimbus which took a bit of work. What's that recently-famous phrase? "Lipstick on a pig?")

What is better in KDE? In a word, everything. And put me in the camp that really likes having the desktop turned into a usable area for the first time. Like apple's dashboard, except it doesn't suck. I always hated dashboard.

Things that could be improved:

  • Never in a thousand years would I have thought to look under "Regional & Language" for the preference to turn caps lock into control. I had to google this.
  • I'm still not sure how to set F9 to Present Windows. Or how to bind a keystroke to the K menu as a poor man's quicksilver.
  • More generally, a "Welcome to kde. Let me teach you how to be a power user" tutorial would be nice. I have the feeling there is lots of awesome under the hood if I knew where it was. I never got that feeling from gnome. ("Beauty is only skin deep, but ugly goes right to the bone.")
  • Firefox UI widgets are imperfectly themed from XUL to GTK to KDE. But it is usable. (And having my second monitor redraw correctly instead of leaving artifacts when windows are moved makes up for that.) Is this KDE's fault? Firefox's? I don't know.
  • Konqueror is still using KHTML instead of webkit which means it is mostly unusable in the world of "web 2.0." Yes, you can install webkitkde but that is Very Alpha. ("Open in new window" doesn't work, for instance. "Open in new tab" is gone entirely.)
  • I couldn't find an option to just use icons in the task manager widget.

Saturday, February 07, 2009

SQLAlchemy at PyCon 2009

I will be giving an Introduction to SQLAlchemy tutorial and Mike Bayer and Jason Kirtland will be teaching Advanced SQLAlchemy, both on Thursday. I'll be covering similar material as last year, updated for 0.5. I'm also trying to see if I can get the emails of the registrants so far to see what else they would like covered. My tutorial style is exercise-heavy, so if you've read the docs or my slides but still find it hard to write SQLA code, coming to the tutorial is a great way to fix that. (Note: the blog link to the 2008 slides is broken since we moved utahpython.org. If you want them, drop me a note.)

Wednesday, January 28, 2009

All you ever wanted to know about writing bloom filters

It seems like Bloom filters are all the rage these days. Three years ago I had barely heard of them and now it seems like I see articles and code using them all the time. That's mostly a good thing, since bloom filters are a very useful tool to avoid performing expensive computations without the full memory overhead of a standard map/dictionary.
Bloom filters are surprisingly simple: divide a memory area into buckets (one bit per bucket for a standard bloom filter; more -- typically four -- for a counting bloom filter). To insert a key, generate several hashes per key, and mark the buckets for each hash. To check if a key is present, check each bucket; if any bucket is empty, the key was never inserted in the filter. If all buckets are non-empty, though, the key is only probably inserted -- other keys' hashes could have covered the same buckets. Determining exactly how big to make the filter and how many hashes to use to achieve a given false positive rate is a solved problem; the math is out there.
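A minimal (non-counting) version in Python looks something like this, glossing over the choice of hash function for the moment:

    import hashlib

    class BloomFilter(object):
        def __init__(self, num_buckets, num_hashes):
            self.num_buckets = num_buckets
            self.num_hashes = num_hashes
            self.bits = [0] * num_buckets  # one bit per bucket

        def _buckets(self, key):
            # SHA-1 with a counter prefix, purely for illustration; choosing
            # these hashes well is the subject of the rest of this post.
            for i in range(self.num_hashes):
                h = hashlib.sha1("%d:%s" % (i, key)).hexdigest()
                yield int(h, 16) % self.num_buckets

        def add(self, key):
            for b in self._buckets(key):
                self.bits[b] = 1

        def __contains__(self, key):
            # False: definitely never added. True: probably added.
            return all(self.bits[b] for b in self._buckets(key))
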
But it turns out that it's surprisingly hard to find good information on one part of the implementation: how do you generate an indefinite number of hashes? Even small filters will use three or four; a dozen or more is not unheard of.
Cassandra uses bloom filters to save IO when performing a key lookup: each SSTable has a bloom filter associated with it that Cassandra checks before doing any disk seeks, making queries for keys that don't exist almost free. Unfortunately, Cassandra was using perhaps the worst possible implementation of hash functions: a hardcoded list of hash functions, apparently from this page. A lot of those hashes are reimplementations of the same algorithm! For six or fewer hash functions, Cassandra was only actually generating two distinct hash values, so the false positive rate was far, far higher than expected.
Judging from the bloom filter implementations out there, generating appropriate hashes is surprisingly hard to get right. One implementation in java uses object.hashCode() to seed the stdlib's PRNG and calls Random.nextInt() to generate the bloom hashes. This works okay for small filters but the false positive rate is up to 140% of the expected rate for large filters. This one in python combines the stdlib hash and a pjw hash with simple arithmetic to achieve "only" an extra 10-15% false positives.
It turns out that most off-the-shelf algorithms suck for bloom filter purposes because their output isn't distributed well enough across the (32-bit) hash space; most hash function authors are more concerned with speed than with achieving a really good output distribution. For a lot of applications this is a good tradeoff, but you are presumably using a bloom filter because a filter hit requires a relatively very expensive operation, so double-digit increases in the false positive rate are Very Bad. (So I would be skeptical of any home-made hash function generator too, like PyBloom's.)
There are two approaches that I found generate results in line with the theoretical predictions. One is to use a cryptographic hash function like SHA-1. Cryptography's requirements may be overkill for a bloom filter, but crypto hashes do spread their output uniformly across the hash space. So SHA-1 works fine, and you can feed the hash a constant or part of the earlier-generated hash to generate extra data as needed so you're not limited to 5 hashes. (Although Java makes this a pain by having all the digest methods also reset the hash to uninitialized. Explicit is better than implicit!)
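Here is a sketch of that stretching trick (illustrative, not Cassandra's actual code):

    import hashlib

    def sha_buckets(key, k, num_buckets):
        # When SHA-1's 160 bits run out, feed the previous digest back in
        # along with the key to generate more pseudo-random data.
        buckets, digest = [], ""
        while len(buckets) < k:
            digest = hashlib.sha1(digest + key).digest()
            for j in range(0, 20, 4):  # carve the digest into 4-byte ints
                n = int(digest[j:j+4].encode('hex'), 16)
                buckets.append(n % num_buckets)
        return buckets[:k]
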
The other is the Murmur and Jenkins hashes. Both of these generate very well-distributed hashes; Murmur is about 10% faster than Jenkins. (Murmur3 is faster still.) You can generate extra hashes either by using the output of hash#i as the initial value for hash#i+1, which is the approach used by Hadoop, or by computing hash#i = hash0 + i*hash1, which is described by Kirsch and Mitzenmacher and is the approach Cassandra settled on (with hash0 and hash1 being the two halves of a 128-bit Murmur3 hash). The Hadoop approach is about twice as fast as the SHA approach for 5 hashes; Cassandra's is about 5x faster. (Remember, SHA will give you 160 bits of pseudo-randomness at a time, so Murmur will be faster still depending on how far you are from a multiple of that.)
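The Kirsch/Mitzenmacher version is just a few lines (seeded here from the two halves of an MD5 digest for illustration; as noted, Cassandra derives hash0 and hash1 from a 128-bit Murmur3 hash):

    import hashlib

    def km_buckets(key, k, num_buckets):
        # hash#i = hash0 + i*hash1, per Kirsch and Mitzenmacher
        digest = hashlib.md5(key).hexdigest()
        hash0 = int(digest[:16], 16)
        hash1 = int(digest[16:], 16)
        return [(hash0 + i * hash1) % num_buckets for i in range(k)]

    print km_buckets("some key", 5, 1 << 20)
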
In short: use the murmur hash in your bloom filter. If you're using java, you can grab Cassandra's implementation from my github tree until our new repository is up at the apache incubation site.  Unfortunately it's spread across half a dozen files, without even counting the tests.  Start with [Counting]BloomFilter and go up from there.  Your other option is Hadoop's (if you grab it from svn, because that has my fix included), which is even less self-contained and requires wrapping your keys in a Key object.  Sorry.
Bonus tip: if you do roll your own, you should probably port Cassandra's test suite to your language of choice.  In particular, nobody else bothers to check that false positive rates are within what is predicted by the math, but if you don't, it's easy for bugs that pass simple tests to slip by, like the one in Cassandra's older implementation, or the one I found in Hadoop's.
(Updated 2014 to link the Kirsch/Mitzenmacher paper in case people don't read the comments.)