
CouchDB: not drinking the kool-aid

This is my attempt to clear up some misconceptions about CouchDB and point out some technical details that a lot of people seem to have overlooked.  For the record, I like Damien Katz's blog, he seems like a great programmer, and Erlang looks cool.  Please don't hurt me.
First, and most important: CouchDB is not a distributed database.  BigTable is a distributed database.  Cassandra and dynomite are distributed databases.  (And open source, and based on a better design than BigTable.  More on this in another post.)  It's true that with CouchDB you can "shard" data out to different instances just like you can with MySQL or PostgreSQL.  That's not what people think when they see "distributed database." It's also true that CouchDB has good replication, but even multi-master replication isn't the same as a distributed database: you're still limited to the write throughput of the slowest machine.
Here are some reasons you should think twice and do careful testing before using CouchDB in a non-toy project:
  • Writes are serialized.  Not serialized as in the isolation level, serialized as in there can only be one write active at a time.  Want to spread writes across multiple disks?  Sorry.
  • CouchDB uses an MVCC model, which means that updates and deletes must be compacted before their space is made available to new writes.  Just like PostgreSQL, only without the man-years of effort to make vacuum hurt less.
  • CouchDB is simple.  Gloriously simple.  Why is that a negative?  It's competing with systems (in the popular imagination, if not in its author's mind) that have been maturing for years.  The reason PostgreSQL et al have those features is because people want them.  And if you don't, you should at least ask a DBA with a few years of non-MySQL experience what you'll be missing.  The majority of CouchDB fans don't appear to really understand what a good relational database gives them, just as a lot of PHP programmers don't get what the big deal is with namespaces.
  • A special case of simplicity deserves mention: nontrivial queries must be created as a view with MapReduce.  MapReduce is a great approach to trivially parallelizing certain classes of problem.  The problem is, it's tedious and error-prone to write raw MapReduce code.  This is why Google and Yahoo have both created high-level languages on top of it (Sawzall and Pig, respectively).  Poor SQL; even with DSLs being the new hotness, people forget that SQL is one of the original domain-specific languages.  It's a little verbose, and you might be bored with it, but it's much better than writing low-level MapReduce code.
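To make the verbosity gap concrete, here is a sketch (in Python, not CouchDB's actual JavaScript view API) of what "count documents per tag" costs in hand-written map/reduce, compared to the one-line SQL it replaces; `map_fn`, `reduce_fn`, and `run_view` are illustrative names, not anything CouchDB provides:

```python
# The SQL equivalent is a single line:
#   SELECT tag, COUNT(*) FROM docs GROUP BY tag;
from collections import defaultdict

def map_fn(doc):
    # Emit one (key, value) pair per tag, as a CouchDB map function would.
    for tag in doc.get("tags", []):
        yield (tag, 1)

def reduce_fn(values):
    # Combine all values emitted under the same key.
    return sum(values)

def run_view(docs):
    # Group emitted pairs by key, then reduce each group.
    groups = defaultdict(list)
    for doc in docs:
        for key, value in map_fn(doc):
            groups[key].append(value)
    return {key: reduce_fn(vals) for key, vals in groups.items()}

docs = [{"tags": ["db", "erlang"]}, {"tags": ["db"]}, {}]
print(run_view(docs))  # {'db': 2, 'erlang': 1}
```

Three functions and a grouping step to express what SQL says in one clause; that is the tedium the Sawzall and Pig layers exist to hide.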


Zach said…
Amen. The CouchDB website starts off "Apache CouchDB is a distributed, fault-tolerant and schema-free document-oriented database accessible via a RESTful HTTP/JSON API." It took me a couple weeks before I realized that their definition of "distributed" was not my definition of "distributed".
Anonymous said…
I think that all your points are valid.

However, your premise is that somehow it will replace current relational database technologies. I don't think that's its goal (in fact, in the "Introduction to CouchDB" on its own website, it states that it's not "a replacement for relational databases").

I think that comparing CouchDB to PostgreSQL is just comparing two different things.

If your premise is that other database technologies are better for storing logical documents in a distributed way, then I'd be really interested to see a post on that.
Jonathan Ellis said…
Eric, yes, that's why I said it's competing with those systems "in the popular imagination, if not in its author's mind." The official position has always been what you say, but a lot of the blog activity hypes it as more than that.
J Chris A said…
CouchDB is no replacement for a relational database. However, I think a lot of the excitement over it is driven by developers for whom the relational model is overkill. DHH was mocked for calling his database "a hash in the sky" but the reality is, that's what a lot of applications really need.

As far as scalability is concerned, CouchDB currently supports multi-master replication, as well as offline or disconnected writes. Availability is favored over consistency, so application developers will have to take seriously the chance that two different users could update the same document within a short time frame on separate nodes.

However, as Couch progresses, we're working to add cluster-wide transactions (as well as integrated partitioning) so in the long term, Couch should be able to scale with your data.
Anonymous said…
If I'm correct, CouchDB has no way to synchronize accesses across multiple keys (no transactions or locks).

In my mind, that makes it totally useless for just about anything interesting. Why would I use CouchDB over memcached?
J Chris A said…
@Peter CouchDB uses MVCC, so "whoever saves first wins" on a single node. Out of date saves fail, so clients must retry saving after fetching (and hopefully merging) the latest rev.

Replication flags any conflicting writes as they come in from other nodes. So if your application can process a queue of conflicted revs, you can scale it to multiple disconnected replicas, and have a guarantee of eventual consistency, as long as the nodes eventually complete a successful replication.
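That single-node fetch-merge-retry loop can be sketched like this (a toy model for illustration only; `DocStore`, `ConflictError`, and `save_with_retry` are made-up names, not CouchDB's API):

```python
class ConflictError(Exception):
    pass

class DocStore:
    """Toy MVCC store: a save must carry the document's current revision."""
    def __init__(self):
        self.docs = {}  # doc_id -> (rev, body)

    def get(self, doc_id):
        return self.docs.get(doc_id, (0, {}))

    def put(self, doc_id, rev, body):
        current_rev, _ = self.docs.get(doc_id, (0, {}))
        if rev != current_rev:
            raise ConflictError("out-of-date save")  # whoever saved first won
        self.docs[doc_id] = (current_rev + 1, body)
        return current_rev + 1

def save_with_retry(store, doc_id, update, attempts=5):
    """Fetch the latest rev, apply the update, and retry on conflict."""
    for _ in range(attempts):
        rev, body = store.get(doc_id)
        try:
            return store.put(doc_id, rev, update(dict(body)))
        except ConflictError:
            continue  # someone else saved first; re-fetch and try again
    raise ConflictError("gave up after %d attempts" % attempts)

store = DocStore()
save_with_retry(store, "doc1", lambda d: dict(d, count=d.get("count", 0) + 1))
save_with_retry(store, "doc1", lambda d: dict(d, count=d.get("count", 0) + 1))
print(store.get("doc1"))  # (2, {'count': 2})
```

The `update` callable is where application-specific merging would go; a real client would also need to handle the cross-node conflict revisions that replication flags.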
Stephan.Schmidt said…
I'm much more excited about Scalaris.

(Really-)Distributed, Erlang, Transactional Key/Value-Store.

What they lack is the JSON API (easy to do yourself) and server-side map reduce (I miss that one, it's cool! But I'm still not sure how important that is, or whether it can be done just as easily with Hadoop).

JanL said…
> Writes are serialized. Not serialized as in the isolation level, serialized as in there can only be one write active at a time. Want to spread writes across multiple disks? Sorry.

Writes are serialized per database. Want to spread writes across multiple disks? Use multiple databases.
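In practice, "use multiple databases" means routing each write yourself, for example by hashing the document id to pick a database (a sketch; the database names are made up):

```python
import hashlib

# One database per disk; the client must route every read and write.
DATABASES = ["db_disk0", "db_disk1", "db_disk2"]

def route(doc_id):
    """Deterministically map a document id to one of the databases."""
    h = int(hashlib.md5(doc_id.encode("utf-8")).hexdigest(), 16)
    return DATABASES[h % len(DATABASES)]
```

Note that this routing logic lives in the application, and adding a fourth disk changes where most keys hash to; that is exactly the manual-partitioning pain discussed below.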
JanL said…

More importantly, Scalaris lacks persistent storage; it is memory-only.

Same for memcached.
Jonathan Ellis said…
@JanL: "just use database-per-disk" is not a solution; you just run into the "CouchDB is not a truly distributed database" problem that much quicker, i.e., you have to manually partition. Manual partitioning is more painful the more nodes you have to manage, so a system that can only effectively handle a single disk per node is that much less attractive.
JanL said…
@Jonathan, CouchDB will handle that for you.
Cliff said…
Thanks for the dynomite mention. If you want to pick my brain about it, feel free to get in touch. Thanks.
Anonymous said…
When MySQL first started taking hold, many DB gurus dismissed it as useless. Sure, it doesn't do half of what big boys like Oracle can do, but it doesn't need to. Oracle is overkill to the people using MySQL.

CouchDB is another such technology. RDBMS is overkill for many web related projects. Those same PHP programmers who don't understand namespaces, and struggle with SQL, are going to eat this up.

I'm personally more interested in the Drizzle fork of MySQL than I am about CouchDB, but I'm definitely going to watch it closely so I can consider it for projects where it might fit the bill.
J Chris A said…
@PENIX I see where you are coming from, but as I mentioned in my earlier comment, what is interesting about CouchDB is the p2p distribution model. Nothing else even comes close for the purpose of distributing simple applications to users, while letting them own their own data.

CouchDB's storage model is closer to GFS than it is to Drizzle, so if you're looking for "big boys" to compare it to, that'd be one place to start.
kowsik said…
Maybe it's just me being join-challenged. Just yesterday, I was trying to do something with sqlite3 (which I love, BTW, for certain things) involving nested selects, and had to read up on the syntax for half an hour because it forced me to think about the problem in a different way. I think the simplicity of writing P.O.JS (plain old JavaScript) to express what you want to query vastly overcomes any limitations of a project that's barely 1.0. At least to me, that's the attractiveness. We'll eventually have sharding, parallel map-reduce and all the bells and whistles. But the magic of Couch is that it's simple, it __just__ works, and it helps me think about the problem the way it needs to be thought about.
drawk said…
CouchDB, MongoDB, and stuff like them have a place: they are essentially REST-type service interfaces to databases, where traditionally an RDBMS was only accessible over specific ports, pipes, or other network walls.

CouchDB in that respect is a 'distributed' database.

CouchDB may not be the end-all, but document databases are better suited for the future; we will just be coding differently, with a middle tier that is smarter about where data lies, not just the database. Lots of people are already coding that way, dealing with terabytes of data that go well beyond the point of being able to run a single process across all of it.

CouchDB addresses the locking-down of databases to one machine or cluster, freeing data to be more globally unique (a "hash in the sky") at a time when that type of system was just emerging; it will evolve just as RDBMSs did.

I do believe there is a strong future for RESTful HTTP/JSON-based databases that can handle some vertical scale, like Terracotta gives you, but the future is mostly horizontal scaling. Document databases fit that much better than RDBMSs, even if they are young and still iterating toward being market-ready.
Unknown said…
Bah, just saying something isn't distributed because it doesn't do X isn't really valid. There isn't a single technology that makes a thing distributed. The fact that it can do clusters makes it distributed.

Claiming it can't scale calls for actual proof - tests and numbers - and not just words.
Jonathan Ellis said…
As everyone knows, scalability isn't about single-node numbers (although those don't look too hot either); it's about whether adding Nx machines gives you Nx performance, and it's about automating growing and failure recovery so that you don't have to add Nx members to your ops team at the same time. If sharding + replication were enough to scale we'd all stick with pg and mysql, but it's not; at this point everyone's pretty much concluded that that's not adequate for big data.

But manual sharding is labor intensive, error prone, and inflexible. You _can_ deal with machine failures but it's painful. And that's the good news. Growing your cluster is much worse. So is dealing with load hot spots.

That's what my problem is with couchdb -- as Zach quoted, the first feature they tout is "distributed," which has become associated, fairly or not, with scalability features couchdb doesn't have. But none of their devs ever post a correction to articles lumping couchdb in with scalable databases to say, "actually, we mean this _other_ definition of distributed." They seem content to allow people to assume they have these other features too, which is understandable in some sense, but not really honest.
