Hello,

Based on the research I've done, CouchDB seems like a very good fit for the problem I'm trying to solve. However, when talking to people within the company, they've expressed that there are some unadvertised downsides to it (though they last tried it about 3 years ago) and that we would have had to migrate off of it fairly quickly.

The worries that they expressed were:
1. Scalability (with a stop-the-world lock when writing to disk).
2. Erlang stack traces not being the easiest to deal with. (I assumed that, with the "crash-only" design, common problems could be solved by restarting a node.)
3. Garbage collection being a very slow process.

And questions that I had were:
1. Would anyone have approximate numbers on how it scales, in terms of write throughput, data size, and cluster size? How big a cluster have people gotten working without issues? Has anyone tested this in the petabyte range?
2. Would anyone have approximate times for how long a map-reduce query could take (with whatever sized data; I'm just curious)?
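To make question 2 concrete, the kind of view I have in mind is a simple aggregate. A minimal sketch in plain JavaScript (CouchDB views are JavaScript functions; the `doc.type` field is an assumption about our documents, and `emit` is stubbed here so the sketch runs standalone):

```javascript
// In CouchDB, emit() is provided by the view server; stubbed here so
// the sketch is self-contained and runnable outside the database.
var emitted = [];
function emit(key, value) { emitted.push([key, value]); }

// Map step: one row per document, keyed by an assumed "type" field.
function map(doc) {
  if (doc.type) {
    emit(doc.type, 1);
  }
}

// Reduce step: equivalent of CouchDB's built-in "_sum" reduce.
function reduce(keys, values, rereduce) {
  return values.reduce(function (a, b) { return a + b; }, 0);
}

// Example: feed two hypothetical documents through the map step.
map({ type: "sensor_reading", value: 42 });
map({ type: "sensor_reading", value: 7 });
```

In a real deployment this pair would live in a design document and the reduce would just be the built-in `_sum`; I'm curious how long rebuilding such a view takes as the data grows.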

The use case I'm looking at is about 200 writes/second of documents under 50 kB each, with fairly rare updates; if everything goes well, the documents could get a bit larger, with more writes and a lot more updates.
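For a rough sense of scale, here is my back-of-envelope arithmetic for that baseline workload (assuming the worst case where every document is at the 50 kB upper bound, and ignoring indexes, replication, and compaction overhead):

```javascript
// Back-of-envelope ingest estimate for the workload described above.
// Assumes every document is at the 50 kB upper bound (worst case).
var writesPerSecond = 200;
var docSizeKB = 50;

var kbPerSecond = writesPerSecond * docSizeKB;   // 10,000 kB/s
var mbPerSecond = kbPerSecond / 1000;            // 10 MB/s
var gbPerDay = mbPerSecond * 86400 / 1000;       // 864 GB/day of raw documents

console.log(mbPerSecond + " MB/s, ~" + gbPerDay + " GB/day raw");
```

So roughly 10 MB/s, or on the order of 1 TB/day before any view or compaction overhead, which is why the cluster-size and petabyte questions above matter to me.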

Thanks in advance for any help,
Diogo
