On Jul 12, 2008, at 23:52 , David King wrote:

Has anyone hit a practical storage limit, either in terms of data or number of records, under couchdb? I have a database of about 70 million records (over 60 million of them are very small, only three integer properties) and maybe eight views that I'd like to try a quick port and see how it performs.

We'd love to hear what you come up with, and to help with any problems you encounter along the way, so please let us know. Note that CouchDB is not yet optimised: we are still in the 'getting it right' phase before we come to 'getting it fast'. That said, CouchDB is already plenty fast, and there is still potential to speed things up considerably.

Are there any gotchas that I should know about first under the default config?

As Paul said, use bulk inserts where possible. Be aware that building a view index for the first time on a very large database can take a while; what I'd do is query the view after each bulk insert so the index is updated incrementally. Note that CouchDB stores more data on disk than is in your documents, especially when you don't use bulk inserts (but even then). Run database compaction when the database file gets too large for you; compaction needs some extra disk space, since it writes out a copy of your data.
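To make the bulk-insert advice concrete, here is a minimal Python sketch that groups documents into payloads for CouchDB's `POST /db/_bulk_docs` endpoint. The batch size of 1000 is an assumption, not a recommendation from CouchDB itself; tune it to your document size. Only the payload construction is shown — the actual HTTP calls (and the follow-up `GET` on the view after each batch) are left as comments.

```python
import json

def bulk_batches(docs, batch_size=1000):
    """Group documents into payloads for POST /db/_bulk_docs.

    Yields JSON strings of the form {"docs": [...]}. The batch_size
    default of 1000 is a guess -- tune it for your document size.
    """
    batch = []
    for doc in docs:
        batch.append(doc)
        if len(batch) == batch_size:
            yield json.dumps({"docs": batch})
            batch = []
    if batch:  # flush the final, possibly short, batch
        yield json.dumps({"docs": batch})

# Example: 2500 tiny three-integer documents -> three payloads
# (1000 + 1000 + 500).
docs = [{"a": i, "b": i * 2, "c": i * 3} for i in range(2500)]
payloads = list(bulk_batches(docs))

# After POSTing each payload to /db/_bulk_docs, GET the view once
# so the index catches up incrementally instead of all at the end.
```

After a large import, a `POST` to `/db/_compact` reclaims the extra on-disk space mentioned above.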

Other than that, I'm very interested in your findings :)

Cheers
Jan