On Mon, Mar 15, 2010 at 10:07:32AM +0000, Kevin Jackson wrote:
> However to get business buy-in, I need to investigate the possibility
> of not using replication and instead have 2+ couchdb nodes using a SAN
> to store the data.
I believe your two couchdb nodes will need separate partitions on the SAN,
and then to replicate to each other using couchdb replication. I'm fairly
sure they won't tolerate multiple independent couchdb instances updating the
same file via some sort of global filesystem. That is, I don't think the
couchdb process grabs locks on the underlying files, nor can it notice when
another process has updated the files and update its own internal state to
match. If it can, that's a feature which has slipped in very quietly :-)

> I've checked the JIRA and there appears to be an issue with nfs[1] -
> as we will be deploying on linux, nfs would have been part of the
> solution to multiple nodes -> shared disc

AFAIK, NFS is still only a way for a single couchdb node to talk to its
database files, not a way for multiple couchdb processes to access the
same files.

You could of course run some sort of master/slave failover: if node A stops
working, ensure it is completely dead (STONITH) and then start up a slave
node pointing at the same filesystem or SAN partition.

If anyone knows differently, please feel free to silence me.

Regards,
Brian.
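
P.S. For what it's worth, here is a minimal sketch of what "replicate to
each other" looks like in practice: each node POSTs a continuous replication
request to its own _replicate endpoint, pointed at the other node. The host
names, port, and database name are placeholders, not anything from this
thread.

    # Sketch only: two-way continuous replication between two CouchDB nodes
    # via the _replicate endpoint. Node/database names are placeholders.
    import json
    import urllib.request

    def start_replication(local_node, source_db, target_db):
        """POST a continuous replication request to local_node's _replicate endpoint."""
        body = json.dumps({
            "source": source_db,     # e.g. "http://node-a:5984/mydb"
            "target": target_db,     # e.g. "http://node-b:5984/mydb"
            "continuous": True,      # keep replicating as new updates arrive
        }).encode("utf-8")
        req = urllib.request.Request(
            f"{local_node}/_replicate",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        # Each node replicates to the other, so writes on either side converge.
        print(start_replication("http://node-a:5984",
                                "http://node-a:5984/mydb",
                                "http://node-b:5984/mydb"))
        print(start_replication("http://node-b:5984",
                                "http://node-b:5984/mydb",
                                "http://node-a:5984/mydb"))

With that in place each node has its own files on its own SAN partition, and
the replicator keeps the two databases converging, rather than two processes
sharing one set of files.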
