nickva commented on issue #5544:
URL: https://github.com/apache/couchdb/issues/5544#issuecomment-2895741459

   > Thanks [@nickva](https://github.com/nickva) , I will look into that and 
update the case here later.
   
   We discussed this a bit in our dev channel and decided to raise the
default limit to 1M in `vm.args`: https://github.com/apache/couchdb/pull/5545
   
   > We actually talked about setting up a cluster to distribute load (or 
setting up multiple single instances) our setup is maybe a bit special as we 
have a multiple small to medium dbs (like a few MB to <10Gb). They are not 
related to each other and they are always replicated with a remote server.
   
   Either setup would work. As long as all the dbs fit on one instance, I can
see multiple single instances being the simpler option in one respect. A
cluster becomes necessary once the dbs no longer all fit on a single server.
   
   > Would it be possible to upgrade a single node to a cluster? Would it 
actually make sense in our scenario to use a cluster?
   
   That's possible but can be a bit tricky: you'd have to carefully manage
shard files and node names. I haven't done that recently, so I don't have any
good advice there. If possible, just replicating to a new cluster could be
simpler: replicate all dbs initially to the new cluster, stop the traffic to
the old instance, replicate again to catch everything up, then switch the
traffic over. It might imply some downtime during the catch-up and switchover.
   
   > PS: I can't thank you and everyone in the project enough for CouchDB its 
awesome to work with it 👍
   
   Np at all! Thanks for using CouchDB and for reaching out. It's much
appreciated, and it helps us find and fix issues!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
