> On Mar 10, 2016, at 3:18 AM, Jan Lehnardt <j...@apache.org> wrote:
> 
>> 
>> On 09 Mar 2016, at 21:29, Nick Wood <nwood...@gmail.com> wrote:
>> 
>> Hello,
>> 
>> I'm looking to back up a CouchDB server with multiple databases. Currently
>> 1,400, but it fluctuates up and down throughout the day as new databases
>> are added and old ones deleted. ~10% of the databases are written to within
>> any 5 minute period of time.
>> 
>> Goals
>> - Maintain a continual off-site snapshot of all databases, preferably no
>> older than a few seconds (or minutes)
>> - Be efficient with bandwidth (i.e. not copy the whole database file for
>> every backup run)
>> 
>> My current solution watches the global _changes feed and fires up a
>> continuous replication to an off-site server whenever it sees a change. If
>> it doesn't see a change from a database for 10 minutes, it kills that
>> replication. This means I only have ~150 active replications running on
>> average at any given time.
> 
> How about instead of using continuous replications and killing them,
> you use non-continuous replications triggered by _db_updates? They end
> automatically once caught up and should use fewer resources.
> 
> Best
> Jan
> --

In my opinion this is actually a design we should adopt for CouchDB’s own 
replication manager. Keeping all those _changes listeners running is needlessly 
expensive now that we have _db_updates.
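For what it's worth, a minimal sketch of the _db_updates-driven approach might look like the following. This is hypothetical glue code, not anything from CouchDB itself: it parses events from the _db_updates feed and builds bodies for one-shot POST /_replicate requests. The TARGET_BASE URL and the event handling are assumptions for illustration; actual feed consumption (HTTP long-polling or continuous mode) and posting the bodies to /_replicate are left out.

```python
import json

# Hypothetical off-site target; not from the thread.
TARGET_BASE = "https://offsite.example.com"

def replicate_body(db_name):
    """Body for POST /_replicate: a non-continuous (one-shot)
    replication that ends automatically once it has caught up."""
    return {
        "source": db_name,
        "target": f"{TARGET_BASE}/{db_name}",
        "continuous": False,    # one-shot, no long-lived process
        "create_target": True,  # create the target db if missing
    }

def handle_event(line):
    """Parse one line from the _db_updates feed. Return a _replicate
    body for 'created'/'updated' events; ignore others (e.g. 'deleted')."""
    event = json.loads(line)
    if event.get("type") in ("created", "updated"):
        return replicate_body(event["db_name"])
    return None

if __name__ == "__main__":
    # Sample feed lines, shaped like _db_updates output.
    feed = [
        '{"db_name": "orders", "type": "updated"}',
        '{"db_name": "old_logs", "type": "deleted"}',
    ]
    for line in feed:
        body = handle_event(line)
        if body:
            print(json.dumps(body))
```

Since each replication is one-shot, there is nothing to kill after a quiet period; the 10-minute idle timer from the original scheme goes away entirely.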

Adam
