This is something we're working on. At present, the replication manager will 
try to run all replications at once. We plan to introduce a smarter job 
scheduler so that you can configure 'max_replications_running' independently of 
other concerns.
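
If you want to see this behaviour for yourself, /_active_tasks lists the 
replication jobs that are currently running. A minimal sketch in Python, 
assuming a local node on port 5984, admin access, and the third-party 
'requests' library:

    import requests  # third-party HTTP client

    # _active_tasks lists every running task; replication jobs have
    # type "replication" (the endpoint requires admin credentials).
    tasks = requests.get("http://localhost:5984/_active_tasks").json()
    replications = [t for t in tasks if t.get("type") == "replication"]
    print(len(replications), "replications currently running")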

For now, you could use non-continuous replications and listen to /_db_updates 
on your sources to know when to trigger those replications. This is, roughly 
speaking, what the replication manager will do in a future release.
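
As a rough sketch of that pattern, again in Python: the database names, the 
port, the filter, and the 'requests' dependency are all assumptions here, and 
the exact field names in _db_updates events vary a little between CouchDB 
versions.

    import json
    import requests  # third-party HTTP client

    COUCH = "http://localhost:5984"               # assumed local node
    SOURCE = "central"                            # assumed source db name
    TARGETS = ["project_0001", "project_0002"]    # assumed project dbs

    # Follow the global _db_updates feed; each non-empty line is a JSON
    # event such as {"db_name": "central", "type": "updated"}.
    feed = requests.get(COUCH + "/_db_updates",
                        params={"feed": "continuous"}, stream=True)

    for line in feed.iter_lines():
        if not line:
            continue
        event = json.loads(line)
        name = event.get("db_name") or event.get("dbname")  # key differs by version
        if name != SOURCE or event.get("type") != "updated":
            continue
        # The source changed: kick off one-shot (non-continuous) replications.
        for target in TARGETS:
            requests.post(COUCH + "/_replicate", json={
                "source": SOURCE,
                "target": target,
                # "filter": "app/by_project",     # assumed per-project filter
            })

The upside is that only a handful of replication processes exist at any one 
time instead of 1,000 permanently running continuous ones.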

B.


> On 28 Mar 2016, at 09:01, Alexander Harm <[email protected]> wrote:
> 
> Hello all,
> 
> I’ve been playing with CouchDB over the last few weeks and am running into 
> serious problems. My idea was to have a central database and roughly 1,000 
> project dbs, each fed by a filtered, local, one-way replication from the 
> central db using the _replicator db. However, triggering 1,000 replications 
> grinds CouchDB to a standstill. I read that CouchDB spawns a process for each 
> replication, so that approach might not be feasible after all.
> I have since changed my setup to a Node process that listens to the changes 
> feed of the central database and manually triggers a one-shot replication to 
> the affected project dbs. That seems to work for now.
> 
> However, the clients now need to listen to the changes feeds of their 
> respective project dbs, and again CouchDB is thrashing and crashing all over 
> the place. I increased "max_dbs_open" to 1,500 and added "+P 10000" to the 
> Erlang startup parameters. ulimit on OS X is set to unlimited.
> 
> My central database only has 1,000 small documents, so I’m a bit scared of 
> what will happen once it grows to millions of documents with hundreds of 
> clients listening to the changes feeds.
> 
> Are there any recommendations on (nr_of_dbs, nr_of_clients, 
> nr_of_replications) => {couch_settings, erlang_settings, system_settings}?
> Or should I not even attempt to host Couch myself (I would really like to) 
> and directly use Cloudant?
> 
> I’m using CouchDB 1.6.1 (installed via Homebrew) on OS X 10.11.4.
> 
> Any help appreciated.
> 
> Regards, Alexander
