AFAIK you have to remove the `_replication_state` attribute from the `_replicator` document to restart the replication.
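A minimal sketch of that approach, for anyone following along: fetch the replicator document, strip the state fields CouchDB has added, and PUT it back. The document contents below are invented for illustration; only the `_replication_state*`/`_replication_id` field names come from CouchDB.

```python
import json

def clear_replication_state(doc):
    """Return a copy of a _replicator document with the fields CouchDB
    adds to track replication state removed. PUTting the cleaned doc
    back (keeping _id and _rev) should make the replicator restart
    the job."""
    return {k: v for k, v in doc.items()
            if not k.startswith("_replication_")}

# Hypothetical replicator document, as CouchDB might return it:
doc = {
    "_id": "my-repl",
    "_rev": "3-abc",
    "source": "http://localhost:5984/db_a",
    "target": "http://localhost:5984/db_b",
    "continuous": True,
    "filter": "app/by_type",
    "_replication_state": "completed",
    "_replication_state_time": "2013-09-25T12:00:00+00:00",
    "_replication_id": "deadbeef",
}

print(json.dumps(clear_replication_state(doc), sort_keys=True))
```

Note that `_id` and `_rev` are kept, since the PUT that writes the cleaned document back needs them.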
Hth, Tib

On 25 September 2013 14:53, Jason Smith <[email protected]> wrote:
> We just had a discussion in another thread about deleting and
> re-creating _replicator documents. Try adding a random or changing
> value to your document. Does that help?
>
> I described detailed techniques for handling replicator documents in
> my earlier post.
>
> On Wed, Sep 25, 2013 at 6:40 PM, [email protected]
> <[email protected]> wrote:
> > On 25.09.2013 12:41, Dave Cottlehuber wrote:
> >
> >> Hey Marten, sorry to hear things didn't work out. Some more details
> >> might help. How was your replication set up? What sort of
> >> replication, esp. was it continuous?
> >
> > It was a continuous replication with a filter function. The filter
> > function itself was OK.
> >
> >> And what does "restart replication" mean?
> >
> > Well, we stored stuff into the database, but that stuff was not
> > replicated to the other two databases - therefore we deleted the
> > documents in the replicator database (they were showing no error
> > state) and stored them again - but no task was assigned to these
> > new documents.
> >
> >> Was it stored in the replicator db or not?
> >
> > Yes.
> >
> >> Any relevant errors in your system logs around that time?
> >>
> >> It's unusual for a restart of couchdb not to fix stuff, and when the
> >> network comes up couchdb should restart any replications saved in
> >> the replicator db automatically, assuming it's not exceeded the
> >> (configurable) maximum restarts.
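Jason's suggestion quoted above, forcing the document's content to change by adding a random value, could be sketched like this. The field name `nonce` and the helper function are my own invention for illustration; CouchDB only cares that the document body actually changed.

```python
import uuid

def touch_replicator_doc(doc):
    """Return a copy of a replicator document with a throwaway field
    refreshed to a new random value, so that saving it produces a
    genuine content change CouchDB will notice."""
    touched = dict(doc)
    touched["nonce"] = uuid.uuid4().hex  # arbitrary field name, illustration only
    return touched

repl_doc = {
    "_id": "my-repl",
    "source": "http://localhost:5984/db_a",
    "target": "http://localhost:5984/db_b",
    "continuous": True,
}

touched = touch_replicator_doc(repl_doc)
```

Each call produces a different `nonce`, so re-saving the document is never a no-op.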
