Great info.  Thanks!

On Wed, Jan 25, 2017 at 12:11 PM, Paul Davis <paul.joseph.da...@gmail.com>
wrote:

> Garth,
>
> The way to tell when a cluster is synced is by looking at the
> `internal_replication_jobs` key in the JSON blob returned from the
> _system endpoint on the 5984 port on each node in the cluster. Once
> it's zero (or close to it) on each node you're done getting data to
> the new node. Though it can sometimes lie if the cluster is busy and
> things get wonky. To be extra sure you can run this on each node in
> the cluster:
>
> `mem3_sync:initial_sync(nodes()).`
>
> Basically all that does is queue up every database for internal
> replication. If running it doesn't push the count back above zero,
> then you should be good to go.
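>
> For example, a quick check on each node might look something like this
> (localhost and default ports assumed; add credentials if your nodes
> require auth, and jq is just for convenience):
>
>     # How many internal replication jobs are still pending on this node?
>     curl -s http://127.0.0.1:5984/_system | jq '.internal_replication_jobs'
>
> Once that prints 0 (or close to it) on every node - including after
> queueing everything again with `mem3_sync:initial_sync(nodes()).` from
> the node's Erlang shell - internal replication has caught up.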
>
>
> To your second question, it would be impossible to automatically sync
> a view when the node is rebuilt via internal replication. Each view
> index depends on the order of updates applied to its shard. With
> internal replication that order is undefined: it's approximately
> random, it would change on every rebuild, and it most definitely
> would not match the order of the source shard when documents have
> been updated.
>
> However, the answer to your specific problem is to use
> maintenance_mode on the node being rebuilt. Before you first boot the
> node you want to rebuild (or before you connect it to the cluster),
> set the `[couchdb] maintenance_mode = true` parameter. This prevents
> the node from participating in any interactive requests, which keeps
> it from responding to view queries before it's built its views. Then
> you just need to watch _active_tasks for your view to build before
> setting maintenance_mode back to false or deleting it.
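>
> Concretely (paths and ports below are the usual defaults, so adjust
> for your install), that's a stanza in the node's local.ini before it
> starts:
>
>     [couchdb]
>     maintenance_mode = true
>
> and then, once the indexer tasks have drained, flipping it back over
> the node-local config API, something along these lines:
>
>     # Watch for the view index build tasks to finish.
>     curl -s http://127.0.0.1:5984/_active_tasks
>
>     # Re-enable interactive traffic once the views are built.
>     curl -X PUT http://127.0.0.1:5986/_config/couchdb/maintenance_mode -d '"false"'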
>
> You may also want to make a view query against the individual shards
> over the node-local 5986 port to make sure that a build was triggered
> for each shard.
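>
> The shard names, design doc, and view below are placeholders, so
> substitute your own (the suffix on a shard name is a timestamp), and
> note that the slashes in shard names have to be URL-encoded:
>
>     # List the shard databases present on this node.
>     curl -s http://127.0.0.1:5986/_all_dbs
>
>     # Querying a view on one shard triggers/waits for its index build.
>     curl -s 'http://127.0.0.1:5986/shards%2F00000000-7fffffff%2Fmydb.1485354681/_design/mydesign/_view/myview?limit=0'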
>
> Paul
>
> On Wed, Jan 25, 2017 at 11:27 AM, Garth Gutenberg
> <garth.gutenb...@gmail.com> wrote:
> > Kind of a follow-up question to this.  I've found in my testing
> > that when a new node comes online in a cluster, it only syncs the
> > raw data, but not the views.  Is there a way to enable syncing of
> > views across cluster nodes as well?  Basically I want all the nodes
> > in my cluster to be exact replicas of each other.  We have some
> > relatively large DBs (~4GB) whose views take a while to generate.
> >
> > To expand on the previous scenario, if the downed node comes up
> > without any views, and a client hits it, that client needs to wait
> > for the view to be generated - even though it exists on the other
> > nodes in the cluster.  And that wait time can be 15-30 mins in some
> > cases, which really isn't acceptable when the view is already
> > generated, just not on this particular node.
> >
> > On Wed, Jan 25, 2017 at 8:59 AM, Garth Gutenberg
> > <garth.gutenb...@gmail.com> wrote:
> >
> >> Scenario:
> >>
> >> I have a three node cluster.  One of the nodes goes offline
> >> (server dies, whatever).  I bring up a new node with no data and
> >> it starts syncing with the other nodes in the cluster.
> >>
> >> How do I know when this sync is complete and the new node has all
> >> the data?  I'm dealing with thousands of DBs, so doing a doc count
> >> in each one isn't really feasible - at least not in a timely
> >> manner.  Is there a log or API somewhere that indicates completion
> >> of data synchronization from a server perspective, not just
> >> individual DBs?
> >>
>