l...@airstreamcomm.net wrote:
> On 7/6/12 9:56 PM, Timo Sirainen wrote:
> >On 6.7.2012, at 23.28, l...@airstreamcomm.net wrote:
> >
> >>Thanks, that certainly helps identify the configuration options. However I
> >>am more concerned about the experiences of others who have actually used
> >>the replication.  What is the rate of change on your mail cluster, how many
> >>concurrent users do you support with replication enabled, do you use
> >>synchronous or asynchronous replication, are you using it in an
> >>active/active or active/passive state, is it possible to have a cluster
> >>with multiple servers at each site hosting the same mail data, does dsync
> >>replication scale well (10,000 -> 100,000 -> 1,000,000 users)?  Just trying
> >>to get a good feel for whether dsync replication is capable of handling the
> >>use case I am proposing before investing too much time in testing it.
> >
> >I wouldn't use it for large systems yet. It is still pretty inefficient.
> >v2.2 will have a redesigned dsync that can do incremental syncs much faster
> >and with less bandwidth.  Anyway, in my small installation I'm using it in
> >active-active mode and it works well enough. I've even configured my clients
> >intentionally so that they use different servers.
> >
> Does dsync replication only work between two hosts?  In my scenario
> I would have two sites with X number of nodes at each with an NFS
> backend for each site.  For this example let's say I have site A with
> two nodes that mount one NFS share, and site B with two nodes that
> mount one NFS share.  Is it possible to implement dsync replication
> between these two clusters of nodes?

If you respect http://wiki2.dovecot.org/NFS and
set up a director http://wiki2.dovecot.org/Director
including a doveadm proxy, it could work.

dsync is part of "doveadm backup", so you just have
to get your director setup right so that the imap, pop3,
lmtp and doveadm services are always proxied to the
correct NFS client node at the local site.
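As a rough sketch, the per-site director setup described above might
look like the fragment below. All host addresses, ports and ring
membership here are placeholders, not a tested configuration; the
wiki pages above are authoritative:

```
# dovecot.conf on each site-A frontend (hypothetical addresses)

# Members of the local director ring.
director_servers = 10.1.0.1 10.1.0.2

# NFS-client backend nodes at this site; the director keeps each
# user pinned to one backend to avoid NFS caching problems.
director_mail_servers = 10.1.0.11 10.1.0.12

service director {
  unix_listener login/director {
    mode = 0666
  }
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  inet_listener {
    port = 9090
  }
}

# Route imap/pop3 logins through the director.
service imap-login {
  executable = imap-login director
}
service pop3-login {
  executable = pop3-login director
}

# Proxy doveadm (and thus dsync) requests to the backend node
# currently serving the user.
doveadm_port = 12345
service doveadm {
  inet_listener {
    port = 12345
  }
}
```

The point of the director here is that each user is consistently
routed to one backend node per site, so a dsync/"doveadm backup" run
between the two sites always sees a stable view of the mailbox.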

Regards
Daniel
-- 
https://plus.google.com/103021802792276734820