On Thu, 2004-09-16 at 18:56 -0400, Ken Murchison wrote:
> I think this would cause performance to suffer greatly. I think what we
> want is "lazy" replication, where the client gets instant results from
> the machine its connected to, and the replication is done in the
> background. I believe this is what David's implementation does.
> 
> Question: Are people looking at this as both redundancy and
> performance, or just redundance?
There has to be some balance between the two, of course. What exactly would that balance be?

A while back I had some ideas about lazy replication between geographically separate nodes in a mail cluster, to solve a problem a customer was having. I think I posted something on this very list back then. There was some research, but the costs of actually implementing it were too high, and the time to do it too short.

The idea was to get rid of the single-master structure of Murder and have an asymmetric structure where each node in the mail cluster can act as "primary" for one or several domains, and as "secondary" for one or several others, at the same time. Synchronization could flow in either direction. Each domain would have one primary server and some number of secondary servers -- redundancy could be increased by adding secondaries, and performance could be increased by placing them close to users in the network topology. Placing secondaries in a geographically remote location would act as a sort of hot backup -- if one server breaks, you just replace it and let it synchronize with an available replica. Basically, think DNS here, and add the ability to inject messages at any node.

Let's say you have five servers and three offices (customers). You'd set up one server in your own facilities, one in a co-location facility, and one in each of your customers' facilities. You configure the server in your own network -- which acts as a kind of administration point -- and the one in the co-location network to handle "all domains", and each server in the customers' facilities to handle mail only for their domain(s). You then create domains and mailboxes on the server closest to you in the network topology, and the mailboxes will be lazy-replicated to the correct locations. Using suitable DNS records, you can have mail delivered directly to each customer's server, and it would lazy-replicate to your servers.
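To make the data flow concrete, here is a minimal sketch in Python of what I mean by per-domain roles plus lazy replication. The class and method names are invented purely for illustration -- this is not Cyrus code or a proposed API, just the shape of the idea: a message is accepted locally right away, and a background pass pushes it to every other node that holds a replica of that domain.

```python
from collections import defaultdict, deque

class MailNode:
    """A cluster node: primary for some domains, secondary for others."""
    def __init__(self, name, primary_for=(), secondary_for=()):
        self.name = name
        self.primary_for = set(primary_for)
        self.secondary_for = set(secondary_for)
        self.mailboxes = defaultdict(list)   # domain -> messages held locally
        self.replication_queue = deque()     # drained lazily, in the background

    def handles(self, domain):
        return domain in self.primary_for or domain in self.secondary_for

    def accept(self, domain, message):
        """Inject a message at this node; the client gets an instant local
        result, and replication is merely queued, not performed inline."""
        if not self.handles(domain):
            raise ValueError(f"{self.name} does not handle {domain}")
        self.mailboxes[domain].append(message)
        self.replication_queue.append((domain, message))

    def replicate(self, peers):
        """Background pass: push queued messages to every peer that also
        holds a replica of the domain, whichever role it has."""
        while self.replication_queue:
            domain, message = self.replication_queue.popleft()
            for peer in peers:
                if peer is not self and peer.handles(domain):
                    if message not in peer.mailboxes[domain]:
                        peer.mailboxes[domain].append(message)

# The five-server example: admin and colo replicate all domains,
# each customer server is primary only for its own domain.
domains = {"alpha.example", "beta.example", "gamma.example"}
admin = MailNode("admin", secondary_for=domains)
colo  = MailNode("colo",  secondary_for=domains)
cust1 = MailNode("cust1", primary_for={"alpha.example"})
nodes = [admin, colo, cust1]

cust1.accept("alpha.example", "msg-1")  # injected at the customer's node
cust1.replicate(nodes)                  # later, lazily synced to admin and colo
```

The point of the sketch is that injection and replication are decoupled: any node that handles a domain can accept mail for it immediately, and the direction of synchronization simply follows who queued what.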
Your servers would act as MX backups when the customer's network is down, and the mail would be lazy-replicated to them when they reappear on the network. Also, you could support dial-up users by having them connect to the co-located server instead of having to open firewalls etc. into the customer's network, which is potentially behind a much slower link.

So to answer your question: I believe that by selecting a suitable structure, you could actually address both performance and redundancy at the same time. (Although I realize I've broadened the terms beyond what you probably meant originally.)

In any case, I'd be willing to join the fundraising, but before that I'd like to see an exact specification of what is actually being implemented. I imagine the specification could be drafted here on this list, put somewhere on the web along with the fundraising details, and we'd go from there.

Cheers,
-- 
Fabian Fagerholm <[EMAIL PROTECTED]>