Sascha Lucas wrote:
easy with Postfix: DNS round robin (the solution lives in the DNS server, and nearly all MTAs are aware of it)
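Roughly, the zone just carries one A record per mail server, and every lookup returns the full set in rotating order. A quick way to see what clients actually get back (hostname and addresses below are made up):

    # A minimal sketch, assuming mail.example.com carries one A record per
    # mail server (names and addresses are hypothetical):
    #
    #   mail.example.com.  300  IN  A  192.0.2.10
    #   mail.example.com.  300  IN  A  192.0.2.20
    import socket

    _, _, addresses = socket.gethostbyname_ex("mail.example.com")
    print(addresses)  # both addresses come back; the DNS server rotates the order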

So DNS round robins between the two mail servers. A new mail comes into server1; how does this mail make it to server2?

Yes, this is definitely the most difficult part. It must be solved somewhere in the LDA (Cyrus). The MTA just passes mail to the LDA, so round robin should work.
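To make the problem concrete, here is a deliberately naive sketch (not how Cyrus actually solves it): the message is simply handed to the LDA on both servers over LMTP; hostnames and port are made up. The real question is who does this step, and what happens when one side is down:

    import smtplib

    SENDER, RCPT = "alice@example.com", "bob@example.com"
    MESSAGE = b"From: alice@example.com\r\nTo: bob@example.com\r\n\r\nhello\r\n"

    # Hand the same message to the LDA on each server (hostnames/port hypothetical).
    for host in ("server1.example.com", "server2.example.com"):
        with smtplib.LMTP(host, 2003) as lda:
            lda.sendmail(SENDER, [RCPT], MESSAGE)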

Sure, round robin works for incoming mail from other ISPs. But what about users trying to send mail who happen to get the DNS answer for the down server? Or users trying to check their mail and hitting the down server, or the server without their mail? DNS round robin is useful in many cases, but I'm not convinced it solves the problem we're looking at.
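To be fair, a well-behaved MTA walks the whole address list and retries, roughly like the sketch below (hypothetical hostname); the point is that a user's mail client may simply give up on the first dead address:

    import socket

    def connect_any(host: str, port: int, timeout: float = 5.0) -> socket.socket:
        """Try every address DNS returns for host until one answers."""
        last_err = None
        for *_, sockaddr in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
            try:
                return socket.create_connection(sockaddr[:2], timeout=timeout)
            except OSError as err:
                last_err = err  # that server is down; fall through to the next A record
        raise last_err

    # conn = connect_any("mail.example.com", 143)  # IMAP check; hypothetical name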

I don't see that Postgres supports multiple masters either. Circular

I'm not an expert, but what about this?
http://www.postgresql.org/about/news.289

Ah, I forgot about that package; I only saw the Slony stuff. However, after thinking about it a bit, I don't see why you'd need multi-master. Can you elaborate?

The storage has to reside somewhere. If that site goes down, both servers go down. You either need both servers in the same site with shared storage, or you have to figure out how to do a shared-nothing backend.

It depends on how both sites are connected. Just as with redundancy in servers and networking, you can have redundant storage, e.g. with (a)synchronous replication over IP or FC networks.
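A toy sketch of that trade-off (io.BytesIO stands in for the disks and queue.Queue for the replication link; this is not a real replication stack): synchronous replication acknowledges a write only once the remote site has it, while asynchronous replication acknowledges locally and ships the block later:

    import io
    import queue

    def write_sync(block: bytes, local: io.BytesIO, remote: io.BytesIO) -> None:
        local.write(block)
        remote.write(block)  # in real life this blocks on the WAN round trip
        # only now ack the application: safe, but every write pays the latency

    def write_async(block: bytes, local: io.BytesIO, backlog: queue.Queue) -> None:
        local.write(block)
        backlog.put(block)   # a background shipper drains this to the other site
        # ack immediately: fast, but anything still queued dies with the site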

If you're replicating or copying to another site with completely different hardware, then that fits the description of shared-nothing. So either you're mounting the same storage, or you're mounting different storage in both locations. If you're mounting the same storage, then one site can lose its connection to it.

BTW: I don't know of a cluster solution for Cyrus. We use a cluster-aware commercial one.

Does the cluster-aware stuff assume shared storage, with both servers active/active or active/passive?

I hate to be the guy shooting holes in everyone's ideas, but fault-tolerant system design is a large part of what I've been doing for the past six years. It's not easy, it's not simple, and there are 254,861 caveats to any plan, depending on the services you're supporting, the network size, the skill of the local admins, and a hundred other things. Part of the problem is that the original poster left the field wide open, so we're all making a number of assumptions. The emails so far indicate that there is no perfect solution in this case. Any fault-tolerant design is going to require some trade-offs. The original poster needs to decide what's important. Is better uptime more important than throughput? Is cost a factor? Is data integrity more important than uptime?

kashani
--
gentoo-user@gentoo.org mailing list
