I think about concepts like this now and then. They do have one advantage over simple replication techniques:

- with replication, you have a replication interval, and mails received within that interval are more or less guaranteed to be lost in case of a crash (you can solve that at the block storage level, but the price is not acceptable for every project)
- you can forget triggered replication, because not EVERY incoming email can trigger a replication run; that approach only works for data that is written less frequently
- using Postfix to duplicate the email has the advantage of queueing/buffering in case the target has problems
- if you can make sure the duplicating Postfix instance can deliver all duplicated emails nearly at once, the window in which mail can be lost becomes very small (a sketch of such a setup follows right after this list)
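
A minimal sketch of the duplicating Postfix part, with made-up host names (store1/store2) and file paths: recipient_bcc_maps lets the receiving instance queue a copy of every message for the same mailbox on the other store, and the Postfix queue does the buffering if that store is temporarily down:

    # main.cf on store1 (hypothetical, only a sketch)
    recipient_bcc_maps = pcre:/etc/postfix/duplicate_bcc

    # /etc/postfix/duplicate_bcc
    # queue a copy of every message for the same mailbox on store2;
    # store2 must of course not have a matching rule pointing back,
    # or copies would ping-pong between the two stores
    /^(.+)@example\.org$/    ${1}@store2.example.org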

My idea would be not to use the gateway(s) for duplication but the servers where the emails are stored (where users access them via POP3/IMAP). The gateways deliver to one of them (with a dedicated subdomain you can even define priorities), and the one receiving a certain email delivers it to the other storage server(s). If all the storage servers share a virtual IP address (e.g. via heartbeat/LinuxHA), a switchover would be quite transparent for users.
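
To make the "dedicated subdomain with priorities" and the shared address concrete (all names and addresses below are made up): MX preferences on a storage subdomain tell the gateways which store to try first, and a heartbeat haresources entry keeps the POP3/IMAP service IP on whichever store is alive:

    ; zone fragment for the storage subdomain
    ; lower MX preference = tried first by the gateways
    store.example.org.    IN MX 10 store1.example.org.
    store.example.org.    IN MX 20 store2.example.org.

    # /etc/ha.d/haresources (heartbeat v1 syntax)
    # store1 normally holds the service IP that POP3/IMAP clients use;
    # heartbeat moves it to the surviving store on failover
    store1.example.org  192.168.0.100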

One drawback is that you need some means of synchronizing deletes and moves between the storage servers.

Dirk
