-------- Original Message --------
> Date: Wed, 12 Jan 2011 13:42:14 +0000
> From: Jonathan Tripathy <jon...@abpni.co.uk>
> To: postfix-users@postfix.org
> Subject: Re: Network Ideas

> 
> On 12/01/11 13:36, Steve wrote:
> > -------- Original Message --------
> >> Date: Wed, 12 Jan 2011 13:47:00 +0100
> >> From: John Adams <mailingli...@belfin.ch>
> >> To: postfix-users@postfix.org
> >> Subject: Re: Network Ideas
> >> On 12.01.2011 12:03, Jonathan Tripathy wrote:
> >>> On 12/01/11 10:45, John Doe wrote:
> >>>> From: Jonathan Tripathy <jon...@abpni.co.uk>
> >>>>
> >>>>> While your idea would work in HA mode, would that cause any problems
> >>>>> if both postfix servers were used at the same time? (i.e. load balanced)
> >>>>> In fact I may be able to answer my own question by saying yes, it would
> >>>>> cause a problem as you're not supposed to write to a DRBD secondary...
> >>>> I saw some active-active DRBD howtos; but they used filesystems
> >>>> like OCFS2 or GFS and such...
> >>>> http://www.sourceware.org/cluster/wiki/DRBD_Cookbook
> >>>> But I am no expert...
> >>>>
> >>>> JD
> >>>>
> >>> If I used an NFS cluster, I could use both Postfix servers at the
> >>> same time, couldn't I?
> >> These questions you should really ask on the heartbeat/DRBD
> >> mailing list(s).
> >> Just one hint: think about the complexity of an active-active cluster
> >> running OCFS2 and mail. Think about file locking.
> >> Building this is one thing. Managing the unexpected afterwards is
> >> another thing.
> >>
> > I run a two-node mail server using GlusterFS with replication. It is
> > ultra easy to set up. File locking in mail environments is no big issue.
> > Mostly, mail arrives on one of the MX nodes, gets processed, and is then
> > passed to the delivery agent; the delivery agent saves the mail (in my
> > case in maildir format) to its final destination. In the whole process
> > there is almost no locking involved, since each mail saved in the maildir
> > has a unique name, and that alone mostly avoids the need for locking. The
> > POP/IMAP server then does the indexing, and that is where locking is (or
> > can be) involved. But a good IMAP/POP server can handle that (Dovecot can).
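
A side note on the Dovecot part, since the indexing is where the locking
lives: on a clustered filesystem a fragment like this is all I mean (just a
sketch; the maildir path is an example, not my real one):

    mail_location = maildir:/var/spool/mail-cluster/%d/%n
    # index files are where the locking happens; these are the usual
    # recommendations for cluster/NFS-style storage
    mmap_disable = yes
    mail_fsync = always
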
> >
> > The whole storage part works so well that I often forget it is
> > clustered. The good thing about GlusterFS is that I can add as many
> > active nodes as I like.
> >
> > The only part of a clustered or n-node mail server setup where you
> > really have to take care is the other things that you glue into the
> > mail server: greylisting, antispam, mailing list software, etc. That
> > kind of stuff needs to be cluster-aware. The storage is the lesser
> > problem, IMHO.
> Thanks Steve, excellent info
> 
:)


> As for the antispam, greylisting and AV things, they will be on
> separate servers tied to the cluster, so I think I'm good
> there.
> 
Okay. If you can set it up that way, it will simplify things a lot.


> As for the GlusterFS, I take it this would replace DRBD, Heartbeat and 
> NFS in my proposed setup?
>
Yes. My goal when designing the system was that each node be autarkic
(self-sufficient). If I look at just one node (from the FS viewpoint), the
node is built this way: storage on top of a local RAID device. That local
storage is then exported as a GlusterFS brick that does replication.
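
If you use a GlusterFS version with the gluster CLI (3.1 or later), creating
such a replicated volume is roughly this; node1, node2, /data/brick and
mailvol are example names, not my real ones:

    gluster peer probe node2          # join the two nodes into one pool
    gluster volume create mailvol replica 2 \
        node1:/data/brick node2:/data/brick
    gluster volume start mailvol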

The other node is set up the same way. So let's say the total storage is
1 TB. Then you need double that amount, because node 1 would hold 1 TB and
node 2 would hold 1 TB too. And since both nodes (in my setup) have local
RAID (let's say you use a mirror), the total raw storage would be 4 TB, but
the real usable capacity is only 1 TB.
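
Spelled out, assuming local RAID1 mirroring and 2-way GlusterFS replication:

    usable = raw / (RAID factor x replica factor)
           = 4 TB / (2 x 2) = 1 TB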

The GlusterFS server process running on each node then sees the local 1 TB
plus the other 1 TB from the other node. If one node goes down, the other
node can still continue to work, since it still sees the 1 TB: the GlusterFS
client process just sees 1 TB (the server side is aware of the 2 x 1 TB, but
from the client viewpoint there is just 1 TB). As soon as the failed node
comes back, the GlusterFS replication process takes care of the resync. And
not only that: I could go and remove that 1 TB from node 1, and node 1 would
still be functional, since from its viewpoint it still sees the 1 TB of
storage (node 2 is still working, so the storage is still there from node
1's viewpoint).
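
On the client side this is just a mount (same example names as above). In
the versions I used, reading the tree after a node comes back is what forces
a full resync:

    mount -t glusterfs node1:/mailvol /var/spool/mail-cluster
    # after a failed node returns: stat every file to trigger self-heal
    ls -lR /var/spool/mail-cluster > /dev/null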

I know, I know. This all sounds very complicated, but it is not. In my first
setup I managed to completely overload the nodes with GlusterFS processing
time alone. But that was a long time ago, with early GlusterFS software.
Current GlusterFS versions are much better.


> Have you got any good links that you would
> recommend for setting up such a system?
> 
http://www.gluster.com/community/documentation/index.php/Main_Page

I would suggest you subscribe to their mailing list and ask there if you
need more help.


> Thanks
>
Steve
