Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-17 Thread Russell Coker
On Sat, 16 Oct 2004 22:00, Marcin Owsiany [EMAIL PROTECTED] wrote:
  If one machine has a probability of failure of 0.1 over a particular time
  period then the probability of at least one machine failing if there are
  two servers in the cluster over that same time period is 1-0.9*0.9 ==
  0.19.

 But do we really care about whether a machine fails? I'd rather say
 that what we want to minimize is the _service_ downtime.

If someone has to take time out from other work to fix it then we care.  There 
are lots of things that we would like to have done but which are not being 
done due to lack of time.  Do we really want to take more time away from 
other important tasks just to have super-reliable @debian.org email?

 With one machine, the probability of the service being unavailable is
 0.1. With two machines it's equal to the probability of both machines
 failing at the same time, so it's 0.1*0.1 == 0.01, as long as the
 probabilities are independent (not sure if that's the right translation
 of the term).

Correct.  Configuration errors and software bugs can put two machines offline 
just as easily as one.

 Otherwise, I'd say that the increase of availability is worth the
 additional debugging effort :-)

Are you going to be involved in doing the work?

This entire thread started because the admin team doesn't seem to have enough 
time to do all the work that people would like them to do.  Your suggestion 
seems likely to make things worse not better.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-17 Thread martin f krafft
also sprach Russell Coker [EMAIL PROTECTED] [2004.10.17.1622 +0200]:
 Are you going to be involved in doing the work?

I volunteer to join the postmaster team and help out.

-- 
Please do not CC me when replying to lists; I read them!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer, admin, and user
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
Invalid/expired PGP subkeys? Use subkeys.pgp.net as keyserver!




Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-17 Thread martin f krafft
also sprach martin f krafft [EMAIL PROTECTED] [2004.10.17.1626 +0200]:
 I volunteer to join the postmaster team and help out.

Though my experience is really 98% postfix, 1.5% qmail, 0.4%
MDaemon, and 0.1% Exchange. So absolutely no exim in there. I've had
my fair share of single setuid binaries. :)

-- 
Please do not CC me when replying to lists; I read them!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer, admin, and user
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
Invalid/expired PGP subkeys? Use subkeys.pgp.net as keyserver!




Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-17 Thread Stephen Gran
This one time, at band camp, martin f krafft said:
 also sprach Russell Coker [EMAIL PROTECTED] [2004.10.17.1622 +0200]:
  Are you going to be involved in doing the work?
 
 I volunteer to join the postmaster team and help out.

/AOL.

My experience is mostly exim 3 & 4, and sendmail.
-- 
 -
|   ,''`.Stephen Gran |
|  : :' :[EMAIL PROTECTED] |
|  `. `'Debian user, admin, and developer |
|`- http://www.debian.org |
 -




Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-16 Thread Russell Coker
On Fri, 15 Oct 2004 23:33, Arnt Karlsen [EMAIL PROTECTED] wrote:
  On Fri, 15 Oct 2004 03:19, Arnt Karlsen [EMAIL PROTECTED] wrote:
Increasing the number of machines increases the probability of one
machine failing for any given time period.  Also it makes it more
difficult to debug problems as you can't always be certain of
which machine was involved.
  
   ..very true, even for aero engines.  The reason the airlines like
   2, 3 or even 4 rather than one jet.
 
  You seem to have entirely misunderstood what I wrote.

 ..really?   Compare with your average automobile accident and
 see who has the more adequate safety philosophy.

If one machine has a probability of failure of 0.1 over a particular time 
period then the probability of at least one machine failing if there are two 
servers in the cluster over that same time period is 1-0.9*0.9 == 0.19.

 [EMAIL PROTECTED], 2 boxes watching each other or some such, will give
 that "Ok, I'll have a look some time next week" peace of mind,
 and we don't need symmetric power here, one big and one or
 more small ones will do fine

Have you ever actually run an ISP?

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-16 Thread Marcin Owsiany
On Sat, Oct 16, 2004 at 09:29:32PM +1000, Russell Coker wrote:
 On Fri, 15 Oct 2004 23:33, Arnt Karlsen [EMAIL PROTECTED] wrote:
   On Fri, 15 Oct 2004 03:19, Arnt Karlsen [EMAIL PROTECTED] wrote:
 Increasing the number of machines increases the probability of one
 machine failing for any given time period.  Also it makes it more
 difficult to debug problems as you can't always be certain of
 which machine was involved.
   
..very true, even for aero engines.  The reason the airlines like
2, 3 or even 4 rather than one jet.
  
   You seem to have entirely misunderstood what I wrote.
 
  ..really?   Compare with your average automobile accident and
  see who has the more adequate safety philosophy.
 
 If one machine has a probability of failure of 0.1 over a particular time 
 period then the probability of at least one machine failing if there are two 
 servers in the cluster over that same time period is 1-0.9*0.9 == 0.19.

But do we really care about whether a machine fails? I'd rather say
that what we want to minimize is the _service_ downtime.

With one machine, the probability of the service being unavailable is
0.1. With two machines it's equal to the probability of both machines
failing at the same time, so it's 0.1*0.1 == 0.01, as long as the
probabilities are independent (not sure if that's the right translation
of the term).

Or am I wrong in the first sentence?

Otherwise, I'd say that the increase of availability is worth the
additional debugging effort :-)

Marcin
-- 
Marcin Owsiany [EMAIL PROTECTED] http://marcin.owsiany.pl/
GnuPG: 1024D/60F41216  FE67 DA2D 0ACA FC5E 3F75  D6F6 3A0D 8AA0 60F4 1216





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-16 Thread Arnt Karlsen
On Sat, 16 Oct 2004 21:29:32 +1000, Russell wrote in message 
[EMAIL PROTECTED]:

 On Fri, 15 Oct 2004 23:33, Arnt Karlsen [EMAIL PROTECTED] wrote:
   On Fri, 15 Oct 2004 03:19, Arnt Karlsen [EMAIL PROTECTED] wrote:
 Increasing the number of machines increases the probability of
 one machine failing for any given time period.  Also it makes
 it more difficult to debug problems as you can't always be
 certain of which machine was involved.
   
..very true, even for aero engines.  The reason the airlines
like 2, 3 or even 4 rather than one jet.
  
   You seem to have entirely misunderstood what I wrote.
 
  ..really?   Compare with your average automobile accident and
  see who has the more adequate safety philosophy.
 
 If one machine has a probability of failure of 0.1 over a particular
 time period then the probability of at least one machine failing if
 there are two servers in the cluster over that same time period is
 1-0.9*0.9 == 0.19.
 
  [EMAIL PROTECTED], 2 boxes watching each other or some such, will give
  that "Ok, I'll have a look some time next week" peace of mind,
  and we don't need symmetric power here, one big and one or
  more small ones will do fine
 
 Have you ever actually run an ISP?

..no, I'm an aeronautical engineer and like Zeppeliners.  ;-)

-- 
..med vennlig hilsen = with Kind Regards from Arnt... ;-)
...with a number of polar bear hunters in his ancestry...
  Scenarios always come in sets of three: 
  best case, worst case, and just in case.





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-16 Thread Arnt Karlsen
On Sat, 16 Oct 2004 14:00:57 +0200, Marcin wrote in message 
[EMAIL PROTECTED]:

 On Sat, Oct 16, 2004 at 09:29:32PM +1000, Russell Coker wrote:
  On Fri, 15 Oct 2004 23:33, Arnt Karlsen [EMAIL PROTECTED] wrote:
On Fri, 15 Oct 2004 03:19, Arnt Karlsen [EMAIL PROTECTED] wrote:
  Increasing the number of machines increases the probability
  of one machine failing for any given time period.  Also it
  makes it more difficult to debug problems as you can't
  always be certain of which machine was involved.

 ..very true, even for aero engines.  The reason the airlines
 like 2, 3 or even 4 rather than one jet.
   
You seem to have entirely misunderstood what I wrote.
  
   ..really?   Compare with your average automobile accident and
   see who has the more adequate safety philosophy.
  
  If one machine has a probability of failure of 0.1 over a particular
  time period then the probability of at least one machine failing if
  there are two servers in the cluster over that same time period is
  1-0.9*0.9 == 0.19.
 
 But do we really care about whether a machine fails? I'd rather say
 that what we want to minimize is the _service_ downtime.
 
 With one machine, the probability of the service being unavailable is
 0.1. With two machines it's equal to the probability of both machines
 failing at the same time, so it's 0.1*0.1 == 0.01, as long as the
 probabilities are independent (not sure if that's the right translation
 of the term).
 
 Or am I wrong in the first sentence?
 
 Otherwise, I'd say that the increase of availability is worth the
 additional debugging effort :-)

..email is a lot like Zeppeliner transportation: even if these services
stop, there is no loss other than propulsion, unlike with common jet
airliners promptly dropping outta the sky to ditch in the drink or on the
rocks, unless the aircrew manages to do another Gimli glide.

-- 
..med vennlig hilsen = with Kind Regards from Arnt... ;-)
...with a number of polar bear hunters in his ancestry...
  Scenarios always come in sets of three: 
  best case, worst case, and just in case.





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-15 Thread Russell Coker
On Thu, 14 Oct 2004 13:35, Lucas Albers [EMAIL PROTECTED] wrote:
  As long as the machine is fixed within four days of a problem we don't
  need
  more than one.  Email can be delayed, it's something you have to get used
  to.

 Machines are cheap enough, wouldn't it be reasonable to throw in
 redundancy? Unless having 2 machines adds unneccessary complexity to the
 setup.

Better to have one good machine than three cheap machines.  The more machines 
you have the greater the chance that one of them will break.

 Sometimes I don't even realize one of the external relays is broken for a
 day...(even though the monitoring tools should tell you.)

Which is another good reason for not having such redundant servers.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-15 Thread Russell Coker
On Thu, 14 Oct 2004 23:35, martin f krafft [EMAIL PROTECTED] wrote:
 also sprach Henrique de Moraes Holschuh [EMAIL PROTECTED] [2004.10.14.1525 
+0200]:
  Or we can do it in two, with capacity to spare AND no downtime.

 I would definitely vote for two systems, but for high-availability,
 not load-sharing. Unless we use a NAS or similar in the backend with
 Maildirs to avoid locking problems. Then again, that's definitely
 overkill...

A NAS in the back-end should not be expected to increase reliability.  Every 
time you increase the complexity of the system you should expect to decrease 
reliability.

KISS!

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-15 Thread Henrique de Moraes Holschuh
On Fri, 15 Oct 2004, Russell Coker wrote:
 You seem to have entirely misunderstood what I wrote.

And I think I had misunderstood you, but your message cleared things up...

 Having four engines on a jet rather than two or three should not be expected 
 to give any increase in reliability.  Having two instead of one (and having 
 two fuel tanks etc) does provide a significant benefit.
[...]

 With mail servers if you have a second server you have more work to maintain 
 it, more general failures, and you have no chance of saving anyone's life to 
 compensate.
 
 Finally consider that one of the main causes of server unreliability is 
 mistakes made during system maintenance.  Increase the amount of work 
 involved in running the systems and you increase the chance of problems.

In other words, your point is not that two MX are not more resilient to
failure, but rather that the work of administering them is not worth the
gain in resilience?

That would depend directly on what sort of downtime you want to tolerate on
the mail system.  IMHO, 4h downtimes in Debian's mail system are something to
avoid at any reasonable cost, and that would require either two active MX,
or a ready-to-deploy MX kept inside the closet (which is more dangerous
maintenance-wise than two live MX, IMO) for the worst-case scenario.

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-15 Thread martin f krafft
also sprach Henrique de Moraes Holschuh [EMAIL PROTECTED] [2004.10.15.1448 +0200]:
 Just to make it clear, I am advocating two *good* machines.

ENOSUCHTHING wrt it not failing.

  Which is another good reason for not having such redundant
  servers.
 
 Now, that is a bit too far.  The correct answer is to monitor the
 damn things.  And any sort of monitoring that would not catch
 a problem is not good enough.  A good enough reactive (as opposed
 to predictive) monitoring for email is rather easy to do (just
 send one directly to the MX, and freak if it does not send it back
 to you in a given time window).  
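
A minimal sketch of the reactive check described above -- mail a token
through the MX and alarm if it does not come back within the window.  The
MX name, probe address and mailbox path are hypothetical, and a strict
direct-to-MX test would speak SMTP to the MX itself rather than going
through the local MTA:

#!/bin/sh
# mail-loop probe: inject a token, wait, check that it was delivered back
MX=mx.example.org                        # hypothetical MX under test
PROBE=probe@example.org                  # hypothetical address that loops back
MAILBOX=/var/mail/probe                  # where the loop-back address delivers
TOKEN="mx-probe-$(date +%s)-$$"

echo "$TOKEN" | mail -s "$TOKEN" "$PROBE"
sleep 600                                # the time window: 10 minutes
if ! grep -q "$TOKEN" "$MAILBOX"; then
    echo "mail loop through $MX did not complete" | mail -s "MX ALARM" root
fi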

While I understand Russell's concerns, I think that we should have
a second machine to be able to swap in. If the primary ever goes
down, then the secondary must be able to take over, or else we will
have problems with the project. We cannot assume that the MX admin
will be able to fix the problem ASAP.

About backup MX... well, we can put them elsewhere. I run a couple
reliable MXs and could also serve as backup for Debian.

-- 
Please do not CC me when replying to lists; I read them!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer, admin, and user
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
Invalid/expired PGP subkeys? Use subkeys.pgp.net as keyserver!




Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-15 Thread martin f krafft
also sprach Henrique de Moraes Holschuh [EMAIL PROTECTED] [2004.10.15.1455 +0200]:
 In other words, your point is not that two MX are not more
 resilient to failure, but rather that the work of administering
 them is not worth the gain in resilience?

This is frequently a problem people do not (like to) see.

-- 
Please do not CC me when replying to lists; I read them!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer, admin, and user
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
Invalid/expired PGP subkeys? Use subkeys.pgp.net as keyserver!




Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-15 Thread Henrique de Moraes Holschuh
On Fri, 15 Oct 2004, martin f krafft wrote:
 also sprach Henrique de Moraes Holschuh [EMAIL PROTECTED] [2004.10.15.1448 +0200]:
  Just to make it clear, I am advocating two *good* machines.
 
 ENOSUCHTHING wrt it not failing.

Nor did I intend to imply that they wouldn't fail :)

   Which is another good reason for not having such redundant
   servers.
  
  Now, that is a bit too far.  The correct answer is to monitor the
  damn things.  And any sort of monitoring that would not catch
 
 While I understand Russell's concerns, I think that we should have
 a second machine to be able to swap in. If the primary ever goes

And it better be live, or it gets way easier for it to fall out-of-sync
with what was done to the primary machine.

 About backup MX... well, we can put them elsewhere. I run a couple
 reliable MXs and could also serve as backup for Debian.

I'd rather we had no backup MX per se, but two equally-ranking ones (i.e.
two primary ones)... this is also because of how Debian system
administration is usually done, and because of the big drawbacks of
secondary MX that are not configured exactly as the primary ones...

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread A Mennucc
(reposting from private)
note that what I state below *is* possible, as pointed out 
by someone in d-private, by using a mke2fs option
$ mke2fs -J device=
(and of course you need to use a nonvolatile kind of ramdisk)
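
For reference, a rough sketch of what that option looks like in practice
(device names are hypothetical; the journal device must be nonvolatile, as
noted above, and generally has to use the same block size as the filesystem):

# create a dedicated external journal on the fast device
mke2fs -O journal_dev /dev/fastdev1
# create the ext3 filesystem with its journal on that device
mke2fs -j -J device=/dev/fastdev1 /dev/hda2
# an existing ext3 filesystem can be converted with tune2fs:
#   tune2fs -O ^has_journal /dev/hda2
#   tune2fs -j -J device=/dev/fastdev1 /dev/hda2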

- Forwarded message 

now that I come to think of it...
there would be a wonderful solution to the problem of journalling:
keeping the journal on another, faster device!
the best would be to have a filesystem that keeps the main data 
on hard disks (cheap but slow) and the complete journal (metadata AND 
data) on a RAM disk (or similar), which would be more expensive by 
the GB, but much, much faster.
This would be somewhat difficult to implement
in the kernel, and difficult to set up properly, but it may
be very effective. It would entail an /etc/fstab such as

/dev/hda1   /  ext3   defaults,journal=/dev/ramdisk/101
/dev/hda2   /usr   ext3   defaults,journal=/dev/ramdisk/201
...

where /dev/ramdisk/* are partitions in the ramdisk.

(Now that I think of it, this e-mail has nothing private in it  :-(
sorry for the abuse of d-p) 

a.

- End forwarded message -

-- 
Andrea Mennucc
 Ukn ow,Ifina llyfixe dmysp acebar.ohwh atthef





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Henrique de Moraes Holschuh
On Thu, 14 Oct 2004, Russell Coker wrote:
 On Thu, 14 Oct 2004 01:47, Henrique de Moraes Holschuh [EMAIL PROTECTED] wrote:
  On Wed, 13 Oct 2004, Russell Coker wrote:
   On Wed, 13 Oct 2004 07:29, Henrique de Moraes Holschuh [EMAIL PROTECTED] 
 wrote:
We have a lot of resources, why can't we invest some of them into a
small three or four machine cluster to handle all debian email (MLs
included),
  
   A four machine cluster can be used for the entire email needs of a
   500,000 user ISP.  I really doubt that we need so much hardware.
 
  Including the needed redundancy (two MX at least), and a mailing list
  processing facility that absolutely has to have AV and AntiSPAM measures at
  least on the level gluck has right now?
 
 The Debian email isn't that big.  We can do it all on a single machine 
 (including SpamAssassin etc) with capacity to spare.

Or we can do it in two, with capacity to spare AND no downtime.

 One machine should be able to do it with AV and antispam.  Four AV/antispam 
 machines can handle the load for an ISP with almost 1,500,000 users, one 
 should do for Debian.

That depends on how much delay you want to have when processing mail. It'd
be nice to know how many messages/minute @d.o and gluck receive, to stop
guessing, though.

  But we really should have two of them (in 
  different backbones), with the same priority as MX.
 
 Why?

No downtime.  Easy maintenance.  Redundancy when we have network problems
(these are rare, thank god).

  It would be nice to 
  have a third MTA with less priority and heavier anti-spam machinery
  installed.
 
 Bad idea.

Ok.

   OK, having a single dedicated mail server instead of a general machine
   like master makes sense.
 
  Two so that we have some redundancy, please. IMHO email is important enough
  in Debian to deserve two full MX boxes (that never forward to one another).
 
 As long as the machine is fixed within four days of a problem we don't need 
 more than one.  Email can be delayed, it's something you have to get used to.

And while that email is being delayed, our work suffers, and there could
even be security concerns.  Developer time IS an important resource;
I don't think we should be wasting it because we don't want to have a second
MX.  Would you set up a mail system for any ISP (including small, 1000-user
ones) with only one MX?

 We don't need high-end hardware.  Debian's email requirements are nothing 
 compared to any serious ISP.

True.  But we don't need cheap-ass, will-break hardware either.  Debian's
admin requirements are different. The less on-site intervention needed, the
better.

   http://www.umem.com/16GB_Battery_Backed_PCI_NVRAM.html
 
  How much?  It certainly looks very good.
 
 If you want to buy one then you have to apply for a quote.

I.e.: quite expensive. argh.  It is very nice to know about it, though.

   I've run an ISP with more than 1,000,000 users with LDAP used for the
   back-end.  The way it worked was that mail came to front-end servers
   which did LDAP lookups to determine which back-end server to deliver to. 
   The
 
  I meant LDAP being used for the MTA routing and rewriting. That's far
  more than one lookup per mail message :(
 
 Yes, I've done all that too.  It's really no big deal.  Lots of Debian 
 developers have run servers that make all Debian's servers look like toys by 
 comparison.

So do I.  And I can tell you that I saw a lot of improvement, on the order of 
_minutes_, when big mass-delivery mail hit (thousands of recipients, every one 
of them causing postfix to generate a minimum of 4 LDAP searches, due to the 
way the LDAP maps were required to be deployed and the way postfix map lookups 
happen).  Moving that to a hash DB sped things up considerably.

And our requirements for the LDAP cluster went down a lot too, so it was all
benefits without a single drawback.

  Well, we are talking MTA and not mail stores.  The LDAP workload on an MTA
  is usually quite different from the one in a mail store.
 
 Yes, it should be less load because you don't have POP or IMAP checks.

Try the other way around...  Not all MTA setups do a single LDAP lookup per
recipient... of course, if @d.o requires only one lookup, then we don't need
to worry, but...

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread martin f krafft
also sprach Henrique de Moraes Holschuh [EMAIL PROTECTED] [2004.10.14.1525 +0200]:
 Or we can do it in two, with capacity to spare AND no downtime.

I would definitely vote for two systems, but for high-availability,
not load-sharing. Unless we use a NAS or similar in the backend with
Maildirs to avoid locking problems. Then again, that's definitely
overkill...

-- 
Please do not CC me when replying to lists; I read them!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer, admin, and user
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
Invalid/expired PGP subkeys? Use subkeys.pgp.net as keyserver!




Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Arnt Karlsen
On Fri, 15 Oct 2004 00:36:07 +1000, Russell wrote in message 
[EMAIL PROTECTED]:

 On Thu, 14 Oct 2004 23:25, Henrique de Moraes Holschuh
 [EMAIL PROTECTED] wrote:
   The Debian email isn't that big.  We can do it all on a single
   machine (including SpamAssassin etc) with capacity to spare.
 
  Or we can do it in two, with capacity to spare AND no downtime.
 
 Increasing the number of machines increases the probability of one
 machine failing for any given time period.  Also it makes it more
 difficult to debug problems as you can't always be certain of which
 machine was involved.

..very true, even for aero engines.  The reason the airlines like 
2, 3 or even 4 rather than one jet.

-- 
..med vennlig hilsen = with Kind Regards from Arnt... ;-)
...with a number of polar bear hunters in his ancestry...
  Scenarios always come in sets of three: 
  best case, worst case, and just in case.





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Henrique de Moraes Holschuh
On Thu, 14 Oct 2004, martin f krafft wrote:
 also sprach Henrique de Moraes Holschuh [EMAIL PROTECTED] [2004.10.14.1525 +0200]:
  Or we can do it in two, with capacity to spare AND no downtime.
 
 I would definitely vote for two systems, but for high-availability,

Two MX records should be enough to do that, no need for more complicated
setups using heartbeats and so on.

 not load-sharing. Unless we use a NAS or similar in the backend with

The idea is redundancy, not load-sharing. But they will load-share if you
add two MX records.

And remember, these are strictly MTAs, they never deliver anything locally,
there is no MDA involved at all.  It is E?SMTP-in, ESMTP-out.

The MDAs can continue in master.d.o or any other Debian machine.
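
As a sketch, two equally-ranked MX records are just two entries with the
same preference value (hostnames and addresses below are hypothetical):

; senders pick either host, giving redundancy and rough load-sharing
; without any heartbeat machinery
example.org.   IN  MX  10  mx1.example.org.
example.org.   IN  MX  10  mx2.example.org.
mx1            IN  A   192.0.2.10
mx2            IN  A   192.0.2.20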

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Russell Coker
On Wed, 13 Oct 2004 21:26, Wouter Verhelst [EMAIL PROTECTED] wrote:
 On Wed, Oct 13, 2004 at 01:05:26PM +1000, Russell Coker wrote:
  On Wed, 13 Oct 2004 07:29, Henrique de Moraes Holschuh [EMAIL PROTECTED] 
wrote:
   The third is to not use LDAP for lookups, but rather cache them all in
   a local, extremely fast DB (I hope we are already doing that!).  That
   alone could get us a big speed increase on address resolution and
   rewriting, depending on how the MTA is configured.
 
  I've run an ISP with more than 1,000,000 users with LDAP used for the
  back-end.

 Yes, but that was probably with the LDAP servers and the mail servers
 being in the same data center, or at least with a local replication.

Yes.  Local replication is not difficult to setup.

 This is not the case for Debian; and yes, we already do have local fast
 DB caches (using libnss-db).

That's an entirely different issue.  libnss-db is just for faster access 
to /etc/passwd.  The implementation in Linux is fairly poor, however; it 
doesn't even stat /etc/passwd to see if it's newer than the db.  The 
performance gain isn't as good as you would expect either.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread martin f krafft
also sprach Henrique de Moraes Holschuh [EMAIL PROTECTED] [2004.10.12.2329 +0200]:
 We have a lot of resources, why can't we invest some of them into
 a small three or four machine cluster to handle all debian email
 (MLs included), and tune the entire thing for the ground up just
 for that? And use it *only* for that?

I agree. And I would offer my time to assist. I do have quite some
experience with mail administration.

 and tune it for spool work (IMHO a properly tuned ext3 would be
 best, as XFS has data integrity issues on crashes even if it is
 faster (and maybe the not-even-data=ordered XFS way of life IS the
 reason it is so fast). I don't know about ReiserFS 3, and ReiserFS
 4 is too new to trust IMHO).

This does not belong here, but you misunderstand XFS. It does not
have data integrity issues on crashes; all other JFS's do. XFS takes
a somewhat rigorous approach, but it makes perfect sense. When there
is a crash, journaling filesystems primarily ensure the consistency
of the meta data. XFS does so perfectly. The problems you raise
relate to the infamous zeroing of files, I assume. Well, no
performant filesystem can ensure the consistency of the file
content, and rather than trying heuristically to reconnect sectors
with inodes after a crash, XFS zeroes all the data over which it is
unsure. I think this is important, or else you may one day find
/etc/passwd connected to the /etc/login.defs inode.

I say performant filesystems in the above because I do not see
ext3/journal as a performant filesystem. Nevertheless, it is a very
mature filesystem (already!) and works well for a mail spool, though
I suggest synchronous writes (chattr +S). That said, I find any
filesystem that requires a recheck of its metadata every X mounts to
be fundamentally flawed -- did the authors assume it would
accumulate inconsistencies, or what is the real reason here?

That said, I am using XFS effectively, successfully, and happily on
all the mail spools I administer. For critical servers, I mount it
with 'wsync', which effectively makes sure that I never lose mail,
but which also brings about a 250% performance impact (based on some
rudimentary tests, and assuming the worst case). I can
suggest XFS confidently.
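
For illustration, the two settings mentioned above look roughly like this
(device and spool paths are hypothetical):

# XFS spool mounted with synchronous metadata updates (fstab line):
/dev/sdb1   /var/spool/mail   xfs   rw,wsync   0 2

# ext3 spool: synchronous writes on the spool, applied recursively
# (newly created files may still need the flag set, e.g. from cron)
chattr -R +S /var/spool/mail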

 The third is to not use LDAP for lookups, but rather cache them
 all in a local, extremely fast DB (I hope we are already doing
 that!).  That alone could get us a big speed increase on address
 resolution and rewriting, depending on how the MTA is configured.

The way we do it here is to use a local LDAP server which syncs
with the external one. Using an external LDAP server is definitely a no-go
because of the SSL and TCP overheads.

I have had much success using PostgreSQL, both for direct use
and for dumping postfix Berkeley DB files from its data at regular
intervals when the user data does not change every couple of
minutes. Berkeley DB is definitely the fastest, IME.
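
A rough sketch of that kind of periodic dump (database, table and map names
are made up; the point is only that postfix then reads a local hash map
instead of hitting the database for every lookup):

#!/bin/sh
# dump virtual addresses from PostgreSQL into a postfix hash map
psql -At -F' ' -d mail \
    -c 'SELECT address, destination FROM virtual_users' \
    > /etc/postfix/virtual.new &&
mv /etc/postfix/virtual.new /etc/postfix/virtual &&
postmap hash:/etc/postfix/virtual        # rebuilds /etc/postfix/virtual.db

# main.cf then points at the local map:
#   virtual_alias_maps = hash:/etc/postfix/virtual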

 Others in here are surely even better experienced than me in this
 area, and I am told exim can be *extremely* fast for mail HUBs.
 Why can't we work to have an email infrastructure that can do 40
 messages/s sustained?

postfix does this here on a Dual Itanium 2GHz with 2 GB of RAM and
an XFS filesystem, 2.6.8.1 and Debian sarge. The mail spool is on
a software RAID 1, the machine also does Amavis/F-prot mail scanning
and it rarely ever breaks a sweat. At peaks, we measure about 40
mails/second.

-- 
Please do not CC me when replying to lists; I read them!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer, admin, and user
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
Invalid/expired PGP subkeys? Use subkeys.pgp.net as keyserver!




Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Wouter Verhelst
On Wed, Oct 13, 2004 at 01:05:26PM +1000, Russell Coker wrote:
 On Wed, 13 Oct 2004 07:29, Henrique de Moraes Holschuh [EMAIL PROTECTED] wrote:
  The third is to not use LDAP for lookups, but rather cache them all in a
  local, extremely fast DB (I hope we are already doing that!).  That alone
  could get us a big speed increase on address resolution and rewriting,
  depending on how the MTA is configured.
 
 I've run an ISP with more than 1,000,000 users with LDAP used for the 
 back-end.

Yes, but that was probably with the LDAP servers and the mail servers
being in the same data center, or at least with a local replication.

This is not the case for Debian; and yes, we already do have local fast
DB caches (using libnss-db).

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Russell Coker
On Wed, 13 Oct 2004 07:29, Henrique de Moraes Holschuh [EMAIL PROTECTED] wrote:
 We have a lot of resources, why can't we invest some of them into a small
 three or four machine cluster to handle all debian email (MLs included),

A four machine cluster can be used for the entire email needs of a 500,000 
user ISP.  I really doubt that we need so much hardware.

 and tune the entire thing for the ground up just for that? And use it
 *only* for that?  That would be enough for two MX, one ML expander and one
 extra machine for whatever else we need. Maybe more, but from two (master +
 murphy) two four optimized and exclusive-for-email machines should be a
 good start :)

I think that front-end MX machines are a bad idea in this environment.  It 
means that more work is required to correctly give 55x codes in response to 
non-existent recipients (vitally important for list servers which will 
receive huge volumes of mail to [EMAIL PROTECTED] and which should not 
generate bounces for it).
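
As an illustration of the extra work involved: a relay-only front end can
still give 55x at SMTP time if it carries a full list of valid recipients
for the domains it relays.  A postfix sketch (file names hypothetical; exim,
which the debian.org machines use, can do the equivalent with recipient
verification in its ACLs):

# main.cf on a front-end MX that only relays
relay_domains        = example.org
relay_recipient_maps = hash:/etc/postfix/relay_recipients
# /etc/postfix/relay_recipients lists every deliverable address, e.g.
#   someuser@example.org   OK
# anything not listed is rejected with a 550 at SMTP time instead of
# being accepted and bounced later.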

We don't have the performance requirements that would require front-end MX 
machines.

 collaborative work needs the MLs in tip-top shape, or it suffers a LOT. Way,
 way too many developers use @debian.org as their primary Debian contact
 address (usually the ONLY well-advertised one), and get out of the loop
 every time master.d.o croaks.

OK, having a single dedicated mail server instead of a general machine like 
master makes sense.

 One of the obvious things that come to mind is that we should have MX
 machines with very high disk throughput, of the kinds we need RAID 0 on top
 of RAID 1 to get.  Proper HW RAID (defined as something as good as the
 Intel SCRU42X fully-fitted) would help, but even LVM+MD allied to proper
 SCSI U320 hardware would give us more than 120MB/s read throughput (I have
 done that).

U320 is not required.  I don't believe that you can demonstrate any 
performance difference between U160 and U320 for mail server use if you have 
less than 10 disks on a cable.  Having large numbers of disks on a cable 
brings other issues, so I recommend a scheme that has only a single disk per 
cable (S-ATA or Serial Attached SCSI).

RAID-0 on top of RAID-1 should not be required either.  Hardware RAID-5 with an 
NV-RAM log device should give all the performance that you require.

You will NEVER see 120MB/s read throughput on a properly configured mail 
server that serves data for less than about 10,000,000 users!  When I was 
running the servers for 1,000,000 users there was a total of about 3MB/s 
(combined read and write) on each of the five back-end servers.  That is a 
total of 15MB/s while each server had 4 * U160-15K disks (a total of 20 
U160-15K disks).  The bottlenecks were all on seeks; nothing else mattered.

 Maybe *external* journals on the performance-critical filesystems would
 help (although data=journal makes that a *big* maybe for the spools, the
 logging on /var always benefit from an external journal). And in that case,
 we'd need obviously two IO-independent RAID arrays. That means at least 6
 discs, but all of them can be small disks.

http://www.umem.com/16GB_Battery_Backed_PCI_NVRAM.html

If you want to use external journals then use a umem device for it.  The above 
URL advertises NV-RAM devices with capacities up to 16G which run at 64bit 
66MHz PCI speed.  Such a device takes less space inside a PC than real disks, 
produces less noise, has no moving parts (good for reliability) and has ZERO 
seek time as well as massive throughput.

Put /var/spool on that as well as the external journal for the mail store and 
your mail server should be decently fast!

 The other is to use a filesystem that copes very well with power failures,
 and tune it for spool work (IMHO a properly tuned ext3 would be best, as
 XFS has data integrity issues on crashes even if it is faster (and maybe
 the not-even-data=ordered XFS way of life IS the reason it is so fast). I
 don't know about ReiserFS 3, and ReiserFS 4 is too new to trust IMHO).

reiserfsck has a long history of not being able to fix all possible errors.  A 
corrupted ReiserFS file system can cause a kernel oops, and this isn't 
considered by its developers to be a serious issue.

ext3 is the safe bet for most Linux use.  It is popular enough that you can 
reasonably expect that bugs get found by someone else first, and the 
developers have a good attitude towards what is a file system bug.

 The third is to not use LDAP for lookups, but rather cache them all in a
 local, extremely fast DB (I hope we are already doing that!).  That alone
 could get us a big speed increase on address resolution and rewriting,
 depending on how the MTA is configured.

I've run an ISP with more than 1,000,000 users with LDAP used for the 
back-end.  The way it worked was that mail came to front-end servers which 
did LDAP lookups to determine which back-end server to deliver to.  The 
back-end servers did LDAP lookups to determine the directory to put the mail 
in.  When users checked mail via POP or IMAP the back-end server had Courier 
POP or IMAP do another LDAP lookup.  It worked fine with about 5 LDAP servers 
for 1,000,000 users.

Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Wouter Verhelst
On Wed, Oct 13, 2004 at 11:01:42PM +1000, Russell Coker wrote:
 On Wed, 13 Oct 2004 21:26, Wouter Verhelst [EMAIL PROTECTED] wrote:
  On Wed, Oct 13, 2004 at 01:05:26PM +1000, Russell Coker wrote:
   On Wed, 13 Oct 2004 07:29, Henrique de Moraes Holschuh [EMAIL PROTECTED] 
 wrote:
The third is to not use LDAP for lookups, but rather cache them all in
a local, extremely fast DB (I hope we are already doing that!).  That
alone could get us a big speed increase on address resolution and
rewriting, depending on how the MTA is configured.
  
   I've run an ISP with more than 1,000,000 users with LDAP used for the
   back-end.
 
  Yes, but that was probably with the LDAP servers and the mail servers
  being in the same data center, or at least with a local replication.
 
 Yes.  Local replication is not difficult to setup.

Obviously not.

  This is not the case for Debian; and yes, we already do have local fast
  DB caches (using libnss-db).
 
 That's an entirely different issue.

No, it's not, not in this case anyway.

 libnss-db is just for faster access to /etc/passwd.

You are mistaken. In the FreeBSD implementation, it is; however, the
Linux implementation allows other things to be done with it.

For instance, my /etc/default/libnss-db contains the following lines:

ETC = /root/stage
DBS = passwd group shadow

I also have a script which creates (incomplete (as in, without system
users)) files /root/stage/{passwd,shadow,group} containing just the user
and group records that are in LDAP. Next, /etc/nsswitch.conf contains
the following:

passwd: db compat
group:  db compat
shadow: db compat

/etc/passwd contains only system users and root; /var/lib/misc/passwd.db
contains my 'local' users (the ones that are in LDAP), and they can both
be queried via getent(1) or other mechanisms. Same goes, of course, for
group and shadow.

I did this after checking out how Debian does things -- although I'm
using my own (Perl) implementation, rather than the python one which is
used by Debian, called 'userdir-ldap'.
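
The generation script itself is not shown above (the real one is Perl); a
rough shell equivalent of what it does might look like the sketch below.
The LDAP host, base DN and attribute layout are hypothetical, and shadow
handling is omitted:

#!/bin/sh
# rebuild the staged passwd file from LDAP; libnss-db's Makefile then
# turns the staged files into /var/lib/misc/*.db
set -e
STAGE=/root/stage

ldapsearch -x -LLL -H ldap://ldap.example.org \
    -b ou=People,dc=example,dc=org '(objectClass=posixAccount)' \
    uid uidNumber gidNumber gecos homeDirectory loginShell |
awk 'BEGIN { RS=""; FS="\n"; OFS=":" }
{
    split("", a)
    for (i = 1; i <= NF; i++) { split($i, kv, ": "); a[kv[1]] = kv[2] }
    if (a["uid"] != "")
        print a["uid"], "x", a["uidNumber"], a["gidNumber"],
              a["gecos"], a["homeDirectory"], a["loginShell"]
}' > "$STAGE/passwd.new"

# only replace the staged file if the dump is non-empty (cf. the empty-file
# safety net mentioned below), so a dead LDAP server does not wipe the maps
if [ -s "$STAGE/passwd.new" ]; then
    mv "$STAGE/passwd.new" "$STAGE/passwd"
fi
# the libnss-db cron job (or make -C /var/lib/misc) then rebuilds passwd.db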

 The implementation in Linux is fairly poor, however; it doesn't even
 stat /etc/passwd to see if it's newer than the db.

That's a feature, not a bug. Unless you want it to check 'the passwd
file' as it is defined in /etc/default/libnss-db (or another
configuration file), in which case it would indeed be a good idea.

 The performance gain isn't as good as you would expect either.

Been there, done that.

IME, doing this kind of thing is *way* faster than using libnss-ldap.
When I first installed LDAP, I tried using libnss-ldap. It worked; but
on my network, it was quite slow. Since the goal was to have a
centralized user database, rather than to have a fully configured LDAP
environment, I didn't want to install a slapd on each and every host.

So, I first tried the nscd solution to speed up performance. That worked
(kinda), but it still wasn't ideal. When I found out about the way
userdir-ldap handles things, I tried that out; and immediately liked it.

An added bonus is that the libnss-db Makefile will not update the .db
files if the original ones are empty; so if the LDAP daemon dies or is
unavailable for some reason, my users can still login, even after the
next time the cronjob runs. This is not the case with libnss-ldap, AIUI.

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune




Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Henrique de Moraes Holschuh
On Tue, 12 Oct 2004, Will Newton wrote:
 On Tuesday 12 Oct 2004 22:29, Henrique de Moraes Holschuh wrote:
 
  The other is to use a filesystem that copes very well with power failures,
  and tune it for spool work (IMHO a properly tuned ext3 would be best, as
  XFS has data integrity issues on crashes even if it is faster (and maybe
  the not-even-data=ordered XFS way of life IS the reason it is so fast). I
  don't know about ReiserFS 3, and ReiserFS 4 is too new to trust IMHO).
 
 I'm not familiar with XFS in a mail server situation, but I am currently working on 
 a PVR box that uses XFS and it gets the plug pulled on it as a daily 
 occurrence. I have found XFS to be fast and solid even in this situation.

The metadata will not get corrupted. But search your files for big blocks of
NULLs...

I have seen XFS do that on 2.4.x kernels (including X=27 and X=28-pre3),
in different machines.

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Henrique de Moraes Holschuh
On Wed, 13 Oct 2004, Wouter Verhelst wrote:
 On Wed, Oct 13, 2004 at 01:05:26PM +1000, Russell Coker wrote:
  On Wed, 13 Oct 2004 07:29, Henrique de Moraes Holschuh [EMAIL PROTECTED] wrote:
   The third is to not use LDAP for lookups, but rather cache them all in a
   local, extremely fast DB (I hope we are already doing that!).  That alone
   could get us a big speed increase on address resolution and rewriting,
   depending on how the MTA is configured.
  
  I've run an ISP with more than 1,000,000 users with LDAP used for the 
  back-end.
 
 Yes, but that was probably with the LDAP servers and the mail servers
 being in the same data center, or at least with a local replication.
 
 This is not the case for Debian; and yes, we already do have local fast
 DB caches (using libnss-db).

Useless unless the MTA does all LDAP lookups through libc, in which case it
doesn't even know it is using LDAP.  For postfix, that would be non-optimal;
I have no idea about Exim.

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-13 Thread Russell Coker
On Wed, 13 Oct 2004 20:42, Steinar H. Gunderson [EMAIL PROTECTED] 
wrote:
 On Wed, Oct 13, 2004 at 01:05:26PM +1000, Russell Coker wrote:
  http://www.umem.com/16GB_Battery_Backed_PCI_NVRAM.html
 
  If you want to use external journals then use a umem device for it.  The
  above URL advertises NV-RAM devices with capacities up to 16G which run
  at 64bit 66MHz PCI speed.  Such a device takes less space inside a PC
  than real disks, produces less noise, has no moving parts (good for
  reliability) and has ZERO seek time as well as massive throughput.

 Out of curiosity; approximately how much does such a thing cost? I can't
 find prices on it anywhere.

Last time I got a quote, the high-end model had 1G of storage and was 33MHz 
32bit PCI.  It cost around $700US.  I would expect that the high-end model 
costs around that price nowadays, but if you want one you'll just have to 
apply for a quote.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page





Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-13 Thread Russell Coker
On Thu, 14 Oct 2004 01:47, Henrique de Moraes Holschuh [EMAIL PROTECTED] wrote:
 On Wed, 13 Oct 2004, Russell Coker wrote:
  On Wed, 13 Oct 2004 07:29, Henrique de Moraes Holschuh [EMAIL PROTECTED] 
wrote:
   We have a lot of resources, why can't we invest some of them into a
   small three or four machine cluster to handle all debian email (MLs
   included),
 
  A four machine cluster can be used for the entire email needs of a
  500,000 user ISP.  I really doubt that we need so much hardware.

 Including the needed redundancy (two MX at least), and a mailing list
 processing facility that absolutely has to have AV and AntiSPAM measures at
 least on the level gluck has right now?

The Debian email isn't that big.  We can do it all on a single machine 
(including SpamAssassin etc) with capacity to spare.

 Yes, one machine that is just a MTA, without AV or Antispam should be able
 to push enough mail for @d.o.

One machine should be able to do it with AV and antispam.  Four AV/antispam 
machines can handle the load for an ISP with almost 1,500,000 users, one 
should do for Debian.

 But we really should have two of them (in 
 different backbones), with the same priority as MX.

Why?

 It would be nice to 
 have a third MTA with less priority and heavier anti-spam machinery
 installed.

Bad idea.

  OK, having a single dedicated mail server instead of a general machine
  like master makes sense.

 Two so that we have some redundancy, please. IMHO email is important enough
 in Debian to deserve two full MX boxes (that never forward to one another).

As long as the machine is fixed within four days of a problem we don't need 
more than one.  Email can be delayed, it's something you have to get used to.

  U320 is not required.  I don't believe that you can demonstrate any

 Required? No. Nice to have given the hardware prices available, probably.
 If the price difference is that big, U160 is more than enough.  But
 top-notch RAID hardware nowadays is always U320, so unless the hotswap U160
 enclosures and disks are that much cheaper...  and the price difference
 from a non top-notch HW RAID controller that is still really good, and a
 top-notch one is not that big.

We don't need high-end hardware.  Debian's email requirements are nothing 
compared to any serious ISP.

  http://www.umem.com/16GB_Battery_Backed_PCI_NVRAM.html

 How much?  It certainly looks very good.

If you want to buy one then you have to apply for a quote.

  I've run an ISP with more than 1,000,000 users with LDAP used for the
  back-end.  The way it worked was that mail came to front-end servers
  which did LDAP lookups to determine which back-end server to deliver to. 
  The

 I meant LDAP being used for the MTA routing and rewriting. That's far
 more than one lookup per mail message :(

Yes, I've done all that too.  It's really no big deal.  Lots of Debian 
developers have run servers that make all Debian's servers look like toys by 
comparison.

  back-end server had Courier POP or IMAP do another LDAP lookup.  It
  worked fine with about 5 LDAP servers for 1,000,000 users.

 Well, we are talking MTA and not mail stores.  The LDAP workload on an MTA
 is usually quite different from the one in a mail store.

Yes, it should be less load because you don't have POP or IMAP checks.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page

