Re: Keep auto-periodic fsck's enabled on ext3 partitions?

2005-01-06 Thread Russell Coker
On Thursday 06 January 2005 22:48, Wouter Verhelst <[EMAIL PROTECTED]> wrote:
> That is mostly relevant for systems that don't take regular backups. If
> you do (and for the sake of your customers, I hope that is the case),
> the extra precaution isn't really necessary, and probably a bad idea if
> the cost involved (in terms of downtime) is too high.

One thing that has been suggested is to use LVM and fsck a snapshot.  If fsck 
on a snapshot LV indicates that nothing other than journal replay is really 
needed then you can keep running.  If it finds some more serious problem then 
you can consider other options.
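Something like the following untested sketch should do it (the volume group, 
LV name, and snapshot size are made up for the example):

  # snapshot the LV holding the ext3 filesystem
  lvcreate --snapshot --size 2G --name mailsnap /dev/vg0/mail
  # check the snapshot read-only while the live filesystem keeps running
  e2fsck -fn /dev/vg0/mailsnap
  # discard the snapshot afterwards
  lvremove -f /dev/vg0/mailsnap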

I don't know of anyone actually implementing this, presumably because fsck is 
not yet painful enough to justify the effort.  It would be interesting to read 
reports from someone actually doing this in the field.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page





Re: Is gray-listing a one-shot anti-spam measure?

2004-12-27 Thread Russell Coker
On Friday 10 December 2004 21:31, Adrian von Bidder <[EMAIL PROTECTED]> 
wrote:
> > >As has already been suggested it would be good to be able to configure
> > > the number of messages that come through before the client IP is
> > > white-listed.
> >
> > But I think the
> > problem of this would be that initial messages would be even more
> > delayed, depending on the sending server, than they are with normal
> > one-shot greylisting.
>
> I think you misunderstand Russell.  He does, afaict, not want the initial
> message be rejected multiple times, but he wants to see several messages
> coming through, with normal greylisting in effect, before the IP is
> whitelisted for all email.

You are correct.  My desire is to increase the number of messages that must be 
successfully delivered before white-listing, not to increase the number of 
attempts that are needed to deliver a single message.

Also I would want to control the length of time that a white-list entry will 
remain if there is no appropriate traffic.  I think that a period of about a 
week of no traffic from that IP address is enough cause to remove the 
white-list entry.

The vast majority of email that I receive comes from a small set of IP 
addresses that send mail to me every day.  This includes the Debian list 
servers and other mailing lists.  A much smaller (but very significant) part 
of my email is from on-going discussions.  Sometimes I have email 
correspondence of 1-2 messages per day with one person for a period of a week 
or so, and often in those cases they use the same IP address to send all 
their email.

Finally an important part of my email is comprised of messages from people I 
know well, friends, relatives, and people I work with.  Assembling a 
permanent white-list of IP addresses that those people use would be 
reasonably easy.  Ideally the mail server would help in automating this by 
allowing me to white-list combinations of email address and IP address and 
then automatically remove them if mail stops from that address and starts 
coming from another.

We need a web-based front-end for managing these things so we can allow 
regular users to manage their white-list entries.




Re: EHLO/HELO [was blacklists]

2004-12-10 Thread Russell Coker
On Friday 10 December 2004 00:39, Mark Bucciarelli <[EMAIL PROTECTED]> 
wrote:
> I've recently turned on EHLO/HELO validation and am encouraged by how
> effective it is.  With RBLs (spamcop and dnsbl) and SpamAssassin 3, only
> 88% of spam was stopped.  So far, it's 100%.  (This is a _very_ small

What exactly do you mean by EHLO/HELO validation?

In my postfix configuration I have:
smtpd_helo_restrictions = permit_mynetworks, reject_invalid_hostname, 
reject_non_fqdn_hostname, reject_unknown_sender_domain

I tried out "reject_unknown_hostname" but had to turn it off; too many 
machines had unknown hostnames.

For example, a zone foo.com has an SMTP server named postfix1 which puts 
postfix1.foo.com in the EHLO command but has an external DNS entry of 
smtp.foo.com.  Such a zone is moderately well configured, and there are too 
many zones like that to block them all.  The other helo restrictions already 
catch enough non-spam traffic as it is.

Using reject_unknown_hostname would get close to blocking 100% of spam, but 
that's because it would block huge amounts of non-spam email.
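One way to measure that without actually losing mail is Postfix's 
warn_if_reject prefix, which (as I understand it) only logs what would have 
been rejected, e.g.:

  smtpd_helo_restrictions = permit_mynetworks, reject_invalid_hostname,
      reject_non_fqdn_hostname, reject_unknown_sender_domain,
      warn_if_reject reject_unknown_hostname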




Re: blacklists

2004-12-08 Thread Russell Coker
On Thursday 09 December 2004 01:12, Craig Sanders <[EMAIL PROTECTED]> wrote:
> the log file noise issue is important to me - i've recently started
> monitoring mail.log and adding iptables rules to block smtp connections
> from client IPs that commit various spammish-looking crimes against my
> system.  some crimes get blocked for 60 seconds, some for 10 minutes, some
> for an hour.  each time the same IP address is seen committing a crime, the
> time is doubled.  i am doing this not because i'm worried that spammers
> will get their junk through my anti-spam rules but because a) i don't want
> their noise in my mail.log, and b) it was an interesting programming
> project that amused me for a few days of part time perl hacking.

Interesting.  Do you plan to package it for Debian?
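For anyone wondering what such a temporary block looks like at the iptables 
level, here is a rough sketch of the idea (my illustration, not Craig's actual 
code; the expiry via at(1) and the address are assumptions):

  # block SMTP from a misbehaving client for 10 minutes
  iptables -I INPUT -s 192.0.2.45 -p tcp --dport 25 -j DROP
  echo "iptables -D INPUT -s 192.0.2.45 -p tcp --dport 25 -j DROP" \
    | at now + 10 minutes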




Re: blacklists

2004-12-08 Thread Russell Coker
On Wednesday 08 December 2004 20:16, Craig Sanders <[EMAIL PROTECTED]> wrote:
> > Craig, why do you think it's undesirable to do so?
>
> because i dont want the extra retry traffic.  i want spammers to take FOAD
> as an answer, and i dont want to welcome them with a pleasant "please try
> again later" message.  i think it is a sin to be polite or pleasant to a
> spammer :)

I agree that we don't want to be nice to spammers.  But there is also the 
issue of being nice in the case of false-positives.

The extra traffic shouldn't be that great (the message body and headers are 
not being transmitted).  When a legit user accidentally gets into a 
black-list their request to get the black-list adjusted can often be 
processed within the time that their mail server is re-trying the message.

> even on my little home system, at the end of an adsl line, i reject nearly
> 10,000 spams per day (and climbing all the time).  i would expect that to
> at least double or triple if i 4xx-ed them rather than 5xx, depending on
> how much came from open relays or spamhaus rather than dynamic/DUL.

30,000 rejections per day is only one every three seconds.  Not a huge load.

I am not trying to convince you to change your system (I'm not entirely 
convinced to change mine at this time).




Re: blacklists

2004-12-08 Thread Russell Coker
On Wednesday 08 December 2004 20:32, daniele becchi 
<[EMAIL PROTECTED]> wrote:
> > Odd, since we don't see this.  And when it does happen to 'big' mail
> > senders it's never AOL for one (they're on the whitelist).  And it's
> > totally automatic so if they do end up on it's usually for less than a
> > day.
>
> And how to deal with legitimate email sent via webmail (eg. yahoo) where
> the IP of the sender is inside a RBL, typicall for dsl or dialup
> ip-classes? This is part of the headers from a mail i received:

Yahoo server IP address space should not be in a dialup class.  If that 
happens then notify the person maintaining the dialup-list that you use that 
they have an inaccuracy.

>  Mon, 29 Nov 2004 19:12:38 +0100 (CET)
> Received: from web60309.mail.yahoo.com (web60309.mail.yahoo.com
> [216.109.118.120]) by -snip- (Postfix) with SMTP id 474D8249E74
>  for ; Mon, 29 Nov 2004 19:12:38 +0100 (CET)
> Received: (qmail 47653 invoked by uid 60001); 29 Nov 2004 18:12:36 -
> Message-ID: <[EMAIL PROTECTED]>
> Received: from [217.226.195.183] by web60309.mail.yahoo.com via HTTP; Mon,
> 29 Nov 2004 19:12:36 CET Content-Type: text/plain; charset=iso-8859-1
> Content-Transfer-Encoding: 8bit
> X-Spam-Status: No, hits=4.2 tagged_above=0.0 required=4.5 tests=BAYES_50,
>  RCVD_IN_DSBL, RCVD_IN_NJABL, RCVD_IN_SORBS,
>  RCVD_IN_SORBS_HTTP, RCVD_IN_SORBS_SOCKS
>
> if i had used rbl checks in postfix instead of spamassassin i would
> never receive that mail, right?
> the tracked ip is of course 217.226.195.186 and not the yahoo ip
> 216.109.118.120.
> Or i didn't understand? :(

Most people use DNSBLs for the IP address that's the source of the port 25 
connection and don't use them on the addresses in the Received headers.  Such 
use won't have a problem with this.
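In Postfix terms that means checking only the connecting client address, 
along the lines of the following sketch (the DNSBLs are just examples):

  smtpd_client_restrictions = permit_mynetworks,
      reject_rbl_client bl.spamcop.net,
      reject_rbl_client list.dsbl.org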

Yahoo don't seem to police their TOS well so they tend to get on black-lists 
(among other things they don't even have a functional abuse address).  So if 
you want email from yahoo you probably have to white-list them anyway.




Re: blacklists

2004-12-08 Thread Russell Coker
On Wednesday 08 December 2004 09:55, Michael Loftis <[EMAIL PROTECTED]> 
wrote:
> I have to agree with that statement.  For us it suits our needs very well.
> I don't mind handling the extra retry traffic if it means legitimate mail
> on a 'grey/pink' host is just temporarily rejected or delayed while they
> clean up, in fact this is far more desirable for us.  Complaints of 'lost'
> mail went up when we were using permanent fatal codes as an experiment.
> Yes legitimate hosts get blacklisted, but legitimate hosts will retry, and
> if they don't well, it's their problem, not ours.  We're telling them 454
> listed on spamcop see URL of whatever (I'm obviously paraphrasing)

How would I configure Postfix to do this?
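Possibly it's just a matter of overriding the reject code that is returned 
for DNSBL hits; an untested guess (check postconf(5) for the exact parameter 
name and default):

  # return a temporary failure instead of a permanent one for DNSBL hits
  maps_rbl_reject_code = 450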

Craig, why do you think it's undesirable to do so?




Re: a couple of postfix questions

2004-12-08 Thread Russell Coker
On Wednesday 08 December 2004 19:18, "W.D.McKinney" <[EMAIL PROTECTED]> 
wrote:
> > Qmail is not in Debian.  Even the qmail-src package is no longer in
> > Debian. This makes it significantly more difficult to manage Qmail Debian
> > servers.
>
> Well if you don't like compiling from src, then head to
> http://smarden.org/pape/Debian/

It would be good if he could revive the qmail-src package in non-free.  Having 
lots of apt repositories listed in your server's configuration is not really 
what you want.

> > If you want a reliable server then it's a really good idea to stick with
> > software that's in the distribution whenever possible.  Preferrably use
> > one of the more common options too.  Postfix and Exim are both commonly
> > used in Debian, it's most likely that someone else will encounter bugs
> > before you do and they will be fixed before you upgrade.
>
> Hey, Adam is one of the best guys working with Debian. See
> http://www.linuxis.net for his personal biz. Heavy into qmail.
> He originally helped me get going.

Who is Adam?  Is he a DD?  If so then why doesn't he revive qmail-src?

> > > "Bloated" means overweight, non essential and not availble to chuck out
> > > the window up here.
> >
> > The way Debian generally works is that all the most commonly used
> > features are compiled in.  This means that the vast majority of users can
> > use binary packages.  Significant advantages are derived from this, there
> > are situations where minor changes in code (optimisation changes etc) can
> > cause programs to break.  Using the same binaries as a million other
> > people reduces the chance that you will be the one to first encounter a
> > bug.
>
> Yes, I understand but thanks. Typically this is a big help.

If you understand then why are you so desperate to chuck out features at the 
cost of using a less common system?

> > > "Rock Solid" means it's been so long long since we needed to make a
> > > change, it's easy to forget how.
> >
> > That's because changing Qmail is a PITA.
>
> So we didn't change, it just keeps purring.

Unless you want mail to unknown recipients to be rejected at the SMTP level, 
or one of the other features that are missing from Qmail.  Also, if you 
develop a patch for Qmail then there's no chance of Dan accepting it...




Re: a couple of postfix questions

2004-12-08 Thread Russell Coker
On Wednesday 08 December 2004 14:35, "W.D.McKinney" <[EMAIL PROTECTED]> 
wrote:
> Hmm, meaning Hotmail, Yahoo and others run three legged mules ? :-)

It's just a pity that hotmail and yahoo have so many users that it's 
inconvenient to block them entirely.

> No worries, this list is about Debian and we really like Debian. Not
> married to any MTA, just need some.

Qmail is not in Debian.  Even the qmail-src package is no longer in Debian.  
This makes it significantly more difficult to manage Qmail Debian servers.

If you want a reliable server then it's a really good idea to stick with 
software that's in the distribution whenever possible.  Preferably use one 
of the more common options too.  Postfix and Exim are both commonly used in 
Debian, so it's most likely that someone else will encounter bugs before you 
do and that they will be fixed before you upgrade.

> "Bloated" means overweight, non essential and not availble to chuck out
> the window up here.

The way Debian generally works is that all the most commonly used features are 
compiled in.  This means that the vast majority of users can use binary 
packages.  Significant advantages are derived from this: there are situations 
where minor changes in code (optimisation changes etc) can cause programs to 
break, and using the same binaries as a million other people reduces the 
chance that you will be the one to first encounter a bug.

Gentoo users like compiling everything specific to each installation.  They 
might get a few percent performance increase (but this is not guaranteed), 
but they will definitely have more problems with reliability.

> "Rock Solid" means it's been so long long since we needed to make a
> change, it's easy to forget how.

That's because changing Qmail is a PITA.




Re: blacklists

2004-12-06 Thread Russell Coker
On Monday 06 December 2004 19:34, Adrian 'Dagurashibanipal' von Bidder 
<[EMAIL PROTECTED]> wrote:
> Various AOL mailservers, the Debian mailservers, and other servers sending
> out lots of regular mail get listed in spamcop regularly, so my
> recommendation (and that of spamcop.net themselves, btw) is not to use
> bl.spamcop.net for blacklisting.  Use it in spamassassin to score points.


Received: from johnny.adanco.com (151.adsl.as8758.net [212.25.16.151])
 by murphy.debian.org (Postfix) with ESMTP id B42442DED6
 for <[EMAIL PROTECTED]>; Mon,  6 Dec 2004 02:34:01 -0600 (CST)
Received: from humphrey.adanco.local (humphrey.adanco.local [172.18.10.16])
 by johnny.adanco.com (Postfix) with ESMTP id 24E4B2C6D
 for <[EMAIL PROTECTED]>; Mon,  6 Dec 2004 09:34:01 +0100 (CET)

The Debian servers correctly preserve the Received: path, which Spamcop uses 
to assign blame to the correct server.  Above are the original Received: 
headers from your message to the list; if your message was reported to 
Spamcop then it would send a complaint to [EMAIL PROTECTED] about IP address 
212.25.16.151.

If your message was reported to spamcop it would not list a Debian server, it 
would list 212.25.16.151.

I doubt that Debian servers get listed regularly.  I use the spamcop DNSBL and 
it doesn't get in the way of Debian mailing lists.




Re: Is gray-listing a one-shot anti-spam measure?

2004-12-05 Thread Russell Coker
On Friday 03 December 2004 20:07, Adrian 'Dagurashibanipal' von Bidder 
<[EMAIL PROTECTED]> wrote:
> (And - this to Stephen Frost, I believe - there is a patch to postgrey
> which I will include in the next version, and I believe which will also be
> included in the next upstream, to whitelist a client IP as soon as one
> greylisted email came through.  So the load on legitimate mailservers will
> be even smaller.)

As has already been suggested it would be good to be able to configure the 
number of messages that come through before the client IP is white-listed.

Also it would be good to be able to configure the amount of time for which a 
white-list entry is valid.  What is a dedicated mail server today may be part 
of a dial-up IP address range next year...




Re: Is gray-listing a one-shot anti-spam measure?

2004-12-03 Thread Russell Coker
On Friday 03 December 2004 19:10, Henrique de Moraes Holschuh <[EMAIL PROTECTED]> 
wrote:
> > A delay of transmission means more time for the spamming IP address to be
> > added to black-lists.  So during the gray-list interval (currently 5
> > minutes
>
> True.  But in that case, we also need the greylisting period to be long
> enough for the blacklisting to happen, *and* we might need special
> provision on the spamtraps too.
>
> Assuming greylisting gets realy widespread (otherwise spammers would not be
> doing retries in the first place, I suppose), spamtraps might also have to
> do greylisting (or spammers could just stop delivering for non-greylisting
> sites, which is something quite weird to think about but...).  So we would
> need various levels of greylisting.

Running gray-listing (or pseudo-gray-listing, as it might never actually 
accept mail) on a spam-trap will be fine.  The Postfix implementation of 
gray-listing, postgrey, does not send its 450 code until after the RCPT TO:, 
which means that it knows what address the mail was being sent to, what 
address it was coming from, and of course the IP address.  In spite of having 
gray-listing permanently on it could still operate fully as a spam-trap.  
Sure, it's convenient for a spam-trap to actually collect the spam, but it's 
not strictly required.
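For reference, postgrey is normally hooked in as a policy service at the RCPT 
TO stage, roughly like this (the port is, as far as I recall, the Debian 
default; treat it as an assumption):

  smtpd_recipient_restrictions = permit_mynetworks,
      reject_unauth_destination,
      check_policy_service inet:127.0.0.1:10023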

If the spammer can send to a gray-listing site then it can send to a 
gray-listing spam-trap too.

> > Currently gray-listing can be used on it's own with no other anti-spam
> > measures and still do some good.  This situation will change.  But I
> > believe that in combination with other anti-spam measures it will still
> > offer considerable benefits even after spammers wake up to it's presence.
>
> You're probably right.  So please let me revise my point: greylisting by
> itself is a one-shot deal, let's use it while we can.  greylisting as a
> delay measure for blacklists to catch up before you deliver the email will
> continue working well (i.e. not an one-shot deal), IF the blacklists DO
> manage to catch up during the greylisting time AND we can keep them doing
> just that when greylisting gets very widely deployed (greylisting could
> interfere with the listing delays, after all).

The black-lists often beat the spam.

> Russell, how fast are the blacklists reacting to ongoing spam runs on the
> systems you pay attention to?  I don't have that data for mine :(

I'm not sure that it's possible for anyone other than a spammer to really know 
this.  Spamcop reacts quite fast and I suspect that often entries are added 
to the spamcop DNSBL during a spam run before it gets to me even without 
gray-listing.  Adding gray-listing (or other delays) increases the chance 
that someone else will report the spammer before the spam gets to me.

Of course this relies on some people not using gray-listing (so that they get 
the spam fast) and being active in reporting it.  Given the previous 
discussions it seems quite obvious that not everyone will implement it so we 
can probably rely on that.

> > Henrique, please don't take this as a flame.  I am writing to you because
> > you
>
> I didn't...

I'm glad to hear it.  I was also concerned that other readers might get the 
wrong idea.




Is gray-listing a one-shot anti-spam measure?

2004-12-02 Thread Russell Coker
http://www.atm.tut.fi/list-archive/debian-security/msg14351.html

Henrique recently stated the belief that gray-listing is a one-shot measure 
against spam (see the above URL) and that spammers would just re-write their 
bots to do two transmission runs with a delay in between.

I have been considering that point and have come to the conclusion that it may 
not be correct.

A delay of transmission means more time for the spamming IP address to be 
added to black-lists.  So during the gray-list interval (currently 5 minutes 
but may need to be increased to something longer such as 30 mins in future) 
the spammer keeps sending mail to other systems until they either hit a 
spam-trap address or they get reported to spamcop or some other black-list 
service.  Then when they get to their second attempt at sending to a system 
that uses gray-listing they are on a DNSBL or RHSBL listing and are not 
permitted to send.

Currently gray-listing can be used on its own with no other anti-spam 
measures and still do some good.  This situation will change.  But I believe 
that in combination with other anti-spam measures it will still offer 
considerable benefits even after spammers wake up to its presence.


Henrique, please don't take this as a flame.  I am writing to you because you 
best expressed a sentiment that others seem to share, and the debian-isp list 
is the best place for a discussion of the topic.




Re: Limiting User Commands

2004-11-22 Thread Russell Coker
On Wednesday 10 November 2004 21:49, "Ben Hutchings" 
<[EMAIL PROTECTED]> wrote:
> > I feel the need to learn something new today. How could the user replace
> > the root owned files in a directory that they own?
>
> By renaming or unlinking them.  Linux treats this as an operation on the
> directory, not the file, so it's controlled by the directory's permissions.

SE Linux has finer-grained access control.  So you can allow a user to have 
write access to their home directory but give ~/.bashrc etc a different type 
that permits only read, getattr, and execute access (but not write, append, 
unlink, link, rename, setattr, lock, ioctl, or create).
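Purely as an illustration (the type name is invented and this is the old 
monolithic policy syntax), the kind of allow rule involved looks like:

  # give the user domain read-only style access to its startup files
  allow user_t user_rc_file_t:file { read getattr execute };
  # note: no write, append, unlink, link, rename, setattr, lock, ioctl, create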

I periodically run SE Linux play machines set up in this manner.  I have some 
files in the root user's home directory that they can only read and execute, 
some that they can read and append to, and the default is full access to 
files in the home directory.  I'll have my play machine back online soon; see 
my web page for the details.




Re: apache & log files

2004-11-06 Thread Russell Coker
On Friday 05 November 2004 19:47, "Francesco P. Lovergine" 
<[EMAIL PROTECTED]> wrote:
> On Fri, Nov 05, 2004 at 01:35:28AM +1100, Russell Coker wrote:
> > My clftools package allows you to split and mangle the log files if you
> > have Apache configured for a single log file...
>
> Uhm, not found in current sid archive

Sorry, it's logtools.  It's been so long since I've worked on it that I'd 
forgotten the name.

It still works well though; I've got it processing all the web stats on the 
server that hosts my domain.




Re: apache & log files

2004-11-04 Thread Russell Coker
On Thursday 04 November 2004 09:11, Marek Podmaka <[EMAIL PROTECTED]> wrote:
>   I have apache 1.3 webserver hosting about 150 domains (more than 400
>   virtual hosts). Now I have separate error log for each domain

My clftools package allows you to split and mangle the log files if you have 
Apache configured for a single log file...




postfix mail routing

2004-11-02 Thread Russell Coker
I want to have Postfix route mail to two relays based on the sender.  If the 
sender is from domain1 then I want to use the relay that is authorised with 
SPF for domain1, if the sender is from domain2 then I want to use the relay 
that has SPF records for domain2.

Any ideas on how to do this?

Before anyone asks, for domain1 I don't control the DNS (can't add another IP 
address to the SPF record) and also don't control the outbound mail server 
(it will refuse to act as an outbound relay for mail from other domains).
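For the record, later Postfix releases (2.3 and up, well after this post) 
grew a sender_dependent_relayhost_maps parameter that does roughly this.  A 
sketch with made-up domain names (the exact lookup-key format should be 
checked against the documentation):

  # main.cf
  sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay

  # /etc/postfix/sender_relay (then run: postmap /etc/postfix/sender_relay)
  @domain1.example    [relay.domain1.example]
  @domain2.example    [relay.domain2.example]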




Re: dropping vs rejecting for non-existent services

2004-10-30 Thread Russell Coker
On Sat, 30 Oct 2004 19:12, martin f krafft <[EMAIL PROTECTED]> wrote:
> also sprach Russell Coker <[EMAIL PROTECTED]> [2004.10.30.1106 +0200]:
> > If you block with tcp-reset then not only will the person
> > connecting get a fast response, but someone who port scans you
> > won't know which ports don't have anything listening on them and
> > which ports are blocked by iptables.
>
> While it can be considered "kind" to let people know which ports are
> inaccessible, I always treat access to ports that I did not open for
> the public as an offence. Thus, I do not feel obliged to let the
> offender know that s/he is accessing an inaccessible port.

Which is why you want a TCP RST packet so that they don't know the port is 
being blocked by a firewall, just that the port is not available.

> As an added benefit, DROP obscures who is dropping. It could be the
> host or a firewall before it.  Now that I think of it, however, 
> a firewall would spoof the sending IP when rejecting with tcp-reset,
> right?

Yes.




Re: dropping vs rejecting for non-existent services

2004-10-30 Thread Russell Coker
On Sat, 30 Oct 2004 18:16, Leonardo Boselli <[EMAIL PROTECTED]> wrote:
> On some machines for which I can advise but do not have the final decision
> there are some non-existent services.

If you block with tcp-reset then not only will the person connecting get a 
fast response, but someone who port scans you won't know which ports don't 
have anything listening on them and which ports are blocked by iptables.
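With iptables that means the REJECT target with a TCP reset, e.g. (the port 
is purely illustrative):

  iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset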




Re: nscd: Was Re: long delays with LDAP nss/pam

2004-10-30 Thread Russell Coker
On Sat, 30 Oct 2004 12:47, "Donovan Baarda" <[EMAIL PROTECTED]> wrote:
> Seriously, does nscd really not correctly handle dns caching/expiry
> properly? I thought the dns caching stuff was well thought out and
> defined... not implementing it properly would be dumb.

It's what I've been told.  I haven't tested it myself.

> I don't think that it's that simple... I seem to be getting lookups for
> both of those. Are you sure you didn't just have smtp.sws.net.au in your
> hosts file?

You are correct, I stuffed up that test.

> > I think that ping is buggy in this regard.  I think that it should just
> > keep using the first DNS result that it gets, if the user wants ping to
> > re-do the DNS lookups then they will press ^C and re-start it!  Would you
> > like to file the bug report or shall I?
>
> There may be reasons that it doesn't round robin DNS? Dynamic DNS
> "flapping"? dunno.

I disagree, and I am not the only one, see the following URL:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=109709




Re: nscd: Was Re: long delays with LDAP nss/pam

2004-10-29 Thread Russell Coker
On Fri, 29 Oct 2004 09:56, "Donovan Baarda" <[EMAIL PROTECTED]> wrote:
> I actually run pdnsd. I find it leaner and simpler than named. However, is
> "run named on all hosts" really better than "run nscd on all hosts"?

That's debatable.  Some people will say that DNS servers are too much of a 
security risk.  However another issue is that nscd uses different cache 
algorithms to DNS servers and is likely to either give worse performance or 
less accurate results than using a DNS server.

> I have the gut feeling nscd is a lighter simpler and faster solution than
> named, but I could be wrong.

Probably.  But on a modern machine named uses so few resources that it 
doesn't matter (IMHO).  Having named on localhost gives better performance 
than talking to another server while guaranteeing the same results (the other 
server is almost certainly running named).

> > > apps like squid that explicitly have it). If you ping, every single
> > > ping packet triggers an nslookup.
> >
> > Which ping program have you seen doing this?  The ping program in
>
> iputils-ping
>
> I am using the ping from iputils-ping in sarge. It definitely does ns
> lookups for every packet... using iptraf to monitor traffic, I see the
> following repeated for every ping packet.

Try pinging smtp.sws.net.au (my mail server) and www.coker.com.au (my web 
server).  Note that the repeated reverse lookups only occur on 
www.coker.com.au; it seems that the repeated lookups only happen if forward 
and reverse DNS don't match (but I haven't checked the source code to verify 
this).

You are correct that it does repeated DNS lookups in some situations.  The 
first test case that I chose happened to be one that it does not do such 
lookups for.

> This is when I first noticed this behaviour... ping was taking ~10secs
> between each ping packet... it turns out waiting for nslookups to time out
> before trying the second nameserver between each ping.

I think that ping is buggy in this regard.  It should just keep using the 
first DNS result that it gets; if the user wants ping to re-do the DNS 
lookups then they will press ^C and re-start it!  Would you like to file 
the bug report or shall I?

> > > Is there any reason why nscd should not be installed on a system?
> >
> > It wastes RAM on small machines.  Caches get stale some times.  It's one
> > more thing that can go wrong or have a security issue.  Most people don't
> > need it.
>
> but does running named instead really avoid all these issues, or make them
> worse?

If there was a choice between running only nscd or only named then nscd might 
be a reasonable option.  But given that every serious network will need a 
caching DNS proxy (for which task it's unfortunate that there is nothing 
better than BIND) it doesn't seem to be a problem to me that you run it on 
several machines instead of just one.

If you have only a single machine connected to an ISP then maybe nscd will be 
the best choice.  However that scenario is becoming increasingly rare.




Re: nscd: Was Re: long delays with LDAP nss/pam

2004-10-28 Thread Russell Coker
On Wed, 27 Oct 2004 18:07, Donovan Baarda <[EMAIL PROTECTED]> wrote:
> Sorry to subvert a thread like this, but has anyone else decided that
> nscd is pretty much essential for all systems, regardless of nss, or
> local nameservers?

No.

> It seems without it there is _no_ dns caching of any kind (except for

Run named on localhost.
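A caching-only BIND 9 configuration is tiny; on Debian it is roughly the 
following (paths per the Debian packaging, forwarders optional):

  # /etc/bind/named.conf.options
  options {
      directory "/var/cache/bind";
      listen-on { 127.0.0.1; };
      allow-query { 127.0.0.1; };
  };

  # /etc/resolv.conf
  nameserver 127.0.0.1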

> apps like squid that explicitly have it). If you ping, every single ping
> packet triggers an nslookup.

Which ping program have you seen doing this?  The ping program in iputils-ping 
only does a DNS lookup before sending the first packet and I expect that all 
other ping programs do the same.  Run tcpdump while running ping and check 
what your ping program does.
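For example, something like this (interface and target host are 
illustrative):

  tcpdump -ni eth0 udp port 53 &
  ping -c 3 www.example.com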

> Even if you have a local caching name 
> server, the UDP traffic on the loopback interface hurts.

How does UDP traffic on the loopback hurt more than Unix domain socket access?

> Is there any reason why nscd should not be installed on a system?

It wastes RAM on small machines.  Caches get stale some times.  It's one more 
thing that can go wrong or have a security issue.  Most people don't need it.




Re: Mount options for Optimizing ext2/ext3 performance with Maildir's

2004-10-28 Thread Russell Coker
On Tue, 19 Oct 2004 02:15, Ian Forbes <[EMAIL PROTECTED]> wrote:
> Is ext3 faster or slower than ext2?

If you use an external journal on a fast device then ext3 should be much 
faster.
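Roughly, with made-up device names (the filesystem must be unmounted, and 
block sizes have to match; check the mke2fs/tune2fs man pages):

  mke2fs -O journal_dev /dev/fastdev0           # create the external journal
  tune2fs -O ^has_journal /dev/sdb1             # drop the internal journal
  tune2fs -j -J device=/dev/fastdev0 /dev/sdb1  # attach the external one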

> What mount options give the best performance, "noatime" "data=journal" ?

noatime is (IMHO) mandatory for a Maildir-based mail server.  It seriously 
decreases the load while not removing any feature that you desire.  Make sure 
that you set noatime on the root file system as well as the mail store; 
otherwise you get a lot of writes to the root FS every time a POP server is 
started.
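In /etc/fstab that is just an extra mount option, e.g. (devices and mount 
points are examples):

  /dev/sda1  /          ext3  defaults,noatime  0  1
  /dev/sdb1  /var/mail  ext3  defaults,noatime  0  2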

> Currently I have everything in one big root partition. If I mount it with
> "noatime" will a whole bunch of things stop working, like the automatic
> reloading of files in /etc/cron.d/ ?

Nothing will stop working.  cron uses the mtime.

finger won't tell you the inactivity time of sessions from users who login at 
the console if /dev is on a file system with noatime; this is no real loss 
(and udev can fix this).

> With the options data=journal / data=ordered / data=writeback which will
> give me the best performance and which has the biggest chance of data loss
> in a crash situation.  I think I can live with mail that is being delivered
> at the moment of a crash getting corrupted, provided that the server is
> never rendered un-bootable and that no other files are effected.

I think that you will have to do your own benchmarks of this.

> The system is running with a 2.4.18. Is there anything to be gained from
> upgrading to a later 2.4 or a 2.6 kernel.

If you use a 2.6 kernel then you get directory hashing which can significantly 
improve performance when you have large numbers of directory entries.  This 
will help if you have a large number of user mail store directories in one 
directory, if you have a large number of email files in one Maildir 
directory, or if you have a large number of temporary files in one spool 
directory under /var/spool.  Generally you can expect some significant 
performance increases from 2.6.x kernels.
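Note that on ext3 the dir_index feature also has to be enabled on the 
filesystem itself; roughly (unmounted filesystem, device name illustrative):

  tune2fs -O dir_index /dev/sdb1
  e2fsck -fD /dev/sdb1    # rebuild and re-index existing directories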




Re: Mail Delivery (failure jcoo...@planetz.com)

2004-10-26 Thread Russell Coker
On Mon, 25 Oct 2004 03:55, "John Cooper" <[EMAIL PROTECTED]> wrote:
> I understand your guys' point, and I appreciate it.  What you describe
> here sounds nearly identically to my auto-responder.  But, that may be my
> lack of knowledge of how the mail system works in general.  Something about

Be smart.  Don't mess with things that you don't understand.  Get someone who 
works for your ISP to sort things out for you.

> He could easily have shared his idea with the list, and mailed me
> separately at my new address, without (in his words) publically archiving
> my private address for spammers to harvest.   Do you not agree that this
> was simply malicious, and needlessly hurtful?

Nothing less will work.  In fact in your case I am not convinced that even 
this has worked.

> Would he also teach someone to swim by throwing them in the water and
> watching them drown, laughing as the dumbass goes down?

In your case, yes!  I'd make an AVI and put it on the web for everyone to 
enjoy!




Re: Mail Delivery (failure jcoo...@planetz.com)

2004-10-26 Thread Russell Coker
On Mon, 25 Oct 2004 03:11, Fraser Campbell <[EMAIL PROTECTED]> wrote:
> Spam does not justify spam.  I have come to this realization myself only
> recently (I am, unfortunately still, a TMDA user).  I can understand that

You should cease using TMDA.  For reference I never respond to TMDA type 
messages in response to messages I wrote, only if they are in response to 
spams.

> many people see autoresponders as essential but due care should be taken to
> not respond to innocent third parties and mailing lists especially.

Auto-responders always respond to innocent people.  The only excuse for an 
auto-responder is for a mailing list system (for subscription requests and 
for notification that only subscribers may post to the list).  Generating an 
automatic message in response to an attempted list posting is acceptable 
because in the common case one person (the person whose email address was 
forged) is inconvenienced instead of many people (the list subscribers).

> The fact that you sent your new email in the body as "johnc at planetz.com"
> instead of as a real email address is, I suspect, immaterial.  Spammers
> send to millions of invalid email addresses, they scan all webpages, list
> archives, etc. and look for anything that looks like a valid email address
> ... IMO x at y is just as easy to find and parse as [EMAIL PROTECTED]  They will be
> finding you anyway.

Paste it into the To: field in a modern email program such as kmail and the 
"at" will automatically be converted to "@" etc.

> > Coker, consider a private email, before publically hanging someone.
>
> When someone does something stupid there is value in making sure that
> everyone knows that it is stupid.  Knowledge is only advanced when it is
> shared.

Also see the several incidents in the past where I have communicated privately 
with such idiots, been flamed by the idiot, then taken the discussion back to 
the Debian list where it started.




Re: Mail Delivery (failure jcoo...@planetz.com)

2004-10-26 Thread Russell Coker
On Sun, 24 Oct 2004 06:29, "John Cooper" <[EMAIL PROTECTED]> wrote:
> > John C has requested that
> > the following message be removed from the archives.
>
> My apologies that my autoresponder spammed the list.  I've never posted to
> the debian-isp list.  Apparently someone's machine is infected with an
> email-worm, which has used my jcooper address (which I stopped using
> several years ago) as the return address.

This always happens.

> I started using an autoresponder after experimenting with spam-net,
> spamassassin, and qurb for well over a year.  When you receive hundreds of
> spam per day, 9x% isn't good enough.Since I started using a responder,

You could have just disabled that address entirely if that was your desire and 
let people who want to contact you use other methods.  People were 
communicating long before email was invented.

> I have received virtually zero spam, aside from those individuals like
> nigerian scammers, who make an effort to respond by hand.   Yes, there are

That's a temporary thing.  Once spammers get your address they share it.

> > Requests to have list archives altered to hide the evidence of
> > your mis-deeds doesn't work either.
>
> Clearly I've touched a nerve with Mr. Coker!  The vitriolic nature of his
> response here, and the public posting of my private email address which I
> was trying to protect, is simply inane and immature.  Next time, Mr.
> Coker, consider a private email, before publically hanging someone.

I gave up on that long ago.  When I respond privately the offender never fixes 
their system to stop spamming.  Publicising their mis-deeds is the only way 
to stop spammers.

> Yes, I have asked the list manager to at least remove my personal johnc
> address from the archive, which was so needlessly cc'd there.   (Notice I'm
> replying from my "[EMAIL PROTECTED]" address which I use for public postings).
> Secondarily, if the list managers so desire, they can remove this whole
> thread, which is totally off topic for this list in the first place.

No, it will never be removed.  Any attempt you make to remove it will most 
likely get it more widely known.  There is nothing you can do other than 
ceasing your spamming.




Re: Mail Delivery (failure jcoo...@planetz.com)

2004-10-26 Thread Russell Coker
On Mon, 25 Oct 2004 12:58, "John Cooper" <[EMAIL PROTECTED]> wrote:
> >...spammers drown you in water?
>
> http://dictionary.reference.com/search?q=metaphor
>
> >..you want respect?   Earn it.
>
> If earning respect in this crowd requires being disrespectful, then I'm not
> interested.

Earning respect in this crowd requires some intelligence, and to not be a 
spammer.

Spamming is wrong, learn this and tell your friends.




Re: Mail Delivery (failure jcoo...@planetz.com)

2004-10-23 Thread Russell Coker
For the benefit of interested people: John C has requested that the following 
message be removed from the archives.

Auto-responders ARE spam.  They will hit innocent people.  Just because most 
victims of auto-responders don't complain does not mean that the 
auto-responder is not causing problems.

Requests to have list archives altered to hide the evidence of your mis-deeds 
doesn't work either.  It just gets you more copies of the message.

On Sat, 23 Oct 2004 14:27, Russell Coker <[EMAIL PROTECTED]> wrote:
> On Thu, 21 Oct 2004 22:30, [EMAIL PROTECTED] wrote:
> > Due to the unprecedented amount of spam I've been receiving, I'm forced
> > to change my email address yet again.  My new address is johnc at
> > planetz.com.
>
> Please don't be stupid.  Such auto-responders will get you added to all the
> spam lists again.
>
> I've put your new email address in the header of this message which will be
> publicly archived for spammers to harvest.
>
>
> Have a nice day.




Re: Mail Delivery (failure jcoo...@planetz.com)

2004-10-22 Thread Russell Coker
On Thu, 21 Oct 2004 22:30, [EMAIL PROTECTED] wrote:
> Due to the unprecedented amount of spam I've been receiving, I'm forced to
> change my email address yet again.  My new address is johnc at planetz.com.

Please don't be stupid.  Such auto-responders will get you added to all the 
spam lists again.

I've put your new email address in the header of this message which will be 
publicly archived for spammers to harvest.


Have a nice day.




Re: Documentation of big "mail systems"?

2004-10-19 Thread Russell Coker
On Tue, 19 Oct 2004 00:17, Stephane Bortzmeyer <[EMAIL PROTECTED]> wrote:
> On Sat, Oct 16, 2004 at 09:41:43PM +1000,
>  a message of 39 lines which said:
> > Getting servers that each have 200G or 300G of storage is easy.
>
> For a mail server, it means either 1G per user (like gmail gives you)
> for only 300 users or 10M (much less than hotmail) for 30 000
> users. It is probably not enough for a Hotmail-like service. Think of
> 300 000 users. How many servers will you need?

10M is a more common limit than 1G.  Most people don't use their limit, and 
most of those who do only reach it by subscribing to mailing lists and not 
checking their email.

A gmail service is entirely different to an ISP mail server.  The common use 
of an ISP mail server is to allow download and delete via pop.




Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-17 Thread Russell Coker
On Sat, 16 Oct 2004 22:00, Marcin Owsiany <[EMAIL PROTECTED]> wrote:
> > If one machine has a probability of failure of 0.1 over a particular time
> > period then the probability of at least one machine failing if there are
> > two servers in the cluster over that same time period is 1-0.9*0.9 ==
> > 0.19.
>
> But do we really care about whether a "machine" fails? I'd rather say
> that what we want to minimize is the _service_ downtime.

If someone has to take time out from other work to fix it then we care.  There 
are lots of things that we would like to have done but which are not being 
done due to lack of time.  Do we really want to take more time away from 
other important tasks just to have super-reliable @debian.org email?

> With one machine, the possibility of the service being unavailable is
> 0.1. With two machines it's equal to the possibility of both machines
> failing at the same time, so it's 0.1*0.1 == 0.01, as long as the
> possibilites are independent (not sure if that's the right translation
> of the term).

Correct.  Configuration errors and software bugs can put two machines offline 
just as easily as one.

> Otherwise, I'd say that the increase of availability is worth the
> additional debugging effort :-)

Are you going to be involved in doing the work?

This entire thread started because the admin team doesn't seem to have enough 
time to do all the work that people would like them to do.  Your suggestion 
seems likely to make things worse not better.




Re: Documentation of big "mail systems"?

2004-10-16 Thread Russell Coker
On Fri, 15 Oct 2004 20:08, Paul Dwerryhouse <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 15, 2004 at 06:56:21PM +1000, Russell Coker wrote:
> > The machines were all running 2.4.2x last time I was there, but they
> > may be moving to 2.6.x now.
>
> All the stores, relays and proxies are still on 2.4.x, but the LDAP
> servers are now on 2.6.x (mainly because I could, not for any technical
> reason. At the time I upgraded them I had enough redundancy to go around
> that the downtime didn't affect anything).

In that case you should get the 4/4 kernel patch; it will make a huge 
improvement to your LDAP rebuild times, which can come in handy in an 
emergency.  From memory I had the slave machines rebuilding in about 15 
minutes; I expect that I could get it down to 5 minutes with a 4/4 kernel, 
and less if the machine has 6G of RAM or more.

For 4/4 the easiest thing to do is probably to get the Fedora kernel.

> Four perdition/apache/imp servers now, rather than three. The webmail is
> rather popular now, and three servers couldn't cut it on their own
> anymore.

Is there any way to optimise PHP for speed?  Maybe PHP5 is worth trying?

> Seven backend mailstores now, and I really want an eighth, but can't get
> anyone to pay for it.

I still think that using a umem device for journals is the right thing to do.  
You should be able to double performance by putting a umem device in each 
machine.  It'll cost less than half as much as a new server to put a umem 
device in each machine, and give much more performance.

I recall that none of those machines was even close to running out of disk 
space.  You could probably handle the current load with 4 back-end machines 
if you used umem devices.




Re: Documentation of big "mail systems"?

2004-10-16 Thread Russell Coker
On Sat, 16 Oct 2004 02:02, Christoph Moench-Tegeder <[EMAIL PROTECTED]> 
wrote:
> ## Henrique de Moraes Holschuh ([EMAIL PROTECTED]):
> > > So, now we would like Russel to explain why he does not like SAN.
> >
> > He probably doesn't advocate using SAN instead of local disks if you do
> > not have a good reason to use SAN.  If that's it, I *do* agree with him. 
> > Don't use SANs just for the heck of it.  Even external JBOD enclosures
> > are a bad idea if you don't need them.
>
> Of course. Buying SAN for a single mailserver is not worth the money.
> Think of money per gigabyte and the extra trouble of managing your
> SAN, local disks are much easier to handle.

Exactly.

Getting servers that each have 200G or 300G of storage is easy.  Local storage 
is expected to be faster than SAN (I never had a chance to benchmark it 
though).  Having multiple back-end servers with local disks reduces the risks 
(IMHO).  There are fewer cables for idiots to trip over or otherwise break 
(don't ask), and no single point of failure for the entire network.  Having 
one back-end server go down and take out 1/7 of the mail boxes would be 
annoying, but a lot less annoying than a SAN problem taking it all out.

For recovery I would prefer to have a spare system and hot-swap disks.  If 
there's a serious problem then swap the disks into an identical machine 
that's already connected.  Downtime is then just the time taken to get a taxi 
to the server room.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-16 Thread Russell Coker
On Fri, 15 Oct 2004 23:33, Arnt Karlsen <[EMAIL PROTECTED]> wrote:
> > On Fri, 15 Oct 2004 03:19, Arnt Karlsen <[EMAIL PROTECTED]> wrote:
> > > > Increasing the number of machines increases the probability of one
> > > > machine failing for any given time period.  Also it makes it more
> > > > difficult to debug problems as you can't always be certain of
> > > > which machine was involved.
> > >
> > > ..very true, even for aero engines.  The reason the airlines like
> > > 2, 3 or even 4 rather than one jet.
> >
> > You seem to have entirely misunderstood what I wrote.
>
> ..really?   Compare with your average automobile accident and
> see who has the more adequate safety philosophy.

If one machine has a 0.1 probability of failure over a particular time period, 
then with two servers in the cluster the probability of at least one of them 
failing over that same period is 1 - 0.9*0.9 = 0.19.
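
To make the arithmetic concrete (the 0.1 figure is just an example 
probability, not a measurement):

    # Probability that at least one of n machines fails, assuming independent
    # failures with probability p per machine over the period.
    def p_any_failure(p, n):
        return 1 - (1 - p) ** n

    print(p_any_failure(0.1, 1))   # 0.1   - a single machine
    print(p_any_failure(0.1, 2))   # ~0.19 - two machines in the cluster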

> [EMAIL PROTECTED], "2 boxes watching each other" or some such, will give
> that "Ok, I'll have a look some time next week" peace of mind,
> and we don't need symmetric power here, one big and one or
> more small ones will do fine

Have you ever actually run an ISP?

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Documentation of big "mail systems"?

2004-10-15 Thread Russell Coker
On Wed, 13 Oct 2004 07:18, Stephane Bortzmeyer <[EMAIL PROTECTED]> wrote:
> I'm currently writing a proposal for a webmail service for, say, 50
> 000 to 500 000 users. I'm looking for description of existing "big

50K isn't big by today's standards.

An ISP I used to work for has something like 1,300,000 users.  They have two 
SMTP machines for outbound email which do virus scanning (to stop customers 
from sending viruses).  They have four SMTP machines for inbound mail which 
do anti-spam and virus scanning (RAV anti-virus and Qmail).  Those four 
machines send mail to the back-end machines according to data in OpenLDAP 
(about four slave OpenLDAP servers and one master).  There are six back-end 
machines for the mail store that run Qmail for delivery and the Courier POP 
and IMAP servers; each user's mail directory location and password is stored 
in LDAP.  The back-end servers use ReiserFS mounted noatime for storage.
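
As a rough illustration of the routing lookup, here is a minimal Python sketch 
of what a front-end does per recipient.  The server name and the attribute 
names (mailHost, mailMessageStore) are assumptions for illustration, not that 
ISP's actual schema.

    import ldap  # python-ldap

    def find_backend(address):
        # Ask a slave LDAP server which back-end mail store holds this mailbox.
        conn = ldap.initialize("ldap://ldap-slave.example.com")
        conn.simple_bind_s()  # anonymous bind
        results = conn.search_s(
            "ou=users,dc=example,dc=com",
            ldap.SCOPE_SUBTREE,
            "(mail=%s)" % address,
            ["mailHost", "mailMessageStore"],
        )
        if not results:
            return None  # unknown user - the MTA should reject at SMTP time
        dn, attrs = results[0]
        return attrs["mailHost"][0].decode()

    # e.g. find_backend("user@example.com") might return "store3.example.com"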

When a user connects via POP or IMAP they get to a Perdition server.  There are 
three Perdition servers behind Cisco LocalDirectors (for reliability and load 
balancing) which proxy IMAP and POP to the correct back-end servers after 
doing an LDAP lookup.  The Perdition machines also run Apache and IMP for 
webmail; the webmail instance uses Perdition on localhost for IMAP access.

The machines are pretty much all Dell servers with 2*2.8GHz P4 CPUs, about 4G 
of RAM, and 4*U160 disks in a RAID-5 array (with one hot spare).

The machines were all running 2.4.2x last time I was there, but they may be 
moving to 2.6.x now.

SAN and NAS are best avoided IMHO.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-15 Thread Russell Coker
On Fri, 15 Oct 2004 03:19, Arnt Karlsen <[EMAIL PROTECTED]> wrote:
> > Increasing the number of machines increases the probability of one
> > machine failing for any given time period.  Also it makes it more
> > difficult to debug problems as you can't always be certain of which
> > machine was involved.
>
> ..very true, even for aero engines.  The reason the airlines like
> 2, 3 or even 4 rather than one jet.

You seem to have entirely misunderstood what I wrote.

Having four engines on a jet rather than two or three should not be expected 
to give any increase in reliability.  Having two instead of one (and having 
two fuel tanks etc) does provide a significant benefit.

However the needs of an aircraft are significantly different from a mail 
server.  When a mail server has a problem we have the option of pulling the 
plug and then taking some time to fix it.  There is no equivalent operation 
for an aircraft.  When installing two engines in an aircraft that can run on 
a single engine you are trading off an increased risk of having an engine 
fail against a greatly decreased risk that an engine failure will kill 
everyone on board.

With mail servers if you have a second server you have more work to maintain 
it, more general failures, and you have no chance of saving anyone's life to 
compensate.

Finally consider that one of the main causes of server unreliability is 
mistakes made during system maintenance.  Increase the amount of work 
involved in running the systems and you increase the chance of problems.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-15 Thread Russell Coker
On Thu, 14 Oct 2004 23:35, martin f krafft <[EMAIL PROTECTED]> wrote:
> also sprach Henrique de Moraes Holschuh <[EMAIL PROTECTED]> [2004.10.14.1525 
+0200]:
> > Or we can do it in two, with capacity to spare AND no downtime.
>
> I would definitely vote for two systems, but for high-availability,
> not load-sharing. Unless we use a NAS or similar in the backend with
> Maildirs to avoid locking problems. Then again, that's definitely
> overkill...

A NAS in the back-end should not be expected to increase reliability.  Every 
time you increase the complexity of the system you should expect to decrease 
reliability.

KISS!

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-15 Thread Russell Coker
On Thu, 14 Oct 2004 13:35, "Lucas Albers" <[EMAIL PROTECTED]> wrote:
> > As long as the machine is fixed within four days of a problem we don't
> > need
> > more than one.  Email can be delayed, it's something you have to get used
> > to.
>
> Machines are cheap enough, wouldn't it be reasonable to throw in
> redundancy? Unless having 2 machines adds unneccessary complexity to the
> setup.

Better to have one good machine than three cheap machines.  The more machines 
you have the greater the chance that one of them will break.

> Sometimes I don't even realize one of the external relays is broken for a
> day...(even though the monitoring tools should tell you.)

Which is another good reason for not having such redundant servers.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Russell Coker
On Wed, 13 Oct 2004 07:29, Henrique de Moraes Holschuh <[EMAIL PROTECTED]> wrote:
> We have a lot of resources, why can't we invest some of them into a small
> three or four machine cluster to handle all debian email (MLs included),

A four machine cluster can be used for the entire email needs of a 500,000 
user ISP.  I really doubt that we need so much hardware.

> and tune the entire thing for the ground up just for that? And use it
> *only* for that?  That would be enough for two MX, one ML expander and one
> extra machine for whatever else we need. Maybe more, but from two (master +
> murphy) two four optimized and exclusive-for-email machines should be a
> good start :)

I think that front-end MX machines are a bad idea in this environment.  It 
means that more work is required to correctly give 55x codes in response to 
non-existent recipients (vitally important for list servers which will 
receive huge volumes of mail to [EMAIL PROTECTED] and which should not 
generate bounces for it).
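
To illustrate the point, here is a minimal sketch of rejecting unknown 
recipients at RCPT time so that no bounce is ever generated.  It uses the 
modern aiosmtpd Python library purely for illustration - it is not what any 
of the Debian machines run - and the recipient list is made up.

    from aiosmtpd.controller import Controller

    KNOWN_RECIPIENTS = {"listmaster@example.org", "leader@example.org"}

    class RejectUnknown:
        async def handle_RCPT(self, server, session, envelope, address, rcpt_options):
            # Refuse unknown users with a 55x code during the SMTP dialogue,
            # instead of accepting the message and bouncing it later.
            if address.lower() not in KNOWN_RECIPIENTS:
                return "550 5.1.1 No such user here"
            envelope.rcpt_tos.append(address)
            return "250 OK"

        async def handle_DATA(self, server, session, envelope):
            # Hand the accepted message to the real delivery path here.
            return "250 Message accepted for delivery"

    controller = Controller(RejectUnknown(), hostname="127.0.0.1", port=8025)
    controller.start()
    input("SMTP sink on port 8025; press Enter to stop\n")
    controller.stop()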

We don't have the performance requirements that would require front-end MX 
machines.

> colaborative work needs the MLs in tip-top shape, or it suffers a LOT. Way,
> way too many developers use @debian.org as their primary Debian contact
> address (usually the ONLY well-advertised one), and get out of the loop
> everytime master.d.o croaks.

OK, having a single dedicated mail server instead of a general machine like 
master makes sense.

> One of the obvious things that come to mind is that we should have MX
> machines with very high disk throughput, of the kinds we need RAID 0 on top
> of RAID 1 to get.  Proper HW RAID (defined as something as good as the
> Intel SCRU42X fully-fitted) would help, but even LVM+MD allied to proper
> SCSI U320 hardware would give us more than 120MB/s read throughput (I have
> done that).

U320 is not required.  I don't believe that you can demonstrate any 
performance difference between U160 and U320 for mail server use if you have 
fewer than 10 disks on a cable.  Having large numbers of disks on a cable 
brings other issues, so I recommend a scheme that has only a single disk per 
cable (S-ATA or Serial Attached SCSI).

RAID-0 on top of RAID-1 should not be required either.  Hardware RAID-5 with an 
NV-RAM log device should give all the performance that you require.

You will NEVER see 120MB/s of read throughput on a properly configured mail 
server that serves data for fewer than about 10,000,000 users!  When I was 
running the servers for 1,000,000 users there was a total of about 3MB/s 
(combined read and write) on each of the five back-end servers - a total of 
15MB/s, while each server had 4 * U160-15K disks (20 * U160-15K disks in 
all).  The bottlenecks were all on seeks; nothing else mattered.
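
A back-of-envelope calculation shows why the seeks dominate.  The IOPS and 
IO-size figures below are assumptions for illustration, not measurements from 
that system.

    # Rough throughput ceiling for a seek-bound mail store.
    disks = 20            # 4 * U160-15K disks in each of 5 back-end servers
    iops_per_disk = 150   # assumed random IOs/sec for a 15K RPM disk
    io_size_kb = 8        # assumed average IO size for mail delivery/retrieval

    total_mb_per_sec = disks * iops_per_disk * io_size_kb / 1024.0
    print("%.1f MB/s" % total_mb_per_sec)   # ~23 MB/s across the whole cluster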

> Maybe *external* journals on the performance-critical filesystems would
> help (although data=journal makes that a *big* maybe for the spools, the
> logging on /var always benefit from an external journal). And in that case,
> we'd need obviously two IO-independent RAID arrays. That means at least 6
> discs, but all of them can be small disks.

http://www.umem.com/16GB_Battery_Backed_PCI_NVRAM.html

If you want to use external journals then use a umem device for it.  The above 
URL advertises NV-RAM devices with capacities up to 16G which run at 64bit 
66MHz PCI speed.  Such a device takes less space inside a PC than real disks, 
produces less noise, has no moving parts (good for reliability) and has ZERO 
seek time as well as massive throughput.

Put /var/spool on that as well as the external journal for the mail store and 
your mail server should be decently fast!
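
For completeness, a sketch of how the external journal could be set up.  The 
device names (/dev/umem0 for the NVRAM card, /dev/sdb1 for the mail store) 
are made up for the example.

    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Turn the NVRAM device into a dedicated ext3 journal device.
    run(["mke2fs", "-O", "journal_dev", "/dev/umem0"])

    # Create the mail store file system with its journal on the NVRAM device.
    run(["mke2fs", "-j", "-J", "device=/dev/umem0", "/dev/sdb1"])

    # Mount with full data journalling so synchronous writes hit NVRAM first.
    run(["mount", "-t", "ext3", "-o", "data=journal,noatime", "/dev/sdb1", "/mail"])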

> The other is to use a filesystem that copes very well with power failures,
> and tune it for spool work (IMHO a properly tunned ext3 would be best, as
> XFS has data integrity issues on crashes even if it is faster (and maybe
> the not-even-data=ordered XFS way of life IS the reason it is so fast). I
> don't know about ReiserFS 3, and ReiserFS 4 is too new to trust IMHO).

reiserfsck has a long history of not being able to fix all possible errors.  A 
corrupted ReiserFS file system can cause a kernel oops, and the ReiserFS 
developers don't treat that as a serious issue.

ext3 is the safe bet for most Linux use.  It is popular enough that you can 
reasonably expect that bugs get found by someone else first, and the 
developers have a good attitude towards what is a file system bug.

> The third is to not use LDAP for lookups, but rather cache them all in a
> local, exteremly fast DB (I hope we are already doing that!).  That alone
> could get us a big speed increase on address resolution and rewriting,
> depending on how the MTA is configured.

I've run an ISP with more than 1,000,000 users with LDAP used for the 
back-end.  The way it worked was that mail came to front-end servers which 
did LDAP lookups to determine which back-end server to deliver to.  The 
back-end servers did LDAP lookups to determine the directory to put the mail 
in.

Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Russell Coker
On Wed, 13 Oct 2004 21:26, Wouter Verhelst <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 13, 2004 at 01:05:26PM +1000, Russell Coker wrote:
> > On Wed, 13 Oct 2004 07:29, Henrique de Moraes Holschuh <[EMAIL PROTECTED]> 
wrote:
> > > The third is to not use LDAP for lookups, but rather cache them all in
> > > a local, exteremly fast DB (I hope we are already doing that!).  That
> > > alone could get us a big speed increase on address resolution and
> > > rewriting, depending on how the MTA is configured.
> >
> > I've run an ISP with more than 1,000,000 users with LDAP used for the
> > back-end.
>
> Yes, but that was probably with the LDAP servers and the mail servers
> being in the same data center, or at least with a local replication.

Yes.  Local replication is not difficult to set up.

> This is not the case for Debian; and yes, we already do have local fast
> DB caches (using libnss-db).

That's an entirely different issue.  libnss-db is just for faster access 
to /etc/passwd.  The implementation in Linux is fairly poor, however; it 
doesn't even stat /etc/passwd to see if it's newer than the db.  The 
performance gain isn't as good as you would expect either.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
Please respect the privacy of this mailing list.

Archive: file://master.debian.org/~debian/archive/debian-isp/

To UNSUBSCRIBE, use the web form at <http://db.debian.org/>.


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-14 Thread Russell Coker
On Thu, 14 Oct 2004 23:25, Henrique de Moraes Holschuh <[EMAIL PROTECTED]> wrote:
> > The Debian email isn't that big.  We can do it all on a single machine
> > (including spamassasin etc) with capacity to spare.
>
> Or we can do it in two, with capacity to spare AND no downtime.

Increasing the number of machines increases the probability of one machine 
failing for any given time period.  Also it makes it more difficult to debug 
problems as you can't always be certain of which machine was involved.

> > One machine should be able to do it with AV and antispam.  Four
> > AV/antispam machines can handle the load for an ISP with almost 1,500,000
> > users, one should do for Debian.
>
> That depends on how much delay you want to have when processing mail. It'd
> be nice to know how many messages/minute @d.o and gluck receive, to stop
> guessing, though.

When four machines can do it for 1,500,000 users with no significant delay I 
am quite certain that one machine can provide all the performance you want 
for 1,000 users.

> > > But we really should have two of them (in
> > > different backbones), with the same priority as MX.
> >
> > Why?
>
> No downtime.  Easy maintenance.  Redundancy when we have network problems
> (these are rare, thank god).

Getting redundant network connections working properly takes a lot of effort 
and skill.  I've seen major ISPs screw this up in a big way.

KISS!

> > As long as the machine is fixed within four days of a problem we don't
> > need more than one.  Email can be delayed, it's something you have to get
> > used to.
>
> And while that email is being delayed, our work suffers, and there could
> even be security concerns as well.  Developer time IS an important
> resource, I don't think we should be wasting it because we don't want to
> have a second MX.  Would you set up a mail system for any ISP (including
> small, 1000-user ones) with only one MX?

Yes.  For big ISPs the one MX record would point to multiple servers behind a 
Cisco LocalDirector or similar device.

> > We don't need high-end hardware.  Debian's email requirements are nothing
> > compared to any serious ISP.
>
> True.  But we don't need cheap-ass, will-break hardware either.  Debian's
> admin requirements are different. The less on-site intervention needed, the
> better.

There's nothing cheap-ass about a second-hand 2U server with a 2.8GHz P4 CPU 
and 1G of RAM.

> So do I.  And I can tell you that I experienced a lot of improvement when
> big mass-delivery mail hits, on the order of _minutes_ (thousands of
> recipients, every one of them causes postfix to generate a minimum of 4
> LDAP searches, due to the way the LDAP maps were required to be deployed),
> and the way postfix map lookup happens.  Moving that to a hash DB sped
> things up considerably.

What type of hardware and software were you using for LDAP?

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-13 Thread Russell Coker
On Thu, 14 Oct 2004 01:47, Henrique de Moraes Holschuh <[EMAIL PROTECTED]> wrote:
> On Wed, 13 Oct 2004, Russell Coker wrote:
> > On Wed, 13 Oct 2004 07:29, Henrique de Moraes Holschuh <[EMAIL PROTECTED]> 
wrote:
> > > We have a lot of resources, why can't we invest some of them into a
> > > small three or four machine cluster to handle all debian email (MLs
> > > included),
> >
> > A four machine cluster can be used for the entire email needs of a
> > 500,000 user ISP.  I really doubt that we need so much hardware.
>
> Including the needed redundancy (two MX at least), and a mailing list
> processing facility that absolutely has to have AV and AntiSPAM measures at
> least on the level gluck has right now?

The Debian email isn't that big.  We can do it all on a single machine 
(including SpamAssassin etc) with capacity to spare.

> Yes, one machine that is just a MTA, without AV or Antispam should be able
> to push enough mail for @d.o.

One machine should be able to do it with AV and antispam.  Four AV/antispam 
machines can handle the load for an ISP with almost 1,500,000 users; one 
should do for Debian.

> But we really should have two of them (in 
> different backbones), with the same priority as MX.

Why?

> It would be nice to 
> have a third MTA with less priority and heavier anti-spam machinery
> installed.

Bad idea.

> > OK, having a single dedicated mail server instead of a general machine
> > like master makes sense.
>
> Two so that we have some redundancy, please. IMHO email is important enough
> in Debian to deserve two full MX boxes (that never forward to one another).

As long as the machine is fixed within four days of a problem we don't need 
more than one.  Email can be delayed; that's something you have to get used to.

> > U320 is not required.  I don't believe that you can demonstrate any
>
> Required? No. Nice to have given the hardware prices available, probably.
> If the price difference is that big, U160 is more than enough.  But
> top-notch RAID hardware nowadays is always U320, so unless the hotswap U160
> enclosures and disks are that much cheaper...  and the price difference
> from a non top-notch HW RAID controller that is still really good, and a a
> top-notch one is not that big.

We don't need high-end hardware.  Debian's email requirements are nothing 
compared to any serious ISP.

> > http://www.umem.com/16GB_Battery_Backed_PCI_NVRAM.html
>
> How much?  It certainly looks very good.

If you want to buy one then you have to apply for a quote.

> > I've run an ISP with more than 1,000,000 users with LDAP used for the
> > back-end.  The way it worked was that mail came to front-end servers
> > which did LDAP lookups to determine which back-end server to deliver to. 
> > The
>
> I meant LDAP being used for the MTA routing and and rewriting. That's far
> more than one lookup per mail message :(

Yes, I've done all that too.  It's really no big deal.  Lots of Debian 
developers have run servers that make all Debian's servers look like toys by 
comparison.

> > back-end server had Courier POP or IMAP do another LDAP lookup.  It
> > worked fine with about 5 LDAP servers for 1,000,000 users.
>
> Well, we are talking MTA and not mail stores.  The LDAP workload on a MTA
> is usually quite different for the one in a mail store.

Yes, it should be less load because you don't have POP or IMAP checks.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-13 Thread Russell Coker
On Wed, 13 Oct 2004 23:23, Wouter Verhelst <[EMAIL PROTECTED]> wrote:
> > > This is not the case for Debian; and yes, we already do have local fast
> > > DB caches (using libnss-db).
> >
> > That's an entirely different issue.
>
> No, it's not, not in this case anyway.
>
> > libnss-db is just for faster access to /etc/passwd.
>
> You are mistaken. In the FreeBSD implementation, it is; however, the
> Linux implementation allows other things to be done with it.
>
> For instance, my /etc/default/libnss-db contains the following lines:
>
> ETC = /root/stage
> DBS = passwd group shadow

shadow is part of the passwd setup.  group does no good on most systems (on my 
system /etc/group is only 70 lines and the database gives no benefit).

> I also have a script which creates (incomplete (as in, without system
> users)) files /root/stage/{passwd,shadow,group} containing just the user
> and group records that are in LDAP. Next, /etc/nsswitch.conf contains
> the following:
>
> passwd: db compat
> group:  db compat
> shadow: db compat

So what's the point of having LDAP if you are going to manually copy flat 
files around?

> > The implementation in Linux is fairly poor however, it doesn't even
> > stat /etc/passwd to see if it's newer than the db.
>
> That's a feature, not a bug. Unless you want it to check 'the passwd
> file' as it is defined in /etc/default/libnss-db (or another
> configuration file), in which case it would indeed be a good idea.

If you want the database to stay in sync with the flat file and be usable 
without gross hacks (as it is in AIX) then it's a serious bug.

> > The performance gain isn't as good as you would expect either.
>
> Been there, done that.
>
> IME, doing this kind of thing is *way* faster than using libnss-ldap.

Way faster than a non-local LDAP.  But not significantly faster than flat 
files unless you have >10,000 users (which isn't the case for Debian).

> An added bonus is that the libnss-db Makefile will not update the .db
> files if the original ones are empty; so if the LDAP daemon dies or is
> unavailable for some reason, my users can still login, even after the
> next time the cronjob runs. This is not the case with libnss-ldap, AIUI.

True.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Can we build a proper email cluster? (was: Re: Why is debian.org email so unreliable?)

2004-10-13 Thread Russell Coker
On Wed, 13 Oct 2004 20:42, "Steinar H. Gunderson" <[EMAIL PROTECTED]> 
wrote:
> On Wed, Oct 13, 2004 at 01:05:26PM +1000, Russell Coker wrote:
> > http://www.umem.com/16GB_Battery_Backed_PCI_NVRAM.html
> >
> > If you want to use external journals then use a umem device for it.  The
> > above URL advertises NV-RAM devices with capacities up to 16G which run
> > at 64bit 66MHz PCI speed.  Such a device takes less space inside a PC
> > than real disks, produces less noise, has no moving parts (good for
> > reliability) and has ZERO seek time as well as massive throughput.
>
> Out of curiosity; approximately how much does such a thing cost? I can't
> find prices on it anywhere.

Last time I got a quote the high-end model had 1G of storage and was 33MHz 
32bit PCI.  It cost around $700US.  I would expect that the high-end model 
costs around that price nowadays, but if you want one you'll just have to 
apply for a quote.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID-1 to RAID-5 online migration?

2004-09-20 Thread Russell Coker
On Mon, 20 Sep 2004 21:37, Josh Bonnett <[EMAIL PROTECTED]> wrote:
> >Do you have benchmark results to support this assertion?  Last time I
> > tested the performance of software RAID-1 on Linux I was unable to get
> > anywhere near 2x disk speed for writing.
>
> Not to be a stickler but i hope you mean reading

Correct.

> > I did tests by reading two files that were 1G in
> >size and the operation took considerably longer than reading a single 1G
> > file from a non-RAID system.  If RAID-1 was delivering twice the read
> > throughput then I should be able to read two 1G files concurrently from a
> > RAID-1 in the same time as would be taken to read a single 1G file from a
> > single disk.
>
> also i think that the original poster was assuming that the raid 1
> driver would read in stripes as you would a raid 0, its not required to
> implement raid 1 as far as i know, but it might be a nice thing to add
> (not sure if it fits nicely in the Linux software raid drivers). You
> might have to trade some memory usage and cpu  to make sure the blocks
> were put back together again(a fifo buffer 2 stripe sizes big would
> probably be all it would need) so whether it helps would all come down
> to where the bottleneck is.

I believe that the Linux software RAID-1 code already spreads the read load 
between the disks.  The problem is that the algorithm chosen is not much good 
(or at least was not much good last time I tested it).

If you had RAID-1 running over flash disks, NVRAM devices or other media that 
does not have a seek overhead then I expect that read performance would 
double whenever you have at least two processes reading from the same RAID-1 
device.  But when seeks matter the algorithm loses.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Defining ISP?

2004-09-19 Thread Russell Coker
On Wed, 15 Sep 2004 22:59, "shift" <[EMAIL PROTECTED]> wrote:
>  The idea seems still interesting to me 2 days after the week-end! ( Did
> some definitive dammage happen? :)
> I imagine an install, giving possibilities of Raid, backup, replication,
> networking etc from the start, all necessary tools and programs, in a

Software RAID, backup, and networking are needed on workstations just as badly 
as on ISP servers.  You don't need an ISP-specific distribution for that.

> compact, easy to use distribution with some "ncursed" ISP specific
> administration tools. Something secure, minimalistic (I like the word and
> the concept) and with some optimization possibilities.
> does-it still seem confuse? Is it "une idee farfelue"?

It is really handy to have GUI administration consoles at ISPs.  At the last 
ISP I ran, 17 inch monitors were quite common.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Defining ISP?

2004-09-19 Thread Russell Coker
Please write your text after the quoted text and don't quote excessively.  
This is not AOL.

On Wed, 15 Sep 2004 07:48, "shift" <[EMAIL PROTECTED]> wrote:
> Well, about the week-end, you're welcome for another one (...)
>
> About the install, I do almost the same. the second part is the
> optimization.
> Using an optimized distrib on an SR2200 (dual PIII 1.4GHz Tualatin-S), SCSI
> U160, I have better results on Mysql nemchmarks than with a non-optimized
> SR2300-SKU0 dual xeon 3.0 1MB L3 cache and SCSI U320!!

U160 vs U320 makes little difference if you have only one hard disk.  I have 
never seen a disk that can do more than 70MB/s sustained (and the transfer 
rates under real load are usually much lower).

Two CPUs are not necessarily faster than one.  There is overhead in locking 
data structures.  If an application is only written to use one CPU then the 
second is just dead weight.

For good test results you change one thing at a time.  Change three or more 
things at once (CPU, disk, and compilation options) and you will never know 
how much each one affected the results.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID-1 to RAID-5 online migration?

2004-09-14 Thread Russell Coker
On Tue, 14 Sep 2004 09:54, Donovan Baarda <[EMAIL PROTECTED]> wrote:
> Is there any up-to-date "State of the RAID Nation" statement? I'd hate
> to start digging into RAID code only to find that RAID Mk.2 was going to
> replace everything I'd been looking at.

Not that I'm aware of.  The only change recently seems to be RAID-6, so if you 
contact the person who wrote that then you should get the latest available 
information.

The RAID-6 code was all written by H. Peter Anvin <[EMAIL PROTECTED]>.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID-1 to RAID-5 online migration?

2004-09-13 Thread Russell Coker
On Mon, 13 Sep 2004 18:32, "Donovan Baarda" <[EMAIL PROTECTED]> wrote:
> > Ummm... Bit confused here, but RAID 1 is not faster, than a single disk.
> > RAID one is just for 'safety' purposes. Yes, you do have 2 disks, but
> > in an
> > ideal world, they will both be synced with one another, and both be
> > doing
> > exactly the same thing at the same time.
>
> With RAID-1, both disks need to be written to at the same time, but for
> reads you can read different data from each disk at once. This means, in
> theory, that reading from RAID-1 can be 2x as fast.

In practice, under sustained load from a large number of processes or in the 
trivial case of two processes each doing non-stop file reads, it should come 
quite close to that theoretical speed.  The fact that Linux software RAID-1 
does not come close is (IMHO) an indication of a deficiency in the 
algorithms.  Probably many other RAID systems have the same deficiency, but I 
haven't bothered checking.

> However, whether you can actually read at 2x depends on how the read
> requests are scheduled to the disks.

Yes.

> I was originally thinking that a single file read would read alternate data
> blocks from alternate disks, and hence reading two files at once would
> cause head-seeks on both disks between the two files.
>
> Thinking about it more, for there to be any speed benefit, the length of
> data read from each disk would have to be a whole track from each disk. A
> whole track is kinda large, not giving you much "interleaving".

Also the cylinder size is unknown and unknowable to the OS.  The best thing to 
do is to read ahead in large chunks and hope that the firmware on the disk 
gets the right idea and starts reading ahead even further.

The benefit for a single file is that whenever there's a discontiguous section 
of the file or a requirement to read more metadata then the other disk can be 
used to save the seek time.  This will probably only give minor benefit.

> So Russell was right, reading two files at once is more likely to identify
> any speed benefits than reading a single file. If the RAID-1 implementation
> is smart enough, it can allocate read requests to different disks based on
> "closest last read" to minimise seeks and allow simultaneous reads for
> different read requests. Tuning this to get it right would be hard. I
> wouldn't be surprised if most RAID-1 implementations don't bother.

Writing an algorithm to do this would not be difficult at all.  The problem is 
fitting it into the overall design of the system.

I could write a sample program to simulate this in a few hours.  Getting the 
code to work in the Linux kernel is quite another matter.

This would be a really good kernel coding project for someone.  Much fame and 
fortune await whoever can make some significant improvements in this 
area!
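
A toy user-space sketch of such a simulation (made-up seek costs, not kernel 
code), comparing a "nearest head" policy against sending everything to one 
disk:

    DISKS = 2
    TRACKS = 10000

    def simulate(requests, policy):
        # Total seek distance (a stand-in for time) for a list of track numbers.
        heads = [0] * DISKS
        cost = 0
        for track in requests:
            d = policy(heads, track)
            cost += abs(heads[d] - track)
            heads[d] = track
        return cost

    def nearest_head(heads, track):
        # Send each read to the disk whose head is closest to the data.
        return min(range(DISKS), key=lambda d: abs(heads[d] - track))

    def single_disk(heads, track):
        # Baseline for comparison: every read goes to disk 0.
        return 0

    # Two processes each reading a different large contiguous file.
    requests = []
    for i in range(5000):
        requests.append(i)            # file A near the start of the disk
        requests.append(TRACKS - i)   # file B near the end of the disk

    print("one disk only:", simulate(requests, single_disk))
    print("nearest head :", simulate(requests, nearest_head))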

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID-1 to RAID-5 online migration?

2004-09-13 Thread Russell Coker
On Mon, 13 Sep 2004 15:39, Adrian 'Dagurashibanipal' von Bidder 
<[EMAIL PROTECTED]> wrote:
> While I can't really substantiate my assumption, Russell's right, in theory: in
> RAID1, you *do* have 2 disks, so reading 2 independent files *should* be
> possible without too much seeking.
>
> But OTOH you might run into disk scheduler issues that way, I really was
> thinking about reading one file.

In either case disk scheduler improvements could dramatically increase 
performance.  Improving the performance for the case of a single file without 
hurting performance for other situations is difficult.  Improving performance 
for reading two files should not be really difficult (as far as kernel coding 
projects go).

> Ok, I should finally learn not to open my mouth too wide where I don't have
> the experience, so I guess I'll stay silent in this thread now.

No need to do that; just do the tests.  Setting up a test-bed for these things 
should only take a few hours if you have some spare hardware.  As I 
previously noted, things MAY have changed since I last tested this area of 
performance.  This is why I never claimed you were wrong; I just requested 
benchmark results to support your assertions.  I've done tests on early 
2.4.x kernels that show the opposite of what you say, but things could have 
improved since then.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID-1 to RAID-5 online migration?

2004-09-13 Thread Russell Coker
On Mon, 13 Sep 2004 05:20, Adrian 'Dagurashibanipal' von Bidder 
<[EMAIL PROTECTED]> wrote:
> > Machines that can handle such an IO load have faster CPUs.  So for any
> > but the very biggest machines there is no chance of CPU performance being
> > a problem for RAID-5.
>
> You certainly have more experience than I - I was thinking about machines
> where the CPU is already heavily loaded by userspace tasks, where the
> additional load from RAID5 might be a poblem. Don't know for certain,
> though.

If you have a machine that is capable of 1296MB/s for RAID-5 calculations (as 
my P3-650 is), and if that machine has disk IO capacity of 200MB/s (not the 
maximum that you could achieve with such hardware, but better than the vast 
majority of such machines) then in theory you will use 15% of your CPU time.
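
The arithmetic behind that estimate (a rough upper bound that ignores 
everything except the parity calculation):

    xor_rate_mb = 1296.0   # parity calculation speed reported by the kernel
    io_rate_mb = 200.0     # assumed sustained disk IO for such a machine

    print("%.0f%% of one CPU" % (100 * io_rate_mb / xor_rate_mb))   # ~15%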

But considering that a bus-mastering hardware RAID device will take some RAM 
access time away from the CPU and that the RAM bandwidth is often the CPU 
performance bottleneck it's quite likely that a large portion of that 15% CPU 
performance hit will happen no matter how you attach your disks.

It would be really useful if someone spent a couple of weeks benchmarking 
these things and wrote a magazine article about it.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID-1 to RAID-5 online migration?

2004-09-13 Thread Russell Coker
On Mon, 13 Sep 2004 09:55, Donovan Baarda <[EMAIL PROTECTED]> wrote:
> > Do you have benchmark results to support this assertion?  Last time I
> > tested the performance of software RAID-1 on Linux I was unable to get
> > anywhere near 2x disk speed for writing.  I did tests by reading two
> > files that were 1G in size and the operation took considerably longer
> > than reading a single 1G file from a non-RAID system.  If RAID-1 was
> > delivering twice the read throughput then I should be able to read two 1G
> > files concurrently from a RAID-1 in the same time as would be taken to
> > read a single 1G file from a single disk.
>
> I think Russel must be checking if the class is awake :-)
>
> That doesn't sound like a fair test; reading two files at once means the
> heads have to bounce around all over the place.

No it doesn't!  In an ideal situation each read request would go to the disk 
whose head was nearest to the requested data.  If a program is reading data 
that is sequential on disk (mostly the case if you copy large files onto a 
file system that otherwise has no writes) then each read request should be 
sent to the same disk.

The result should be that each disk performs sequential reads of a single file 
with good performance.

> If you are just talking throughput, then reading a 1G file should take
> half the time on a RAID-1 that it does on a single disk.

No.  If the 1G file is contiguous then having a single disk read through it 
all will give the maximum possible speed.  Using two disks to increase 
performance of reading a single large file requires either RAID-0 or very 
large read buffers.  Linux does not seem to have such large read buffers.

Test it out.

> I suspect that reading 2 1G files at once on RAID-1 will be not much
> faster than reading 2 1G files on a single disk, because reading two
> files at once will probably be seek-bound, not throughput bound. RAID-1
> boosts throughput, not latency.

It should do so, but it doesn't seem to do it very well in Linux software 
RAID.  Do a test of reading two 1G files from a RAID-1 and I expect that 
you'll find that nothing has changed since the last time I tested and that 
performance is much less than you would hope for.

> HDD latency is a killer. It is significantly faster to read small
> objects from another machine's RAM over ethernet than off the local HDD;
> HDD latency is ~10ms, ethernet is <1ms.

Yes, in many situations NFS can outperform a local disk.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID-1 to RAID-5 online migration?

2004-09-13 Thread Russell Coker
On Mon, 13 Sep 2004 16:39, Andrew Miehs <[EMAIL PROTECTED]> wrote:
> Ummm... Bit confused here, but RAID 1 is not faster, than a single disk.

RAID-1 in the strict definition has two disks with the same data.  In the 
modern loose definition it means two or more disks with the same data (maybe 
3 disks).

There is an option of whether reads go to all disks in a RAID-1 set or to just 
one disk.  Some OSs (such as AIX) make this a tunable.  In Linux there is no 
option; reads go to one disk.

So if two programs make read requests at the same time with a software RAID-1 
on Linux then (ideally) each disk will receive one request and the result 
will be that the two requests are satisfied in less time than it would take 
on a single disk.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: RAID-1 to RAID-5 online migration?

2004-09-12 Thread Russell Coker
On Mon, 6 Sep 2004 23:35, Adrian 'Dagurashibanipal' von Bidder 
<[EMAIL PROTECTED]> wrote:
> RAID5 does need more computation than RAID1, so if you have a CPU
> bottleneck RAID5 will always be slower (assuming RAID5 is computed on the
> main CPU.)

raid5: automatically using best checksumming function: pIII_sse
   pIII_sse  :  1296.000 MB/sec
raid5: using function: pIII_sse (1296.000 MB/sec)
md: raid5 personality registered as nr 4

The above appears in my kernel message log when I load raid5.ko on a P3-650.  I 
am not aware of there ever having been any P3-650 machines that could sustain 
a 1296MB/s IO load (that requires more than two 64bit 66MHz PCI buses).

Machines that can handle such an IO load have faster CPUs.  So for any but the 
very biggest machines there is no chance of CPU performance being a problem 
for RAID-5.

For really big machines there is a good performance benefit to using hardware 
RAID-5; it's not to save CPU but to save IO.  RAID-5 operations on the host 
can double or more the amount of IO going through the system bus (think about 
the read-modify-write cycles for RAID-5).
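
To put a number on it, consider one small (sub-stripe) write.  This is a 
simplified sketch; stripe caches and full-stripe writes change the numbers.

    # Bus traffic for one small write:
    # With a hardware RAID-5 controller the host sends the block once and the
    # controller does the read-modify-write internally.
    hw_raid5_bus_ios = 1
    # With host-based RAID-5 the read-modify-write crosses the bus itself:
    # read old data + read old parity, then write new data + write new parity.
    sw_raid5_bus_ios = 2 + 2

    print("hardware RAID-5:", hw_raid5_bus_ios, "transfer over the bus")
    print("software RAID-5:", sw_raid5_bus_ios, "transfers over the bus")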

Low-end hardware RAID solutions have throughput bottlenecks because of 
computation speed.  I believe that the sole reason for this is to improve 
sales of high-end hardware RAID from the same companies.

> For reading, RAID5 is very fast, since access can be spread over many
> disks. OTOH each read from RAID5 touches n - 1 disks, so concurrent reads
> tend to be not as fast as some may expect them to be. Big caches are
> mandatory here!

If you read a single block from RAID-5 it should only hit a single disk.

> For RAID 1, you can get quite close to the theoretical max bandwidth: 1 x
> disk speed on writing, and 2 x disk speed for reading. (Of course,
> available bus bandwidth etc. will limit this, and there is some minimal
> management overhead, but RAID1 is quite simple, after all.)

Do you have benchmark results to support this assertion?  Last time I tested 
the performance of software RAID-1 on Linux I was unable to get anywhere near 
2x disk speed for writing.  I did tests by reading two files that were 1G in 
size and the operation took considerably longer than reading a single 1G file 
from a non-RAID system.  If RAID-1 was delivering twice the read throughput 
then I should be able to read two 1G files concurrently from a RAID-1 in the 
same time as would be taken to read a single 1G file from a single disk.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: High volume mail handling architecture

2004-09-11 Thread Russell Coker
On Sat, 11 Sep 2004 05:59, Theodore Knab <[EMAIL PROTECTED]> wrote:
> RAM is always not the answer with 32Bit machines. You can cause bounce
> buffers with too much RAM. The sweet spot for Linux on a 32Bit platform
> seems to be 4GB of RAM. I had 10GB of RAM in a Courier IMAP server and the
> server had problems releasing swap after a week. The kernel was compiled
> for 64GB of RAM. When I reduced the RAM to 4GB and recompiled for a 4GB
> machine these problems disappeared.

The solution to this is to use AMD64.  If you run an AMD64 kernel with 32bit 
user-space (should work well on Debian) then you get efficient access to >4G 
of RAM and the ability to run well-tested x86 binaries.

If I was going to purchase hardware for a big mail store now I would only 
consider Opteron.  The other 64bit CPUs don't offer the bang for the buck and 
Intel's x86_64 offering is still quite new and difficult to obtain.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: High volume mail handling architecture

2004-09-09 Thread Russell Coker
On Thu, 9 Sep 2004 18:44, Marcin Owsiany <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 09, 2004 at 06:03:20AM +1000, Russell Coker wrote:
> > You have to either be doing something very intensive or very wrong to
> > need more than one server for 20K users.  Last time I did this I got 250K
> > users per server, and I believe that I could have easily doubled that if
> > I was allowed to choose the hardware.
>
> We have a little over 10K users, and the disk subsystem seems to be the
> bottleneck. When we reach about 600 read transactions + 150 write
> transactions per second (as reported by sar -b), the load average starts
> to grow expotentially instead of proportionally. There are about 20K
> sectors read, and 3K written per second. (That was before I turned noatime
> on. After that we had about 2K sector writes and 70 write transactions
> less, and load average dropped to a more sane value - about 3, instead
> of 20.)

Last time I was doing this I had some Dell 2U servers (2650 from memory) with 
4 * 10K U160 disks in a RAID-5 (the 5th disk was a hot spare) and something 
like 4G of RAM.  The machines had almost no read access to the drives; less 
than 10% of disk access was for reads, because the cache worked really well 
(the accounts that receive the most mail are the ones that have clients 
checking them most often - in some cases people leave their email client on 
24*7 checking every 5 mins).

The write bottleneck was just under 3MB/s; I don't recall how many transactions 
that was.

To give better performance you may want to look at getting more RAM.  RAM is 
cheap and you can eliminate most read bottlenecks by caching lots of stuff.

3K sectors written per second isn't too good, but I guess that's because of 
the 20K sectors read.  Get some more cache and things should improve a lot.

Also, if you are using a typical Unix mail server (Postfix, Sendmail, etc) then 
the data is written synchronously somewhere under /var before being read from 
there and written to the destination.  If you use an NVRAM card from UMEM 
(http://www.umem.com/) for /var/spool then you could possibly double mail 
delivery performance.  If you use data=journal and put the journal for the 
mail store file system on the umem device you could probably double 
performance again.

> Also, did you implement virus/spam scanning on that box?

No!  Virus/spam scanning was on the front-end machines.  It was believed that 
the mail store machines were busy enough with doing the most basic work 
without virus scanning (also the number of licences for the anti-virus 
program didn't match the number of store machines that were planned).

You want to do as much work away from the mail store as possible.  Mail store 
machines cannot be replaced without major inconvenience to everyone 
(customers, staff, management).  Front-end anti-virus machines are 
disposable; if you have a traffic balancing device (such as a Cisco 
LocalDirector or IPVS) in front of a cluster of anti-virus machines then an 
anti-virus machine can go down for a few days without anyone being bothered.

If (hypothetically) anti-virus were to take 10% of the performance from a mail 
store then it could require another mail store machine (if you have 5-10 
machines), and that's one more machine which can break and cause massive pain 
to everyone.

Another thing: a mail store machine should require almost no CPU power.  Give 
it a single CPU that's not the fastest available.  It sucks when you have two 
almost unused CPUs which are both fast and hot, and then one breaks down, 
killing the machine.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: High volume mail handling architecture

2004-09-08 Thread Russell Coker
On Tue, 7 Sep 2004 23:48, Theo Hoogerheide <[EMAIL PROTECTED]> wrote:
> Try looking for a netapp or something else for central datastorage and a
> loadbalancer..

If you have a Netapp then you have to deal with Linux NFS issues which aren't 
fun.

If you have a cluster of storage machines and front-end SMTP servers to direct 
delivery to the correct back-end machine as well as Perdition to proxy POP 
and IMAP to the correct back-end machine then you can scale easily without 
dealing with NFS.

> This setup is proven to be very scalable, when you want to add another
> 20k users, just add some servers :)

You have to either be doing something very intensive or very wrong to need 
more than one server for 20K users.  Last time I did this I got 250K users 
per server, and I believe that I could have easily doubled that if I was 
allowed to choose the hardware.

I used Qmail (not my choice), Courier, Perdition, IMP, MySQL (for IMP), and 
was moving it to OpenLDAP at the time I left that project.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Debian rejects Sender ID

2004-09-08 Thread Russell Coker
http://www.nwfusion.com/news/2004/0907opensourc.html?net

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: which SATA-raid controller...

2004-08-16 Thread Russell Coker
On Mon, 16 Aug 2004 17:39, "R.M. Evers" <[EMAIL PROTECTED]> wrote:
> pci, the speeds are fairly good (surely not top of the bill though). the
> configuration is 3-disk raid5. fyi, here's the hdparm test:
>
> /dev/sda:
>  Timing buffered disk reads:  64 MB in  1.43 seconds = 44.76 MB/sec

That read speed is quite poor.  I would expect to see better speeds than that 
from a single S-ATA disk!  For several years people have been reporting 
better speeds than that from 3ware controllers (although almost no-one tested 
with as few as three disks).

But hdparm is a poor benchmark tool.  I suggest using Bonnie++.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Auth_imap: Required IMAP functions were not found.

2004-08-11 Thread Russell Coker
On Wed, 11 Aug 2004 22:56, Jan Wagner <[EMAIL PROTECTED]> wrote:
> > Upgrading from php3 to php4 while upgrading from Apache 1.x to Apache 2.x
> > seemed to have missed those extension lines.  I now have IMP working
> > again.
>
> I did ran into this issue 1 week ago. It happened when I was updating from
> Apache 1.3 to Apache 2.0.
> Maybe anybody should fill a bugreport. :D

Done.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Auth_imap: Required IMAP functions were not found.

2004-08-11 Thread Russell Coker
On Wed, 11 Aug 2004 22:28, Jan Wagner <[EMAIL PROTECTED]> wrote:
> # grep imap /etc/php4/apache2/php.ini
> extension=imap.so
> # grep imap /etc/php4/apache/php.ini
> extension=imap.so

Thanks for that!

Upgrading from php3 to php4 while upgrading from Apache 1.x to Apache 2.x 
seemed to have missed those extension lines.  I now have IMP working again.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Auth_imap: Required IMAP functions were not found.

2004-08-11 Thread Russell Coker
I get the above error from imp3 running with PHP4 and Apache2.  Any idea what 
the cause might be?

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: ssh and root logins

2004-08-10 Thread Russell Coker
On Tue, 10 Aug 2004 23:02, Mark Bucciarelli <[EMAIL PROTECTED]> wrote:
> On Tuesday 10 August 2004 10:52, Dale E Martin wrote:
> > Anyways, I would like to disable password logins for root on several of
> > my boxes but allow root to come in from known IPs and with known ssh
> > keys.  Is there a way to disable password logins for root in sshd_config
> > or root/.ssh/config, while leaving password logins intact for regular
> > users?
>
> Would it work to disable all ssh password logins and only allow logins with
> the proper private key?
>
> I find this most secure--no more worries about password cracks (I just have
> to worry about the physical security of the USB key on my keychain).

Also the security of the machine that you use to ssh to other machines.  If 
the machine can be compromised then the ssh private key can be stolen from 
the USB device by a trojaned ssh client.

Systems like Opie deal with this by having a calculation to generate the new 
one-time password which can be performed on another machine.  Run that 
calculation on a PDA and things are a lot more difficult for an attacker.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: ssh and root logins

2004-08-10 Thread Russell Coker
On Tue, 10 Aug 2004 20:52, Dale E Martin <[EMAIL PROTECTED]> wrote:
> I've noticed a fair number of attempted root logins on my various boxes

Same here.  Also attempted logins to "test", "admin", and some other accounts.

> over the last few weeks.  I don't know if there is a new ssh vulnerability
> (that thus far appears to be ineffective with my config) or if they are
> attempting one of the old ones...

It appears to be just password guessing.

> Anyways, I would like to disable password logins for root on several of my
> boxes but allow root to come in from known IPs and with known ssh keys.  Is
> there a way to disable password logins for root in sshd_config or
> root/.ssh/config, while leaving password logins intact for regular users?

Ideally we would be able to specify a list of acceptable IP addresses for each 
account, both in a central file and in per-user config files.  It would be 
really great if someone would write code to do this!

Of course this wouldn't necessarily cover you against a bug in sshd...
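
For key-based logins there is at least a partial approximation already: the
from= option in ~/.ssh/authorized_keys can tie a key to particular source
addresses, and "PermitRootLogin without-password" disables password logins for
root only.  A sketch (addresses and key data are made up), which still gives
no central per-account list and does nothing for password logins:

# /root/.ssh/authorized_keys
from="192.0.2.10,10.1.1.*" ssh-rsa AAAAB3NzaC1yc2E... admin@workstation

# /etc/ssh/sshd_config
PermitRootLogin without-password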

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: IIS worms and apache

2004-08-10 Thread Russell Coker
On Tue, 10 Aug 2004 19:38, Michelle Konzack <[EMAIL PROTECTED]> wrote:
> Am 2004-08-08 15:32:51, schrieb Russell Coker:
> > On Sat, 7 Aug 2004 14:56, "Shannon R." <[EMAIL PROTECTED]> wrote:
> > > Is there a debian package wherein the app recognizes
> > > IIS worm attacks? Then blocks these IPs in real time?
> >
> > Why bother?  They can't do any harm, and the bandwidth that they take is
> > usually a small portion of the total bandwidth.  Why not just ignore
> > them, it's the easiest thing to do.
>
> Allready tried webalyzer on a 10 MByte IIS-Worm infected LOG File...
>
> Forget it !!!

What was the problem?

When I was analysing 500M web logs with Webalizer I didn't have any serious 
performance problems.  I was analysing the logs three ways, for customers of 
the ISP, for outside users, and for both combined.  The machine doing the log 
analysis had a 400MHz SPARC CPU (not a fast CPU at all), and only 1G of RAM 
(which was a problem as Webalizer could use a lot of RAM at times).

Sometimes a single run would deal with 1G or 2G of log files from the web 
server.  It would take a couple of hours to process but it still wasn't a big 
deal.

> On some days I had on my Virtual WebServer @HOME (ADSL 128/1024)
> more then 50 MByte Logfiles with ISS-Worm and hash=xxx entries.

Maybe the thing to do would be to write a server that accepts the HTTP 
connection and then sets the TCP window size to zero (to tar-pit connections).  
Such a server program could listen on every IP address that's not used for a 
real web server and tie up resources on the zombie machines without wasting 
space in log files.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: postfix, spamassassin and spam ~ blocking cable and adsl modems

2004-08-08 Thread Russell Coker
On Sat, 7 Aug 2004 09:52, Steven Jones <[EMAIL PROTECTED]> wrote:
> We seem to be, being hit with in excess of 12,000 spam emails per day
> from adsl and cable modems in the US alone. Then we get brute force
> attackedthe server at times gets somewhat stretched...
>
> What would ppl suggest it the most efficient way to block such
> addresses?

If you use some DNSBL services you can block access from dial-up and broadband 
customer IP addresses without blocking mail servers.  Below is the list of 
DNSBL and RHSBL services that I have on one of my machines.

smtpd_client_restrictions = permit_mynetworks,
    reject_rbl_client bl.spamcop.net,
    reject_rbl_client dnsbl.sorbs.net,
    reject_rbl_client list.dsbl.org,
    reject_rbl_client cbl.abuseat.org,
    reject_rbl_client dnsbl.njabl.org,
    reject_rbl_client sbl.spamhaus.org,
    reject_rbl_client relays.ordb.org,
    reject_rhsbl_client rhsbl.sorbs.net,
    reject_rhsbl_client dsn.rfc-ignorant.org,
    reject_rhsbl_client postmaster.rfc-ignorant.org


> The goal here is to minimise disk i/o as that is the item being
> stretched, iostat -x 5 shows over 450% utilisation.delays are geting
> to 4+ hours...and they bitch if its over 5 minutes

Putting some of that iostat output as a text attachment to your email would 
really help us advise you about this (NB don't paste it into your email as 
the lines are too long and will get munged).

> I have 4 cpu's and spare capacity on these and I am only using 2.5 gig
> out of 4gig of ram so have spare herethe box only processes incoming
> smtp only, outgoing takes another route.

The spare RAM will be cache, so most likely your machine is doing few disk 
reads and it's entirely bottlenecked on disk writes when it's running.

If you mount all your file systems with the noatime option then you may save 
5% or 10% of your disk access.
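
That is just a matter of adding noatime to the options field in /etc/fstab,
for example (device and mount point are made up):

/dev/sda5   /var/spool   ext3   defaults,noatime   0   2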

Configure syslogd to use the "-" option for most (if not all) log files to not 
use synchronous writes.  Every email gets several lines in the syslog and you 
don't want them to all be written synchronously.
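
In /etc/syslog.conf that means putting a "-" in front of the file name, for
example:

mail.*                          -/var/log/mail.log
*.*;auth,authpriv.none          -/var/log/syslog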

> At present I am running ext3 on the logging and spool directories but
> considering reiserFS, a good idea?
>
> Also I am aiming to get more disks as I ahve only 2, so I can either
> raid 0 over the 3 new disks or split the queuesto 3 disks, which
> might be better?

Don't use RAID-0, it increases the probability of data loss through disk 
error.  A hardware RAID-5 over the 5 disks will give better write performance 
if you have a battery-backed write-back cache on the RAID controller (the 
cheap ones don't).

> Would a scsi hwraid based cache controller be worth it?

Yes.

If you mount your Ext3 file systems with "data=journal" and have external 
journals on a separate disk then you may get really good performance.

Usually the lower block numbers of a disk are mapped to the outer tracks and 
have a higher data transfer rate (use the zcav program in my Bonnie++ package 
to test this).  So you could have the main file systems for storing the data 
on one pair of disks in a RAID-1 array and the external journals for those 
file systems on the fastest part of another pair of disks in a separate 
RAID-1.  If you have a pair of disks used for nothing but journals (which 
will probably take <100M of disk space) then the seeks should all be very 
short which will give a fast access time.
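
A sketch of setting that up for a newly created file system (device names and
mount point are made up):

mke2fs -O journal_dev /dev/sdc1            # journal device on the fast disk
mke2fs -j -J device=/dev/sdc1 /dev/sda1    # data file system using it
mount -o data=journal /dev/sda1 /var/spool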

http://www.umem.com/PCINVRAMCARDS.html

An even better option might be to use non-volatile RAM storage devices.  Above 
is the URL for a company that makes PCI cards that have non-volatile storage.  
These cards can handle reads and writes at PCI bandwidth (four times faster 
than any hard disk even with 32bit PCI) and with no seek time (hard disks can 
only do about 100 seeks a second while the umem cards should do 50,000 or 
more depending on the size of the data blocks).

I don't know whether the Linux drivers for umem cards work with the latest 
hardware, you would have to check with them.

Also umem cards aren't particularly expensive.  Last time I got a quote the 
high-end cards were only about $700US.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: IIS worms and apache

2004-08-07 Thread Russell Coker
On Sat, 7 Aug 2004 14:56, "Shannon R." <[EMAIL PROTECTED]> wrote:
> Is there a debian package wherein the app recognizes
> IIS worm attacks? Then blocks these IPs in real time?

Why bother?  They can't do any harm, and the bandwidth that they take is 
usually a small portion of the total bandwidth.  Why not just ignore them, 
it's the easiest thing to do.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Restoring /etc

2004-08-06 Thread Russell Coker
On Sat, 7 Aug 2004 00:17, Mark Bucciarelli <[EMAIL PROTECTED]> wrote:
> Is there some clever way I can recreate the /etc dir?  (A dpkg-reconfigure
> trick?)  Or can I just copy the symbolic links from the working box over
> to the non-working box?

How about the following:

tar cf /tmp/foo.tar `find /etc -type l`
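
Or, a little more robustly with GNU find and tar (handles odd file names, and
tar stores the symlinks themselves rather than following them):

find /etc -type l -print0 | tar -cf /tmp/foo.tar --null -T -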

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: DSPAM Setup

2004-07-24 Thread Russell Coker
On Sat, 24 Jul 2004 00:27, Adrian 'Dagurashibanipal' von Bidder 
<[EMAIL PROTECTED]> wrote:
> Since you're running postfix... you may want to have a look at
> greylisting - the postgrey package provides this 
>
> Unfortunately, postfix 2.1 is required, so woody users will have to
> wait. Greylisting is a very resource-friendly way to limit spam - there

deb http://www.backports.org/debian stable postfix

The above is one of many sources.list entries that will give you Postfix 2.1 
on woody.

Postgrey build-depends on libberkeleydb-perl (>= 0.25-1), so if 
libberkeleydb-perl is back-ported then it should be really easy to back-port 
postgrey...
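
The back-port itself would be something like this (assuming deb-src lines for
unstable in sources.list and that the build-dependencies are already sorted
out):

apt-get source postgrey
cd postgrey-*
dpkg-buildpackage -rfakeroot -us -uc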

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: What is GreyListing

2004-07-21 Thread Russell Coker
On Wed, 21 Jul 2004 05:47, Michael Loftis <[EMAIL PROTECTED]> wrote:
> It won't work forever eventually spambots and virusbots will catch on
> and start retrying after being 4xx-ed but implementing it now makes you
> just harder than your neighbor to break into so for the time being they'll
> mostly leave you alone.

If you have to get through one 4xx response to send a message then it takes 
twice the network bandwidth to send a spam and more than twice the effort 
(queues have to be maintained etc).  If you were to require more than one 4xx 
response and a longer time-out then it becomes even more work for the 
spamming machine and thus reduces the volume of spam that can be sent before 
the machine is put on black-lists and/or shut down.

There is no solution, there are just many ways of alleviating the problem for 
us while making more problems for spammers.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: What is GreyListing (was: Re: Christian Hammers...)

2004-07-20 Thread Russell Coker
On Tue, 20 Jul 2004 23:51, Adrian 'Dagurashibanipal' von Bidder 
<[EMAIL PROTECTED]> wrote:
> Also, it requires postfix' policy server which is only available in
> postfix 2.1.

I think I'll give up on back-porting it.  Back-porting Postfix 2.0.16 was 
enough pain.  I guess I'll just have to move up my plans for upgrading the 
mail server to unstable.  :(

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: greylisting

2004-07-20 Thread Russell Coker
On Tue, 20 Jul 2004 23:28, Adrian 'Dagurashibanipal' von Bidder 
<[EMAIL PROTECTED]> wrote:
> On Tuesday 20 July 2004 14.06, Russell Coker wrote:
> > [...] Greylisted for 300 seconds... [...]
> > [..] mail server is broken.
>
> Russel, if there are arguments against greylisting, I'd like to hear

After the previous message explaining it I am all for greylisting!

> about them - so far, I've mostly seen success reports. (I like
> greylisting because while the idea is similar to TMDA or such things,
> it usually Just Works(tm) and users won't even notice it in most
> cases.)

It's not similar to TMDA in that it normally should not bother users, as 
opposed to TMDA which is specifically designed to annoy people.

> I won't say there are no problems, but so far these have been quite
> marginal.
>  - there are some broken mailservers treating a 4xx error like a 5xx
> (this unfortunately includes some big corporate servers)

No problem, I don't mind missing mail from such broken machines.

>  - server pools which don't send out the second try from the same IP.

This will still work eventually, it may just take more time.

How many such server pools are there?

> [0] I maintain the postgrey Debian package, as you may have guessed from
> the style of this email :-)

Any chance of back-porting it to woody?  I'm not sure I can upgrade my mail 
server to unstable at the moment...

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: What is GreyListing (was: Re: Christian Hammers...)

2004-07-20 Thread Russell Coker
On Tue, 20 Jul 2004 22:48, Christian Hammers <[EMAIL PROTECTED]> wrote:
> On 2004-07-20 Russell Coker wrote:
> > (host mail3av.westend.com[212.117.79.67] said: 450 <[EMAIL PROTECTED]>:
> > Recipient address rejected: Greylisted for 300 seconds... (in reply to >
> > RCPT TO command))  [EMAIL PROTECTED]
> >
> > Christian's mail server is broken.
>
> Err, no. It's not a bug it's a feature :-) Called "greylisting".
>
> In opposide to normal black- and white-listing here postfix has an
> additional policy daemon that checks if the tripel "sending ip, from, to"
> is already in the database and if not, reply with a 450 aka "temporary(!)
> failure" code and take note of it. If it's a real mailserver and not a
> trojan-winXP-desktop then it will try it again in a couple of minutes. If
> it does the above tripel will be whitelisted for the next
> days/month/whatever.

OK, that makes a lot of sense!  Sorry for mistakenly claiming that your mail 
server was broken.

I'm just looking at implementing that on my Postfix server now.  For reference 
of other interested people the postfix-doc package has documentation on this 
(see the following URL if you have postfix-doc installed locally): 
file:/usr/share/doc/postfix/html/SMTPD_POLICY_README.html#greylist

Hmm, the postgrey package is not available for woody (no great surprise I 
guess), I'll have to back-port it.
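
Once postgrey is running, the Postfix side appears to be just one extra
restriction in main.cf, something like this (the port depends on how postgrey
is started):

smtpd_recipient_restrictions = permit_mynetworks,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:60000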

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: hardware/optimizations for a download-webserver

2004-07-20 Thread Russell Coker
On Tue, 20 Jul 2004 20:05, Brett Parker <[EMAIL PROTECTED]> wrote:
> > (create large file)
> > [EMAIL PROTECTED]:~$ dd if=/dev/urandom of=public_html/large_file bs=1024
> > count=5 5+0 records in
> > 5+0 records out
> >
> > (get large file)
> > [EMAIL PROTECTED]:~$ wget www.lobefin.net/~steve/large_file
> > [...]
> > 22:46:09 (9.61 MB/s) - `large_file' saved [5120/5120]
> >
> > Of course, for reasonable sized files (where reasonable is <10MB),
> > I get transfer speeds closer to 11MB/s.  YMMV, but it is not a fault
> > of the tcp protocol.  Switched 10/100 connection here.  Of course real
> > internet travel adds some latency, but that's not the point - the NIC
> > is not the bottleneck, bandwidth is in the OP's question.
>
> *ARGH*... and of course, there's *definately* no compression going on
> there, is there...

If the files come from /dev/urandom then there won't be any significant 
compression.

http://www.uwsg.iu.edu/hypermail/linux/kernel/9704.1/0257.html

Once again, see the above URL with Dave S. Miller's .sig on the topic.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page



Christian Hammers

2004-07-20 Thread Russell Coker
(host mail3av.westend.com[212.117.79.67] said: 450 <[EMAIL PROTECTED]>: 
Recipient address rejected: Greylisted for 300 seconds... (in reply to RCPT 
TO command))  [EMAIL PROTECTED]

Christian's mail server is broken.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: max requests a celeron web server can handle

2004-07-20 Thread Russell Coker
On Tue, 20 Jul 2004 10:15, "Shannon R." <[EMAIL PROTECTED]> wrote:
> the machine will be hosting 1 website only. with about 3,000 static html
> files and about 5,000 image files (from 3kb to 100kb. and no, it's not a
> pornsite, but a bike enthusiast site)
>
> so what do you guys think? any ballpark as to how many simulataneous users
> it can serve and how many page-views it can do per hour will be very much
> appreciated.

5000 files of 100K would be 500M of data.  As most of them will be less than 
100K (as little as 3K) the data will probably be around 250M at a guess.

With 1G of RAM and 250M of files being served as long as the Apache processes 
don't take up more than 750M of RAM the files should all be cached, thus 
preventing the IDE disk from being a bottleneck.

If you want extreme performance of static content then the kernel http server 
will be better (and can redirect to Apache for more complex queries).  But 
it's quite likely that bandwidth will be your issue even if you only use 
Apache.

As for the number of hits/pages.  That really depends on what a "hit" is, and 
how many images are on a "page".

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Recommendations for redundant server esp. regarding shared storage?

2004-07-20 Thread Russell Coker
On Mon, 19 Jul 2004 22:25, Christian Hammers <[EMAIL PROTECTED]> wrote:
> Shared storage would be neat as we could do real load balancing on
> POP3/IMAP servers as well but has anybody a recommendation for a

In my experience neither POP3 nor IMAP uses any significant amount of CPU 
time.  Therefore having the files stored on local disks with a single CPU 
running POP/IMAP can be expected to give significantly better performance 
than having two machines running POP/IMAP and mounting the partitions over 
NFS.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: hardware/optimizations for a download-webserver

2004-07-20 Thread Russell Coker
On Tue, 20 Jul 2004 10:39, Michelle Konzack <[EMAIL PROTECTED]> wrote:
> >Other people get >10MB/s.  I've benchmarked some of my machines at 9MB/s.
>
> I do not belive it !

http://www.uwsg.iu.edu/hypermail/linux/kernel/9704.1/0257.html

See the above message from  David S. Miller <[EMAIL PROTECTED]> posted 
in 1997.  At the time Dave used that as his standard .sig because it was 
really ground-breaking performance from Linux of >11MB/s TCP!

When I did tests I never got 11MB/s on my machines, that is because my 
hardware was probably not as good, and because I used real-world applications 
such as FTP rather than TCP benchmarks.

100/8 == 12.5.  The wire is capable of 12.5MB/s, having a protocol do 11.26 
isn't so strange.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: hardware/optimizations for a download-webserver

2004-07-18 Thread Russell Coker
On Mon, 19 Jul 2004 05:59, Michelle Konzack <[EMAIL PROTECTED]> wrote:
> >Thinking of the expected 50KB/sec download rate i calculated a
> >theoretical maximum of ~250 simultaneous downloads -- am i right ?
>
> With a 100 MBit NIC you can have a maximum of 7 MByte/sec

What makes you think so?

Other people get >10MB/s.  I've benchmarked some of my machines at 9MB/s.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Your archive

2004-07-18 Thread Russell Coker
On Mon, 19 Jul 2004 00:29, "monta" <[EMAIL PROTECTED]> wrote:
> Fuck you

Silly newbie, the debian-isp list did not send a message to you, a virus did.

Don't complain to the list, blame someone who is responsible for the problem.  
You could blame the author of the virus, but it's probably impossible to 
track them down.  You could blame Microsoft for writing low quality software 
that is prone to viruses.

But the best thing to do is blame people who are stupid enough to use Outlook 
which is the main vector for spreading viruses.


You use Outlook.  So any message you send about such viruses should really be 
sent to yourself.

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Sunday, July 18, 2004 10:30 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Your archive
>
> Your document is attached.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: hardware/optimizations for a download-webserver

2004-07-16 Thread Russell Coker
On Sat, 17 Jul 2004 14:09, Nate Duehr <[EMAIL PROTECTED]> wrote:
> Other good ways to do this include a shared RAID'ed network filesystem
> on a central box and two front-end boxes that are load-balanced with a
> hardware load-balancer.  That gets into the "must be up 24/7" realm, or
> close to it.  I worked on an environment that did this with a hardware
> NFS server (NetApp) and the front-ends could be up or down, it just
> didn't matter... as long as enough of them were up to handle the
> current load.

There are two ways of making storage available to two machines.  One is to 
have a shared SCSI bus and clustering software - but in my experience this is 
a major cause of clusters being less reliable than stand-alone machines.  The 
other way is using an NFS server.

For an NFS server there are two main options, one is using a Linux NFS server 
and the other is a dedicated hardware box such as NetApp.  The problem with 
using a Linux machine is that Linux as an NFS server is probably no more 
reliable than Linux as an Apache server (and may be less reliable).  In 
addition you have network issues etc, so you may as well just have a single 
machine.  Using a NetApp is expensive but gives some nice features in terms 
of backup etc (most of which can be done on Linux if you have the time and 
knowledge).  A NetApp Filer should be more reliable than a Linux NFS server, 
but you still have issues with the Linux NFS client code.

My best idea for a clustered web server was to have a master machine that 
content is uploaded to via a modified FTP server.  The FTP server would 
launch rsync after the file transfer to update the affected tree.  Cron jobs 
would periodically rsync the lot in case the FTP server didn't correctly 
launch the rsync job.  That way there are machines that have no dependencies 
on each other.  The idea was to use IPVS to direct traffic to all the 
servers.
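
The push itself would be something simple like the following (host names and
paths are made up), run both from the FTP server's post-upload hook and from
cron:

for host in www1 www2 www3 ; do
    rsync -az --delete /var/www/ $host:/var/www/
done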

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page



Re: hardware/optimizations for a download-webserver

2004-07-16 Thread Russell Coker
On Sat, 17 Jul 2004 10:39, Nate Duehr <[EMAIL PROTECTED]> wrote:
> On Jul 16, 2004, at 1:43 PM, Markus Oswald wrote:
> > Summary: Don't bother with tuning the server and don't even think about
> > setting up a cluster for something like this - definitely overkill. ;o)
>
> Unless there's a business requirement that it be available 24/7 with no
> maintenance downtime - that adds a level of complexity (and other
> questions that would need to be asked like "do we need a second machine
> at another data center?") to the equation.

That's a good point.  But keep in mind that when done wrong clusters decrease 
reliability and increase down-time.

I have never been involved in running a cluster where it worked as well as a 
single machine would have.  Clusters need good cluster software (which does 
not exist for Solaris, there's probably something good for linux), they need 
a lot of testing (most people don't test properly), and they need careful 
planning.

Installing a single machine and hoping for the best often gives better 
results.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: hardware/optimizations for a download-webserver

2004-07-16 Thread Russell Coker
On Sat, 17 Jul 2004 05:42, Skylar Thompson <[EMAIL PROTECTED]> wrote:
> As long as we're not talking about 486-class machines, the processor is not
> going to be the bottleneck; the bandwidth is. Multiplying 150 peak users by
> 50kB/s gives 7.5MB/s, so your disks should be able to spit out at least
> 5MB/s. You should also make sure you have plenty of RAM (at least 512MB) to
> make sure you can cache as much of the files in RAM as possible.

As long as we are not talking about 486 class hardware then disks can handle 
>5MB/s.  In 1998 I bought the cheapest available Thinkpad with a 3G IDE disk 
and it could do that speed for the first gigabyte of the hard disk.  In 2000 
I bought a newer Thinkpad with a 7.5G IDE disk which could do >6MB/s over the 
entire disk and >9MB/s for the first 3G.  Also in 2000 I bought some cheap 
46G IDE disks which could do >30MB/s for the first 20G and >18MB/s over the 
entire disk.

If you buy one of the cheapest IDE disks available new (IE not stuff that's 
been on the shelf for a few years) and you connect it to an ATA-66 or ATA-100 
bus on the cheapest ATX motherboard available then you should expect to be 
able to do bulk reads at speeds in excess of 40MB/s easily, and probably 
>50MB/s for some parts of the disk.  I haven't had a chance to benchmark any 
of the 10,000rpm S-ATA disks, but I would hope that they could sustain bulk 
read speeds of 70MB/s or more.

The next issue is seek performance.  Getting large transfer rates when reading 
large amounts of data sequentially is easy.  Getting large transfer rates 
while reading smaller amounts of data is more difficult.  Hypothetically 
speaking if you wanted to read data in 1K blocks without any caching and it 
was not in order then you would probably find it difficult to sustain more 
than about 2MB/s on a RAID array.  Fortunately modern hard disks have 
firmware that implements read-ahead (the last time I was purchasing hard 
disks the model with 8M of read-ahead buffer was about $2 more than one with 
2M of read-ahead buffer).  When you write files to disk the OS will try to 
keep them contiguous as much as possible, to the read-ahead in the drive may 
help if the OS doesn't do decent caching.  However Linux does really 
aggressive caching of both meta-data and file data, and Apache should be 
doing reads with significantly larger block sizes than 1K.


I expect that if you get a P3-800 class machine with a 20G IDE disk and RAM 
that's more than twice the size of the data that's to be served (easy when 
it's only 150M of data) then there will not be any performance problems.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: lvm with raid

2004-07-03 Thread Russell Coker
On Fri, 2 Jul 2004 05:09, Christoph Moench-Tegeder <[EMAIL PROTECTED]> wrote:
> Seriously, as I need more disk space and CPU than disk IO, I went for
> RAID 5. If level 0 or 1 fits your application better, software RAID
> might be an option. But why burn CPU on RAID when your controller
> brings it's own CPU? And for mirroring disks, why not take the
> on-board controller?

Does software RAID-5 really burn CPU?  See the below web page for the speed of 
an old machine in doing the RAID-5 checksum calculations.  Given that data to 
be written to disk will already be in the cache it seems that there won't be 
any significant overhead for this.
http://www.uwsg.iu.edu/hypermail/linux/kernel/0110.2/0816.html
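
You can see what the md driver measured on your own hardware; it benchmarks
the checksum routines when the raid5 module loads and logs the results:

dmesg | grep -iE 'raid5|xor'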

The advantage of hardware RAID-1 is that there are bottlenecks for IO speed.  
Having the host send each block over the bus only once (with the controller 
duplicating the write to both disks) will help things.

> > The vast majority of hardware RAID devices are too slow to handle more
> > than 4 disks at full speed, the way they lay the data on the disk is not
> > documented (so if they mess up it will be really bad for you), and they
> > really aren't that cheap (not anything that's worth using).
>
> If your storage messes up, it will take the filesystem with it.

Not necessarily.  Sometimes you just have the RAID devices refuse to recognise 
the storage.  If you know the block layout then writing a program to 
reconstruct a RAID-5 from the set of disks (or even the set minus one disk) 
should not be difficult.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page




Re: cciss vs IDE (was: lvm with raid)

2004-07-03 Thread Russell Coker
On Fri, 2 Jul 2004 16:22, Michael Loftis <[EMAIL PROTECTED]> wrote:
> > If you have a hot-spare disk in the machine then you can have it take the
> > place of a disk that dies while the machine is running and then replace
> > the  defective hardware during a scheduled maintenance time.
>
> Except that in my experience a dead IDE drive takes the whole system with
> it even with MD RAID, the system just locks up.  (yes even on say three
> 'independent' channels).

That hasn't been my experience; maybe I haven't had a drive die in the right 
way.  All the disk failures I have experienced have had read errors as the 
only symptom.

It's expected that a drive electronics failure will take out any other drives 
on the same cable.  If a drive starts drawing excessive current then it can 
cause the entire system to hang (lack of power for the CPU and other 
devices), but I wouldn't imagine that to be common.

Maybe you encountered a bug in the device driver or the hardware?  In either 
case it would be interesting to repeat the test and file bug reports if it 
appears to be kernel code.

> YMMV of course...I've kind of thought about doing another experiment here
> lately I've got a handful of older drives at home that I've thought about
> trying failure scenarios (c'mon, don't tell me you're not the least bit
> interested in taking a ball peen hammer to a drive in a running system!!!)

Good idea!  Go for it!

Please make sure you have a camera that is capable of at least 2Mp on hand to 
put pictures of this on your web site!

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page




Re: cciss vs IDE (was: lvm with raid)

2004-07-01 Thread Russell Coker
On Fri, 2 Jul 2004 00:40, "Marek Isalski" <[EMAIL PROTECTED]> wrote:
> Russell Coker writes:
> > Having the OS on one disk means that a single disk failure will kill the
> > machine.  While you may have good backups it's always more convenient
> > if you can leave the machine running with a dead disk instead of having
> > to do an emergency hardware replacement job.
>
> I've not tried Linux's software RAID for about 5 years now.  How much does
> hotswapping a dead IDE drive kill the machine?  Does this at all depend on
> the IDE controller or can most modern ones cope with the abuse?

Physically plugging or unplugging a P-ATA (IDE) disk is not supported.  Some 
people have managed to get it to work, but it required the type of 
engineering effort that most people won't want to apply to their production 
machines (IE don't do it).

If you have a hot-spare disk in the machine then you can have it take the 
place of a disk that dies while the machine is running and then replace the 
defective hardware during a scheduled maintenance time.
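
With Linux software RAID adding a hot-spare to a running array is a one-liner
(device names are made up):

mdadm /dev/md0 --add /dev/hdc1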

The cheapest hot-swap disk array might be to have the disks in USB devices, 
USB supports hot-swap.  I haven't tried having more than one USB block device 
in a system so I don't know how well this would work.  My USB 2.0 IDE disk 
box can sustain over 30MB/s so there's no great performance loss unless you 
have one of the newest and fastest IDE disks.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page




Re: lvm with raid

2004-07-01 Thread Russell Coker
On Thu, 1 Jul 2004 20:37, Jogi Hofmüller <[EMAIL PROTECTED]> wrote:
> * Gustavo Polillo <[EMAIL PROTECTED]> [2004-06-30 17:22]:
> >   Is it possible to make lvm with raid ?? Is there anyone here that make
> > it? thanks.
>
> We just recently started tests with adaptecs zcr cards (2010S) and
> aic-7902 controlors. Our solution is to have one disk to hold the OS and
> three more that form a raid5. On top of the raid5 (which looks like one

Why not just use a four disk RAID-5 for everything?

If you have a decent amount of RAM then the OS partition(s) will usually have 
almost no access apart from writes to /var/log, and if you use the "-" option 
in the syslog configuration that shouldn't be a significant load either.  
Generally the more disks in a RAID-5 the better the performance that you can 
get, so having a four-disk RAID-5 is likely to give better performance for no 
cost (run "iostat -x 10" to verify this).

Having the OS on one disk means that a single disk failure will kill the 
machine.  While you may have good backups it's always more convenient if you 
can leave the machine running with a dead disk instead of having to do an 
emergency hardware replacement job.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page




Re: lvm with raid

2004-07-01 Thread Russell Coker
On Thu, 1 Jul 2004 17:43, Christoph Moench-Tegeder <[EMAIL PROTECTED]> wrote:
> ## Russell Coker ([EMAIL PROTECTED]):
> > > ## Gustavo Polillo ([EMAIL PROTECTED]):
> > > >   Is it possible to make lvm with raid ?? Is there anyone here that
> > > > make it?
> > >
> > > Works as expected. RAID appears as a simple SCSI drive.
> >
> > Only for hardware RAID.
>
> Yes. Given the price of RAID controllers (ServerRAID, for example) and
> the problems of software RAID, I strongly suggest getting a decent
> controller and do whatever RAID level you need.

Hardware RAID is more expensive.  Some of the SCSI hardware RAID devices have 
bottlenecks on throughput (I suspect to help sell the high-end models without 
the bottleneck).  The 3ware hardware RAID devices are well known for 
saturating the PCI bus, but to get maximum performance out of them some 
people used to use two 3ware cards in two PCI buses with software RAID-0 to 
beat the PCI bottleneck (I can't remember if it was 32bit-66MHz or 
64bit-33MHz, but the end result was that performance didn't go much better 
than about 210MB/s on a single 3ware card).

The vast majority of hardware RAID devices are too slow to handle more than 4 
disks at full speed, the way they lay the data on the disk is not documented 
(so if they mess up it will be really bad for you), and they really aren't 
that cheap (not anything that's worth using).

If you want just two disks for reliability then software RAID-1 is the easiest 
and most reliable.  You can mount a RAID-1 file system as a non-RAID device 
if you wish.  One big advantage of Linux software RAID is that you know 
what's going on.  With every hardware RAID system I've ever seen or heard of 
(including the biggest ones that Sun sells) there have been situations where 
the administrator finds themselves without a way of discovering what's 
happening with their data.  The cheapest RAID often doesn't support telling 
Linux about the status, or has so many bugs in the driver software that you 
can't rely on it.  The more expensive RAID hardware has a computer in it 
which can have protocol errors when talking to the host or simply crash.
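
Setting up such a two-disk mirror is simple enough, something like this
(device names are made up):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1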

> > Software RAID looks quite different to the OS and
> > there are still some minor quirks in getting it working for boot devices.
> > One of which is that for LILO you need the MBR to be provided by the
> > debian-mbr program and have the LILO block inside the RAID, as well as
> > having identical block numbers in both disks in the RAID-1 (RAID-5 and
> > RAID-0 is not supported).
>
> That's why I don't use software RAID. Thanks for the summary.

The summary was not for your benefit.  It was for the benefit of people who 
actually want to get work done.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page




Re: lvm with raid

2004-06-30 Thread Russell Coker
On Thu, 1 Jul 2004 03:33, Christoph Moench-Tegeder <[EMAIL PROTECTED]> wrote:
> ## Gustavo Polillo ([EMAIL PROTECTED]):
> >   Is it possible to make lvm with raid ?? Is there anyone here that make
> > it?
>
> Works as expected. RAID appears as a simple SCSI drive.

Only for hardware RAID.  Software RAID looks quite different to the OS and 
there are still some minor quirks in getting it working for boot devices.  
One of which is that for LILO you need the MBR to be provided by the 
debian-mbr program and have the LILO block inside the RAID, as well as having 
identical block numbers in both disks in the RAID-1 (RAID-5 and RAID-0 are 
not supported).

LVM should work with LILO, whether it's a good idea is an entirely separate 
issue.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page




Re: lvm with raid

2004-06-30 Thread Russell Coker
On Thu, 1 Jul 2004 01:49, Brett Parker <[EMAIL PROTECTED]> wrote:
> Just create the LVM volume on the RAID device, and that should be it,
> keeping /boot out of the LVM is a requirement fwict, otherwise the
> bootloader can't get access to the initrd or kernel image.

LILO is supposed to work on LVM devices as long as LVM doesn't move the blocks 
around under it (any such movement of /boot requires running "lilo" again).

I hope that LILO would work on LVM on software RAID, but both LVM and software 
RAID are complex and the interaction may make it fail to work.

If LILO does not work on LVM then please open a bug report about it, it is 
supposed to work.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page




Re: Which Spam Block List to use for a network?

2004-06-30 Thread Russell Coker
On Thu, 1 Jul 2004 01:34, Adrian 'Dagurashibanipal' von Bidder 
<[EMAIL PROTECTED]> wrote:
> I agree that false positives are extremely annoying, so an ISP/corporate
> anti-spam policy will have to be more conservative than what some here
> use for their own email.

The correct solution to false positives (IMHO) is to be extremely conservative 
in regard to dropping email.  Only a confirmed virus should be dropped on the 
floor.  Any other rejection of a message should be a code 55x in the SMTP 
protocol.

If you reject a message with a 55x code and a suitable explanation then the 
author of the message can find another method of contact, and there is no 
loss, merely inconvenience.
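
As a rough illustration of that policy, here is a minimal sketch in Python.  
The function name, the spam score, and the reply text are all made up for 
this example; it is not the interface of any particular MTA or filter.

    def smtp_verdict(has_confirmed_virus, spam_score, threshold=10):
        # Drop only confirmed viruses; reject anything else we dislike with
        # a 55x code so the sending MTA generates a bounce and the author
        # can find another way to make contact.
        if has_confirmed_virus:
            return "DISCARD", None   # the only case where mail vanishes silently
        if spam_score >= threshold:  # hypothetical score from a content filter
            return "REJECT", ("554 5.7.1 Rejected as spam; contact the "
                              "postmaster another way if this is an error")
        return "ACCEPT", "250 OK"

    print(smtp_verdict(False, 15))   # ('REJECT', '554 5.7.1 ...')
    print(smtp_verdict(True, 0))     # ('DISCARD', None)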

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page




Re: Which Spam Block List to use for a network?

2004-06-30 Thread Russell Coker
On Thu, 1 Jul 2004 01:43, "Robert Cates" <[EMAIL PROTECTED]> wrote:
> Well I do not remember ever seeing on the evening news or morning news
> paper that somebody was hurt or worst killed from a Spam attack!  Have you

I know many people who have a stated intention of killing a spammer if given a 
reasonable chance.  It would really suck if one of those people killed a 
non-spammer by mistake!

> >>When users try to deal with spam they often complain to the wrong people
> >>(think about joe-job's), they take the wrong actions (think about sending
> >>email to the "remove" address in a spam), and they don't have the
> >> competence
> >>to do it properly (think about the people who block postmaster mail etc,
> >> or who just block everything and complain to their ISP).
>
> Somebody who blocks everything, or ignorantly complains to their ISP, needs
> to be educated, not hand-held.  That "education" in my mind is a service
> and responsibilty of the ISP, an if it's a matter of getting too many phone
> calls per day, there can easily be an FAQ posted on the ISP web site.  Or
> maybe more appropriately it should be the responsibility of the software
> vendor providing the Anti-Spam software.

Sure.  Next time you run an ISP with over a million customers and only three 
people who really know how email works, you can try educating users.  I'll 
stick to giving them what I and management think is best for them.

> Who on the ISP side knows what the customer wants (blocked)?

I do because I'm the bofh!  ;)

> Are the ISPs calling all of their customers and asking?

No point.  The customer doesn't know the answer either.

> So the world will come to a day 
> when all Internet users won't have much choice, won't know what's getting
> blocked, won't know who's controlling what, won't know who's making what

If a user finds that their ISP gives them the wrong balance of spam protection 
to false positives then they can find another ISP.  ISPs that make the wrong 
choices will lose business and eventually go bankrupt or get bought out by 
better ISPs.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page




Re: Which Spam Block List to use for a network?

2004-06-30 Thread Russell Coker
On Wed, 30 Jun 2004 23:54, "Robert Cates" <[EMAIL PROTECTED]> wrote:
> Spam Black ("Block") Lists?  Not a good thing in my opinion!!  I mean,
> e-mail servers can be configured NOT to relay for unauthorized domains
> anyway.  I'm not an advocate of e-mail Spamming.  I just feel that the
> control or blocking should be left up to the individual user.  Just like
> it's my choice which "Office" package I want to (buy and) use. ;-)

Should we leave control of crime to the victim as well?  Or do you think that 
a professional police force is better?

When users try to deal with spam they often complain to the wrong people 
(think about joe-job's), they take the wrong actions (think about sending 
email to the "remove" address in a spam), and they don't have the competence 
to do it properly (think about the people who block postmaster mail etc, or 
who just block everything and complain to their ISP).

It's better for the ISP to have an anti-spam system that blocks most of the 
spam that customers want blocked and gets a small enough number of 
false-positives that they don't mind.  Some ISPs find that SpamCop's DNSBL 
fits this description...
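
For anyone who hasn't used one, a DNSBL query is just a DNS lookup: reverse 
the octets of the client IP address and append the list's zone.  Roughly like 
this in Python (the zone name is SpamCop's published one; everything else is 
illustrative):

    import socket

    def is_listed(ip, zone="bl.spamcop.net"):
        # 192.0.2.1 is queried as 1.2.0.192.bl.spamcop.net; an A record
        # answer means "listed", NXDOMAIN means "not listed".
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:
            return False

    # An MTA would typically refuse mail with a 55x code when this is True.
    # Most DNSBLs list 127.0.0.2 as a permanent test entry.
    print(is_listed("127.0.0.2"))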

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page




Re: email server - how to

2004-06-30 Thread Russell Coker
On Wed, 30 Jun 2004 21:23, Dave Watkins <[EMAIL PROTECTED]> wrote:
> Andreas John wrote:
> >> Best to use 2U machines with the maximum number of disks IMHO.  A 2U
> >> machine should be able to have 5 disks.
> >
> > I say: 9 Disks without problems. e.g.  pcicase
> > http://www.pcicase.de/catalog/produktweb/IPC-C2-X/IPC-C2D.htm
>
> The question is with that many disks is a single raid 5 going to be
> enough redundancy... Thats an awful lot of data to loose if 2 drives
> fail. May be worth thinking about RAID6 or a couple of RAID5 arrays striped

If you have two RAID-5 arrays striped then two disk failures can still lose 
all your data.  In a 10-disk setup where one disk has already failed, and 
where all disks are equally likely to fail, any further failure will lose 
your data on a single RAID-5, while on a pair of striped RAID-5s the chance 
is 4/9 that the next failure loses the data (four of the nine surviving disks 
are in the already-degraded array).
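
The arithmetic, as a small Python sketch (it assumes only that all surviving 
disks are equally likely to fail next, which as noted below isn't quite true):

    from fractions import Fraction

    def p_next_failure_fatal(total_disks, degraded_array_size):
        # One disk has already died in an array of degraded_array_size disks.
        # Data is lost only if the next failure also hits that array.
        survivors = total_disks - 1
        return Fraction(degraded_array_size - 1, survivors)

    print(p_next_failure_fatal(10, 10))  # single 10-disk RAID-5: 1, any failure is fatal
    print(p_next_failure_fatal(10, 5))   # two striped 5-disk RAID-5s: 4/9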

However in a RAID-5 where one disk has failed there is more work for the 
remaining disks, so the array that has already lost a disk may be more likely 
to lose a second one than a healthy RAID-5 is to lose its first.

Another issue is that physical factors (vibration and temperature) can cause 
or trigger disk death.  As a RAID-5 is likely to be made up of disks that are 
near each other, there may be a pattern to disk failures.

I would hope that RAID-6 would be significantly more reliable than RAID-5.

However there are lots of other causes of data loss.  Unless every read covers 
all the disks in a stripe and checks both parity blocks, a RAID-6 system will 
still fail if a disk returns bad data and claims it to be good.  Performance 
is better if you don't have to read every block in each stripe for every 
read, so I expect that most systems will allow turning off full-stripe reads 
(and that may be the default for some).
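
To make the silent-corruption point concrete, here is a toy stripe check in 
Python.  It uses a single XOR parity block to keep the example short; real 
RAID-6 uses two different syndromes, but the principle is the same.

    def xor_parity(blocks):
        # XOR all data blocks together to get the parity block.
        parity = bytes(len(blocks[0]))
        for b in blocks:
            parity = bytes(x ^ y for x, y in zip(parity, b))
        return parity

    data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
    parity = xor_parity(data)            # written when the stripe was last updated

    # One disk later returns bad data for its block but reports success:
    corrupted = [data[0], data[1], b"CCCX", data[3]]

    # Reading only the requested block never notices the problem; reading
    # the whole stripe and re-checking parity does.
    print(xor_parity(corrupted) == parity)   # False -> corruption detected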

There are lots of physical problems that can take out multiple disks; anything 
that can take out two disks can probably take out three just as easily.  These 
include repairmen who use a hammer as a CPU installation tool (this is not a 
joke).

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page



