Re: New SSL Certificates for Postfix Courier-imap

2004-12-17 Thread Craig Sanders
On Tue, Dec 14, 2004 at 06:03:23AM +, [EMAIL PROTECTED] wrote:
 On Mon, 13 Dec 2004, W.Andrew Loe III said:
 
  I am trying to figure out how to re-build my SSL certificates for 
  postfix and courier-imap. Right now my certificate for postfix has some 
  errors on it (wrong CN), but I am able to download it and set it to be 
  accepted by OS X (ends pop-ups in Mail.app). My courier-imap 
  certificate does not work in OS X, I've tried using mkimapdcert in 
  /usr/sbin/ but it is not generating certificates that are compatible 
  with OS X. Suggestions on how I can use OpenSSL to generate 
  certificates for both?
 
 i wrote a goofy script to create server certs:

try this one instead.

it makes certificates which can either be used in a postfix-tls server or
in a mail client for encryption and/or relay authentication.

(NOTE: i am far from an expert in SSL certificates.  i wrote the script after
reading various HOWTOs and notes on the web.  it may or may not be the best
way, or even a good way, of generating certificates.  i probably wouldn't use
it where identity & authentication was important.  but it works for
opportunistic encryption of mail transport and for client-certificate based
relaying)

---cut here---
#! /bin/sh

# make-postfix-cert.sh
#   Craig Sanders [EMAIL PROTECTED]   2000-09-03
# this script is hereby placed in the public domain.

# this script assumes that you already have a CA set up, as the openssl
# default demoCA under the current directory.  if you haven't done it
# already, run /usr/lib/ssl/misc/CA.pl -newca (or wherever openssl's
# CA.pl script lives on your system).
#
# then run this script like so: 
#
# ./make-postfix-cert.sh hostname.your.domain.com
#
# it will create the certificate and key files for that host and put
# them into a subdirectory.

site=$1

# edit these values to suit your site.

COUNTRY=AU
PROVINCE=Victoria
LOCALITY=Melbourne
ORGANISATION=
ORG_UNIT=
COMMON_NAME=$site
EMAIL=[EMAIL PROTECTED]

OPTIONAL_COMPANY_NAME=

# leave challenge password blank
CHALLENGE_PASSWORD=

# generate a certificate valid for 10 years
# (probably not a good idea if you care about authentication, but should
# be fine if you only care about encryption of the smtp session)
# comment this out if you want the openssl default (1 year, usually)
DAYS="-days 3652"

# create the certificate request
cat <<__EOF__ | openssl req -new $DAYS -nodes -keyout newreq.pem -out newreq.pem
$COUNTRY
$PROVINCE
$LOCALITY
$ORGANISATION
$ORG_UNIT
$COMMON_NAME
$EMAIL
$CHALLENGE_PASSWORD
$OPTIONAL_COMPANY_NAME
__EOF__

# sign it
openssl ca $DAYS -policy policy_anything -out newcert.pem -infiles newreq.pem

# move it
mkdir -p $site
mv newreq.pem $site/key.pem
chmod 400 $site/key.pem
mv newcert.pem $site/cert.pem
cd $site

# create server.pem for smtpd
cat cert.pem ../demoCA/cacert.pem key.pem > server.pem
chmod 400 server.pem

# create fingerprint file
openssl x509 -fingerprint -in cert.pem -noout > fingerprint

# uncomment to create a pkcs12 certificate for netscape
# (probably not needed)
#openssl pkcs12 -export -in cert.pem -inkey key.pem \
#  -certfile ../demoCA/cacert.pem -name $site -out cert.p12

cd ..
---cut here---

run it like so:

./make-postfix-cert.sh FQDN

you should use the server's announced FQDN as the server name in the
certificate.
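if you only care about opportunistic encryption and don't want to set up a CA
at all, a self-signed cert does much the same job.  a minimal sketch of that
variant (assumes the openssl command-line tool is installed; the hostname is
made up):

```shell
# self-signed variant of the same idea: no demoCA needed, which is fine
# for opportunistic TLS where nobody verifies identity anyway
site=mail.example.com          # illustrative hostname
mkdir -p "$site"
# one command makes both the key and a self-signed cert, non-interactively
openssl req -new -x509 -days 3652 -nodes -subj "/CN=$site" \
    -keyout "$site/key.pem" -out "$site/cert.pem" 2>/dev/null
# server.pem for smtpd, same layout as the script produces
cat "$site/cert.pem" "$site/key.pem" > "$site/server.pem"
chmod 400 "$site/key.pem" "$site/server.pem"
openssl x509 -subject -noout -in "$site/cert.pem"
```

the trade-off: without a CA, client-certificate based relaying (which needs
certs signed by a CA you trust) won't work, but plain TLS encryption will.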


once the cert has been created, copy $site/*.pem and demoCA/cacert.pem into
/etc/postfix on the target system, and add the following to /etc/postfix/main.cf
to enable TLS encryption.

---cut here---
smtpd_tls_cert_file = /etc/postfix/server.pem
smtpd_tls_key_file = $smtpd_tls_cert_file
smtpd_tls_CAfile = /etc/postfix/cacert.pem
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_use_tls = yes
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_tls_CApath = /etc/postfix/certs
smtp_tls_loglevel = 1
smtp_use_tls = yes
smtp_tls_per_site = hash:/etc/postfix/tls_per_site
smtp_tls_note_starttls_offer = yes
tls_random_source = dev:/dev/urandom
tls_daemon_random_source = dev:/dev/urandom
---cut here---

then:

  echo ".   MAY" >> /etc/postfix/tls_per_site
  postmap hash:/etc/postfix/tls_per_site
  mkdir /etc/postfix/certs
  /etc/init.d/postfix restart

tls_per_site allows you to control which remote sites are offered TLS and which
are not.  useful because some sites have broken TLS implementations, so you
need to disable TLS for them.
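the map entries might look something like this (the hostname is made up; "."
is the catch-all added above):

```shell
# illustrative tls_per_site map: disable TLS for one known-broken site,
# offer it ("MAY") to everyone else.  the hostname is invented.
cat > tls_per_site <<'EOF'
broken-mta.example.net  NONE
.                       MAY
EOF
# on the real system, rebuild the hash map and reload:
#   postmap hash:/etc/postfix/tls_per_site && postfix reload
cat tls_per_site
```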

if you want postfix to verify remote certs, you can put CA certs for them into
/etc/postfix/certs.  this is not strictly necessary - encryption works fine
without cert verification.
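note that smtp_tls_CApath expects the directory to contain hash-named symlinks,
not just bare PEM files.  a sketch, using a throwaway self-signed cert as a
stand-in for a remote site's CA cert (assumes the openssl command-line tool;
all names are illustrative):

```shell
# smtp_tls_CApath wants <subject-hash>.0 symlinks, like c_rehash makes
mkdir -p certs
# throwaway cert standing in for a remote site's CA cert
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Example Remote CA" \
    -keyout /dev/null -out certs/remote-ca.pem 2>/dev/null
# compute the subject hash and create the symlink openssl-based MTAs look for
hash=$(openssl x509 -hash -noout -in certs/remote-ca.pem)
ln -sf remote-ca.pem "certs/$hash.0"
ls certs/
```

(running `c_rehash /etc/postfix/certs` after dropping files in does the same
thing in one step.)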


i run this as a matter of routine whenever i create a new mail host in my
domain.  if i'm doing it for a new domain, i copy the script to somewhere else
(usually to somewhere on the target system) and create a new demoCA for that
domain, and then run the script there for all hosts and relay clients in that
domain.




finally, to allow a client with a known cert to relay through postfix, first
generate the cert just as if for a server

Re: blacklists

2004-12-10 Thread Craig Sanders
On Thu, Dec 09, 2004 at 11:18:16PM -0700, Michael Loftis wrote:
 --On Friday, December 10, 2004 16:43 +1100 Craig Sanders
 [EMAIL PROTECTED] wrote:

 DoS is a huge exaggeration. a few smtpd processes waiting to time out
 does not constitute a DoS. neither do a few dozen.

 I had about 800 waiting around in just a few minutes on the one server
 I began testing it on, but this is a large installation. And this
 isn't peak time...It's holding at around 1000 blocked hosts, most of
 them for blacklist infractions.

i certainly wouldn't recommend running it on a large installation. i'm
surprised you even tried.

i run it on my home system at the moment. i wouldn't run it at work.

i experiment with lots of things on my home system that i wouldn't even
think of doing at work. some of them, very few, actually turn out to be
worthwhile and safe enough to use at work.


try dropping only SYN smtp packets if you still want to experiment with
it, adding --syn to the end of the iptables args in the scripts. that
should stop the hanging processes.
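for the record, here's roughly what the --syn variant of the rule might look
like.  a sketch only - the address is made up, and it's printed rather than
run because iptables needs root:

```shell
# --syn matches only new connection attempts, so already-established
# sessions (and the smtpd processes serving them) finish normally
ip="192.0.2.77"     # illustrative offender address
rule="iptables -I INPUT -s $ip -p tcp --dport 25 --syn -j DROP"
echo "$rule"
```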

 But when you've got a lot of mail (and a number of customer domains
 that just tend to attract junk) it's easy to get a lot of processes
 hanging around.

unfortunately, my domain seems to attract a lot of junk. i've had my
domain for over 10 years, and kept the same email address all along.
and i've been joe-jobbed many times over the last decade by spammers
who don't like me (or my anti-spam methods, or the fact that i share
them openly), and i've had thousands of bogus, non-existent addresses in
my domain added to spam lists also by spammers who don't like me. the
current crop of spammers probably don't even notice or care, but in the
early days of spam it was different. spammers got very offended and took
it personally...which, of course, was excellent incentive to keep on
blocking them :)

i pissed off quite a few in the very early days, when spammers didn't
hide their identities and hadn't yet learned not to use their own
address. one of the things i wrote was a script which i could bounce
spam to. it would then parse the sender addresses and add them to a
database of spammers...and sent copies of each spam to a random subset
of the database. that infuriated them and amused me no end. my intention
was to annoy them at least as much as their MMF or green card or
whatever spam had annoyed me. unfortunately that stopped being a viable
tactic fairly quickly, and it certainly wouldn't scale to anything like
the spam load of today (back then 1 or 2 spams every few days was a lot.
now i wouldn't even notice it).

craig

ps: anyone know if MMF spams still happen? i haven't seen one for years.  could
be my body checks rules block them all, or maybe they've just given up since
419 scams are more lucrative.

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: EHLO/HELO [was blacklists]

2004-12-10 Thread Craig Sanders
On Fri, Dec 10, 2004 at 11:08:53PM +1100, Russell Coker wrote:
 I tried out reject_unknown_hostname but had to turn it off, too many
 machines had unknown hostnames.

 For example a zone foo.com has a SMTP server named postfix1 and puts
 postfix1.foo.com in the EHLO command but has an external DNS entry of
 smtp.foo.com. Such a zone is moderately well configured and there are
 too many such zones to block them all. The other helo restrictions get
 enough non-spam traffic.

actually, it's not moderately well configured. it's trivial to add a DNS
entry for postfix1.foo.com (preferably an A record and not a CNAME - doesn't
matter for HELO/EHLO but it does matter for $myorigin). it's even more trivial
to make the postfix server announce itself with a real hostname, one that
actually exists in the DNS - smtp.foo.com would be perfect. that's all it
takes to get past reject_unknown_hostname.
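either fix is a one-line change.  a hypothetical fragment of the foo.com zone
(addresses invented):

```
; make the announced HELO name resolve - an A record, not a CNAME
postfix1  IN  A  192.0.2.25
smtp      IN  A  192.0.2.25
```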

it's unusual to see this level of cluelessness from someone running a
unix MTA - i thought it was reserved for Exchange and Groupwise users.

 Using reject_unknown_hostname would get close to blocking 100% of
 spam,

nowhere near that much. it helps a little, but it's not even remotely close to
the final solution to spam.

 but that's because it would block huge amounts of non-spam email.

that's not the case in my experience (but that depends on exactly what kind of
mail traffic is received).

but it's your server, you get to choose what rules are on it.

craig

ps: yes, this is another rule i use at home but not at work. there are
lots of windows MTAs out there run by the clueless. fortunately, at home
i don't need or have to communicate with them, but at work there are
many people who might.

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: blacklists

2004-12-10 Thread Craig Sanders
On Fri, Dec 10, 2004 at 11:20:28AM -0700, Michael Loftis wrote:
 i certainly wouldn't recommend running it on a large installation.
 i'm surprised you even tried.

 Well, we're very anti-spam, and willing to try anything to help...I
 had to disable it after we got around ~8K rules in the tables on that
 box, that ended up causing the system CPU time to go through the roof.
 Though it was very effective. :)

i guess what it needs is something that can merge addresses into a CIDR range.

e.g. if it sees x.y.z.0 through to .7, it should merge them all into a /29.
delete the individual /32 rules and replace them with the /29.

managing that would get fairly complicated, especially when it has to later
merge a few /29s and /32s into larger blocks. but it should be doable.  if i
get time, i'll think about how that might be implemented.

the simpler thing is to just uncomment the /24 stuff that's already in there,
and block the entire /24 rather than just the offending host.  heavy-handed
but it should reduce the number of entries in iptables.
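a sketch of what deriving the /24 from an offending IP might look like in
shell - the address is made up, the SPAMMERS chain matches the script below,
and the rule is printed rather than run since iptables needs root:

```shell
ip="211.110.3.44"   # illustrative offending host
# keep the first three octets, zero the last, tack on the prefix length
net="$(echo "$ip" | cut -d. -f1-3).0/24"
echo "iptables -I SPAMMERS -s $net -p tcp --dport 25 --syn -j DROP"
```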

hmmm. one useful optimisation here would be to use Net::DNS to find out whether
the IP is in a DUL - and if it is, then block the entire /24. i reject all mail
direct from dynamic/dialups anyway so it wouldn't hurt anything.  (and remember
to add your own dialup pools to @whitelist :)




another thing it should do is add the rules to a SPAMMERS table, rather than to
the main INPUT table.  that would be a trivial change, only a few minutes work.

...done.

new version now published:

# v1.8  - changed to use SPAMMERS table rather than INPUT.
# use --syn when adding iptables rules, to avoid hanging smtpd processes
#
# the SPAMMERS table should be set up like this (BEFORE this script is run):
#
# # create SPAMMERS table
# iptables -F SPAMMERS
# iptables -X SPAMMERS
# iptables -N SPAMMERS
#
# # send all INPUT & FORWARD packets to the SPAMMERS table
# iptables -I INPUT -j SPAMMERS
# iptables -I FORWARD -j SPAMMERS
# 
# FORWARD rule needed only on gateway/router boxes, not normal hosts.
#
# you could optionally create a SPAMDROP table too, which logged the packet
# with a SPAMMERS prefix before dropping it...but that kind of defeats the
# purpose of this script which is to remove spammer noise from the logs.



 I've made a few modifications already, including making everything
 persistent and making it purge SEEN entries after not seeing a host
 for 24hrs (this also effectively caps any block time to being 24hrs).
 I might just set it so that it only watches our MailScanner and blocks
 the IPs it reports as sending virii. That would probably help to
 shrink the number of reports a lot, and help with my virus load.
 That'd be a good site-wide table to share (we use central mysql maps a
 lot).

diff -u ???


 one of the things i wrote was a script which i could bounce spam to.
 it would then parse the sender addresses and add it to a database of
  spammers...and sent copies of each spam to a random subset of the
 database. that infuriated them and amused me no end. my intention

 Now that's a heck of a tactic LOL :)

oh yes, i forgot the most amusing thing about it.  it not only sent it to a
subset of the spammer database, it also used random addresses out of that db as
the envelope and header sender addresses, so that they'd complain at each
other.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: blacklists

2004-12-10 Thread Craig Sanders
On Fri, Dec 10, 2004 at 05:01:33PM -0700, Michael Loftis wrote:
 So it's your fault they figured out the forged MAIL FROM trick! Bad
 craig, no donut! ;)

no, many of them already knew that.  it was obvious anyway.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)




Re: blacklists

2004-12-09 Thread Craig Sanders
On Thu, Dec 09, 2004 at 07:04:49PM +0100, Richard Zuidhof wrote:
 To see some statistics on the hit rate of various blacklists see:
 http://cgi.monitor.nl/popstats.html
 http://www.sdsc.edu/~jeff/spam/cbc.html

or if you run postfix and want to compare RBLs against the client IPs in your
mail.log, then download http://taz.net.au/postfix/scripts/compare-rbls.pl

notes: 

1. you will also need to download openlogfile.pl from the same place
and put it in the same directory as compare-rbls.pl

2. i wrote it several years ago, so it has several very old and now defunct
RBLs listed in it.  change the @dnsrbls array to list only the ones you
want to check.

for example, change this:

my @dnsrbls = qw(blackholes.mail-abuse.org relays.mail-abuse.org
 dialups.mail-abuse.org
 relays.osirusoft.com 
 inputs.orbz.org outputs.orbz.org
 or.orbl.org
 relays.ordb.org);

to this:

my @dnsrbls = qw(cn-kr.blackholes.us
 taiwan.blackholes.us
 brazil.blackholes.us
 hongkong.blackholes.us
 list.dsbl.org
 sbl-xbl.spamhaus.org
 dul.dnsbl.sorbs.net
 dnsbl.sorbs.net
 Dynablock.njabl.org
 relays.ordb.org 
);

BTW, except for dnsbl.sorbs.net (which i don't use because i don't like their
de-listing policy - but i do use their DUL list), these are the RBLs i am
currently using.

3. i just updated the script to use the @dnsrbls array as shown above...but
it's still useful to know how to configure it.

4. it is very slow.  it has to do one DNS lookup per RBL per IP address seen.
this is fairly slow anyway, and it uses Net::DNS, which is not noted for its
speed.

if you want to trial it on a small subset, do something like this:

tail -1000 /var/log/mail.log > /tmp/small.log
compare-rbls.pl /tmp/small.log | less
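under the hood, each RBL check is just a DNS lookup: reverse the octets of the
client IP and query under the list's zone.  a sketch of building the query
name (the IP is a documentation address; the list is one from the array above):

```shell
ip="192.0.2.99"
# reverse the octets: 192.0.2.99 -> 99.2.0.192
rev=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
query="$rev.sbl-xbl.spamhaus.org"
echo "$query"
# a listed IP resolves (typically to 127.0.0.x); an unlisted one gets NXDOMAIN
```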

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: blacklists

2004-12-09 Thread Craig Sanders
On Thu, Dec 09, 2004 at 10:22:24PM -0700, Michael Loftis wrote:
 if you want to see it, look in http://taz.net.au/postfix/scripts/
 
 it's called watch-maillog.pl

 One little note about that script, the DROP needs to be changed since
 basically you're DoSing yourself by hanging a bunch of connections

DoS is a huge exaggeration.  a few smtpd processes waiting to time out does not
constitute a DoS.  neither do a few dozen.

 because you suddenly start dropping their inbound packets while still
 'in-flight' as it were. postfix's default timeouts are about 300s, so
 you'll want to turn those down (300s seems too generous to me for most
 of them anyway)

aside from the DoS exaggeration, that is true, but i don't care...or more
accurately, i care more about spammer noise in my logs and the bandwidth that
spammers waste.  i have more than enough smtpd processes, ram, and cpu power
available to cope with a few (or even several dozen) smtpds waiting to time
out.  

i can also cope with the eventual dropped connection messages in the logs -
instead of vaguely annoying me like the spam rejects do, they give me a feeling
of satisfaction that i have in some small way slowed down the spamware by
silently dropping their packets.




the first workable fix i can think of is to DROP only smtp packets with SYN
set, rather than all smtp packets.

alternatively, i could extract the PID of the smtpd process and send it a HUP
at the same time as i created the iptables rule.

if it ever bothered me, i'd do one or the other...but, as i said, it's not
something i care much about.

craig

ps: watch-maillog.pl is a toy that i wrote for my own amusement.  if you like
it, run it or adapt it for your own needs.  if you don't, then ignore it.  i
don't claim that it's good software or even that it's useful.  i wrote it more
as a proof of concept than anything else.

pps: it also monitors TLS connection failures and adds them to
/etc/postfix/tls_per_site (which doesn't seem to be really necessary now, but
they were quite common a few years ago, mainly due to a particularly broken
version of communigate) and it does basic pop-before-smtp (dovecot only because
that's what i run).  these two features are actually useful :)

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: blacklists

2004-12-08 Thread Craig Sanders
On Wed, Dec 08, 2004 at 07:51:13PM +1100, Russell Coker wrote:
 On Wednesday 08 December 2004 09:55, Michael Loftis [EMAIL PROTECTED] 
 wrote:
  I have to agree with that statement. For us it suits our needs very
  well. I don't mind handling the extra retry traffic if it means
  legitimate mail on a 'grey/pink' host is just temporarily rejected
  or delayed while they clean up, in fact this is far more desirable
  for us.

 How would I configure Postfix to do this?

probably maps_rbl_reject_code = 450

 Craig, why do you think it's undesirable to do so?

because i don't want the extra retry traffic.  i want spammers to take FOAD as
an answer, and i don't want to welcome them with a pleasant 'please try again
later' message.  i think it is a sin to be polite or pleasant to a spammer :)

even on my little home system, at the end of an adsl line, i reject nearly
10,000 spams per day (and climbing all the time).  i would expect that to at
least double or triple if i 4xx-ed them rather than 5xx, depending on how much
came from open relays or spamhaus rather than dynamic/DUL.


i can see why some might want to do this, but not me.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: blacklists

2004-12-08 Thread Craig Sanders
On Thu, Dec 09, 2004 at 12:00:42AM +1100, Russell Coker wrote:
 On Wednesday 08 December 2004 20:16, Craig Sanders [EMAIL PROTECTED] wrote:
   Craig, why do you think it's undesirable to do so?
 
  because i dont want the extra retry traffic.  i want spammers to take FOAD
  as an answer, and i dont want to welcome them with a pleasant please try
  again later message.  i think it is a sin to be polite or pleasant to a
  spammer :)
 
 I agree that we don't want to be nice to spammers.  But there is also the 
 issue of being nice in the case of false-positives.

if it's a false positive, the sender will get a bounce from their MTA and they
can fix the problem or route around it.  IMO, that's far nicer to legit senders
than leaving them unaware that their mail isn't being delivered because it's
stuck in their MTA's queue - a queued message means it's probably 5 days before
they know there is a problem, while a bounce gives them instant feedback.

 The extra traffic shouldn't be that great (the message body and headers are 
 not being transmitted).  

it's still MY bandwidth being used by spamming vermin, even if it's not much (i
begrudge those bastards even a single bit) and it still generates huge amounts
of noise in my mail.log files.

the log file noise issue is important to me - i've recently started monitoring
mail.log and adding iptables rules to block smtp connections from client IPs
that commit various spammish-looking crimes against my system.  some crimes get
blocked for 60 seconds, some for 10 minutes, some for an hour.  each time the
same IP address is seen committing a crime, the time is doubled.  i am doing
this not because i'm worried that spammers will get their junk through my
anti-spam rules but because a) i don't want their noise in my mail.log, and b)
it was an interesting programming project that amused me for a few days of part
time perl hacking.

 When a legit user accidentally gets into a black-list their request
 to get the black-list adjusted can often be processed within the time
 that their mail server is re-trying the message.

similarly, they can resend the message themselves when they know the problem
has been fixed, WITHOUT flooding my logs with crap i don't want to see AND
they'll have had immediate feedback about the problem with their mail system.
everyone wins.

if it's important, they'll resend it.  if the sender doesn't think it's
important enough to bother resending, then why should i care?


  even on my little home system, at the end of an adsl line, i reject
  nearly 10,000 spams per day (and climbing all the time). i would
  expect that to at least double or triple if i 4xx-ed them rather
  than 5xx, depending on how much came from open relays or spamhaus
  rather than dynamic/DUL.

 30,000 rejections per day is only one every three seconds. Not a huge
 load.

the important factor here is that it's one every 3 seconds that I DON'T WANT.
i don't want the ~10,000 per day that i currently get and i see no reason to
take any action that will increase that number.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: blacklists

2004-12-08 Thread Craig Sanders
On Wed, Dec 08, 2004 at 07:41:12PM +0100, Philipp Kern wrote:
   Received: from [217.226.195.183] by web60309.mail.yahoo.com via HTTP;
   Mon, 29 Nov 2004 19:12:36 CET
   Content-Type: text/plain; charset=iso-8859-1
 
 SpamAssassin looks at all the headers. If this is a good choice or not
 is debatable. The MTA would only judge by the IP that connects to him
 which was in fact a Yahoo IP.

i turn off DUL checking in SpamAssassin.  i use DULs in postfix RBL checks, but
it makes no sense to do DUL checking in SA on mail received from a real MTA -
almost all mail will originate from a dialup/dynamic IP.

in local.cf, that looks like this:

# ignore DUL
score RCVD_IN_DYNABLOCK 0.0

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: blacklists

2004-12-08 Thread Craig Sanders
On Wed, Dec 08, 2004 at 03:38:36PM -0700, Michael Loftis wrote:
 --On Thursday, December 09, 2004 01:12 +1100 Craig Sanders
 [EMAIL PROTECTED] wrote:
 
 if it's a false positive, the sender will get a bounce from their MTA and
 they can fix the problem or route around it.  IMO, that's far nicer to
 legit senders than them not knowing that their mail isn't being delivered
 because it's stuck in their MTA's queue rather than bouncing back to them
 - the former means it's probably 5 days before they know there is a
 problem, while the latter gives them instant feedback.
 
 You massively overestimate the intelligence of email users.  Far, far
 overestimate it.

no, it's just that i don't give a damn about the dumb ones.  if they're too
dumb to figure it out, then they're too dumb to be using the net.

i'd rather provide help & useful information to the smart ones than spoonfeed
dumb users.  no matter how much you spoonfeed dumb users, they'll still cause
you problems and cost you time & money.  best of all, providing useful
information also gives a potentially educational experience to all, with a
small but real chance of starting them on the path towards being a smart user.

(in short, my unshakably firm belief about tech support has always been that
education is far better than spoonfeeding.  education works in the long run,
spoonfeeding is just begging for an eternal problem)


 When we were doing 5xx returns, about a dozen bounces a day were  
 reported as SPAM to my abuse address. 

never happened to me.  if it ever did, i'd probably just ignore it.  at most,
i'd reply this is not spam, it's a bounce, and include a URL to a page with
information about how mail works.  i certainly wouldn't waste much time on it.


 Further more, most people do not understand bounce messages, at all.  

yes, i've seen that thousands of times.  the common occurrence is some idiot
user forwarding you the bounce message that clearly states 'user unknown' or
similar and asking you 'why did this bounce?'

if i'm in a bad mood, i'll reply and say something like 'User unknown' means
that the user is unknown.  

or if i'm feeling particularly helpful, i may expand on that and give them some
suggestions on what to do or try.

either way, in any reply i'll *always* point out that the answer to their
question was right there in front of them - all they had to do was make the
trivial effort to read it.

 If they ever get the bounce message, increasingly I'm seeing the worrying
 trend that bounce messages from MTAs end up as SPAM or just /dev/null-ed.

you're not responsible for the stupid things that other people do.

 Anyway there certainly ARE merits to both sides, and I can and do 
 understand and see your point.  I don't like the log chatter but that's 
 easy to deal with compared to our L1 support time when we 5xx.  People get 
 stupidly angry sometimes too 'you have no right to bounce my mail!'  that's 
 like saying the post office has to deliver all bombs, but logic has nothing 
 to do with most people :)

yes, i've seen that too.  the obvious response is that the post office can't
deliver misaddressed mail either.  if you send a snail-mail letter with the
wrong address on it, the post office can't magically correct it.  if you're
lucky, you'll get it returned back to you.  most likely, it will just sit on
the hall-stand of the wrong address along with all the other misaddressed mail
and mail for people who used to live there that they receive.

just because they're angry doesn't make them right.  if they won't listen to
reason, then ignore them.  you've done your best, that's all that can be asked
of you.


craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: blacklists

2004-12-08 Thread Craig Sanders
On Thu, Dec 09, 2004 at 11:27:27AM +1100, Russell Coker wrote:
 On Thursday 09 December 2004 01:12, Craig Sanders [EMAIL PROTECTED] wrote:
  the log file noise issue is important to me - i've recently started
  monitoring mail.log and adding iptables rules to block smtp connections
  from client IPs that commit various spammish-looking crimes against my
  system.  
 
 Interesting.  Do you plan to package it for Debian?

nope, it's just a trivial script - and one that's probably dangerous to use if
you don't understand what it's doing, and i don't plan on documenting it beyond
comments in the script itself.  in short, it's a toy for me.

if you want to see it, look in http://taz.net.au/postfix/scripts/

it's called watch-maillog.pl

there's a bunch of other postfix related scripts in there.

you may also like qvmenu.pl, a curses-based postfix queue browser that i wrote.
it allows you to pipe queued messages into less or urlview, select multiple
messages and delete, hold, unhold or re-queue them (i.e. a wrapper around
postsuper).  pretty simple to add new features...as with most of my scripts, i
write for readability rather than optimised speed.

i work on this occasionally, and add new stuff as i need it...e.g. there's a
half-implemented bounce to sa-spam/sa-ham alias feature...that's because i
have a header_checks rule to HOLD all mail with an SA score over 10.  this is
the reason i wrote qvmenu.pl in the first place.  false-positives i just
unhold.  spam, i usually view & urlview them to get fodder for my SA
body_checks rules, then delete them.  i also want to be able to bounce them to
sa-spam or sa-ham (sa-learn aliases on my system).   i'll finish that off when
i get time.

because it calls mailq to get the queue listing, it's probably too slow to use
on any system with thousands of messages in the queue.  i've used it on systems
with hundreds, and found it to be OK.  actually, i doubt if it would be much
faster even if it trawled the queue directories itself - mailq isn't exactly
inefficient and bloated.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: blacklists

2004-12-08 Thread Craig Sanders
On Thu, Dec 09, 2004 at 11:27:27AM +1100, Russell Coker wrote:
 On Thursday 09 December 2004 01:12, Craig Sanders [EMAIL PROTECTED] wrote:
  the log file noise issue is important to me - i've recently started
  monitoring mail.log and adding iptables rules to block smtp connections

i also wrote another trivial script which fetches a named blackholes.us text
file and creates iptables rules to match.  i'm not sure it's a worthwhile
experiment - if for no other reason than that iptables doesn't seem to
cope well with thousands of rules in a chain (i could probably work around that
with a chain per country...but i'm probably not going to bother since i'm
pretty sure that this is NOT a good thing to do).

i'm currently running with korea.blackholes.us completely filtered out as a
test.  (korea is where most of my spam attempts come from).  so far it has
blocked over 16000 packets from korea.  since all *I* ever get from there is
spam and probes from script-kiddies and viruses, that's a Good Thing(tm).  it
probably wouldn't be a good thing anywhere other than on my home gateway.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: a couple of postfix questions

2004-12-07 Thread Craig Sanders
On Tue, Dec 07, 2004 at 03:57:30PM -0500, Stephen Gran wrote:
 I think that I would like to migrate to all exim4 and postfix (I would
 basically like to dump the sendmail and qmail systems).

good choices.

 The things that are vitally important are the ability to reject at smtp
 time for invalid localparts and for viruses - I believe that postfix (at
 least in recent versions) can do this, but I am just not sure.  I do not

postfix can.  in fact, it does it by default.  

you can also configure it with a relay_recipient map to reject at smtp level
for unknown users in relay domains as well as local domains (by listing all the
valid users in the relay_recipient map).  particularly useful for backup MX
machines and gateway boxes that forward to an internal/firewalled mail server.
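for example, a minimal main.cf fragment for a backup MX (domain and paths are
illustrative, not from the original post):

```
relay_domains = example.com
relay_recipient_maps = hash:/etc/postfix/relay_recipients
```

where /etc/postfix/relay_recipients lists one valid address per line (the
right-hand side is ignored, "OK" by convention), rebuilt with
"postmap /etc/postfix/relay_recipients" after editing.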


 I guess what I am asking for is people's experiences migrating existing
 (especially sendmail) systems to postfix, and how easy it is to tie other
 things into it, especially at smtp time.  We're talking about migrating

migrating from sendmail to postfix is easy.  in fact, migrating between
sendmail, postfix, exim, smail and most other MTAs except qmail is fairly
straight-forward - as long as you plan out what you're going to do in advance
and follow the plan, you're unlikely to run into any problems.  they're all
similar enough that you can even re-use some of the map files, although some
require minor transformations.  e.g.  sendmail and postfix virtual user tables
are almost identical, except that postfix's virtual table allows multiple
recipients on the RHS.

migrating to/from qmail is always a PITA.  aside from being ancient (and thus
not keeping up with current mail practices, especially spammers and viruses),
the main problem with qmail is that it is a dead-end trap.  it makes no
attempt at backwards/forwards-compatibility with other MTAs, so any migration
basically involves re-doing everything from scratch.  you won't be able to
re-use map files (like /etc/aliases) or make the fairly trivial transformations
to convert them, e.g., a sendmail mailertable to a postfix transport table.
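to illustrate the kind of trivial transformation meant here (a sketch,
assuming the common mailertable layout of "domain<TAB>mailer:host"; the only
rewrite shown is mapping sendmail's "esmtp" mailer name onto postfix's "smtp"
transport, and real tables may need slightly more):

```shell
# read a sendmail mailertable on stdin, emit postfix transport(5)
# lines on stdout, skipping comments and blank lines
convert_mailertable() {
    awk '!/^(#|[[:space:]]*$)/ {
        sub(/^esmtp:/, "smtp:", $2)
        print $1 "\t" $2
    }'
}
```

e.g. `convert_mailertable < /etc/mail/mailertable > /etc/postfix/transport`,
then `postmap /etc/postfix/transport`.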


 Thanks for any pointers to docs, experiences, or anything else. Martin
 and Craig - I know you two in particular are both big advocates of
 postfix, so I guess I am partly addressing this to you two, although
 feel no obligation to give free tech support :)

well, if you've read the archives, you've already seen my reasons for preferring
postfix, so i won't repeat them here.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: blacklists

2004-12-07 Thread Craig Sanders
On Tue, Dec 07, 2004 at 10:18:28PM +0100, Marek Podmaka wrote:
   My question is - does the spam software (or whatever is used for
   sending majority of spams) try to re-send it? 

most (if not all) spamware and viruses won't.  open relays and spamhaus sites
and other real MTAs will.

 How often 

impossible to say, depends on many factors on the sending host - how it is
configured, what software it is running, how busy it is, how many items in the
queue, etc etc.  could be every few seconds, could be several hours between
delivery attempts.

many mail servers implement an exponential backoff strategy: they will try
again quickly at first and, on each failure, increase the delay between
attempts (e.g. by doubling it) until a maximum delay is reached.

 and for how long? 

until the queue lifetime expires (usually 5 days, although it could be 
anything).
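in postfix, for instance, both the retry backoff and the queue lifetime are
tunable main.cf parameters (the values shown are the usual defaults of the
era, quoted from memory - check postconf(5) for your version):

```
minimal_backoff_time = 1000s
maximal_backoff_time = 4000s
maximal_queue_lifetime = 5d
```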

 Now I reject by 554 code...  should I change to 4xx?

if it suits your needs.  i wouldn't.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: a couple of postfix questions

2004-12-07 Thread Craig Sanders
On Tue, Dec 07, 2004 at 06:13:58PM -0900, W.D.McKinney wrote:
 On Wed, 2004-12-08 at 08:14 +1100, Craig Sanders wrote:
  migrating to/from qmail is always a PITA.  aside from being ancient (and thus
  not keeping up with current mail practices, especially spammers and viruses),
  the main problem with qmail is that it is a dead-end trap.  it makes no
  attempt at backwards/forwards-compatibility with other MTAs, so any migration
  basically involves re-doing everything from scratch.  you won't be able to
  re-use map files (like /etc/aliases) or make the fairly trivial transformations
  to convert them, e.g., a sendmail mailertable to a postfix transport table.
 
 Wow Craig,
 
 We moved over from the bloated Postfix box to a lean mean qmail install,
 been rock solid since. 

you obviously speak a different language, with strange and bizarre definitions
for common words & phrases like "bloated" and "rock solid".

trying to interpret here, "bloated" must mean something like "has essential
features", and "rock solid" probably means "reasonably solid if you ignore
really stupid annoyances like the fact that it can't reject a message at the
SMTP level, it *always* accepts and then bounces it".

 To each his own though and as I always say, pick a horse and learn to
 ride. :-)

yes, but it's generally better to pick a good horse rather than a three-legged,
half-blind bad-tempered mule that is well past retirement age.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: a couple of postfix questions

2004-12-07 Thread Craig Sanders
On Tue, Dec 07, 2004 at 06:35:47PM -0900, W.D.McKinney wrote:
   To each his own though and as I always say, pick a horse and learn to
   ride. :-)
  
  yes, but it's generally better to pick a good horse rather than a
  three-legged, half-blind bad-tempered mule that is well past retirement age.
  
  craig
 
 Hmm, meaning Hotmail, Yahoo and others run three legged mules ? :-)

yes.

the fact that some large sites run a particular piece of software isn't
terribly significant.

huge companies like Microsoft run Windows, but that doesn't in any way mean
that Windows isn't a huge steaming POS.

and many large mail sites still use sendmail.  ditto.

they either don't know any better, or it would take too much effort and/or
cause too many problems to change, so it's not worth it.


 Bloated means overweight, non essential and not availble to chuck out
 the window up here.

it's stretching the imagination way beyond credibility to call postfix in any
way bloated.

even with all the extra features (many of which are *essential* these days),
postfix still out-performs qmail in every way.  in fact, some of the extra
features help it to outperform qmail.


 Rock Solid means it's been so long long since we needed to make a
 change, it's easy to forget how.

the fact that a) qmail makes it hard to make changes, and b) qmail doesn't even
support many of the things required in a modern MTA, means that you have no
choice but to ignore important things like backscatter and recipient
validation. 

that's not a feature, that's a bug.

that doesn't mean you *SHOULD* ignore them, it means that the software you
choose to use makes it impossible to do anything about them.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: hanging imapd processes (was Re: Runaway processes ?)

2004-11-28 Thread Craig Sanders
On Sun, Nov 28, 2004 at 09:26:34PM +0200, Uffe Pensar wrote:
 I'm in the process of separating the webmail from the imap and I have 
 have installed xinetd with
 max_load and other limits.
 But still I can't understand that i get those hanging imapd processes ?
 
 # ps axu|grep imapd|grep root
 root 22700  0.0  0.0  4064 1308 ?        S    Nov25   0:00 [imapd]
 root 11359  0.0  0.0  4064 1400 ?        S    Nov27   0:00 [imapd]
 root  6473  0.0  0.0  4064 1400 ?        S    Nov27   0:00 [imapd]
 root  3801  0.0  0.0  4064 1400 ?        S    Nov27   0:00 [imapd]
 root  6194  0.0  0.0  1752  732 pts/7    S    21:11   0:00 grep imapd
 
 woody and uw-imap-ssl (those hanging connections are not coming from 
 webmail)

there could be clients still connected from Nov 25 & Nov 27.  impossible to
tell just from a ps listing.  you can use netstat and lsof to see which
processes have active connections (or are listening) on the imap port.   see
the man pages for details.

but i wouldn't bother.  uw-imapd is junk, and its problems are pretty much
unfixable.  that explains a lot.  i'd just replace it with something sane.


BTW, you chose the more difficult path.  instead of just replacing uw-imapd
with dovecot, which would have been a simple action with one isolated effect
(changing the imap daemon), you chose to replace inetd with xinetd, which
affects dozens or possibly hundreds of unrelated inet daemons.  why?

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: Runaway processes ?

2004-11-24 Thread Craig Sanders
On Wed, Nov 24, 2004 at 07:15:43PM +0200, Uffe Pensar wrote:
  [141 lines of quoted material deleted.  please learn to trim your quotes ]

 ok thanks for all the good advices, I will install postfix and have a 
 look att the dovecot and xinetd packages

if you're running dovecot, you don't need xinetd.  it is a standalone daemon,
which has its own (configurable) limits on maximum simultaneous connections.

 but as a quick fix its seems that authenticating from a local server
 (instead of radius) and restricting the number of webmailsessions has
 helped for the moment. But I suppose we have to buy more servers in
 the near future.

yes, separating the tasks of 1) sending & receiving mail and 2) storing it &
providing imap/pop/webmail/etc access is very useful.

remember that mail is an I/O bound system.  i.e. most of the time the processor
is sitting idle waiting for disk I/O to complete.  upgrading the CPU will do
little or no good here. to improve performance, you need to improve the I/O
speed - you can do this with faster disks, hardware raid card with large
non-volatile cache, and by adding more RAM to the system.  or by spreading the
I/O load over more disks and/or more servers.

a good starting point for a design is to have multiple small & cheap machines
which are configured to accept all incoming mail (i.e. the MX records point at
them), filter spam and viruses, and then either forward it on to the backend
mail store server, or write it directly into NFS-mounted mail spool
directories.  if you use nfs, then you MUST use an NFS-safe mailbox store, like
Maildir.  trying to use mbox over NFS will almost certainly lead to mailbox
corruption (although on debian it should be safe because *all* mail handling
programs should use the *same* NFS-safe locking method.  i wouldn't count on it
though, especially if you compile your own mail programs rather than use the
packaged ones)

note that these MX boxes *MUST* have access to a list of valid recipients for
all domains that they accept mail for.  this allows them to reject mail for
unknown users during the smtp session rather than accept-and-bounce, so they
don't generate backscatter or get bogged down with virus bounces and
undeliverable spam bounces.


the backend mail store could be either one very large and expensive server
which stores the mail(*) and handles all the imap clients, or one medium-sized
file-server (which stores the mail) and several small and cheap imap boxes
which handle the imap connections, with NFS-mounted mail store (see comments
above about NFS-safe mailboxes).  when building the mail store, I/O performance
is your key design criterion.  don't worry about CPU - all the CPU-heavy tasks
(like spam and virus filtering) are done by the MX boxes.


you may optionally want another box to handle all outgoing mail (i.e. the one
that your clients use to send mail through, the mail relay).  this one should
also be optimised for fast I/O.  one good way of doing this is to use a
solid-state-disk (SSD - essentially a large battery-backed ramdisk that looks
like a scsi or ide drive) for the mail queue.  e.g. mount /var/spool/postfix
(or /var/spool/mqueue if using sendmail) on the SSD device.  SSDs are
typically small and expensive, 1 or 2GB is likely to be the limit of
affordability...but that's more than enough for a mail queue partition.

in my experience, though, all but the largest ISPs receive dozens or hundreds
of times more mail than they send, even when taking mailing lists into account.
the mail store machine can double as the outbound mail relay - giving it an SSD
device for the mail queue is a good idea.
 


(*) e.g. on multiple 15000 rpm hard disks on a hardware raid-5 controller with
at least 128MB of non-volatile cache ram.  or whatever else it takes to
optimise this box for extremely fast I/O.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: Runaway processes ?

2004-11-23 Thread Craig Sanders
On Tue, Nov 23, 2004 at 02:28:43PM +0200, Ulf Pensar wrote:
 We have an emailserver that we had to reboot the hard way a couple
 of times a week.  Now its a couple of time a day (perhaps because
 the number of users have been growing) 
 [...]
 the inetd generates root owned processes and
 it doesnt stop before inetd is being killed. Then we have to reboot
 the server to go on working.

if you really must run your imap server from inetd, then consider using xinetd
which allows limits on the number of simultaneous connections.  or get rid of
uw-imapd junk and replace it with something better (see below)

 I guess it is the webmail that is creating those imap-processes but
 I'm not sure (could be imaps-clients of course).
 
 Have you seen anything like that and what could be done?
 
 Facts:
 dell power edge 600 Sc, intel celeron 1,7 GHz, 2 GB ECC memory
 running:
 - woody (kernel compiled from debian package kernel-source-2.4.18)

fine so far.

fortunately, there's a lot you can do to help performance.

 - uw-imap-ssl (starting from inetd)

replace with something sane.  dovecot or courier-imapd for example.

dovecot works with mbox and Maildir mail boxes, courier-imapd only with
Maildir.  you probably have mbox if you're running an old sendmail machine.

it's a trivial upgrade - apt-get install dovecot.

btw, both dovecot and courier support both SSL encrypted and unencrypted
versions of the protocols.  dovecot does pop & imap in the one package, while
courier has separate packages: courier-imapd and courier-popd.

 - sendmail (with milter and clam)

replace with postfix.  watch your mail transport related load vanish
instantly.

 - bind, dhcp, mysql, radiusd-cistron (latest woody packages)

it kind of makes sense to have radius on your mail server, IF you are
authenticating against /etc/passwd.

dhcp and mysql could be moved to other machines.

bind probably can't be moved without a lot of pain, if you have domains
delegated to this IP address.  if you're just using it as a caching resolver,
then consider replacing it with something lighter - perhaps maradns or djbdns.


 - webmail (the latest stable imp/horde)
 - php-4.3.9
 - imapproxy, just for the webmail (the latest)

imp doesn't have to run on the mail server.

consider moving these to another machine, perhaps your web server.

or consider using a different webmail program.  imp is pretty heavy on
resources like memory.  courier-sqwebmail is fairly light and integrates well
with the other programs in the courier suite, courier-maildrop, courier-imap,
etc.  there are other lightweight ones around too, if you don't like sqwebmail.



other things you can do:

1. encourage people to delete mail from the server rather than leaving it on
there.  you can do this by implementing quotas.  start by setting a quota which
is several megabytes *above* the largest mailbox (i.e. the unofficial,
temporary quota).  announce that you are setting a quota of what your eventual
target is (i.e. the official quota).  gradually reduce the quota every week
until the target is reached.  don't tell your users how much leeway they have at
any given moment because they will abuse that knowledge.  if they ask just tell
them the official quota and mention that some *unspecified* leeway is given for
a short *unspecified* time.

if you don't want to compile the kernel for quota support and install the quota
package, you can crudely simulate it for mailboxes with postfix's
'mailbox_size_limit' parameter.  this is per mailbox file.  imap users can get
around the quota by saving messages to different mailbox files.
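for example (the value is illustrative; postfix applies the limit to each
mailbox file separately):

```
# main.cf: cap each mailbox file at roughly 50MB
mailbox_size_limit = 52428800
```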


alternatively, if you must allow users to have huge mailboxes, then:

2. switch to Maildir rather than mbox.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: CMS

2004-11-22 Thread Craig Sanders
On Mon, Nov 22, 2004 at 09:46:47AM -0500, Ross, Chris wrote:
  I am looking for a web content management system to allow K12
 teachers and possibly students to easily maintain personal or project
 web space. There are a lot of products out there but most of them are
 not set up to allow access to one area and not the entire site. This
 does not have to scale too big. A few thousand user sites max. ( We
 have no idea what the user load will be since we only have 400 to 500
 users now and most don't keep their sites up to date. )

 Criteria:

 1. Access control that would allow someone access to areas that they
 have been allowed to work and no other area.

 2. Web browser accessible. GUI editor.

 3. EASY to use for non technical folks!

 4. Little modification needed.

dunno if it's what you want, but there's a kind of clone of yahoo groups
called GNU Glubs (GNU Clubbing System).

when i last looked at it, about a year ago, it was missing a few features compared
to yahoo, but it seemed to do the basic job.

sourceforge project page is at:

http://sourceforge.net/projects/glubs/


doesn't look like it's been changed since Feb 2003.

it's in perl, can use postgresql (or mysql too, i think) as the db backend, and
the code was relatively easy to understand and modify.  works with apache &
CGI, or apache with mod-perl.

not finished, but a pretty good base, i thought.


craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: CMS

2004-11-22 Thread Craig Sanders
On Tue, Nov 23, 2004 at 09:20:33AM +1100, Craig Sanders wrote:
 it's in perl, can use postgresql (or mysql too, i think) as the db backend,

oops, wrong.  it uses mysql, not postgres.  i hacked it to work with postgres
on my system because i didn't want to install rubbish like mysql.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: Updated Debian boot-floppies for Proliant

2004-11-17 Thread Craig Sanders
On Thu, Oct 14, 2004 at 01:51:00PM +0200, Emmanuel Halbwachs wrote:
 The next step is getting the hp monitoring tools working on debian.
 
 Take a look at http://www.sk-tech.net/support/HPrpm2deb.sh.html
 It worked straightforward for me.
 
 This script fetches the RPM for RedHat on HP's FTP site, does alien,
 adapts some things for debian and eventually gaves debs.

anyone know if this has been updated for kernel 2.6.x?


BTW, i installed sarge on a DL360G3 a few days ago (a conversion from RHEL).
it worked perfectly up until the point where i rebooted, then it
kernel-paniced...couldn't find /dev/console.

i suspect that the kernel that is installed to the system, or the modules in
the initrd.gz, is somehow different from the one that debinstaller boots on.  i
fixed it by compiling a new 2.6.8 kernel on another machine, with all the
drivers that it needs compiled in, and without an initrd (personally, i think
initrds are way more trouble than they're worth - they make it easier for
installers but they're a PITA for a running system).

after installing a custom kernel, it booted up fine, and then with a lot of
filesystem shuffling, i got everything that was running on the old RH system to
run on the new debian system - except for the compaq health monitoring.


one other problem, is that i can't get the kernel to detect the full amount of
RAM - it has 2GB, but it's only detecting 1GB.  I tried adding mem=1920M in
grub but that didn't help.

craig

ps: i've got 4 more to convert to debian over the next month or two, 2 x DL360s
and 2 x DL380s.  i'll have to figure out how to make a sarge installer iso with
my custom kernel on it (and without an initrd).

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: Updated Debian boot-floppies for Proliant

2004-11-17 Thread Craig Sanders
On Thu, Nov 18, 2004 at 07:40:01AM +1100, Craig Sanders wrote:
 one other problem, is that i can't get the kernel to detect the full
 amount of RAM - it has 2GB, but it's only detecting 1GB. I tried
 adding mem=1920M in grub but that didn't help.

doh!  i forgot to compile high memory support into the kernel.  fixed now.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: exim or postfix

2004-11-12 Thread Craig Sanders
On Fri, Nov 12, 2004 at 10:09:36AM +0100, Adrian 'Dagurashibanipal' von Bidder wrote:
 On Friday 12 November 2004 07.47, Craig Sanders wrote:
  On Fri, Nov 12, 2004 at 05:12:34AM +, John Goerzen wrote:
 
  4 ETRN
  
   Weird, people are just sending ETRN commands to you?
 
 me too. One is a mail server of a respected company that is apparently 
 misconfigured, and has been for a few years.  I've written the postmaster, 
 I've written the IP block owners etc. - they just don't care.
 
 I probably should flood them with bogus email when they call in next time, 
 perhaps that would make them pay attention... :-]

i just ignore it, same as i ignore all the probe attempts on various ports.

they're annoying, and i wish they wouldn't happen, and i have to take steps to
protect my systems against them, but they happen far too often to get too upset
about them.  block it, log it, and move on.


 26 RBL Dynablock.njabl.org
  
   My own static DSL IP is on this one.  Lots of people have legit reasons
^^
   for not using their ISP's sucky, crappy mail servers.
 
  viruses that come from dynamic IPs.
  ^^^
 
 Craig, you seen that? 

sorry, i didn't notice that first time around.  thanks for pointing it out.

 Dynablock seems to include some static IPs.

IIRC, dynablock notes that this can happen on their web site.  they say it's
typically because the ISP concerned does something like:

1. allocates static IPs from the same pool as dynamic IPs
2. has reverse DNS entries that imply dynamic IP
3. maybe some other similar reasons, i forget...

unfortunately, there's nothing the end-user can do to resolve this.  the only
people they will listen to for requests to remove such possibly-bogus dynamic
listings are the owner(s) of the netblock (i.e. the ISP).  presumably that is
because spammers are not above lying if it suits them and have no qualms about
claiming that they are a legit mail operator on a really, truly,
honest-i-tell-you static IP.

possibly also because it's a way to encourage slack-arse ISPs to adopt better
practices.

personally, i'm inclined to still use dynamic blocks even with these errors,
and add whitelist entries to my rbl_override map if and when i need to.

 (I guess John is at one of those ISPs who mix static IPs and dynamic IPs in 
 the same IP range, or at least use the same xxx.dsl... reverse DNS.)

possibly.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: exim or postfix

2004-11-11 Thread Craig Sanders
On Thu, Nov 11, 2004 at 09:25:52PM +, John Goerzen wrote:
 I just switched from Postfix to Exim.  I am now a big fan of Exim.
 
 http://changelog.complete.org/articles/2004/11/08/latest-experiment-exim/
 http://changelog.complete.org/articles/2004/11/11/exim-transition-successful/

glad to hear it worked for you.


a few comments, though:

1. synchronization detection - postfix has done this for years, except that
it's called reject_unauth_pipelining.  you enable it as one of the
smtpd_*_restrictions.
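e.g. the usual placement is in the data-stage restrictions (adjust to suit
your own restriction layout):

```
smtpd_data_restrictions = reject_unauth_pipelining
```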

2. postfix does support filtering during the SMTP transaction.  the difference
is that the postfix author tells you up front that it is inherently problematic
(for *ANY* MTA, not just postfix) because of the potential for SMTP timeouts if
the filter takes too long to run (SpamAssassin, for example, could take ages to
complete regardless of whether it's run from exim or postfix...especially if
it's doing DNSRBL and other remote lookups), and he recommends that you don't
do it.

other MTAs blithely ignore the potential problem and tell you to go ahead and
do it.

that said, though, exiscan-acl sounds cool.  

on a light to moderately loaded server, it's probably not a huge problem.


i manage to avoid the problem by having good anti-spam/anti-virus rules (and a
huge junk map and set of body_checks & header_checks rules) that reject
about 99% of all spam during the SMTP session.  very little makes it through
them to be scanned with amavisd-new/spamassassin/clamav.  still, i sometimes
think it would be nice to run SA at the SMTP stage.

e.g. my spam-stats.pl report for last week (this is for a little home mail
server with about half a dozen users):

ganesh:/etc/postfix# spam-stats.pl /var/log/mail.log.0
      2 RBL bogusmx.rfc-ignorant.org
      4 Unwanted Virus Notification
      4 ETRN
      6 body checks (VIRUS)
     12 header checks (VIRUS)
     15 RBL taiwan.blackholes.us
     26 RBL Dynablock.njabl.org
     28 RBL hongkong.blackholes.us
     39 RBL brazil.blackholes.us
     76 Local access rule: Helo command rejected
    114 Relay access denied
    145 SpamAssassin score far too high
    148 body checks (Spam)
    163 Local address forgery
    200 strict 7-bit headers
    202 RBL dul.dnsbl.sorbs.net
    212 RBL sbl-xbl.spamhaus.org
    253 header checks (Spam)
    288 Need FQDN address
    297 Recipient Domain Not Found
    429 RBL list.dsbl.org
    517 Local access rule: Client host rejected
    687 Greylisted delivery attempt
    717 Dynamic IP Trespass
   1361 RBL cn-kr.blackholes.us
   1463 Sender Domain Not Found
   4779 User unknown
   6422 Recipient address rejected
   6970 Local access rule: Sender address rejected
  22256 Bad HELO

  47835 TOTAL


Spamassassin stats:
     77 spam
   2919 clean
   2996 TOTAL

Percentages:
spam:non-spam (47912/50831) 94.26%
tagged messages (77/2996) 2.57%
rejected spam (47835/47912) 99.84%


only 2996 messages (out of 50831) were accepted by postfix and scanned
by SA.  of those, only 77 were tagged as spam, plus another 145 that were
discarded by a header_checks rule which detects whether the SA score
is over 13.0 (discard, not reject) when amavisd-new tried to reinject
the message back into postfix after content-filtering.
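that discard rule looks roughly like this (a sketch only: the exact header
format depends on the spamassassin/amavisd-new setup, and the pattern assumes
scores stay below 100):

```
# /etc/postfix/header_checks (regexp format): silently drop
# reinjected mail that spamassassin scored 13.0 or higher
/^X-Spam-Status: Yes, (score|hits)=(1[3-9]|[2-9][0-9])\./ DISCARD
```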


that was a pretty average week, although (as ever) the number of attempts to
deliver spam goes up all the time.  2 months ago, it was averaging about 30-35K
rejects per week.  now it's nearly 50K.  the percentages don't change much,
spam is already well over 90% of what my MTA sees.


craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: exim or postfix

2004-11-11 Thread Craig Sanders
On Thu, Nov 11, 2004 at 05:12:10PM -0500, Mark Bucciarelli wrote:
 On Thursday 11 November 2004 17:04, Craig Sanders wrote:
 
22256 Bad HELO
 
 wow.

most of them being spammers trying to use my IP address or a bogus domain name
in the HELO/EHLO string.  and most of them from Korea.

most of them were also to non-existent recipients (it's just that the HELO
check rules were triggered first) - i expect i pissed off a few spammers over
the last 10 years or so that i've had my domain, and they've retaliated by
adding many thousands of bogus @taz.net.au addresses to their spam lists, which
get swapped with or sold to other spammers.  once an address gets on a spam
list, it never gets off, it just gets added to more and more spam lists.
regardless of whether it exists, or even whether it ever existed.


craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: exim or postfix

2004-11-11 Thread Craig Sanders
 or pop-before-smtp or any one of dozens of alternative
authentication methods.

IMO, if someone couldn't be bothered doing that, i couldn't be bothered
receiving their mail.  i get more than enough mail as it is, it's not as if
it's going to distress me to not get one more.

   28 RBL hongkong.blackholes.us
   39 RBL brazil.blackholes.us
 
 I have to talk to people in this country, too.

i don't.

  202 RBL dul.dnsbl.sorbs.net
 
 Ditto on this one.

yep, ditto.

 1361 RBL cn-kr.blackholes.us
 
 Have to talk to Chinese people too...

not me.  most of my spam comes from Korean IP space these days.  a few years
ago, it was from Chinese IP space.  as with taiwan, i don't need or want mail
from either country.


 4779 User unknown
 
 I am stunned at how many attempts I get to send mail to non-existant
 accounts, too.

spammers sell their lists based on the number of addresses.  they don't care if
the addresses they are selling actually exist.


22256 Bad HELO
 
 And I get many legitimate e-mails with a bad HELO.  In fact, I would
 argue that your rule here is wrong.  If I send you an e-mail from my
 laptop, it is not going to send you an address of a server that can
 receive mail (or has a DNS entry) in HELO, but everything else will be
 valid, and I argue that this is OK.

on my system, a good HELO is any real FQDN (except for my own - nobody outside
my network should HELO as my domain).  this is a rule i've tightened over the
years, it used to be anything that seemed like an FQDN, now i require that the
FQDN actually exists in the DNS.
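such a policy maps onto a main.cf fragment roughly like this (illustrative;
taz.net.au stands in for your own domain, and note that postfix versions from
2.3 on spell the restrictions reject_non_fqdn_helo_hostname and
reject_unknown_helo_hostname):

```
smtpd_helo_required = yes
smtpd_helo_restrictions =
    check_helo_access hash:/etc/postfix/helo_access
    reject_non_fqdn_hostname
    reject_unknown_hostname
```

with /etc/postfix/helo_access containing e.g.
"taz.net.au    REJECT you are not me".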

 Anyway, thanks for the info.  It's always interesting to see what other
 people are doing.

of course.  TMTOWTDI


 And now I know where not to mail you from. :-)

:)

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: exim or postfix

2004-11-10 Thread Craig Sanders
On Wed, Nov 10, 2004 at 08:21:14AM +0100, martin f krafft wrote:
 also sprach Craig Sanders [EMAIL PROTECTED] [2004.11.10.0010 +0100]:
   There have been some very simple things that I've needed to find
   solutions to with postfix in the past which I ended up having to
   do with procmail that I can now deal with in ~ 3 lines in the exim
   config.
  
  my guess is that you just know exim better than postfix, so things
  that an experienced postfix user would find easy aren't as easy for
  you as just using exim.
 
  all of the things you listed as benefits of exim, my first thought
  was but postfix does that (and it does it better :).

 You are not seriously arguing this, right?

yes.

 The exim routers are far beyond what postfix can do.

not in my experience.

 IMHO, they are far beyond the job of an MTA, so it's more a plus for
 exim than a minus for postfix.

show me anything that you think can't be done in postfix and i'll probably tell
you how it can be done.

in my experience, the only people who say "postfix can't do that" are people
who don't actually know postfix, or who are so caught up in the way that you do
it in some other MTA that it never occurs to them to investigate how you might
do it in something else such as postfix.

every MTA has a different conceptual model for how mail is handled.  if someone
insists on applying exim models to postfix (or vice-versa) then they're not
going to be very successful.

 Anyway, if you are so confident about postfix, then maybe you can
 teach me how to set up spamassassin to run under the local user's
 identity,

procmail, maildrop or whatever local delivery agent you use can run
spamassassin.  that's part of an LDA's job.

even on the simplest level, a .forward file which pipes to SA is
executed under the UID of the user.
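
for example (a sketch: the username is made up, and the spamassassin path may
differ on your system):

```
# ~/.forward -- hand local delivery to procmail, which runs as this user.
# the trailing "#alice" is the conventional uniqueness marker; put your
# own username there.
"|IFS=' ' && exec /usr/bin/procmail -f- || exit 75 #alice"

# ~/.procmailrc -- filter each message through SpamAssassin before delivery
:0fw
| /usr/bin/spamassassin
```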

before you say "but i want the MTA to do it", that's just you thinking
in terms of a monolithic MTA like exim. anyone who thinks in postfix
terms would be horrified by the idea of having a huge setuid binary try
to do everything. postfix consists of several small, modular parts. each
one does its job, and each one is replaceable. postfix can hand off
local delivery to its own LDA ("local") or it can hand off local
delivery to procmail or maildrop or cyrus or whatever. you can even have
some local mail delivered by "local" and some by procmail etc. as far as
postfix is concerned, it doesn't matter - as long as they fulfil the
function of a local delivery agent.

 and how to route messages based on the sending address
 (for SPF reasons).

no idea, never needed to do it.  try the postfix-users archives.

if it's not straight-forward, i'll bet you could do it with a policy server.


  ps: i've used pretty nearly all of the free software MTAs (and
  some not-so-free, like qmail) over the last 15 years.
 
 So have i, but i miss in your list a mention of exim. 

i tried exim sometime after switching to sendmail.  it was just smail without
the stupid bugs, so i saw no reason to switch to it.  it's progressed a lot
since then, but it is still the same model as smail.

 I have also never used exim because I had settled on postfix through
 much the same path (I also checked out zmailer in between) as you and
 was

me too.  it didn't do anything amazingly different and was even clumsier to use
than qmail.

i tried pretty nearly every MTA i ever came across, and am a firm believer in
the maxim that "all mail programs suck, but some suck less".  and postfix
sucks least of all.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: exim or postfix

2004-11-10 Thread Craig Sanders
On Wed, Nov 10, 2004 at 11:09:47AM +0100, martin f krafft wrote:
 also sprach Craig Sanders [EMAIL PROTECTED] [2004.11.10.1014 +0100]:
   I agree. But exim can do it. And even though this is the LDA
   part of it, postfix also includes an LDA, which is just not up
   to speed.
  
  and postfix can do it too.
 
 No, it cannot, unless you use spamassassin as the LDA, which is
 deprecated. 

spamassassin is not an LDA.

you use procmail or maildrop or something as the LDA, and that calls SA,
running as the user.


 Exim can use multiple sequential filters as part of the LDA (which are
 all run as the user).

that's a function of the LDA.  procmail can do that, and so can maildrop.

i have no idea if postfix's local can do it because i've never actually 
used it - i've always used procmail.

but it doesn't matter - that's the job of the LDA, not the MTA, and postfix
happens to have a modular design which lets you use any LDA you like.


  postfix doesn't do it the same way as exim because postfix is not
  a single monolithic process. 
 
 Stop harping on that and respond to my points, if at all. 

it wouldn't be necessary to harp on about it if you didn't consistently miss
the obvious.  postfix is not exim.  stop insisting that it try to be exactly
the same.

i'll try expressing the concept in simpler language for you, and maybe you'll
understand:

you go into a take-away food shop and order a steak sandwich.  when it arrives,
you complain that it doesn't taste like chicken.  well, WTF did you expect?
it's steak, not chicken.  if you had wanted chicken, you should have ordered
that.

similarly, if you want the exim behaviour and model, then install exim.  if you
want postfix, then install postfix.  but don't expect postfix to operate
exactly the same way as exim.  to get postfix to do things, you take advantage
of the way that postfix works, not complain that it doesn't work exactly like
exim.

 Even a modular architecture can support filters as part of the LDA;
 Postfix does not.

again, you don't know what you are talking about.


   ... not manageable...
  
  of course not.   but a) it works, and b) it doesn't have to be
  manageable, .forward files are not a system-wide setting, they
  are a per user thing.
 
 So you suggest .forward files for a machine hosting about 1700
 Windows users?

no.  try reading what i wrote.

  if you want it to run for every user without each user having to
  do custom configuration, then use procmail as the LDA and create
  a rule in /etc/procmailrc.  problem solved.
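
that system-wide rule might look something like this (a sketch; the size
limit is a common convention, not a requirement):

```
# /etc/procmailrc -- applies to every user delivered via procmail.
# drop root privileges before running anything else.
DROPPRIVS=yes

# pass messages under ~250KB through spamassassin (big messages are
# rarely spam, and scanning them is slow)
:0fw
* < 256000
| /usr/bin/spamassassin
```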
 
 If you object to exim because of its monolithic setuid nature, how
 can you possibly advocate procmail?

for the same reason that i can appreciate cats.  i.e. it's irrelevant
to the question.

procmail is not an MTA.  and postfix is not an LDA.  they have different
jobs.  

more to the point, whatever its other faults, procmail is not monolithic -
it does one job, and it does it reasonably well.  it fits the modular,
small-tools paradigm.

the fact that it is setuid root is not necessarily a problem.  in fact, it's
unavoidable.  if you're delivering mail to local users, at some point in the
process something has to run as root so that it can change UID to the user. 

IMO, it's better to have that root or setuid process do just one job (LDA) and
revoke root privs as early as possible, than to do half a dozen different jobs
(monolithic MTA).


 Sure, it's run as the user. But it's a bloody performance hog. Try
 that with 1700 users and about 130 to 200 mails per minute, and you'll
 find that it does not work.

1. you want to run SpamAssassin for 1700 users and 200 mails/minute and
you're complaining that it's *procmail* that's the performance hog.  i
think you need to resynchronise your brain with reality.

2. use maildrop instead if procmail's performance bothers you.

3. write your own mini LDA

4. the CPU time, memory, and I/O used by either procmail or maildrop (or
any LDA) is utterly insignificant compared to that used by SpamAssassin.


  if you don't care about using per-user settings in SA, then just
  use a content filter and you'll get SA checking on ALL mail, not
  just on locally-delivered mail.  again, problem solved.  IMO, this
  is the best way to do it.
 
 If you do SA on a system-wide basis, the auto-whitelisting feature
 is a problem, 

true, it doesn't work as nicely as it could otherwise.  but that's not very
important, because auto-whitelisting isn't as useful as it sounds anyway.

 and Bayesian filtering is basically useless.

nope, it's not.  SA's bayesian filtering works perfectly well when used as a
system-wide filter.
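
the content-filter hookup mentioned above can be sketched along the lines of
postfix's FILTER_README (the script path and the "filter" account are
examples; the script itself would run SA on stdin and re-inject the result
via /usr/sbin/sendmail):

```
# /etc/postfix/main.cf
content_filter = filter:dummy

# /etc/postfix/master.cf -- pipe every message through an external
# script running as an unprivileged user
filter    unix  -       n       n       -       10      pipe
    flags=Rq user=filter argv=/usr/local/bin/filter.sh -f ${sender} -- ${recipient}
```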

  but if the question you are asking is "i want postfix to work
  exactly the same as exim", then you'll never get an answer.
 
 I did not say so.

you have done so repeatedly.


  *ALL* mail is both incoming AND outgoing.
 
 Which (sensible) MTA does not do it this way?

dunno, which is why it's so puzzling that people have difficulty understanding 
it.

i think it's because they insist

Re: Value of backup MX

2004-11-10 Thread Craig Sanders
On Wed, Nov 10, 2004 at 02:10:18PM -0500, Robert Brockway wrote:
 On Wed, 10 Nov 2004, Craig Sanders wrote:
 
  backup MX is obsolete these days, very few people need it (most of
 
 This does seem to be a prevailing opinion but I think backup MXs are
 valuable now for the same reason they always were - outages happen.  We
 have no way of knowing how long a remote MTA will continue attempting to
 resend, even if it is following the rules of SMTP.  I do not want to lose
 mail because a remote admin can't afford to hold mail for very long
 (assuming a major issue like a hardware fault).
 
 I do fully support the idea of the backup MXs having the same anti-spam
 capabilities as the primary (rsync over ssh can do wonders)

if you have full administrative control over your own backup MX, *AND* if you
maintain the list of valid relay recipients, then it is perfectly OK to have a
backup MX.

you probably won't benefit from it anywhere near as much as you think you will
(and it will be constantly bombarded by spammers), but it won't cause any
problems for anyone else.

 Peered MXs (eg, 2 x MX 10) and dynamic backups which don't just queue mail
 but continue to deliver when the primary is down are even better.

that's not backup MX, that's load-balancing.  a different kettle of fish.  in
order to work at all, they *MUST* have a list of valid recipients.  again, not
a problem AND, unlike backup MX, you will get significant benefits from running
a load-balanced mail setup if you have the mail volume to warrant it.

  those who *think* they do are just running on ancient & obsolete
  gossip/common sense from the days when backup MXes were useful).
  almost all mail these days is delivered by SMTP, and all real SMTP
 
 MXs are hardly useful for mail that is not travelling over SMTP :)

true.  however, very little mail travels over alternative transports these
days.  except for a few weirdoes like myself who set up uucp over tcp instead
of the brain-damaged kludge of multi-drop POP mailboxes, almost nobody does
anything but SMTP.

and in any case, when using uucp, you generally don't set up the uucp host as a
secondary MX.  you set it up as the primary MX, and give it a transport table
entry to route mail for the destination domain via uucp.
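
in postfix that routing looks roughly like this (the domain and uucp node
name are made up; the stock master.cf already ships a uucp pipe service):

```
# /etc/postfix/main.cf
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport -- run "postmap /etc/postfix/transport" after editing
remote.example.org    uucp:their-uucp-node
```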


  servers(*) will retry delivery. this works perfectly well without a
  backup MX, and in fact works BETTER without a backup MX.
 
 How does it work _better_ without a backup MX?

1. it's not clogged up with undeliverable spam bounces

2. it's not clogged up with backscatter

3. the original sender (or their sysadmin) can tell that their mail hasn't been
delivered yet, instead of wondering why they haven't got a reply to their
important mail that has been waiting in a queue on the backup MX for 3+ days.


  if you do have a backup MX, then you need to have the same anti-spam
  & anti-virus rules as on your primary server AND (most important!) it
 
 I agree with this (as noted above)
 
  needs to have a list of valid recipients, so that it can 5xx reject
  mail for unknown users rather than accept and bounce them (known as
 
 I disagree with this.  I'd sooner not have a backup than use this
 strategy.  Sounds like a good way to lose new customers.

a list of valid relay recipients is essential.

without it, you generate vast quantities of backscatter.  if you do that, you
are contributing to the spam & virus problem.
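
in postfix terms, roughly (the domain and addresses are examples):

```
# /etc/postfix/main.cf on the backup MX
relay_domains = example.com
relay_recipient_maps = hash:/etc/postfix/relay_recipients

# /etc/postfix/relay_recipients -- one entry per valid address; run
# "postmap /etc/postfix/relay_recipients" after editing.  anything not
# listed here gets a 5xx reject at SMTP time instead of a bounce later.
info@example.com     OK
sales@example.com    OK
```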



sooner or later, someone will get pissed off enough by backscatter to create a
backscatter DNSRBL that lists sites which generate large amounts of backscatter
(and sites that send out those annoying bogus virus notifications from their AV
scanners).

as with open-relay and open-proxy and spam-source DNSRBLs, this can only be a
good thing because it will force lazy and ignorant system admins to do the
Right Thing if they want their legit mail to be delivered.


  btw, backscatter also causes problems for you and your server. many
  of the spam/virus bounces are from undeliverable return addresses,
  so they end up clogging your mail queue for days and slowing the
  entire system down.

 Only if the queue is really huge, honestly.

yes, and this happens in very short time.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: Value of backup MX

2004-11-10 Thread Craig Sanders
On Wed, Nov 10, 2004 at 02:18:50PM -0500, Robert Brockway wrote:
 On Wed, 10 Nov 2004, Craig Sanders wrote:
  if you do have a backup MX, then you need to have the same anti-spam
  & anti-virus rules as on your primary server AND (most important!) it
  needs to have a list of valid recipients, so that it can 5xx reject
  mail for unknown users rather than accept and bounce them (known as
  backscatter).
 
 Oh you mean reject mail for unknown recipients rather than bounce the
 mail[1].  Ok, I can see why you are suggesting it but it is an RFC
 violation.  

so are most anti-spam and anti-virus methods.  theory is fine, but pragmatic
reality sometimes dictates divergence from it.

 Not that I'm necessarily saying not to do it, but one must walk
 towards RFC violations fully aware of the fact (for the benefit of those
 reading along at home).

true.  you've got to know the rules before you break them.

 [1] I thought you were suggesting the MTA should drop mail from unknown
 senders - a type of brutal white list :)

not at all.  that wouldn't make much sense, and it wouldn't be very useful.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: Value of backup MX

2004-11-09 Thread Craig Sanders
On Tue, Nov 09, 2004 at 04:10:07PM +0100, martin f krafft wrote:
 also sprach John Goerzen [EMAIL PROTECTED] [2004.11.09.1514 +0100]:
  It seems to make a lot of sense to me, but it seems too that
  I must be missing something.
 
 if the backup MX is configured exactly like the primary, then it
 makes sense. but it's all too easy to get out of sync.
 
 i usually have my backup MX accept everything and then don't treat
 them specially on the primary. 

then you are generating backscatter.  i.e. you are part of the virus/spam
problem and not part of the solution.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: Value of backup MX

2004-11-09 Thread Craig Sanders
On Tue, Nov 09, 2004 at 03:30:03PM +, John Goerzen wrote:
 On 2004-11-09, Steve Drees [EMAIL PROTECTED] wrote:
  John Goerzen  wrote:
  I'm looking at redoing my mail setup due primarily to spam filtering.
  Over at http://www.tldp.org/HOWTO/Spam-Filtering-for-MX/multimx.html,
  they are suggesting not to use redundant mail servers unless needed
  for load balancing.
 
  This is poor advice.
 
 Could you elaborate a bit on why that is?  The author is saying that
 well-behaved (ie, non-spamming) MTAs would keep retrying for several
 days anyway, so the only time a backup MX would really prevent mail loss
 is due to an outage extending more than that time.  What do you think?

it isn't likely to help even then because the backup MX is unlikely to have a
longer queue lifetime than the original sending server (5 days is the typical
default).

to illustrate, there are two basic possibilities here:

1. you control the server.  you could set the queue lifetime to more than the
standard 5 days, but you're not likely to because it causes more problems than
it solves.  

your queue will get even more clogged with undeliverable spam bounces (held for
10, 15, 20 or whatever days rather than the standard 5).  spammers tend to
focus on backup MX records rather than primary MXs (hoping to bypass anti-spam
rules), so it's pretty much guaranteed that the box WILL be flooded with
undeliverable spam bounces. 

also your users will wonder why they are getting bounces for undeliverable mail
that they sent over a week ago.

2. you don't control the server.  you will have no chance of getting the
operators to set a longer-than-standard queue lifetime, for pretty much the
same reasons as in case 1 above, plus the additional reason that there's not
even an illusory benefit to them in doing it.
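
for reference, the knob being discussed is, in postfix (5d is the shipped
default, and bounce_queue_lifetime exists from postfix 2.1 on):

```
# /etc/postfix/main.cf
maximal_queue_lifetime = 5d

# let undeliverable bounces expire sooner than real mail
bounce_queue_lifetime = 1d
```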

 [...]
 Now think what happens when viruses/spammers do this.  My backup MX is
 sending out a lot of bounce messages to potentially innocent victims for
 this reason.

yes.  you're definitely on the right track with this thought.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: Value of backup MX

2004-11-09 Thread Craig Sanders
On Tue, Nov 09, 2004 at 08:04:24PM +0100, martin f krafft wrote:
 also sprach Dale E. Martin [EMAIL PROTECTED] [2004.11.09.1954 +0100]:
  This got me to thinking, it would be neat if one could _easily_
  replicate RBLs on their own local DNS server.
 
 rbldns (djbdns) is (a) non-free, 

nope.

rbldnsd is NOT djbdns.  it's a small DNS server written by Michael Tokarev
which is specifically for running RBLs.  it is free (GPL).

it's packaged for debian, and the home page is at: 

http://www.corpit.ru/mjt/rbldnsd.html

 and (b) really nice and easy to use for this purpose.

yep.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: exim or postfix

2004-11-09 Thread Craig Sanders
On Sun, Nov 07, 2004 at 01:40:30PM +, Brett Parker wrote:

 There have been some very simple things that I've needed to find
 solutions to with postfix in the past which I ended up having to
 do with procmail that I can now deal with in ~ 3 lines in the exim
 config.

my guess is that you just know exim better than postfix, so things that an
experienced postfix user would find easy aren't as easy for you as just using
exim.

all of the things you listed as benefits of exim, my first thought was "but
postfix does that" (and it does it better :).


 Then, I've always prefered exim, I like having control at my finger
 tips, and things to do what I expect :)

odd.  that's one of the reasons i prefer postfix over exim.

exim's OK, but the best thing i can say about it is that it is smail done
right, without the really stupid bugs.  which is not exactly a glowing
recommendation.  on the plus side, exim's author is damn smart and knows his
stuff...but i still prefer postfix.

for someone who knows exim really well, i'd say stick with what you know
best, you're unlikely to get enough benefit from switching to be worth the
effort.

for someone who isn't already a long-term exim user, i'd say that they're much 
better off using postfix.  you'll be able to do more, with far less effort.

craig

ps: i've used pretty nearly all of the free software MTAs (and some
not-so-free, like qmail) over the last 15 years.  i was an smail fan for a
long time, then sendmail got a lot better and i switched to that for a few
years.  then qmail came along, and i used either sendmail or qmail on all
systems for a few more years, depending on need (i liked most of qmail's
features but didn't like the license and really didn't like the feeling that it
was a dead-end incompatible trap as bad as any proprietary commercial
software).  then vmailer aka postfix came along and within a few months i had
converted all machines to postfix and now i won't willingly use anything else.
it had everything i had wished for for years.


-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: Value of backup MX

2004-11-09 Thread Craig Sanders
On Tue, Nov 09, 2004 at 11:56:04PM +0100, Christoph Moench-Tegeder wrote:
 ## Craig Sanders ([EMAIL PROTECTED]):
  On Tue, Nov 09, 2004 at 08:04:24PM +0100, martin f krafft wrote:
   also sprach Dale E. Martin [EMAIL PROTECTED] [2004.11.09.1954 +0100]:
 
   rbldns (djbdns) is (a) non-free, 
  nope.
  rbldnsd is NOT djbdns.
 
 Confusion :)
 There is rbldns, part of djbdns: http://cr.yp.to/djbdns/rbldns.html
 And there is rbldnsd by Michael Tokarev: http://www.corpit.ru/mjt/rbldnsd.html

duh, yes.  i saw a final "d" that wasn't there.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: apache log files

2004-11-05 Thread Craig Sanders
On Fri, Nov 05, 2004 at 09:40:28AM +0100, Francesco P. Lovergine wrote:
 On Fri, Nov 05, 2004 at 09:09:16AM +1100, Craig Sanders wrote:
   For ErrorLog you can pipe to a suitable program which does the same.
  
  but this doesn't.  unless apache has added this feature since i last looked
  into this (about six months ago) the suitable program has no way of
  separating the error logs for each virtual host, because it's just STDERR with
  no vhost prefix on each line.
  
 
 ErrorLog | mytrickyprog www.mydomain.com
 
 where mytrickyprog simply echos stdin on the right per-domain file or
 the same log file with the right prefix for each line. Of course you
 need a different directive for each vhost.

which means one open file handle per virtual host per apache process.  which is
exactly what we were trying to avoid.

there's no benefit in doing this...in fact, you're much worse off than just
specifying the ErrorLog filename because you not only have num_vhost *
num_apache_children file handles, you also have the same number of
"mytrickyprog" instances running, each of which takes up memory and CPU time
and has at least 4 file handles open itself (stdin, stdout, stderr, and the
error log file).


the whole point of this thread was how to reduce the number of file handles
open, per apache process and on the entire system.


craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: apache log files

2004-11-04 Thread Craig Sanders
On Thu, Nov 04, 2004 at 11:19:22AM +0100, Francesco P. Lovergine wrote:
 I personally prefer a single CustomLog file with a suitable domain
 prefix for every domain. That allows a nice grepping to extract 
 information and avoid resources wasting. 

yes, this works.

 For ErrorLog you can pipe to a suitable program which does the same.

but this doesn't.  unless apache has added this feature since i last looked
into this (about six months ago), the "suitable program" has no way of
separating the error logs for each virtual host, because it's just STDERR
with no vhost prefix on each line.

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: apache log files

2004-11-03 Thread Craig Sanders
On Wed, Nov 03, 2004 at 11:11:13PM +0100, Marek Podmaka wrote:
   I have apache 1.3 webserver hosting about 150 domains (more than 400
   virtual hosts). Now I have separate error log for each domain
   (something.sk) and separate combined log for each virtual host (for
   example www.abcq.sk and new.abcq.sk). This has many positives for
   me: easy to find some data related to each virtual host and that I
   can make separate statistics for each virtual host. I use awstats.
   And now the bad side - the number of open files in each apache
   process is more than 500 just for these log files. It's no problem
   for now, but with more domains in future it will hit the 1024 per
   process limit of open files.
 
   And now the questions :)
   1) Where does that 1024 open files limit come from? Is it somewhere
   configurable? 

edit /etc/init.d/apache and add a line like "ulimit -n 4096".

-n is the maximum number of open file descriptors per process.  use a value
that's about twice as much as you think you'll need.

you also need to set /proc/sys/fs/file-max to some suitably high value (again,
calculate how many you think you'll need and double it).  this can be set in
/etc/sysctl.conf
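
the two settings side by side (the numbers are examples; size them to roughly
double what you expect to need, as above):

```
# /etc/init.d/apache -- near the top, before apache is started
ulimit -n 4096

# /etc/sysctl.conf -- system-wide limit, applied at boot (or via "sysctl -p")
fs.file-max = 65536
```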


 Or do you think it's totally bad idea to have such number of log
 files?

until recently, i ran a web server with about 600 virtual hosts on it, each
with its own access.log and error.log files.

with 200 max apache children, that worked out as up to about 240,000 (600 x 200
x 2) file handles opened by apache processes for logging at any given time.
this was on a dual p3-866 with 512MB RAM.   it worked.

it bothered me a little that it wasn't really scalable and that eventually i'd
have to do something about logging.  i had some ideas on what to do, but was
limited by the fact that i wanted to have separate error.log files for each
virtual host.  overall, though, my attitude was "it ain't broke, so don't fix
it".

this wouldn't be a problem if apache could be configured to prefix each
error.log line with the virtual host's domain name...then you could have a
single pipe to an error logging script which wrote the error lines to the right
file, same as you can do for the access.log.
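
the access-log version of that single pipe can be sketched like this (the
paths, script name, and log format are all made up; apache's %v supplies the
vhost name as the first field):

```shell
#!/bin/sh
# split-vhost-log (sketch): read vhost-prefixed access-log lines and append
# each line, minus the prefix, to a per-vhost file -- one pipe, one open fd.
# hypothetical apache side:
#   CustomLog "|/usr/local/bin/split-vhost-log" "%v %h %l %u %t \"%r\" %>s %b"
LOGDIR=${LOGDIR:-vhost-logs}
mkdir -p "$LOGDIR"
awk -v dir="$LOGDIR" '
{
    vhost = $1
    sub(/^[^ ]+ /, "")            # strip the leading vhost field
    file = dir "/" vhost ".log"
    print >> file
    close(file)                   # keep at most one log fd open
}' "${1:-/dev/null}"              # file argument here; apache would feed stdin
```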

but apache can't be configured to do that, and i never bothered looking at the
source to see how easy it would be to hack it in, so that means you either have
a shared error.log for all vhosts or you put up with having lots of open file
handles.  i chose the latter, and occasionally increased both ulimit -n and
/proc/sys/fs/file-max as required...i never did run into any limit.


craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: network monitoring

2004-10-31 Thread Craig Sanders
On Sun, Oct 31, 2004 at 02:17:35PM +0100, martin f krafft wrote:
  Nagios mainly uses SNMP to pull its data - authenticated but not
  encrypted. Big Sister - Have heard its similar to big brother
  - simple to set up (compared to nagios) and for your small network
  should be more than adequate. Big Brother (and probably big
  sister) have client software that runs on each machine that sends
  the status info back to the display server.
 
 Yeah, but I want a pull approach, not a push approach!

take a look at mon.  it's a framework for monitoring systems and sending
alerts via email, sms, or whatever.

it comes with many scripts to test availability of common services (like
smtp, ftp, http, etc), and can test pretty much anything as long as you 
can write a script to do the test.

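a mon config stanza looks roughly like this (hostnames and the alert address
are made up; check mon's own documentation for the exact syntax):

```
# /etc/mon/mon.cf (sketch)
hostgroup mailservers  mail1.example.com mail2.example.com

watch mailservers
    service smtp
        interval 5m
        monitor smtp.monitor
        period wd {Sun-Sat}
            alert mail.alert admin@example.com
            alertevery 1h
```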

Package: mon
Priority: extra
Section: admin
Installed-Size: 800
Maintainer: Roderick Schertler [EMAIL PROTECTED]
Architecture: i386
Version: 0.99.2-7
Depends: perl, libmon-perl (>= 0.10), libtime-period-perl, libtime-hires-perl,
libc6 (>= 2.3.2.ds1-4)
Suggests: fping, libauthen-pam-perl, libfilesys-diskspace-perl, libnet-perl, 
libnet-dns-perl, libnet-ldap-perl, libnet-telnet-perl, libsnmp-perl, 
libstatistics-descriptive-perl
Filename: pool/main/m/mon/mon_0.99.2-7_i386.deb
Size: 177160
MD5sum: 35d62495d9befa374227ffae9a9e3b91
Description: monitor hosts/services/whatever and alert about problems
 mon is a tool for monitoring the availability of services.  Services
 may be network-related, environmental conditions, or anything that can
 be tested with software.  If a service is unavailable mon can tell you
 with syslog, email, your pager or a script of your choice.  You can
 control who gets each alert based on the time of day or day of week,
 and you can control how often an existing problem is re-alerted.
 .
 More information can be found at http://www.kernel.org/software/mon/.


craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: distributing SSH keys in a cluster environment

2004-10-29 Thread Craig Sanders
On Fri, Oct 29, 2004 at 07:03:02PM +0200, Martin F Krafft wrote:
 As far as I can tell, there remains one problem: we use SSH hostbased
 authentication between the nodes, and while I finally got that to
 work, every machine gets a new host key on every reinstallation,
 requiring the global database to be updated. Of course, ssh-keyscan
 makes that easy, but people *will* forget to call it, and I refuse to
 automate the process because there is almost no intrusion detection
 going on, so that it would be trivial to get access to the
 cluster with a laptop. As it stands, I kept the attack vector small
 with respect to the data stored on the cluster, physical security is
 good, and the whole thing is behind a fascist firewall anyway.

 So what can I do about these SSH keys?

how about something like this:

1. each node should have gnupg installed, with a public and private key shared
between all machines (with a fiendishly long pass-phrase, of course).  this key
set should be used ONLY for distributing the correct ssh keys to each machine.
make a special account for it or specify the config file to use on the gpg
command line when decrypting.

2. keep a copy of each node's ssh keys in individual .tar.gz files on the
master/boot server machine.  each tar.gz file should be encrypted by gnupg for
the key above, and the filename should indicate the node's hostname or
ip address or some other unique identifier that you can remember when you are
building each node.

3. when a machine is being built or rebuilt, install the correct ssh keys in
/etc/ssh.  they can be fetched via password-protected http or https or ftp or
even tftp, then decrypted and untarred.  since they're encrypted you don't have
to be completely paranoid about them - normal security precautions are
adequate. 

this can be done before ssh is installed (in which case, the post-install
script won't generate new keys), or it can be done after ssh is installed (in
which case, sshd needs to be restarted after the keys are changed).
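
a rough sketch of steps 2 and 3 (the hostnames, paths, and key name are all
made up; it assumes the shared gnupg key from step 1 is already on the node):

```
# on the master server, once per node: bundle and encrypt its host keys
cd /etc/ssh &&
  tar czf /srv/keys/node01.example.com.tar.gz ssh_host_*key* &&
  gpg --encrypt --recipient cluster-keys /srv/keys/node01.example.com.tar.gz

# on a freshly (re)built node: fetch, decrypt, unpack, restart sshd
wget -q http://bootserver.example.com/keys/node01.example.com.tar.gz.gpg
gpg --decrypt node01.example.com.tar.gz.gpg | tar xzf - -C /etc/ssh
/etc/init.d/ssh restart
```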


craig


-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: distributing SSH keys in a cluster environment

2004-10-29 Thread Craig Sanders
On Sat, Oct 30, 2004 at 12:37:31AM +0200, martin f krafft wrote:
 also sprach Craig Sanders [EMAIL PROTECTED] [2004.10.30.0015 +0200]:
  3. when a machine is being built or rebuilt, install the correct
  ssh keys in /etc/ssh.  they can be fetched via password-protected
  http or https or ftp or even tftp, then decrypted and untarred.
  since they're encrypted you don't have to be completely paranoid
  about them - normal security precautions are adequate. 
 
 well, the decryption requires a password, so the installation is not
 unattended anymore. since we have a number of headless number
 crunchers in the cluster, this is essential.

you could do it without the encryption and pass-phrase (or write an
expect-type script, but that would require putting the pass-phrase in
plaintext in the script, which defeats the purpose of having a password), but
then you'd have to be much more careful about access to the key files.

 i am beginning to believe that i am looking for a solution where none
 exists...

you probably won't get it completely automated if you care about security of
the ssh keys.  "mostly automated, with some manual intervention" is the best
you can expect.

of course, you can be a bit looser with the keys if you're confident that
physical access to the machines AND to the network segment they are on is
properly restricted, AND you have firewall or other access rules to prevent
external machines from fetching the key files. 

craig

-- 
craig sanders [EMAIL PROTECTED]   (part time cyborg)





Re: Advice for an IP accounting program

2004-10-19 Thread Craig Sanders
On Tue, Oct 19, 2004 at 06:13:03PM +0200, Hilko Bengen wrote:
 Francesco P. Lovergine [EMAIL PROTECTED] writes:
 
  The main purpose is identify periodically boxes on an internal
  private network which cause very high traffic, due to worms, virus
  and so. A per-IP simple report a la mrtg could be nice.
 
 plug mode=shameless My ulog-acctd, installed on the border router
 using Netfilter, has put much less load on the routers as compared to
 net-acct and any libpcap-based tool in tests at the ISP for which I
 wrote it./plug

sounds like a good tool.
 
 With a little know-how in shell-scripting, it should be trivial to
 generate statistics and graphs from its output.

if you modified it to produce Netflow output (same as cisco and other routers),
then there's a good range of tools which already exist to do this.   and, it's
always a good idea to use an existing standard rather than reinvent the wheel.

e.g. these are already in debian:

flow-tools - collects and processes NetFlow data
flowscan - flow-based IP traffic analysis and visualization tool
libcflow-perl - Perl module for analyzing raw IP flow files written by cflowd
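
e.g. with flow-tools, collecting and summarising is only a couple of commands
(flags from memory - check flow-capture(1) and flow-stat(1); the paths and
port number are made up):

```
# collect netflow exports arriving on UDP port 9991 into /var/flows
flow-capture -w /var/flows 0/0/9991

# report traffic per source IP address from the collected flow files
flow-cat /var/flows/* | flow-stat -f 10 -S 2
```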


btw, there are also two libpcap-based netflow capturers already debianised - a
netfilter/ulog alternative would be a good thing.

fprobe - exports NetFlow V5 datagrams to a remote collector
pmacct - promiscuous mode traffic accountant


craig

-- 
craig sanders [EMAIL PROTECTED]





Re: Advice for an IP accounting program

2004-10-19 Thread Craig Sanders
On Tue, Oct 19, 2004 at 09:31:24PM +0100, Steve Kemp wrote:
 On Wed, Oct 20, 2004 at 06:18:26AM +1000, Craig Sanders wrote:
 
  btw, there are also two libpcap-based netflow capturers already debianised - a
  netfilter/ulog alternative would be a good thing.
  
  fprobe - exports NetFlow V5 datagrams to a remote collector
  pmacct - promiscuous mode traffic accountant
 
   A third would be ipaudit, 

not exactly.  it looks like a very nice package (especially ipstrings and
total), but it doesn't produce netflow output.  it has its own output format.


 which I've been testing for a few months now.  I will almost certainly
 package it shortly.

i'll look forward to seeing the package.

craig

-- 
craig sanders [EMAIL PROTECTED]





Re: initrd in Debian kernel-image

2004-10-15 Thread Craig Sanders
On Thu, Sep 30, 2004 at 12:12:30AM +1000, Donovan Baarda wrote:
  On Wednesday 29 September 2004 at 12:37, Gavin Hamill wrote:
   My question is... how does dpkg know that I need to load the megaraid
   module in the initrd so the system can mount / for init to boot the
   machine? I've looked in /etc/mkinitrd and seen the 'modules' file -
   should I just stick 'megaraid' in there just in case? Would this cause
   any harm if it's already been included?
 [...]
 The trick is getting the initrd right... Debian has /etc/mkinitrd/modules
 and /etc/mkinitrd/mkinitrd.conf to tweak this... read up on the initrd-tools
 package, and note that the Debian kernel-image packages depend on this
 package to build their initrd images when they are installed.

i find it far less hassle to build custom kernels without an initrd image.

IMO, initrd is useful for a distribution kernel which has to run on lots of
different machines, but is a waste of time, effort, and RAM when building a
custom kernel for a specific machine.

just make sure you compile the drivers you need to boot in to the kernel, and
all other drivers can be either modules or compiled in (doesn't really matter).

personally, i like most stuff compiled in but have non-essential stuff (sound,
usb, v4l, etc) compiled as modules.  

i like the networking stuff compiled in - every machine i build needs
networking so i see no benefit in having ipv4 or packet socket or any of the
other core network stuff as modules.

i usually compile various common network card drivers as modules - that way if
a NIC dies, i can just replace it with whatever i have handy or can get on
short notice and know that a driver module will be already on the system.


the basic rule of thumb is: if i'm likely to need it to boot or if it's
essential for what the machine is supposed to do, then it gets compiled in to
the kernel.  otherwise as a module.
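
as a concrete illustration, that rule of thumb turns into a .config fragment
along these lines (option names from 2.4/2.6-era kernels - check the ones your
hardware actually needs):

```
# needed to boot / essential: compiled in
CONFIG_NET=y
CONFIG_INET=y
CONFIG_PACKET=y
CONFIG_EXT3_FS=y
# common NIC drivers as modules, so a swapped card just works
CONFIG_8139TOO=m
CONFIG_E100=m
CONFIG_TULIP=m
# non-essential stuff as modules
CONFIG_SOUND=m
CONFIG_USB=m
```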

craig

-- 
craig sanders [EMAIL PROTECTED]





Re: problem with /var/mail and procmail

2004-09-27 Thread Craig Sanders
On Mon, Sep 27, 2004 at 12:00:38PM +0200, Francisco Castillo wrote:
 Yesterday i decided to move the /var/mail folder to /mnt/var/mail, where i
 have another partition, but this has caused my mail system to stop working.
 
 In order to do this i did:
 
 cp /var/mail/* /mnt/var/mail

this is the cause of your problems.  plain cp will copy the files, but they
will be owned by the current user (root, most likely), and permissions will be
set according to the current umask.

try cp -a instead.  this copies the files AND preserves ownership and
permissions.  -a will also recurse subdirectories.

e.g.

cp -a /var/mail /mnt/var

NOTE: stop postfix and your POP/IMAP daemons before copying and restart them
afterwards.  you don't want new mail to arrive or old mail to be deleted while
the copy is in progress.  in fact, you don't want either of those things
to happen until you're sure that the changes are working without problem.
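
a tiny demo of the difference (hypothetical paths; run as non-root it only
shows the permissions problem - run as root you'd see the ownership change
too):

```shell
# mail spool files are normally mode 660, group mail.  a plain cp re-creates
# each file with the mode filtered through your umask (and, run as root,
# owned by root), so users lose write access to their own spool.
# cp -a preserves mode/ownership/timestamps and recurses.
rm -rf /tmp/mailtest
mkdir -p /tmp/mailtest/src /tmp/mailtest/dst1 /tmp/mailtest/dst2
umask 022
touch /tmp/mailtest/src/alice
chmod 660 /tmp/mailtest/src/alice

cp    /tmp/mailtest/src/alice /tmp/mailtest/dst1/   # mode becomes 640
cp -a /tmp/mailtest/src/alice /tmp/mailtest/dst2/   # mode stays 660
```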

craig
-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: RAID-1 to RAID-5 online migration?

2004-09-07 Thread Craig Sanders
On Tue, Sep 07, 2004 at 12:06:13AM -0400, Chris Wagner wrote:
 If you're looking for a fast RAID product that's reasonably priced I'd take a
 look at NetCell's SyncRAID product (http://www.netcell.com/) which uses a 64
 bit RAID-3 variant they call RAID XL.  It got a good review from Tom's
 Hardware Guide and it looks like they've really solved the read-calc-write
 problem of RAID-5.

looks good, but is it supported in linux?  the web site says not:

  Currently only supports Windows XP, 2000, & 2003

  Retail Box includes PCI card, CD, Manual, IDE cables.

  We are currently only shipping the SyncRAID 5000 product for Windows
  users. Apple and Linux versions will be available soon. NetCell is
  currently only shipping products to US and Canada.


and similar phrasing on the info pages for the other models (3 drive and SATA).


craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: apt upgrade

2004-09-07 Thread Craig Sanders
On Sun, Sep 05, 2004 at 10:58:40PM +0300, Ivan Adams wrote:
 I used a script with apt-get upgrade -y on Debian 3.0 Woody in crond.
 Everything was ok until one day i got a call about a problem on that linux
 box.  When i got to the console i saw in the logs that the previous day it
 had run apt-get upgrade -y and upgraded squid.  The problem was that the new
 version of squid has one more option in squid.conf, and i had to amend the
 file and finish the job by hand.

 My question is how i can avoid that kind of problem when i have that kind of
 apt script on some of my Debian boxes.

write an expect (or similar) script.  which requires knowing in advance what
questions you're going to be asked - which, of course, you don't because the
questions will change for every upgrade.

now you know one of the many reasons why running 'apt-get upgrade' from cron is
a bad idea.  even if there are no packaging errors, you're occasionally going
to get hit by something like this.
 
upgrades really need someone competent watching them anyway.  they should never
be completely automated.

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: apt upgrade

2004-09-07 Thread Craig Sanders
[ cc-ed back to debian-isp ]

On Tue, Sep 07, 2004 at 09:21:20PM +0300, Ivan Adams wrote:
 but how can i tell when there is a critical backdoor in some of my
 packages on all my Debian boxes and an upgrade is needed?

subscribe to the security alert lists and upgrade when advised.

you're trying to automate something which should not be automated.

 RedHat has a client which updates all critical problems automatically (or
 from the Web, but you just say update and that's it).  I mean, how are all
 RedHat administrators sure that their linux is fine after an update?

if that's all you want then run stable and update from security.debian.org -
you'll just get the security updates, which are infrequent.  you might
occasionally run into the same problem but, given that the security updates are
a) backports rather than new versions and b) rare, it's nowhere near as likely
as with unstable or testing.
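
i.e. a sources.list along these lines (illustrative - substitute your nearest
mirror and release name):

```
deb http://ftp.debian.org/debian stable main contrib non-free
deb http://security.debian.org/ stable/updates main contrib non-free
```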

with unstable or testing, updated packages will be many and frequent - usually
dozens every day.  the more packages, the more likely that one of them will
need to ask a question, or have a new config file which is incompatible with
the previous version, or some other show-stopping problem.


 Is that one step back for Debian !?

no.  i doubt that it works perfectly for RH either.  it's not a task that can
be completely automated.  upgrading requires a skilled person in control of the
process.


and if you run unstable on production servers (as i do), then you really ought
to test all upgrades on other servers or workstations first.  the last thing
you need is to discover that an upgraded apache or postfix or squid or whatever
is broken AFTER you've upgraded it on the server that your users depend upon.


craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: RAID-1 to RAID-5 online migration?

2004-09-05 Thread Craig Sanders
On Fri, Sep 03, 2004 at 09:28:02AM +0100, Gavin Hamill wrote:
 On Friday 03 September 2004 06:28, Dave Watkins wrote:
  After that is done you can delete the old raid1 completly and add the now
  free disk to the raid5...
 
  Ralph Paßgang wrote: I've actually done this exact thing before and it
  worked flawlessly.
 
 Ooh you lovely people - thank you for the good news :)

i've done it too, and it works.  but the catch is that it takes a lot longer
to do it this way than to just backup your data, create the new raid from
scratch, and restore.

making a raid array is a very quick operation.  so to do it from scratch takes:
whatever time to backup your data, less than a minute to mkraid and mkfs.  then
however long it takes to restore your data.

if you have significantly less than 200GB of data (i.e. the size of each disk
in the array), then this will be much quicker than hot-adding a third drive
into a degraded-mode raid-5 array.  the array-juggling way takes: a minute to
mkraid and mkfs the new (degraded) raid-5, however long to copy your data, and
then a long time to hot-add the third drive.

of course, the advantage is that even though it takes a long time for the
hot-add to complete, it is running in the background so the machine can be up
and running as normal (but slower for the duration).

actually, downtime for both ways of doing it is about the same (time to
mkraid/mkfs and either copy or restore your data).  the difference is that
doing it from scratch, the job will be finished as soon as you've restored, but
with array juggling it won't be finished until the entire 200GB drive is synced
with the rest of the array.
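
for reference, the from-scratch route is roughly this (device names
hypothetical, mdadm syntax shown as an illustration - the raidtools
equivalent is an /etc/raidtab plus mkraid):

```
# back up your data first!
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda1 /dev/hdc1 /dev/hde1
mkfs -t ext3 /dev/md0
mount /dev/md0 /data
# ...then restore your data, and you're done
```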



FYI, on one of my boxes (P3-933, 512MB RAM) it took about 9 or 10 hours to
hot-add an 80GB drive (seagate barracuda 7200rpm, 8MB cache) in the background.
the machine was running as normal but was quite slow until it finished.  the
entire operation worked perfectly.

it wouldn't have taken anywhere near that long to just copy 80GB of data from
one drive to another, so the parity calculations must be really slowing it
down.

craig

PS: i wouldn't recommend software raid 5 if you care about performance.  i am
going to convert one of my raid-5 machines (4 x 80GB barracudas) to raid-1 (2 x
200GB barracudas) very soon because i'm unhappy with the performance(*)...if i
had a spare approx $600AUD, i'd buy an IDE raid card with at least 32MB
non-volatile cache memory and that would give me raid-5 with decent
performance, but it's just not worth that much to me for a workstation.

(*) also because it gives me the 4 x 80GB drives to use in other machines :)

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: q re transferring mysql db from redhat to debian

2004-07-27 Thread Craig Sanders
On Tue, Jul 27, 2004 at 09:00:58AM -0400, Fraser Campbell wrote:
 On July 27, 2004 03:58 am, Henrik Heil wrote:
  The record_ids will stay the same with mysqldump.
  What makes you think they will not?
 
 I have seen problems with this.  The existing auto-incremented fields were 
 just fine but new ones were a little bit off.  In a normal mysqldb if you 
 have a single record with id 1 and delete it then add another record the new 
 record will get id 2 (not filling in the missing 1).  I've seen a case that 
 after a mysqldump and restore the new records did not have that 
 behaviour; missing ids were reused.  I'm sure that I did something wrong 
 with the dump but in that case it was not important so I didn't research it 
 further.

that's bizarre...and could easily lead to a hopelessly corrupted database when
other tables refer to that id field.

how are you supposed to restore a mysql db from backup then?


craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: q re transferring mysql db from redhat to debian

2004-07-27 Thread Craig Sanders
On Wed, Jul 28, 2004 at 11:39:40AM +1000, Kevin Littlejohn wrote:
 that's bizarre...and could easily lead to a hopelessly corrupted database
 when other tables refer to that id field.
 
 how are you supposed to restore a mysql db from backup then?
 
 Two answers:
 
 1) Why are you relying on the auto_increment field to increment from highest
 point each time?  So long as it gives you a unique value (and it should
 always do that), it shouldn't matter if it's re-using an old value (if it
 does, you shouldn't have deleted the old value...). 

i'm not.  i was just curious.

btw, sometimes it does matter if record ids are re-used.  e.g. one reason not
to re-use id numbers is if it's a search field on a web database.  if someone
bookmarks a particular search (e.g. for id=99) then returning to that bookmark
should either return the same record or it should return no such record if it
has been deleted.  it should never return a completely different record.

actually, this is true for any kind of app, not only for web databases.  e.g.
if your sales staff are used to entering product ids from memory, or if your
customers quote their customer ID, this can lead to serious confusion or
problems.  at best, some time will be wasted sorting out the mess.  at worst,
the wrong product may be shipped or the wrong customer may be billedor the
wrong medical records may be referred to when consulting with a patient.

in short, unique IDs need to be unique forever(*), not just unique for the
present moment.

(*) or at least a reasonable facsimile of forever :)

 Certainly, if you're referring to those IDs elsewhere, and you've 
 deleted the record it was referring to, good database design would be to 
 not leave the references lying around, imnsho.

true enough.

more to the point, good database design wouldn't LET you leave them lying
around.  note: i mean database design here, not application design or schema
design.  i mean the database engine itself should not allow this to happen, it
is not something that can or should be left up to the application to enforce,
it has to be enforced by the database engine itself.



 2) You can set the point to increment from, in a fairly hackish way, by 
 doing a alter table tbl_name auto_increment = x where x is the highest 
 number in use.  Requires scripting around your backup/restore process, 
 unfortunately.

no big deal.  some scripting is almost inevitable in database backup and restore.
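
e.g. something along these lines wrapped around the restore (db/table/column
names hypothetical):

```
mysql mydb < dump.sql
max=$(mysql -N mydb -e 'SELECT MAX(id) FROM customers')
mysql mydb -e "ALTER TABLE customers AUTO_INCREMENT = $((max + 1))"
```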

 With regard 1, the actual definition of auto_increment doesn't preclude
 re-use of numbers as far as I know, so if you're relying on it not to, you've
 got broken code anyway.  That means the mysqldump is doing the correct thing,
 according to spec for auto_increment - there's no requirement in there to
 retain the highest number.  The name of auto_increment is misleading,
 obviously ;)

ok. works as designed - it's not an implementation bug it's a design bug :)


 With regard Craig's comment, if your database leaves hanging references to
 non-existant data around, you've got a broken database, whether you've
 realised it yet or not.

true, i didn't think about that at the time.  it was just my initial reaction
to the idea that there was weirdness with restoring a mysql dump.  since
dumping to text (or other re-importable format) is the only good way of backing
up a database, it seems like a major problem...being able to *reliably* back up
and restore a database is, IMO, an essential feature of any database.  you need
to be certain that what you will restore is *identical* to what you backed up.

whether it actually is a major problem or not, i don't know.  that's why i was
asking.  the alter table workaround you mentioned seems reasonable.


OTOH, since mysql doesn't actually do transactions(*) or check referential
integrity, it's quite possible to have such references in the db.  and in this
case, an import like this will convert dangling references which point to
non-existent records into references that point to records that actually exist
(but aren't the right ones).

(*) yes, i know about innodb...but hardly anyone actually uses it because that
means giving up the only feature that mysql users (mistakenly) care about - raw
speed.  not that mysql is actually any faster in the real world with multiple
simultaneous readers and writers, but that's the mythology.

 General note:  We make a policy of using auto_increment _only_ to create
 sequence tables, which we manage ourselves.  This is in line with postgres
 and oracle's use of sequence tables, and makes porting easier.  We don't
 bother with ensuring that the next ID is higher than all previous ones - as
 long as they're unique, that's sufficient, any references to a defunct entry
 are removed when the entry is removed.

postgres sequences (and serial fields) are what i'm used to.

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home



Re: Outlook and Qmail

2004-07-26 Thread Craig Sanders
On Mon, Jul 26, 2004 at 06:05:33PM +0200, David Zejda wrote:
 dunno.  large messages obviously aren't the ONLY factor, it's a combination
 of factors - one of which is that the message is large.
 
 I have a similar (sometimes, large messages, dialup) problem with OE +
 Postfix.

postfix doesn't do POP, that's the job of whatever POP daemon you're using.

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: Outlook and Qmail

2004-07-23 Thread Craig Sanders
Kris Deugau wrote:
 Anil Gupte wrote:
  I am having a problem with one of my customers who is using Outlook
  2000 SP-3 to connect to our Qmail server.  When downloading messages
  from his POP account, Outlook will hang.  It is most likely a
  corrupted message, since he can delete the messages using a webmail
  interface, and then continue to download messages.
 
 This has happened across Novell IMS, qpopper, UW ipop3d, and Teapop. 
 (In fact, that one Hotmail-originated message that *always* hung OE did
 so across all but qpopper (which was not in use at the time) *every*
 time.)  Examining the raw message in the mailbox has turned up
 absolutely NOTHING any time I've met this.  :(

yep.

it happens with any MTA and any POP daemon.  that's because the problem is not
in the message, the MTA or the POP daemon.  it's in outlook.

  Has anyone run into this problem?  I know at least one other ISP
  having the same problem with some of his customers, but we have not
  found a solution yet.  Any pointers will be appreciated.
 
 The only thing I (or my boss) could ever even vaguely point to as a
 cause for the problem was OE's handling of attachments while it's
 downloading the message.  We never found a real solution, except
 Don't do that.  (ie, Warn people not to send you big attachments)

almost right.

the problem is that outlook is broken.  it's broken in many ways but this
specific problem is due to the fact that outlook locks up when downloading
large messages.   it doesn't have to be an attachment, if the message is too
large, then outlook will hang.  i don't recall exactly what the definition of
large is, but in my experience even medium-length messages will trigger the
bug.

the only solution is to use a decent mail client.  point customer at mozilla
thunderbird (IIRC there *IS* a windoze version) - nice GUI mail client without
outlook's stupid bugs and without outlook's stupid security holes.  and it's
free.  if they don't like thunderbird there are many others to choose from, but
the Golden Rule is "Anything But Outlook!".


alternatively, get used to occasionally having to manually delete large
messages from the mailboxes of people who use outlook.
 
craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: Outlook and Qmail

2004-07-23 Thread Craig Sanders
On Fri, Jul 23, 2004 at 11:12:06AM -0400, Kris Deugau wrote:
 Craig Sanders wrote:
  the problem is that outlook is broken.  it's broken in many ways but
  this specific problem is due to the fact that outlook locks up when
  downloading large messages.   it doesn't have to be an attachment,
  if the message is too large, then outlook will hang.  i don't recall
  exactly what the definition of large is, but in my experience even
  medium-length messages will trigger the bug.
 
 Mmmh..  If it's inherently Outlook/Outlook Express, why do I have 3 or 4
 customers who seem to spend their time sending and receiving ~5-7M video
 files by email?

dunno.  large messages obviously aren't the ONLY factor, it's a combination of
factors - one of which is that the message is large.

 I've yet to find any one consistent This WILL cause a problem factor,
 although Outlook/OE are more likely to have trouble, and single large

neither have i, but since it only ever happens on Outlook and OE, it is an
outlook problem.  the bug may or may not be deep within windows, i don't know,
but only outlook ever triggers it.

the closest i have come to finding a cause is the observation that it happens
on large messages but never on small ones.  my guess is that some buffer used
for POP downloading is overflowing.

  the only solution is to use a decent mail client.  point customer at
  mozilla thunderbird (IIRC there *IS* a windoze version) - nice GUI
  [...]
  Anything But Outlook!.
 
 Indeed.  One minor advantage I've found to Outlook Express (please note, very
 definitely *NOT* Outlook!) is that it does a *very* tidy job of

it's just as broken as Outlook, it still has stupid bugs, and it is a security
disaster.

  alternatively, get used to occasionally having to manually delete
  large messages from the mailboxes of people who use outlook.
 
 Or directing them to the webmail interface and letting them sort out
 their own mail.  g

that works too.

 -- 
 Get your mouse off of there!  You don't know where that email has been!

with outlook, you don't even need to click on a message for a virus to install
itself.

the answer is still "Don't use outlook".

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: Trusting Backports and unofficial Repositories

2004-07-19 Thread Craig Sanders
On Sun, Jul 18, 2004 at 01:20:50PM +0200, Philipp wrote:
 looking for a solution i came across apt-get.org and the unofficial
 repositories and backports they offer. now heres my question: 
 would you trust these archives for you production servers ? 

probably not.


 1) Are you using unofficial repositories on production servers ?

no, i run unstable on several dozen production servers without a problem.  i
find that doing that is an excellent way of both keeping software up-to-date
and also keeping several months ahead of the script-kiddies.  i upgrade, on
average, once or twice a month by first upgrading my workstation (which
generally has the same packages as the servers for testing and development)
then, if that goes well, by upgrading the servers in priority of importance
(least important servers first - by the time i get around to upgrading the
really important servers i've gone through that particular upgrade over 20
times so any minor tweaks or adjustments that are needed are semi-automatic).

i also usually upgrade the core packages for each server individually (i.e.
apt-get install package1 package2 ... packageN rather than apt-get
dist-upgrade) before doing a full dist-upgrade - e.g. for a mail server i
upgrade postfix and all other core mail packages first, for a database server,
i upgrade postgres first, for a web server, i upgrade apache etc first.

i do this to a) minimise downtime of the core functions (i.e. to make sure that
the packages are restarted very quickly instead of waiting for hundreds of
other packages to be configured and restarted); and b) to minimise any problems
- the fewer packages upgraded at any one time, the less chance of a problem and
the easier it is to notice and deal with it immediately.
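
e.g. on a mail server the sequence looks like this (package names
illustrative):

```
apt-get update
apt-get install postfix postfix-tls sasl2-bin   # core mail packages first
apt-get dist-upgrade                            # then everything else
```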

point b above also avoids one major problem with running stable, which is that
every few years you do a major upgrade from the previous stable release to the
new one.  at that time you have a couple of years of cruft and configuration
changes to hundreds (if not thousands) of packages to tweak (or completely
rewrite/reconfigure), all at once and probably with users screaming at you
while you're working on it.  by regularly upgrading unstable, you get to deal
with the same issues in much smaller pieces, one or two at a time rather than
all at once.


i really don't see the point of stable+backports - installing backports defeats
the original purpose of running stable.  it's like saying "i'll have a black
coffee...but with a little bit of cream"(*), so you may as well run unstable.
at least with unstable, you know the package is done by the official debian
package maintainer, that it is of a high enough standard to get into the debian
archive, and that all the usual debian infrastructure (incl. bugs.debian.org)
is there to support it.  you also get a package that is tested by hundreds or
thousands of people who use unstable rather than the handful that use stable +
backports (or worse, you're the ONLY person with YOUR exact combination of
stable plus other packages).

(*) no matter how nice it is, it's not a black coffee any more.

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: configure squid to cache sites

2004-07-06 Thread Craig Sanders
On Tue, Jul 06, 2004 at 11:29:04AM -0600, Lucas Albers wrote:
 Thought I would share my squid configuration to allow caching of
 windowsupdate/mcafee and similar for clients.
 Needs ims config to work correctly.
 Sure saves bandwidth, and vastly speeds up updates, for windows clients.
 Not a transparent configuration.
 http://www.mail-archive.com/debian-user@lists.debian.org/msg107772.html

a useful set of refresh_patterns for squid.

there was one typo (*. rather than .*) in the first regexp, and three of
the regexps could be re-written in a more generic form, so that they're not
tied to particular versions of the service packs.

also, a literal . should always be written as \. in a regexp, otherwise it
matches *any* character.
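
a quick way to convince yourself (plain grep, nothing squid-specific):

```shell
# with the dot unescaped, the pattern also matches hostnames where that
# position is NOT a dot; escaped, it matches only a literal dot.
echo 'windowsupdateXmicrosoft.com' | grep -q 'windowsupdate.microsoft\.com' \
    && echo 'unescaped dot: matches'
echo 'windowsupdateXmicrosoft.com' | grep -q 'windowsupdate\.microsoft\.com' \
    || echo 'escaped dot: no match'
```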


# refresh patterns to enable caching of MS windows update
refresh_pattern http://.*\.windowsupdate\.microsoft\.com/ 0 80% 20160 reload-into-ims
refresh_pattern http://office\.microsoft\.com/0 80% 20160 reload-into-ims
refresh_pattern http://windowsupdate\.microsoft\.com/ 0 80% 20160 reload-into-ims

# the next two can be rewritten as one regexp, which should also match other
# SP versions.
#refresh_pattern http://wxpsp2\.microsoft\.com/   0 80% 20160 reload-into-ims
#refresh_pattern http://xpsp1\.microsoft\.com/0 80% 20160 reload-into-ims
refresh_pattern http://w?xpsp[0-9]\.microsoft\.com/   0 80% 20160 reload-into-ims

# ditto for the next one.
#refresh_pattern http://w2ksp4\.microsoft\.com/   0 80% 20160 reload-into-ims
refresh_pattern http://w2ksp[0-9]\.microsoft\.com/0 80% 20160 reload-into-ims

refresh_pattern http://download\.microsoft\.com/  0 80% 20160 reload-into-ims

# and some other windows updaters
refresh_pattern http://download\.macromedia\.com/ 0 80% 20160 reload-into-ims
refresh_pattern ftp://ftp\.nai\.com/  0 80% 20160 reload-into-ims
refresh_pattern http://ftp\.software\.ibm\.com/   0 80% 20160 reload-into-ims


craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home






Re: Which Spam Block List to use for a network?

2004-06-26 Thread Craig Sanders
On Sat, Jun 26, 2004 at 06:34:53PM +1000, Russell Coker wrote:
 On Thu, 24 Jun 2004 11:58, Jason Lim [EMAIL PROTECTED] wrote:
   most ISPs (and mail service providers like yahoo and hotmail), for
   instance, will never have SPF records in their DNS.  they may use SPF
   checking on their own MX servers, but they won't have the records in their
   DNS.  their users have legitimate needs to send mail using their address
   from any arbitrary location, which is exactly what SPF works to prevent.
 
 If someone wants to use a hotmail or yahoo email address when sending email to 
 me then they will use hotmail/yahoo servers to send it.  My mail server will 
 prevent them doing otherwise, and has been doing so since before SPF started 
 becoming popular.

doesn't matter.  hotmail and yahoo are only two domains out of millions that
will never have SPF records in the DNS.  some because the domain owners are
lazy and/or ignorant, some (like debian.org) because they have a legitimate
need to send mail from so many locations that it is impossible to specify all
allowed hosts.



  I feel SPF is not going to be implemented many places not because people
  don't want to reduce spam, but because SPF just won't work in many cases.
  In fact, depending on how you look at it, it doesn't reduce spam at ALL
  (phishing is certainly bad, but that is a separate problem).
 
 If it stops people from joe-jobbing me then that's enough reason to have it.

that's a reason for you to have SPF records (well, it will be if/when enough MX
servers implement SPF checking...in the meantime, it doesn't hurt to have
them).  like me, you *can* have SPF records for your domain because you *can*
list all the hosts allowed to send mail claiming to be from your domain.  that
just isn't the case for many domains.

that is why SPF will never be a generic anti-spam tool.  it is a
tightly-focussed anti-forgery tool of very limited use.

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Which Spam Block List to use for a network?

2004-06-24 Thread Craig Sanders
On Thu, Jun 24, 2004 at 08:46:20AM -0400, Mark Bucciarelli wrote:
 On Thursday 24 June 2004 08:17, Kilian Krause wrote:
  Hi Mark,
 
  Am Do, den 24.06.2004 schrieb Mark Bucciarelli um 14:06:
   I'm pretty sure this is incorrect.  SPF checks the MAIL-FROM: header,
   not From:, so I think this case should work fine ...
 
  so you mean this will also cut down the secondary spam through mailinglists
  (which have a proper SPF most probably). 
 
 No.  I meant that I send my domain mail through my ISP's SMTP server and I
 can setup my domain's DNS txt record so this works with SPF.

yes.  SPF is useful for small domains, including small businesses, SOHO, and
vanity domains.  it's also useful for corporations that have mail gateways
through which ALL of their outbound mail is supposed to pass.

it's not much use in any other circumstance.

e.g. i have SPF records in my home domains.  it is appropriate to have them
there because i *KNOW* with absolute 100% certainty which hosts are allowed to
send mail claiming to be from those domains.  i also have them because the cost
of having them is negligible (a few minutes of time to create them) even if
there aren't many mail servers which actually check them (hopefully that will
change in future) - in other words, they're not much use at the moment but it
didn't cost me much to publish the SPF TXT records.
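for what it's worth, such a record is just a TXT entry in the zone file; a
minimal sketch (example.com and the address are placeholders):

```
example.com.   IN   TXT   "v=spf1 a mx ip4:192.0.2.25 -all"
```

which says: the host at the domain's A record, its MX hosts, and one extra
address may send mail claiming to be from example.com, and everything else
should fail the check.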

i don't have SPF records in any of the thousands of domains on my name-server
at work (an ISP) because i do not and can not know which hosts should be
allowed to send mail claiming to be from these domains.

 [BTW, debian.org does not have an SPF entry.]

nor should it.  there are over a thousand @debian.org addresses, belonging to
over a thousand people, all of whom use their own internet connections to send
mail.  it would be impossible to specify all the hosts allowed to send mail
claiming to be from @debian.org.

as mentioned before, SPF is only useful where the owner of a domain can define
exactly which hosts are allowed to send mail claiming to be from that domain.
as you correctly deduced earlier (but incorrectly dismissed), it IS a very
small percentage of domains which can do this.

for every domain that can have SPF records, there are tens of thousands that
can't...and for every domain that actually does have them, there are millions
that don't.  that will always be the case.  SPF is not useful as a generic
anti-spam/anti-virus tool.  it is a specifically focused anti-forgery tool with
a very limited and small set of domains where it can be used.

sorry to burst your bubble, but wishful thinking won't make it any different.

craig

ps: more on SPF records for debian.org... it's a good idea to think about the
consequences of any action *BEFORE* doing it.  jumping on the bandwagon just
because it's fashionable or because it's all shiny and new is stupid.


-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: Which Spam Block List to use for a network?

2004-06-23 Thread Craig Sanders
On Wed, Jun 23, 2004 at 12:05:57PM -0300, Yves Junqueira wrote:
 SPF is a proposed standard.
 http://www.ietf.org/internet-drafts/draft-mengwong-spf-00.txt
 Even Microsoft seemed to drops its CallerID proposal in favor of SPF.
 Check spf.pobox.com
 
 On Wed, 23 Jun 2004 11:45:40 +0200, Niccolo Rigacci [EMAIL PROTECTED] wrote:
 
  Please correct me if I'm wrong; I'm searching for RFCs which
  propose effective ways to block spam and viruses.

SPF isn't a very effective tool for blocking spam or viruses.  it is a tool for
preventing some kinds of forgery.  it is useful where the owner of a domain can
strictly define which hosts are allowed to send mail claiming to be from their
domain.  it is not useful otherwise.  

this means it is very useful for, say, banks and other corporations to
prevent/limit phishing style scams.  it is also useful for small businesses and
home vanity domains.  it is not useful as a general anti-spam/anti-virus tool
because spammers and viruses can just forge addresses in any of the millions of
domains that don't have (and never will have) SPF records.

most ISPs (and mail service providers like yahoo and hotmail), for instance,
will never have SPF records in their DNS.  they may use SPF checking on their
own MX servers, but they won't have the records in their DNS.  their users have
legitimate needs to send mail using their address from any arbitrary location,
which is exactly what SPF works to prevent.

SPF is useful and a *part* of the solution for *some* of the problem.  it is
not a magic bullet.

craig



PS: (standard quote information file)

please learn to quote properly. your reply goes UNDERNEATH the quoted
material, not above it. this allows the quoted message to be read in
sequential order rather than reverse chronological order.

top-posting screws up the chronological order of the replies making it a
jarring chore to make sense of them - you have to scroll backwards and
forwards trying to match who said what to whom and when.

the longer a thread goes on, the worse it gets.

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: Which Spam Block List to use for a network?

2004-06-23 Thread Craig Sanders
On Wed, Jun 23, 2004 at 11:45:40AM +0200, Niccolo Rigacci wrote:
 On Wed, Jun 23, 2004 at 09:56:02AM +1000, Craig Sanders wrote:
   You want to block spam or viruses, this is OK but you are on the
   wrong way.
  
  no, it's absolutely the right way.  a large percentage of spam and
  almost all viruses come direct from dynamic IP addresses.
 
 I repeat for the last time: the fact that your block is effective
 for your problem does not mean that you are on the right way.

i'm so glad it's the last time.  it's very tiresome when someone
is both wrong and repetitive.

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home




Re: Which Spam Block List to use for a network?

2004-06-22 Thread Craig Sanders
On Mon, Jun 21, 2004 at 12:46:01PM +0200, Francisco Borges wrote:
 ? On Sat, Jun 19, 2004 at 08:15:11AM +, Adam Funk wrote:
 
  On Friday 18 June 2004 15:40, Francisco Borges wrote:
 
   THE QUESTION:
  
   We need to use some form of Block List at the connection level,
 
  Whatever you do, don't be one of those ignorant, asinine admins who
  block mail from all dynamic IPs.
 
 No, I don't intend to do that.

yeah, good decision.  blocking mail from dynamic/dialup IP addresses is the
right thing to do, but it's much better to be an informed, intelligent and
suave admin who does that than an ignorant, asinine one (but that's true of
everything, isn't it?).


 Interestingly enough, *today* I got a note from a colleague who has started
 doing it on his network.

smart colleague.

 I don't know the exact number by heart but we are above 1500 users here;
 blocking dynamic IPs would be a disaster.

permit your own dynamic/dialup IP addresses, same as you (should) do with other
restrictions (e.g. rejecting non-fqdn hostnames...good thing to block from
external sources, but not a good idea to block from your own users).

reject other dyn/dialups - they should use their own ISP or mail server.

in postfix, you do that by putting the permit_mynetworks rule *before* the
reject_rbl_client  rule.

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: Which Spam Block List to use for a network?

2004-06-22 Thread Craig Sanders
On Tue, Jun 22, 2004 at 09:04:03PM -0400, Blu wrote:
 On Wed, Jun 23, 2004 at 09:56:02AM +1000, Craig Sanders wrote:
  On Tue, Jun 22, 2004 at 11:37:41AM +0200, Niccolo Rigacci wrote:
   You want to block spam or viruses, this is OK but you are on the
   wrong way.
  
  no, it's absolutely the right way.  a large percentage of spam and
  almost all viruses come direct from dynamic IP addresses.  block
  mail from them and you instantly block most of the problem.
 
 And you block a lot of legitimate email too.

actually, almost none.

the number of geeks who want to run their own mail server from a dynamic IP
address is vanishingly small.  the number of false positives from blocking
dynamic IPs is not just lost in the noise of all the spam and viruses coming
from dynamics, it is completely indistinguishable from noise.  far less than 1
in a million messages.

a very small price to pay to block an enormous quantity of spam and viruses,
especially when those legitimate mailers who are affected can, if they could be
bothered, work around it quite easily and cheaply.


 In my server, my policy is to reject mail from hosts which are blocking
 me. 

good for you.  your server, your rules.  sounds like a stupid thing to do, but
you are entirely within your rights to do so.

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: Which Spam Block List to use for a network?

2004-06-22 Thread Craig Sanders
On Tue, Jun 22, 2004 at 11:37:41AM +0200, Niccolo Rigacci wrote:
 You want to block spam or viruses, this is OK but you are on the
 wrong way.

no, it's absolutely the right way.  a large percentage of spam and
almost all viruses come direct from dynamic IP addresses.  block
mail from them and you instantly block most of the problem.

 I work for a firm and we have about 150 Debian servers installed
 at customers' sites; they are connected via adsl. The IP ranges
 are owned by the largest Italian provider and they are listed as
 dynamic ones, despite the fact that they are assigned in a static
 way. Our customers run their own mail server with SMTP, POP3,
 IMAP, and webmail.

1. 150 customers may be a large enough block to get the ISP to allocate IP
addresses from a different block.

2. if you're using dynamic ip addresses because it's a cheaper option than
static, then you've just discovered that if you pay for a lower-quality service
then that is what you get.

3. if the IP addresses really are statically assigned and just happen to be in
a netblock that whois claims is dynamic, then most dialup RBLs will adjust
their zone file if the ISP provides some proof of that fact.

 You have to explain to me why you are blocking their mails.

no, he doesn't.  his mail server, his rules.  

 You also have to explain to me why do you want to force them to use a smart
 host for their outgoing mails.

again, he doesn't have to explain but it is because they have dynamic IP
addresses and dynamic IP addresses should not attempt to deliver mail direct to
destination.

 They have purchased bare adsl connectivity, why do you want to force them
 to also purchase smtp service from an ISP?

they do not have to purchase smtp service from an ISP.

you have 150 debian boxes there.  that's 150 people to share the cost of a
co-located host (available for approx $50 US per month - or even less.  i.e.
less than 33 cents each per month).  all of those 150 boxes could use the
co-located machine as the smart host.  end of problem.  no reliance on some
ISP's crappy mail server, no DNSRBL listing due to dynamic IP addresses.  it
can act as outbound relay and optionally as inbound MX (although that's not a
good idea unless you can keep the local recipients list for all 150 machines up
to date on the co-lo box)

use tunnels, uucp-over-tcp, smtp auth, SSL certificate based relaying or any
one of a dozen other methods to allow your 150 mail servers to relay through
the co-lo box without being an open relay.
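as one sketch of the smtp-auth option, the client side in postfix might look
something like this (relay.example.net and the map path are placeholders):

```
# main.cf on each customer box: relay all outbound mail through the
# co-lo host, authenticating with SASL
relayhost = [relay.example.net]
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
```

where /etc/postfix/sasl_passwd maps the relay host to a username:password
pair and is compiled with postmap; the co-lo box then permits authenticated
clients to relay.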

if you don't have the skill, or couldn't be bothered doing what it takes to
make it work, then you really shouldn't be operating mail servers on the public
internet.

 You are following a nonexistent cause-effect link and you are wasting your
 time. For a virus writer it is a matter of an hour to change his code to post
 to the isp's smtp server instead of posting directly. 

virus writers don't do this for the same reasons that spammers don't.  that is
partly because they'd have to customise their virus for each individual ISP,
but mostly it is because ISPs keep track of mail flowing through their servers.
an ISP's mail server is an excellent place to block viruses - they can and do
run AV scanners and rate-limiters.

in any case, if virus writers did this then that would be a good thing.  we
want ISPs to take responsibility for their customers use and abuse of the net.

 Now you have a huge infrastructure (dynaddr lists) that is perfectly useless
 and does big harm to the network.

no, they're not doing harm.  they're doing a good job of enabling those who do
not want to accept mail from dynamic/dialup IP addresses to automatically
reject mail from those sources.

nobody is forcing you to block mail on those criteria, but you also have no
right to prevent other people from rejecting mail from THEIR servers for that
reason - or even to whine about it.  their server, their rules.

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home




Re: Adjusting MTU

2004-05-18 Thread Craig Sanders
On Wed, May 12, 2004 at 02:40:10PM -0400, Ryan Tucker wrote:
 I'm trying to find the Debian Way to adjust the MTU on an interface...
 basically, we have a box behind a firewall which is blocking the ICMP Can't
 Fragment packets, and we're sending fairly large data packets through, and,
 well, the obvious problem occurs.  They can put a little firewall on the LAN
 which has an MTU adjustment (and sends the packets back), which fixes the
 problem nicely, but that's kinda a hack.

the correct solution is to throw out the broken firewall and replace it
with something that wasn't made by brain-damaged cretins.


if you need some proof to get management to throw away the firewall that they
wisely decided to waste lots of money on, then see the "Blocking ICMP"
section of the "Common ISP Mistakes" document:

http://www.freelabs.com/~whitis/isp_mistakes.html

this document is fairly old now but is still very relevant - it should be
required reading for all ISP tech and management staff.
 


See also "Broken PMTU causes slow networks":

http://www.burgettsys.com/stories/56239/

and "PMTU Discovery":

http://www.netheaven.com/pmtu.html


craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: help with PHP/SQL

2004-05-18 Thread Craig Sanders
On Thu, May 13, 2004 at 09:58:35PM -0400, ziada Mrisho wrote:
 I got your e-mail address from a forum thread on SQL Help. I desperately need
 help for my class project. I need to know how to insert an image in a
 database table. When I issue a CREATE table command, what attribute
 represents an image file? eg:
 
 CREATE TABLE SClassTable (
   ?image _? NOT NULL,
   description VARCHAR (64) DEFAULT NULL,
   reference INT (5) DEFAULT NULL,
   price VARCHAR(15),
   PRIMARY KEY (reference),
   KEY (description)

it's not a good idea to store images in databases.  you're much better off
storing the image in the filesystem and using the database to store metadata
about the image, including description, title, copyright details, and
especially the path and/or URL to the image.
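one way the table from the question could be restructured along those lines
(sqlite is used here only to keep the sketch self-contained; table and column
names are illustrative):

```python
import sqlite3

# the image file itself lives on disk; the database holds only
# metadata plus the path (or URL) pointing at it
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE images (
        reference   INTEGER PRIMARY KEY,
        description VARCHAR(64),
        price       VARCHAR(15),
        path        VARCHAR(255) NOT NULL   -- filesystem path or URL
    )
""")
conn.execute(
    "INSERT INTO images (reference, description, price, path) "
    "VALUES (?, ?, ?, ?)",
    (1, "widget photo", "9.95", "/var/www/images/widget.jpg"),
)

# the application fetches the path and serves the file from disk
path, = conn.execute(
    "SELECT path FROM images WHERE reference = 1"
).fetchone()
print(path)   # /var/www/images/widget.jpg
```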

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: Fixed (hardisk) device names?

2004-04-01 Thread Craig Sanders
On Thu, Apr 01, 2004 at 09:06:33AM +0200, Arnd Vehling wrote:
 And why doesn't the bootblock get copied when using identical discs and making
 a dd if=/dev/hda of=/dev/hdb?

it does.

craig

-- 
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that Regime change begins at home





Re: Fixed (hardisk) device names?

2004-03-31 Thread Craig Sanders
On Wed, Mar 31, 2004 at 07:54:19AM +0200, Arnd Vehling wrote:
 does anyone know how to fix the device name on a debian linux
 system? For example. If i have two IDE hardisks, the devices will
 be named like this.
 
 /dev/hda
 /dev/hdb
 
 If i now must remove the first harddisk (/dev/hda) the second (/dev/hdb)
 will be renamed to (/dev/hda) after the reboot. As i want /dev/hdb to be
 a mirror of /dev/hda and used as failover disk _without_ opening the
 case and tampering with the IDE bus setup, i want linux to keep the name
 /dev/hdb for the drive no matter what happens.

huh?

that's EXACTLY what linux does for IDE drives.  the slave drive on the primary
IDE controller will *always* be /dev/hdb, regardless of whether there is a
master drive or not.

/dev/hda  - master drive on primary IDE controller
/dev/hdb  - slave drive on primary IDE controller
/dev/hdc  - master drive on secondary IDE controller
/dev/hdd  - slave drive on secondary IDE controller

 Is this possible?

it's standard.

 Another question. How can i copy two identical discs _including_ the boot
 block? dd if=/dev/hda of=/dev/hdb doesnt do it 

don't use dd for that.  set up a raid-1 mirror instead.  it's easy to do, only
about 5 minutes work.

also, for performance and safety, put your second drive on a separate IDE
controller.  that way it will still work even if one IDE controller fails.
e.g. have /dev/hda (primary IDE master) and /dev/hdc (secondary IDE master)
rather than /dev/hda  /dev/hdb.
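a rough command sketch of that setup with mdadm (device and partition names assume one "Linux raid autodetect" partition per disk; raidtools with /etc/raidtab was the other common route at the time):

```shell
# mirror the master disk on each IDE controller
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
mkfs -t ext3 /dev/md0         # or your preferred filesystem
mdadm --detail /dev/md0       # check both members are active and syncing
```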

 and there are no raw devices on linux AFAIK.

/dev/hd? ARE the raw devices.   

craig






Re: Which SATA RAID controller?

2004-03-23 Thread Craig Sanders
On Tue, Mar 23, 2004 at 07:08:57PM +0100, Marc Schiffbauer wrote:
 * Marcin Owsiany schrieb am 23.03.04 um 18:10 Uhr:
  Hi!
  
  I need to choose between:
   - 3Ware Escalade 8006-2LP
   - Promise Fast Track S150 TX4
  
  The Fast Track is a little cheaper, and has 4 interfaces (3Ware only 2).
  Is there any good reason to choose 3Ware?
 
 IMO 3ware are the only reasonable RAID-Controllers for (S)ATA under
 Linux. 

anyone have any opinions about the adaptec 2400 (ATA) or 2410 (SATA)?

they have driver support in 2.4.x and 2.6.x kernels - no idea how good, though.

unlike the 3ware cards (or any other IDE/SATA raid cards i've heard of), they
do have a large (128MB) write-cache - which is essential for raid-5
performance.

craig



Re: apt-get upgrade or .tgz

2004-03-03 Thread Craig Sanders
On Wed, Mar 03, 2004 at 09:03:51AM -0500, Andrew P. Kaplan wrote:
 I have an old version of Postfix running on my Debian box. I don't remember
 if I used apt-get or installed from a .tgz file. If I use apt-get install I
 am concerned I could end up with two version of Postfix. What's the best way
 to upgrade.

somebody else already posted some ideas on how to tell whether it is a package
or not.  useful info:

   dpkg -l postfix*


if it's not a package, the best way to upgrade is to back up your postfix
config, delete the .tgz install of postfix, and then apt-get install the latest
postfix packages.  then you never have to worry about it again.

if it is a package then just use apt-get to upgrade postfix.

craig



Re: qmail or postfix? (was: RE: What is the best mailling list manager for qmail and Domain Tech. Control ?)

2004-02-27 Thread Craig Sanders
On Tue, Feb 24, 2004 at 03:29:04PM +0100, Thomas GOIRAND wrote:
 - Original Message - 
 From: Craig Sanders [EMAIL PROTECTED]
  On Thu, Feb 19, 2004 at 09:34:52PM +0100, Bjørnar Bjørgum Larsen wrote:
 
  4. the configuration is truly bizarre.bernstein has his own
  non-standard directory structures, and a liking for many little files.
  many of these files are 'magical' - the contents are irrelevant, mere
  existence of them alters behaviour of the program, and even causes programs
  to be run automagically.
 
  this makes it impossible to experiment by temporarily commenting out
  particular lines - you have to delete a file, and then hope you can
  remember what it was called if you need to re-enable that feature.
 
 I disagree on that. I've found qmail's config file a lot more efficient than
 one stupid unix file,

fine.  you have every right to be wrong.




 Can someone write here an easy understandable configuration for
 Postfix with virtual domains ? After some call for help here, none of
 you that know Posfix did it...

sorry, but it's not our problem if YOU can't understand clear and simple
instructions or concepts.

virtual domains are a well-documented part of postfix, and have been for years.


  5. bernstein likes to reinvent the wheel.  he does this (and does it badly)
  without regard to whether the wheel actually needs to be reinvented or not
  (e.g. ucspi-tcp).
 
  this is compounded by the fact that it is a complete PITA to use any of his
  programs without using all of his programs.
 
 I disagree a lot on that also. Bernstein has coded ucspi-tcp as a
 replacement for the standard tcp program. 

the program you are thinking of is called inetd (or xinetd - another version
with resource limitation controls built in).

 He has the rights to do so, and you have the rights not to use it if you like
 inetd...

of course he has the right to do so.  that is beyond question.

it was just unnecessary and stupid of him to do so.

more to the point, if he's going to reinvent the wheel he should at least try
to do a good job - a square wheel isn't any use to anyone.



 that focus on staying on unix style,

you couldn't be more wrong on this point.

his programs implement BERNSTEIN-style, not traditional unix style.  his
programs are about as different from unix style as it's possible for software
to be and still run on unix systems.


craig





Re: qmail or postfix? (was: RE: What is the best mailling list manager for qmail and Domain Tech. Control ?)

2004-02-20 Thread Craig Sanders
On Fri, Feb 20, 2004 at 08:36:08AM +0100, Adrian 'Dagurashibanipal' von Bidder wrote:
 On Thursday 19 February 2004 23.28, Craig Sanders wrote:
   On Thu, Feb 19, 2004 at 09:34:52PM +0100, Bjørnar Bjørgum Larsen wrote:
   For example, I'd like comments on
   http://homepages.tesco.net/~J.deBoynePollard/Reviews/UnixMTSes/postfix.ht
  ml
 
  a collection of lies, half-truths, and mistruths.
 
 Since Bjørnar was asking for qualified information, let's do the dance for 
 him...

well done.  you put a lot more effort in than i thought was warranted for tripe
like that.


  the best that can be said about this document is that the author doesn't
  know what he is talking about.
 
 I guess the document was written years ago, when postfix did indeed lack 
 *some* of the features people did expect (one of them being the ability to 
 reject mail instead of bounce it ;-)

actually, it is qmail and not postfix that can't 5xx reject mail.  qmail has to
accept and bounce it.  postfix has always been able to reject unwanted mail
during the SMTP session (although the relay_recipient_maps option is a
relatively recent addition for rejecting unknown relay recipient addresses).

BTW, bouncing rather than rejecting contributes significantly to the spam and
virus problem.  when a virus or spamware encounters a 5xx rejection, it does
nothing, it just ignores it and moves on to the next victim address.  when
qmail accepts and bounces such a mail, it ends up spamming the forged sender
address with unwanted bounces (which is also extra work for the qmail system
itself - serious consequences during a spammer dictionary attack)
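for reference, the reject-rather-than-bounce behaviour in postfix comes from recipient validation in main.cf; a minimal fragment (values illustrative, matching the sample config shipped with postfix):

```
# reject unknown local recipients with a 5xx during the SMTP session
local_recipient_maps = proxy:unix:passwd.byname $alias_maps
# for relay domains, list valid recipients so unknowns are rejected too
relay_recipient_maps = hash:/etc/postfix/relay_recipients
```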




   http://homepages.tesco.net/~J.deBoynePollard/Reviews/UnixMTSes/qmail.html

 | host and user masquerading, 
 | virtual users, 
 | virtual domains, 
 | users that are not in /etc/passwd, 
 | SMTP Relay being denied by default, 
 | per-host SMTP Relay control, 
 | consultation of SMTP client blacklist and whitelist databases (using 
 |   rblsmtpd from UCSPI-TCP), and  
 | an 8-bit clean SMTP server. 
 
 postfix does all of these.

but qmail doesn't do all of them.

in particular, it is not really an 8-bit clean SMTP server.  one of the
requirements for 8-bit clean-ness is that the MTA translate 8-bit bodies to
7-bit quoted-printable if the mail is being sent to a non-8-bit MTA.  qmail
doesn't bother to do this.

qmail's failure here is quite deliberate.  bernstein's intention is to cause
breakage for what he sees as obsolete systems.  fair enough, they may be
obsolete but to deliberately feed them data that you know they can't handle is
irresponsible vandalism.  it is also an extreme version of his notorious
disdain for any kind of backwards-compatibility or migration path.

see section 3.1 of http://www-dt.e-technik.uni-dortmund.de/~ma/qmail-bugs.html
and bernstein's own words on the subject: http://cr.yp.to/smtp/8bitmime.html

(in fact, the entire qmail-bugs document mentioned above is worth reading)


craig





Re: qmail or postfix? (was: RE: What is the best mailling list manager for qmail and Domain Tech. Control ?)

2004-02-19 Thread Craig Sanders
On Thu, Feb 19, 2004 at 09:34:52PM +0100, Bjørnar Bjørgum Larsen wrote:

 [3] Craig Sanders wrote:
  ps: qmail is a bad idea.  postfix is better.
 
 Your conclusion may be right, but the arguments are missing. Would you please
 share?

search the archives of this list.  MTA comparisons have been discussed many
times.  i've made the arguments several times before and i'm getting bored of
it.

to summarise:

1. because qmail is so different from other MTAs, it is a dead-end trap, just
like proprietary software.  bernstein doesn't believe in making any effort to
assist people who were using other MTAs and want to switch - migrating to qmail
is a pain, and migrating away from it is just as bad.

2. it has severe licensing problems, which mean that the code basically
stagnated years ago.  no patches are ever accepted into qmail, and the author
doesn't appear to be interested in making any improvements (in his estimation,
it is already perfect...ignoring several glaringly obvious faults and omissions).

the license means that using qmail is a reversion to the bad old days before
free software became ubiquitous - the late 1980s, for instance.  back then you
had to hunt for the original source (easy enough), then hunt for every patch
that you needed to make it useful, then apply them (and hope that the patches
are compatible...discovering by trial and error that they can be compatible
but only if applied in a particular *undocumented* order), then compile and
install it.

3. bernstein insists that you discard years of practice, tools, and techniques
and start from scratch.  if you don't like it, then you are a moron because
bernstein is Always Right so don't complain.

4. the configuration is truly bizarre.  bernstein has his own non-standard
directory structures, and a liking for many little files.  many of these files
are 'magical' - the contents are irrelevant, mere existence of them alters
behaviour of the program, and even causes programs to be run automagically.

this makes it impossible to experiment by temporarily commenting out particular
lines - you have to delete a file, and then hope you can remember what it was
called if you need to re-enable that feature.

it also means that there is no config file containing comments to serve as
working reference documentation.

5. bernstein likes to reinvent the wheel.  he does this (and does it badly)
without regard to whether the wheel actually needs to be reinvented or not
(e.g. ucspi-tcp).

this is compounded by the fact that it is a complete PITA to use any of his
programs without using all of his programs.

6. the author is a rude jerk.  this is undisputed, even by those who actually
like bernstein's software.


craig

ps: as for postfix being better - it is:

1. free software, with a real free software license (IBM public license)
2. actively developed, with a friendly principal developer and helpful
developer & user community.
3. backwards compatible with sendmail, so migration is easy
4. secure
5. fast (much faster than qmail)
6. the best anti-spam features of any MTA available
7. more features than you can poke a stick at






Re: qmail or postfix? (was: RE: What is the best mailling list manager for qmail and Domain Tech. Control ?)

2004-02-19 Thread Craig Sanders
On Thu, Feb 19, 2004 at 09:34:52PM +0100, Bjørnar Bjørgum Larsen wrote:
 For example, I'd like comments on
 http://homepages.tesco.net/~J.deBoynePollard/Reviews/UnixMTSes/postfix.html

a collection of lies, half-truths, and mistruths.

the best that can be said about this document is that the author doesn't know
what he is talking about.

 and 
 http://homepages.tesco.net/~J.deBoynePollard/Reviews/UnixMTSes/qmail.html

biased bullshit and boosterism.  rah rah rah! worship bernstein.

craig





Re: What is the best mailling list manager for qmail and Domain Tech. Control ?

2004-02-18 Thread Craig Sanders
On Mon, Feb 16, 2004 at 09:35:00PM +0100, Joris wrote:
 Majordomo is good, but I think you'd like mailman better.
 
 Web interface for both users and administrators, very configurable, etc.
 
 I'd recommend mailman too, but I have to warn about its archive function.

all list managers suck, but in different ways.

 Afaik mailman is only capable of archiving messages in Mbox format.

if only that were true.  its archiving program (pipermail) creates something
that is *almost* but not quite mbox.  to load the archive into an mbox mail
reader (e.g. mutt or elm) you have to open the file in a text editor and change
every From_ line so that it conforms to mbox format.
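one common symptom is pipermail's address obfuscation ('user at example.com' instead of 'user@example.com') in the From_ separator lines; a rough sed fix, assuming that form (inspect your own archive first -- the sample data here is fabricated for illustration):

```shell
# build a tiny sample in pipermail's style, then rewrite the separator lines
printf 'From craig at example.com  Mon Feb 16 07:17:57 2004\nSubject: test\n\nbody\n' > archive.txt
sed 's/^From \([^ ]*\) at \([^ ]*\)/From \1@\2/' archive.txt > archive.mbox
head -1 archive.mbox    # From craig@example.com  Mon Feb 16 07:17:57 2004
```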

 Yes, that same dreadful mbox format that has kept all mail related 
 applications slow for years.

you don't know what you're talking about.

there is only one circumstance where mbox is slower than maildir, and
manipulating archives is not it.  reading mail with a crappy pop daemon (like
qpopper) that copies the entire mbox to /tmp is where mbox is slower, and
that's mostly because qpopper sucks rather than mbox itself sucking.  better
quality pop daemons (e.g. cucipop or anything newer) are not noticably slower
on reasonably-sized mboxes.

where maildir shines is when you have many thousands of messages and you need
direct access to just one of them.  maildir can be much faster at that IFF
you're not on a file system that sucks (like ext2 or ext3) - you really need a
fs that doesn't suck when you have thousands of files in one directory: xfs or
reiserfs, for example.

 I've had a mailing list's archive grow over a couple 100MB's, and mailman
 started bogging down the system. (took quite a while to realise what was
 going on)

i have numerous majordomo based list archives, as well as my own personal mail
archives, all in mbox format(*).  there are no speed problems with any of them.
pipermail is broken.

(*) mbox is, IMO, a superior format for archiving.  one file per archive is
better than squillions of little files.

craig




Re: What is the best mailling list manager for qmail and Domain Tech. Control ?

2004-02-18 Thread Craig Sanders
On Mon, Feb 16, 2004 at 08:19:20AM -0500, John Keimel wrote:
 On Mon, Feb 16, 2004 at 07:17:57AM +0100, Thomas GOIRAND wrote:
  I wish to implement mailling list management to my software for all virtual
  domains. DTC uses qmail, so it has to be compatible with it. DTC will
  generate all config file for the given mailling list manager.
  
 Ecartis (formerly known as listar) works pretty well for me, but the
 documentation for it is _still_ woefully inferior.

i still use ecartis for a few hundred lists, but have moved away from it on
general principles.  it started out with a lot of promise, but very little
serious work has been done on it in recent years.

one major problem is that it still has serious bugs with mime & message
attachments, making it useless for any list where subscribers habitually
PGP-sign their messages (any geek list is bound to have a few users that do
that).  i tried using it for the debian-melb list but had to abandon 
the attempt and switch to majordomo because of this.

craig

ps: qmail is a bad idea.  postfix is better.






Re: How do you manage Perl modules?

2004-02-07 Thread Craig Sanders
On Fri, Feb 06, 2004 at 05:41:18PM -0500, Kris Deugau wrote:
 However, I've just discovered that there's also a bad version mismatch
 between the default libdb version used by DB_File in RedHat, and the one in
 Debian (db3 in RedHat vs db1 [I think] in Debian).  I also discovered that
 this has been included as a part of the monolithic perl-5.6.1 package, and I
 *really* don't want to go anywhere near backporting that myself or using a
 third-party backport.
 
 I discovered this in trying to get the SA2.63 install (from backports.org) to
 recognize the ~40M global Bayes dbs and per-user AWL files;  instead I
 discover pairs of .dir + .pag files for AWL (which I vaguely recall are an
 artifact of db1) and SA won't open the existing bayes_* files.

sounds like you've run into a reason to upgrade to unstable.

you have three choices:

1. backport perl 5.8.x and libdb4 and all associated modules and other
   packages.

2. try to find a backports archive where someone else has done the same.

3. point sources.list at unstable and either 'apt-get install' perl and
   other packages, or 'apt-get dist-upgrade'.

choice 1 is a lot of work.

choice 2 doesn't really offer any benefits over just upgrading to 'unstable',
or upgrading certain packages to their 'unstable' versions.

choice 3 will result in the least problems, and will be better tested - there
are far more people using unstable than there are using backports of perl.
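choice 3 in concrete terms (mirror URL illustrative):

```shell
# point apt at unstable...
echo 'deb http://ftp.debian.org/debian unstable main' >> /etc/apt/sources.list
apt-get update
# ...then either cherry-pick the packages you need:
apt-get install perl
# ...or move the whole system across:
apt-get dist-upgrade
```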
  
 Is there something like cpan2rpm or cpanflute for Debian?  I'd like to
 pull in current versions of Perl modules 

dh-make-perl can fetch a package from CPAN and produce a working package that
is good enough for local use (but not polished enough to upload to debian for
re-distribution).
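e.g. (module and resulting package names are illustrative; check dh-make-perl(1) for your version's exact options):

```shell
# fetch from CPAN, debianise, and build a local .deb in one step
dh-make-perl --build --cpan Some::Module
dpkg -i libsome-module-perl_*.deb
```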

 (or even just recompile the
 stable version against different libs).

this is always an option.  it's called 'back-porting'.  download the debianised
source from unstable (along with any build dependencies) and build it.


 I *could* hack together some bits to force db3 to work by building on
 RedHat, and using alien to install... but that's just plain ugly and as
 I've already discovered it *will* break because of differences in how
 RedHat and Debian handle the core Perl install and addon modules.

really, upgrading to 'unstable' will be the least-hassle option.

'unstable' means that the entire system is in flux, that it changes constantly.  it
does not mean that the packages in it are unreliable.

craig

ps: i've been running ALL of my production servers on 'unstable' since 1995.
i upgrade them semi-regularly.  no major problems.





Re: configuring postfix to reject messages to non-existing user account

2004-02-07 Thread Craig Sanders
On Sat, Feb 07, 2004 at 04:38:58PM +, Shri Shrikumar wrote:
 I have a postfix installation and it accepts all email to specified domains
 regardless of the user part. This seems to pose a security hole in sending
 spam / viruses.
 
 Say someone sends an email to the server with the from of [EMAIL PROTECTED]
 and the to of [EMAIL PROTECTED], postfix accepts this email
 although there is not local account for [EMAIL PROTECTED] It then
 tries to bounce the message back including the full message and any
 attachments.
 
 Postfix is configured with virtual domains retrieved from an sql database.
 
 Can anyone point me in the right direction for getting postfix to reject
 messages for non-existent local accounts instead of just bouncing it?

look at the local_recipient_maps and/or relay_recipient_maps options in main.cf.

see also /usr/share/doc/postfix/examples/sample-smtpd.cf.gz:

# REJECTING MAIL FOR UNKNOWN LOCAL USERS
#
# The local_recipient_maps parameter specifies optional lookup tables
# with all names or addresses of users that are local with respect
# to $mydestination and $inet_interfaces.
#
# If this parameter is defined, then the SMTP server will reject
# mail for unknown local users. This parameter is defined by default.
#
# To turn off local recipient checking in the SMTP server, specify
# "local_recipient_maps =" (i.e. empty).
#
# The default setting assumes that you use the default Postfix local
# delivery agent for local delivery. You need to update the
# local_recipient_maps setting if:
#
# - You define $mydestination domain recipients in files other than
#   /etc/passwd, /etc/aliases, or the $virtual_alias_maps files.
#   For example, you define $mydestination domain recipients in
#   the $virtual_mailbox_maps files.
#
# - You redefine the local delivery agent in master.cf.
#
# - You redefine the local_transport setting in main.cf.
#
# - You use the luser_relay, mailbox_transport, or fallback_transport
#   feature of the Postfix local delivery agent (see sample-local.cf).
#
# Details are described in the LOCAL_RECIPIENT_README file.
#
# Beware: if the Postfix SMTP server runs chrooted, you probably have
# to access the passwd file via the proxymap service, in order to
# overcome chroot restrictions. The alternative, having a copy of
# the system passwd file in the chroot jail is just not practical.
# 
# The right-hand side of the lookup tables is conveniently ignored.
# In the left-hand side, specify a bare username, an @domain.tld
# wild-card, or specify a [EMAIL PROTECTED] address.
#
#local_recipient_maps =
#local_recipient_maps = unix:passwd.byname $alias_maps
local_recipient_maps = proxy:unix:passwd.byname $alias_maps



# REJECTING UNKNOWN RELAY USERS
#
# The relay_recipient_maps parameter specifies optional lookup tables
# with all addresses in the domains that match $relay_domains.
#
# If this parameter is defined, then the SMTP server will reject
# mail for unknown relay users. This feature is off by default.
#
# The right-hand side of the lookup tables is conveniently ignored.
# In the left-hand side, specify an @domain.tld wild-card, or specify
# a [EMAIL PROTECTED] address.
# 
#relay_recipient_maps = hash:/etc/postfix/relay_recipients



craig





Re: How do you manage Perl modules?

2004-02-07 Thread Craig Sanders
On Fri, Feb 06, 2004 at 05:41:18PM -0500, Kris Deugau wrote:
 However, I've just discovered that there's also a bad version mismatch
 between the default libdb version used by DB_File in RedHat, and the one in
 Debian (db3 in RedHat vs db1 [I think] in Debian).  I also discovered that
 this has been included as a part of the monolithic perl-5.6.1 package, and I
 *really* don't want to go anywhere near backporting that myself or using a
 third-party backport.
 
 I discovered this in trying to get the SA2.63 install (from backports.org) to
 recognize the ~40M global Bayes dbs and per-user AWL files;  instead I
 discover pairs of .dir + .pag files for AWL (which I vaguely recall are an
 artifact of db1) and SA won't open the existing bayes_* files.

sounds like you've run into a reason to upgrade to unstable.

you have three choices:

1. backport perl 5.8.x and libdb4 and all associated modules and other
   packages.

2. try to find a backports archive where someone else has done the same.

3. point sources.list at unstable and either 'apt-get install' perl and
   other packages, or 'apt-get dist-upgrade'.

choice 1 is a lot of work.

choice 2 doesn't really offer any benefits over just upgrading to 'unstable',
or upgrading certain packages to their 'unstable' versions.

choice 3 will result in the least problems, and will be better tested - there
are far more people using unstable than there are using backports of perl.
  
 Is there something like cpan2rpm or cpanflute for Debian?  I'd like to
 pull in current versions of Perl modules 

dh-make-perl can fetch a package from CPAN and produce a working package that
is good enough for local use (but not polished enough to upload to debian for
re-distribution).

 (or even just recompile the
 stable version against different libs).

this is always an option.  it's called 'back-porting'.  download the debianised
source from unstable (along with any build dependancies) and build it.


 I *could* hack together some bits to force db3 to work by building on
 RedHat, and using alien to install... but that's just plain ugly and as
 I've already discovered it *will* break because of differences in how
 RedHat and Debian handle the core Perl install and addon modules.

really, upgrading to 'unstable' will be the least-hassle option.

'unstable' means that the entire system is in flux, that it changes constantly. 
 it
does not mean that the packages in it are unreliable.

craig

ps: i've been running ALL of my production servers on 'unstable' since 1995.
i upgrade them semi-regularly.  no major problems.




Re: configuring postfix to reject messages to non-existing user account

2004-02-07 Thread Craig Sanders
On Sat, Feb 07, 2004 at 04:38:58PM +, Shri Shrikumar wrote:
 I have a postfix installation and it accepts all email to specified domains
 regardless of the user part. This seems to pose a security hole in sending
 spam / viruses.
 
 Say someone sends an email to the server with the from of [EMAIL PROTECTED]
 and the to of [EMAIL PROTECTED], postfix accepts this email
 although there is not local account for [EMAIL PROTECTED] It then
 tries to bounce the message back including the full message and any
 attachments.
 
 Postfix is configured with virtual domains retrieved from an sql database.
 
 Can anyone point me in the right direction for getting postfix to reject
 messages for non-existent local accounts instead of just bouncing it?

look at the local_recipient_maps and/or relay_recipient_maps options in main.cf.

see also /usr/share/doc/postfix/examples/sample-smtpd.cf.gz:

# REJECTING MAIL FOR UNKNOWN LOCAL USERS
#
# The local_recipient_maps parameter specifies optional lookup tables
# with all names or addresses of users that are local with respect
# to $mydestination and $inet_interfaces.
#
# If this parameter is defined, then the SMTP server will reject
# mail for unknown local users. This parameter is defined by default.
#
# To turn off local recipient checking in the SMTP server, specify
# local_recipient_maps = (i.e. empty).
#
# The default setting assumes that you use the default Postfix local
# delivery agent for local delivery. You need to update the
# local_recipient_maps setting if:
#
# - You define $mydestination domain recipients in files other than
#   /etc/passwd, /etc/aliases, or the $virtual_alias_maps files.
#   For example, you define $mydestination domain recipients in
#   the $virtual_mailbox_maps files.
#
# - You redefine the local delivery agent in master.cf.
#
# - You redefine the local_transport setting in main.cf.
#
# - You use the luser_relay, mailbox_transport, or fallback_transport
#   feature of the Postfix local delivery agent (see sample-local.cf).
#
# Details are described in the LOCAL_RECIPIENT_README file.
#
# Beware: if the Postfix SMTP server runs chrooted, you probably have
# to access the passwd file via the proxymap service, in order to
# overcome chroot restrictions. The alternative, having a copy of
# the system passwd file in the chroot jail is just not practical.
# 
# The right-hand side of the lookup tables is conveniently ignored.
# In the left-hand side, specify a bare username, an @domain.tld
# wild-card, or specify a [EMAIL PROTECTED] address.
#
#local_recipient_maps =
#local_recipient_maps = unix:passwd.byname $alias_maps
local_recipient_maps = proxy:unix:passwd.byname $alias_maps



# REJECTING UNKNOWN RELAY USERS
#
# The relay_recipient_maps parameter specifies optional lookup tables
# with all addresses in the domains that match $relay_domains.
#
# If this parameter is defined, then the SMTP server will reject
# mail for unknown relay users. This feature is off by default.
#
# The right-hand side of the lookup tables is conveniently ignored.
# In the left-hand side, specify an @domain.tld wild-card, or specify
# a [EMAIL PROTECTED] address.
# 
#relay_recipient_maps = hash:/etc/postfix/relay_recipients
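
since your virtual domains come from an sql database, the recipient check
should draw on the same source.  a sketch of what that might look like with
postfix's mysql map type (file names and table/column names below are made
up - adjust them to your own schema; i haven't tested this exact fragment):

```
# main.cf -- hypothetical sketch, not a drop-in config
virtual_maps = mysql:/etc/postfix/mysql-virtual.cf

# /etc/postfix/mysql-virtual.cf -- table/column names are examples only
hosts = localhost
user = postfix
password = secret
dbname = mail
table = virtual_users
select_field = address
where_field = address
```

with a map like that in place, an RCPT TO address that fails the lookup gets
rejected during the smtp session instead of being accepted and bounced.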



craig




Re: Why doesn't Exim ever clean out /var/spool/exim/input?

2004-01-30 Thread Craig Sanders
On Fri, Jan 30, 2004 at 03:35:33PM -0500, [EMAIL PROTECTED] wrote:
 I don't have the results after all this time. Exim beat postfix in raw
 speed of moving mail in and/or out by over 15%. 

that must be specific to your particular hardware and/or usage, because it's
contrary to every other postfix vs exim benchmark i've ever heard of.

e.g. Matthias Andree did a comprehensive benchmark comparison of postfix,
qmail, exim, and sendmail...and a followup comparison about a year later.

it seems to have vanished off the web at the moment, but is still available by
google cache...i've saved a copy of both benchmark pages at
http://siva.taz.net.au/~cas/matthias/ (vsqmail.html is the first, bench2.html
is the second).

he tested the MTAs in various configurations, and postfix came out ahead in all
of them - in one case, with postfix getting four times the throughput of exim
(16.1 msgs/second vs 3.8).

significantly, the only way that either exim or qmail could come close to
postfix's speed was to enable the softupdates option of the freebsd
filesystem, which risks losing mail if there is a crash or power-outage.
postfix doesn't have that risk because it ensures that all mail is synced to
disk before sending a 250 OK.


 However, if you want the most blazingly fast mailer, use zmailer. It's just
 not a general purpose MTA

true.

craig


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Why doesn't Exim ever clean out /var/spool/exim/input?

2004-01-30 Thread Craig Sanders
On Fri, Jan 30, 2004 at 08:38:36PM -0500, [EMAIL PROTECTED] wrote:

   However, if you want the most blazingly fast mailer, use zmailer. It's
   just not a general purpose MTA
  true.
 
 For our mailman server, all mail goes to our zmailer (dedicated) machine, and
 BOY does that mail just fly outa there! The first time we tried this, I
 thought something was wrong, since the queue was empty before we had a chance
 to look! :)

i've had similar experiences after switching large lists from sendmail to
postfix.

if you have the inclination to experiment with a working setup :-), try
building a postfix box and configuring mailman to relay through it.  my bet is
you would be pleasantly surprised at just how well postfix compares to zmailer
for that task.

my guess is that, given comparable hardware, there'd be no significant speed
advantage to zmailer over postfix...and postfix IS a general purpose MTA as
well as being fast.

craig







Re: Why doesn't Exim ever clean out /var/spool/exim/input?

2004-01-29 Thread Craig Sanders
On Thu, Jan 29, 2004 at 10:03:35AM +, Ronny Adsetts wrote:
 Craig Sanders said the following on 28/01/04 23:36:
  i can't answer your question, but here's some relevant advice for you:
 
  exim doesn't scale.  if you want performance, switch to postfix.
 
 On what do you base this conclusion?

the fact that it doesn't scale.

the original poster's system was an example.

 Several large ISP's in the UK use exim that I know of which seems to indicate
 otherwise.

several large ISPs around the world use IIS & MS SQL servers too...doesn't make
that a good idea, either.

craig





Re: Exim: Different mail retry times depending upon response from remote host...

2004-01-29 Thread Craig Sanders
On Thu, Jan 29, 2004 at 10:58:19AM -0800, Joe Emenaker wrote:
 why should there be?
  [...]

 Because, like you mentioned later in your message, not all mailers give
 proper responses. For example, I've seen a lot of 5xx codes where the verbal
 explanation is that the user is over quota.

well, that's normal (at least, it is not wrong to do that).  what to do in an
excess-quota situation is a local policy decision.  some sites choose 5xx, some
choose 4xx.

 But the *real* problem, I guess, is that I'm seeing so many 5xx's in 
 /var/spool/exim/msglog at *all*. 

you shouldn't be seeing *ANY* 5xxs in the spool at all.  5xx specifically means
DO NOT TRY AGAIN.  exim should not ever retry delivery on permanent-failure
codes (unless there is some debugging option like postfix's soft_bounce in
effect).  



 If the sender address is bogus, the bounce notification just hangs around
 forever, it seems. I'd like to be able to give bounce notifications about 4
 hours to be delivered and then, buh'bye.

ah, ok.  that's a different problem entirely.  that's not retrying a 5xx,
that's inability to deliver a bounce.

you need to configure exim to REJECT mail sent to non-existent addresses (or
which fail your anti-spam/anti-virus etc rules) immediately, rather than
accept-and-bounce.  that way it is the sending MTA's responsibility to deal
with the problem, rather than yours.

e.g. if a message comes in for [EMAIL PROTECTED], don't accept it then
find out that the user doesn't exist, and then bounce it.  it is far better to
just reject it during the smtp session with a 550 No such user response.

that way, the bounce is not your responsibility.  The sending MTA is
responsible for dealing with errors.  if the sending MTA is a virus, then it
probably does nothing - AFAIK, no viruses have bounce-handling code...but it
really doesn't matter what the sending MTA is or what it does, the key point is
that it is *NOT YOUR PROBLEM*, you have not accepted the mail and have not
accepted responsibility for delivering-or-bouncing it.

if you can't reject during the smtp session, then your best option is to
tag-and-deliver (best for spam) or just discard (best for viruses).


IIRC, exim *can* do any or all of these things, depending on how you configure
it.  probably some exim expert here can tell you how to do it.


btw, AFAIK, exim doesn't have any option to specify a different retry period
for bounce-messages.  that would be a useful feature for dealing with spam and
viruses that get through the filters.

on my own systems, i have inbound MX boxes and outbound mail relays.  the
inbound MXs do all the spam & virus filtering, and forward the mail to the
POP/IMAP box.  they have a retry period of 1 day.  it is set so low to avoid
the queue getting clogged with undeliverable spam bounces (stuff which makes it
through my access maps, but gets caught by amavisd-new/spamassassin/clamav).
the outbound relays are for users to send their mail, and they have a retry
period of 5 days.
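
in postfix terms that retry period is the maximal_queue_lifetime parameter
(a real postfix parameter; the values below are just the settings described
above, shown as a sketch):

```
# main.cf on the inbound MX boxes: give up after 1 day, so undeliverable
# spam bounces don't clog the queue
maximal_queue_lifetime = 1d

# main.cf on the outbound relays: the usual 5 day retry for user mail
#maximal_queue_lifetime = 5d
```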
 
 these sound like 5xx errors, rather than 4xx.  exim should be bouncing
 these, if the remote systems are issuing the correct error codes.if they
 aren't, there's little you can do about it.

 Except write a script, I guess. :)

you're better off not letting these bounce messages get into the queue in the
first place (i.e. prevention is better than cure).  you don't want them, they
just slow down your machine...reject unwanted mail with 5xx during the SMTP
session wherever possible.

 one possibility is that there is some error in your configuration which is
 making permanent errors be treated as temporary (4xx) errors,

 Well, I haven't tweaked our config too much... BUT it's the config 
 file from when we switched to Exim about 4 years ago, and I haven't 
 allowed Debian to overwrite it with a new one (lest we lose our mods to 
 the config file).

 So, it might be time to get a new config file and move our changes over by
 hand. But... if we're going through that much trouble, geez... I'd just
 switch to Courier.

why switch to courier-mta when you can switch to postfix? :-)

courier's other tools (maildrop, pop, sqwebmail, etc) work fine with postfix as
the MTA.

courier makes a very nice delivery system for real & virtual users.  postfix
makes a very nice MTA (better than anything else, including courier-mta).  the
combination works extremely well.

craig





Re: Why doesn't Exim ever clean out /var/spool/exim/input?

2004-01-29 Thread Craig Sanders
On Thu, Jan 29, 2004 at 04:37:07PM +0100, Thomas GOIRAND wrote:
 Not looking for a fight either, but...  ALL the MTAs? What are the results
 for qmail then? I've always heard it's the fastest...

no, postfix beats it.

qmail WAS the fastest several years ago. then postfix arrived.

craig




