Re: murphy in sbl.spamhaus.org

2004-11-26 Thread Adrian 'Dagurashibanipal' von Bidder
On Friday 26 November 2004 03.34, Stephen Frost wrote:
 * Adrian 'Dagurashibanipal' von Bidder ([EMAIL PROTECTED]) wrote:
  <plug>
  And, of course, postgrey as the very first line of defense.
  </plug>
  Coupled with the usual checking on HELO (blocking 'localhost' HELOs and
  my own IP does wonders!), SMTP protocol conformance (pipelining),
  sender (envelope) address checking.

 Things which increase the load on the remote mail servers are *bad*.
 That would include responding with temporary errors unnecessarily and
 adding unnecessary delays in communication.  pipelining by itself isn't
 necessarily terrible- adding things like 2 minute delays is bad though.

I'm happy to queue my outgoing email if the remote end uses greylisting, as 
I expect the remote site to queue my incoming mail with my greylisting.

Add to that the fact that amongst the mail senders big enough that the 
queue size matters are probably many of those ISPs with badly policed 
(DSL/cable) networks, operating the spam zombies which cause me to use 
greylisting in the first place...

About pipelining: what postfix does is enforce proper use of pipelining: the 
sender may only start pipelining requests when it has actually seen that 
postfix does support pipelining.  Regular mail servers never notice this, 
but some stupid spammers just push the request out without waiting for 
responses at all - these are rejected.
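
For reference, the relevant Postfix knobs look roughly like this (a
main.cf sketch; the access map entries are examples, substitute your
own IP):

# main.cf (sketch)
# Reject clients that pipeline commands before Postfix has announced
# ESMTP PIPELINING support:
smtpd_data_restrictions = reject_unauth_pipelining

# The HELO checks mentioned above:
smtpd_helo_required = yes
smtpd_helo_restrictions = check_helo_access hash:/etc/postfix/helo_access

# /etc/postfix/helo_access (run postmap on it):
#   localhost    REJECT
#   127.0.0.1    REJECT
#   192.0.2.1    REJECT  (your own public IP here)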

-- vbi

-- 
TODO: apt-get install signify




Re: murphy in sbl.spamhaus.org

2004-11-26 Thread Christian Storch
On Fri, 26.11.2004, 03:34, Stephen Frost wrote:
 * Adrian 'Dagurashibanipal' von Bidder ([EMAIL PROTECTED]) wrote:
 <plug>
 And, of course, postgrey as the very first line of defense.
 </plug>
 Coupled with the usual checking on HELO (blocking 'localhost' HELOs and
 my own IP does wonders!), SMTP protocol conformance (pipelining), sender
 (envelope) address checking.

 Things which increase the load on the remote mail servers are *bad*.
 That would include responding with temporary errors unnecessarily and
 adding unnecessary delays in communication.  pipelining by itself isn't
 necessarily terrible- adding things like 2 minute delays is bad though.

What about greylisting depending on the results of e.g. SpamAssassin?
Greylisting would only become active above a certain SA score.
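
A minimal sketch of how that could work (score_message() is a
hypothetical stand-in for a real spamd client, and the greylist
"database" is just an in-memory set):

# Sketch: greylist only messages whose SpamAssassin score crosses a
# threshold.
SA_GREYLIST_THRESHOLD = 5.0
_seen_triplets = set()

def score_message(message):
    # Hypothetical: ask spamd for a score.  Stubbed out for the sketch.
    return 7.2

def decide(message, client_ip, sender, recipient):
    score = score_message(message)
    if score < SA_GREYLIST_THRESHOLD:
        return "250 OK"              # looks like ham: accept outright
    triplet = (client_ip, sender, recipient)
    if triplet in _seen_triplets:
        return "250 OK"              # it retried after the tempfail
    _seen_triplets.add(triplet)
    # Note: the whole message has already been transferred at this
    # point, which is exactly the objection raised below.
    return "450 4.7.1 Greylisted, please retry later"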

Christian





Re: murphy in sbl.spamhaus.org

2004-11-26 Thread Florian Weimer
* Christian Storch:

 Things which increase the load on the remote mail servers are *bad*.
 That would include responding with temporary errors unnecessarily and
 adding unnecessary delays in communication.  pipelining by itself isn't
 necessarily terrible- adding things like 2 minute delays is bad though.

 What about greylisting depending on the results of e.g. SpamAssassin?
 Greylisting would only become active above a certain SA score.

This is very impolite because it requires that the entire message is
transferred at least twice.





Re: murphy in sbl.spamhaus.org

2004-11-26 Thread David Schmitt
On Fri, Nov 26, 2004 at 10:04:38AM +0100, Christian Storch wrote:
 What about greylisting depending on the results of e.g. SpamAssassin?
 Greylisting would only become active above a certain SA score.

Use as many RBLs as you like instead of the SA score, but use them not
for blocking but for activating greylisting, teergrubing and
non-pipelining/sync checks.  All these actions effectively block
ratware, but by using the RBLs you can avoid hitting real MTAs with the
delays.
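
A minimal sketch of that RBL gate in Python (the zone list is
illustrative; by the usual DNSBL convention any A record means
"listed" and NXDOMAIN means "not listed"):

import socket

RBL_ZONES = ["sbl.spamhaus.org"]     # add as many zones as you like

def listed_count(client_ip):
    # Reverse the IPv4 octets and query each blocklist zone.
    reversed_ip = ".".join(reversed(client_ip.split(".")))
    hits = 0
    for zone in RBL_ZONES:
        try:
            socket.gethostbyname("%s.%s" % (reversed_ip, zone))
            hits += 1                # listed in this zone
        except socket.gaierror:
            pass                     # not listed in this zone
    return hits

def should_greylist(client_ip):
    # Only known-bad neighbourhoods get the delays; real MTAs on
    # clean IPs never see them.
    return listed_count(client_ip) >= 1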


Regards,

David
-- 
  * Customer: My palmtop won't turn on.
  * Tech Support: Did the battery run out, maybe?
  * Customer: No, it doesn't use batteries. It's Windows powered.
-- http://www.rinkworks.com/stupid/cs_power.shtml





Re: murphy in sbl.spamhaus.org

2004-11-26 Thread George Georgalis
On Fri, Nov 26, 2004 at 10:57:31AM +0100, Florian Weimer wrote:
* Christian Storch:

 Things which increase the load on the remote mail servers are *bad*.
 That would include responding with temporary errors unnecessarily and
 adding unnecessary delays in communication.  pipelining by itself isn't
 necessarily terrible- adding things like 2 minute delays is bad though.

 What about greylisting depending on the results of e.g. SpamAssassin?
 Greylisting would only become active above a certain SA score.

This is very impolite because it requires that the entire message is
transferred at least twice.

I thought greylisting closes the SMTP connection with a temporary
failure immediately for unfamiliar hosts. Then they can transmit the
message on a second attempt, but since spam relays don't queue, they
won't try again.
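
The core rule is small enough to sketch (an in-memory dict stands in
for a real database, and the 5-minute retry window is typical rather
than canonical):

import time

MIN_RETRY_SECS = 300
_first_seen = {}

def check(client_ip, sender, recipient):
    # Tempfail a (client IP, sender, recipient) triplet on first
    # sight; accept once the sender has waited and retried.
    triplet = (client_ip, sender, recipient)
    now = time.time()
    if triplet not in _first_seen:
        _first_seen[triplet] = now
        return "450 4.7.1 Greylisted, try again later"
    if now - _first_seen[triplet] < MIN_RETRY_SECS:
        return "450 4.7.1 Retried too soon, try again later"
    return "250 OK"    # a real MTA queued and retried; ratware won't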

// George


-- 
George Georgalis, systems architect, administrator Linux BSD IXOYE
http://galis.org/george/ cell:646-331-2027 mailto:[EMAIL PROTECTED]





Re: murphy in sbl.spamhaus.org

2004-11-26 Thread Mike Gerber
George Georgalis wrote:
 On Fri, Nov 26, 2004 at 10:57:31AM +0100, Florian Weimer wrote:
 * Christian Storch:
  What about greylisting depending on the results of e.g. SpamAssassin?
  Greylisting would only become active above a certain SA score.
 This is very impolite because it requires that the entire message is
 transferred at least twice.
 I thought greylisting closes the SMTP connection with a temporary
 failure immediately for unfamiliar hosts. Then they can transmit the
 message on a second attempt, but since spam relays don't queue, they
 won't try again.

Christian proposed greylisting based on SA scores - that requires the
message to be transmitted before it is rejected the first time (with
a temporary error), and then to be transmitted again.

It's a matter of time until the spammers begin to implement queues in
their spamware (on some infected Windows zombies). Greylisting will be
obsolete then.




zip sarge's package vulnerable to CAN-2004-1010

2004-11-26 Thread Otavio Salvador
Hello,

CAN-2004-1010 was fixed in zip 2.30-8, but the current sarge version
is still vulnerable. This package needs to be included in sarge to
fix it.

Thanks in advance,
Otavio

-- 
O T A V I O   S A L V A D O R
-
 E-mail: [EMAIL PROTECTED]  UIN: 5906116
 GNU/Linux User: 239058 GPG ID: 49A5F855
 Home Page: http://www.freedom.ind.br/otavio
-
Microsoft gives you Windows ... Linux gives
 you the whole house.




Re: murphy in sbl.spamhaus.org

2004-11-26 Thread Stephen Frost
* Adrian 'Dagurashibanipal' von Bidder ([EMAIL PROTECTED]) wrote:
 On Friday 26 November 2004 03.34, Stephen Frost wrote:
  * Adrian 'Dagurashibanipal' von Bidder ([EMAIL PROTECTED]) wrote:
   <plug>
   And, of course, postgrey as the very first line of defense.
   </plug>
   Coupled with the usual checking on HELO (blocking 'localhost' HELOs and
   my own IP does wonders!), SMTP protocol conformance (pipelining),
   sender (envelope) address checking.
 
  Things which increase the load on the remote mail servers are *bad*.
  That would include responding with temporary errors unnecessarily and
  adding unnecessary delays in communication.  pipelining by itself isn't
  necessarily terrible- adding things like 2 minute delays is bad though.
 
 I'm happy to queue my outgoing email if the remote end uses greylisting, as 
 I expect the remote site to queue my incoming mail with my greylisting.

That's nice, obviously you don't handle much mail.

 Add to that the fact that amongst the mail senders big enough that the 
 queue size matters are probably many of those ISPs with badly policed 
 (DSL/cable) networks, operating the spam zombies which cause me to use 
 greylisting in the first place...

That's a *terrible* and just plain stupid assumption.  Queue size makes
a difference to me, both on a machine I run for some friends and in the
part-time work that I do for a small ISP (which, hey, doesn't even
provide DSL or cable modem service).  Queue size matters to universities
who are draconian about their policing, and I'm sure it matters to the
'good' ISPs too.

Let me tell you that if you use that greylisting crap against the
servers that *I* run your mail will get dumped into a secondary queue
which is processed at a much slower rate.  I won't slow things down for
others because of a few idiots who can't figure out how to configure
their mail servers.

 About pipelining: what postfix does is enforce proper use of pipelining: the 
 sender may only start pipelining requests when it has actually seen that 
 postfix does support pipelining.  Regular mail servers never notice this, 
 but some stupid spammers just push the request out without waiting for 
 responses at all - these are rejected.

That'd be why I said that pipelining isn't really an issue but adding
in random unnecessary delays is bad, which is something that's been
advocated in a number of places and ends up increasing the load on my
mail servers.

Stephen




Re: zip sarge's package vulnerable to CAN-2004-1010

2004-11-26 Thread Colin Watson
On Fri, Nov 26, 2004 at 05:21:03PM -0200, Otavio Salvador wrote:
 CAN-2004-1010 was fixed in zip 2.30-8, but the current sarge version
 is still vulnerable. This package needs to be included in sarge to
 fix it.

zip 2.30-8 is already in sarge:

   zip | 2.30-8 |   testing | source, alpha, arm, hppa, i386, ia64, m68k, mips, mipsel, powerpc, s390, sparc
   zip | 2.30-8 |  unstable | source, alpha, arm, hppa, i386, ia64, m68k, mips, mipsel, powerpc, s390, sparc

Cheers,

-- 
Colin Watson   [EMAIL PROTECTED]





Re: zip sarge's package vulnerable to CAN-2004-1010

2004-11-26 Thread Steve Langasek
On Fri, Nov 26, 2004 at 05:21:03PM -0200, Otavio Salvador wrote:
 CAN-2004-1010 was fixed in zip 2.30-8, but the current sarge version
 is still vulnerable. This package needs to be included in sarge to
 fix it.

It already has been.


-- 
Steve Langasek
postmodern programmer




Re: murphy in sbl.spamhaus.org

2004-11-26 Thread Stephen Gran
This one time, at band camp, Stephen Frost said:
 * Adrian 'Dagurashibanipal' von Bidder ([EMAIL PROTECTED]) wrote:
  On Friday 26 November 2004 03.34, Stephen Frost wrote:
   * Adrian 'Dagurashibanipal' von Bidder ([EMAIL PROTECTED]) wrote:

And, of course, postgrey as the very first line of defense.
  
   Things which increase the load on the remote mail servers are *bad*.
   That would include responding with temporary errors unnecessarily and
   adding unnecessary delays in communication.

Things which slow down my MTA are equally bad.

  Add to that the fact that amongst the mail senders big enough that the 
  queue size matters are probably many of those ISPs with badly policed 
  (DSL/cable) networks, operating the spam zombies which cause me to use 
  greylisting in the first place...
 
 That's a *terrible* and just plain stupid assumption.  Queue size makes
 a difference to me, both on a machine I run for some friends and in the
 part-time work that I do for a small ISP (which, hey, doesn't even
 provide DSL or cable modem service).  Queue size matters to universities
 who are draconian about their policing, and I'm sure it matters to the
 'good' ISPs too.

Of course queue size matters.  I don't think that's what he's saying
though.  He's saying that those organizations that are that big are also
a source of a lot of the spam flying about.  Not 100% true, but there is
some correlation.

A sensible greylisting scheme will auto-whitelist a sending IP after
so many whitelisted entries (successful retries) - the only point of
greylisting is that we know that the remote end won't retry in most cases.
Once it's been shown that they do retry, why bother greylisting them
anymore?

If you do implement this sort of scheme, any extra load you place on the
remote end is transient, and will go away shortly, never to be repeated.
It's not really a large price to pay for a pretty effective tool.
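
A sketch of that auto-whitelisting logic (in-memory counts for
illustration; in practice you'd persist them, and postgrey's
--auto-whitelist-clients option works along these lines):

AUTO_WHITELIST_AFTER = 5     # successful retries before exemption
_retry_counts = {}
_whitelisted = set()

def record_successful_retry(client_ip):
    _retry_counts[client_ip] = _retry_counts.get(client_ip, 0) + 1
    if _retry_counts[client_ip] >= AUTO_WHITELIST_AFTER:
        _whitelisted.add(client_ip)   # proven to queue: leave it alone

def exempt_from_greylisting(client_ip):
    return client_ip in _whitelisted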

 Let me tell you that if you use that greylisting crap against the
 servers that *I* run your mail will get dumped into a secondary queue
 which is processed at a much slower rate.  I won't slow things down for
 others because of a few idiots who can't figure out how to configure
 their mail servers.

Take a deep breath, and count to ten.  

I handle some fairly large mail installations at work, and greylisting
has been of tremendous benefit to us.  If it's done sensibly (as above)
it does transiently increase load on remote MTA's for a couple of days
until the common sending IPs have been whitelisted, and then it settles
back down.

  About pipelining: what postfix does is enforce proper use of
  pipelining: the sender may only start pipelining requests when it has
  actually seen that postfix does support pipelining.  Regular mail
  servers never notice this, but some stupid spammers just push the
  request out without waiting for responses at all - these are rejected.
 
 That'd be why I said that pipelining isn't really an issue but adding
 in random unnecessary delays is bad, which is something that's been
 advocated in a number of places and ends up increasing the load on my
 mail servers.

Introducing delays is usually only advocated for fairly egregious and
obvious spam-like tactics (HELO as my IP/hostname, etc).  If you're
doing that, then you deserve to be a little overloaded.  If you're
running a reasonably sensible MTA, then you should never trip those
kinds of checks.  If people are delaying a sensible MTA, then they
deserve to be LART'ed for doing something silly, but the practice itself
isn't unreasonable.
-- 
 -
|   ,''`.Stephen Gran |
|  : :' :[EMAIL PROTECTED] |
|  `. `'Debian user, admin, and developer |
|`- http://www.debian.org |
 -




bad md5's on ftp.us.debian.org ?

2004-11-26 Thread hanasaki
Below are the errors reported by apt-get update.  Is this correct?
Could someone please explain?

Thanks.
=== 16:35 CST 2004-11-26
Failed to fetch http://ftp.us.debian.org/debian/dists/sarge/main/binary-i386/Packages.gz  MD5Sum mismatch
Failed to fetch http://ftp.us.debian.org/debian/dists/sarge/main/source/Sources.gz  MD5Sum mismatch
Failed to fetch http://ftp.us.debian.org/debian/dists/testing/main/binary-i386/Packages.gz  MD5Sum mismatch
Failed to fetch http://ftp.us.debian.org/debian/dists/testing/main/source/Sources.gz  MD5Sum mismatch
Failed to fetch http://ftp.us.debian.org/debian/dists/unstable/main/binary-i386/Packages.gz  MD5Sum mismatch
Failed to fetch http://ftp.us.debian.org/debian/dists/unstable/non-free/binary-i386/Packages.gz  MD5Sum mismatch



Serious problem after tetex security update

2004-11-26 Thread Andreas Goesele
Hi!

After the last security update with libkpathsea3 and tetex-bin my
LaTeX installation doesn't work any more. When I try to compile a
LaTeX file I get:

I can't find the format file `latex.fmt'!

What can I do to get a working LaTeX installation back? I urgently
need it!

Thanks a lot in advance!

Andreas Goesele
-- 
Omnis enim res, quae dando non deficit, dum habetur et non datur,
nondum habetur, quomodo habenda est.
  Augustinus, De doctrina christiana





Re: Serious problem after tetex security update

2004-11-26 Thread Andreas Goesele
Andreas Goesele [EMAIL PROTECTED] writes:

 After the last security update with libkpathsea3 and tetex-bin my
 LaTeX installation doesn't work any more. When I try to compile a
 LaTeX file I get:

 I can't find the format file `latex.fmt'!

 What can I do to get a working LaTeX installation back? I urgently
 need it!

I found the solution. There is a bug in the new package:

/usr/share/texmf/web2c does not link to /var/lib/texmf/web2c (as it
should) but to /var/lib/texmf/web2 (note the missing c). As a result
the link wasn't even created in my case. (There was no
/usr/share/texmf/web2c after the security update.)

Is it enough to report that here, or should I report it somewhere else
too?

Andreas Goesele

-- 
Omnis enim res, quae dando non deficit, dum habetur et non datur,
nondum habetur, quomodo habenda est.
  Augustinus, De doctrina christiana





Re: Serious problem after tetex security update

2004-11-26 Thread Henrique de Moraes Holschuh
On Sat, 27 Nov 2004, Andreas Goesele wrote:
 Andreas Goesele [EMAIL PROTECTED] writes:
  After the last security update with libkpathsea3 and tetex-bin my
  LaTeX installation doesn't work any more. When I try to compile a
  LaTeX file I get:
 
  I can't find the format file `latex.fmt'!
 
  What can I do to get a working LaTeX installation back? I urgently
  need it!
 
 I found the solution. There is a bug in the new package:
 
 /usr/share/texmf/web2c does not link to /var/lib/texmf/web2c (as it
 should) but to /var/lib/texmf/web2 (note the missing c). As a result
 the link wasn't even created in my case. (There was no
 /usr/share/texmf/web2c after the security update.)
 
 Is it enough to report that here, or should I report it somewhere else
 too?

Please use the reportbug tool to file a bug.

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Serious problem after tetex security update

2004-11-26 Thread s. keeling
Incoming from Andreas Goesele:
 Andreas Goesele [EMAIL PROTECTED] writes:
 
  After the last security update with libkpathsea3 and tetex-bin my
  LaTeX installation doesn't work any more. When I try to compile a
  LaTeX file I get:
 
  I can't find the format file `latex.fmt'!
 
  What can I do to get a working LaTeX installation back? I urgently
  need it!
 
 I found the solution. There is a bug in the new package:
 
 /usr/share/texmf/web2c does not link to /var/lib/texmf/web2c (as it

Odd.  It worked for me (though I haven't tried any LaTeX commands):

(0) keeling /home/keeling/.mozilla/plugins_ ls -al /usr/share/texmf/web2c /var/lib/texmf/web2c /var/lib/texmf/web2
ls: /var/lib/texmf/web2: No such file or directory
lrwxrwxrwx   1 root  root       20 Nov 25 11:06 /usr/share/texmf/web2c -> /var/lib/texmf/web2c/

/var/lib/texmf/web2c:
total 14315
drwxr-xr-x   2 root  root     3072 Nov 25 11:07 ./
drwxr-xr-x   3 root  root     1024 Nov 26 06:30 ../
-rw-r--r--   1 root  root     5320 Nov 24 01:55 amiga-pl.tcx
-rw-r--r--   1 root  root   405356 Nov 25 11:07 amstex.fmt
-rw-r--r--   1 root  root     3064 Nov 25 11:07 amstex.log
...


-- 
Any technology distinguishable from magic is insufficiently advanced.
(*)http://www.spots.ab.ca/~keeling  Please don't Cc: me.
- -





Re: Serious problem after tetex security update

2004-11-26 Thread Christoph Moench-Tegeder
## Andreas Goesele ([EMAIL PROTECTED]):

  After the last security update with libkpathsea3 and tetex-bin my
  LaTeX installation doesn't work any more. When I try to compile a
  LaTeX file I get:
  I can't find the format file `latex.fmt'!
  What can I do to get a working LaTeX installation back? I urgently
  need it!
 I found the solution. There is a bug in the new package:
 /usr/share/texmf/web2c does not link to /var/lib/texmf/web2c (as it
 should) but to /var/lib/texmf/web2 (note the missing c). As a result
 the link wasn't even created in my case. (There was no
 /usr/share/texmf/web2c after the security update.)

Just checked, everything fine here (using tetex-bin 1.0.7+20011202-7.3).

Regards,
Christoph

-- 
Spare Space





Re: murphy in sbl.spamhaus.org

2004-11-26 Thread Stephen Frost
* Stephen Gran ([EMAIL PROTECTED]) wrote:
 This one time, at band camp, Stephen Frost said:
  That's a *terrible* and just plain stupid assumption.  Queue size makes
  a difference to me, both on a machine I run for some friends and in the
  part-time work that I do for a small ISP (which, hey, doesn't even
  provide DSL or cable modem service).  Queue size matters to universities
  who are draconian about their policing, and I'm sure it matters to the
  'good' ISPs too.
 
 Of course queue size matters.  I don't think that's what he's saying
 though.  He's saying that those organizations that are that big are also
 a source of a lot of the spam flying about.  Not 100% true, but there is
 some correlation.

The problem is that it's probably not even 20% true.  There really isn't
any correlation.  There are lots of good ISPs and other service
providers who are big enough that queue size matters to them but which
aren't sources of spam.

 A sensible greylisting scheme will auto-whitelist a sending IP after
 so many whitelisted entries (successful retries) - the only point of
 greylisting is that we know that the remote end won't retry in most cases.
 Once it's been shown that they do retry, why bother greylisting them
 anymore?
 
 If you do implement this sort of scheme, any extra load you place on the
 remote end is transient, and will go away shortly, never to be repeated.
 It's not really a large price to pay for a pretty effective tool.

A system which didn't increase the load on the remote mail servers would
be better.  Having whitelisting w/ greylisting is better but I still
don't like it.  What about secondary MX's?  How is the greylist going to
work with those, a separate list, or will they be unified?  Will it
correctly pick up on and whitelist a server that goes for a secondary MX
after getting a temporary error at the first?  Of course, I've run into
people who have had secondary MX's configured incorrectly to the point
where they drop mail, or refuse to forward the message on, etc.  You can
understand that I'd have my doubts about the ability of people to
implement a scheme such as this.

  Let me tell you that if you use that greylisting crap against the
  servers that *I* run your mail will get dumped into a secondary queue
  which is processed at a much slower rate.  I won't slow things down for
  others because of a few idiots who can't figure out how to configure
  their mail servers.
 
 Take a deep breath, and count to ten.  

Hey, it's how my mail servers are configured- if you're too slow to
respond or the mail doesn't leave the main queue fast enough for
whatever reason it'll get dumped into a slower queue that's run less
often.  It wasn't a threat. :)

 I handle some fairly large mail installations at work, and greylisting
 has been of tremendous benefit to us.  If it's done sensibly (as above)
 it does transiently increase load on remote MTA's for a couple of days
 until the common sending IPs have been whitelisted, and then it settles
 back down.

Perhaps it would have made sense to look at your logs and whitelist
those hosts ahead of time.  Sounds like you know what you're doing- that
wouldn't be hard.

   About pipelining: what postfix does is enforce proper use of
   pipelining: the sender may only start pipelining requests when it has
   actually seen that postfix does support pipelining.  Regular mail
   servers never notice this, but some stupid spammers just push the
   request out without waiting for responses at all - these are rejected.
  
  That'd be why I said that pipelining isn't really an issue but adding
  in random unnecessary delays is bad, which is something that's been
  advocated in a number of places and ends up increasing the load on my
  mail servers.
 
 Introducing delays is usually only advocated for fairly egregious and
 obvious spam-like tactics (HELO as my IP/hostname, etc).  If you're
 doing that, then you deserve to be a little overloaded.  If you're
 running a reasonably sensible MTA, then you should never trip those
 kinds of checks.  If people are delaying a sensible MTA, then they
 deserve to be LART'ed for doing something silly, but the practice itself
 isn't unreasonable.

Unfortunately I had someone complain at me that my mail servers kept
going to their secondary MX, turns out it was because he had a forced 2
minute delay before responding to HELO and my mail server wasn't waiting
around that long.  He had it because some website (the mailer was exim, 
iirc) was advocating it and encouraging people to do it for all incoming
connections.  The problem is that some people *are* silly, or stupid,
and schemes like this often end up misused, misconfigured, and
implemented wrong.  That's why I don't like them.

Stephen




Re: Serious problem after tetex security update

2004-11-26 Thread Andreas Goesele
s. keeling [EMAIL PROTECTED] writes:

 Incoming from Andreas Goesele:
 I found the solution. There is a bug in the new package:
 
 /usr/share/texmf/web2c does not link to /var/lib/texmf/web2c (as it

 Odd.  It worked for me (though I haven't tried any LaTeX commands):

 (0) keeling /home/keeling/.mozilla/plugins_ ls -al /usr/share/texmf/web2c /var/lib/texmf/web2c /var/lib/texmf/web2
 ls: /var/lib/texmf/web2: No such file or directory
 lrwxrwxrwx   1 root  root   20 Nov 25 11:06 /usr/share/texmf/web2c -> /var/lib/texmf/web2c/

This link wasn't created in my case. But maybe I was wrong about the
reason. Just to test it I reinstalled the new package and this time
there was no problem.

The wrong link to the non-existing /var/lib/texmf/web2 in the package
might be irrelevant, as the right link should be created by the
postinst script. (At least, so it seems to me.)

No idea why the link wasn't created when I first updated the
package. But as long as nobody else gets the same problem, I assume
there is no bug ...

Andreas Goesele

-- 
Omnis enim res, quae dando non deficit, dum habetur et non datur,
nondum habetur, quomodo habenda est.
  Augustinus, De doctrina christiana





Re: murphy in sbl.spamhaus.org

2004-11-26 Thread Stephen Gran
This one time, at band camp, Stephen Frost said:
 * Stephen Gran ([EMAIL PROTECTED]) wrote:
  A sensible greylisting scheme will auto-whitelist a sending IP after
  so many whitelisted entries (successful retries) - the only point of
  greylisting is that we know that the remote end won't retry in most cases.
  Once it's been shown that they do retry, why bother greylisting them
  anymore?
  
  If you do implement this sort of scheme, any extra load you place on the
  remote end is transient, and will go away shortly, never to be repeated.
  It's not really a large price to pay for a pretty effective tool.
 
 A system which didn't increase the load on the remote mail servers would
 be better.  Having whitelisting w/ greylisting is better but I still
 don't like it.  What about secondary MX's?  How is the greylist going to
 work with those, a separate list, or will they be unified?  Will it
 correctly pick up on and whitelist a server that goes for a secondary MX
 after getting a temporary error at the first?  Of course, I've run into
 people who have had secondary MX's configured incorrectly to the point
 where they drop mail, or refuse to forward the message on, etc.  You can
 understand that I'd have my doubts about the ability of people to
 implement a scheme such as this.

I wouldn't really advocate using secondary MX's these days, and
certainly not ones not under your control.  If you need redundancy, I'd
use a load balancer with a server farm behind it.  They can all share
access to the same database for greylisting purposes that way, and you
get better failover.  Secondary MX's tend to just be spam targets and
rarely serve any real use.  If you must have a separate secondary MX
(you don't have the hardware resources, or whatever) I would really
strongly advocate that you find a way to get a list of valid users from
the primary onto the secondary - all the bounces when the primary comes
back up and spam starts failing are part of the problem.

  I handle some fairly large mail installations at work, and greylisting
  has been of tremendous benefit to us.  If it's done sensibly (as above)
  it does transiently increase load on remote MTA's for a couple of days
  until the common sending IPs have been whitelisted, and then it settles
  back down.
 
 Perhaps it would have made sense to look at your logs and whitelist
 those hosts ahead of time.  Sounds like you know what you're doing- that
 wouldn't be hard.

We did pre-whitelist about 50 or so known good hosts, determined the
old-fashioned way, with grep, sed, sort, and uniq -c (a sketch of that
log mining follows below).  As always, we missed several, but they got
added automagically.  This is one of those situations where it is hard
to always know programmatically what's going to come up, and that's
where the importance of an auto whitelister comes in.  For instance, we
had almost no hits from Southwest airlines' MX before the switchover,
and then they opened a hub in Philadelphia.  We started getting a ton
of emails from them, since they're so cheap - they're the US answer to
Ryanair.  They also happen to use VERP for their notification emails,
so _all_ of their mail was delayed until the auto whitelister added
them after a day or two.
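
A sketch of that log mining in Python (hypothetical log path; the
regex assumes Postfix-style "client=host[ip]" lines, adjust for your
MTA's format):

import re
from collections import Counter

CLIENT_RE = re.compile(r"client=[^\[]*\[([0-9.]+)\]")

def top_senders(logfile, n=50):
    # Count connecting client IPs and return the n busiest.
    counts = Counter()
    with open(logfile) as fh:
        for line in fh:
            m = CLIENT_RE.search(line)
            if m:
                counts[m.group(1)] += 1
    return counts.most_common(n)

# e.g. top_senders("/var/log/mail.log") might give
# [("203.0.113.5", 1234), ("198.51.100.7", 988), ...]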

  Introducing delays is usually only advocated for fairly egregious and
  obvious spam-like tactics (HELO as my IP/hostname, etc).  If you're
  doing that, then you deserve to be a little overloaded.  If you're
  running a reasonably sensible MTA, then you should never trip those
  kinds of checks.  If people are delaying a sensible MTA, then they
  deserve to be LART'ed for doing something silly, but the practice itself
  isn't unreasonable.
 
 Unfortunately I had someone complain at me that my mail servers kept
 going to their secondary MX, turns out it was because he had a forced 2
 minute delay before responding to HELO and my mail server wasn't waiting
 around that long.  He had it because some website (the mailer was exim, 
 iirc) was advocating it and encouraging people to do it for all incoming
 connections.  

That is just stupid, and he probably shouldn't be bothering to try and
run a mail server, much less complain that people are hitting his
secondary.  There are other good reasons for introducing delays, and
they are effective, but the above idea is just retarded.

 The problem is that some people *are* silly, or stupid,
 and schemes like this often end up misused, misconfigured, and
 implemented wrong.  That's why I don't like them.

I can't disagree there.

I guess what I'm trying to say is, I understand your misgivings, because
people implementing most anything can manage to do it in a really stupid,
painful and harmful way.  That doesn't necessarily mean the idea is
unsound.  Greylisting is, itself, a one-trick pony.  It will lose its
effectiveness whenever spammers get around to implementing queues on
their zombie clients.  OTOH, admin'ing an MTA these days is an arms race,
and a new weapon can be a lot of help.  

Take care,
--