RE: Better phish detection

2012-03-16 Thread Aaron Bennett
-Original Message-
From: David F. Skoll [mailto:d...@roaringpenguin.com] 
Sent: Monday, March 12, 2012 12:49 PM
To: users@spamassassin.apache.org
Subject: Re: Better phish detection

Hi,

I've been following this thread... not sure how many of you are aware of this 
project:

http://code.google.com/p/anti-phishing-email-reply/

We use the phishing address list and it does catch a few things.  We don't yet 
use the phishing URL list, but it looks like it might help.

Naturally, this list is reactive, but if enough people used it and contributed 
to it, the results might be pretty good.

Regards,

David.
---

We use it here; I've got a little Python script that parses recent entries out 
of that project and builds a simple Postfix static map to block mail addressed 
to them.  I'm happy to share if anyone's interested.

- Aaron Bennett

Manager, Systems Administration
Clark University ITS



preventing authenticated smtp users from triggering PBL

2010-12-17 Thread Aaron Bennett
Hi,

I've got an issue where users off-campus who are doing authenticated SMTP/TLS 
from home networks are having their mail hit by the PBL.  I have 
trusted_networks set to include the incoming relay,  but still the PBL hits it 
as follows:

Received: from cmail.clarku.edu (muse.clarku.edu [140.232.1.151])
by mothra.clarku.edu (Postfix) with ESMTP id D4FC2684FEA
for re...@clarku.edu; Tue,  7 Dec 2010 00:11:24 -0500 (EST)
Received: from SENDERMACHINE (macaddress.hsd1.ma.comcast.net
[98.216.185.77])
by cmail.clarku.edu (Postfix) with ESMTP id 82F21901E48
for re...@clarku.edu; Tue,  7 Dec 2010 00:11:24 -0500 (EST)
From: USER NAME sen...@clarku.edu

Even though internal_networks and trusted_networks are set to 140.232.0.0/16, 
the message still triggers the PBL rule.  Given that (unless there's a trojaned 
machine or whatever) I must trust email that comes in over authenticated 
SMTP/TLS through the 'cmail' host, how can I prevent it from hitting the PBL?
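
(For reference, here is a minimal local.cf sketch of the settings involved.
The addresses are taken from the quoted headers; msa_networks is an extra
knob, marking a host as a mail submission agent, that is sometimes relevant
in this situation, not something confirmed as the fix in this thread:)

trusted_networks  140.232.0.0/16
internal_networks 140.232.0.0/16
msa_networks      140.232.1.151   # the cmail/muse submission host from the headers above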

Thanks,

Aaron  

--- 
Aaron Bennett
Manager of Systems Administration
Clark University ITS



RE: preventing authenticated smtp users from triggering PBL

2010-12-17 Thread Aaron Bennett

 -Original Message-
 
 Based on the headers you included, there's nothing indicating the sender
 was authenticated.  Are you using the following in postfix?
 
 smtpd_sasl_authenticated_header = yes


No, I'm not -- that's a good idea.  If I turn that on, can I write a rule based 
on it, or will SA pick up on it automatically?
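
(A rule keyed on that header is easy to write; here is a sketch, with an
arbitrary name and score.  Note that it matches the annotation anywhere in
the Received chain, so it is only a weak signal, and, as I understand it,
recent SpamAssassin releases also parse Postfix's "(Authenticated sender:
...)" annotation themselves when building the relay trust path, which may
make a manual rule unnecessary.)

header   LOCAL_AUTH_SENDER   Received =~ /\(Authenticated sender: [^)]*\)/
describe LOCAL_AUTH_SENDER   Some relay in the path recorded an SMTP AUTH login
score    LOCAL_AUTH_SENDER   -0.1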

Thanks,

Aaron


RE: preventing authenticated smtp users from triggering PBL

2010-12-17 Thread Aaron Bennett
 -Original Message-
 From: Ted Mittelstaedt [mailto:t...@ipinc.net]
 Sent: Friday, December 17, 2010 12:20 PM
 To: users@spamassassin.apache.org
 Subject: Re: preventing authenticated smtp users from triggering PBL
 
 why are you using authenticated SMTP from trusted networks?
 
 The whole point of auth smtp is to come from UN-trusted networks.
 


I think you are misunderstanding.  I may be on an untrusted network, but I 
want to send email through a host on a trusted network.  By authenticating, I 
can.  It was the trusted host which authenticated me, so SA needs to take 
into consideration that I was authenticated by a trusted host before applying 
the PBL rule to the address the mail originated from.




Re: sane values for size of bayes_token database in MySQL

2010-06-29 Thread Aaron Bennett

On 06/29/2010 11:00 AM, Kris Deugau wrote:

Aaron Bennett wrote:
   


1) Are you supposed to have a global Bayes DB?

2) How many users do you have?

3) If the answer to 1) is yes, did you set bayes_sql_override_username?

If the answer to 1) is no, you're probably not running Bayes expiry for
every user, so their individual sub-databases are growing without bound.
   Better to re-enable auto-expiry (it's primarily a concern with global
databases, particularly with DB_File).
   



We are using amavis-maia so every bayes transaction is made under the 
amavis user -- is that the same as a global database?
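
(If a single site-wide database is the goal, the usual knob is
bayes_sql_override_username; a one-line sketch, where "amavis" stands for
whatever account amavisd/maia actually runs SpamAssassin as:)

bayes_sql_override_username amavis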


sane values for size of bayes_token database in MySQL

2010-06-28 Thread Aaron Bennett

I'm sort of pulling at straws here, but I'm reading the manpage for
sa-learn and it says that sa-learn will try to expire bayes tokens
according to this:

- the number of tokens in the DB is > 100,000
- the number of tokens in the DB is > bayes_expiry_max_db_size
- there is at least a 12 hr difference between the oldest and
newest token atimes


I haven't changed bayes_expiry_max_db_size and I run sa-learn
--force-expire every night via cron and I have bayes_auto_expire set to 0.

That said, my bayes_token database is huge:

+-------------------+--------+------------+-----------+----------------+-------------+--------------+
| Name              | Engine | Row_format | Rows      | Avg_row_length | Data_length | Index_length |
+-------------------+--------+------------+-----------+----------------+-------------+--------------+
| bayes_expire      | InnoDB | Fixed      |         1 |          16384 |       16384 |        16384 |
| bayes_global_vars | InnoDB | Dynamic    |         1 |          16384 |       16384 |            0 |
| bayes_seen        | InnoDB | Dynamic    |  90902320 |            175 | 15980298240 |            0 |
| bayes_token       | InnoDB | Fixed      | 596422823 |             83 | 49507483648 |  40946384896 |
+-------------------+--------+------------+-----------+----------------+-------------+--------------+


particularly bayes_token, which is almost 50 GB and has WAY more than
150,000 rows.

Is this sane?
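
(For reference, the two knobs involved look roughly like this in local.cf; a
minimal sketch showing the shipped defaults rather than the overrides
described above:)

bayes_auto_expire        1        # opportunistic expiry during normal runs (default)
bayes_expiry_max_db_size 150000   # target token count after expiry (default)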




Re: new kind of spam (apparently from mailer daemon)

2010-04-26 Thread Aaron Wolfe
On Mon, Apr 26, 2010 at 4:27 AM, Lucio Chiappetti
lu...@lambrate.inaf.it wrote:
 I have just found a new kind of spam which went through our spamassassin
 (actually it got a banned notification - we quarantine spam and virus but
 let banned be delivered).

 The subject was Delivery reports about your e-mail, the apparent
 originator was From: MAILER-DAEMON nore...@ourdomain, the body was empty
 and there was a single attachment transcript.zip.

 There are only two Received lines in the header as seen on my destination
 machine (I've edited out the local details):

 Received: from our_mx by my_machine for my_address
 Received: from ourdomain (localhost [113.167.75.53] (may be forged) by our_mx

 So it looks like the spammer connected directly to our mx (one of two),
 faking its name as our domain.

FWIW, outright blocking mail from hosts that use our domain name (or
even the ip address of one of our MXes) as their HELO has proven to be
a safe and efficient way to block some amount of junk.  Not too many
spammers try this, but when they do it makes things simple.
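
(The blocking described above happens at the MTA; a scoring-only analogue
inside SpamAssassin might look like this sketch, which uses the
X-Spam-Relays-External pseudo-header and a placeholder domain:)

header   LOCAL_FORGED_OUR_HELO  X-Spam-Relays-External =~ /^\[[^\]]*\shelo=ourdomain\.com[\s\]]/i
describe LOCAL_FORGED_OUR_HELO  Outside host HELO'd as our own domain
score    LOCAL_FORGED_OUR_HELO  2.5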


 To users it seems a strange mailer daemon message, since our mx are linux
 boxes and do not send zipped reports. So it is obvious spam.

 My question is: is it OK to feed it into the sa-learn crontab we use for
 spam which escapes SpamAssassin, or will the way it is forged cause problems
 (e.g. filtering legitimate mailer daemon reports)?


 --
 
 Lucio Chiappetti - INAF/IASF - via Bassini 15 - I-20133 Milano (Italy)
 
 Citizens entrusted of public functions have the duty to accomplish them
 with discipline and honour
                          [Art. 54 Constitution of the Italian Republic]
 
 For more info : http://www.iasf-milano.inaf.it/~lucio/personal.html
 



Re: Off Topic - SPF - What a Disaster

2010-02-23 Thread Aaron Wolfe
On Tue, Feb 23, 2010 at 4:11 PM, Mike Hutchinson packetl...@ping.net.nz wrote:
 Hello,

 My company attempted to adopt SPF before I started working here. I recall it
 was a recent event when I joined, and I looked into what went wrong (as I
 became the mail administrator not long after). Basically the exact same
 experience was encountered. Customers could not understand the system, which
 is basically what killed it. Some admins of remote systems sending our
 customers important email did not understand the system, or even want to
 deal with it - leaving us without the resources to fix all SPF-related
 problems.

 Adoption of SPF was dropped after 3 days, and we're never going back.

 Same result: SPF is a good idea, but we certainly cannot afford to train
 other sites' administrators, nor all of our customers, on SPF.

Ditto here.  The only folks that seem capable of implementing SPF
properly are the spammers.


 Cheers,
 Mike,


 -Original Message-
 From: Jeff Koch [mailto:jeffk...@intersessions.com]
 Sent: Wednesday, 24 February 2010 9:38 a.m.
 To: users@spamassassin.apache.org
 Subject: Off Topic - SPF - What a Disaster


 In an effort to reduce spam further we tried implementing SPF enforcement.
 Within three days we turned it off. What we found was that:

 - domain owners are allowing SPF records to be added to their zone files
 without understanding the implications, or records that are just not correct
 - domain owners and their employees regularly send email from mailservers
 that violate their SPF.
 - our customers were unable to receive email from important business
 contacts
 - our customers were unable to understand why we would be enforcing a
 system that prevented
   them from getting important email.
 - our customers couldn't understand what SPF does.
 - our customers could not explain SPF to their business contacts who would
 have had to contact their IT people to correct the SPF records.

 Our assessment is that SPF is a good idea but pretty much unworkable for an
 ISP/host without a major education program, which we have neither the time
 nor the money for.
 now a dead issue.

 Any other experiences? I'd love to hear them.



 Best Regards,

 Jeff Koch, Intersessions




Re: Magical mystery colon

2010-01-30 Thread Aaron Wolfe
wow, based on the subject alone, I thought my SA had missed a very strange
spam :)


On Sat, Jan 30, 2010 at 3:16 PM, Philip A. Prindeville 
philipp_s...@redfish-solutions.com wrote:

 I ran yum update on my FC11 machine a couple of days ago, and now I'm
 getting nightly cron errors:

 plugin: failed to parse plugin (from @INC): syntax error at (eval 84) line
 1, near require Mail::SpamAssassin:

 plugin: failed to parse plugin (from @INC): syntax error at (eval 148) line
 1, near require Mail::SpamAssassin:

 I've seen this message periodically, but never figured out what generated
 it.

 Can someone set me straight?  It of course doesn't mention a file, so it's
 hard to know where it's coming from.

 Also, how come the eval block:


foreach $thing (qw(Anomy::HTMLCleaner Archive::Zip Digest::SHA1
 HTML::Parser HTML::TokeParser IO::Socket IO::Stringy MIME::Base64
 MIME::Tools MIME::Words Mail::Mailer Mail::SpamAssassin Net::DNS
 Unix::Syslog )) {
    unless (eval "require $thing") {
        printf("%-30s: missing\n", $thing);
        next;
    }

 doesn't contain a terminating ';', i.e.:

 eval "require $thing"; instead?

 Thanks,

 -Philip





Re: Spamassassin, no new version ?

2010-01-19 Thread Aaron Wolfe
On Tue, Jan 19, 2010 at 1:05 PM, Mikael Syska mik...@syska.dk wrote:
 Hi,

 On Tue, Jan 19, 2010 at 6:57 PM, Stephane MAGAND
 stmagconsult...@gmail.com wrote:
 Hi

 Since June 2008 there hasn't been a new version of SpamAssassin? Is the
 project dead?

 Are you even reading the mailing list? 3.3.0 should be published soon.


To be fair, the SA news page does give the impression that nothing has
happened since 08:  http://spamassassin.apache.org/news.html
This is not the first confused/concerned post like this in recent
weeks.  I guess when 3.3.0 comes out this will be mentioned on the
news page and help sort folks out?

Personally, I think it is a testament to the quality of SA that few
updates are needed to the core software.  Having the logic in rules also
reduces the need to update the core much, I would think.

 I want to change my mail server, which currently runs an old version of
 SpamAssassin. Can you tell me the best
 choice for the best results?

 SpamAssassin + ?
      what rules.cf
      Pyzor ? Razor ? Dcc ?
      What RBL ?

 Default SA 3.2.5; then, when you've got used to all the settings,
 experiment.


Google is your friend.  There are many guides available, some better than
others, none of which will be exactly what you want.  Getting an SA config
that works well for your site takes time, patience and willingness to
experiment and learn.  If you don't have these, please do yourself and
your users a favor and hire an expert to do this for you.


 If I want to put up more than one server, can I use only one Bayes server?

 Yes


 Thanks for your help.
 Stephane
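
(For what it's worth, the usual way to share one Bayes store across several
scanning hosts is to point them all at the same SQL database; a sketch, with
a placeholder DSN, username and password:)

bayes_store_module Mail::SpamAssassin::BayesStore::MySQL
bayes_sql_dsn      DBI:mysql:sa_bayes:bayes-db.example.internal
bayes_sql_username sa_bayes
bayes_sql_password CHANGEME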




Re: OT: Museum piece...

2009-12-16 Thread Aaron Wolfe
On Wed, Dec 16, 2009 at 9:20 PM, Gene Heskett gene.hesk...@verizon.net wrote:
 On Wednesday 16 December 2009, Benny Pedersen wrote:
On ons 16 dec 2009 16:49:52 CET, Charles Gregory wrote

 On Tue, 15 Dec 2009, Chris Hoogendyk wrote:
 Marc Perkel wrote:
 http://www.vintage-computer.com/asr33.shtml

 There was actually a time when I had one of those in my house.

 For your amusement:

 I still have my old Commodore 64 and 1541 drive sitting in the basement.

 And I still have several coco's, including a coco3 in the basement that all
 boots up with a flick of the power switch.

my commodore 128 have basic 7.0 copyrighted from microsoft, i bet bill
gates have seen one of them with a reu 1750 and sayed the final words
of 640k ram ougth to be enough for anyone :)

i still have 8bit computers that works, and also cpm where i have
pascal, fortran, autocad wordstar, you name it, best of all it works !

 No cpm here, but what was once os-9, now nitros-9 because we changed the cpu
 to a hitachi 6309, cmos  smarter, then re-wrote os-9.  Both levels.

my nokia e51 have frodo c64 emulator that emulate all what a 64  1541
can do if one have the hardware, apple iphones have a c64 app aswell
now, so no excuse for not have fun anymore :)

c128 have 1M of mem page mapped in 64k pages, it realy have mmu, so it
can adress one whole meg of mem, fun part is that if i start cpm on
this, the m drive have 4 times more disk space then the system disks :)

 My coco3 has 2 megs, in 8k pages, 64k at a time, instant switch to a
 different map of 64k, and just a few microseconds to remap any of that 2 megs
 into the 64k that is visible.

 One year my daughter's school had a project to construct exhibits
 for a show called 'working class treasures' for the local Worker's
 Heritage Museum. The idea was to put on display 'precious'
 possesions from their parents' childhood. Baseballs, old toys,
 favorite tools, whatever.

 Well, the only thing I had of any 'meaning' to me was my C-64. So
 she put that in her exhibit.

 So yes, my Commodore 64 has actually been displayed in a museum.
 Not just figuratively, but *literally* a 'museum piece'. :)

kids need to know how little is needed to do simple things, and when
thay have seen it, thay will code much better if thay get some jobs
that use there knowledge

 I agree Benny. To demo that, I have the old coco2 that acted like a $20,000
 dollar Grass Valley Group E-Disk for the production video switchers in the
 300 series they made about 20 years ago.  For $245 worth of stuff, its 4x
 faster and 100x more friendly for the tech directors to use than the $20k GVG
 package was.

 Coding in assembly for one of those is something I can still do, I just
 rewrote the mouse driver which was suffering from a huge lack of tlc.

 When someone comes over who can be impressed, I go boot the coco3 up, then
 come back to this linux box, and over a bluetooth serial emulation, log into
 it with minicom.  Just to impress the frogs of course.


Long live the Coco :)

At this moment I am working on a project (half 6809 assembler, half
Java) that allows multiple simultaneous telnet sessions in and out of
a Coco running NitrOS-9.  Just two days ago we made Coco history when
three people (including one of the original OS-9 developers) all
connected over the internet into my coco 3.

8 bit CPUs and ancient operating systems are still very fun to play with.

-Aaron

sorry to be OT

 There must be a Senor Wences line here someplace, but I'll have to plead
 oldtimers.

 --
 Cheers, Gene
 There are four boxes to be used in defense of liberty:
  soap, ballot, jury, and ammo. Please use in that order.
 -Ed Howdershelt (Author)
 The NRA is offering FREE Associate memberships to anyone who wants them.
 https://www.nrahq.org/nrabonus/accept-membership.asp

 No act of kindness, no matter how small, is ever wasted.
                -- Aesop



Re: well, isnt that special...

2009-11-25 Thread Aaron Wolfe
On Wed, Nov 25, 2009 at 12:04 PM, Ned Slider n...@unixmail.co.uk wrote:
 R-Elists wrote:

 just got spammed via constant contact via Aloha Communications Group on
 our
 email lists email address from afrit...@aloha-com.ccsend.com

 obviously trolling for email addresses

 would the Constant Contact employee(s) and advocate on this list please
 kick
 some hiney after you are done rolling around in the money pile?

 on a much more important note, can those on the list that have a good
 handle
 on better filtering spam and/or UCE from Constant please share your SA
 info
 on that please?


 Here's mine:

 uri             LOCAL_URI_C_CONTACT     m{constantcontact\.com\b}
 score           LOCAL_URI_C_CONTACT     12
 describe        LOCAL_URI_C_CONTACT     contains link to constant contact [dot] com

 Got fed up with these clowns a long time ago so I hammer anything from them
 on sight.

That score is a bit extreme, but I've also found that a small positive
score is appropriate for constantcrap mail.
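
(For example, the quoted rule with a nudge instead of a near-certain kill;
1.5 is only an illustration of "small positive score", not a tested value:)

uri      LOCAL_URI_C_CONTACT   m{constantcontact\.com\b}
describe LOCAL_URI_C_CONTACT   contains link to constant contact [dot] com
score    LOCAL_URI_C_CONTACT   1.5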

-Aaron


Re: HABEAS_ACCREDITED SPAMMER

2009-11-23 Thread Aaron Wolfe
On Mon, Nov 23, 2009 at 4:46 PM, jdow j...@earthlink.net wrote:
 From: J.D. Falk jdfalk-li...@cybernothing.org
 Sent: Monday, 2009/November/23 13:37


 On Nov 23, 2009, at 6:14 AM, Matus UHLAR - fantomas wrote:

 You should complain to ReturnPath. Iirc, HABEAS used to sue spammers
 misusing their technology. Don't know if ReturnPath continues practicing
 this.

 Actually, you're confusing Habeas's first technology (which involved suing
 misuse of their copywritten header, and was abandoned years ago) with their
 safe list whitelist product, which Return Path now operates.  Rather than
 suing them, we'll simply kick 'em off the list if they don't meet our
 standards.

 http://wiki.apache.org/spamassassin/Rules/HABEAS_ACCREDITED_COI has some
 basic info, including an address to complain at if you're receiving spam
 from a safelisted IP.

 --
 J.D. Falk jdf...@returnpath.net
 Return Path Inc



 As a sort of intolerant b**ch, is my interpretation of what you just
 said as "Habeas is useless" a reasonable statement? If not, why not?

 {^_^}    Habeas gets a zero score here now.


Habeas accredited spam has been getting a positive score here for some years.

-Aaron


aup examples

2009-11-09 Thread Aaron Wolfe
http://basepath.com/aup/ex/ptutil_8c.html


Re: New to Spamassassin. Have a few ?s...

2009-11-08 Thread Aaron Wolfe
On Sun, Nov 8, 2009 at 11:43 PM, Computerflake gledf...@phhw.com wrote:



 Directly? No.. SpamAssassin, by itself, is really just a scanning engine
 with header modification abilities. It does not do email management,
 quarantines, etc at all. It receives a message, evaluates it, and
 modifies it based on the results, nothing more, nothing less.  (this is
 done to make SA flexible.. it's a mail pipe, so you can glue it into
 almost anything.)

 Generally matters like this are handled by integration tools such as
 MailScanner, amavisd-new, etc, although I do not know of any that
 provide comprehensive quarantine management. That said, I've never
 desired such, so I've not looked at length for one. (I mostly just tag
 mail, and let users filter at the client level as they see fit.)

 See also:
 http://wiki.apache.org/spamassassin/IntegratedInMta


 Wow. Really? Barracuda and Sonicwall both include this feature and it's one

You're comparing apples to oranges.  SA can be used as one part of a
system that does the same things that those products do.  It is not,
by itself, the same thing.   Barracuda is to automobile as SA is to
gasoline engine.


 of the most popular features my clients (who own these products) enjoy. I'll
 have to take a look at the products you mentioned. Anyone else have any
 experience with these types of functions?
 --
 View this message in context: 
 http://old.nabble.com/New-to-Spamassassin.-Have-a-few--s...-tp26260803p26261237.html
 Sent from the SpamAssassin - Users mailing list archive at Nabble.com.




Re: Constant Contact

2009-10-17 Thread Aaron Wolfe
On Sat, Oct 17, 2009 at 5:47 AM, rich...@buzzhost.co.uk
rich...@buzzhost.co.uk wrote:
 On Fri, 2009-10-16 at 13:29 -0700, John Hardin wrote:
 On Fri, 16 Oct 2009, John Rudd wrote:

  Me.  I work for one of their clients (a University).  One or two of
  our divisions use them for large mailings to our internal users.

 How is Constant Contact better than (say) GNU mailman for that purpose?

 It's so you can pay someone to send spam, skip past lots of things like
 Barracuda Network$$$ devices and other filters and not have to face the
 music and termination from your provider for spamming.

 Constant Contact = Constant Spam. An iptables rule dropping all of their
 ranges at SYN is a great way to cut *lots* of crap mail.



For a personal server, I'd agree they send nothing I want to receive.

However, for anything more, I think you will get complaints.  Constant
Contact is one of the better ESPs, kind of like a kick in the shin
is better than a kick in the teeth.  They do have some legitimate
customers, and they do have some spamming customers.  The truth is not
so good as Tara would like it to be, and not so bad as some have
claimed.

What I really can't understand is why they are on any kind of
whitelist.  Putting this type of company on a whitelist is great if
you're trying to support their revenue model.. now they can tell their
clients to use their service because they are on whitelists, this is
very attractive to spammers.  But what good does it do for anyone
else?  Why not let their messages meet the same scrutiny as any other
potential source of spam?  If they get blacklisted, great, now their
revenue model is hurt until they find ways to avoid it.  If they
manage to stay off the lists, even better, they are running as spam
free as they claim to be.  Why are we covering for their mistakes and
supporting a company that profits from sending spam, even if it's only
sometimes, by whitelisting them?


Re: White lists and white rules

2009-10-12 Thread Aaron Wolfe
On Mon, Oct 12, 2009 at 11:50 AM, Marc Perkel m...@perkel.com wrote:


 Warren Togami wrote:

 On 10/12/2009 09:18 AM, Marc Perkel wrote:

 For what it's worth there are really only 3 serious white lists on the
 planet. I'm surprised no one is
 testing the emailreg list. There are dozens of black lists. Doing white
 lists is actually easier than doing
 black lists because there are thousands of servers out there that send
 nothing but good email. That have
 good FcRDNS, they are static, and unlike the black lists IPs they aren't
 trying to be evasive. It's low
 hanging fruit. On my servers if you are white listed your message just
 sails through the system.

 This seems to me like a naive system.  Even the best networks that send
 nothing but ham will occasionally have an infected spambot.

 BTW, how do I report HOSTKARMA W failures?

 Warren


 Not true. There are servers that, say, send out bank statements, and 100% of
 what they send is bank statements.


Until the day those servers get hacked, or they take on a new client
who sends a different type of mail, etc.


Re: Problems with high spam

2009-09-23 Thread Aaron Wolfe
On Wed, Sep 23, 2009 at 2:06 PM, Jose Luis Marin Perez 
jolumape...@hotmail.com wrote:

  Dear Sirs

 A few moments ago I noticed that SA was not assigned any score for SPAM
 emails, reviewing the log I see this:

 *...@40004aba627c21bee88c [25630] info: spamd: got connection over
 /tmp/spamd.sock
 @40004aba627c21dbc344 [10362] info: prefork: child states:
 
 @40004aba627c21de4f9c [10362] info: prefork: server reached
 --max-children setting, consider raising it
 @40004aba627c21f6a9fc [29083] info: spamd: got connection over
 /tmp/spamd.sock
 @40004aba627c22137ce4 [10362] info: prefork: child states:
 
 @40004aba627c23420234 [25630] info: spamd: processing message 
 20090923123800.35362610...@mail6.shermanstravel.com for
 cama...@qnet.com.pe:89
 @40004aba627c235e293c [10362] info: prefork: server reached
 --max-children setting, consider raising it
 @40004aba627c26639554 [29083] info: spamd: processing message 
 20090923174010.29472.qm...@mkt1.lan.com for cbr...@qnet.com.pe:89
 @40004aba62832e01e694 [10362] info: prefork: child states:
 
 @40004aba62832e01ee64 [10362] info: prefork: server reached
 --max-children setting, consider raising it
 tail: `/var/log/qmail/spamd/current' has been replaced;  following end of
 new file

 cpu

 Cpu(s): 89.2% us,  9.8% sy,  0.0% ni,  0.0% id,  0.0% wa,  1.0% hi,  0.0% si

 memory

              total   used   free  shared  buffers  cached
 Mem:           501    319    181       0       22      78
 -/+ buffers/cache:            218    282
 Swap:         1027     38    988

 Load

 13:02:27 up 35 days, 21:49,  4 users,  load average: 21.76, 21.17, 17.37


 It was solved by restarting SA.

 Is this due to a lack of server resources?

 Thanks

 Jose Luis From: list...@abbacomm.net



Maybe.  Probably not.  Who knows?

Why was your system load at 21?  Maybe you just have way too many instances
of spamassassin running, or maybe you've got your system configured in a
really inefficient way.

How could we know?




 list...@abbacomm.net
  To: users@spamassassin.apache.org
  Subject: RE: Problems with high spam
  Date: Wed, 23 Sep 2009 10:27:38 -0700

 
 
 
   but it could be nice that sare rules was checked in the mass
   check for 3.3.x to get the best rules out in new rule sets
  
   or would some other try this ?
  
   --
   xpoint
 
  Benny!
 
  excellent idea in general...
 
  will those in authority in SA team please act upon this and tell us in
 some
  positive way what appears to be best to keep out of SARE and what is
 not...
 
  much of the time it seems like we are double dipping with some rules and
  something needs to change...
 
  i realize it can be different from site to site yet maybe if we had some
  extra info we could all make better decisions eh???
 
  :-)
 
  - rh
 

 --
 Invite your mail contacts to join your friends list with Windows Live
 Spaces. It's easy! Try 
 it!http://spaces.live.com/spacesapi.aspx?wx_action=createwx_url=/friends.aspxmkt=en-us



Re: Problems with high spam

2009-09-23 Thread Aaron Wolfe
On Wed, Sep 23, 2009 at 2:58 PM, Jari Fredriksson ja...@iki.fi wrote:

  Dear Sirs,
 
  So runs Spamd
 
  states: 
 
  /usr/bin/spamd -v -u vpopmail -m 20 -x -q -s stderr -r
  /var/run/spamd/spamd.pid
 
  If I have about 10,000 emails, would having fewer spamd processes
  (for example, 5) cause problems?
 
  Thanks
 
  Jose Luis
 
  Well, 10,000 is what I get in a month, so I'm no expert.

  But if you run more processes than your hardware can sustain, there will
  be problems, because they will just thrash the system and not run.

  If you try with 5, everything will run, and the email queue will grow.
  Nothing will be lost.

 But with -m 20 I'm afraid something will eventually be lost as the system may 
 crash.

  10,000 mails in queue... Maybe you need a farm of those machines. SpamAssassin
  can do that.


Yes, this looks like the problem.  Reduce the number of processes to fit
within RAM and the machine can handle much more mail.  Look at how much RAM
each one uses and adjust so that you have as many as possible without
swapping.  Sounds like 5 would be a good place to start.

It's no problem to have some mails in the queue; more important is the time
any one message spends there, or whether the queue continues to grow.  10k in
queue is not too bad as long as the number starts dropping after proper
adjustment of SA instances.

A lot of the time SA spends with a message is just idling, waiting on network
checks to finish.  A local caching nameserver can speed this up.  Do you use
one?  It is probably worth the RAM it takes away from SA.

Once you limit the number of instances to work within the available RAM, see
if the delay is reasonable.

good luck
-Aaron


Re: MagicSpam

2009-09-23 Thread Aaron Wolfe
On Wed, Sep 23, 2009 at 1:40 PM, linuxmagic sa...@linuxmagic.com wrote:

 Slightly old thread, but we should clear any misconceptions.  MagicSpam is
 NOT anything like SpamAssassin.  LinuxMagic has been developing Anti-Spam
 solutions for the ISP and Telco markets for quite some time, focusing on the
 SMTP transaction layer.  This approach gives a more 'Zero Day' style
 protection, as it can identify spam sources prior to accepting the email,
 reducing backscatter and overhead.

 Mail Servers should have the protection during the SMTP transaction, and we
 have been porting our technology to other mail servers which do not have
 this ability.  Our first ports were to Qmail style mail servers, and since
 then we have ported to many others including Linux and Windows platforms.

 Just visit the forums, and see what customers have to say about this
 product, as it speaks for itself.  We have patent pending technology in
 place, to provide for an especially unique methodology, and more
 importantly, we make it very easy to install and operate.

 http://www.magicspam.com and http://forums.wizard.ca/viewforum.php?f=16



I really like this quote from their sales web site:

Now you can have MagicSpam spam protection for your Postfix (Linux)
Mail Servers. Complete with one click install...

I am quite interested to see this one click install on my Postfix
(Linux) Mail Server, as like most postfix servers there is no mouse or
gui.  Even given a server that has these things, I'm surprised they
have invented technology that can analyze a postfix install to the
degree needed for correct installation of their product with no more
than a single click.  With tech like that, I can't believe they
haven't taken the world by storm.  Maybe they're still working on
single click world domination technology.


Re: Problems with high spam

2009-09-22 Thread Aaron Wolfe
On Tue, Sep 22, 2009 at 4:02 PM, Jose Luis Marin Perez
jolumape...@hotmail.com wrote:
 Dear Sirs.

 Thank you for your answers

 Qmail-Smtpd have the following RBL configured:

 bl.spamcop.net
 cbl.abuseat.org
 combined.njabl.org

Consider Zen.  It is excellent.  SpamCop and NJABL have caused too
many false positives to be used for blocking here, although they are very
useful in scoring mail.  Everyone's mail is different; YMMV.

Also consider the Invaluement block lists; see http://dnsbl.invaluement.com/
A very, very good list that is usable for blocking.  Not free, but
very affordable.
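
(Since Invaluement is distributed by rsync and served from your own DNS, a
scoring rule against it looks like any other RBL check; a sketch, where the
zone name is a placeholder for whatever you serve the ivmSIP data under
locally, and the score is illustrative:)

header   LOCAL_RCVD_IN_IVMSIP  eval:check_rbl('ivmsip-lastexternal', 'ivmsip.dnsbl.local.')
describe LOCAL_RCVD_IN_IVMSIP  Last external relay listed in locally served ivmSIP
tflags   LOCAL_RCVD_IN_IVMSIP  net
score    LOCAL_RCVD_IN_IVMSIP  3.0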


 These are the SARE rules which I add to SA:


Careful with this: some of those sets will cause you FPs!  Don't just
blindly copy things; read about what you are doing first.

 echo 70_sare_adult.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_bayes_poison_nxm.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_evilnum0.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_evilnum1.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_evilnum2.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_genlsubj0.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_genlsubj1.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_genlsubj2.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_genlsubj3.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_genlsubj.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_genlsubj_x30.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_header0.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_header1.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_header2.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_header3.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_header.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_highrisk.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_html0.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_html1.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_html2.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_html3.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_html4.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_html.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_obfu0.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_obfu1.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_obfu2.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_obfu3.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_obfu.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_oem.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_random.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_specific.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_spoof.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_stocks.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_unsub.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_uri0.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_uri1.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_uri3.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_whitelist.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_whitelist_rcvd.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 70_sare_whitelist_spf.cf.sare.sa-update.dostech.net >> /etc/mail/spamassassin/sare-sa-update-channels.txt
 echo 

Re: Problems with high spam

2009-09-22 Thread Aaron Wolfe
On Tue, Sep 22, 2009 at 10:21 PM, LuKreme krem...@kreme.com wrote:

 On 22-Sep-2009, at 14:42, Aaron Wolfe wrote:

 Also consider the invalument block lists, see
 http://dnsbl.invaluement.com/
 A very, very good list that is usable for blocking.  Not free, but
 very affordable.


 I don't like how Invaluement does their pricing structure, actually.
 Firstly, I don't feel comfortable telling a 3rd party how many 'users' I
 have. Secondly, I don't feel like determining what they consider a 'user'.
 Third, because of my HELO/EHLO restrictions and rejection of unknown users I
 make FAR fewer RBL calls than most mailservers (I reject about 87% of all
 connections, and 98% of those rejections are in HELO/EHLO or unknown, Only
 0.66% over the last week rejected by Zen's RBL), so if I used Invaluement, it
 would probably only be for a handful of callouts per day but I would be
 paying the same amount as someone who was using it to do many tens of
 thousands of callouts per day.


If you used the Invaluement lists, you would not be doing *any* callouts per
day.  The list is provided via rsync; you serve it from your own DNS servers
to your MXes and rsync the entire list every few minutes.  Thus all sites,
10 users or 10 million users, use the same amount of Invaluement's resources
to acquire the list.  This is not what you are paying for.

You're paying for the time and effort that the maintainer has put into
making this list so good.  How else can such a system offer a fair payment
structure, if not by basing it on the number of users who benefit at each
site?



 Sure, $20 a month is not a lot of money, but looking at my mail I figure
 that would be costing me about 1/2 a cent per check, if not more (I average
 out only about 1000 checks of zen per week), assuming I made exactly as many
 checks to Invaluement as to Zen, that means slightly over 1/2 cent per check.


Most people would value this in terms of the time they save by not dealing
with the spam, or, in a larger organization, the reduced calls to tech support
about spam plus the time the employees save by not getting the spam.  You might
also find that there is great value in the reduced load on your content
scanners, because the Invaluement list can cut the traffic making it to these
resource-hungry systems quite dramatically.  The list has saved my
organization many times its cost simply by reducing the number of content
filtering nodes we have to run, or in other words allowing us to support
more paying customers on less hardware.

Everyone is entitled to their opinion, but for us the invaluement RBL is a
no brainer.  Sorry to sound like an advert here, practically all these same
reasons are used to justify the purchase of a Zen rsync feed when you
outgrow their free level of service.  That will cost you quite a bit more,
but still generally worth it in terms of support and hardware savings.


-- 
 Don't congratulate yourself too much, or berate yourself either.
You choices are half chance; so are everybody else's.




Re: Problems with high spam

2009-09-21 Thread Aaron Wolfe
On Mon, Sep 21, 2009 at 11:34 AM, Martin Gregorie mar...@gregorie.org wrote:
 On Mon, 2009-09-21 at 09:58 -0500, Jose Luis Marin Perez wrote:

 I will implement improvements in the configuration  suggested and
 observe the results, however, that more could be suggested to improve
 my spam service?

 I think you need to find out more about where your system resources are
 going.

 For starters, take a look at maillog (/var/log/maillog on my system) to
 check whether any SA child processes are timing out. If they are, you
 need to find out why processing those messages took so long and, if
 possible, speed that up, e.g. if RBL checks or domain name lookups are
 slow, consider running a local caching DNS.

 If that doesn't turn up anything obvious, use performance monitoring
 tools (sar, iostat, mpstat, etc) to see what is consuming the system
 resources: you have to know where and what the bottleneck(s) are before
 you can do anything about them. You can find these tools here:

 http://freshmeat.net/projects/sysstat/

 if they aren't part of your distro's package repository.


 Martin




Has there been any evidence that the OP's system is short on
resources?  If so I missed it.
The complaint was that too much spam is making it past the filter,
with a detection rate of only 54%.
This is not a very good percentage for a typical mail flow (if it is
actually accurate, i.e. not missing the mails rejected by RBLs or
RFC/syntax checks).

There were several issues with the configuration that kind people on
the list have pointed out.  Assuming these suggested changes have been
implemented, what is the detection rate now?

From the posted local.cf, it is evident that the SA configuration is
not working very well.  There are many manually entered whitelist
rules, and also many manually added rules that score 100.  This is a
telltale sign of a very bad setup that is attempting to band-aid
symptoms instead of fixing the core issue.  And as pointed out before, both
the whitelist entries and the subject matches scored at 100 are very bad ideas.
Whitelisting the sender is easily taken advantage of by spammers,
and those +100-point matches are sure to generate FPs.  Using rules this
way demonstrates a lack of understanding of the way that SA is supposed
to work.  SA rules rarely attempt to kill a message in one shot (100
points); instead they add or subtract a small amount from the score based
on the likelihood that a match means spam or ham.  Fine-tuning, not
smashing with a hammer.
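
(To make the contrast concrete with two hypothetical rule names; only the
scores matter here:)

score LOCAL_SUSPECT_SUBJECT       0.8   # fine-tuning: adds a little evidence to the total
score LOCAL_SUSPECT_SUBJECT_KILL  100   # hammer: a single false match is an instant false positive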

So, I think it is pretty safe to assume that the problem lies within
the SA configuration.

Maybe there are old rulesets that need to be updated.  Maybe not a
good selection of rulesets in the first place.  Perhaps this is an
out of the box configuration that has never been properly set up.

There are many good guides to setting up SA and supporting services
available online.  If the OP were to follow one of them to the letter,
I think the detection rate would be much improved.  Also some time
spent learning more about SA in general would allow the OP to fine
tune his config so that the current manual effort put into creating
hammer smashing rules is unneeded.

Good luck
-Aaron


Re: Problems with high spam

2009-09-19 Thread Aaron Wolfe
2009/9/18 Karsten Bräckelmann guent...@rudersport.de:
 On Sat, 2009-09-19 at 09:48 +1200, Jason Haar wrote:
 On 09/19/2009 09:13 AM, Jose Luis Marin Perez wrote:
  For more than 10,000 emails a day, how much memory should the server have?
  How can one calculate the amount of memory needed?

 10,000 a day means you are running a real mail server (ie not just for
 your home), as such you really need a real server. I'm surprised

 The CPU should be capable of handling it, I guess. I mean, I've set up
 more than a single SA server on an Atom CPU, each of them pretty much
 bored to death -- granted, not 10k messages a day each, but still,
 they're just idling...

 The RAM is the killer here. With half a Gig, I'd feel uncomfortable
 running SA for 10k messages a day. And then there's ClamAV, the MTA, and
 probably more. I just hope he's not also running...

 Crap. I was about to say something along the lines of webserver,
 mediawiki and thus SQL server, but -- he is!

 This reminded me of the fact that he is running an SQL server for user
 prefs, AWL and Bayes. Wow.


 This machine NEEDS more RAM. In fact, I'd guess half of the spam
 slipping through is due to timeouts. Thrashing into hell.


Throwing RAM at a server is not the solution in this case.  512 MB is
sufficient to handle this mail load, as indicated by his post showing
little swap utilization on the system and confirmed by my real-world
experience: here we handle over 1 million messages per day per node, and
each node has 1 GB of RAM.  The RAM required is easily calculated as base
services + SA instance usage x the number of instances you'd like to run.
Having fewer instances generally just means slight (very slight in most
cases) delays.  Having more instances than your RAM can contain means
big delays.  A properly configured server will not start swapping and
falling over when a flood of mail comes in; mail simply spends more
time in the queue.  The difference between 1 second and 1 minute in the
queue is not usually significant to users.

The problem here is bad administration.  Hopefully, with the advice given
on the list and, better yet, some time spent studying the docs, this can
be corrected.



 you're not swapping to hell. What does the system feel like? What does
 top say? What does the spamd syslogs say? I'd think you'd be having all
 sorts of issues - which would impact how well spamd operates.

 BTW, my questions are rhetorical. I mean you need to do SysAdmin-y
 type things to ensure the solution you have in place is operating
 correctly - there is no one answer that anyone can give you that works
 for everyone. Read man pages, etc.

 --
 char *t=\10pse\0r\0dtu...@ghno\x4e\xc8\x79\xf4\xab\x51\x8a\x10\xf4\xf4\xc4;
 main(){ char h,m=h=*t++,*x=t+2*h,c,i,l=*x,s=0; for (i=0;il;i++){ i%8? c=1:
 (c=*++x); c128  (s+=h); if (!(h=1)||!t[s+h]){ putchar(t[s]);h=m;s=0; }}}




Re: Barracuda RBL in first place

2009-08-14 Thread Aaron Wolfe
On Fri, Aug 14, 2009 at 11:24 AM, Chris Owenow...@hubris.net wrote:
 On Aug 14, 2009, at 10:13 AM, Mike Cardwell wrote:

 The comparisons on that page are useless. What matters is list policy,
 reliability and reputation.

 SpamHaus is hands down the best dnsbl.

 While I certainly agree that SpamHaus is very good, I would argue that
 Invalument is currently better.  It certainly stops a lot more spam here and
 I think false positives are still extremely low.


Invaluement lists are also the top performers at my site:

Total messages: 273235355
Total blocked: 227710956 83.34%

 Unknown user                           32.00%  (32.00%)   87427696
 Greylisted                             24.88%  (16.92%)   46225401
 Throttled                              11.03%   (5.64%)   15399444
 Relay access denied                     0.01%   (0.00%)       7034
 Bogus DNS (Broadcast)                   0.01%   (0.00%)      11692
 Bogus DNS (RFC 1918 space)              0.07%   (0.03%)      82135
 Spoofed Address                         0.26%   (0.12%)     319551
 Unclassified Event                      0.77%   (0.35%)     949388
 Temporary Local Problem                 0.01%   (0.00%)       8165
 Require FQDN sender address             0.04%   (0.02%)      51022
 Require FQDN for HELO hostname          8.97%   (4.02%)   10988455
 Require DNS for sender's domain         0.78%   (0.32%)     870643
 Require Reverse DNS                    23.83%   (9.65%)   26372877
 Require DNS for HELO hostname           0.20%   (0.06%)     165157
 The Spamhaus Block List                21.87%   (6.74%)   18405091
 The Invaluement SIP Block List         22.14%   (5.33%)   14557404
 The SIP/24 Block List                   3.84%   (0.72%)    1965510
 The Barracuda Reputation Block List     3.89%   (0.70%)    1915628
 (several RBLs not widely used snipped)

We have several hundred domains and each can use its own filtering
options, so not all RBLs/checks are used on all mail.  Checks are
listed in the order applied, so a message dropped by "Unknown user", for
instance, is never seen by "Greylisted".

Invaluement lists block over 25% of all messages that make it past all
the checks in front of them, including Spamhaus.  That's massive.
Barracuda is not used by a majority of clients and is used after the
others, so the low number is not an indication of poor performance.
I've actually had pretty good luck with it.

-Aaron

 --
 RANK    RULE NAME                       COUNT  %OFMAIL %OFSPAM  %OFHAM
 --
  1     URIBL_INVALUEMENT               27029    47.58   85.13    0.60
  2     RCVD_IN_INVALUEMENT             26116    45.81   82.26    0.22
  3     HTML_MESSAGE                    25184    79.83   79.32   80.48
  4     BAYES_99                        23445    41.09   73.84    0.12
  5     RCVD_IN_INVALUEMENT24           23290    40.85   73.35    0.18
  6     URIBL_BLACK                     22372    39.49   70.46    0.74
  7     RCVD_IN_JMF_BL                  16845    30.70   53.06    2.74
  8     URIBL_JP_SURBL                  15962    27.99   50.27    0.12
  9     DKIM_SIGNED                     12137    37.32   38.23   36.18
  10     DKIM_VERIFIED                   11051    33.93   34.81   32.84

 Chris

 -
 Chris Owen         - Garden City (620) 275-1900 -  Lottery (noun):
 President          - Wichita     (316) 858-3000 -    A stupidity tax
 Hubris Communications Inc      www.hubris.net
 -







Re: Barracuda RBL in first place

2009-08-14 Thread Aaron Wolfe
On Fri, Aug 14, 2009 at 9:39 PM, LuKremekrem...@kreme.com wrote:
 On 14-Aug-2009, at 18:44, Aaron Wolfe wrote:

                The Spamhaus Block List 21.87% (6.74%)             18405091
         The Invaluement SIP Block List 22.14% (5.33%)             14557404


 What would be interesting is the XOR on these two.

Well, you have half of it, as any hit shown here by Invaluement was
missed by Spamhaus.  I can't give you the data for the other cases because
it's a short-circuit, 550-style setup.

Maybe someone else uses both of these for scoring instead of blocking and
can provide the stats on overlap?

I know Rob's original intent with the Invaluement lists was to augment
Spamhaus rather than replace it.  If this is still the case, I
wouldn't be surprised if XOR is mostly true.



 I also don't understand what the percentage number in parenthesis is.


It's the percentage of hits vs. all messages, including the ones the check
never got to see.  Not particularly useful.


 --
 Q how do you titillate an ocelot?
 A you oscillate its tit a lot.




Re: Any one interested in using a proper forum?

2009-07-30 Thread Aaron Wolfe
On Thu, Jul 30, 2009 at 5:01 PM, ktnj_engl...@kawasaki-tn.com wrote:

 Actually I think Nabble is great for those of us who can't handle the traffic
 of the whole mailing list.


This list generates less than 50 messages per day on average:

 
http://gmane.org/plot-rate.php/plot.png?group=gmane.mail.spam.spamassassin.generalplot.png

I've got to ask, what type of system are you using that can't handle
this traffic?  And does SA even run on such a thing :)?


 And I wonder, what has REALLY gotten better since the '80s?  Google, cell
 phones, and Priuses is all I can think of off the top of my head.
 Powershell seems like Bash finally invented for Windows...
 --
 View this message in context: 
 http://www.nabble.com/Any-one-interested-in-using-a-proper-forum--tp24697144p24747242.html
 Sent from the SpamAssassin - Users mailing list archive at Nabble.com.




Re: Any one interested in using a proper forum?

2009-07-30 Thread Aaron Wolfe
On Thu, Jul 30, 2009 at 10:07 PM, John Ruddjr...@ucsc.edu wrote:
 On Thu, Jul 30, 2009 at 17:54, Aaron Wolfeaawo...@gmail.com wrote:
 On Thu, Jul 30, 2009 at 5:01 PM, ktnj_engl...@kawasaki-tn.com wrote:

 Actually I think Nabble is great for those of us who can't handle the 
 traffic
 of the whole mailing list.


 This list generates less than 50 messages per day on average:

  http://gmane.org/plot-rate.php/plot.png?group=gmane.mail.spam.spamassassin.generalplot.png

 I've got to ask, what type of system are you using that can't handle
 this traffic?  And does SA even run on such a thing :)?

 You say that as though this list is all we read.


I interpreted the phrase "handle the traffic" to mean something the
mail server was doing, not a human :)

 If this list was ALL I read, instead of 100's of emails per day from
 all of my list, work, personal, etc. correspondence, then that'd be
 different.

 Further, this list has one of the lowest signal to noise ratios of any
 of the lists I'm on (don't get me wrong, when I say noise here, I
 don't mean totally worthless, I mean not relevant to me).  So, the
 logical choice of reducing the flood of traffic is by cutting back
 on how many of those 50-100 emails per day hit my inbox.



Re: Any one interested in using a proper forum?

2009-07-28 Thread Aaron Wolfe
On Tue, Jul 28, 2009 at 7:07 AM, snowwebpe...@snowweb.co.uk wrote:

 I don't know about anyone else, but I'm getting a bit hacked of with this
 1980's style forum. I'm trying to get to the bottom of an SA issue and this
 list/forum thing is giving me a bigger headache than SA!

 Spamassassin has more than one or two users now and I personally think that
 it should have a support forum to match the class of software, which is now
 world class.

 I know it's free and all that, but even so, if this is the only form of
 support they provide, I'm thinking that I'll just start an alternative
 support forum, using standard, full featured forum software (like SMF).

 Is there any support for this (I already know there will be opposition from
 those who are 'resident' here. Sorry guys, I just want do something to help
 those who just dive in when they have an urgent problem. No hard feelings I
 hope.)

 Peter Snow



From your posts to the list (both this thread and others recently), it
seems you would like a place where you can easily just ask questions
any time things on your system don't work.  This fits in with what a
typical forum provides, but this is *not* what the spamassassin user
list has been in the past, and I for one hope it never becomes such a
thing.

When you post to this mailing list, you are putting your thoughts or
question in front of many experts (at least for a few seconds :).
This means you have a great responsibility to not waste everyone's
time.  It means you are expected to spend your own time learning
before you take time from others.   For the most part, posters
understand this (or are informed/reminded when needed) and the list
works well to serve its intended purpose.

When you post to this list, you will get a response, and it will
generally be excellent information.  However, using this list for
support should be a last resort.  It should not be convenient, and we
should not seek to gain exposure.   The list can be found where you
would expect to find it, that is enough.

Compare this to a forum, where it is typical to post questions rather
than do any self-study.  There is no barrier to entry, and the forum seeks
to generate as many posts as possible so it can sell banner ads.  Now
you have lots of the same questions (most of which could be answered
with the slightest bit of learning); these questions often either go
unanswered or are incorrectly/incompletely answered, and there is no peer
review (if there are any experts on the forum, they certainly don't
always see each other's work as they do on a mailing list).  Top it
all off with a helping of forum spam and you have something that is
*less* useful to all but the most beginner users (and even that might
be questionable).

Sure, a forum for SA would get more questions than the list does.  It
would not get better answers.

Funny that a request for forums would come from nabble...  If nabble
users are any indication of what a forum would be like, I think it's
pretty obvious that posting quality would be crap.

Just my $0.02.
-Aaron



 --
 View this message in context: 
 http://www.nabble.com/Any-one-interested-in-using-a-proper-forum--tp24697144p24697144.html
 Sent from the SpamAssassin - Users mailing list archive at Nabble.com.




boosting PBL score suggestions

2009-07-22 Thread Aaron Bennett

Hi,

We're noticing that much of the spam which makes it through our filter 
hits the Spamhaus PBL rule.  However, that rule by itself scores only 
0.9.  Since we quarantine spam through a web interface (Maia), we're 
pretty tolerant of false positives.


Do any of you folks have a suggestion about raising the RCVD_IN_PBL 
score?  I was thinking of raising it as high as 2 or 3.  Another thing 
I'm considering is a META rule that scores for PBL + BAYES_60, etc.
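
(One possible shape for such a meta rule, as a sketch; the name and score are
illustrative, and BAYES_60 only covers the 60-80% band, so similar metas
would be needed for the higher bands if they are not already scoring enough:)

meta     LOCAL_PBL_BAYES60   RCVD_IN_PBL && BAYES_60
describe LOCAL_PBL_BAYES60   Last external relay in the PBL and Bayes in the 60% range
score    LOCAL_PBL_BAYES60   1.5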


I am generally reluctant to mess much with the default scoring -- but 
I'm always looking for a better setup.


Aaron Bennett
Clark University ITS



Re: FWD offlist reply CONSTANT CONTACT

2009-07-06 Thread Aaron Wolfe
+1 for ending this thread

On Mon, Jul 6, 2009 at 2:25 PM,
rich...@buzzhost.co.ukrich...@buzzhost.co.uk wrote:
 From:    Chris Owen ow...@hubris.net
 To:      rich...@buzzhost.co.uk
 Cc:      Tara Natanson t...@natanson.net
 Subject: Re: constantcontact.com
 Date:    Mon, 6 Jul 2009 13:02:07 -0500 (19:02 BST)
 Mailer:  Apple Mail (2.935.3)


 On Jul 6, 2009, at 1:00 PM, rich...@buzzhost.co.uk wrote:

 I'm keen to hear a cross section of views.

 Can you please just give this a rest.  It was stupid 3 days ago.  Now
 it is just wasting everyone's time.

 Chris

 --
 Chris Owen         - Garden City (620) 275-1900 -  Lottery (noun):
 President          - Wichita     (316) 858-3000 -    A stupidity tax
 Hubris Communications Inc      www.hubris.net
 --
 Why? Are you in charge?








Re: constantcontact.com

2009-07-03 Thread Aaron Wolfe
On Fri, Jul 3, 2009 at 2:39 AM,
rich...@buzzhost.co.ukrich...@buzzhost.co.uk wrote:
 I'm probably missing something here - but Constant Contact (who we block
 by IP) have been a nagging source of spam for us. I'm just wondering why

Could you share your IP list?  I'd like to block these clowns too (and
I'm lazy).


 25_uribl.cf has this line in it:

 ## DOMAINS TO SKIP (KNOWN GOOD)

 # Don't bother looking for example domains as per RFC 2606.
 uridnsbl_skip_domain example.com example.net example.org

 ..
 uridnsbl_skip_domain constantcontact.com corporate-ir.net cox.net cs.com

 Is this a uri that is really suitable for white listing ?





Re: constantcontact.com

2009-07-03 Thread Aaron Wolfe
On Fri, Jul 3, 2009 at 5:06 AM, Justin Masonj...@jmason.org wrote:
 I've heard that they are diligent about terminating abusive clients.
 Are you reporting these spams to them?

 --j.


From what I've seen, most of the traffic from them probably doesn't
qualify as spam by the common definition.  It is, however, stuff that
nobody here wants.  I'm surprised SA is giving them a pass, but there
have been other strange things that got a free ride through SA in the
past, like Habeas certified junk.


 On Fri, Jul 3, 2009 at 09:55, Mike
 Cardwellspamassassin-us...@lists.grepular.com wrote:
 rich...@buzzhost.co.uk wrote:

 I'm probably missing something here - but Constant Contact (who we block
 by IP) have been a nagging source of spam for us. I'm just wondering why
 25_uribl.cf has this line in it:

 ## DOMAINS TO SKIP (KNOWN GOOD)

 # Don't bother looking for example domains as per RFC 2606.
 uridnsbl_skip_domain example.com example.net example.org

 ..
 uridnsbl_skip_domain constantcontact.com corporate-ir.net cox.net cs.com

 Is this a uri that is really suitable for white listing ?

 A set of perl modules has been uploaded to cpan today for talking to the
 ConstantContact API:

 http://search.cpan.org/~arich/Email-ConstantContact-0.02/lib/Email/ConstantContact.pm

 I just thought it was a weird coincidence, seeing as I'd never heared of
 them before today.

 --
 Mike Cardwell - IT Consultant and LAMP developer
 Cardwell IT Ltd. (UK Reg'd Company #06920226) http://cardwellit.com/





Re: constantcontact.com

2009-07-03 Thread Aaron Wolfe
On Fri, Jul 3, 2009 at 6:11 AM,
rich...@buzzhost.co.ukrich...@buzzhost.co.uk wrote:
 On Fri, 2009-07-03 at 12:06 +0200, Yet Another Ninja wrote:
 On 7/3/2009 11:14 AM, rich...@buzzhost.co.uk wrote:
  On Fri, 2009-07-03 at 10:06 +0100, Justin Mason wrote:
  I've heard that they are diligent about terminating abusive clients.
  Are you reporting these spams to them?
 
  Yes - but you would think a log full of 550's may be a clue.
 
  What concerns me is SpamAssassin effectively white listing spammers.
  White listing should be a user option - not something added in a
  nefarious manner. At least it is clear to see with Spamassassin which is
  a plus - but I cannot pretend that I am not disappointed to find a
  whitelisted 'spammer net' in the core rules. I'm wondering why (other
  than MONEY) it would have ended up in there?

 this has historical reasons and it's not about whitelisting spammers

 Many moons ago, when SA started doing URI lookup with the SpamcopURI
 plugin, there was only one URI BL: SURBL and to spare it from
 unnecessary queries, the skip list was implemented to avoid the extra load
 and a number of ESPs which back then were considered to never send
 UBE/UCE were added.
 Times have changed and there are options regarding URI lookups, in public
 and private BLs. Also, URI BLs can handle way more traffic than they
 could 6 or 7 years back.

 There have been numerous requests to get some of these skip entries
 removed but none was honoured.

 The bottom line is that it's trivial and cheaper to write a static URI
 rule to tag a URL (if you really need to), which doesn't affect the
 globe, than to hammer the BLs with zillions of extra queries.

 SA is conservative and caters to a VERY wide user base, with VERY
 different understandings of what is UBE/UCE, so while everyone saves
 resources on useless queries, you still have a way to score constantcontact
 with 100 if that's your choice.


 axb
 Should that be Hi$torical Rea$ons ? ;-) There is no current excuse and
 this kind of alleged legacy rubbish needs to be pulled out.

 As it stands this is simply white listing a bulker. A spam filter that
 white lists a spammer - how bizarre ! I'm cynical. The only logical
 reason I can see for anything of this nature is money changing hands.



I think the point was that the URIBL's are never going to be listing
these domains, so why waste time looking them up, right or wrong.
It's not really an endorsement by SA, just a way to save resources
since this check is not going to return results anyway.  Don't know if
this theory is correct, but if this is the only special treatment
given to constant contact, then I don't really think there is any
conspiracy here.  Why do a check that isn't going to work anyway?
Hopefully the other rules will judge the messages on their own merit,
they do seem to catch *some* of the junk coming out of c.c.
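For anyone who does want to act on the static rule suggestion quoted above, a local.cf sketch might look like this (the rule name and score are purely illustrative):

    uri      LOCAL_URI_CONSTANTCONTACT  /\bconstantcontact\.com/i
    describe LOCAL_URI_CONSTANTCONTACT  Message contains a constantcontact.com URI
    score    LOCAL_URI_CONSTANTCONTACT  1.0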


Re: constantcontact.com

2009-07-03 Thread Aaron Wolfe
On Fri, Jul 3, 2009 at 6:26 AM, Mike
Cardwellspamassassin-us...@lists.grepular.com wrote:
 Aaron Wolfe wrote:

 I think the point was that the URIBL's are never going to be listing
 these domains, so why waste time looking them up

 m...@haven:~$ host constantcontact.com.multi.uribl.com
 constantcontact.com.multi.uribl.com     A       127.0.0.4
 m...@haven:~$


to be clear, I was explaining why the entry exists, not whether or not
it should be there.  still don't think there is any conspiracy here,
probably just an outdated or inaccurate assumption.


 --
 Mike Cardwell - IT Consultant and LAMP developer
 Cardwell IT Ltd. (UK Reg'd Company #06920226) http://cardwellit.com/



Re: constantcontact.com

2009-07-03 Thread Aaron Wolfe
On Fri, Jul 3, 2009 at 10:15 AM, Michael Grantmichael.gr...@gmail.com wrote:
 In defense of Constant Contact, they are in the business of sending
 out mailings for people, they are not themselves spammers.  They
 perform a service and they do it as best they can given the
 circumstances in which they work.


arms dealers don't cause war, but they sure profit from it.  esps by
nature have a sketchy business model with a clear monetary incentive
to allow as much mail to flow as they can get away with.  whether or
not they are the source of the spam is irrelevant, they are enabling
it and they are profiting from it.  there might be some good people
with good intentions somewhere in the organization, but its just a
dirty business.

 I have used them to send out mail to mailing lists of a non-profit
 organization that I help and also used it during the previous
 presidential campaign.  All the addresses were collected via people
 coming to the website, typing in their address, getting an email from
 constant contact and clicking on a yes, I want to sign up for this
 list link.

 All mail was sent out with a return address that went to a real
 person, and every message contained a link to get off the mailing.
 This is required by Constant Contact.

 Secondly, if you unsubscribe using the unsubscribe link, Constant
 Contact does not let that address be mailed to again unless it is
 re-opted in by signing up again and the person clicking on the opt-in
 link.

 Constant Contact keeps track of complaints and when it gets above
 something like one or two per thousand they cancel the account.

 If you are getting spam via them, you should send it to their abuse
 department.  They do take the reports seriously.


despite your personal experience, there is no shortage of
contradictory evidence.  as many have posted here and on other spam
related mailing lists (not sure if the old spam-l archives are still
available online, but cc was a subject of discussion there many
times).  lots of unwanted mail is coming from their systems.  i
regularly get complaints about mail from cc to the small network i
directly deal with (300 people).

 And by the way, from time to time I receive what surely looks like
 spam via Constant Contact.  I save all my mail.  I went back and
 searched and sure enough, it *was* something I signed up for but had
 completely forgotten.  A simple click of their unsubscribe link and no
 more of that.

 I would not personally give mail from Constant Contact a higher score
 just because it originated from there.  The likelihood is the message
 is ham, most likely the user forgot they opted in like I did, or perhaps
 someone is abusing Constant Comment.


abusing constant comment?  by helping them turn a profit?

the ratio of wanted/unwanted here doesn't seem to be very good.  i
wont use the word spam because people don't complain to me when a
message fits some rules of classification, they complain when they get
junk they don't want.  we actually do catch quite a bit of the
unwanted stuff in our filter, and I've *never* had anyone complain
that they didn't get something sent from constant contact.
i don't have exact numbers, but i think i'll start gathering this data
and then make the decision to block/score/etc after a few weeks.


 Michael Grant



Re: opinions on greylisting and others

2009-05-22 Thread Aaron Wolfe
On Fri, May 22, 2009 at 9:06 AM, McDonald, Dan
dan.mcdon...@austinenergy.com wrote:
 On Fri, 2009-05-22 at 14:14 +0200, Arvid Ephraim Picciani wrote:
 Greetings.
 I'm thinking of implementing:
 - greylisting

 very effective.  I cut my incoming mail by about 80% when we put up
 greylisting.  I'm using sqlgrey.

 - honeypots
 - rejecting broken HELO at smtp time  (such as  MUMS_XP_BOX)

 We had too many false-positives when I did that.  In particular,
 Exchange administrators seem to be completely incapable of setting the
 HELO name to something sane.


Although I would have agreed with that a couple of years ago, in the past
several months I have been scoring very high on retarded HELO names
with good results.  I think the tide is turning, more and more admins
finally getting a clue and more sites blocking or scoring highly on
misconfiguration.  I may start blocking at the MTA, the score I'm
giving is essentially a block already.
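For anyone who does take that step, a sketch of HELO blocking at the MTA, assuming a reasonably recent Postfix (older releases spell these restrictions reject_invalid_hostname / reject_non_fqdn_hostname):

    smtpd_helo_required = yes
    smtpd_helo_restrictions =
        permit_mynetworks,
        permit_sasl_authenticated,
        reject_invalid_helo_hostname,
        reject_non_fqdn_helo_hostname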

 - rejecting dynamic IPS at smtp time (PBL)
 - firewalling hosts  with 100% spam,  forever.


  I'm getting lots of it from zombies, so i wonder if it's legitimate to scan
 the sender before accepting. For example if it blocks icmp,  its very
 likely a home router.

 Any sane enterprise server administrator will block external icmp.
 I would recommend that you use p0f and a tool like BOTNET.pm to detect
 zombies - if they have messed up DNS and are running Windows, then it's
 a bot...

  But i have no data on that, and no clue.
 Spamhaus has only about half of the zombies. PBL even lacks half of the
 german dialup ISPs. i'm thinking i need my own techniques to build such
 lists.

 thanks.
 --
 Daniel J McDonald, CCIE # 2495, CISSP # 78281, CNX
 www.austinenergy.com



Re: one domain gets 99% of spam

2009-05-19 Thread Aaron Wolfe
On Wed, May 20, 2009 at 1:09 AM, Marc Perkel m...@perkel.com wrote:


 option8 wrote:

 on my small server setup, i host around 30 domains. between SA and a
 fairly
 aggressive exim setup, very little spam gets through to the end users.
 most
 of it doesn't even get far enough to hit my logs.

 however, one domain that i host gets constantly bombarded, and has since i
 took it over from another ISP a few years ago. most of these connections
 look like dictionary attacks (joe@, bill@, admin@, webmaster@, etc) or
 backscatter/bounces.

 at first, i thought it might be an attempt at a DOS on them (or me), since
 my traffic spiked right after i took over the domain, but it hasn't let
 up.
 is there any particular reason this might be happening to just this one
 domain?

 beyond that, is there any hope of this ever stopping? short of offloading
 their MX to gmail or something, i feel like i may be stuck with fending
 off
 a ton of spam for this one domain, while the rest only ever see a trickle.

 --option8.


it is common for one domain to get an order of magnitude more spam
than another that seems just like it.  like Marc said, it probably
won't stop.  low overhead techniques like greylisting or nolisting
can reduce the stress on your server quite a bit.  configuring your
mta to close connections after X errors will help with the dictionary
attacks, and you can combine that with fail2ban to go even further.
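A sketch of that combination, assuming Postfix and the stock postfix filter that ships with fail2ban (paths, limits and ban time are illustrative):

    # main.cf: hang up on clients that keep generating errors
    smtpd_soft_error_limit = 3
    smtpd_hard_error_limit = 5

    # /etc/fail2ban/jail.local: ban repeat offenders at the firewall
    [postfix]
    enabled  = yes
    port     = smtp
    filter   = postfix
    logpath  = /var/log/mail.log
    maxretry = 5
    bantime  = 3600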



 I have a few of those myself. And since I took over filtering it's down some
 but they still get a few hundred thousand spams a day. So - it's probably
 not going away.





Re: I want MORE SPAM - MORE SPAM

2009-05-18 Thread Aaron Wolfe
On Mon, May 18, 2009 at 11:36 AM, DAve dave.l...@pixelhammer.com wrote:
 Marc Perkel wrote:

 Hi Everyone,

 My blacklist hostkarma.junkemailfilter.com is rising in the charts. Here's
 a blacklist comparison chart.

 http://www.sdsc.edu/~jeff/spam/cbc.html


 Those results differ wildly with my stats over the past year. Barracuda
 throws far too many FP for me to use on the MTA, I have to use it in SA and
 let the better tests pull the score up to tagging levels. It does provide a
 good foundation for the score though.

 The invaluement lists are not even tested and URI is my single best URI test
 with a %98.3 hit rate and zero FP. I could survive with Invaluement's URI
 and URIBL_BLACK (%97.3, zero FP) and tag nearly all my URI spam. Invaluement
 SIP is not listed either and it fully accounts for over 40% of my MTA
 blocks, beating SpamHaus most of the time.


+1 for the invaluement lists.  they are excellent, sad that they
aren't listed in that comparison.  we seem to get better results with
barracuda than you've seen, many of our clients choose to use the
barracuda list to block.  we offer the hostkarma lists as well but
probably introduced them too soon, the FPs were high back when we
first offered it and most clients chose to score only if they use them
at all.  I am going to re-evaluate though, and maybe recommend to some
clients. I have heard that the FP rate has improved.

 While we are not a big deal in email, we do get 300k connections a day every
 day, approaching 600k when things get bad. We have a wide variety of clients
 from dialup/DSL users to corporate users. Our clients receive mail from
 Europe and the Pacific Rim regularly.

 I'm just sayin...

 DAve

 --
 Posterity, you will know how much it cost the present generation to
 preserve your freedom.  I hope you will make good use of it.  If you
 do not, I shall repent in heaven that ever I took half the pains to
 preserve it. John Quincy Adams

 http://appleseedinfo.org




Re: OpenDNS and Spamassassin

2009-04-02 Thread Aaron Wolfe
On Thu, Apr 2, 2009 at 8:32 PM, LuKreme krem...@kreme.com wrote:
 On 2-Apr-2009, at 15:56, Evan Platt wrote:

 I logged into our server, and saw the OpenDNS was resolving EVERYTHING -
 blah.blah , nothing.nothing, etc.

 This is not a OpenDNS problem, this is a problem with the know-nothing who
 set it up for their system.  I used OpenDNS for quite a while on my
 mailserver (several months) and had no such issue.

 Configure it right, and it works quite well, and it is VERY configurable.

 Sorry, OpenDNS had to go.


 Or, you know, configured correctly.

 Each of these is a configuration option:


Trusting a critical service required by your network to a third party
whose basic business model involves tampering with that service seems
irresponsible at best.

Sure, you can disable all these features now, but when the
XYZfreeDNS marketing guys push for the next Big Thing to be enabled by
default, it's now your Big Problem.

DNS matters. It needs to work correctly *and* fail correctly.  I'm not
saying OpenDNS has any bad intentions, but their motivation to change
DNS behavior is pretty clear.

If your mail just isn't important then maybe it's a neat thing, but
considering how easy it is to set up a working local DNS, I just don't
see the value.

-Aaron

 Allow users to create child networks

 Enable stats and logs

 Enable typo correction
        Exceptions for VPN users
        Enable filtering of .cm wildcard

 Block internal IP addresses

 Apply my shortcuts to this network
        Makes all your shortcuts work on this network, whether you're signed
 in or not.

 Enable OpenDNS proxy
        Routes certain address bar requests through a simple proxy, ensuring
 that your shortcuts and other OpenDNS features always work. For more
 details, including potential privacy issues you should be aware of, read our
 KB article.

 Enable Botnet protection on this network
        Blocks infected computers on your network from connecting to botnet
 central controllers. At this time, this feature blocks the Conficker virus,
 and will be expanded to include others.

 --
 You and me
 Sunday driving
 Not arriving




Re: zen.spamhaus.org

2009-03-31 Thread Aaron Wolfe
On Tue, Mar 31, 2009 at 3:25 PM, Mark ad...@asarian-host.net wrote:
 -Original Message-
 From: Martin Hepworth [mailto:max...@gmail.com]
 Sent: dinsdag 31 maart 2009 20:56
 To: hlug090...@buzzhost.co.uk
 Cc: Rejaine Monteiro; Spamassassin list
 Subject: Re: zen.spamhaus.org

 Err no.

 spamhaus is great for low use. For high use they expect you to pay -
 see the TC's for use. Heck they gotta eat ya know.

 Yeah, how very unreasonable of them. :) Like with anything, if you want to
 make commercial use of (and off) it, just pay a fee.

 As for the barracuda rbl...well didn't add any value for me when I
 ran it for a couple of months. Scored spam with other tools and
 actually caught a few FP's which is kinda what i see in their pay
 for product at new place of work. Basically not worth the bother IMHO

 When someone tells me 'their' list is much more aggressive than spamhaus,
 my first reaction is not: Oh, coolie, more to block! More like: Another
 one of those overly aggressive blocklists that in its rampant 'Off with
 their heads' policy just renders itself pretty much useless. So, indeed,
 thanks, but no thanks.


Just my experience, but the barracuda list performs pretty well here
(we have just enough volume to be a paying subscriber to zen).  I
wouldn't call it more aggressive than zen necessarily.  They both have
an occasional FP, maybe slightly more from barracuda, but if your
scoring is good that almost never presents an issue.  Some of our
clients outright block using both.  I haven't had to deal with any
complaints due to either one in a very long time.


Re: HABEAS_ACCREDITED_COI

2009-03-17 Thread Aaron Wolfe
On Tue, Mar 17, 2009 at 1:42 AM, LuKreme krem...@kreme.com wrote:
 On 16-Mar-2009, at 16:40, Chris wrote:

 -8.0 HABEAS_ACCREDITED_COI  RBL: Habeas Accredited Confirmed Opt-In or
                           Better
                           [208.82.16.109 listed in


 I changed my HABEAS scores ages ago:

 score HABEAS_ACCREDITED_COI -1.0
 score HABEAS_ACCREDITED_SOI -0.5
 score HABEAS_CHECKED 0

 I'm seriously considering changing them to 1.0, 0.01, and 0, respectively.

 I seem to ONLY see the headers in spam messages. It's a shame the defaults
 in SA are still set absurd values.


Funny, I mentioned to Chris off list that I've been using positive
scores on all the Habeas accredited spam rules for quite some time
with good results.  Some of their junk is pure spam, more is the type
of shady commercial junk that technically might not be spam but is
still crap nobody wants, at least nobody here :)

I completely agree that SA should not be giving such high negative
scores to Habeas.  There are a lot of folks who run the defaults, and
they will get false negatives simply from these rules.



 --
 Major Strasser has been shot. Round up the usual suspects.




Re: HABEAS_ACCREDITED_COI

2009-03-17 Thread Aaron Wolfe
2009/3/17 Matus UHLAR - fantomas uh...@fantomas.sk:
  On 16-Mar-2009, at 16:40, Chris wrote:
  -8.0 HABEAS_ACCREDITED_COI  RBL: Habeas Accredited Confirmed Opt-In or
                            Better
                            [208.82.16.109 listed in

 On Tue, Mar 17, 2009 at 1:42 AM, LuKreme krem...@kreme.com wrote:
  I changed my HABEAS scores ages ago:
 
  score HABEAS_ACCREDITED_COI -1.0
  score HABEAS_ACCREDITED_SOI -0.5
  score HABEAS_CHECKED 0
 
  I'm seriously considering changing them to 1.0, 0.01, and 0, respectively.
 
  I seem to ONLY see the headers in spam messages. It's a shame the defaults
  in SA are still set absurd values.

 On 17.03.09 02:25, Aaron Wolfe wrote:
 Funny, I mentioned to Chris off list that I've been using positive
 scores on all the Habeas accredited spam rules for quite some time
 with good results.  Some of their junk is pure spam, more is the type
 of shady commerical junk that technically might not be spam but is
 still crap nobody wants, at least nobody here :)

 I completely agree that SA should not be giving such high negative
 scores to Habeas.  There are a lot of folks who run the defaults, and
 they will get false negatives simply from these rules.

 I still think it's much better to report them to habeas for spamming...
 COI means confirmed opt-in. If you did subscribe, it is NOT spam whether
 you want it or not. Isn't it good to have someone who will sue spammers?


The what is spam/not spam debate has been done many times before.
For me, it's spam if my users complain that they are getting spam,
and thats the only spam I have to worry about :)

Besides the questionable way some marketers use COI (or, the way users
don't seem to like getting what they asked for, depending on your
viewpoint), the specific problem with the Habeas rules in SA is that
the high scores sort of assume Habeas is correct about a message being
COI etc, when in fact Habeas is often wrong.  The scores are just too
trusting.

Reporting a message is fine but it's not better than preventing the
spam in the first place, is it?
Best to tune the rules down and also report mistakes.

-Aaron

 --
 Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
 The 3 biggets disasters: Hiroshima 45, Tschernobyl 86, Windows 95



Re: automated reporting plugin (was Re: HABEAS_ACCREDITED_COI)

2009-03-17 Thread Aaron Wolfe
On Tue, Mar 17, 2009 at 5:18 PM, J.D. Falk
jdfalk-li...@cybernothing.org wrote:
 RobertH wrote:

 there is bound to be some way that those (of us or the SA Team) that want
 to
 participate, can help you and help us at the same time.

 some type of automated plugin that needs to be created that reports to us
 and returnpath info relevant to stopping the bad eggs yet allowing the
 good
 eggs!

 something that does not toss internal security in the trash...

 We already receive copies of user complaints from most of the ISPs who
  utilize our data (and some who don't).  We also receive aggregate statistics
 from an even wider network.  I'd love to find a way to participate with the
 SA community in a similar way.

 We've been scratching our heads over how to implement it, though.  What do
 you have in mind?


Maia Mailguard is a neat project that uses SA/amavisd to provide users
with a web based quarantine.  When a user indicates that a message is
spam, the system can automatically submit the message to Razor, Pyzor,
DCC, and SpamCop.  I don't know the details of how these mechanisms
work, but surely you could emulate them or use their reporting systems
as an example.  The code is open source.
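For reference, the per-message submission those systems wrap is roughly what the stock command-line tool already does; a sketch (the message path is hypothetical):

    # hand a user-flagged message to whatever reporting backends are
    # enabled locally (Razor/Pyzor/DCC/SpamCop, per the SA configuration)
    spamassassin --report < /var/quarantine/msg-12345.eml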

Good luck,
Aaron

 --
 J.D. Falk
 Return Path Inc
 http://www.returnpath.net/



Re: How can this free MX backup service be exploited?

2009-01-21 Thread Aaron Wolfe
On Wed, Jan 21, 2009 at 7:54 PM, Duane Hill d.h...@yournetplus.com wrote:
 On Thu, 22 Jan 2009, Steve Freegard wrote:

  5)  Privacy concerns;  potentially a domain's entire mail stream for the
 last 5 days could be held on your mail spool.  This has obvious privacy
 implications for most people particularly as there is no contract
 between you and the end-user.  How does the end-user know that you've
 delivered it all?  Or that you haven't copied or read it?

 I would agree. I know for a fact our CEO would want a contract and an NDA
 notarized and signed. Not to mention, I have stricken down any decision made
 that would pass our email stream to/through any third party service(s).


you decided not to use the internet, then?


Re: workaround for DNS search service

2008-12-29 Thread Aaron Wolfe
On Mon, Dec 29, 2008 at 9:14 AM, Arvid Ephraim Picciani
a...@asgaartech.com wrote:
By any chance, didn't your ISP start providing search service for any
web name that does not exist?

 btw,  whats the workaround for this? opendns  didnt work for me as they have
 similar  features.

supposedly these can be disabled using their web site.  very annoying though.

 do you simply  query the bl's  dns service directly?


run a local caching server, then you can get results straight from
authoritative servers and also reduce number of requests/speed up rbl
lookups.
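A caching-only resolver really is minimal to set up; for example, with BIND something like this in named.conf, plus pointing resolv.conf at localhost, is enough (a sketch only - any caching resolver will do):

    options {
        listen-on { 127.0.0.1; };
        allow-query { localhost; };
        recursion yes;
    };

    # /etc/resolv.conf
    nameserver 127.0.0.1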

 --
 best regards
 Arvid Ephraim Picciani
 Asgaard Technologies
 --
 The software engineer tribe.





Re: Bug in iXhash plugin - fixed version available

2008-12-03 Thread Aaron Wolfe
On Wed, Dec 3, 2008 at 1:57 PM, Arthur Dent [EMAIL PROTECTED] wrote:
 On Wed, Dec 03, 2008 at 01:08:32PM -0500, Rose, Bobby wrote:
 I just tried again with this 1.5.2 version and on one box it times out querying 
 and on another it seems to run but no hits again.  Both my boxes are SA3.2.5.

 Does anyone have a message that is known to have hashes on any of iXhash 
 hosts?

 Well actually the one I posted earlier in this thread used to hit
 iXhash; failed on v. 1.5 and v. 1.5.1 but, I'm pleased to say hits (for
 me at least) with the latest version (v. 1.5.2).

 You can try it here:

 http://pastebin.ca/1269211

 And I would like to take this opportunity to thank Dirk and those who
 helped him (Karsten et al) to track down the bug for all their efforts
 to get this great little plugin working to the max once more.


As of 1.5.2 we are getting hits again here.  Thanks to everyone who
helped with this!


 Thanks!

 AD





Re: I'm thinking about offering a free MX backup service

2008-12-02 Thread Aaron Wolfe
On Tue, Dec 2, 2008 at 2:51 PM, Marc Perkel [EMAIL PROTECTED] wrote:
 Tell me if you think this is a good idea.

 I'm thinking about offering a free MX backup service that people without
 backup servers can use. I'm thinking about doing this as a way of promoting
 my spam filtering business because users will see a significant reduction in
 spam and might want to upgrade. The way it would work is that someone with
 just one MX record can add 2 more MX records.

 Anyhow - it's just something I'm thinking about. Want to get some feedback
 as to if I should do it. Anyone want to tell me why I'm crazy?



Without accurate user lists, backup mx will be a source of massive
backscatter as messages are accepted for invalid users and then
rejected by the primary.  Getting user lists synced from the primary
sites will be difficult given all the different mail servers and
different ways they store their user info.  You could try to use
callouts to the primary to establish whether a user account is valid
before accepting the message, but then you aren't much of a backup when
the primary goes down.   It isn't crazy but it is not trivial to do
backup mx well.

-Aaron


Re: I'm thinking about offering a free MX backup service

2008-12-02 Thread Aaron Wolfe
On Tue, Dec 2, 2008 at 3:59 PM, Marc Perkel [EMAIL PROTECTED] wrote:


 Rick Macdougall wrote:

 Marc Perkel wrote:



 Thanks Aaron, that is a good point. But I'm running Exim and I think I
 can code it so that it will not generate backscatter. I'll have to design
 that in up front.



 Interesting, how would you do that without dropping email (which is BAD).

 Rick

 If the recipient is bad then no one would have got the email anyway. But
 there wouldn't be a notification to the sender. I suppose I could make it
 smarter so that if the message is blessed in one of my many white lists then
 I would do a bounce message, otherwise not.

 OTOH, if someone is rarely down then the backscatter would probably be
 minimal. This will probably be something to experiment with.


no, you will get massive amounts of mail sent to invalid user accounts
regardless of whether the primary is online or not.  in fact, when the
primary is online the ratio of mail for invalid vs valid will be
higher.  if you accept these messages, you are responsible to send NDR
when the primary rejects them.  if you don't do this, you break rfc
compliance.  yes, it is usually a waste of everyone's time, but the
one time the CEO of your client's #1 partner mistypes an email address
on a critical message and doesn't get any error back, things go to
crap.  be careful with this.  consider a cached callout scheme as
mentioned in an earlier post.  it isn't perfect but it is probably good
enough and remains rfc compliant.
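A sketch of such a cached callout on the backup MX, assuming Postfix (the domain is a placeholder; the probe cache is what keeps this from hammering the primary with repeated questions about the same address):

    relay_domains = example.org
    smtpd_recipient_restrictions =
        permit_mynetworks,
        reject_unauth_destination,
        reject_unverified_recipient
    # cache verification probe results (default location in newer Postfix)
    address_verify_map = btree:$data_directory/verify_cache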


Re: New free blacklist: BRBL - Barracuda Reputation Block List

2008-09-24 Thread Aaron Wolfe
On Wed, Sep 24, 2008 at 5:41 PM,  [EMAIL PROTECTED] wrote:
 On Tue, 23 Sep 2008, McDonald, Dan wrote:

 On Tue, 2008-09-23 at 17:21 -0400, [EMAIL PROTECTED] wrote:

 Getting back to the subject...can anyone enlighten us to the efficacy of
 this DNSBL?  For example, how does it compare to zen.spamhaus.org,

 It hits significantly more spam than zen.spamhaus.org

 On my primary mx, today I had 94 mails that hit a zen list but not brbl,
 591 that hit a zen list and brbl, and 8042 that hit brbl but not zen.

 I am checking -lastexternal addresses only.

 Looking through the 2400 or so domains that were marked as spam, I
 didn't see any obvious false positives.  Looking through the 631 domains
 that did not have enough points to be classed as spam, I didn't see more
 than one or two that shouldn't have been blocked.  granted, i did not
 look through the emails themselves, just the domain name.

 I'm currently scoring it 1.0, and might raise it up to 2.0 in a couple
 of days if nobody starts squawking

 I was actually hoping to use it like I use zen.spamhaus.org and
 dul.sorbs.net and just reject emails listed on those.  It is very rare that
 I get a false positive from either, but their efficacy isn't what it used to
 be, either.  So, I just configured my tcpserver to invoke rblsmtpd using
 b.barracudacentral.org as well as the other two, and after only a few
 seconds, the difference was astounding.  Here is perhaps 2 minutes worth of
 stats:

 $ grep -c sorbs bl_stats
 9

 $ grep -c spamh bl_stats
 228

 $ grep -c barracud bl_stats
 1321

 I thought maybe something was broken and it was rejecting everything, but
 that doesn't appear to be the case.

 However, it may take a day or more to find out if the false positive ratio
 of this dnsbl is too high to use it like this.

 Has anyone else done this?  If so, what does the FP situation look like?

We've been testing here for over a week.  The FP rate is very low but
higher than that of zen or invaluement (which have practically none).
I'd guess you might be able to use it as a blocklist depending on your
site and users' expectations.  If you want a set-it-and-forget-it setup,
probably just add a decent score in SA.
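A local.cf sketch of the score-instead-of-block approach (the rule name and score are illustrative; the -lastexternal set limits the check to the last external relay, as discussed above):

    header   RCVD_IN_BRBL  eval:check_rbl('brbl-lastexternal', 'b.barracudacentral.org.')
    describe RCVD_IN_BRBL  Relay listed in the Barracuda Reputation Block List
    tflags   RCVD_IN_BRBL  net
    score    RCVD_IN_BRBL  1.5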



 James Smallacombe PlantageNet, Inc. CEO and Janitor
 [EMAIL PROTECTED] 
 http://3.am
 =



Re: MagicSpam

2008-09-11 Thread Aaron Wolfe
On Thu, Sep 11, 2008 at 1:11 PM,  [EMAIL PROTECTED] wrote:
 Does anybody have any experience with this product?


It appears *no one* has any experience with it... Google finds only 2
links and they are on the company's own homepage.

 My company wants to replace SpamAssassin with this product, due to
 SpamAssassin not being up to par with other products.

What is the evidence for this statement?  I move customers from
commercial solutions to my company's SA based filtering regularly and
they are typically very impressed with what we can do for them with
Spamassassin.


 My argument is that people we give SpamAssassin to have no clue how to use
 it and what it's designed to do, therefore they think it sucks.


Why would your users even need to know you are using SA?  How are they
supposed to use it?  Just configure it to make spam go away and they
should be OK with that.  You can set up some sort of quarantine or
tagging system but people generally aren't going to use it much.





From what I can find of the company behind this Magic thing, it looks
like their products are repackaged open source software.  (Their
MagicMail product appears to be qmail).  There's a pretty decent
chance they are selling you Spamassassin anyway :)


Re: senderbase rating - how to appeal?

2008-09-05 Thread Aaron Wolfe
On Fri, Sep 5, 2008 at 5:45 PM, Greg Troxel [EMAIL PROTECTED] wrote:

 Michele Neylon :: Blacknight [EMAIL PROTECTED] writes:

 Does anyone know how you can appeal or query a senderbase rating?

 I resisted answering at first, because I'm perhaps a bit too cynical:

  The way to appeal is to file a bug with spamassassin saying that
  senderbase is bogus and ask that any senderbase rules in SA be
  dropped.

 I don't know that spamassassin pays attention to senderbase; if not this
 probably won't work.  I say this, mostly joking, from my experience
 with habeas.  I have gotten spam on multiple occasions from senders that
 are HABEAS_ACCREDITED_SOI, and complained to habeas - with absolutely
 zero useful response.  I filed a bug:

  https://issues.apache.org/SpamAssassin/show_bug.cgi?id=5902

 and soon heard from habeas, who claimed that they revoked the listing of
 that sender.

 I then got more spam from a different habeas-accredited spammer, and
 complained privately to [EMAIL PROTECTED], and heard nothing back.

 So the only rational conclusion seems to be that habeas accreditation is
 bogus, and they only respond to public pressure.  Perhaps that's not
 true and I've been unlucky, but that's how it feels from my end.


After seeing similar spam from accredited senders, we disabled any
score from the habeas rules long ago and have yet to notice any
increase in FP (we have ~5000 fairly sensitive users who definitely
let us know when things don't work as they want them to).  I've know
of other sites that have disabled the habeas rules/score as well with
similar results.   IMHO, they are not worth scoring on since they
obviously do accredit sites that send UCE.Does anyone see any
benefit from using habeus?  Does it outweigh the spam that gets
through because of them?


Re: Handy script for generating /etc/resolv.conf

2008-09-01 Thread Aaron Wolfe
On Mon, Sep 1, 2008 at 3:43 AM, Marc Perkel [EMAIL PROTECTED] wrote:


 Aaron Wolfe wrote:

 On Sun, Aug 31, 2008 at 10:59 PM, RobertH [EMAIL PROTECTED] wrote:


 It was explained somewhere earlier in the thread that he sometimes has
 to reboot his central dns servers and he apparently doesn't run local
 caching servers on the individual MX/SA nodes.

 I have to say (as others have mentioned in this thread and elsewhere)
 that running a local caching nameserver on any busy MX or SA server
 seems to solve this issue quite well without needing any scripts.  If
 you are rsyncing any zones from zen, etc. having the zone served up
 locally is awesome for quick lookups too.

 -Aaron



 Maybe my situation is unique. I'm running about 35 virtual servers and
 rather than run named in each one I have 3 virtual servers dedicated to
 doing caching DNS. One main one with 4 gigs allocated so that it caches for
 all of them and 2 backups in case something happens to the main one.




I think it's just a memory use vs. performance thing..  running a
nameserver in each instance might give you better performance and
stability, but of course it will use more ram.  Really though I don't
think named in a caching configuration is too bad of a pig on ram, and
there are high performance/low ram alternatives that just do caching.
I have a caching name server running on my home router (a linksys
thing that runs linux) and it has only 16MB ram total for the whole
system.  The nameserver used there is called dnsmasq and it appears to
use about 512k of ram in caching-only mode.  Might be something to
consider since even with your script running every minute, you still
have (up to 60 sec)+(the time the script takes to run)+(the time it
takes everything to use the new resolv.conf) seconds of downtime that
could be avoided.


Re: Handy script for generating /etc/resolv.conf

2008-08-31 Thread Aaron Wolfe
On Sun, Aug 31, 2008 at 10:59 PM, RobertH [EMAIL PROTECTED] wrote:


 Well, the code works for me. If someone has a better solution I'll
 switch to yours. I just created it because I needed it and thought I'd
 share it with others who might need it. But if any of you want to
 improve it or replace it with something better I'm always looking for
 new tricks.

 Marc and list...

 Im confused...

 Why are you losing DNS in the first place where you would even have to worry
 about this?


It was explained somewhere earlier in the thread that he sometimes has
to reboot his central dns servers and he apparently doesn't run local
caching servers on the individual MX/SA nodes.

I have to say (as others have mentioned in this thread and elsewhere)
that running a local caching nameserver on any busy MX or SA server
seems to solve this issue quite well without needing any scripts.  If
you are rsyncing any zones from zen, etc. having the zone served up
locally is awesome for quick lookups too.

-Aaron


Re: Blacklist Mining Project - Project Tarbaby

2008-08-26 Thread Aaron Wolfe
On Tue, Aug 26, 2008 at 12:26 PM, Marc Perkel [EMAIL PROTECTED] wrote:


 Ken A wrote:

 Ralf Hildebrandt wrote:

 * Ken A [EMAIL PROTECTED]:

 How? He tempfails all mails.

 Are you asking how sending your customer, or company email off someplace
  you don't control might be a security risk?

 It's in no way more dangerous than using Postini...


 Have you compared Postini's contract to the one you get from Marc?

 Ummm.. just in case you have no luck finding that, what about a Privacy
 policy?

 See the link at bottom of
 http://wiki.junkemailfilter.com/index.php/Project_tarbaby
 for the Privacy Policy. It's currently a blank page. That doesn't give me
 a secure feeling..

 Ken


 Well, I'm definitely a privacy advocate as a former EFF employee but
 considering that we never receive any of the email (451 response before data
 is sent) there's no information to disclose. We aren't receiving the body of
 the email. Generally all we see is spam bot attempts and harvest those IPs
 for the blacklist which has now grown to 2 million.

You continue to miss the point, or maybe you just don't want to understand it.

Sending my client's email to your servers is irresponsible at best and
possibly even a violation of contract or illegal.
It does not matter that you claim to always give a temp fail.  It does
not matter that you are a Real Nice Guy.

What if your servers become compromised?
What if your DNS is hijacked?
What if your software giving the temp fail doesn't work properly?
What if a broken MTA sends the message even after you temp fail?
What if you turn into a Real Bad Guy?

There is also the issue that even if you do temp fail, even the
knowledge of which servers are trying to connect to my client's
domains may not be something they want you to gather.

As many have stated: if you are truly interested in this, get a client
together, preferably open source, that sends only the necessary data
to your site.

-Aaron


Re: Blacklist Mining Project - Project Tarbaby

2008-08-25 Thread Aaron Wolfe
On Mon, Aug 25, 2008 at 3:13 PM, Jean-Paul Natola
[EMAIL PROTECTED] wrote:

 Hi everyone,

 I'm launching a free spam reduction service to help build up my
 blacklists. It involves adding a fake high numbered MX record to your
 existing MX list that points to one of our servers. We always return a
 451 error but we have a very good way of detecting virus infected spam
 bots on the first attempt. So this helps us and your spam bot traffic is
 significantly reduced, and this reduction will reduce the Spamassassin
 load because there will be less spam to process.

 We are also looking for dead domains or domains with no real email
 addresses that get a lot of spam to point at the tarbaby server. So far
 this is working really well. I'm tracking just over 2 million IP
 addresses of virus infected computers.

 Additionally you can use our blacklist to further reduce your spam and
 the blacklist will be tuned to the spam bots that are spamming your
 domain. Looking for some volunteers to use this and see how it works for
 you. Here's the details.

 http://wiki.junkemailfilter.com/index.php/Project_tarbaby

 Definitely looking for feedback from people who try it out.



 Is it just me, or am I having déjà vu?  I could swear I have read this
 message before -


This is at least the third time.  There was at least one response thread
discussing why most people are not interested in adding MX records
that direct their mail to someone else.

-Aaron


sa-update, dostech, / RHEL5 question

2008-06-06 Thread Aaron Bennett

Hi,

I'm in the process of converting to sa-update on rhel5, spamassassin 
3.2.4, to replace a rules_du_jour installation.  I'm trying to use the 
dostech sa-update channels.


Ultimately I'm looking to use a channel file, but for now I'm trying to 
get just one channel to work.  I'm getting this error when I run with 
debugging:



[20790] dbg: dns: query failed: 
4.2.3.72_sare_bml_post25x.cf.sare.sa-update.dostech.net = NOERROR



Thanks for any suggestions

- Aaron Bennett


Here's the complete output of the sa-update:


[EMAIL PROTECTED] ~]#  sa-update --channel 
72_sare_bml_post25x.cf.sare.sa-update.dostech.net -D --gpgkey 856AA88A

[20790] dbg: logger: adding facilities: all
[20790] dbg: logger: logging level is DBG
[20790] dbg: generic: SpamAssassin version 3.2.4
[20790] dbg: config: score set 0 chosen.
[20790] dbg: dns: no ipv6
[20790] dbg: dns: is Net::DNS::Resolver available? yes
[20790] dbg: dns: Net::DNS version: 0.63
[20790] dbg: generic: sa-update version svn607589
[20790] dbg: generic: using update directory: /var/lib/spamassassin/3.002004
[20790] dbg: diag: perl platform: 5.008008 linux
[20790] dbg: diag: module installed: Digest::SHA1, version 2.11
[20790] dbg: diag: module installed: HTML::Parser, version 3.56
[20790] dbg: diag: module installed: Net::DNS, version 0.63
[20790] dbg: diag: module installed: MIME::Base64, version 3.07
[20790] dbg: diag: module installed: DB_File, version 1.814
[20790] dbg: diag: module installed: Net::SMTP, version 2.29
[20790] dbg: diag: module not installed: Mail::SPF ('require' failed)
[20790] dbg: diag: module installed: Mail::SPF::Query, version 1.999001
[20790] dbg: diag: module installed: IP::Country::Fast, version 604.001
[20790] dbg: diag: module not installed: Razor2::Client::Agent 
('require' failed)

[20790] dbg: diag: module not installed: Net::Ident ('require' failed)
[20790] dbg: diag: module not installed: IO::Socket::INET6 ('require' 
failed)

[20790] dbg: diag: module installed: IO::Socket::SSL, version 1.13
[20790] dbg: diag: module installed: Compress::Zlib, version 2.01
[20790] dbg: diag: module installed: Time::HiRes, version 1.86
[20790] dbg: diag: module installed: Mail::DomainKeys, version 1.0
[20790] dbg: diag: module not installed: Mail::DKIM ('require' failed)
[20790] dbg: diag: module installed: DBI, version 1.604
[20790] dbg: diag: module installed: Getopt::Long, version 2.35
[20790] dbg: diag: module installed: LWP::UserAgent, version 2.033
[20790] dbg: diag: module installed: HTTP::Date, version 1.47
[20790] dbg: diag: module installed: Archive::Tar, version 1.38
[20790] dbg: diag: module installed: IO::Zlib, version 1.09
[20790] dbg: diag: module not installed: Encode::Detect ('require' failed)
[20790] dbg: gpg: adding key id 856AA88A
[20790] dbg: gpg: Searching for 'gpg'
[20790] dbg: util: current PATH is: 
/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

[20790] dbg: util: executable for gpg was found at /usr/bin/gpg
[20790] dbg: gpg: found /usr/bin/gpg
[20790] dbg: gpg: release trusted key id list: 
5E541DC959CB8BAC7C78DFDC4056A61A5244EC45 
26C900A46DD40CD5AD24F6D7DEE01987265FA05B 
0C2B1D7175B852C64B3CDC716C55397824F434CE 856AA88A
[20790] dbg: channel: attempting channel 
72_sare_bml_post25x.cf.sare.sa-update.dostech.net
[20790] dbg: channel: update directory 
/var/lib/spamassassin/3.002004/72_sare_bml_post25x_cf_sare_sa-update_dostech_net
[20790] dbg: channel: channel cf file 
/var/lib/spamassassin/3.002004/72_sare_bml_post25x_cf_sare_sa-update_dostech_net.cf
[20790] dbg: channel: channel pre file 
/var/lib/spamassassin/3.002004/72_sare_bml_post25x_cf_sare_sa-update_dostech_net.pre
[20790] dbg: dns: query failed: 
4.2.3.72_sare_bml_post25x.cf.sare.sa-update.dostech.net = NOERROR

[20790] dbg: channel: no updates available, skipping channel
[20790] dbg: diag: updates complete, exiting with code 1
[EMAIL PROTECTED] ~]#
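For the channel-file approach mentioned above, the file is just one channel per line, with any non-default signing keys imported via --gpgkey; a sketch (the path is arbitrary):

    # /etc/mail/spamassassin/sa-update-channels.txt
    updates.spamassassin.org
    72_sare_bml_post25x.cf.sare.sa-update.dostech.net

    sa-update --channelfile /etc/mail/spamassassin/sa-update-channels.txt \
              --gpgkey 856AA88A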



Re: reject vs. delete

2008-05-23 Thread Aaron Wolfe
On Fri, May 23, 2008 at 3:00 PM, Jared Johnson [EMAIL PROTECTED] wrote:

 Hi,

 The product I've been working with allows the user to set Rejection and
 Deletion thresholds, at which a message identified as spam will be rejected
 with 550 - Message is Spam etc., or accepted with 250 OK but dropped on
 the floor, respectively.  Historically it has been believed that if we have
 a high enough confidence that a message is spam, it is advantageous to
 pretend we have accepted the message in order to avoid allowing spammers to
 know whether their methods are working.  I have not verified anywhere that
 this practice really does have a negative impact on spammers.  This would
 especially be invalidated if most of the rest of the spam filtering world
 does not make use of 'delete' and simply issues rejections -- in that case,
 if the spammers don't get the information from me, they'll get it from the
 next guy.

 I do know that having a delete threshold occasionally causes false
 positives to go undetected by end users.  That is a bit of a disadvantage.
  The suggestion has also been raised that claiming to accept spam rather
 than rejecting it might invite spammers to send more spam your way.

 Does anyone have any knowledge or opinions on these matters?  Does
 pretending to accept a message contribute to the fight against spam in
 some way?  Or does it invite more spam?  Is it worth it?



I prefer to follow the spirit if not the letter of the RFCs.  If I am not
going to take responsibility for a message, I reject it.

I do accept some things and quarantine them rather than put them into a
user's mailbox, but I never just throw anything away after saying I will
deliver it.

There are plenty of sites that do silently throw away mail, and plenty that
will reject.  unless you are a *really* big site I really don't think
spammers are going to care what you do, if they notice at all.  I'd worry
more about the legitimate users and what happens to their mail in a false
positive situation.

-Aaron




 Jared Johnson
 Software Developer and Support Engineer
 Network Management Group, Inc.
 620-664-6000 x118

 --
 Inbound and outbound email scanned for spam and viruses by the

 DoubleCheck Email Manager: http://www.doublecheckemail.com



VBounce ruleset

2008-05-14 Thread Aaron Bennett

Hi,

I'm giving some thought to deploying the VBounce ruleset into an existing 
SA 3.1.9+Maia Mailguard / 5,000 user email environment.  It makes good 
sense; the only thing that seems off is the scoring.  As I see it, none 
of the rules score greater than 0.1.  It's hard to see how that's going 
to catch much of any spam -- even if BOUNCE_MESSAGE, CRBOUNCE_MESSAGE, 
and VBOUNCE_MESSAGE all hit together it wouldn't score more than 0.3. 

My question is to people who've been using the rules in a real 
production environment -- do you see them working with the default 
scores, or have you tweaked them at all? 


Best,

Aaron Bennett


Re: VBounce ruleset

2008-05-14 Thread Aaron Bennett

Karsten Bräckelmann wrote:


Please check the recent archives for threads
about the VBounce plugin or backscatter.

  
I apologize for not doing that... however, had I, I would have still 
asked the question because the advice given is not suitable for an 
enterprise deployment:

# If you use this, set up procmail or your mail app to spot the
# ANY_BOUNCE_MESSAGE rule hits in the X-Spam-Status line, and move
# messages that match that to a 'vbounce' folder.

  
If you read further you'll see that you didn't answer my original 
question
  

My question is to people who've been using the rules in a real
production environment -- do you see them working with the default
scores, or have you tweaked them at all?



  


Using procmail or a client side filter to file spam based on an 
X-Spam-Status line is not appropriate for a large, or even moderately 
large, end-user focused deployment -- that's why I asked if others are 
altering the default scores.


Again, a thousand apologies for making you repeat yourself.

That all being said...

Is anyone using these rules for spam detection?  If so, how have you 
been scoring them?  I'm glad to have a confirmation that 0.1 is 
obviously not enough but I'm curious how others are scoring these rules; 
given a general spam target of 5.  I'm thinking of scoring in the range 
of 1.5 - 2...
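For what it's worth, the override itself is just a few local.cf lines; the numbers below are the 1.5 - 2 range being considered here, not recommendations:

    score BOUNCE_MESSAGE      1.5
    score CRBOUNCE_MESSAGE    1.5
    score VBOUNCE_MESSAGE     1.5
    # or key everything off the aggregate rule instead:
    score ANY_BOUNCE_MESSAGE  2.0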


Best,

Aaron Bennett


Re: Experimental - use my server for your high fake MX record

2008-05-07 Thread Aaron Wolfe
On Wed, May 7, 2008 at 5:11 PM, Marc Perkel [EMAIL PROTECTED] wrote:



 Randy Ramsdell wrote:

  DAve wrote:
 
   Marc Perkel wrote:
  
Looking for a few volunteers who want to reduce their spambot spam
and at the same time help me track spambots for my black list. This is 
free
and mutual benefit. I (junkemailfilter.com) want to be your highest
numbered fake MX record. Here's how you would configure your domain:
   
  
   A generous offer and an admirable effort. But if you think I or my
   clients are going to route mail to your servers you are mistaken. Even if 
   I
   knew you personally, I don't think ethics or common sense would allow me 
   to
   do so.
  
   DAve
  
  Not taking a position on this, but isn't outsourcing spam filtering
  normal? Although I would think one would consider carefully about
  outsourcing their e-mail filtering, I don't think common sense or ethics have
  a whole lot to do with it.
 
 
 Thanks Randy,

 I am in the outsourced spam filtering business so this all seems natural
 to me. And I look at it as win/win. I get useful data, the person letting me
 use their high numbered MX record gets some spam reduction. I'm not
 interested in the content of the message or anything other than catching the
 IP addresses of virus infected spam bots. That's all I want to do.


If you just want IPs, maybe instead of running an SMTP service that 450s,
you would want to use a packet filter like iptables instead.  You could get
the IPs simply by what packets you saw come in to port 25 and no one would
have to worry you were stealing their mail.

-Aaron
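A sketch of that packet-filter approach, assuming iptables (it only records connecting IPs - it says nothing about how the sender reacts to a 451):

    # log the source of every inbound SMTP connection attempt, then drop it
    iptables -A INPUT -p tcp --dport 25 --syn -j LOG --log-prefix "mx-probe: "
    iptables -A INPUT -p tcp --dport 25 --syn -j DROP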


Re: Experimental - use my server for your high fake MX record

2008-05-07 Thread Aaron Wolfe
On Wed, May 7, 2008 at 5:44 PM, John Hardin [EMAIL PROTECTED] wrote:

 On Wed, 7 May 2008, Aaron Wolfe wrote:

  If you just want IPs, maybe instead of running an SMTP service that 450s,
  you would want to use a packet filter like iptables instead.  You could get
   the IPs simply by what packets you saw come in to port 25 and no one would
  have to worry you were stealing their mail.
 

 (1) Mark is trying to collect data on how the remote MTA behaves when
 presented with a 451 tmpfail result. A firewall rule can't do that.


From his message: I'm not interested in the content of the message or
anything other than catching the IP addresses of virus infected spam bots.
That's all I want to do.



 (2) If someone doesn't trust him when he says I won't accept or read your
 mail, why will they trust him if he says I have it firewalled off?


Because you can very easily check for yourself to see that this is true.

-Aaron


 --
  John Hardin KA7OHZ
 http://www.impsec.org/~jhardin/
  [EMAIL PROTECTED]FALaholic #11174 pgpk -a [EMAIL PROTECTED]
  key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
 ---
  Where We Want You To Go Today 07/05/07: Microsoft patents in-OS
  adware architecture incorporating spyware, profiling, competitor
  suppression and delivery confirmation (U.S. Patent #20070157227)
 ---
  Tomorrow: the 63rd anniversary of VE day



Re: AWL Database Cleanup

2008-04-28 Thread Aaron Bennett

listmail wrote:

I noticed that the AWL database was getting rather large, so I used the
check_whitelist script to remove the stale entries. While this seems to have
removed a lot of entries from the database, it did not reduce the database size.
  


If you are using MySQL with the Innodb backend, removing entries will 
not always shrink the database's physical file.


http://dev.mysql.com/doc/refman/5.0/en/innodb-file-defragmenting.html
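If the file really does need shrinking, rebuilding the table is the usual route; a sketch, assuming the default table name from the SA SQL schema and that innodb_file_per_table is in effect (otherwise the shared ibdata tablespace will not shrink regardless):

    -- recreates the table and returns the freed pages to the filesystem
    OPTIMIZE TABLE awl;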




Re: relays.ordb.org returning positive for everything?

2008-04-16 Thread Aaron Wolfe
On Wed, Apr 16, 2008 at 5:13 AM, Daniel Zaugg
[EMAIL PROTECTED] wrote:


  John Rudd wrote:
  
   the error is ignored since it has no practical consequence (except
   maybe in some unread log file)
  
   Unread/unchecked only by half-assed postmasters who aren't worth their
   salt, and should thus be fired.
  
  
   A decent postmaster at least generates summaries of traffic ...

 
   A postmaster who doesn't check their logs in any fashion deserves
   whatever they get.
  

  Clearly, only half-baked providers do the latter.
  

  Wow ! Aren't you guys proud to be postmasters !

  For me being a postmaster clearly is a chore (one of many) to which I devote
  an absolute minimum amount of my precious time.
  BTW firing me is not an option since I'm the CEO of my own (small) privately
  owned company :-)

  Expecting all postmasters to be highly skilled professionals who have studied
  all the ins and outs of their system is in my view an unrealistic approach in
  a world where almost every company has to have an email server.
  I gladly accept all the qualifications you made about being half baked
  not decent etc..

  Is there somewhere a list of all the still working RBL's or an easy way for
  an unskilled neophyte like me to check if an RBL is still valid?

Google should give you pointers to RBL information.  RBLs, like many
spam fighting tools, are not a set it and forget it type of thing.
A properly working mail server (very little spam, practically no false
positives, good uptime, etc) is not a trivial task.  Spam is a moving
target.  Your config may need frequent adjustment and a close eye on
the logs to keeps things working well.

Since you're not interested in committing time to this task, why not
use one of the many services that can do this work for you?  They are
generally inexpensive and easy to use.

-Aaron


Re: relays.ordb.org returning positive for everything?

2008-03-26 Thread Aaron Wolfe
On Wed, Mar 26, 2008 at 2:23 AM, Dave Funk [EMAIL PROTECTED] wrote:
 On Tue, 25 Mar 2008, John Rudd wrote:

  Aaron Wolfe wrote:
  On Tue, Mar 25, 2008 at 11:50 PM, John Rudd [EMAIL PROTECTED] wrote:
   A postmaster who doesn't check their logs in any fashion deserves
   whatever they get.  Including having all of the spam sail through
   unchecked.  Or having their domain actually RBL'ed (ie. routed to null)
   because they've continued to do queries well past any reasonable
   expiration period.
 
   Generate all misses:  doesn't penalize the good postmasters, don't care
   about the effect on the bad postmasters.
 
   Generate all hits: penalizes the good postmasters, don't care about the
   effect on the bad postmasters.
 
  I think you're mistaken.  Generating all hits does not penalize a
  good postmaster, because no good postmaster will be using an RBL
  that's been dead for over a year.
 
  That's only specific to this case.  I'm talking about from day 1 of the RBL
  going dark.

 But that's exactly what this whole thread is about, an RBL that wants to
 go dark but is still being hammered upon by unmaintained mail systems.

 This thread was started by a mail-admin-wannabe who was asking why his
 systems suddenly started rejecting all mail. That PROVES that he was still
 using the dead RBL and needed the clue-by-4 along side the head to wake
 him up.


Does anyone actually read the posts they are responding to here, or is
it normal to just assume everyone is an idiot and start typing?

I started this thread.   I was not at all confused about why some of
my clients were having problems (which I had helped them correct
before I posted).   I simply made the observation that the RBL's
behavior seemd to have changed, offered what I knew about it, and
asked if anyone else knew more about the situation.

Maybe my post was unclear?  Two people have written in to inform me
that the RBL is dead.  Strange, since I mentioned that in my post.
Now I am called a mail admin wannabe etc?

To put it simply: WTF?


 This is not the first time an expiring RBL resorted to that technique and
 probably will not be the last (sad to say).

 --
 Dave Funk  University of Iowa
 dbfunk (at) engineering.uiowa.eduCollege of Engineering
 319/335-5751   FAX: 319/384-0549   1256 Seamans Center
 Sys_admin/Postmaster/cell_adminIowa City, IA 52242-1527
 #include std_disclaimer.h
 Better is not better, 'standard' is better. B{



Re: relays.ordb.org returning positive for everything?

2008-03-26 Thread Aaron Wolfe
On Wed, Mar 26, 2008 at 12:10 PM, mouss [EMAIL PROTECTED] wrote:
 nws.charlie wrote:
   I guess I'm one of the mail admin wannabe's... not by choice, but by
   inheritance. It was turned over to me with almost zero training or
   experience. :(
   I found the initial posts clear, and had to wonder at some of the replies
   myself! Just wanted to say thanks for posting the answer before I posted 
 the
   question. It shortened my head-bang session.
  

  I guess the real problem comes from sites using appliances or commercial
  solutions that use DNSBLs without the admins really realizing what this
  means (some may even think the DNSBL is managed by the solution vendor).
  The lesson for such vendors is that they must use some mechanism to
  verify the integrity of their solutions (not everybody will update
  their solution, so the check must be enabled since day 1). for instance,
  a cron would query the DNSBLs for 127.0.0.1 or the like, and if it is
  listed, the DNSBL must be disabled.

  This can be done on home grown setups as well.




I assisted a site today that uses a Symantec antispam product on their
Exchange server.  They were blocking all mail with a very vague error,
571 message refused if i recall.

There was a feature called Block open relays or similar that made no
mention of using relays.ordb.org.  It just explained what an open
relay was and offered a check box to block them.  There was a separate
section for RBLs in another area of the interface.

Not sure if it's on by default, but if I was an admin using this
product, I'd probably check the box and assume Symantec was providing
the functionality.

It's a pretty safe bet that this feature queries relays.ordb.org,
since it never blocked mail before today and turning it off resolved
the problem.

I think you are right.  Vendors need to take responsibility here.  I
doubt many users of this product have any idea that they are querying
the RBL.
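
As an aside, mouss's cron idea quoted above is easy to sketch in a few
lines of Perl (this sketch is not from the original thread; the zone name
is only an example and it assumes Net::DNS is installed).  A healthy DNSBL
should never list 127.0.0.1, so an answer for that test address usually
means the zone is dead or wildcarding and should be pulled from the config:

#!/usr/bin/perl
# Sanity-check a DNSBL from cron: warn if it answers for the 127.0.0.1 test address.
use strict;
use warnings;
use Net::DNS;

my $zone  = 'relays.ordb.org';   # DNSBL zone to check (example only)
my $probe = '1.0.0.127';         # 127.0.0.1 reversed, per DNSBL query convention
my $res   = Net::DNS::Resolver->new;

my $reply = $res->query("$probe.$zone", 'A');
if ($reply and grep { $_->type eq 'A' } $reply->answer) {
    print "WARNING: $zone lists 127.0.0.1 - disable this DNSBL\n";
} else {
    print "$zone looks sane: no listing for 127.0.0.1\n";
}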


relays.ordb.org returning positive for everything?

2008-03-25 Thread Aaron Wolfe
It seems like relays.ordb.org (long dead) has started returning
positive answers for *all* IPs.
Today I've had several clients with old configs which still had this
RBL in them suddenly start blocking everything.
Is this a new thing?  Maybe the maintainers were tired of all the queries.


Re: relays.ordb.org returning positive for everything?

2008-03-25 Thread Aaron Wolfe
On Tue, Mar 25, 2008 at 3:23 PM, Per Jessen [EMAIL PROTECTED] wrote:

 Aaron Wolfe wrote:

   It seems like relays.ordb.org (long dead) has started returning
   positive answers for *all* IPs.
   Today I've had several clients with old configs which still had this
   RBL in them suddenly start blocking everything.
   Is this a new thing?  Maybe the maintainers were tired of all the
   queries.

  ordb has been off-line for quite some time:

  http://it.slashdot.org/article.pl?sid=06/12/18/154259from=rss


  /Per Jessen, Zürich


I'm aware of that, but I don't think the servers were giving positive
responses to all queries until recently.


Re: relays.ordb.org returning positive for everything?

2008-03-25 Thread Aaron Wolfe
On Tue, Mar 25, 2008 at 11:50 PM, John Rudd [EMAIL PROTECTED] wrote:
 mouss wrote:
   ajx wrote:
   It seems your logic is fundamentally flawed for several reasons.  By
   returning false positives, you're breaking mail gateways that use this
   once
   useful service. On the contrary, the best way would be to simply return a
   DNS host not found error or a connection refused message when a client
   tries
   to make contact to the service... This would reduce your bandwidth and
   not
   confuse and frustrate any users...
  
  
  
  
   It is your logic that is flawed.

   Returing an error brings nothing at
   all.

  Which is exactly why it is better.  It brings no false positives.
  That's infinitely better than returning all false positives.



   the error is ignored since it has no practical consequence (except
   maybe in some unread log file)

  Unread/unchecked only by half-assed postmasters who aren't worth their
  salt, and should thus be fired.


  A decent postmaster at least generates summaries of traffic (perhaps via
  cron), and will note that one of their DNSBLs dropped from lots of hits
  per day to no hits per day, wonders why, and looks into the problem.
   These responsible postmasters (who may have missed any notification of
  the impending death of the DNSBL they use) do not deserve to have the
  headaches caused by generating all false positives.  They will get
  angry calls from users whose mail was returned to the senders (many of
  whom will not resend, some of whom are even so lazy as to not even read
  bounce reports).  In short, returning an always block result from a
  deprecated DNSBL effectively, and inappropriately, penalizes the
  responsible postmasters who do in fact check the results, and
  investigate why things changed.


  A postmaster who doesn't check their logs in any fashion deserves
  whatever they get.  Including having all of the spam sail through
  unchecked.  Or having their domain actually RBL'ed (ie. routed to null)
  because they've continued to do queries well past any reasonable
  expiration period.


  Generate all misses:  doesn't penalize the good postmasters, don't care
  about the effect on the bad postmasters.

  Generate all hits: penalizes the good postmasters, don't care about the
  effect on the bad postmasters.

I think you're mistaken.  Generating all hits does not penalize a
good postmaster, because no good postmaster will be using an RBL
that's been dead for over a year.   It has no effect on good
postmasters.  Generating all misses penalizes the maintainers who were
nice enough to provide the list while it was active, because bad
postmasters will *never* stop pounding their servers with queries.




  Clearly, only half-baked providers do the latter.



Re: New Postfix compatible BLACK LIST

2008-03-21 Thread Aaron Wolfe
On Fri, Mar 21, 2008 at 6:25 AM, Henrik K [EMAIL PROTECTED] wrote:
 On Wed, Feb 27, 2008 at 07:17:07AM -0800, Marc Perkel wrote:

  Hello Everyone,
  
   My hostkarma black/white/yellow lists were too complex to be accessed by
   Postfix. So I have created a Postfix compatible blacklist for those of
   you who want to bounce a lot of spam before routing it into SA.
  
   reject_rbl_client blacklist.junkemailfilter.com

  By the way, I've been trying this list in SA for a while.

  Not very flattering results. There are _many_ false positives. Some big
  local ISP/government/etc servers, mailing lists..

  Probably the data isn't collected widely enough. Many FPs are from Finnish
  servers. Perhaps the list uses mostly US data..



I've been testing the lists here for about a month.  While there are
certainly some FPs (do not use it as a blocklist!), I've been using it
to add a small amount to the spam score with decent results.  There
are a number of messages that get pushed over the threshold thanks to
hits on hostkarma.  I deal with US mail primarily, maybe that is the
difference.

-Aaron
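
For reference, "adding a small amount to the score" looks roughly like the
local.cf fragment below (a sketch, not from the original post; the rule
name is made up, and the 127.0.0.2 "black" return code matches the postfwd
example elsewhere in this archive but should be checked against the list's
own documentation):

header   RCVD_IN_HOSTKARMA_BL  eval:check_rbl('hkblack-lastexternal', 'hostkarma.junkemailfilter.com.', '127.0.0.2')
describe RCVD_IN_HOSTKARMA_BL  Listed as black by hostkarma (informational weight only)
tflags   RCVD_IN_HOSTKARMA_BL  net
score    RCVD_IN_HOSTKARMA_BL  1.0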


Re: How to report 120,000 spams

2008-03-09 Thread Aaron Wolfe
On Sun, Mar 9, 2008 at 8:53 PM, Tuc at T-B-O-H [EMAIL PROTECTED] wrote:
 
   Tuc at T-B-O-H.NET wrote:
I guess I'm still not being clear. There are 120K emails a day coming
to INVALID EMAIL ADDRESSES THAT NEVER EXISTED. Its not a case of a user 
 being
fickle, its a case that they are emailing addresses that NEVER EVER 
 ACTUALLY
EXISTED. About 1 every 3/4 of a second. So running them through ANYTHING 
 is
counterproductive since, at least in my eyes, if you try to email an 
 email
address that never existed... ITS SPAM. Its not things the user ever 
 sees/knows,
etc. I have in my sendmail virtusertable:
   
[EMAIL PROTECTED]   bingo
[EMAIL PROTECTED]  bango
[EMAIL PROTECTED]   bongo
[EMAIL PROTECTED]  irving
[EMAIL PROTECTED]   nobody
   
The user doesn't even SEE the emails, and processing what they 
 consider
spam I really don't care about. But getting 120K emails to *@ that are 
 absolutely
known spam... I would like to help the community out by reporting them 
 to every
system possible. Yea, if the added benefit is the mail that bingo, 
 bango, bongo
and irving gets filtered a little better... I won't complain at all.
   
Tuc
   
  
   Just because mail goes to invalid addresses does not mean it is spam.
   people do mistype addresses sometimes, so this corpus is not safe.
  
 Yes, I realize people mistype email addresses. But the domain gets
  121,000 emails on an average day.

 Of those 121,000 emails a day, 120,000 are to email addresses that
  aren't of the 4 known/valid/acceptable ones. What percentage would you like
  to use of emails that are sent are mistyped. One out of 1000? That means
  121 invalid email addresses a day? But the other 999 of 1000 aren't valid...

 Of the other 1000 that ARE to the 4 known/valid/acceptable email
  addresses, about 900 of them are marked by SA as a spam level over 5.
  Usually WILDLY over 5, like 20's and 30's.

 Of those 100 delivered, 75 of them are rejected by the spam
  filter (Using a method that violates the standard RFC's according to
  sendmail) of the final destination for all 4 of those email boxes (Yes,
  bingo, bango, bongo, irving actually all end up forwarded to
  [EMAIL PROTECTED]).

 Of the 25 that make it through, the user tells me 15 of them are
  usually spam.

 So, 10 VALID/ACCEPTABLE emails a day out of 121,000 emails received
  a day .. Or 8 THOUSANDTHS OF A SINGLE PERCENT.

 So, while I definitely don't think people can type bingo, bango,
  bongo, irving correctly 100% of the time, with a valid email ratio of 8
  thousandths of a percent, I don't think in the grand scheme of things
  that mistyped email addresses really account for much/any.

 Tuc


If you are proposing some kind of checksums or other types of 'message
identifying' techniques on the messages,  those few mistyped addresses
could certainly make a difference for your site.   What if bongo's mom
mistypes to bungo, realizes her mistake and resends it to bongo a few
minutes later.  It is quite likely that the valid message will be
rejected now since it's (almost) identical to the one your proposed
system just marked as spam.  What if bongo signs up for a mailing
list and mistypes his own email address (yes, this happens).  Now your
system marks all list mailings as spam, so everyone using your system
starts losing their copies of the mailing list messages too?

I think you have good intentions but the source of your data is flawed
for anything but maybe limited statistical training.  Unfortunately it
probably is not great for that either, since the mail you are seeing
for non existent users is probably not at all similar to the mix of
spam you get to real accounts.  The scanner would end up biased
towards whatever junk the spammers desperate enough to use
dictionaries send, which would drown out the stats from those spams
that are actually difficult to detect.

Why do you accept messages for non existent accounts?  You're wasting
bandwidth, regardless of what you do or don't do with the junk after
you accept it.  From the sound of it you could reduce your mail
bandwidth to a tiny fraction of what it is now by just refusing this
stuff (which is what most everyone else does, AFAIK).

-Aaron


Re: Quick Postfix Question [OT]

2008-02-27 Thread Aaron Wolfe
On Wed, Feb 27, 2008 at 2:50 PM, Bob Proulx [EMAIL PROTECTED] wrote:
 Marc Perkel wrote:
   It appears that Postfix only does DNS blacklists and not whitelists
   then. I was going to publish my whitelist and Postfix instructions but I
   guess I can't do that.

  That would be a better question for the postfix-users list.  Probably
  the way to do this is with the check_policy_service functionality.
  The permit action should permit the request.  I haven't created my
  own policy daemon though and so this is an academically derived
  answer.  According to the manual Policy delegation is now the
  preferred method for adding policies to Postfix.

  Bob



Here's a hacked up version of postfix-policyd that uses the results
from the hostkarma rbl.
I'm sure it can be improved upon, but it works for me.




# postfix-policyd-spf-perl
# http://www.openspf.org/Software
# version 2.004
#
# (C) 2007  Scott Kitterman [EMAIL PROTECTED]
# (C) 2007  Julian Mehnle [EMAIL PROTECTED]
# (C) 2003-2004 Meng Weng Wong [EMAIL PROTECTED]
#
#This program is free software; you can redistribute it and/or modify
#it under the terms of the GNU General Public License as published by
#the Free Software Foundation; either version 2 of the License, or
#(at your option) any later version.
#
#This program is distributed in the hope that it will be useful,
#but WITHOUT ANY WARRANTY; without even the implied warranty of
#MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#GNU General Public License for more details.
#
#You should have received a copy of the GNU General Public License along
#with this program; if not, write to the Free Software Foundation, Inc.,
#51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.


#  hacked up to query hostkama
#  by aaron [EMAIL PROTECTED]



use strict;

use IO::Handle;
use IO::Socket;
use Sys::Syslog qw(:DEFAULT setlogsock);
use NetAddr::IP;
use Net::DNS;
use Fcntl;


require "/etc/eps/config.pl";

# --
#  configuration
# --



# Adding more handlers is easy:
my @HANDLERS = (
{
name => 'hostkarma_lookup',
code => \&hostkarma_lookup
},

);

my $VERBOSE = 0;

my $DEFAULT_RESPONSE = 'DUNNO';

#
# Syslogging options for verbose mode and for fatal errors.
# NOTE: comment out the $syslog_socktype line if syslogging does not
# work on your system.
#

my $syslog_socktype = 'unix'; # inet, unix, stream, console
my $syslog_facility = 'mail';
my $syslog_options  = 'pid';
my $syslog_ident= 'postfix/hk_lookup';

use constant localhost_addresses => map(
NetAddr::IP->new($_),
qw(  127.0.0.0/8  ::ffff:127.0.0.0/104  ::1  )
);  # Does Postfix ever say "client_address=::ffff:<ipv4-address>"?

use constant relay_addresses => map(
NetAddr::IP->new($_),
qw(  69.13.218.0/25 72.35.73.193/32 )
); # add addresses to qw (  ) above separated by spaces using CIDR notation.

my %results_cache;  # by message instance

my $dns  = Net::DNS::Resolver->new;


# --
#  initialization
# --

#
# Log an error and abort.
#
sub fatal_exit {
syslog(err => "fatal_exit: @_");
syslog(warning => "fatal_exit: @_");
syslog(info => "fatal_exit: @_");
die("fatal: @_");
}

#
# Unbuffer standard output.
#
STDOUT->autoflush(1);

#
# This process runs as a daemon, so it can't log to a terminal. Use
# syslog so that people can actually see our messages.
#
setlogsock($syslog_socktype);
openlog($syslog_ident, $syslog_options, $syslog_facility);

# --
#   main
# --

#
# Receive a bunch of attributes, evaluate the policy, send the result.
#
my %attr;
while (<STDIN>) {
chomp;

if (/=/) {
my ($key, $value) = split(/=/, $_, 2);
$attr{$key} = $value;
next;
}
elsif (length) {
syslog(warning => sprintf("warning: ignoring garbage: %.100s", $_));
next;
}

if ($VERBOSE) {
for (sort keys %attr) {
syslog(debug => "Attribute: %s=%s", $_, $attr{$_});
}
}

my $message_instance = $attr{instance};
my $cache = defined($message_instance) ?
$results_cache{$message_instance} ||= {} : {};

my $action = $DEFAULT_RESPONSE;

foreach my $handler (@HANDLERS) {
my $handler_name = $handler->{name};
my $handler_code = $handler->{code};

my $response = $handler_code->(attr => \%attr, cache => $cache);

if ($VERBOSE) {
syslog(debug => "handler %s: %s", $handler_name, $response);
}

# Pick whatever response is not 'DUNNO'
if ($response and $response !~ /^DUNNO/i) {
 #   syslog(info = handler %s: is decisive

Re: Quick Postfix Question [OT]

2008-02-27 Thread Aaron Wolfe
On Wed, Feb 27, 2008 at 3:12 PM, Henrik K [EMAIL PROTECTED] wrote:
 On Wed, Feb 27, 2008 at 03:00:49PM -0500, Aaron Wolfe wrote:
   On Wed, Feb 27, 2008 at 2:50 PM, Bob Proulx [EMAIL PROTECTED] wrote:
Marc Perkel wrote:
  It appears that Postfix only does DNS blacklists and not whitelists
  then. I was going to publish my whitelist and Postfix instructions 
 but I
  guess I can't do that.
   
 That would be a better question for the postfix-users list.  Probably
 the way to do this is with the check_policy_service functionality.
 The permit action should permit the request.  I haven't created my
 own policy daemon though and so this is an academically derived
 answer.  According to the manual Policy delegation is now the
 preferred method for adding policies to Postfix.
   
 Bob
   
   
  
   Here's a hacked up version of postfix-policyd that uses the results
   from the hostkarma rbl.
   I'm sure it can be improved upon, but it works for me.

  I'm sure that works, but I seriously recommend postfwd: http://postfwd.org/

  You can easily use a config like:

  rbl=hostkarma.junkemailfilter.com/127.0.0.1; action=OK whitelisted
  rbl=hostkarma.junkemailfilter.com/127.0.0.2; action=REJECT blacklisted
  rbl=hostkarma.junkemailfilter.com/127.0.0.3; action=PREPEND X-Karma: yellow

  .. among many other things that are possible.



after looking at postfwd for only a few minutes, I have to agree..
don't use my messy code, use postfwd!
I will be soon.

-Aaron


Re: Bogus MX - blacklist service viable?

2008-02-22 Thread Aaron Wolfe
On Fri, Feb 22, 2008 at 7:55 AM, Marc Perkel [EMAIL PROTECTED] wrote:



  Aaron Wolfe wrote:

  On Thu, Feb 21, 2008 at 11:47 PM, Marc Perkel [EMAIL PROTECTED] wrote:


  Steve Radich wrote:
   Sorry; apparently I was unclear.
  
   MX records I'm saying as follows:
   100 - Real
   200 - Real perhaps, as many real as you want
   300 - Bogus - one that blocks port 25 with tcp reset for example
   400 - accept port, logs ip - blacklist (not to be scored
   aggressively at all) with a 421/retry.
  
   If a whole bunch of places are seeing the same smtp server hitting this
   400 level MX then I'm saying that seems like a useful thing to be
   included in a blacklist using a low score in sa.
  
   The point was to offer the 400 level mx as a free service to log the ips
   quickly for those that don't want to set up the server themselves.
  
   In theory the 400 level MX wouldn't be used by real smtp very often,
   hence it's likely a spammer and therefore the IP could be auto
   blacklisted. Realize I'm NOT proposing we block on this, just score
   based on this list.
  
   Steve Radich - http://www.aspdeveloper.net /
   http://www.virtualserverfaq.com
   BitShop, Inc. - Development, Training, Hosting, Troubleshooting -
   http://www.bitshop.com
  
  

  I'm actually doing something like that. What I do is track hits on the
  highest MX that has not hit the lowest numbered MX, then because I use
  Exim I can track which IP addresses don't send the QUIT command to close

  I am thinking about playing around with the same type of thing here..
 Is this any different from looking for lost connection after DATA or
 lost connection after RCPT errors in a postfix server's logs? Not
 sure why you can detect this because you run Exim specifically. Or
 am I missing something?

  Exim has ACLs that let you do things when the QUIT is received or not
 received. Exim probably has 100x the commans that Postfix does and you can
 do a lot of tricky stuff in Exim that no other MTA has.




  the connection. This combination creates a highly reliable blacklist and
  I'm currently tracking about 1.1 million virus infected spambots that
  have tried to spam me in the last 4 days.

  It's my hostkarma list.



  Sounds interesting.. do you block based on this list or just use it
 for scoring in SA or something like that? What is the false positive
 rate?



  Yes, I do block based on this list. Ther are some false positives but it's
 rare. I have a way for people to remove themselves from the list. There are
 other criteria that we blacklist on as well that makes for a few FP. But
 it's extremely low. I've put a lot of effort into getting it right.




Ok...  I have 24 hours of data to play with..  at first results seemed
promising. I found over 300,000 hosts that had connected only to my
highest MX and did not issue a quit.  But.. of that group:

96.0% are listed on spamhaus (zen, i did not break that down into the
individual lists)
A further 2.3% (of the total) are not listed on spamhaus but are listed on
Rob McEwen's ivmSIP list (note that this is over 50% of the remaining
hosts, about 10% higher than this list's hit rate with my normal mail flow).

I don't have the zone files for any other RBLs and I didn't want to
send out 300k queries via DNS.  But I think the picture is fairly
clear..  a vast majority of the hosts hitting the fake high MX will be
hosts already listed in major RBLs.

I'm sure my quick test is not perfect.  The remaining 1.7% of hosts
may include some amount of non spam sources (very small if any I would
guess).  Also, I ran the RBL checks all at once at the end of the
cycle. so some of the hits were 24 hours old.  Some amount of the
remainder were probably on the RBLs at the time they hit my server and
were since removed.

I will continue to look into this to see if today was a typical day.
Based on these numbers though... is this a promising way to reduce
server load/blacklist more hosts... or is this pointless?  I'm
interested in what people think since the data is so easy to gather
and use, if it makes any sense to use it.

-Aaron


Re: Bogus MX - blacklist service viable?

2008-02-21 Thread Aaron Wolfe
On Thu, Feb 21, 2008 at 11:47 PM, Marc Perkel [EMAIL PROTECTED] wrote:


  Steve Radich wrote:
   Sorry; apparently I was unclear.
  
   MX records I'm saying as follows:
 100 - Real
 200 - Real perhaps, as many real as you want
 300 - Bogus - one that blocks port 25 with tcp reset for example
 400 - accept port, logs ip - blacklist (not to be scored
   aggressively at all) with a 421/retry.
  
   If a whole bunch of places are seeing the same smtp server hitting this
   400 level MX then I'm saying that seems like a useful thing to be
   included in a blacklist using a low score in sa.
  
   The point was to offer the 400 level mx as a free service to log the ips
   quickly for those that don't want to set up the server themselves.
  
   In theory the 400 level MX wouldn't be used by real smtp very often,
   hence it's likely a spammer and therefore the IP could be auto
   blacklisted.  Realize I'm NOT proposing we block on this, just score
   based on this list.
  
   Steve Radich - http://www.aspdeveloper.net /
   http://www.virtualserverfaq.com
   BitShop, Inc. - Development, Training, Hosting, Troubleshooting -
   http://www.bitshop.com
  
  

  I'm actually doing something like that. What I do is track hits on the
  highest MX that has not hit the lowest numbered MX, then because I use
  Exim I can track which IP addresses don't send the QUIT command to close

I am thinking about playing around with the same type of thing here..
Is this any different from looking for lost connection after DATA or
lost connection after RCPT errors in a postfix server's logs?  Not
sure why you can detect this because you run Exim specifically.   Or
am I missing something?

  the connection. This combination creates a highly reliable blacklist and
  I'm currently tracking about 1.1 million virus infected spambots that
  have tried to spam me in the last 4 days.

  It's my hostkarma list.



Sounds interesting.. do you block based on this list or just use it
for scoring in SA or something like that?  What is the false positive
rate?

-Aaron



Re: [OT] Bogus MX opinions

2008-02-20 Thread Aaron Wolfe
Quotes from this  thread (and the nolisting site which was posted as a
response):

Michael Scheidell  -  Do NOT use a bogus mx as your lowest priority.
Bowie Bailey - I would say that it is too risky to put a non-smtp
host as your primary
MX

nolisting.org - longterm use has yet to yield a single false positive 
Marc Perkel - YES - it works... I have had no false positives at all
using this.


I am interested in this technique, and have been for some time.  It
seems like every discussion of it leads to a group saying you will
lose mail and a group saying you will not lose mail.   Is there any
way to resolve this once and for all?   It's hard for me to see why
either side would misrepresent the truth, but obviously someone is
wrong here.

One thing I notice (and I certainly could be wrong here)... the
proponents seem to be actually using nolisting and claiming no
problems, whilst those against the idea seem to be predicting problems
rather than reporting on actual issues they have experienced.

-Aaron


Re: Advice on MTA blacklist

2007-10-09 Thread Aaron Wolfe
On 10/9/07, R.Smits [EMAIL PROTECTED] wrote:

 Hello,

 Which spam blacklists do you use in your MTA config. (postfix)
 smptd_client_restrictions

 Currently we only use : reject_rbl_client list.dsbl.org

 We let spamassassin fight the rest of the spam. But the load of spam is
 getting to high for our organisation. Wich list is safe enough to block
 senders at MTA level ?

 Spamhaus, or spamcop ?

 I would like to hear some advice or maybe your current setup ?

 Thank you for any advice we can use .

 Greetings Richard


I would use spamhaus for MTA reject and spamcop in SA.   I've also been
evaluating a very interesting new RBL for several weeks called the ivmSIP
rbl.  Its designed to work after RBLs like spamhaus to catch what they miss
and it works quite well so far.  It's catching about 30% of the mail that
makes it past both spamhaus and spamcop (and of course some of that mail is
actually not spam :).  The web site for the new list isn't ready yet but you
can ask for a trial feed by emailing  Mr. Rob McEwen at [EMAIL PROTECTED]


Re: Handling Spam Surges

2007-09-10 Thread Aaron Wolfe
 at
 /xsys/lib/perl5/site_perl/5.8.8/Mail/SpamAssassin/SpamdForkScaling.pm line
 171.
 Fri Sep  7 16:30:41 2007 [26914] warn: prefork: killed child 24687


 Looking at the swap usage, I was thinking I would be better if I reduced
 the number of children processes and let thing queue up. I know I will
 also have to look at exim and it's ratelimit command. Any other idea's on
 handling spam surges/DoS?

 Thanks
 Paul



At my site we operate under the presumption that SpamAssassin should be
avoided if at all possible because it is so expensive on our resources
compared to some other easy checks.  This helps us to deal with DoS and
surges from retarded bots quite well (so far at least).

We reduce the messages bound for SA to less than 10% of our traffic by a
combination of postfix UCE checks, a couple very accurate RBLs, selective
greylisting and our own whitelist.  When the surges/DOS happen, they tend to
increase the number of messages thrown away but rarely effect the volume
running through SA.

-Aaron


Re: [OT] Seeing increase in smtp concurrency ?

2007-09-06 Thread Aaron Wolfe
On 9/6/07, Jeff Chan [EMAIL PROTECTED] wrote:

 Quoting Rajkumar S [EMAIL PROTECTED]:

  Hi,
 
  Does any one seeing increasing smtp concurrency for the past couple of
  weeks? I run couple of (qmail/simscan/spamassassin) mail servers and
  all experience the same problem. The spam does not increase, but this
  is hogging my mail servers. Probably a new crop of spamming tools?
 
  I am attaching one qmail-mtrg graph that shows the problem.
 
  http://img403.imageshack.us/img403/2224/smtpmonthyq4.png
 
  raj
 


 Some botnets are starting to hold mail connections open for much longer
 after
 getting a 5xxx blacklist response.  Reason is unknown; could be coding
 errors
 or deliberate.  Many people are changing their smtpd timeouts form the RFC
 300
 seconds down to 45 seconds:

   http://blogs.msdn.com/tzink/archive/2007/09/01/new-spamming-tactic.aspx

 Here's the postfix for it:


 ## to deal with botnets not hanging up
 # Drop default from RFC limit of 300s to 45s
 #
 smtpd_timeout = 45s


 Some people are even using 10 seconds, which seems short to me.  The RFC
 requires 300 seconds.

 Jeff C.




Same problem here on several servers.  Reducing the timeout helps, but
violates RFC and is simply reducing the effects rather than fixing the
issue.  Is there any RFC valid way for a server to hang up on a client,
especially after a 5xx?

-Aaron


Re: Posioned MX is a bad idea [Was: Email forwarding and RBL trouble]

2007-08-28 Thread Aaron Wolfe
On 8/27/07, Marc Perkel [EMAIL PROTECTED] wrote:



 Andy Sutton wrote:

 On Mon, 2007-08-27 at 12:59 -0700, Marc Perkel wrote:

  I've not run into a single instance where a legit server only tried
 the lowest MX. However, if I did there's a simple solution. If the
 fake lowest MX points to an IP on the same server as the working MX
 then you can use iptables to block port 25 on all IP addresses EXCEPT
 for the one broken server. That would fix the problem.

  I think the question is how you would identify a FP occurred, short of a
 client screaming?


 Clients screaming is that way the false positives are usually identified.
 I'm filtering 1600 domains and I've been doing this for almost a year and
 have yet to get a single report of a false positive. And when I screw up I
 usually hear about it.

 All I can say is - it works for me. If you want to try something safer
 create some fake higher numbered MX records and return 421 errors on them
 and you'll get rid of about 1/3 of your botnet spam. And you'l be able to
 see in your logs how many hits you get.

 The only way to determine if this works or not is to try it.



I have tried bogus MXes before and had too many false positives to possibly
deal with.  However after the repeated claims of zero FP on your large
installation, I decided to give it another try.   It's been a couple years
since my last try, and then I only used a fake 1st pref MX, not a fake last
MX as well.

Sunday evening I tried it on a single domain of one very tolerant and
friendly client.  I added one bogus lower MX and one higher, both IPs in the
same block as their actual mail server that were unused.

The first 24 hours seemed promising.  However today (tues) we have two false
positives, including one of their banks (!) and a small business that is
their long time customer.

It's scary that a bank has such a broken config, but its a reality.
Unfortunately, there are still too many bad admins/RFC ignorant
firewalls/whatever out there for bogus MXs to be a practical solution for
me.  Sure, if we all used it then they'd have to clean up their acts.. but
then the spammers would obviously just implement proper behavior in their
next bot version.  I just don't see this as a solution that can work.

I don't know what 1600 domains means.  Most people talk in terms of
messages/day, number of mailboxes, or some other meaningful measurement.
Just guessing that maybe a domain averages 50 users... I cannot
imagine how you're not getting flooded with complaints.  I tried it with a
single small domain (less than 30 mailboxes) and didn't make it 2 business
days.

We'd all like to find that magic button to stop spam, but this aint it.

-Aaron


Re: Email forwarding and RBL trouble

2007-08-22 Thread Aaron Wolfe
On 8/22/07, Rense Buijen [EMAIL PROTECTED] wrote:

 Thanks a lot all, it's all clear to me now!
 I though that the trusted networks mean that the message will just be
 passed it it came from that source.
 I didnt know it will skip to the next Received IP. Thanks a lot.

 One question about the backscatter problem though, if I understand
 correctly it is always my Exchange server (the machine inline with SA)
 who will send out user does not exist messages, right? The backup MX
 will merely try to forward it and the Exchange server decides if that
 mail address exists or not. I think Exchange is configured the right way
 in such a way that it knows what users it has on the system..


Older Exchange servers will accept any message to a domain they are
responsible for, and then generate a new NDR message to the sender if the
user does not exist.  This is pretty retarded and leads to tons of
undeliverable NDRs clogging up your outbound queues, innocent people
gettings NDRs from spam they didn't send, etc.  At some point (I don't
remember the exact Exchange version, but definitely 2003 and 2007, probably
2000) MS started allowing you to make Exchange reject mail for unknown users
during the SMTP transaction, which is a much better way to go about
things.   In your situation, that would just make your SA machine have to
send NDRs instead of your Exchange box, since it accepted the message.  This
is where you need to add recipient verification to your SA servers.  When
Exchange is in the reject unknown mode, that works fine and you can reject
unknowns before they enter your network at all.  This way you do not need to
mess with LDAP or expose any active directory to dmz/outside servers, and
you never have any NDR responsibility for spam.
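
(A rough sketch of that recipient verification, assuming the SA gateway
happens to run Postfix and that Exchange already rejects unknown
recipients during SMTP; the parameters are standard Postfix settings, the
values are only examples.)

smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_unverified_recipient
unverified_recipient_reject_code = 550

With this, Postfix probes the Exchange server for each new recipient and
rejects unknown addresses at its own SMTP edge, so nothing undeliverable
is ever accepted.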


 I would really like to drop the second mx altogether but policy forbids
 it :)



Backup MXs are a good thing; you just have to configure them correctly and they
don't require much maintenance after that.

Thanks for all the help guys!

 Rense

 Bowie Bailey wrote:
  Rense Buijen wrote:
 
  Mathhias,
 
  The problem is that when the mail enters the backup MX, we dont know
  if that mail is blacklisted at for instance spamcop.
  So if the backup mx accepts the mail (because it's dumb and it will
  accept it), and my primary mx (SA) has set the backup mx as trusted
  network/source, the mail will be delivered while it should not have
  been. You see the problem? SA cannot see if the mail that has been
  forwarded by my backup MX is valid (black/whitelisted) or not because
  it cannot check the IP against the RBL, it will lookup the wrong IP.
  And it should do this because there is NO rbl checking on the backup
  MX itself...
 
 
  You are making assumptions about what trusted_networks implies.  Just
  because mail comes from a machine in your trusted_networks doesn't mean
  that it will not be scanned.  The ONLY thing that trusted_networks means
  is that you trust those machines to put valid header information in the
  message.  It does NOT mean that you trust them not to forward spam.
 
  For your configuration, you need to put your backup MX into
  trusted_networks in order for the RBLs to work properly.
 
  The real problem with this setup is that once your backup MX starts
  forwarding messages to the primary and spam is rejected, then your
  backup is in the bad position of having to issue a delivery notification
  to the sender.  This is bad because most spam and viruses fake the
  sender information.  So most of your bounces will be going to the wrong
  person.  This is called backscatter and is another form of spam.  A
  mailserver should not accept mail that it will not be able to deliver.
 
  I would suggest that you either configure your backup the same as your
  primary, or just drop the backup altogether.  Without the backup, the
  sending MTAs will still retry the message (usually for at least a couple
  of days), so you don't lose anything unless your MX is down for an
  extended period of time.
 
 


 --
 Met vriendelijke groeten,

 Rense Buijen
 Chess Service Management
 Tel.: 023-5149250
 Email: [EMAIL PROTECTED]




Re: Conditionally bypassing RBL checks - how?

2007-08-18 Thread Aaron Wolfe
Just take away the scores for the individual RBLs, and your yellow
list as another RBL,  and use metarules to score.

-Aaron




On 8/18/07, Marc Perkel [EMAIL PROTECTED] wrote:
 I have what I call a yellow list which is a list of IP addresses of
 hosts like yahoo, google, hotmail, aol, etc that send a mix of spam and
 nonspam. The idea being that if you are yellow listed then don't check
 any other list because if it was listed it would be a false positive.

 So - the question - how would you write a rule to do a yellow list
 lookup and if listed then bypass all other RBL tests? This would
 increase speed and accuracy, and maybe get others inspired to build a
 better yellow list than I have.
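
A rough local.cf sketch of the meta-rule idea suggested above (not from
the original post; rule names and the 127.0.0.3 "yellow" return code are
illustrative, and RCVD_IN_SOMEBL stands in for whichever existing RBL rule
you want to gate).  Note that this does not skip the DNS lookups, it only
neutralizes their scores when the yellow list matches, which is what the
meta-rule approach buys you:

# yellow list as an ordinary, unscored RBL sub-rule
header   __KARMA_BASE    eval:check_rbl('karma-lastexternal', 'hostkarma.junkemailfilter.com.')
tflags   __KARMA_BASE    net
header   __KARMA_YELLOW  eval:check_rbl_sub('karma-lastexternal', '127.0.0.3')
tflags   __KARMA_YELLOW  net

# neutralize the normal RBL rule and only score it when the host is not yellow
score    RCVD_IN_SOMEBL     0.001
meta     SOMEBL_NOT_YELLOW  (RCVD_IN_SOMEBL && !__KARMA_YELLOW)
describe SOMEBL_NOT_YELLOW  Listed in SOMEBL and not on the yellow list
score    SOMEBL_NOT_YELLOW  2.0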




Re: Question - How many of you run ALL your email through SA?

2007-08-16 Thread Aaron Wolfe
On 8/16/07, Matthias Haegele [EMAIL PROTECTED] wrote:
 John Rudd schrieb:
  Marc Perkel wrote:
  As opposed to preprocessing before using SA to reduce the load. (ie.
  using blacklist and whitelist before SA)
 
 
 
 
  I do not.
 
  (greet-pause of 5 seconds; zen and dsbl as blacklists; local access type
  blocks; dangerous attachment filename blocker; and then clamav with
  Sanesecurity, MSRBL, MBL signatures; all of those _reject_ messages
  during the SMTP session before Spam Assassin gets to see them)

 Nearly same setup as John. If you have the opportunity to block at MTA
 level i think u *really should do this*. (Its around 80% rejects here).
 Additionally i block some TLDs like .ar|br|cl|ru|pl|jp|hu which i don't
 have regular mail contact here ...
 btw: MTA is Postfix.


I agree and have yet another similar setup here.  We reject about 80%
as well, which helps reduce the load on the servers and on the users
who manage their quarantines. We allow users to choose whether to use
no filtering, the pre SA, reject filtering only, or full content
filtering with SA.  A surprising number prefer to use just the more
basic checks and deal with what gets through with their mua.

-Aaron


Re: Question - How many of you run ALL your email through SA?

2007-08-16 Thread Aaron Wolfe
On 8/16/07, Dave Mifsud [EMAIL PROTECTED] wrote:
 On 16/08/07 08:45, Aaron Wolfe wrote:
  I agree and have yet another similar setup here.  We reject about 80%
  as well, which helps reduce the load on the servers and on the users
  who manage their quarantines. We allow users to choose whether to use
  no filtering, the pre SA, reject filtering only, or full content
  filtering with SA.  A surprising number prefer to use just the more
  basic checks and deal with what gets through with their mua.
 
  -Aaron
 

 What's the default option for users?


A good question... it's chosen by each domain's administrator when
they sign up.  There is no default, but there probably is a strong
correlation between what the admin chooses and what the users are
using :)   I should probably run some statistics on that.

-Aaron

 Dave
 --
 Dave Mifsud
 Systems Engineer
 Computing Services Centre
 University of Malta

 CSC Tel: (+356) 2340 3004  CSC Fax: (+356) 21 343 397




Re: Question - How many of you run ALL your email through SA?

2007-08-16 Thread Aaron Wolfe
On 8/16/07, Marc Perkel [EMAIL PROTECTED] wrote:

  OK - it's interesting that of all of you who responded this is the only
 person who is doing it right. I have to say that I'm somewhat surprised that
 so few people are preprocessing their email to reduce the SA load. As we all
 know SA is very processor and memory expensive.

I think it's interesting that you somehow missed all the messages from
people who described how they *do* filter prior to SA.  Considering
that you claim your setup never loses any mail, did you just forget to
read them somehow?

Claiming that there is one right way to use SA is just silly.  There
are so many different situations, and the right answer depends on the
amount of mail you process and the type of users you have.  Sending
everything through SA might be a perfectly acceptable configuration
for a small domain that wants a single point of control and simple
configuration.

What was the motivation behind your original post?  What were you
hoping to learn?


  Personally, I'm filtering 1600 domains and I route less than 1% of incoming
 email through SA. SA does do a good job on the remaining 1% that I can't
 figure out with blacklists and whitelists and Exim tricks, but if I ran
 everything through SA I'd have to have a rack of dedicated SA servers.

  [EMAIL PROTECTED] wrote:
  Am Donnerstag, 16. August 2007 schrieb Marc Perkel:


  As opposed to preprocessing before using SA to reduce the load. (ie.
 using blacklist and whitelist before SA)

  I use:

 At rcpt time:
 callout to recipient
 zen.spamhaus.org - Catches 90%
 bl.spamcop.net
 list.dsbl.org
 callout to sender

 At data time:
 clamd (malware is rejected)
 spamassassin (10 Rejected, 10 add headers)

 I think i will lower the spamassassin scores to 8 in the near future.

 At the moment less then 5% spam reaches spamassasin.





Re: fake MX records

2007-08-15 Thread Aaron Wolfe
On 8/14/07, Michael Scheidell [EMAIL PROTECTED] wrote:


  -Original Message-
  From: ram [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, August 14, 2007 6:07 AM
  To: users@spamassassin.apache.org
  Subject: fake MX records
 
 
  http://wiki.apache.org/spamassassin/OtherTricksthis page mentions
  setting up fake MXes
 
  Is this method relevant today too with a lot of spam being
  relayed through proper smtp channels
 
  The page says the primary MX should not be accepting
  connections at all. Has anyone else tried this , will this
  cause delay in my mail

 Yes, and some systems might not ever send you email (they violate RFC's)

This is the biggest problem with fake MX records for me.  If your
primary MX is not available, you will simply lose mail from some
senders.  It's entirely their fault for violating the RFCs but the
mail is still lost, and it isn't easy to explain whats going on to
your users/customers.  Greylisting gives me about the same effect but
it works with a bigger percentage of broken servers and I can easily
exclude broken mailservers if needed.

-Aaron


Re: Mail server hosted by Comcast

2007-08-11 Thread Aaron Wolfe
On 8/10/07, Jonn R Taylor [EMAIL PROTECTED] wrote:

 Jerry Durand wrote:
  At 01:28 PM 8/10/2007, Igor Chudov wrote:
  I am considering a local deal related to hosting by Comcast cable
  (8mbps down, 1 mbps up).
 
  I am concerned, however, with me sending email and being on comcast IP
  range, due to bad rap that Comcast has due to spamming by Comcast
  hosted zombies.
 
  Do you think that my mailserver will have issues if I host it on
  comcast netwrk?
 
  That would be a static IP and, hopefully, I can get comcast to reverse
  resolve it to a hostname on one of my domains.
 
  i
 
  We're on a dynamic Verizon business DSL and use the Verizon server (with
  AUTH) and haven't had much trouble.  The main thing is, SEND THROUGH A
  FIXED SERVER.  In your case, you might want to use the server from
  whoever hosts your DDNS.
 
 

 We use Comcast's WorkPlace Enhanced and it has been working very well
 with a 99.999% uptime. You should get static IP's from them, this way
 they can set your rDNS to your domain. This is what we do and we have no
 problem sending to any provider, including AOL and Yahoo.

 Jonn


i will block you just for giving money to such a crooked evil company.
but, probably most people will not :)
if your dns is set up ok then i would not worry.


Bayesian DB problem?

2006-08-29 Thread Aaron Hill



Hi,

I have a question regarding the Bayes token database for SA.
I'm using:
SpamAssassin version 3.1.4
Debian Linux kernel 2.4.27-2 on a Pentium II based system
Qmail v1.03 (using qmail-queue and procmail to filter [SPAM] mails)
Courier IMAP

We have a network with about 200-300 users, and some of the users get a
lot of SPAM. I have a nightly cron'd task to run through every user's
"MissedSpam" IMAP folders and run this command for each user:

sa-learn --showdots --spam /home/$1/Maildir/.Spam.MissedSpam/

However, it doesn't appear to be working! We recently overhauled our
mailserver about 2 months ago, and I would have expected the SPAM
situation to improve since then, but it seems to be just as bad. We had
an installation on our previous mailserver (running Postfix) and the
nightly training worked GREAT -- after a couple weeks SPAM significantly
dropped, and we had no false positives.

The relevant portion of my /etc/mail/spamassassin/local.cf:

## Bayes path ##
bayes_path /etc/spamassassin/bayes
bayes_file_mode 0666

/etc/spamassassin/bayes/ has permissions 775, and is owned by root.spamd.
It's currently empty.

What am I doing wrong? I've done a lot of googling but have had no luck
getting any useful results. I was hoping someone on this list is familiar
with the Bayes token DBs and could point me to why it's not working this
time.

Thanks!
Aaron



Re: Bayesian DB problem?

2006-08-29 Thread Aaron Hill

host:/etc/spamassassin/bayes# ls
total 60
drwxrwxr-x  2 root spamd  4096 2006-08-29 14:02 .
drwxr-xr-x  4 root root   4096 2006-08-29 14:01 ..
-rw-rw-rw-  1 root root  12288 2006-08-29 14:02 bayes_db_seen
-rw-rw-rw-  1 root root  49152 2006-08-29 14:02 bayes_db_toks

It worked! Thanks!

My users and my sanity appreciate that. :)

Aaron

- Original Message - 
From: Theo Van Dinter [EMAIL PROTECTED]

To: users@spamassassin.apache.org
Sent: Tuesday, August 29, 2006 12:47 PM
Subject: Re: Bayesian DB problem?


On Tue, Aug 29, 2006 at 12:44:42PM -0400, Aaron Hill wrote:

bayes_path /etc/spamassassin/bayes

/etc/spamassassin/bayes/  has permissions 775, and is owned by root.spamd
It's currently empty.


It looks as if your bayes_path is set wrong.  If /etc/spamassassin/bayes is a
directory, you probably want something like:

bayes_path /etc/spamassassin/bayes/bayes

(bayes_path is a path and a file prefix -- pointing it at just a directory
causes a lint error.)

--
Randomly Generated Tagline:
I came here to eat carrots and kick butt, and I'm all out of carrots.
 - One Must Fall:2097



Re: Rejection text

2006-07-11 Thread aaron


John D. Hardin [EMAIL PROTECTED] wrote on 12/07/2006 02:16:49 PM:

 On Wed, 12 Jul 2006, Paul Dudley wrote:

  If we decide to reject low grade spam messages rather than
  quarantine them, is it possible to add text to the body of the
  rejection message?

 Rejecting (bouncing) spam is utterly pointless, as 99% of it will have
 forged sender information. You will either be sending your notice to a
 nonexistent address, in which case you get yet more useless traffic
 back to your server in the form of a bounce of your bounce, or your
 notice will go to some innocent third party, possibly contributing to
 an effective DDoS against their email account.

What about rejection of the message during message processing? Sending
back an SMTP error code rather than generation of a completely new
bounce message? Sendmail milter with Mimedefang etc allows you to do this.

Cheers,
Aaron
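
A minimal sketch of that kind of in-session rejection with MIMEDefang (not
from the original message; the threshold and wording are arbitrary).
action_bounce(), despite its name, fails the message with a 5xx at the end
of DATA rather than generating a separate bounce:

# fragment for /etc/mail/mimedefang-filter
sub filter_end {
    my ($entity) = @_;

    if ($Features{"SpamAssassin"}) {
        my ($hits, $req, $names, $report) = spam_assassin_check();
        if (defined($hits) and $hits >= 10) {
            # This reply goes back to the connected client during the SMTP
            # session; no new bounce message is generated by us.
            action_bounce("Message rejected: spam score $hits is over our limit");
        }
    }
    return;
}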


 --
  John Hardin KA7OHZICQ#15735746http://www.impsec.org/~jhardin/
  [EMAIL PROTECTED]FALaholic #11174pgpk -a [EMAIL PROTECTED]
  key: 0xB8732E79 - 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
 ---
  A weapons registration phase ... 4) allows for a degree of control
  to be exercised during the collection phase; 5) assists in the
  planning of the collection phase; ...
   -- the UN, who doesn't want to confiscate guns
 ---
  13 days until The 37th anniversary of Apollo 11 landing on the Moon





Re: sa-learn --username option

2006-06-07 Thread Aaron Axelsen
Matt,

Thanks for the reply.  I ended up writing a perl script to copy all the
spam to learn into a neutral location group owned.  In that same script
I then change the effective user id, and try to learn.  However, it
still is not learning as the effective user.  The script runs as root,
and still tries to learn as root

Is there some reason for this? Any suggestions?

-- Aaron

Matt Kettler wrote:
 Aaron Axelsen wrote:
   
 Hello,

 I am trying to run a cronjob as root which will learn a different
 accounts spam into my spam db.  Example command:

 sa-learn -u user1 --spam /home/user2/Maildir/.Spam/cur/

 When the command runs, it learns the spam into /root/.spamassassin
 instead of /home/user1/.spamassassin

 Does anyone have any idea why its doing this? 
 

 The -u option to sa-learn only works if you're using SQL for bayes
 storage, or if you're using virtual users.

 The caveat is revealed in the docs for sa-learn:
 --
 NOTE: This option will not change to the given /username/, it will only
 attempt to act on behalf of that user. Because of this you will need to
 have proper permissions to be able to change files owned by /username/.
 In the case of SQL this generally is not a problem.
 --

 In particular, that first sentence is important here. It will not change
 (setuid) to the given username, therefore the home directory does not
 change.

 If you want to exec sa-learn as a particular user, just use su in the
 straightforward unix fashion:

 su user1 -c "sa-learn --spam /home/user2/Maildir/.Spam/cur/"

 Note that user1 will need read-privileges to
 /home/user2/Maildir/.Spam/cur/  for this to work.


   

-- 
Aaron Axelsen
[EMAIL PROTECTED]

Great hosting, low prices.  Modevia Web Services LLC -- http://www.modevia.com
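
For completeness, a rough Perl sketch of the "run sa-learn as the target
user" approach quoted above, for a cron job that starts as root (the
username and paths are only examples, and user1 still needs read access to
the spam directory, as noted).  The key points are dropping both the real
and effective IDs and resetting HOME before exec, since sa-learn locates
the Bayes DB via the invoking user's home directory -- which is the usual
reason that changing only the effective UID still learns into
/root/.spamassassin:

#!/usr/bin/perl
# learn-as-user.pl: drop root privileges to a given user, then run sa-learn.
use strict;
use warnings;
use POSIX qw(setgid setuid);

my $user    = 'user1';                           # example target user
my $spamdir = '/home/user2/Maildir/.Spam/cur/';  # example spam corpus

my ($uid, $gid, $home) = (getpwnam($user))[2, 3, 7];
defined $uid or die "no such user: $user\n";

# Drop group first, then user; real and effective IDs both change.
setgid($gid) or die "setgid($gid) failed: $!\n";
setuid($uid) or die "setuid($uid) failed: $!\n";

# sa-learn derives ~/.spamassassin from HOME, so point it at the new user.
$ENV{HOME} = $home;

exec('sa-learn', '--spam', $spamdir) or die "exec sa-learn failed: $!\n";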



sa-learn --username option

2006-06-06 Thread Aaron Axelsen
Hello,

I am trying to run a cronjob as root which will learn a different
accounts spam into my spam db.  Example command:

sa-learn -u user1 --spam /home/user2/Maildir/.Spam/cur/

When the command runs, it learns the spam into /root/.spamassassin
instead of /home/user1/.spamassassin

Does anyone have any idea why its doing this?  The user1 .spamassassin
folder is chown user1.user and has permissions 700.  Are the permissions
a problem?

I see there is a --spam-db option.  Do I need to use this?

-- 
Aaron Axelsen
[EMAIL PROTECTED]

Great hosting, low prices.  Modevia Web Services LLC -- http://www.modevia.com



Score ends in +10?

2006-05-23 Thread Aaron Grewell
Hello list, I'm trying to run amavislogsumm against my mail logs, and some of 
the scores are listed with a +10 at the end, which breaks the script.  For 
example:

May 23 10:17:22 216.186.73.25 amavis[7301]: (07301-01-9) SPAM-TAG, 
[EMAIL PROTECTED] - [EMAIL PROTECTED], Yes, score=6.13+10 
tagged_above=1 required=6.2 tests=[BAYES_50=0.001, BODY_OPT_OUT=1, 
FH_FROM_START_1=0.233, FORGED_RCVD_HELO=0.135, FU_DOM_END_NUM=0.35, 
FU_DOM_START_NUM=0.259, HELO_MISMATCH_INFO=1.448, HOST_NMATCH_HELOCOM=0.311, 
HTML_MESSAGE=0.001, MIME_HEADER_CTYPE_ONLY=0, MIME_HTML_ONLY=0.001, 
MSGID_FROM_MTA_ID=1.393, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, 
TO_BE_REMOVED_4=1]

Is that a score SA is generating, or do I need to redirect this to the 
amavisd-new list?

Thanks,
-Aaron


Re: Score ends in +10?

2006-05-23 Thread Aaron Grewell


  Is that a score SA is generating, or do I need to redirect this to the
  amavisd-new list?

 That's an amavis log entry, so you'd have to ask them.

OK, will do.  Thanks Theo.

-Aaron


Re: Score ends in +10?

2006-05-23 Thread Aaron Grewell
 Sorry this is off-topic.

 From amavisd-new RELEASE_NOTES:
 - in passed and quarantined mail a header field X-Spam-Status now shows
   score as an explicit sum of SA score and a by-recipient score_sender
 boost (when the boost is nonzero); the X-Spam-Score header field still
 shows a sum of both as a single number so as not to confuse MUA filters
 which may operate on that header field;

 The log entries are also in this format as you have seen. Somewhere in your
 @score_sender_maps (amavisd-new soft wbl) you have a score boost if a match
 is found on the sender [EMAIL PROTECTED]


Ah, I see.  I'll have to see if I can get amavislogsumm to use X-Spam-Score 
instead.  Thanks Gary!

-Aaron


Re: Delete spam or move to a folder?

2006-05-17 Thread aaron
Yusuf Ahmed [EMAIL PROTECTED] wrote on 17/05/2006 04:28:36 PM:

 Hi Guys,

 Couldn't find a thread like this hence this new one. Just wondering
 what strategy people are using when it comes to dealing with email
 that gets enough points to be considered as spam. Eg. being deleted
 and quarantined, or delivered and quarantined etc.

 I'm using store and deliver - is that the general concept out there
 with everyone?

 Regards,
 Yusuf.

As a business we take copies of all emails received by the mail gateway.
Messages determined to be Spam are not delivered to the end user.

Using MimeDefang, the message is pulled apart and all of the bits that
we find important are logged to a database so that we can use our
web applications for inquiry and recovery of false positives etc. Other
web applications have been written for administration purposes and to
track down emails when there is a complaint or query.

So by default we keep everything and provide mechanisms for our staff
to recover an email if required.

The ability to customise SpamAssassin and Mimedefang has been invaluable
for us.

Cheers,
Aaron



Re: Big Idiot Needs Instructions

2006-05-11 Thread aaron
jdow [EMAIL PROTECTED] wrote on 11/05/2006 06:52:06 AM:

 From: Chris Edwards [EMAIL PROTECTED]

 Hola,

 I have spent two days trying to figure out how to get the following to
 work.  I have set up Spamassassin and ClamAV, I am running sendmail on
 the Solaris 10 platform.  I would like to be able to scan for all spam
 and virus (in, out and relayed email).  Can someone please point me in
 the right direction?  Do I use procmail or something else.  I set this
 particular combination up years ago on a Linux box but I have had a lot
 of gigo since then.

 Thanks for any help

 jdow I use procmail with great success. I also use the SpamAssassin
 ClamAV plugin. (See plugins on the wiki.)

 {^_^}

I run SpamAssassin via MimeDefang.

Is there anything in particular you are having problems with?

Cheers,
Aaron



Remove Me

2006-05-09 Thread Aaron Boyles
How do I take myself off this mailing list?

-Javin


include not working as expected

2006-04-05 Thread Aaron Grewell
Hi all,
I'm using SA 3.0.4, and I wanted to keep my score modifications in a separate 
file from the rest of my configuration.  I removed my score changes from 
local.cf and put them in a separate file called custom_scores.txt, then put 
include custom_scores.txt in local.cf.  Now SA will no longer lint my config 
properly, and fails on each score line that isn't directly included in 
local.cf.  What's the point of include if I can't actually put config 
directives in the included file?



Re: include not working as expected

2006-04-05 Thread Aaron Grewell
 Using include is completely redundant at the local.cf level, as SA
 automatically parses /etc/mail/spamassassin/*.cf. Rather than use
 custom_scores.txt, just use custom_scores.cf and put it alongside local.cf.

That's what I tried first, and got the same errors.  So I thought that maybe I 
was supposed to include scoring overrides in local.cf.  If that's not the 
case, then something else odd is going on.  The BAYES line it's erroring 
about is the first line in the now suitably renamed custom_scores.cf file.  
Here's what the debug output looks like:

spamassassin -D --lint
debug: SpamAssassin version 3.0.4
debug: Score set 0 chosen.
debug: running in taint mode? yes
debug: Running in taint mode, removing unsafe env vars, and resetting PATH
debug: PATH included '/usr/kerberos/bin', keeping.
debug: PATH included '/usr/local/bin', keeping.
debug: PATH included '/bin', keeping.
debug: PATH included '/usr/bin', keeping.
debug: PATH included '/usr/X11R6/bin', keeping.
debug: PATH included '/var/amavis/bin', which doesn't exist, dropping.
debug: Final PATH set 
to: /usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin
debug: diag: module not installed: DBI ('require' failed)
debug: diag: module installed: DB_File, version 1.810
debug: diag: module installed: Digest::SHA1, version 2.10
debug: diag: module installed: IO::Socket::UNIX, version 1.2
debug: diag: module installed: MIME::Base64, version 2.12
debug: diag: module installed: Net::DNS, version 0.48_03
debug: diag: module not installed: Net::LDAP ('require' failed)
debug: diag: module not installed: Razor2::Client::Agent ('require' failed)
debug: diag: module installed: Storable, version 2.06
debug: diag: module installed: URI, version 1.21
debug: ignore: using a test message to lint rules
debug: using /etc/mail/spamassassin/init.pre for site rules init.pre
debug: config: read file /etc/mail/spamassassin/init.pre
debug: using /usr/share/spamassassin for default rules dir
debug: config: read file /usr/share/spamassassin/10_misc.cf
debug: config: read file /usr/share/spamassassin/20_anti_ratware.cf
debug: config: read file /usr/share/spamassassin/20_body_tests.cf
debug: config: read file /usr/share/spamassassin/20_compensate.cf
debug: config: read file /usr/share/spamassassin/20_dnsbl_tests.cf
debug: config: read file /usr/share/spamassassin/20_drugs.cf
debug: config: read file /usr/share/spamassassin/20_fake_helo_tests.cf
debug: config: read file /usr/share/spamassassin/20_head_tests.cf
debug: config: read file /usr/share/spamassassin/20_html_tests.cf
debug: config: read file /usr/share/spamassassin/20_meta_tests.cf
debug: config: read file /usr/share/spamassassin/20_phrases.cf
debug: config: read file /usr/share/spamassassin/20_porn.cf
debug: config: read file /usr/share/spamassassin/20_ratware.cf
debug: config: read file /usr/share/spamassassin/20_uri_tests.cf
debug: config: read file /usr/share/spamassassin/23_bayes.cf
debug: config: read file /usr/share/spamassassin/25_body_tests_es.cf
debug: config: read file /usr/share/spamassassin/25_hashcash.cf
debug: config: read file /usr/share/spamassassin/25_spf.cf
debug: config: read file /usr/share/spamassassin/25_uribl.cf
debug: config: read file /usr/share/spamassassin/30_text_de.cf
debug: config: read file /usr/share/spamassassin/30_text_fr.cf
debug: config: read file /usr/share/spamassassin/30_text_nl.cf
debug: config: read file /usr/share/spamassassin/30_text_pl.cf
debug: config: read file /usr/share/spamassassin/50_scores.cf
debug: config: read file /usr/share/spamassassin/60_whitelist.cf
debug: using /etc/mail/spamassassin for site rules dir
debug: config: read file /etc/mail/spamassassin/70_sare_adult.cf
debug: config: read file /etc/mail/spamassassin/70_sare_bayes_poison_nxm.cf
debug: config: read file /etc/mail/spamassassin/70_sare_evilnum0.cf
debug: config: read file /etc/mail/spamassassin/70_sare_genlsubj0.cf
debug: config: read file /etc/mail/spamassassin/70_sare_genlsubj_eng.cf
debug: config: read file /etc/mail/spamassassin/70_sare_header.cf
debug: config: read file /etc/mail/spamassassin/70_sare_header0.cf
debug: config: read file /etc/mail/spamassassin/70_sare_html.cf
debug: config: read file /etc/mail/spamassassin/70_sare_html0.cf
debug: config: read file /etc/mail/spamassassin/70_sare_obfu0.cf
debug: config: read file /etc/mail/spamassassin/70_sare_oem.cf
debug: config: read file /etc/mail/spamassassin/70_sare_random.cf
debug: config: read file /etc/mail/spamassassin/70_sare_ratware.cf
debug: config: read file /etc/mail/spamassassin/70_sare_specific.cf
debug: config: read file /etc/mail/spamassassin/70_sare_spoof.cf
debug: config: read file /etc/mail/spamassassin/70_sare_stocks.cf
debug: config: read file /etc/mail/spamassassin/70_sare_unsub.cf
debug: config: read file /etc/mail/spamassassin/70_sare_uri.cf
debug: config: read file /etc/mail/spamassassin/70_sare_uri0.cf
debug: config: read file /etc/mail/spamassassin/70_sare_uri_eng.cf
debug: config: read file 
