Re: OT: Website protection

2009-07-13 Thread schmero...@gmail.com

Thanks for the advice.

Rick Macdougall wrote:

Mikael Bak wrote:

schmero...@gmail.com wrote:

One of our client's websites gets hacked frequently - 1x per month -
usually with some kind of phishing scam.



We've also had some problems lately. After deep investigation we saw
that in 100% of the cases there were no break-ins at all. Not in the old-
fashioned manner, anyway. The ftp usernames and passwords were stolen
from the client's PC with a keylogger or spyware. The hacker could then
log in to the ftp account and make changes to the website.



I've seen this myself on three different client machines (each hosting 
multiple sites). I have yet to discover what spyware was responsible as 
the owners of the different sites contacted the users in question 
themselves.


Regards,

Rick



starting spamd.exe win 2003 server

2009-07-13 Thread mtm81

If I try to run the spamd.exe service, it runs as a process until it reaches
around 24k of memory usage, then quits.

Nothing is showing in the error log or anywhere else.

I've tried running it by itself, and also within a daemon service wrapper
such as NTrunner, but no joy. Any ideas, anyone?

I just want to find out why the service keeps quitting.

If I run SpamAssassin without the spamd side of things, it runs fine on the
server but obviously uses far more CPU than the spamd service, which I'm
led to believe is more efficient.

thanks for any replies..
-- 
View this message in context: 
http://www.nabble.com/starting-spamd.exe-win-2003-server-tp24458354p24458354.html
Sent from the SpamAssassin - Users mailing list archive at Nabble.com.



Re: questions about my SA configuration

2009-07-13 Thread Matus UHLAR - fantomas
On 10.07.09 08:43, Daniel Schaefer wrote:
 I'm running SA daemonized. I know that it reads  
 /.spamassassin/user_prefs (not a typo),

only for users whose homedir is the root (/) directory...

 /etc/mail/spamassassin/local.cf,  

actually, /etc/mail/spamassassin/*.pre and /etc/mail/spamassassin/*.cf

 and /usr/share/spamassassin/ for configuration.

It only reads rules in /usr/share/spamassassin/ if the /var/lib/spamassassin
directory does not exist. If you use (or at least once issued) sa-update,
/usr/share/spamassassin/ is not used anymore (even if you did not update the
SA rules, only added some third-party rules).

 I know I don't have  
 something set right, because /.spamassassin/user_prefs is being read  
 because spamd is run with user=nobody and nobody's home is /. I just  
 created the directory because the maillog was complaining. I will also  
 mention that all the email addresses are virtual (not system accounts,  
 just to be clear).
 First of all (and I've Googled half a day away trying to find an answer),  
 how do I configure spamd so that each virtual email address can have  
 their own user_prefs file and perhaps a global user_prefs file?

/etc/mail/spamassassin/*.cf files are global, user_prefs are per-user.

spamd has -x and --virtual-config-dir options for defining virtual users.
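Combined, that looks something like the following (a sketch only: the /var/vmail layout is taken from the poster's attempts below, and %d/%u expand to the virtual domain and user; as I read spamd(1), -x disables per-user config from home directories so prefs come only from the virtual-config-dir path):

```
# /etc/sysconfig/spamassassin
SPAMDOPTIONS="-d -c -m5 -x -u nobody --virtual-config-dir=/var/vmail/%d/%u/spamassassin"
```

The user spamd runs as (-u) still needs filesystem write access under that directory.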

 Second, I don't want to keep adding/modifying rules/scores in  
 /.spamassassin/user_prefs if it's not the correct way. As I am  
 constantly tweaking my spam scores, can I add scores to a config file  
 and make them become active without having to restart SA? Right now,  
 adding them to /.spamassassin/user_prefs works correctly without having  
 to restart SA.

Per-user files are AFAIK read when mail is scanned, while for changes to the
global config files you have to reload spamd. I'm afraid it won't be
different. But I think that if you are constantly changing scores,
something is wrong there. Be very careful about playing with scores!
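To illustrate the difference (BAYES_99 is just a stock rule picked as an example; the values are arbitrary):

```
# /etc/mail/spamassassin/local.cf  -- global; requires a spamd reload/restart
score BAYES_99 4.5

# ~/.spamassassin/user_prefs       -- per-user; picked up when mail is scanned
score BAYES_99 3.8
```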

 The below commented out lines were failed attempts at my first question.
 [r...@pony ~]# cat /etc/sysconfig/spamassassin
 # Options to spamd
 SPAMDOPTIONS=-d -c -m10 -H
 #SPAMDOPTIONS=-d -c -m5 -H -s /var/log/spamd.log -u nobody -x  
 --virtual-config-dir=/var/vmail/%d/%u/spamassassin
 #SPAMDOPTIONS=-d -c -m5 -H -x -u nobody  
 --virtual-config-dir=/var/vmail/%d/%u/spamassassin


 I received something like this in my maillog
 Jul  7 15:53:26 pony spamd[4732]: spamd: connection from  
 localhost.localdomain [127.0.0.1] at port 59780
 Jul  7 15:53:26 pony spamd[4732]: spamd: using default config for  
 nobody: /var/vmail//nobody/spamassassin/user_prefs
 Jul  7 15:53:26 pony spamd[4732]: spamd: processing message  
 4a53a7b3.9090...@performanceadmin.com for nobody:99
 Jul  7 15:53:26 pony spamd[4732]: auto-whitelist: open of auto-whitelist  
 file failed: locker: safe_lock: cannot create tmp lockfile  
 /var/vmail//nobody/spamassassin/auto-whitelist.lock.pony.performanceadmin.c
 om.4732 for /var/vmail//nobody/spamassassin/auto-whitelist.lock:  
 Permission denied

the nobody user apparently does not have filesystem permissions to create
files in /var/vmail//nobody and /var/vmail//nobody/spamassassin
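A sketch of the corresponding fix (the /var/vmail path comes from the log above; on the real system you would run the commented commands as root). The live commands below use a scratch directory instead, so they are safe to try anywhere:

```shell
# On the real box, as root, the equivalent would be:
#   mkdir -p /var/vmail/nobody/spamassassin
#   chown -R nobody /var/vmail/nobody/spamassassin
#   chmod 700 /var/vmail/nobody/spamassassin
BASE="$(mktemp -d)/nobody/spamassassin"   # scratch stand-in for /var/vmail
mkdir -p "$BASE"
chmod 700 "$BASE"
# spamd must be able to create lock files here:
touch "$BASE/auto-whitelist.lock.test" && echo "lock file creatable"
```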
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
He who laughs last thinks slowest. 


Re: Plugin extracting text from docs

2009-07-13 Thread Matus UHLAR - fantomas
On 10.07.09 16:48, Jonas Eckerman wrote:
 Rosenbaum, Larry M. wrote:

 I have found the Xpdf package [...] has a pdftotext command line utility.
  If you build it with the --without-x option,

 Ah. I didn't see that option. That's nice. I'm now using pdftotext  
 instead of pdftohtml here as well. :-)

I've been thinking about it. pdftohtml could provide interesting
information, like colour information, that could lead to better spam
detection. Any experience with this?

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Eagles may soar, but weasels don't get sucked into jet engines. 


Re: Managing SA/sa-learn with clamav

2009-07-13 Thread Matus UHLAR - fantomas
 On Fri, Jul 10, 2009 at 05:01:14PM +0200, Jonas Eckerman wrote:
  Steven W. Orr wrote:
 
  http://wiki.apache.org/spamassassin/ClamAVPlugin
 
  It looks like what I thought I wanted already exists. Based on what I wrote
  above, and that I like the result of running sa + clamav via the two 
  milters,
  does anyone have any caveats for me?
 
  1: When running ClamAV inside SA you have to run SA even if ClamAV finds  
  a virus. This requires more resources than just ClamAV. And ClamAV is  
  way faster and requires far less than SA does.

On 10.07.09 19:09, Henrik K wrote:
 When you block botnets directly from MTA (zen, helo checks, greylist etc),
 possible ClamAV/SA load is already reduced by a huge factor. Personally I
 only see handful of official ClamAV signatures hitting per 100k hams, so
 the scanning order wouldn't really matter.

It does, if you receive a lot of mail. If you don't, you can surely call
clamav and spamassassin (not spamc) from your .procmailrc as well, but I
still wouldn't recommend that.
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
The early bird may get the worm, but the second mouse gets the cheese. 


Re: spamassassin not working

2009-07-13 Thread Matus UHLAR - fantomas
On 10.07.09 10:28, Admin wrote:
 I do not see spamassassin processing information in the SMTP header of  
 incoming messages.  So I am fairly sure that the processing is not  
 working.  I am hoping to get the postfix-procmail-spamc processing  
 path working system-wide.  I need some help though since it is not 
 working.

Why not use milter? It's much more effective and easier to set up.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I drive way too fast to worry about cholesterol. 


Re: rbl/dnsbl seems to use wrong ip sometimes

2009-07-13 Thread Matus UHLAR - fantomas
 On Sat, 2009-07-11 at 14:27 -0700, dmy wrote:
  So is there a way to configure that ALL DNS tests just use the last external
  ip address (or at least NOT the first one?). Because to me it doesn't make
  any sense to test the ip people use to deliver messages to their smarthost
  and it produces quite a few false positives on my system...

On 12.07.09 05:57, rich...@buzzhost.co.uk wrote:
 Someone throw me a tin opener - there is a can of worms needing it

Oh, you again?

 2 trains of thought on this;
 PRO: Scanning all the headers may pick up an IP being used to push spam
 through a legitimate clean gateway. Normal 'top of the tree' RBL lookups
 will miss this;
 
 CON: Scanning all the hops is a waste of DNS time as anything after the
 first one can be forged - often in an attempt to hit white lists and
 trusted lists IMHO.

Whitelists only check trusted IPs. If a spammer fakes a blacklisted
address, good for us.

 PRO: Scanning just the top of the tree is going to break if you are
 behind a forwarder of some kind or even a nasty SMTP ALG/Proxying
 service on a firewall not configured to be entirely transparent. 
 
 CON: Fine tuning and white listing is needed and this can be tetchy to
 set up initially.

That's a PRO: you can fine-tune and whitelist to get better results with
faster scanning.
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I intend to live forever - so far so good. 


Re: rbl/dnsbl seems to use wrong ip sometimes

2009-07-13 Thread rich...@buzzhost.co.uk
On Mon, 2009-07-13 at 12:10 +0200, Matus UHLAR - fantomas wrote:
  On Sat, 2009-07-11 at 14:27 -0700, dmy wrote:
   So is there a way to configure that ALL DNS tests just use the last 
   external
   ip address (or at least NOT the first one?). Because to me it doesn't make
   any sense to test the ip people use to deliver messages to their smarthost
   and it produces quite a few false positives on my system...
 
 On 12.07.09 05:57, rich...@buzzhost.co.uk wrote:
  Someone throw me a tin opener - there is a can of worms needing it
 
 Oh, you again?
 
Oh you again ? Sigh.



Re: spamassassin not working

2009-07-13 Thread Martin Gregorie
On Mon, 2009-07-13 at 12:03 +0200, Matus UHLAR - fantomas wrote:
 On 10.07.09 10:28, Admin wrote:
  I do not see spamassassin processing information in the SMTP header of  
  incoming messages.  So I am fairly sure that the processing is not  
  working.  I am hoping to get the postfix-procmail-spamc processing  
  path working system-wide.  I need some help though since it is not 
  working.
 
 Why not use milter? It's much more effective and easier to set up.
 
Or simply define a spamc service in master.cf? 

A search will turn up how-tos for doing it, e.g.
http://www.akadia.com/services/postfix_spamassassin.html
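The relevant master.cf glue from that kind of how-to looks roughly like this (a sketch; the user name and the spamc/sendmail paths vary by distribution, so treat those details as assumptions):

```
# /etc/postfix/master.cf
smtp      inet  n       -       n       -       -       smtpd
  -o content_filter=spamassassin

spamassassin unix -     n       n       -       -       pipe
  user=nobody argv=/usr/bin/spamc -f -e
  /usr/sbin/sendmail -oi -f ${sender} ${recipient}
```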

I know of two drawbacks to this approach:
- SA will scan outgoing as well as incoming mail (but you may want to
  do this)

- if you're using the always_bcc directive to feed a mail archive
  or equivalent, you'll see duplicates in the always_bcc output stream.


Martin




Re: Managing SA/sa-learn with clamav

2009-07-13 Thread Henrik K
On Mon, Jul 13, 2009 at 12:01:35PM +0200, Matus UHLAR - fantomas wrote:
 
 On 10.07.09 19:09, Henrik K wrote:
  When you block botnets directly from MTA (zen, helo checks, greylist etc),
  possible ClamAV/SA load is already reduced by a huge factor. Personally I
  only see handful of official ClamAV signatures hitting per 100k hams, so
  the scanning order wouldn't really matter.
 
 It does, if you receive much of mail. If you don't, you can surely call
 clamav and spamassassin (not spamc) from your .procmailrc as well but I
 still won't recommend that.

I'm not sure I got your point. Do you mean that running ClamAV before SA is
mandatory when you receive a lot of mail? That only holds if you are
comfortable blocking directly with all the 3rd-party rules; then it's
effective, yes. Personally I don't take the 3rd-party FP chances, and I also
like SA to learn from those mails.

The word "order" might be a little misleading here. It just comes down to
whether you want to block with ClamAV alone, or use ClamAV/SA together.

Similar thread here: http://marc.info/?t=12413908982



Re: Managing SA/sa-learn with clamav

2009-07-13 Thread Matus UHLAR - fantomas
 On Mon, Jul 13, 2009 at 12:01:35PM +0200, Matus UHLAR - fantomas wrote:
  
  On 10.07.09 19:09, Henrik K wrote:
   When you block botnets directly from MTA (zen, helo checks, greylist etc),
   possible ClamAV/SA load is already reduced by a huge factor. Personally I
   only see handful of official ClamAV signatures hitting per 100k hams, so
   the scanning order wouldn't really matter.
  
  It does, if you receive much of mail. If you don't, you can surely call
  clamav and spamassassin (not spamc) from your .procmailrc as well but I
  still won't recommend that.

On 13.07.09 13:35, Henrik K wrote:
 I'm not sure I got your point. Do you mean that running ClamAV before SA is
 mandatory for much of mail?

It means that it's always better to run ClamAV before SA: if someone
receives a lot of mail and the system is loaded, that ordering can keep the
system from overloading by sparing SA from scanning viruses.

 That's only if you are comfortable blocking directly with all the 3rd
 party rules, then it's effective yes. Personally I don't take the 3rd
 party FP chances and I also like SA to learn from those mails.

As was already said, you can run clamav twice (though that's not trivial to
do) with different configurations (with/without 3rd-party rules).

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
If Barbie is so popular, why do you have to buy her friends? 


Re: questions about my SA configuration

2009-07-13 Thread Daniel Schaefer


Second, I don't want to keep adding/modifying rules/scores in  
/.spamassassin/user_prefs if it's not the correct way. As I am  
constantly tweaking my spam scores, can I add scores to a config file  
and make them become active without having to restart SA? Right now,  
adding them to /.spamassassin/user_prefs works correctly without having  
to restart SA.



per-user files are afaik being read when mail is scanned, while for changing
global config file you have to reload spamd. I'm afraid it won't be
different. But I think that if you are permanently changing scores,
something goes wrong there. Be very careful about playing with scores!

  

I guess it would make sense to change the scores in a load-time-loaded 
file as opposed to a run-time-loaded file, because of syntax errors and 
such. This would give me a chance to run SA with the lint option.


--
Dan Schaefer
Application Developer
Performance Administration Corp.



Re: Extending XBL to all untrusted

2009-07-13 Thread Tony Finch
On Fri, 3 Jul 2009, RW wrote:

 I understand that Spamhaus doesn't recommend this, because dynamic IP
 addresses can be reassigned from a spambot to another user, but I added
 my own rule and it does seem to work. In my mail it hits about 9% of my
 spam, with zero false-positives.

You will get false positives from senders that are using remote message
submission, and from some webmail users if their server puts the webmail
client IP address in the message headers.
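For reference, RW's two-test idea might be sketched like this in local.cf (the rule names are hypothetical, and the check_rbl set-name conventions — a plain set checking all untrusted relays, '-lastexternal' only the last one — should be double-checked against the Mail::SpamAssassin::Conf documentation before relying on this):

```
header   L_XBL_LASTEXT  eval:check_rbl('myxbl-lastexternal', 'xbl.spamhaus.org.')
describe L_XBL_LASTEXT  Last external relay listed in XBL
score    L_XBL_LASTEXT  3.0

header   L_XBL_DEEP     eval:check_rbl('myxbl', 'xbl.spamhaus.org.')
describe L_XBL_DEEP     Some untrusted relay listed in XBL
score    L_XBL_DEEP     1.0
```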

Tony.
-- 
f.anthony.n.finch  d...@dotat.at  http://dotat.at/
GERMAN BIGHT HUMBER: SOUTHWEST 5 TO 7. MODERATE OR ROUGH. SQUALLY SHOWERS.
MODERATE OR GOOD.


Re: [NEW SPAM FLOOD] www.shopXX.net

2009-07-13 Thread Charles Gregory


If I might interject. This seems to be an excellent occasion for
the PerlRE 'negative look-ahead' code (excuse the line wrap):

body =~ /(?!www\.[a-z]{2,6}[0-9]{2,6}\.(com|net|org))
www[^a-z0-9]+[a-z]{2,6}[0-9]{2,6}[^a-z0-9]+(com|net|org)/i

...unless someone can think of an FP for this rule?
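As a quick sanity check of that look-ahead outside SA (a sketch; the test strings are invented, and Python's re behaves like PerlRE for this construct):

```python
import re

# Charles's candidate: match obfuscated "www <name><digits> <tld>" forms,
# but let the negative look-ahead veto the same thing written as a plain URL.
PAT = re.compile(
    r"(?!www\.[a-z]{2,6}[0-9]{2,6}\.(com|net|org))"
    r"www[^a-z0-9]+[a-z]{2,6}[0-9]{2,6}[^a-z0-9]+(com|net|org)",
    re.I,
)

print(bool(PAT.search("visit www shop77 com today")))  # obfuscated: matches
print(bool(PAT.search("visit www.shop77.com today")))  # plain URL: vetoed
```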

- C


Re: rbl/dnsbl seems to use wrong ip sometimes

2009-07-13 Thread Charles Gregory

On Mon, 13 Jul 2009, rich...@buzzhost.co.uk wrote:

On Mon, 2009-07-13 at 12:10 +0200, Matus UHLAR - fantomas wrote:

Oh, you again?

Oh you again ? Sigh.


Here we ego again? :)

- C


Re: [NEW SPAM FLOOD] www.shopXX.net

2009-07-13 Thread rich...@buzzhost.co.uk
On Mon, 2009-07-13 at 10:46 -0400, Charles Gregory wrote:
 (?!www\.[a-z]{2,6}[0-9]{2,6}\.(com|net|org))
 www[^a-z0-9]+[a-z]{2,6}[0-9]{2,6}[^a-z0-9]+(com|net|org)

Does not seem to work with;

www. meds .com



Re: Extending XBL to all untrusted

2009-07-13 Thread Matus UHLAR - fantomas
 On Fri, 3 Jul 2009, RW wrote:
  I understand that Spamhaus doesn't recommend this, because dynamic IP
  addresses can be reassigned from a spambot to another user, but I added
  my own rule it does seem to work. In my mail it hits about 9% of my
  spam, with zero false-positives.

On 13.07.09 14:22, Tony Finch wrote:
 You will get false positives from senders that are using remote message
 submission, and from some webmail users if their server puts the webmail
 client IP address in the message headers.

Agreed, although some kind of authentication should be done in either case,
which should prevent the rules from hitting; but many ISPs and ESPs don't
push auth information to Received: headers...

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Quantum mechanics: The dreams stuff is made of. 


Re: [NEW SPAM FLOOD] www.shopXX.net

2009-07-13 Thread McDonald, Dan
On Mon, 2009-07-13 at 16:03 +0100, rich...@buzzhost.co.uk wrote:
 On Mon, 2009-07-13 at 10:46 -0400, Charles Gregory wrote:
  (?!www\.[a-z]{2,6}[0-9]{2,6}\.(com|net|org))
  www[^a-z0-9]+[a-z]{2,6}[0-9]{2,6}[^a-z0-9]+(com|net|org)
 
 Does not seem to work with;
 
 www. meds .com

It shouldn't.  The spammers have been using domains with 2-4 alpha
characters and 2 digits.

 
-- 
Daniel J McDonald, CCIE # 2495, CISSP # 78281, CNX
www.austinenergy.com


signature.asc
Description: This is a digitally signed message part


Re: Extending XBL to all untrusted

2009-07-13 Thread rich...@buzzhost.co.uk
On Mon, 2009-07-13 at 17:19 +0200, Matus UHLAR - fantomas wrote:
  On Fri, 3 Jul 2009, RW wrote:
   I understand that Spamhaus doesn't recommend this, because dynamic IP
   addresses can be reassigned from a spambot to another user, but I added
   my own rule it does seem to work. In my mail it hits about 9% of my
   spam, with zero false-positives.
 
 On 13.07.09 14:22, Tony Finch wrote:
  You will get false positives from senders that are using remote message
  submission, and from some webmail users if their server puts the webmail
  client IP address in the message headers.
 
 agreed, although, some kind of authentication should be done in either case,
 which should prevent the rules from hitting, but many ISPs and ESPs don't
 push auth informations to Received: headers...
 
Do the RFCs state that they need to?



Re: [NEW SPAM FLOOD] www.shopXX.net

2009-07-13 Thread Charles Gregory

On Mon, 13 Jul 2009, rich...@buzzhost.co.uk wrote:

On Mon, 2009-07-13 at 10:46 -0400, Charles Gregory wrote:

(?!www\.[a-z]{2,6}[0-9]{2,6}\.(com|net|org))
www[^a-z0-9]+[a-z]{2,6}[0-9]{2,6}[^a-z0-9]+(com|net|org)


Does not seem to work with;
www. meds .com


Correct. With spaces being one of the possible obfuscation characters,
this otherwise 'broad' rule is limited to the cookie-cutter URLs with 
numeric suffixes in the hostnames - something unlikely to appear in 
conversational text like whether the [www can com]municate ideas... :)


- Charles




Re: Plugin extracting text from docs

2009-07-13 Thread Jonas Eckerman

Matus UHLAR - fantomas wrote:

Ah. I didn't see that option. That's nice. I'm now using pdftotext  
instead of pdftohtml here as well. :-)



I've been thinking about it. The pdftohtml could provide interesting
infromations like colour informations that could lead to better spam
detection. Any experiences with this?


You're right. It should be useful to extract to HTML when possible, and 
then use Mail::SpamAssassin::HTML to get and then set properties just 
like the rendered method of Mail::SpamAssassin::Message::Node does.


The nice way to do this would IMHO be to make it possible for a plugin 
to call the rendered method of Mail::SpamAssassin::Message::Node 
passing type and extracted data as parameters.


Something like this (completely untested, and watch for wraps):
---8<---
--- Node.pm Thu Jun 12 17:40:48 2008
+++ Node-new.pm Mon Jul 13 17:22:20 2009
@@ -411,16 +411,17 @@
 =cut
 
 sub rendered {
-  my ($self) = @_;
+  my ($self, $type, $text) = @_;
 
-  if (!exists $self->{rendered}) {
+  if ((defined($type) && defined($text)) || !exists $self->{rendered}) {
     # We only know how to render text/plain and text/html ...
     # Note: for bug 4843, make sure to skip text/calendar parts
     # we also want to skip things like text/x-vcard
     # text/x-aol is ignored here, but looks like text/html ...
+    $type = $self->{'type'} unless (defined($type));
     return(undef,undef) unless ( $self->{'type'} =~ /^text\/(?:plain|html)$/i );
 
-    my $text = $self->_normalize($self->decode(), $self->{charset});
+    $text = $self->_normalize($self->decode(), $self->{charset}) unless (defined($text));
 
     my $raw = length($text);
 
     # render text/html always, or any other text|text/plain part as text/html
---8<---

This way, AFAICT, any extracted (or generated) HTML should be treated 
the same way a normal text/html part is, making it available to HTML eval 
tests, for example.


Otherwise my plugin could of course use Mail::SpamAssassin::HTML itself.
Unfortunately Mail::SpamAssassin::Message::Node has no nice methods for 
setting the separate relevant properties, though, so either the 
set_rendered method needs to be expanded or complemented to allow this 
anyway, or my plugin will have to directly set the relevant properties 
(which makes it depend on Mail::SpamAssassin::Message::Node not being 
changed too much).


I guess I could do the hack version now, and then update it if/when 
Mail::SpamAssassin::Message::Node is updated to support this in a nice 
way. :-)


Regards
/Jonas
--
Jonas Eckerman
Fruktträdet & Förbundet Sveriges Dövblinda
http://www.fsdb.org/
http://www.frukt.org/
http://whatever.frukt.org/


Re: [NEW SPAM FLOOD] www.shopXX.net

2009-07-13 Thread John Hardin

On Mon, 13 Jul 2009, McDonald, Dan wrote:


On Mon, 2009-07-13 at 16:03 +0100, rich...@buzzhost.co.uk wrote:

On Mon, 2009-07-13 at 10:46 -0400, Charles Gregory wrote:

(?!www\.[a-z]{2,6}[0-9]{2,6}\.(com|net|org))
www[^a-z0-9]+[a-z]{2,6}[0-9]{2,6}[^a-z0-9]+(com|net|org)


Does not seem to work with;

www. meds .com


It shouldn't.  The spammers have been using domains with 2-4 alpha
characters and 2 digits.


Why be restrictive on the domain name?

\b(?!www\.\w{2,20}\.(?:com|net|org))www[^a-z0-9]+\w{2,20}[^a-z0-9]+(?:com|net|org)\b

The + signs are a little risky; it might be better to use {1,3} instead. 
And the older rule allowed for spaces in the TLD. I don't recall if 
anybody provided more than one spample with that, though.
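A quick check of the broader variant (a sketch; the test strings are invented). Note that it does catch the spaced-out form the earlier rule missed, while the \b anchors and look-ahead still skip a plain URL:

```python
import re

# John's broader rule: any \w{2,20} name (digits not required), \b-anchored.
PAT = re.compile(
    r"\b(?!www\.\w{2,20}\.(?:com|net|org))"
    r"www[^a-z0-9]+\w{2,20}[^a-z0-9]+(?:com|net|org)\b",
    re.I,
)

print(bool(PAT.search("order at www. meds .com now")))  # obfuscated: matches
print(bool(PAT.search("order at www.meds.com now")))    # plain URL: skipped
```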


--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Users mistake widespread adoption of Microsoft Office for the
  development of a document format standard.
---
 3 days until the 64th anniversary of the dawn of the Atomic Age


Re: [NEW SPAM FLOOD] www.shopXX.net

2009-07-13 Thread John Hardin

On Mon, 13 Jul 2009, Charles Gregory wrote:


On Mon, 13 Jul 2009, rich...@buzzhost.co.uk wrote:

 On Mon, 2009-07-13 at 10:46 -0400, Charles Gregory wrote:
  (?!www\.[a-z]{2,6}[0-9]{2,6}\.(com|net|org))
  www[^a-z0-9]+[a-z]{2,6}[0-9]{2,6}[^a-z0-9]+(com|net|org)

 Does not seem to work with;
 www. meds .com


Correct. With spaces being one of the possible obfuscation characters,
this otherwise 'broad' rule is limited to the cookie-cutter URL's with 
numeric suffixes in the hostnames - something unlikely to appear in 
conversational text like whether the [www can com]municate ideas... :)


That possible FP is why \b are important in the rule.

--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Users mistake widespread adoption of Microsoft Office for the
  development of a document format standard.
---
 3 days until the 64th anniversary of the dawn of the Atomic Age


Re: [NEW SPAM FLOOD] www.shopXX.net

2009-07-13 Thread Charles Gregory

On Mon, 13 Jul 2009, John Hardin wrote:

Why be restrictive on the domain name?


If a conservative spec is sufficient to match the spam, then we're
helping avoid false positives. I'd rather tweak the rule to
catch the spammer's new tricks than overgeneralize. :)

The + signs are a little risky, it might be better to use {1,3} instead.

(nod) Though without the '/m' option it would be limited to the same line.
My thinking is that a spammer would quickly figure out to add more 
obfuscation, and there is little risk of a false positive occurring with
that kind of broad spacing and an xxx99 domain name.

And the older rule allowed for spaces in the TLD. I don't recall if 
anybody provided more than one spample with that though.


I've not seen it too much, though it doesn't hurt to keep it in the
rule. I actually added it back into my live rule after I posted.

To answer your next post, I don't use '\b' because the next 'trick' coming 
will likely be something looking like Xwww herenn comX...  :)


- C


Re: Extending XBL to all untrusted

2009-07-13 Thread Ned Slider

RW wrote:

I think it might be worth having 2 XBL tests, a high scoring test on
last-external and a lower-scoring test that goes back through the
untrusted headers.

I understand that Spamhaus doesn't recommend this, because dynamic IP
addresses can be reassigned from a spambot to another user, but I added
my own rule and it does seem to work. In my mail it hits about 9% of my
spam, with zero false-positives. I suspect that part of this is down to
UK dynamic addresses being very sticky, but I ran my mailing lists
through SA for a few weeks and got 3 FPs out of ~2400. 



I do a very similar thing and see very similar results to yours.

I use zen.spamhaus.org to block at the SMTP level and then run all headers 
through sbl-xbl for a further few points. As already mentioned elsewhere 
in this thread, it will occasionally fire against ham, but I've only 
noticed that from senders to mailing lists who originate from extremely 
spammy ISPs (i.e., they hit plenty of other DNSBLs too).


Where I find it particularly useful is for mail accounts forwarding from 
ISP email addresses where checking of the last external IP would be 
inappropriate.



I think it's probably worth a point or so, and essentially it's free
- all of the zen lookups get done for SBL.






Re: Extending XBL to all untrusted

2009-07-13 Thread Matus UHLAR - fantomas
 On Mon, 2009-07-13 at 17:19 +0200, Matus UHLAR - fantomas wrote:
   On Fri, 3 Jul 2009, RW wrote:
I understand that Spamhaus doesn't recommend this, because dynamic IP
addresses can be reassigned from a spambot to another user, but I added
my own rule it does seem to work. In my mail it hits about 9% of my
spam, with zero false-positives.
  
  On 13.07.09 14:22, Tony Finch wrote:
   You will get false positives from senders that are using remote message
   submission, and from some webmail users if their server puts the webmail
   client IP address in the message headers.
  
  agreed, although, some kind of authentication should be done in either case,
  which should prevent the rules from hitting, but many ISPs and ESPs don't
  push auth informations to Received: headers...

On 13.07.09 16:26, rich...@buzzhost.co.uk wrote:
 Do the RFC's state that they need to?

yes, RFC4954 in section 7 does

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Spam is for losers who can't get business any other way.


Re: [NEW SPAM FLOOD] www.shopXX.net

2009-07-13 Thread John Hardin

On Mon, 13 Jul 2009, Charles Gregory wrote:


On Mon, 13 Jul 2009, John Hardin wrote:

 Why be restrictive on the domain name?


If a conservative spec is sufficient to match the spam, then we're
helping avoid false positives. I'd rather tweak the rule to
catch the new tricks of the spammer than overgeneralize. :)


Fair enough.

The + signs are a little risky, it might be better to use {1,3} 
instead.


(nod) Though without the '/m' option it would be limited to the same 
line.


body rules work on paragraphs, but you are right, the badness has an upper 
limit.


My thinking is that a spammer would quickly figure out to add more 
obfuscation, and there is little risk of a false positive occurring with 
that kind of broad spacing and an xxx99 domain name


Again, fair enough. But there's a limit to how complex the obfuscation can 
be made, because there's a point where people won't deobfuscate 
the URI to visit it.


To answer your next post, I don't use '\b' because the next 'trick' 
coming will likely be something looking like Xwww herenn comX...  :)


At that point it can be dealt with. Until then, using \b is an important 
way to avoid FPs.


--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Ignorance doesn't make stuff not exist.   -- Bucky Katt
---
 3 days until the 64th anniversary of the dawn of the Atomic Age


Re: Extending XBL to all untrusted

2009-07-13 Thread rich...@buzzhost.co.uk
On Mon, 2009-07-13 at 18:28 +0200, Matus UHLAR - fantomas wrote:
  On Mon, 2009-07-13 at 17:19 +0200, Matus UHLAR - fantomas wrote:
On Fri, 3 Jul 2009, RW wrote:
 I understand that Spamhaus doesn't recommend this, because dynamic IP
 addresses can be reassigned from a spambot to another user, but I 
 added
 my own rule it does seem to work. In my mail it hits about 9% of my
 spam, with zero false-positives.
   
   On 13.07.09 14:22, Tony Finch wrote:
You will get false positives from senders that are using remote message
submission, and from some webmail users if their server puts the webmail
client IP address in the message headers.
   
   agreed, although, some kind of authentication should be done in either 
   case,
   which should prevent the rules from hitting, but many ISPs and ESPs don't
   push auth informations to Received: headers...
 
 On 13.07.09 16:26, rich...@buzzhost.co.uk wrote:
  Do the RFC's state that they need to?
 
 yes, RFC4954 in section 7 does
 
Where? I don't see where it says it needs to push auth information to
Received: headers;


7.  Additional Requirements on Servers


   As described in Section 4.4 of [SMTP], an SMTP server that receives a
   message for delivery or further processing MUST insert the
   "Received:" header field at the beginning of the message content.
   This document places additional requirements on the content of a
   generated "Received:" header field.  Upon successful authentication,
   a server SHOULD use the "ESMTPA" or the "ESMTPSA" [SMTP-TT] (when
   appropriate) keyword in the "with" clause of the Received header
   field.

Am I missing what you are saying here?



Re: Extending XBL to all untrusted

2009-07-13 Thread rich...@buzzhost.co.uk
On Mon, 2009-07-13 at 17:38 +0100, rich...@buzzhost.co.uk wrote:
 On Mon, 2009-07-13 at 18:28 +0200, Matus UHLAR - fantomas wrote:
   On Mon, 2009-07-13 at 17:19 +0200, Matus UHLAR - fantomas wrote:
 On Fri, 3 Jul 2009, RW wrote:
  I understand that Spamhaus doesn't recommend this, because dynamic IP
  addresses can be reassigned from a spambot to another user, but I added
  my own rule and it does seem to work. In my mail it hits about 9% of my
  spam, with zero false-positives.

On 13.07.09 14:22, Tony Finch wrote:
 You will get false positives from senders that are using remote 
 message
 submission, and from some webmail users if their server puts the 
 webmail
 client IP address in the message headers.

agreed, although, some kind of authentication should be done in either
case, which should prevent the rules from hitting, but many ISPs and
ESPs don't push auth information to Received: headers...
  
  On 13.07.09 16:26, rich...@buzzhost.co.uk wrote:
   Do the RFCs state that they need to?
  
  yes, RFC4954 in section 7 does
  
 Where? I don't see where it says they need to push auth information to
 Received: headers.
 
 
 7.  Additional Requirements on Servers
 
 
As described in Section 4.4 of [SMTP], an SMTP server that receives a
message for delivery or further processing MUST insert the
Received: header field at the beginning of the message content.
This document places additional requirements on the content of a
generated Received: header field.  Upon successful authentication,
a server SHOULD use the ESMTPA or the ESMTPSA [SMTP-TT] (when
appropriate) keyword in the with clause of the Received header
field.
 
 Am I missing what you are saying here?
 
Got it! Now I understand where you are coming from:
Received: from [192.168.1.56] (rubiks [192.168.1.56]) by
 mail1.buzzhost.co.uk (XmasTree) 

AND HERE IT COMES.

with ESMTPA 


id E0C42AC0BE for

Now it makes sense.



Re: Extending XBL to all untrusted

2009-07-13 Thread Justin Mason
On Fri, Jul 3, 2009 at 22:43, RW rwmailli...@googlemail.com wrote:

 I think it might be worth having 2 XBL tests, a high scoring test on
 last-external and a lower-scoring test that goes back through the
 untrusted headers.

 I understand that Spamhaus doesn't recommend this, because dynamic IP
 addresses can be reassigned from a spambot to another user, but I added
 my own rule and it does seem to work. In my mail it hits about 9% of my
 spam, with zero false-positives. I suspect that part of this is down to
 UK dynamic addresses being very sticky, but I ran my mailing lists
 through SA for a few weeks and got 3 FPs out of ~2400.

 I think it's probably worth a point or so, and essentially it's free
 - all of the zen lookups get done for SBL.

we used to do it this way, but the FPs are (surprisingly) high due to
dynamic-address-pool churn.

compare:
OVERALL%   SPAM% HAM% S/ORANK   SCORE  NAME
 5.100  10.1740   0.02000.998   0.650.01  T_RCVD_IN_XBL  (with
trusted-networks)
 5.417  10.6074   0.22030.980   0.180.00  RCVD_IN_XBL  (with all nets)

I'll forward on the old mail for hysterical raisins.

--j.


Fwd: DNSBL accuracy using -firsttrusted

2009-07-13 Thread Justin Mason
that old message I was talking about.


-- Forwarded message --
From: Daniel Quinlan quin...@pathname.com
Date: Sat, May 22, 2004 at 16:25
Subject: DNSBL accuracy using -firsttrusted
To: spamassassin-...@incubator.apache.org


Someone at Spamhaus poked me to try testing only the last IP address
with XBL and I tested it and it helps reduce false positives quite
nicely.  The concept with XBL is that if it came most recently from an
okay host, then the message is probably okay too.  It's a bit spooky but
it works and I suppose it is closer in behavior to how blacklists are
generally used at connect time, so perhaps most are tuned to be used
this way.

The main caveat is that if trusted networks is not guessed or set
correctly, then *no* blacklist hits will happen and the net score set
will be used to the detriment of the site.

I tried the same idea on more or less every applicable blacklist and
check out the results:

--- start of cut text --
OVERALL%   SPAM%     HAM%     S/O    RANK   SCORE  NAME
 29979    14999    14980    0.500   0.00    0.00  (all messages)
100.000  50.0317  49.9683    0.500   0.00    0.00  (all messages as %)

 12.212  24.4083   0.    1.000   1.00    0.01  T_RCVD_IN_NJABL_PROXY
 12.962  25.7951   0.1135    0.996   0.57    0.00  RCVD_IN_NJABL_PROXY

 18.186  36.3291   0.0200    0.999   0.95    1.00  __T_RCVD_IN_NJABL
 19.877  38.1225   1.6088    0.960   0.30    1.00  __RCVD_IN_NJABL

 8.613  17.2145   0.    1.000   0.91    0.01  T_RCVD_IN_SORBS_MISC
 9.136  18.2412   0.0200    0.999   0.80    0.00  RCVD_IN_SORBS_MISC

 29.124  58.1705   0.0401    0.999   0.90    0.01  T_RCVD_IN_DSBL
 30.395  60.2640   0.4873    0.992   0.43    0.00  RCVD_IN_DSBL

 7.966  15.9211   0.    1.000   0.87    0.01  T_RCVD_IN_SORBS_HTTP
 8.449  16.8011   0.0868    0.995   0.49    0.00  RCVD_IN_SORBS_HTTP

 5.337  10.6540   0.0134    0.999   0.74    0.01  T_RCVD_IN_RFCI
 7.162  12.3675   1.9493    0.864   0.00    0.00  RCVD_IN_RFCI

 9.804  19.5613   0.0334    0.998   0.73    0.01  T_RCVD_IN_SBL
 9.927  19.7747   0.0668    0.997   0.62    0.00  RCVD_IN_SBL

 14.610  29.1486   0.0534    0.998   0.73    1.00  __T_RCVD_IN_SBL_XBL
 15.044  29.7820   0.2870    0.990   0.35    1.00  __RCVD_IN_SBL_XBL

 3.116   6.2204   0.0067    0.999   0.72    0.00  RCVD_IN_NJABL_SPAM
 3.062   6.1137   0.0067    0.999   0.70    0.01  T_RCVD_IN_NJABL_SPAM

 2.055   4.1069   0.    1.000   0.66    0.01  T_RCVD_IN_BL_SPAMCOP_NET
 2.235   4.3070   0.1602    0.964   0.14    0.00  RCVD_IN_BL_SPAMCOP_NET

 5.100  10.1740   0.0200    0.998   0.65    0.01  T_RCVD_IN_XBL
 5.417  10.6074   0.2203    0.980   0.18    0.00  RCVD_IN_XBL

 21.869  43.5562   0.1535    0.996   0.64    0.01  T_RCVD_IN_SORBS_DUL
 22.146  44.0363   0.2270    0.995   0.48    0.00  RCVD_IN_SORBS_DUL

 34.071  67.9112   0.1869    0.997   0.63    1.00  __T_RCVD_IN_SORBS
 42.410  70.9047  13.8785    0.836   0.34    1.00  __RCVD_IN_SORBS

 1.868   3.7336   0.    1.000   0.64    0.00  RCVD_IN_SORBS_SMTP
 1.731   3.4602   0.    1.000   0.62    0.01  T_RCVD_IN_SORBS_SMTP

 2.935   5.8537   0.0134    0.998   0.63    0.00  RCVD_IN_NJABL_DIALUP
 2.879   5.7404   0.0134    0.998   0.61    0.01  T_RCVD_IN_NJABL_DIALUP

 0.934   1.8668   0.    1.000   0.57    0.01  T_RCVD_IN_RSL
 1.041   2.0735   0.0067    0.997   0.55    0.00  RCVD_IN_RSL

 0.607   1.2134   0.    1.000   0.53    0.01  T_RCVD_IN_SORBS_SOCKS
 0.637   1.2401   0.0334    0.974   0.33    0.00  RCVD_IN_SORBS_SOCKS

 0.430   0.8601   0.    1.000   0.49    0.01  T_RCVD_IN_SORBS_WEB
 0.447   0.8867   0.0067    0.993   0.46    0.00  RCVD_IN_SORBS_WEB

 0.254   0.5067   0.    1.000   0.44    0.01  T_RCVD_IN_SORBS_ZOMBIE
 0.307   0.5867   0.0267    0.956   0.29    0.00  RCVD_IN_SORBS_ZOMBIE

 0.117   0.2333   0.    1.000   0.42    0.00  RCVD_IN_NJABL_RELAY
 0.113   0.2267   0.    1.000   0.40    0.01  T_RCVD_IN_NJABL_RELAY
--- end 

change in RANK (relative to just the IP-based blacklists and the new
-firsttrusted ones in testing)

  0.74   RCVD_IN_RFCI
  0.52   RCVD_IN_BL_SPAMCOP_NET
  0.47   RCVD_IN_XBL
  0.47   RCVD_IN_DSBL
  0.43   RCVD_IN_NJABL_PROXY
  0.38   RCVD_IN_SORBS_HTTP
  0.20   RCVD_IN_SORBS_SOCKS
  0.16   RCVD_IN_SORBS_DUL
  0.15   RCVD_IN_SORBS_ZOMBIE
  0.11   RCVD_IN_SORBS_MISC
  0.11   RCVD_IN_SBL
  0.03   RCVD_IN_SORBS_WEB
  0.02   RCVD_IN_RSL
 -0.02   RCVD_IN_NJABL_DIALUP
 -0.02   RCVD_IN_NJABL_RELAY
 -0.02   RCVD_IN_NJABL_SPAM
 -0.02   RCVD_IN_SORBS_SMTP

and not really relevant unless we change entire sets to reduce the
number of look-ups:

  0.65   __RCVD_IN_NJABL
  0.38   __RCVD_IN_SBL_XBL
  0.29   __RCVD_IN_SORBS

Results for some fresh mail that may still have a few misfiles:

--- start of cut text --
OVERALL%   SPAM%     HAM%     S/O    RANK   SCORE  NAME
  4039     2294     1745    0.568   0.00    0.00  (all messages)
100.000  56.7962  43.2038    0.568   0.00    0.00  (all 

Re: Extending XBL to all untrusted

2009-07-13 Thread McDonald, Dan
On Mon, 2009-07-13 at 17:38 +0100, rich...@buzzhost.co.uk wrote:
 On Mon, 2009-07-13 at 18:28 +0200, Matus UHLAR - fantomas wrote:
  On 13.07.09 16:26, rich...@buzzhost.co.uk wrote:
   Do the RFCs state that they need to?
  
  yes, RFC4954 in section 7 does
  
 Where? I don't see where it says they need to push auth information to
 Received: headers.
 
 
 7.  Additional Requirements on Servers
 
 Upon successful authentication,
a server SHOULD use the ESMTPA or the ESMTPSA [SMTP-TT] (when
appropriate) keyword in the with clause of the Received header
field.

It's a SHOULD, not a MUST, but the intent is clear.
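The SHOULD above can be checked mechanically; a minimal Python sketch of detecting the RFC 4954 authentication keyword in a Received: header (the sample header text and function name are made up for illustration):

```python
import re

# Sketch: look for the RFC 4954 section 7 keyword in a Received: header.
# "with ESMTPA" or "with ESMTPSA" means the hop used SMTP AUTH.
WITH_AUTH = re.compile(r'\bwith\s+ESMTPS?A\b', re.IGNORECASE)

def hop_was_authenticated(received_header):
    """True if the Received: header advertises an authenticated submission."""
    return WITH_AUTH.search(received_header) is not None

# Hypothetical header, modeled on the one quoted earlier in this thread.
hdr = ("from [192.168.1.56] (rubiks [192.168.1.56]) "
       "by mail1.example.org with ESMTPA id E0C42AC0BE")
print(hop_was_authenticated(hdr))   # True
```

Plain "with ESMTP" (no trailing A) does not match, which is the point: only hops that advertise authentication pass.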


-- 
Daniel J McDonald, CCIE # 2495, CISSP # 78281, CNX
www.austinenergy.com


signature.asc
Description: This is a digitally signed message part


Re: Extending XBL to all untrusted

2009-07-13 Thread Rob McEwen
I agree so strongly about not checking against all IPs in the header
that I'll probably turn down business from large anti-spam vendors who
cannot guarantee in writing that ivmSIP and ivmSIP/24 will ONLY be
checked against the actual sending IP. If this means I lose 4-5 figures
in annual revenue from future vendors, so be it. (and I don't think any
of my current largest subscribers are doing this.)

There is a better system. Work to find ways to better know which headers
are forwarders, ignore them, and grab the original sender's 'mta' IP
from THAT received header. (not IP the workstation which originated the
e-mail, but the mail server IP that officially sent the message on
behalf of the sender, but before any other forwarding).
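The approach Rob describes can be sketched in Python (the header strings, forwarder list, and function name here are illustrative assumptions, not code from any appliance):

```python
import re

# Sketch of the "surgeon's scalpel" idea: walk the Received: hops
# newest-first, skip relays known to be forwarders, and return the first
# remaining IP literal as the MTA that really sent the message.
IP_RE = re.compile(r'\[(\d{1,3}(?:\.\d{1,3}){3})\]')

def sending_mta_ip(received_headers, known_forwarders):
    """received_headers: newest hop first; known_forwarders: set of IPs."""
    for hop in received_headers:
        match = IP_RE.search(hop)
        if not match:
            continue                 # no IP literal in this hop
        ip = match.group(1)
        if ip in known_forwarders:
            continue                 # a forwarding relay: keep walking back
        return ip                    # first non-forwarder hop: the sender
    return None

hops = [
    "from mx.example.net [203.0.113.9] by mail.local",          # forwarder
    "from sender.example.com [198.51.100.7] by mx.example.net",
]
print(sending_mta_ip(hops, {"203.0.113.9"}))   # 198.51.100.7
```

The hard part in practice is building the forwarder list, which is exactly the extra effort the message argues vendors should make.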

This surgeon's scalpel approach is not always as easy as the
alternative sledgehammer approach, but it is worth the effort. Certain
large anti-spam appliance vendors have no excuse for not making this
extra effort... and I've seen some egregious FPs (for example...
hand-typed messages from an attorney to their client, sent from an IP
which doesn't ever send spam) recently caused by such appliances which
check all IPs in the header against blacklists.

-- 
Rob McEwen
http://dnsbl.invaluement.com/
r...@invaluement.com
+1 (478) 475-9032





Re: [NEW SPAM FLOOD] www.shopXX.net

2009-07-13 Thread Charles Gregory

On Mon, 13 Jul 2009, John Hardin wrote:

  The + signs are a little risky, it might be better to use {1,3} instead.
 (nod) Though without the '/m' option it would be limited to the same line.
body rules work on paragraphs, but you are right, the badness has an upper 
limit.


Ugh. Forgot it was 'paragraphs' and not 'lines' (and I just had that 
drilled into me recently, too). Paragraphs are too long. I'll switch it
to a specific limit.


 To answer your next post, I don't use '\b' because the next 'trick' coming
 will likely be something looking like Xwww herenn comX...  :)

At that point it can be dealt with.


Well, they're getting close. I'm seeing non-alpha non-blank crud cozied up 
to the front of the 'www' now :)


- C


Re: Extending XBL to all untrusted

2009-07-13 Thread RW
On Mon, 13 Jul 2009 17:21:36 +0100
Ned Slider n...@unixmail.co.uk wrote:

 I do a very similar thing and see very similar results to yours.
 
 I use zen.spamhaus to block at the smtp level and then run all
 headers through sbl-xbl for a further few points. As already
 mentioned elsewhere in this thread, it will occasionally fire against
 ham but I've only noticed that from senders to mailing lists who
 originate from extremely spammy ISPs (ie, they hit plenty of other
 DNSBLs too).
 

On Mon, 13 Jul 2009 17:48:17 +0100
Justin Mason j...@jmason.org wrote:

 we used to do it this way, but the FPs are (surprisingly) high due to
 dynamic-address-pool churn.

That kind of thing doesn't happen much with UK DSL.

I notice Ned Slider has a .co.uk address, so I think it probably does
matter where your mail comes from.


 compare:
 OVERALL%   SPAM% HAM% S/ORANK   SCORE  NAME
  5.100  10.1740   0.02000.998   0.650.01  T_RCVD_IN_XBL  (with
 trusted-networks)
  5.417  10.6074   0.22030.980   0.180.00  RCVD_IN_XBL  (with
 all nets)


Even on those figures, I still think it's worth scoring at the 0.5 to
1 point level.



Re: trusted_networks and internal_networks

2009-07-13 Thread mouss
MrGibbage a écrit :
 I have read the help pages for those two settings over and over, and I guess
 I'm just not smart enough.  I can't figure out what I should put for those
 two settings.  Can one of you give me a hand by looking at the headers from
 an email?  I can tell you that my SA installation is on
 ps11651.dreamhostps.com and the way I receive email is: my email is sent
 to my public email address, s...@pelorus.org and I have an auto-forwarder
 which sends the mail to my SA box via email, at
 skip-mor...@psoneonesixfiveone.dreamhostps.com (mangled here).  I never
 receive mail directly to skip-mor...@psoneonesixfiveone.dreamhostps.com.  If
 I did, it would have to be spam because they scraped the address from
 somewhere.  pelorus.org and ps11651.dreamhostps.com are the same box.  All
 the appriver stuff below is done on the sending side of my company's
 exchange server.
 
 Anyway, maybe I got it, but these two settings seemed too important to get
 wrong, so I just want to be sure.
 
 #ps11651.dreamhostps.com and pelorus.org
 internal_networks 75.119.219.171
 trusted_networks 75.119.219.171 #I think this is wrong

no, it is not wrong. the documentation says:

Every entry in internal_networks must appear in trusted_networks;

so whenever you put an internal_network line, you should add the same
line with trusted instead of internal.


 
 So is the idea that I could add more trusted_networks to the list, sort of
 like a whitelist.  Perhaps adding my work ip addresses below?  Isn't that
 trusted_networks setting above saying **ALL** mail is trusted to not be
 spam since **ALL** mail comes in on that IP address?  And what about the
 Received: from homiemail-mx7.g.dreamhost.com
 (balanced.mail.policyd.dreamhost.com [208.97.132.119])?  I have checked and
 I do receive all mail from one of 208.97.132.*  Should that be on my
 internal_networks?
 [snip]

here, trusted mostly means the relay does not forge Received headers. it
can relay spam, but it is not controlled by spammers (directly or via
trojans/open proxies/...).

to summarise:

for those relays that you trust not to be operated by spammers (directly
or not):
- if they receive mail from residential/dynamic IPs (without
authentication), then list them in trusted_networks only
- else, list them in both internal_networks and trusted_networks

If this is too theoretical, consider the practical side: When SA looks
up PBL, SORBS_DUL, ..., it will not look up IPs listed in
internal_networks.

in general, your own relays will be listed in both internal_networks and
 trusted_networks. but if you have a forwarder that is not under your
control, and that may be used to relay mail for residential IPs, then
you don't want to put it in internal_networks (otherwise, mail from the
residential IPs may be caught by PBL, SORBS_DUL, ... even though it is
relayed via a smarthost, as is generally recommended).
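A concrete sketch for the setup discussed in this thread (treating the DreamHost MX pool as trusted-but-not-internal is an assumption based on the headers described earlier, not a verified fact about that host):

```
# The SA host itself (ps11651.dreamhostps.com / pelorus.org):
# under your control, so both trusted and internal.
trusted_networks  75.119.219.171
internal_networks 75.119.219.171

# The DreamHost MX/forwarder pool: trusted not to forge Received
# headers, but left out of internal_networks so PBL/SORBS_DUL can
# still be looked up for the hop behind it.
trusted_networks  208.97.132.0/24
```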



Re: trusted_networks and internal_networks

2009-07-13 Thread Jari Fredriksson
 MrGibbage a écrit :
 #ps11651.dreamhostps.com and pelorus.org
 internal_networks 75.119.219.171
 trusted_networks 75.119.219.171 #I think this is wrong
 
 no, it is not wrong. the documentation says:
 
 Every entry in internal_networks must appear in trusted_networks;
 
 so whenever you put an internal_network line, you should
 add the same line with trusted instead of internal.
 

If that is indeed true, it is a BUG IMO.

Brain dead requirement!


Re: trusted_networks and internal_networks

2009-07-13 Thread mouss
Jari Fredriksson a écrit :
 MrGibbage a écrit :
 #ps11651.dreamhostps.com and pelorus.org
 internal_networks 75.119.219.171
 trusted_networks 75.119.219.171 #I think this is wrong
 no, it is not wrong. the documentation says:

 Every entry in internal_networks must appear in trusted_networks;

 so whenever you put an internal_network line, you should
 add the same line with trusted instead of internal.

 
 If that is indeed true,

As of 3.2.5, Received.pm contains this:

if (!$relay->{auth} && !$trusted->contains_ip($relay->{ip})) {
  $in_trusted = 0;
  $in_internal = 0; # if it's not trusted it's not internal
}

so as soon as an untrusted relay is found, it is considered as
external.

 it is a BUG IMO.
 

not really a bug, just a configuration annoyance. I mean, since
internal_networks is a subset of trusted_networks, any internal
relay should automatically be considered trusted, without the need
to duplicate information.


 Brain dead requirement!

the requirement is reasonable. an internal relay that wouldn't be
trusted is irrelevant. why would you want to skip PBL/DUL lookup for
an IP that may be forged?


forward mails as spam

2009-07-13 Thread neroxyr

Hi,
I've been running SA for about a month, and everything was running great
until I configured our domain mail to forward messages to a Gmail account.
As a test I sent an email from my Gmail account to my domain address; the
message arrives at my domain and should then immediately be forwarded back
to Gmail.
Here's where the problem surfaces, as I receive this bounce:

***
Gmail Test t...@gmail.com
a las 17:08
Mail Delivery Subsystem mailer-dae...@mydomain.com13 de julio de 2009
17:08
Para: t...@gmail.com
The original message was received at Tue, 14 Jul 2009 03:08:52 +0500 (GMT)
from avx [192.188.xx.xx]

  - The following addresses had permanent fatal errors -
t...@gmail.com
   (reason: 550 5.7.1 Blocked by SpamAssassin)
   (expanded from: b...@mydomain.com)

  - Transcript of session follows -
... while talking to breva.mydomain.com.:
 DATA
 550 5.7.1 Blocked by SpamAssassin
554 5.0.0 Service unavailable

Final-Recipient: RFC822; b...@mydomain.com
X-Actual-Recipient: RFC822; t...@gmail.com
Action: failed
Status: 5.7.1
Remote-MTA: DNS; breva.mydomain.com
Diagnostic-Code: SMTP; 550 5.7.1 Blocked by SpamAssassin
Last-Attempt-Date: Tue, 14 Jul 2009 03:08:58 +0500 (GMT)


-- Mensaje reenviado --
From: Test t...@gmail.com
To: b...@mydomain.com
Date: Mon, 13 Jul 2009 17:08:08 -0500
Subject: a las 17:08
probando sa a las 17:09
***

Checking the maillog, I can see that SA is blocking this message as it is
being considered spam with a score of 103.5/4.5. I don't know how SA
gets this score.

Hope you can help with that.

Thanks in advance,
Brennero Pardo
-- 
View this message in context: 
http://www.nabble.com/forward-mails-as-spam-tp24470970p24470970.html
Sent from the SpamAssassin - Users mailing list archive at Nabble.com.



Re: forward mails as spam

2009-07-13 Thread Evan Platt

At 04:03 PM 7/13/2009, you wrote:


Hi,
I've been running SA for about a month, and everything was running great
until I configured our domain mail to forward messages to a Gmail account.
As a test I sent an email from my Gmail account to my domain address; the
message arrives at my domain and should then immediately be forwarded back
to Gmail.
Here's where the problem surfaces, as I receive this bounce:

***

  - The following addresses had permanent fatal errors -
t...@gmail.com
   (reason: 550 5.7.1 Blocked by SpamAssassin)
   (expanded from: b...@mydomain.com)


So what is blocking the mail? SpamAssassin isn't. Your logs SAY
"Blocked by SpamAssassin", but SpamAssassin doesn't have the
capability to block messages.




Checking the maillog, I can see why SA is blocking this message as it is
being considered as a spam with a score of 103.5/4.5. I don't know how SA
gets this score.


As above, something else is blocking the message.

And without the original header and body of the message, no one knows 
why SA scored it 103.5.




Re: forward mails as spam

2009-07-13 Thread John Hardin

On Mon, 13 Jul 2009, neroxyr wrote:

Checking the maillog, I can see why SA is blocking this message as it is 
being considered as a spam with a score of 103.5/4.5. I don't know how 
SA gets this score.


Hope you can help with that.


Not without a copy of the message in question, including full headers as 
they appear at the time SA is scanning the message...


--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Politicians never accuse you of greed for wanting other people's
  money, only for wanting to keep your own money.-- Joseph Sobran
---
 3 days until the 64th anniversary of the dawn of the Atomic Age


Re: forward mails as spam

2009-07-13 Thread neroxyr

Hope this is the log you wanted

http://www.nabble.com/file/p24471425/block.jpg 
-- 
View this message in context: 
http://www.nabble.com/forward-mails-as-spam-tp24470970p24471425.html
Sent from the SpamAssassin - Users mailing list archive at Nabble.com.



Re: [NEW SPAM FLOOD] www.shopXX.net

2009-07-13 Thread Cedric Knight
Chris Owen wrote:
 On Jul 13, 2009, at 2:55 PM, Charles Gregory wrote:
 
 To answer your next post, I don't use '\b' because the next 'trick'
 coming
 will likely be something looking like Xwww herenn comX...  :)
 At that point it can be dealt with.
 
 Well, they're getting close. I'm seeing non-alpha non-blank crud
 cozied up to the front of the 'www' now :)

Not forgetting underscores are not word boundaries.  My alternative
rules are badly written but are still hitting with the \b:

rawbody NONLINK_SHORT
/^.{0,500}\b(?:H\s*T\s*T\s*P\s*[:;](?!http:)\W{0,10}|W\s{0,10}W\s{0,10}W\s{0,10}(?:[.,\'`_+\-]\s{0,10})?(?!www\.))[a-z0-9\-]{3,13}\s{0,10}(?:[.,\'`_+\-]\s{0,10})?(?![a-z0-9]\.)(?:net|c\s{0,10}o\s{0,10}m|org|info|biz)\b/si
describe NONLINK_SHORT  Obfuscated link near top of text
score NONLINK_SHORT 2.5

#quite strict:
rawbody NONLINK_VSHORT  /^.{0,100}\bwww{0,2}(?:\. | \.|
?[,*_\-\+] ?)[a-z]{2,5}[0-9\-]{1,5}(?:\. | \.| ?[,*_\-\+]
?)(?:net|c\s{0,10}o\s{0,10}m|org|info|biz)(?:\. \S|\s*$)/s
describe NONLINK_VSHORT Specific obfuscated link form near top
of text
score NONLINK_VSHORT2.5

(These use rawbody with a caret to limit the area of matching to the
first few lines.)

So how about dropping the \b and using something looser like: 'w
?w(?!\.[a-z0-9\-]{2,12}\.(?:com|info|net|org|biz))[[:punct:]X
]{1,4}[a-z0-9\-]{2,12}[[:punct:]X ]{1,4}(?:c ?o ?m|info|n ?e ?t|o ?r
?g|biz)([[:punct:]X ]|$)'   ...?
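For illustration, a simplified Python rendering of the same loosened idea (this toy pattern is a sketch under stated assumptions, far narrower than the actual rules above):

```python
import re

# Toy illustration only: "www", junk separators, a short label, junk
# separators, then a TLD -- but not a normally-dotted www.domain.tld,
# which the negative lookahead excludes.
OBFUSCATED = re.compile(
    r'w\s?w\s?w'
    r'(?!\.[a-z0-9\-]{2,12}\.(?:com|net|org|info|biz)\b)'   # skip real links
    r'[\s.,*_+\-]{1,4}[a-z0-9\-]{2,12}'                     # junk + label
    r'[\s.,*_+\-]{1,4}(?:com|net|org|info|biz)\b',          # junk + TLD
    re.IGNORECASE)

print(bool(OBFUSCATED.search("visit www . shop44 . net today")))   # True
print(bool(OBFUSCATED.search("visit www.example.com today")))      # False
```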

 
 
 Which of course means we've long since passed the point where any of
 these are going to do the spammers any good.  That's the frustrating part.

You're making the common assumption that spammers send UCE because it
makes them money.  In fact they do it because they are obnoxious
imbeciles who want to annoy people and waste as much time (human and
CPU) as possible.  I don't think it really matters to them that what
they are sending is incomprehensible noise, because noise is their message.

Cheers

CK


Re: forward mails as spam

2009-07-13 Thread Evan Platt

At 04:45 PM 7/13/2009, you wrote:


Hope this is the log you wanted

http://www.nabble.com/file/p24471425/block.jpg


Who are you talking to? I only see two replies, mine and another, and 
neither of us asked for a jpg image of a log.


If you're going to post something as simple as a log file, copy and 
paste as text.


And again, nothing in that log indicates SpamAssassin blocked the 
mail (it can't).


To my untrained eye, it looks like sendmail blocked the mail.

But again, without the full headers and body of the message, we don't know.




Re: forward mails as spam

2009-07-13 Thread John Hardin

On Mon, 13 Jul 2009, neroxyr wrote:


Hope this is the log you wanted

http://www.nabble.com/file/p24471425/block.jpg


No, don't send the log. Especially, don't send a *screenshot* of the log. 
Upload a copy of your test message (in text, with all headers intact) to 
someplace like pastebin. To do this you may need to alter your MTA/SA 
configuration to capture and save the message to a quarantine mailbox 
rather than rejecting it.


--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Windows Vista: Windows ME for the XP generation.
---
 3 days until the 64th anniversary of the dawn of the Atomic Age


Re: forward mails as spam

2009-07-13 Thread Cedric Knight
neroxyr wrote:
 Hope this is the log you wanted

 http://www.nabble.com/file/p24471425/block.jpg

It's not possible to see from this whether the first log line that you
have highlighted is necessarily related to the second and third
highlights (the message IDs are different), but I'll assume they are.

What is clear is that USER_IN_BLACKLIST caused 100 of the 103 point
score.  Do you perhaps have
   blacklist_from brennero..e etc
in your local.cf, or some blacklist_from with a * wildcard?

CK