Re: Sudden surge in spam appearing to come from my email address

2023-07-14 Thread Robert Senger
I've set up a subdomain, lists.mydomain.de, that I use only for mailing
lists (with regular expressions as the local part, so each list gets a
unique address; I forgot to do that here...). It has soft SPF and DMARC
policies, which lets me use hard-failure SPF and DKIM policies for
the domain mydomain.de itself.
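
For reference, a minimal sketch of the "claims to be from me but fails
SPF/DKIM" meta rule that David Funk suggests further down; the rule names
and the domain are placeholders, while SPF_PASS and DKIM_VALID_AU are the
stock SpamAssassin tests:

header   __FROM_MY_DOMAIN      From:addr =~ /\@mydomain\.de$/i
meta     LOCAL_FROM_ME_FORGED  (__FROM_MY_DOMAIN && !SPF_PASS && !DKIM_VALID_AU)
describe LOCAL_FROM_ME_FORGED  Claims to be from my domain but passes neither SPF nor aligned DKIM
score    LOCAL_FROM_ME_FORGED  5.0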

Robert

On Friday, 2023-07-14 at 19:28 -0500, Thomas Cameron wrote:
> This kinda raises an important issue. I already have SPF/DMARC/DKIM set
> up. But because I use several mailing lists, I do not have a hard fail
> set up. I get SO many notices when I send email to lists that I'm really
> worried about defining hard failures/rejections.
> 
> But I'll play around with what you suggested.
> 
> Thomas
> 
> On 7/14/23 18:58, David B Funk wrote:
> > 
> > Assuming you own/manage your infrastructure it should be
> > straightforward.
> > 
> > Create SPF records for your domain & SMTP server, set them to either
> > soft or hard fail mode. If you can, also set up DKIM signing of your
> > outgoing mail.
> > 
> > Then create rules that look for your from address in a message and a
> > meta which says "if from me & DKIM-fail/SPF-fail hit it hard".
> > 
> > If you can work with the SPF hard fail you will also help to improve
> > your net reputation, as spammers will have a harder time trying to
> > "Joe Job" you.
> > 
> > 
> > On Fri, 14 Jul 2023, Thomas Cameron wrote:
> > 
> > > All -
> > > 
> > > I am suddenly getting hammered by a BUNCH of spam that appears to be
> > > from me. It scores low, and even though I keep feeding it to Bayes,
> > > it's still not hitting the threshold to be marked as spam.
> > > 
> > > When I check the headers, it's coming from multiple random email
> > > servers, but many appear to originate from hotmail/outlook.com. So
> > > from outlook.com, through some unsecured email server, then to my
> > > server.
> > > 
> > > I'm trying to figure out how to block this stuff. Something like "if
> > > it appears to come from me, but it's not actually coming from my
> > > email server," block it. I don't necessarily think this is a job for
> > > SA, but if there's a rule I can tweak or a setting I can change, I'm
> > > all ears.
> > > 
> > > Thanks,
> > > Thomas
> > > 
> > > 
> > 
> 

-- 
Robert Senger





Re: Best practice for adding headers?

2023-07-11 Thread Robert Senger
> 
> I don't see a problem in this particular case. 
> 
> Nobody but SA or a compatible spam filter adds X-Spam: headers.
> These headers are meant to be added by your local MTA when delivering
> mail and not to be distributed over the net, although that happens.
> They also should not be used for DKIM signatures.
> 
> Trusting them generally when received from an external source is silly,
> just like trusting "this mail does not contain viruses" headers.
> 
> For the few sources whose x-spam* headers I do trust, I make exceptions
> based on the sending MTA addresses.
> 
> > Since I need to patch spamass-milter anyway to resolve a different
> > issue (calling "sendmail -bv " does not work on postfix
> > systems)
> 
> you can use the -S option to override the path to sendmail and call
> your own script instead of patching spamass-milter
> 
> 

Agreed, it's not a problem from a technical point of view: the -S option
makes it easy to call something other than sendmail (that's what I am
doing right now).

It's more a matter of, well, cosmetics or aesthetics...

Regards,

Robert 

-- 
Robert Senger





Re: Best practice for adding headers?

2023-07-09 Thread Robert Senger
On Sunday, 2023-07-09 at 13:55 -0700, Loren Wilton wrote:
> > I've patched spamass-milter to leave any previously added "X-Spam"
> > headers untouched
> 
> It's generally considered bad practice to pass through X-Spam headers
> from an unknown source.
> Like most anything else in an email header, a spammer could inject his
> own headers, probably populated with items designed to generate a
> negative score.
> 

Sure, but updating some headers in place while adding others somewhere
else, the way spamass-milter does it, is also bad practice in my eyes...

I've seen that other milters (clamav-milter in particular) offer an
option to either keep or remove existing virus scanning headers. 

Since I need to patch spamass-milter anyway to resolve a different
issue (calling "sendmail -bv " does not work on postfix
systems), it should be easy to add such an option to spamass-milter.

Regards,

Robert

-- 
Robert Senger





Re: Best practice for adding headers?

2023-07-09 Thread Robert Senger
On Sunday, 2023-07-09 at 19:23 +0200, David Bürgin wrote:
> Hello Robert,
> 
> > Now, I am a bit uncertain about what would be the best practice for
> > a milter to place its headers.
> > 
> > I've patched spamass-milter to leave any previously added "X-Spam"
> > headers untouched and just add its own headers on top of the header
> > list as required by SpamAssassin's results, thus leaving it up to the
> > downstream software to choose which "X-Spam" headers to use for
> > further processing. This is okay for me.
> > 
> > In its original code, spamass-milter adds its own headers to the
> > bottom of the header list, or updates existing "X-Spam" headers in
> > place if their names match those spamass-milter uses.
> > 
> > What do you think?
> 
> I can’t speak for spamass-milter, but in an alternative milter that I
> created¹, I tried to emulate what the ‘spamassassin’ executable does:
> Delete all incoming X-Spam- headers, and insert the newly added
> headers
> at the top.
> 
> Ciao,
> David
> 
Thanks David, I'd never heard of spamassassin-milter before (it's not in
the Debian repos), but I'll give it a try, as there seem to be more
issues with spamass-milter.

Robert

> ¹ https://crates.io/crates/spamassassin-milter

-- 
Robert Senger





Re: Share bayes database between servers

2023-07-09 Thread Robert Senger
On Sunday, 2023-07-09 at 19:21 +0200, Reindl Harald wrote:
> 
> 
> > On 2023-07-09 at 19:06, Robert Senger wrote:
> > But bayes data may be updated by either the primary mx or the
> > backup
> > mx, since email may arrive at either server.
> 
> in a smart setup your bayes database is read-only, like here since
> 2014: autolearning disabled, trained strictly by hand from a stored
> corpus, which gives you the opportunity to remove and add messages in
> the training folders and rebuild from scratch
> 
> we share our bayes-db even with a different company since 2014

Well, that's the boring solution... ;) Nevertheless, this is what I
will likely do if I encounter any problems with the mysql master-master
replication as I have it running now.
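
For reference, that read-only approach would roughly come down to this
in local.cf, with all training done by hand (a sketch; the corpus paths
are placeholders):

# disable autolearning so only manual training touches the bayes database
bayes_auto_learn 0
# then train strictly from the stored corpus, e.g.:
#   sa-learn --spam /path/to/corpus/spam
#   sa-learn --ham  /path/to/corpus/ham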

Robert

-- 
Robert Senger





Best practice for adding headers?

2023-07-09 Thread Robert Senger
First of all, thanks for your help!

Now, I am a bit uncertain about what would be the best practice for a
milter to place its headers.

I've patched spamass-milter to leave any previously added "X-Spam"
headers untouched and just add its own headers on top of the header
list as required by SpamAssassin's results, thus leaving it up to the
downstream software to choose which "X-Spam" headers to use for further
processing. This is okay for me.

In its original code, spamass-milter adds its own headers to the bottom
of the header list, or updates existing "X-Spam" headers in place if
their names match those spamass-milter uses. 

What do you think?

Robert


On Wednesday, 2023-07-05 at 01:38 +0200, Robert Senger wrote:
> Hi all,
> 
> is there a reason why spamassassin adds its "X-Spam ..." headers to
> the bottom of the header block, not to the top like every other mail
> filtering software (e.g. opendkim, opendmarc, clamav ...) does? Can
> this behaviour be changed?
> 
> Regards, 
> 
> Robert
> 

-- 
Robert Senger





Share bayes database between servers

2023-07-09 Thread Robert Senger
Hi there,

I am running two mail servers; the first one serves two domains, the
other one serves one domain.

Both serve as backup mx for each other. Both know about users and
aliases of the other domain(s).

On both systems, SpamAssassin is configured to read and store userprefs
and bayes data (per user) in a local MySQL database.
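
Roughly, the relevant local.cf section looks like this (a sketch; the
DSN and credentials are placeholders):

# per-user prefs and per-user bayes data in a local MySQL database
user_scores_dsn          DBI:mysql:spamassassin:localhost
user_scores_sql_username sa_user
user_scores_sql_password secret

bayes_store_module       Mail::SpamAssassin::BayesStore::MySQL
bayes_sql_dsn            DBI:mysql:spamassassin:localhost
bayes_sql_username       sa_user
bayes_sql_password       secret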

Both systems reject email if the score exceeds a certain limit. To
avoid backscatter (or the need to accept any spam not rejected by the
backup mx), both servers should do their spam filtering based on
exactly the same information, including bayes data.

Now, the question is, what is the best way to share bayes data between
two (or more) servers?

I already share userprefs by setting up master-master replication
between the two MySQL databases on both servers. This is unproblematic,
since users (or admins) only update the userprefs for the local virtual
users on each system, which means the backup MX never touches the
primary MX's userprefs.

But bayes data may be updated by either the primary mx or the backup
mx, since email may arrive at either server. 

I've set up a testing environment that also uses master-master
replication of the MySQL bayes database, with equal MX priority in DNS
for both servers so that incoming mail is distributed evenly between
them. So far this seems to work, but it is a low-load environment.

Any suggestions?

Regards,

Robert


-- 
Robert Senger





Re: Position of X-Spam headers

2023-07-05 Thread Robert Senger
On Wednesday, 2023-07-05 at 14:50 +0200, Reindl Harald wrote:
> 
> *nothing* should touch existing headers as you also have multiple
> Received headers

Good point. So, it seems that spamass-milter is doing things a bit,
well, unconventionally...

I assumed this was done to avoid confusing later filtering (e.g. sieve)
with multiple "X-Spam-Flag" headers carrying possibly contradictory
results.

However, it should be easy to patch spamass-milter to keep existing
headers intact.

-- 
Robert Senger





Re: Position of X-Spam headers

2023-07-05 Thread Robert Senger
On Wednesday, 2023-07-05 at 10:20 +0200, Matus UHLAR - fantomas wrote:
> On 05.07.23 04:38, Robert Senger wrote:
> > Thanks for the hint that the milter is responsible for that. Found
> > a
> > little patch for spamass-milter that fixed this.
> 
> note that the headers that appear first in the message are considered
> trusted, while those below are not.
> That's why most milters put added headers at the beginning of the
> message.

Hm, trusted by whom? In my understanding, nothing in the headers can be
trusted at all as long as it's not covered by a digital signature (like
DKIM), or added by a machine under my own control...



Other point: different spam-processing milters seem to add different
"X-Spam-" headers.

The spamass-milter software adds 

X-Spam-Checker-Version: 
X-Spam-Status: 

and, if it detects spam,

X-Spam-Flag: YES
X-Spam-Level: ***

Now, spamass-milter *replaces* any of these if they are found in the
incoming message. So, all the spam checking information added by my
backup MX is replaced by the headers of my primary MX when it receives
a message initially delivered to the backup MX, as they both use the
same spamass-milter software.

But if I look at a message received through this list, I see "X-Spam"
headers added by "Debian amavisd-new at spamproc1-he-fi.apache.org".
This software always adds

X-Spam-Score: 
X-Spam-Level: 
X-Spam-Status: 
(but no X-Spam-Checker-Version:)
 
to the top of the headers if the message is not classified as spam (it
would also add "X-Spam-Flag" if it detects spam, I assume). Now, my own
spamass-milter *replaces* "X-Spam-Status" at its original position, and
*adds* "X-Spam-Checker-Version" at the bottom (or top, if patched) of
the headers. This is a mess...

Wouldn't it be better if all previous "X-Spam" headers were completely
removed?

-- 
Robert Senger





Re: Position of X-Spam headers

2023-07-04 Thread Robert Senger
Thanks for the hint that the milter is responsible for that. Found a
little patch for spamass-milter that fixed this.

Regards,

Robert


On Tuesday, 2023-07-04 at 19:45 -0400, Jared Hall wrote:
> On 7/4/2023 7:38 PM, Robert Senger wrote:
> > is there a reason why spamassassin adds its "X-Spam ..." headers to
> > the bottom of the header block, not to the top like every other mail
> > filtering software (e.g. opendkim, opendmarc, clamav ...) does? Can
> > this behaviour be changed?
> Mine are at the top, but usually this is the responsibility of the 
> Milter.  What Milter/content_filter are you using?
> 
> -- Jared Hall
> 

-- 
Robert Senger





Re: Position of X-Spam headers

2023-07-04 Thread Robert Senger
Hi Jared,

I am using spamass-milter.

Robert

On Tuesday, 2023-07-04 at 19:45 -0400, Jared Hall wrote:
> On 7/4/2023 7:38 PM, Robert Senger wrote:
> > is there a reason why spamassassin adds its "X-Spam ..." headers to
> > the bottom of the header block, not to the top like every other mail
> > filtering software (e.g. opendkim, opendmarc, clamav ...) does? Can
> > this behaviour be changed?
> Mine are at the top, but usually this is the responsibility of the 
> Milter.  What Milter/content_filter are you using?
> 
> -- Jared Hall
> 

-- 
Robert Senger





Position of X-Spam headers

2023-07-04 Thread Robert Senger
Hi all,

is there a reason why spamassassin adds its "X-Spam ..." headers to the
bottom of the header block, not to the top like every other mail
filtering software (e.g. opendkim, opendmarc, clamav ...) does? Can this
behaviour be changed?

Regards, 

Robert

-- 
Robert Senger





Re: 4.0.0 noisier than earlier releases?

2023-05-18 Thread Robert Nicholson
For what it's worth, my Perl script has the following "imports":

use IO::Handle;
use Date::Parse;
use Time::Zone;
use Mail::Audit qw(List KillDups);
use Mail::SpamAssassin;
use Mail::SpamAssassin::Message;
use Mail::SpamAssassin::PerMsgStatus;
use Mail::SpamAssassin::PluginHandler;
use IO::Scalar;
use MIME::Parser;
use MIME::Entity;
use Mail::Address;
use Email::Valid;
use Text::Wrap;
use File::Path;
#use SOAP::Lite;
use Mail::IMAPClient;
use HTTP::Request::Common qw(GET POST);
use LWP::UserAgent;
use XML::XPath;
use Getopt::Long;
use HTML::LinkExtor;

use DB_File::Lock;
use Fcntl qw(:DEFAULT :flock);
use POSIX qw(strftime);

use Authen::Captcha;

use MIME::Base64;
use Digest::HMAC_SHA1 qw(hmac_sha1 hmac_sha1_hex);
use URI::Escape qw(uri_escape);

It's not a coincidence, is it, that when you research these errors you
end up finding SA-specific references like the following:

https://www.mail-archive.com/users@spamassassin.apache.org/msg100167.html

 perl -version

This is perl 5, version 16, subversion 3 (v5.16.3) built for 
x86_64-linux-thread-multi
(with 44 registered patches, see perl -V for more detail)

Copyright 1987-2012, Larry Wall

Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl".  If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.



> On May 15, 2023, at 8:52 PM, Robert Nicholson  wrote:
> 
> Subroutine NetAddr::IP::STORABLE_freeze redefined at 
> /usr/local/lib64/perl5/NetAddr/IP.pm line 365.



Exim errors related to the SpamAssassin?

2023-05-17 Thread Robert Nicholson
So the exim error I see is something like this

2023-05-17 13:16:14 1pyvlo-0006AM-0v internal problem in userforward router 
(recipient is elast...@lhvm02.lizardhill.com): failure to transfer data from 
subprocess: status=0100 readerror='No such file or directory’

Now, the userforward filter I have is a Perl script that launches
SpamAssassin programmatically.

I cannot tell for sure, but email from one particular address consistently
results in the above error, and I'm thinking it may be related to the
plugins in use by SA.

Besides Razor (you'll see I've commented it out), which of these rely
upon external executables?

init.pre:# loadplugin Mail::SpamAssassin::Plugin::RelayCountry
init.pre:loadplugin Mail::SpamAssassin::Plugin::URIDNSBL
init.pre:loadplugin Mail::SpamAssassin::Plugin::SPF
v310.pre:#loadplugin Mail::SpamAssassin::Plugin::DCC
v310.pre:loadplugin Mail::SpamAssassin::Plugin::Pyzor
v310.pre:#RDN loadplugin Mail::SpamAssassin::Plugin::Razor2
v310.pre:loadplugin Mail::SpamAssassin::Plugin::SpamCop
v310.pre:#loadplugin Mail::SpamAssassin::Plugin::AntiVirus
v310.pre:#loadplugin Mail::SpamAssassin::Plugin::AWL
v310.pre:loadplugin Mail::SpamAssassin::Plugin::AutoLearnThreshold
v310.pre:#loadplugin Mail::SpamAssassin::Plugin::TextCat
v310.pre:#loadplugin Mail::SpamAssassin::Plugin::AccessDB
v310.pre:loadplugin Mail::SpamAssassin::Plugin::WelcomeListSubject
v310.pre:loadplugin Mail::SpamAssassin::Plugin::MIMEHeader
v310.pre:loadplugin Mail::SpamAssassin::Plugin::ReplaceTags
v312.pre:loadplugin Mail::SpamAssassin::Plugin::DKIM
v320.pre:loadplugin Mail::SpamAssassin::Plugin::Check
v320.pre:loadplugin Mail::SpamAssassin::Plugin::HTTPSMismatch
v320.pre:loadplugin Mail::SpamAssassin::Plugin::URIDetail
v320.pre:# loadplugin Mail::SpamAssassin::Plugin::Shortcircuit
v320.pre:loadplugin Mail::SpamAssassin::Plugin::Bayes
v320.pre:loadplugin Mail::SpamAssassin::Plugin::BodyEval
v320.pre:loadplugin Mail::SpamAssassin::Plugin::DNSEval
v320.pre:loadplugin Mail::SpamAssassin::Plugin::HTMLEval
v320.pre:loadplugin Mail::SpamAssassin::Plugin::HeaderEval
v320.pre:loadplugin Mail::SpamAssassin::Plugin::MIMEEval
v320.pre:loadplugin Mail::SpamAssassin::Plugin::RelayEval
v320.pre:loadplugin Mail::SpamAssassin::Plugin::URIEval
v320.pre:loadplugin Mail::SpamAssassin::Plugin::WLBLEval
v320.pre:loadplugin Mail::SpamAssassin::Plugin::VBounce
v320.pre:# loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody
v320.pre:# loadplugin Mail::SpamAssassin::Plugin::ASN
v320.pre:loadplugin Mail::SpamAssassin::Plugin::ImageInfo
v330.pre:#loadplugin Mail::SpamAssassin::Plugin::PhishTag
v330.pre:loadplugin Mail::SpamAssassin::Plugin::FreeMail
v340.pre:loadplugin Mail::SpamAssassin::Plugin::AskDNS
v341.pre:# loadplugin Mail::SpamAssassin::Plugin::TxRep
v341.pre:# loadplugin Mail::SpamAssassin::Plugin::URILocalBL
v341.pre:# loadplugin Mail::SpamAssassin::Plugin::PDFInfo
v342.pre:loadplugin Mail::SpamAssassin::Plugin::HashBL
v342.pre:# loadplugin Mail::SpamAssassin::Plugin::ResourceLimits
v342.pre:# loadplugin Mail::SpamAssassin::Plugin::FromNameSpoof
v342.pre:# loadplugin Mail::SpamAssassin::Plugin::Phishing
v343.pre:# loadplugin Mail::SpamAssassin::Plugin::OLEVBMacro
v400.pre:# loadplugin Mail::SpamAssassin::Plugin::ExtractText
v400.pre:# loadplugin Mail::SpamAssassin::Plugin::DecodeShortURLs
v400.pre:loadplugin Mail::SpamAssassin::Plugin::DMARC

Please note that Razor does this on my machine at the moment, which means
it's out of date relative to what's currently on the box.

./razor-client
-bash: ./razor-client: /usr/local/bin/perl: bad interpreter: No such file or 
directory

But also note the "No such file or directory" part.

Do any of the above plugins similarly use external executables?

I'm not the administrator on the machine, so I cannot debug Exim, but I
could, if necessary, debug the Perl script and SA. So far I've not seen
anything unusual when I push offending messages through the script.







4.0.0 noisier than earlier releases?

2023-05-15 Thread Robert Nicholson
I remember writing in the past about what I saw in the debugger when
running SA 3.4.6.

It seems that 4.0.0 is even noisier.

Again, this is me calling SpamAssassin programmatically from a Perl script.

I've checked and I didn't find any other version of NetAddr::IP in @INC.

Subroutine NetAddr::IP::STORABLE_freeze redefined at 
/usr/local/lib64/perl5/NetAddr/IP.pm line 365.
at /usr/local/lib64/perl5/NetAddr/IP.pm line 365.
NetAddr::IP::import('NetAddr::IP') called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Plugin/WLBLEval.pm 
line 25
Mail::SpamAssassin::Plugin::WLBLEval::BEGIN() called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Plugin/WLBLEval.pm 
line 25
eval {...} called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Plugin/WLBLEval.pm 
line 25
require Mail/SpamAssassin/Plugin/WLBLEval.pm called at (eval 
231)[/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/PluginHandler.pm:129]
 line 1
eval ' require Mail::SpamAssassin::Plugin::WLBLEval; 
;' called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/PluginHandler.pm 
line 129

Mail::SpamAssassin::PluginHandler::load_plugin('Mail::SpamAssassin::PluginHandler=HASH(0x4ec4a20)',
 'Mail::SpamAssassin::Plugin::WLBLEval', undef, undef) called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Conf.pm line 5308

Mail::SpamAssassin::Conf::load_plugin('Mail::SpamAssassin::Conf=HASH(0x4e255f8)',
 'Mail::SpamAssassin::Plugin::WLBLEval', undef) called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Conf.pm line 4300

Mail::SpamAssassin::Conf::__ANON__[/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Conf.pm:4301]('Mail::SpamAssassin::Conf=HASH(0x4e255f8)',
 'loadplugin', 'Mail::SpamAssassin::Plugin::WLBLEval', 'loadplugin 
Mail::SpamAssassin::Plugin::WLBLEval') called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Conf/Parser.pm line 
437

Mail::SpamAssassin::Conf::Parser::parse('Mail::SpamAssassin::Conf::Parser=HASH(0x4e25688)',
 'file start /home/elastica/SALOCAL-4.0.0/etc/mail/spamassassin...', 0) called 
at /home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Conf.pm line 5032

Mail::SpamAssassin::Conf::parse_rules('Mail::SpamAssassin::Conf=HASH(0x4e255f8)',
 'file start /home/elastica/SALOCAL-4.0.0/etc/mail/spamassassin...') called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin.pm line 1792
Mail::SpamAssassin::init('Mail::SpamAssassin=HASH(0x4e25a90)', 1) 
called at /home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin.pm line 580
Mail::SpamAssassin::check('Mail::SpamAssassin=HASH(0x4e25a90)', 
'Mail::SpamAssassin::Message=HASH(0x4f06120)') called at filter70.pl line 2288
main::decorate_mail('Mail::Audit::MimeEntity=HASH(0x4e65e48)', 
'SCALAR(0x4c7f1a0)') called at filter70.pl line 966
Subroutine NetAddr::IP::STORABLE_thaw redefined at 
/usr/local/lib64/perl5/NetAddr/IP.pm line 377.
at /usr/local/lib64/perl5/NetAddr/IP.pm line 377.
NetAddr::IP::import('NetAddr::IP') called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Plugin/WLBLEval.pm 
line 25
Mail::SpamAssassin::Plugin::WLBLEval::BEGIN() called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Plugin/WLBLEval.pm 
line 25
eval {...} called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Plugin/WLBLEval.pm 
line 25
require Mail/SpamAssassin/Plugin/WLBLEval.pm called at (eval 
231)[/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/PluginHandler.pm:129]
 line 1
eval ' require Mail::SpamAssassin::Plugin::WLBLEval; 
;' called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/PluginHandler.pm 
line 129

Mail::SpamAssassin::PluginHandler::load_plugin('Mail::SpamAssassin::PluginHandler=HASH(0x4ec4a20)',
 'Mail::SpamAssassin::Plugin::WLBLEval', undef, undef) called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Conf.pm line 5308

Mail::SpamAssassin::Conf::load_plugin('Mail::SpamAssassin::Conf=HASH(0x4e255f8)',
 'Mail::SpamAssassin::Plugin::WLBLEval', undef) called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Conf.pm line 4300

Mail::SpamAssassin::Conf::__ANON__[/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Conf.pm:4301]('Mail::SpamAssassin::Conf=HASH(0x4e255f8)',
 'loadplugin', 'Mail::SpamAssassin::Plugin::WLBLEval', 'loadplugin 
Mail::SpamAssassin::Plugin::WLBLEval') called at 
/home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Conf/Parser.pm line 
437

Mail::SpamAssassin::Conf::Parser::parse('Mail::SpamAssassin::Conf::Parser=HASH(0x4e25688)',
 'file start /home/elastica/SALOCAL-4.0.0/etc/mail/spamassassin...', 0) called 
at /home/elastica/SALOCAL-4.0.0/share/perl5/Mail/SpamAssassin/Conf.pm line 5032




Updated from 3.4.0 to 3.4.6 very noisy debug output.

2021-12-29 Thread Robert Nicholson
I just updated from 3.4.0 to 3.4.6, and the output in the Perl debugger
when I use SA programmatically is quite noisy.

Where can I find 3.4.1 etc. so I can update incrementally from 3.4.0 and
see where the dramatic change comes from?

When I use my script in the debugger with 3.4.0 there is no noisy output
like there is with 3.4.6:


Subroutine NetAddr::IP::STORABLE_freeze redefined at 
/usr/local/lib64/perl5/NetAddr/IP.pm line 365.
 at /usr/local/lib64/perl5/NetAddr/IP.pm line 365.

Subroutine NetAddr::IP::STORABLE_thaw redefined at 
/usr/local/lib64/perl5/NetAddr/IP.pm line 377.
 at /usr/local/lib64/perl5/NetAddr/IP.pm line 377.




Re: Website "help" spams

2021-07-29 Thread Robert S
So far so good. 16 messages marked as spam over the last 12hr and one
got through. Can I send the one that got through to somebody
anonymously?

On Fri, Jul 30, 2021 at 6:18 AM Kevin A. McGrail  wrote:
>
> Lol and Thanks :-)  The key thing you are seeing I would guess is our RBL.  
> We took it out of the public rules because a big player added it to their 
> systems and we couldn't handle lookups from a few bazillion boxes.  Linode 
> donated us two boxes so it's on the roadmap to open it back up to the public.
>
> However, You should really install the rules with the channel ruleset too: 
> https://mcgrail.com/template/kam.cf_channel
>
> Regards,
> KAM
> --
> Kevin A. McGrail
> Member, Apache Software Foundation
> Chair Emeritus Apache SpamAssassin Project
> https://www.linkedin.com/in/kmcgrail - 703.798.0171
>
>
> On Thu, Jul 29, 2021 at 4:15 PM Benny Pedersen  wrote:
>>
>> On 2021-07-29 21:37, Kevin A. McGrail wrote:
>> > The KAMOnly plugin is not needed.  It activates rules for our
>> > infrastructure.
>>
>> i like the infrastructure then


Re: Process of domain submission for inclusion in 60_whitelist_auth.cf

2021-07-12 Thread Robert Harnischmacher
Hi Bill,

thanks for the detailed explanations. I understand the purpose of the
def_whitelist_auth list better now, but I wonder whether its benefit is
not outweighed by significant negative effects that are certainly not
desired by the community.

First of all, I would like to contribute some statistical findings:

A look at the exemplary group of the largest 1,000 U.S. online stores according 
to Alexa Rank shows that about 15 percent of the domains are whitelisted in 
60_whitelist_auth.cf. There are no significant and especially no consistent 
differences in the email reputation of these 15 percent compared to the rest of 
the top 1,000. This would not be a problem if the Spamassassin whitelist did 
not unintentionally give the 15 percent a competitive advantage. Based on the 
high spam score bonus of 7.5 points, which USER_IN_DEF_DKIM_WL and 
USER_IN_DEF_SPF_WL bring, one can for example risk a higher frequency of mass 
mailings, run riskier reactivation campaigns or write to "broader" distribution 
lists: Possible spam scores, for example due to blacklisting, would be ironed 
out by the above-mentioned "bonus". And indeed: With some stores from the 15 
percent group I see again and again - partly even consistently - serious 
blacklistings.

There are about 16 DKIM rules and 12 SPF rules in Spamassassin that are meant 
to evaluate in a technically automated way whether and how good the SPF and 
DKIM implementation of a sender is. Interestingly, the comment in 
60_whitelist_auth.cf. says: "These listings are intended to (...) reward 
senders that send with good SPF, DKIM, and DMARC." With this in mind, it seems 
like a logical overlap to me that USER_IN_DEF_DKIM_WL and USER_IN_DEF_SPF_WL 
introduce additional high "bonus" scores based solely on human judgment at the 
one-time point of a check. Almost all of the senders listed in 
60_whitelist_auth.cf have changed their email service providers one or more 
times over the years, with sometimes significant changes in the quality of 
their deliverability settings and with significant differences in list hygiene, 
sending frequency, etc. But the spam score bonus of 7.5 remains nailed down all 
the time! 

In short, I would recommend considering removing the DKIM and SPF whitelists in 
Spamassassin altogether. It would make the spam-fighting world a better and 
fairer place!

Best,

Robert

> On 2021-06-29 at 06:52, Bill Cole wrote:
> 
> On 2021-06-28 at 17:04:05 UTC-0400 (Mon, 28 Jun 2021 23:04:05 +0200)
> Robert Harnischmacher 
> is rumored to have said:
> 
>> In which form can one submit the subdomain of a mail sender for the 
>> integration in 60_whitelist_auth.cf. Which information is required for 
>> consideration?
> 
> There is no process by which a sender can pro-actively apply for the addition 
> of a def_whitelist_auth entry in that file. Entries are added rarely, when a 
> committer to the project sees a need for an entry due to false positives or 
> borderline scoring of messages from a sender who is not known to send ANY 
> spam and is known to send "ham" that users value highly. Removal of entries 
> is equally ad hoc and unilateral, and more rare. If a committer is convinced 
> that an entry is causing spam to be misclassified as ham, they can remove 
> that entry.
> 
> Note that the above describes concrete process and vague criteria, not any 
> sort of objective formal policy. There is no objective official policy. The 
> normal state for any sender is to not have an entry. I believe that most 
> committers to the project would agree with me that ideally there would be no 
> such list because high-value ham would be more readily distinguishable from 
> spam. Additions and removals happen when they are believed to address a 
> concrete problem being experienced by actual SpamAssassin users. I don't 
> recall any significant disagreements about entries in that list, but if there 
> were any they could be discussed here or on the 'dev' list. Ultimately, the 
> PMC would be the final authority on including an entry or not, however our 
> processes for deciding anything that becomes an issue for the PMC is biased 
> towards stability, not agility.
> 
> 
> 
> 
> -- 
> Bill Cole
> b...@scconsult.com or billc...@apache.org
> (AKA @grumpybozo and many *@billmail.scconsult.com addresses)
> Not Currently Available For Hire





Process of domain submission for inclusion in 60_whitelist_auth.cf

2021-06-28 Thread Robert Harnischmacher
In what form can one submit a mail sender's subdomain for inclusion in
60_whitelist_auth.cf? What information is required for consideration?

Thank you!

Best, Robert


Re: OT: "...value judgement"

2020-07-21 Thread Robert Schetterer

On 2020-07-21 at 21:07, Bill Cole wrote:

On 21 Jul 2020, at 14:06, Grant Taylor wrote:


On 7/21/20 11:56 AM, Bill Cole wrote:
All answers: "NO!" In those cases, "black" and "white" all reference 
actual colors of physical things, not a metaphorical value judgment.


Hum.  Your "value judgement" statement is interesting.

The original meaning of blacklist that I found seems to be exactly 
that, a value judgement on if it was okay / safe to do business with 
people / businesses or not.  Specifically if someone (independent of 
race) was unsafe to do business with, they were added to the blacklist.


Precisely.

That usage is problematic because in many (most? all?) Anglophone 
societies, "Black" is an ethno-racial label. In some cases (UK, US, 
probably more) it is accepted and internalized as an identity by those 
thus labeled. This creates a naming collision with the usage of "black" 
and "white" as metaphorical labels for value judgments.


The degree of annoyance caused by that collision of connotations varies 
widely.




Hi all, can we focus on tech problems again?



--
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: Spamass milter question

2020-05-27 Thread Robert Schetterer

On 2020-05-27 at 18:35, @lbutlr wrote:

What, if any, local SpamAssassin settings does spamass-milter use when
processing incoming mail?

For example, if I wanted to whitelist a sender or blacklist a domain,
would the general settings in /usr/local/etc/spamassassin/local.cf be the
place?

I am wondering because I have a server whitelisted in that file (or do I?), but 
I am seeing occasional logs like:

postfix/cleanup[7771] 49MN7m64m8z2rPFW: milter-reject: END-OF-MESSAGE from 
server.example.com[n.n.n.n]: 5.7.1 Blocked by SpamAssassin;

# Allow all mailing list posts from example.com
whitelist_from_rcvd: *@* server.example.com

This seems to be in accordance with the docs.



I think it was

*@example.com

but perhaps my memory is out of date.
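
For what it's worth, as far as I recall the directive takes two
arguments, the sender pattern and the relay's reverse-DNS name, with no
colon (a sketch reusing the names from above):

# accept mail claiming to be from example.com only when relayed via server.example.com
whitelist_from_rcvd  *@example.com  server.example.com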

--
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: From Spoofed

2020-03-02 Thread Robert A. Ober

On 2/26/20 9:54 AM, Bill Cole wrote:

On 26 Feb 2020, at 10:16, Robert A. Ober wrote:

I don't participate because I'm just good enough to maintain my
customers' email servers,


Which puts you in the top 99.999th percentile of email server skills 
worldwide!



––

Ha,  I hope that's wrong:-)

BTW,  removing the line I had overlooked in the whitelist along with the 
rules did solve the issue.  Think I will remove the extremeshok stuff 
and see what happens.


Y'all take care,
Robert


Fwd: Re: From Spoofed

2020-02-26 Thread Robert A. Ober

Sorry,

I thought I did reply-all, but did not.

I did find a whitelist line I missed.  I guess the third time IS the 
charm as the saying goes.  I will be monitoring her Inbox again today to 
see if that solves it.


The server is on a version of Linux that may have stopped getting 
updates so I suppose that is why the spamassassin version is old.  I 
will endeavor to update the server soon.


Thanks for all the answers! I don't participate because I'm just good
enough to maintain my customers' email servers, but I really appreciate
the expertise on this list.


Y'all have some fun,
Robert


 Forwarded Message 
Subject:Re: From Spoofed
Date:   Wed, 26 Feb 2020 08:34:16 -0600
From:   Robert A. Ober 
To: David B Funk 



On 2/25/20 9:04 PM, David B Funk wrote:

On Wed, 26 Feb 2020, Benny Pedersen wrote:


Robert A. Ober wrote on 2020-02-26 02:28:


I have a user that is getting many emails with obscene subjects.
Someone is spoofing the From to include the users domain so the email
is hitting "USER_IN_WHITELIST".  I have installed the plugins from
extremeshok and it has not stopped the problem.


remove whitelist_from in spamassassin, or change it to score -0.1

i will not argue on why whitelist_from even exists


The SUBJECT_FUCKBUDDY rule has a score of 3.0 .


change score to 300

upgrade to 3.4.4 btw


I won't argue with the recommendation to upgrade but his real problem is:

Someone is spoofing the From to include the user's domain so the email is
hitting "USER_IN_WHITELIST"

That says somebody has taken the users' domain and added it to a 
"whitelist_from" statement. That is -not- a SA default.


So first kill that ill-advised whitelist_from


–––

I did that previously, but I will check again.

Thanks all for the answers, I will read them all hopefully within the hour.

Robert


From Spoofed

2020-02-25 Thread Robert A. Ober

    Hey Folks,

I have a user who is getting many emails with obscene subjects. Someone
is spoofing the From to include the user's domain, so the email is
hitting "USER_IN_WHITELIST". I have installed the plugins from
extremeshok and it has not stopped the problem.
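
For context, a plain whitelist_from entry matches on the From: address
alone, so any spoof of the domain hits USER_IN_WHITELIST. If the domain
really needs whitelisting, an authenticated variant would look roughly
like this (a sketch; the domain is a placeholder):

# only hits when SPF or DKIM actually verifies the claimed sender domain
whitelist_auth  *@users-domain.example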


  Emails have header info such as:

X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on mail

X-Spam-Level:

X-Spam-Status: No, score=-60.8 required=5.0 
tests=ALL_CODING,ALL_OZ,BAYES_99,


BAYES_999,FROM_EXCESS_BASE64,HTML_IMAGE_ONLY_12,HTML_MESSAGE,

HTML_SHORT_LINK_IMG_2,MIME_HTML_ONLY,RCVD_IN_BL_SPAMCOP_NET,RCVD_IN_PBL,

RCVD_IN_PSBL,RCVD_IN_RP_RNBL,RCVD_IN_SBL_CSS,RCVD_IN_SORBS_WEB,RCVD_IN_XBL,

RDNS_NONE,SERGIO_SUBJECT_PORN014,SUBJECT_FUCKBUDDY,URIBL_ABUSE_SURBL,

URIBL_BLACK,URIBL_DBL_SPAM,URIBL_SBL,USER_IN_WHITELIST autolearn=no

    version=3.3.2

The SUBJECT_FUCKBUDDY rule has a score of 3.0 .

Subject line has "Hungry for a Fuckbuddy". Sorry I can't paste; it did
not come through formatted properly when the user forwarded it from
Outlook, and it's gone from her Inbox on the server.


If I send a test email with Fuckbuddy in the subject from my GMail 
account spamassassin catches it and it and sends it to the spam folder.


Ideas?

Thanks,
Robert

Robert A. Ober
IT Consultant, Vidcaster, & Freelancer
www.infohou.com
Houston, TX




Re: URIBL_SBL_A - Spamhaus false positive..

2020-01-23 Thread Robert Braver
Hello Riccardo,

On Thursday, January 23, 2020, 7:53:18 AM, Riccardo Alfieri wrote:

RA> if you would care to forward me offlist a complete sample that triggers
RA> the FPs I'll be happy to investigate

FWIW, these very messages to the SA list this morning mentioning this domain
triggered for me as well, e.g.:

X-Spam-Report:
* -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at https://www.dnswl.org/, 
high
*  trust
*  [207.244.88.153 listed in list.dnswl.org]
*  0.1 URIBL_SBL_A Contains URL's A record listed in the Spamhaus SBL
*  blocklist
*  [URIs: fluent.ltd.uk]
*  1.6 URIBL_SBL Contains an URL's NS IP listed in the Spamhaus SBL
*  blocklist
*  [URIs: fluent.ltd.uk]




-- 
Best regards,
 Robert Braver
 rbra...@ohww.norman.ok.us



Custom DMARC_FAIL rule

2018-11-26 Thread Robert Fitzpatrick
I have the following custom rules working pretty well in testing, but 
ran into this message with two "Authentication-Results" headers:



Authentication-Results: mx3.webtent.org; dmarc=none (p=none dis=none)
header.from=email.monoprice.com
Authentication-Results: mx3.webtent.org;
dkim=fail reason="signature verification failed" (2048-bit key;
unprotected) header.d=email.monoprice.com
header.i=@email.monoprice.com header.b=JvTxQQIc


This triggers DMARC_FAIL in my custom rules below, but all I want to
pick up on is 'header.from' failures. How do I need to change the
regular expression to also pick up on header.from in the header? Would I
just add '.*header.from' after =fail?
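
Something along these lines might do it (an untested sketch, keyed to the
Authentication-Results layout above; it only fires when header.from=
appears in the same failing result):

header __DMARC_FAIL Authentication-Results =~ /webtent\.org;\s*(?:dmarc|dkim)=fail\b[^;]*\bheader\.from=/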



# DMARC rules
header __DMARC_FAIL Authentication-Results =~ /webtent.org; (dmarc|dkim)=fail /
meta   DMARC_FAIL   (__DMARC_FAIL && !__DOS_HAS_LIST_ID && !__DOS_HAS_MAILING_LIST)
describe DMARC_FAIL DMARC or DKIM authentication failed
score DMARC_FAIL 3.7

meta WT_FORGED_SENDER (DMARC_FAIL && !DKIM_VALID)
describe WT_FORGED_SENDER To score high when DMARC fails w/o valid DKIM
score    WT_FORGED_SENDER 8.0

header __DMARC_PASS Authentication-Results =~ /webtent.org; (dmarc|dkim)=pass /
meta   DMARC_PASS  (__DMARC_PASS && !DMARC_FAIL)
describe DMARC_PASS DMARC or DKIM authentication valid
tflags DMARC_PASS nice
score DMARC_PASS -1.1

meta   DMARC_NONE   (!DMARC_PASS && !DMARC_FAIL)
describe DMARC_NONE No DMARC or DKIM authentication
score DMARC_NONE 0.001


Any suggestions for setting up DMARC custom rules appreciated.

--
Robert



Re: Forgery with SPF/DKIM/DMARC

2018-11-16 Thread Robert Fitzpatrick

Dominic Raferd wrote on 11/16/2018 8:50 AM:

Please clarify what you mean by 'even though SPF and DKIM is setup
with DMARC to reject'? I presume that 'company.com' does not have a
DMARC p=reject policy, or else your DMARC program (e.g. opendmarc)
should block forged emails from them.



Oh yes, sorry, the names were changed to protect the innocent. But now
that I am confirming, I don't see the _dmarc record set up by the DNS
company as requested. So this message would fail DMARC if company.com
were set up to reject, as you noted? I'll send them the request again
and see, thanks.


--
Robert



Forgery with SPF/DKIM/DMARC

2018-11-16 Thread Robert Fitzpatrick
We're having an issue with spam coming from the same company even though
SPF and DKIM are set up with DMARC to reject. Take this forwarded email
for instance:


 Original message  
From: User  
Date: 11/15/18 10:42 AM (GMT-07:00) 
To: Other User  
Subject: OVERDUE INVOICE 

Sorry for the delay…. This is an invoice reminder. The total for your item is $1,879.17. 

THX, 

- 

User 
T 123.456.7890 | O 123.456.7891 
EMail:u...@company.com


However, the raw headers show as this...


Date: Thu, 15 Nov 2018 18:35:35 +0100
From: User 

To: other.u...@company.com
Message-ID: <860909106225419267.2007038e08376...@company.com>
Subject: OVERDUE INVOICE


Could someone suggest a rule to match the signature address against the
last From email or the envelope from? Or another suggestion for how this
could be resolved?


Thanks!

--
Robert



FSL_BULK_SIG still active?

2018-04-07 Thread Robert Boyl
Hi, everyone

Pls...

Is this still an active spamassassin test?

header   __FSL_HAS_LIST_UNSUB  exists:List-Unsubscribe
meta FSL_BULK_SIG  ((DCC_CHECK || RAZOR2_CHECK || PYZOR_CHECK) && !__FSL_HAS_LIST_UNSUB)
describe FSL_BULK_SIG  Bulk signature with no Unsubscribe

Had some odd false positives due to its high score of 1.35...

It was a forgot-password message... and it scored "Bulk signature with no
Unsubscribe".

Seems strange, as it depends on DCC, Razor, and Pyzor, systems that I
also see score wrongly.
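
If it keeps misfiring here, the score can also be lowered locally in
local.cf while investigating (a sketch; the value is arbitrary):

# local override for the FSL_BULK_SIG score
score FSL_BULK_SIG 0.5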

Thanks.
Rob


Lots of money, score of 0??

2018-03-27 Thread Robert Boyl
Guys,

Do you usually tune up the LOTS_OF_MONEY rule? Strangely, our
SpamAssassin/EFA scores it 0, giving a false negative. IMHO it should
score at least something; few people would write "million dollars" in an
email, so why not add some score?

LOTS_OF_MONEY 0.00

See https://pastebin.com/dY6iFeYL
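
For what it's worth, bumping it locally is a one-liner in local.cf (a
sketch; the value is arbitrary):

# give LOTS_OF_MONEY some local weight instead of the default 0.00
score LOTS_OF_MONEY 0.5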

Thanks!
Rob


razor?

2018-03-09 Thread Robert Boyl
Hi, everyone

Just wondering, whats your thoughts on Razor?

Haven't analysed a big amount of email yet, but I've had a few cases
where it causes very strange false positives that make no sense,

and adds a lot of points...

RAZOR2_CF_RANGE_51_100 0.36, RAZOR2_CF_RANGE_E8_51_100 2.43, RAZOR2_CHECK
1.73

It says on their site " Detection is done with statistical and randomized
signatures that efficiently spot mutating spam content. "

For example, those scores were for a totally legit email that had some
screenshots embedded in it...
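
While investigating, the Razor rules can be de-weighted locally in
local.cf (a sketch; the values are arbitrary):

# temper Razor's contribution until the false positives are understood
score RAZOR2_CHECK              0.5
score RAZOR2_CF_RANGE_51_100    0.1
score RAZOR2_CF_RANGE_E8_51_100 0.5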

Also, how to report FP?

Thanks.
Rob


catching a dot in the number of a rule

2018-01-19 Thread Robert Boyl
Hi, masters!

I know
[1-9]{1,5} spreadsheets

catches something like

23244 spreadsheets

What about 23.244 spreadsheets? How do I make the rule consider a dot in
the number?
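
Something like this should cover plain numbers as well as dot- or
comma-grouped ones (a sketch with a hypothetical rule name, untested):

# matches "23244 spreadsheets", "23.244 spreadsheets" and "23,244 spreadsheets"
body __NUM_SPREADSHEETS /\b\d{1,3}(?:[.,]?\d{3})*\s+spreadsheets\b/i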

Thank you!
Rob


No message ID

2017-11-09 Thread Robert Fitzpatrick
I have a user getting slammed with messages that are not being filtered,
like the one below; I can't find the IP or address in any part of a
whitelist. I'm wondering whether the missing message ID can cause this.
Or should I set up a rule to kill messages without the ID?
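
For what it's worth, if I remember right stock SpamAssassin already ships
a MISSING_MID test for exactly this, so a local score bump may be enough
(a sketch for local.cf):

# give the stock missing-Message-ID rule more weight locally
score MISSING_MID 2.0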


Nov  8 13:08:30 mx2 maiad[49762]: (49762-03) Passed CLEAN, 
[158.69.253.173] [158.69.253.173] <cont...@u-bordeaux-montaigne.fr> -> 
<u...@webtent.com>, Hits: -, 1127 ms


This is the MTA info for the above example message


root@mx2:~ # bzcat /var/log/maillog.0.bz2 | grep C9795D7E7D
Nov  8 13:08:27 mx2 postfix/smtpd[49544]: C9795D7E7D: 
client=wanteaven.net[158.69.253.173]
Nov  8 13:08:27 mx2 postfix/cleanup[49747]: C9795D7E7D: message-id=<>
Nov  8 13:08:28 mx2 opendkim[829]: C9795D7E7D: wanteaven.net [158.69.253.173] 
not internal
Nov  8 13:08:28 mx2 opendkim[829]: C9795D7E7D: not authenticated
Nov  8 13:08:28 mx2 opendmarc[833]: C9795D7E7D: u-bordeaux-montaigne.fr none
Nov  8 13:08:28 mx2 postfix/qmgr[915]: C9795D7E7D: 
from=<cont...@u-bordeaux-montaigne.fr>, size=2134250, nrcpt=1 (queue active)
Nov  8 13:08:30 mx2 postfix/smtp[48641]: C9795D7E7D: to=<u...@webtent.com>, 
relay=127.0.0.1[127.0.0.1]:10024, delay=2.6, delays=1.4/0/0/1.2, dsn=2.6.0, 
status=sent (250 2.6.0 Ok, id=49762-03, from MTA: 250 2.0.0 Ok: queued as EFB09D7E9D)
Nov  8 13:08:30 mx2 postfix/qmgr[915]: C9795D7E7D: removed



--
Robert



Re: new campaign: bitly & appengine.google

2017-09-26 Thread Robert Kudyba
Still seeing these after a restart of SA with sendmail:

Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_PHISHY_DOLLARS has
dependency 'KAM_RAPTOR' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_VOICEMAIL has
dependency 'KAM_RAPTOR' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_FAKE_DELIVER has
dependency 'KAM_RAPTOR' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_BBB has dependency
'KAM_RAPTOR' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_JURY has dependency
'KAM_RAPTOR' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_REALLY_FAKE_DELIVER
has dependency 'KAM_RPTR_PASSED' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_VERY_MALWARE has
dependency 'KAM_RAPTOR' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_NOTIFY2 has
dependency 'KAM_IFRAME' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_CARD has dependency
'KAM_RPTR_SUSPECT' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_FORGED_ATTACHED has
dependency 'KAM_RAPTOR' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_EVICTION has
dependency 'KAM_RAPTOR' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test JMQ_CONGRAT has
dependency 'KAM_RAPTOR' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_BADPDF2 has
dependency 'KAM_RPTR_SUSPECT' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_PAYPAL2 has
dependency 'KAM_RAPTOR' with a zero score
Sep 26 14:14:17 storm spamd[9784]: rules: meta test KAM_AMAZON has
dependency 'KAM_RAPTOR' with a zero score


On Thu, Sep 14, 2017 at 10:21 AM, Kevin A. McGrail <
kevin.mcgr...@mcgrail.com> wrote:

> I'll check but nothing jumps out as an issue.  Ping me next Wednesday.
>
>
> On 9/14/2017 10:18 AM, Robert Kudyba wrote:
>
> A few less now, so these are ok to ignore?
>
> spamassassin -D --lint 2>&1 | grep -Ei '(failed|undefined dependency|score
> set for non-existent rule)'
> Sep 14 10:15:48.606 [21681] dbg: config: warning: *score set for
> non-existent rule* DNS_FROM_RFC_DSN
> Sep 14 10:15:48.606 [21681] dbg: config: warning: *score set for
> non-existent rule* __RFC_IGNORANT_ENVFROM
> Sep 14 10:15:48.607 [21681] dbg: config: warning: *score set for
> non-existent rule* DNS_FROM_RFC_BOGUSMX
> Sep 14 10:15:48.607 [21681] dbg: config: warning: *score set for
> non-existent rule* FILL_THIS_FORM_FRAUD_PHISH
> Sep 14 10:15:48.607 [21681] dbg: config: warning: *score set for
> non-existent rule* DNS_FROM_AHBL_RHSBL
> Sep 14 10:15:48.608 [21681] dbg: config: warning: *score set for
> non-existent rule* __DNS_FROM_RFC_ABUSE
> Sep 14 10:15:48.608 [21681] dbg: config: warning: *score set for
> non-existent rule* __DNS_FROM_RFC_POST
> Sep 14 10:15:48.608 [21681] dbg: config: warning: *score set for
> non-existent rule* FILL_THIS_FORM_LOAN
> Sep 14 10:15:48.608 [21681] dbg: config: warning: *score set for
> non-existent rule* FILL_THIS_FORM_LONG
> Sep 14 10:15:48.608 [21681] dbg: config: warning: *score set for
> non-existent rule* __DNS_FROM_RFC_WHOIS
> Sep 14 10:15:48.608 [21681] dbg: config: warning: *score set for
> non-existent rule* HELO_LH_HOME
> Sep 14 10:15:48.609 [21681] dbg: config: warning: *score set for
> non-existent rule* URI_OBFU_WWW
> Sep 14 10:15:48.648 [21681] dbg: config: warning: no description set for
> KAM_RPTR_*FAILED*
> Sep 14 10:15:50.738 [21681] dbg: rules: meta test LCL_DOB_FROM_INFO has 
> *undefined
> dependency* '__FROM_DOM_INFO'
> Sep 14 10:15:50.743 [21681] dbg: rules: meta test KAM_SALE has *undefined
> dependency* 'BODY_8BITS'
> Sep 14 10:15:50.771 [21681] dbg: rules: meta test KAM_PHISH2 has *undefined
> dependency* '__KAM_URIBL_PCCC'
> Sep 14 10:15:50.788 [21681] dbg: rules: meta test KAM_BADPDF2 has *undefined
> dependency* 'KAM_BADPDF'
> Sep 14 10:15:50.788 [21681] dbg: rules: meta test KAM_BADPDF2 has *undefined
> dependency* 'KAM_BADPDF1'
> Sep 14 10:15:50.795 [21681] dbg: rules: meta test KAM_COLLEGE has *undefined
> dependency* '__KAM_URIBL_PCCC'
> Sep 14 10:15:50.801 [21681] dbg: rules: meta test KAM_CREDIT2 has *undefined
> dependency* '__KAM_URIBL_PCCC'
> Sep 14 10:15:50.801 [21681] dbg: rules: meta test KAM_BAD_DNSWL has *undefined
> dependency* 'IN_BRBL'
> Sep 14 10:15:50.801 [21681] dbg: rules: meta test KAM_BAD_DNSWL has *undefined
> dependency* 'RCVD_IN_BRBL_RELAY'
> Sep 14 10:15:50.801 [21681] dbg: rules: meta test KAM_BAD_DNSWL has *undefined
> dependency* '__KAM_URIBL_PCCC'
> Sep 14 10:15:50.801 [21681] dbg: rules: meta test KAM_BAD_DNSWL has *undefined
> dependency* 'KAM_MESSAGE_EMAILBL_PCCC'
> Sep 14 10:15:50.804 [21681] dbg: rules: meta test DIGEST_MULTIPLE ha

Re: Ends with string

2017-09-15 Thread Robert Boyl
Hi!

Thanks! I didnt find this info in Writing rules tutorial.

I see

uri __KAM_SHORT
/(\/|^|\b)(?:j\.mp|bit\.ly|goo\.gl|x\.co|t\.co|t\.cn|tinyurl\.com|hop\.kz|urla\.ru|fw\.to)(\/|$|\b)/i

Seems a bit complicated.

The idea would be to make this rule check that the suffixes are at the
end of the URI.

uri __TEST_URLS /\b(\.vn|\.pl|\.my|\.lu|\.vn|\.ar)\b/i

I believe this does it, correct?

uri __TEST_URLS /\b(\.vn$|\.pl$|\.my$|\.lu$|\.vn$|\.ar$)\b/i
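
That should work for bare host-only URIs, but note that uri rules match
the full URI, so anything with a path (e.g. http://example.vn/foo) no
longer ends in the TLD. A sketch that anchors on the end of the host part
instead, in the style of __KAM_SHORT above (hypothetical rule name,
untested):

uri __TEST_URLS_TLD /^(?:https?:\/\/)?[^\/]+\.(?:vn|pl|my|lu|ar)(\/|$)/i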

Thanks.
Rob

2017-09-08 14:03 GMT-03:00 Kevin A. McGrail <kevin.mcgr...@mcgrail.com>:

> On 9/8/2017 12:24 PM, Robert Boyl wrote:
>
>> Hello, everyone!
>>
>> Is there a way to create a Spamassassin rule that checks for a certain
>> URL suffix such as .ru but makes sure it has to be at the end of the URI?
>> Ends with string.
>>
>> Thanks!
>> Rob
>>
>
> Yes, it's called an anchor and Shane Williams a long time ago gave me some
> advice on that I used in this rule:
>
> uri __KAM_SHORT /(\/|^|\b)(?:j\.mp|bit\.ly|goo
> \.gl|x\.co|t\.co|t\.cn|tinyurl\.com|hop\.kz|urla\.ru|fw\.to)(\/|$|\b)/i
>
> Regards,
> KAM
>
>


Re: new campaign: bitly & appengine.google

2017-09-14 Thread Robert Kudyba
A few less now, so these are ok to ignore?

spamassassin -D --lint 2>&1 | grep -Ei '(failed|undefined dependency|score set 
for non-existent rule)'
Sep 14 10:15:48.606 [21681] dbg: config: warning: score set for non-existent 
rule DNS_FROM_RFC_DSN
Sep 14 10:15:48.606 [21681] dbg: config: warning: score set for non-existent 
rule __RFC_IGNORANT_ENVFROM
Sep 14 10:15:48.607 [21681] dbg: config: warning: score set for non-existent 
rule DNS_FROM_RFC_BOGUSMX
Sep 14 10:15:48.607 [21681] dbg: config: warning: score set for non-existent 
rule FILL_THIS_FORM_FRAUD_PHISH
Sep 14 10:15:48.607 [21681] dbg: config: warning: score set for non-existent 
rule DNS_FROM_AHBL_RHSBL
Sep 14 10:15:48.608 [21681] dbg: config: warning: score set for non-existent 
rule __DNS_FROM_RFC_ABUSE
Sep 14 10:15:48.608 [21681] dbg: config: warning: score set for non-existent 
rule __DNS_FROM_RFC_POST
Sep 14 10:15:48.608 [21681] dbg: config: warning: score set for non-existent 
rule FILL_THIS_FORM_LOAN
Sep 14 10:15:48.608 [21681] dbg: config: warning: score set for non-existent 
rule FILL_THIS_FORM_LONG
Sep 14 10:15:48.608 [21681] dbg: config: warning: score set for non-existent 
rule __DNS_FROM_RFC_WHOIS
Sep 14 10:15:48.608 [21681] dbg: config: warning: score set for non-existent 
rule HELO_LH_HOME
Sep 14 10:15:48.609 [21681] dbg: config: warning: score set for non-existent 
rule URI_OBFU_WWW
Sep 14 10:15:48.648 [21681] dbg: config: warning: no description set for 
KAM_RPTR_FAILED
Sep 14 10:15:50.738 [21681] dbg: rules: meta test LCL_DOB_FROM_INFO has 
undefined dependency '__FROM_DOM_INFO'
Sep 14 10:15:50.743 [21681] dbg: rules: meta test KAM_SALE has undefined 
dependency 'BODY_8BITS'
Sep 14 10:15:50.771 [21681] dbg: rules: meta test KAM_PHISH2 has undefined 
dependency '__KAM_URIBL_PCCC'
Sep 14 10:15:50.788 [21681] dbg: rules: meta test KAM_BADPDF2 has undefined 
dependency 'KAM_BADPDF'
Sep 14 10:15:50.788 [21681] dbg: rules: meta test KAM_BADPDF2 has undefined 
dependency 'KAM_BADPDF1'
Sep 14 10:15:50.795 [21681] dbg: rules: meta test KAM_COLLEGE has undefined 
dependency '__KAM_URIBL_PCCC'
Sep 14 10:15:50.801 [21681] dbg: rules: meta test KAM_CREDIT2 has undefined 
dependency '__KAM_URIBL_PCCC'
Sep 14 10:15:50.801 [21681] dbg: rules: meta test KAM_BAD_DNSWL has undefined 
dependency 'IN_BRBL'
Sep 14 10:15:50.801 [21681] dbg: rules: meta test KAM_BAD_DNSWL has undefined 
dependency 'RCVD_IN_BRBL_RELAY'
Sep 14 10:15:50.801 [21681] dbg: rules: meta test KAM_BAD_DNSWL has undefined 
dependency '__KAM_URIBL_PCCC'
Sep 14 10:15:50.801 [21681] dbg: rules: meta test KAM_BAD_DNSWL has undefined 
dependency 'KAM_MESSAGE_EMAILBL_PCCC'
Sep 14 10:15:50.804 [21681] dbg: rules: meta test DIGEST_MULTIPLE has undefined 
dependency 'DCC_CHECK'
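
(If you actually want DCC_CHECK to exist rather than just ignoring the warning,
the DCC plugin is one of the ones that ships disabled. A sketch of the fix,
assuming the stock Fedora layout, is to uncomment this in
/etc/mail/spamassassin/v310.pre and install the DCC client programs:

loadplugin Mail::SpamAssassin::Plugin::DCC

Otherwise the "undefined dependency 'DCC_CHECK'" lines are harmless noise.)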

> On Sep 14, 2017, at 10:12 AM, Kevin A. McGrail <kevin.mcgr...@mcgrail.com> 
> wrote:
> 
> grab https://www.pccc.com/downloads/SpamAssassin/contrib/nonKAMrules.cf 
> <https://urldefense.proofpoint.com/v2/url?u=https-3A__www.pccc.com_downloads_SpamAssassin_contrib_nonKAMrules.cf=DwMD-g=aqMfXOEvEJQh2iQMCb7Wy8l0sPnURkcqADc2guUW8IM=X0jL9y0sL4r4iU_qVtR3lLNo4tOL1ry_m7-psV3GejY=Qar3KBEgOvS0fTg23z4EyAXKdOKmQqCqw47gwNutQcs=yQL_7-gI1SzzbJznT5smgB_yd6Iv5neZF2vja3XeyFg=>
>  as well.
> 
> After that let me know but some rules are internal use only so if it's a 
> warning, don't be too concerned.
> 
> Regards,
> KAM
> On 9/14/2017 9:57 AM, Robert Kudyba wrote:
>>> > i have lost the url for kam.cf :(
>>> 
>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__www.pccc.com_downloads_SpamAssassin_contrib_KAM.cf=DwIDaQ=aqMfXOEvEJQh2iQMCb7Wy8l0sPnURkcqADc2guUW8IM=X0jL9y0sL4r4iU_qVtR3lLNo4tOL1ry_m7-psV3GejY=2_KMc6f7_uK5u9lGUOjxcShbX6TXhm_XZ-6Rqk8esj4=RD7D7GpMhY_eNZ_prqUt371WFA_gJTsvqxSybQct8sI=
>>>  
>>> <https://urldefense.proofpoint.com/v2/url?u=https-3A__www.pccc.com_downloads_SpamAssassin_contrib_KAM.cf=DwIDaQ=aqMfXOEvEJQh2iQMCb7Wy8l0sPnURkcqADc2guUW8IM=X0jL9y0sL4r4iU_qVtR3lLNo4tOL1ry_m7-psV3GejY=2_KMc6f7_uK5u9lGUOjxcShbX6TXhm_XZ-6Rqk8esj4=RD7D7GpMhY_eNZ_prqUt371WFA_gJTsvqxSybQct8sI=>
>>>  
>>> It's the first hit googling for KAM.cf if you ever need.
>> 
>> Just added this to our Fedora 26 server, any reason for these warnings?
>> rpm -q spamassassin
>> spamassassin-3.4.1-12.fc26.x86_64
>> 
>> [root@server spamassassin]# sa-update
>> [root@ server spamassassin]# spamassassin -D --lint 2>&1 | grep -Ei 
>> '(failed|undefined dependency|score set for non-existent rule)'
>> Sep 14 09:50:43.738 [9443] dbg: config: warning: score set for non-existent 
>> rule HELO_LH_HOME
>> Sep 14 09:50:43.738 [9443] dbg: config: warning: score set for non-existent 
>> rule DNS_FROM_RFC_DSN
>> Sep 14 09:50:43.738 [9443] dbg: config: warning: score set for non-existent 
>> rule DNS_FROM_RFC_BOGUSMX
>> Sep 14 09:50:43.738 [9443] dbg: config: warning: score set for non-existent 
>> rule FILL_THIS_FORM_LOAN

Re: new campaign: bitly & appengine.google

2017-09-14 Thread Robert Kudyba
> > i have lost the url for kam.cf :(
> 
> https://urldefense.proofpoint.com/v2/url?u=https-3A__www.pccc.com_downloads_SpamAssassin_contrib_KAM.cf=DwIDaQ=aqMfXOEvEJQh2iQMCb7Wy8l0sPnURkcqADc2guUW8IM=X0jL9y0sL4r4iU_qVtR3lLNo4tOL1ry_m7-psV3GejY=2_KMc6f7_uK5u9lGUOjxcShbX6TXhm_XZ-6Rqk8esj4=RD7D7GpMhY_eNZ_prqUt371WFA_gJTsvqxSybQct8sI=
>  
> It's the first hit googling for KAM.cf if you ever need.

Just added this to our Fedora 26 server, any reason for these warnings?
rpm -q spamassassin
spamassassin-3.4.1-12.fc26.x86_64

[root@server spamassassin]# sa-update
[root@ server spamassassin]# spamassassin -D --lint 2>&1 | grep -Ei 
'(failed|undefined dependency|score set for non-existent rule)'
Sep 14 09:50:43.738 [9443] dbg: config: warning: score set for non-existent 
rule HELO_LH_HOME
Sep 14 09:50:43.738 [9443] dbg: config: warning: score set for non-existent 
rule DNS_FROM_RFC_DSN
Sep 14 09:50:43.738 [9443] dbg: config: warning: score set for non-existent 
rule DNS_FROM_RFC_BOGUSMX
Sep 14 09:50:43.738 [9443] dbg: config: warning: score set for non-existent 
rule FILL_THIS_FORM_LOAN
Sep 14 09:50:43.738 [9443] dbg: config: warning: score set for non-existent 
rule FILL_THIS_FORM_FRAUD_PHISH
Sep 14 09:50:43.739 [9443] dbg: config: warning: score set for non-existent 
rule URI_OBFU_WWW
Sep 14 09:50:43.739 [9443] dbg: config: warning: score set for non-existent 
rule __DNS_FROM_RFC_WHOIS
Sep 14 09:50:43.739 [9443] dbg: config: warning: score set for non-existent 
rule __RFC_IGNORANT_ENVFROM
Sep 14 09:50:43.739 [9443] dbg: config: warning: score set for non-existent 
rule __DNS_FROM_RFC_ABUSE
Sep 14 09:50:43.739 [9443] dbg: config: warning: score set for non-existent 
rule __DNS_FROM_RFC_POST
Sep 14 09:50:43.739 [9443] dbg: config: warning: score set for non-existent 
rule DNS_FROM_AHBL_RHSBL
Sep 14 09:50:43.739 [9443] dbg: config: warning: score set for non-existent 
rule FILL_THIS_FORM_LONG
Sep 14 09:50:43.852 [9443] dbg: config: warning: no description set for 
KAM_RPTR_FAILED
Sep 14 09:50:44.773 [9443] dbg: rules: CBJ_GiveMeABreak merged duplicates: 
KAM_IFRAME KAM_RAPTOR KAM_RPTR_FAILED KAM_RPTR_PASSED KAM_RPTR_SUSPECT
Sep 14 09:50:45.864 [9443] dbg: rules: meta test DIGEST_MULTIPLE has undefined 
dependency 'DCC_CHECK'
Sep 14 09:50:45.867 [9443] dbg: rules: meta test KAM_CREDIT2 has undefined 
dependency '__KAM_URIBL_PCCC'
Sep 14 09:50:45.872 [9443] dbg: rules: meta test LCL_DOB_FROM_INFO has 
undefined dependency '__FROM_DOM_INFO'
Sep 14 09:50:45.873 [9443] dbg: rules: meta test KAM_BADPDF2 has undefined 
dependency 'KAM_BADPDF'
Sep 14 09:50:45.873 [9443] dbg: rules: meta test KAM_BADPDF2 has undefined 
dependency 'KAM_BADPDF1'
Sep 14 09:50:45.891 [9443] dbg: rules: meta test KAM_COLLEGE has undefined 
dependency '__KAM_URIBL_PCCC'
Sep 14 09:50:45.895 [9443] dbg: rules: meta test KAM_GRABBAG9 has undefined 
dependency 'MALFORMED_FREEMAIL'
Sep 14 09:50:45.903 [9443] dbg: rules: meta test KAM_PHISH2 has undefined 
dependency '__KAM_URIBL_PCCC'
Sep 14 09:50:45.908 [9443] dbg: rules: meta test KAM_BAD_DNSWL has undefined 
dependency 'IN_BRBL'
Sep 14 09:50:45.908 [9443] dbg: rules: meta test KAM_BAD_DNSWL has undefined 
dependency 'RCVD_IN_BRBL_RELAY'
Sep 14 09:50:45.908 [9443] dbg: rules: meta test KAM_BAD_DNSWL has undefined 
dependency '__KAM_URIBL_PCCC'
Sep 14 09:50:45.908 [9443] dbg: rules: meta test KAM_BAD_DNSWL has undefined 
dependency 'KAM_MESSAGE_EMAILBL_PCCC'
Sep 14 09:50:45.908 [9443] dbg: rules: meta test KAM_BAD_DNSWL has undefined 
dependency 'RCVD_IN_HOSTKARMA_W'
Sep 14 09:50:45.916 [9443] dbg: rules: meta test KAM_GOOGLE2 has undefined 
dependency 'HK_SPAMMY_FILENAME'
Sep 14 09:50:45.924 [9443] dbg: rules: meta test JMQ_CONGRAT has undefined 
dependency 'HK_SPAMMY_FILENAME'
Sep 14 09:50:45.925 [9443] dbg: rules: meta test KAM_SALE has undefined 
dependency 'BODY_8BITS'
[root@storm server]# spamassassin --lint



Ends with string

2017-09-08 Thread Robert Boyl
Hello, everyone!

Is there a way to create a Spamassassin rule that checks for a certain URL
suffix such as .ru but makes sure it has to be at the end of the URI? Ends
with string.

Thanks!
Rob


Re: reason why sendmail w/ SA3.4.1 scantime=15.0, delay=00:01:06 w/ SquirrelMail?

2017-07-17 Thread Robert Kudyba

> On Jul 17, 2017, at 11:01 AM, Tom Hendrikx <t...@whyscream.net> wrote:
> 
> On 17-07-17 16:39, Robert Kudyba wrote:
>> 
>>> On Jul 17, 2017, at 10:28 AM, Tom Hendrikx <t...@whyscream.net
>>> <mailto:t...@whyscream.net>> wrote:
>>> 
>>> On 17-07-17 16:00, Robert Kudyba wrote:
>>>> 
>>>>> On Jul 17, 2017, at 9:39 AM, Antony Stone
>>>>> <antony.st...@spamassassin.open.source.it
>>>>> <mailto:antony.st...@spamassassin.open.source.it>
>>>>> <mailto:antony.st...@spamassassin.open.source.it>> wrote:
>>>>> 
>>>>> On Monday 17 July 2017 at 14:25:17, Robert Kudyba wrote:
>>>>> 
>>>>>>> On Jul 14, 2017, at 4:00 AM, Matus UHLAR - fantomas
>>>>>>> <uh...@fantomas.sk <mailto:uh...@fantomas.sk>
>>>>>>> <mailto:uh...@fantomas.sk>>
>>>>> wrote:
>>>>>>>> Robert Kudyba <rkud...@fordham.edu <mailto:rkud...@fordham.edu>
>>>>>>>> <mailto:rkud...@fordham.edu>> wrote:
>>>>>>>>> Over the past few days sending mail via SquirrelMail has become
>>>>>>>>> glacial. The load on the server is under 1. I've restarted the SA,
>>>>>>>>> sendmail and dovecot processes several times. Here are some logs
>>>>>>>>> I can
>>>>>>>>> provide any settings if desired.
>>>>>>> 
>>>>>>> tried to run a message through "spamassassin -D" ?
>>>>>>> that should give you debug/timing info.
>>>>>> 
>>>>>> OK here is the pastebin of spamassassin -D < gtube.txt:
>>>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__pastebin.com_iZtm2hhy=DwIFAw=aqMfXOEvEJQh2iQMCb7Wy8l0sPnURkcqADc2guUW8IM=X0jL9y0sL4r4iU_qVtR3lLNo4tOL1ry_m7-psV3GejY=wV3-oZ_3m8NtSuw_6UTtdU1WptL8Pl1vNOok-EXrcZo=802-414zeT59KVCIFVa_uxfSq0XezT7e4OVZibWbIwc=
>>>>>> 
>>>>> 
>>>>> 
>>>>> Jul 16 09:01:42.796 [29903] dbg: dns: entering helper-app run mode
>>>>> Jul 16 09:01:47.806 [29903] dbg: dns: leaving helper-app run mode
>>>>> Jul 16 09:01:47.806 [29903] dbg: razor2: razor2 check timed out after 5
>>>>> seconds
>>>> 
>>>> OK so I ran: /var/spool/amavisd/.razor
>>>> 
>>>> ls -l /var/spool/amavisd/.razor
>>>> total 100
>>>> -rw-r- 1 amavis amavis 72420 Dec 22  2014 razor-agent.log
>>>> -rw-r- 1 amavis amavis   998 Jul 17 09:49
>>>> server.c301.cloudmark.com.conf
>>>> -rw-r- 1 amavis amavis   998 Jul 17 09:46
>>>> server.c302.cloudmark.com.conf
>>>> -rw-r- 1 amavis amavis   995 Dec 20  2014
>>>> server.c303.cloudmark.com.conf
>>>> -rw-r- 1 amavis amavis57 Jul 17 09:49 servers.catalogue.lst
>>>> -rw-r- 1 amavis amavis30 May 23  2013 servers.discovery.lst
>>>> -rw-r- 1 amavis amavis76 Jul 17 09:49 servers.nomination.lst
>>>> 
>>>> New pastebin:
>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__pastebin.com_9RWEYuSt=DwIC-g=aqMfXOEvEJQh2iQMCb7Wy8l0sPnURkcqADc2guUW8IM=X0jL9y0sL4r4iU_qVtR3lLNo4tOL1ry_m7-psV3GejY=QspHQBi1X_n1ZQylsERsyborPsWRSy3cHQlXJ8FUf7c=DZ63JGDSr9nTI6HaajZtLRvUf0ao4tBA4dKtq_77Xlg=
>>>> 
>>>> 
>>>> Still taking 15 seconds.
>>>> 
>>>> Jul 17 09:55:28 storm spamd[28111]: spamd: clean message (-103.4/5.0)
>>>> for spamd:1001 in 15.0 seconds, 1843 bytes.
>>>> Jul 17 09:55:28 storm spamd[28111]: spamd: result: . -103 -
>>>> ALL_TRUSTED,BAYES_00,FROM_IS_TO,USER_IN_WHITELIST
>>>> scantime=15.0,size=1843,user=spamd,uid=1001,required_score=5.0,rhost=localhost,raddr=::1,rport=53074,mid=<32889a456ed9c9911ff0034513796858.squirrel@ourdomain>,bayes=0.00,autolearn=no
>>>> autolearn_force=no
>>>> Jul 17 09:55:28 storm spamd[28041]: prefork: child states: II
>>>> 
>>> 
>>> The error is still the same. Do you even have access to those cloudmark
>>> razor servers? Does razor work outside of spamassassin/amavisd?
>> 
>> Is that supposed to be a paid service? This test seems successful. 
>> 
>> razor-check -d <   /usr/share/doc/spamassassin/sample-spam.txt
>> Razor-Log: Computed razorhome from env: /root/.razor
>> Razor-Log: Found razorhome: /root/.razor
>> Razor-Log: read_file: 15 items read from /root/.razor/razor-agent.conf

Re: reason why sendmail w/ SA3.4.1 scantime=15.0, delay=00:01:06 w/ SquirrelMail?

2017-07-17 Thread Robert Kudyba

> On Jul 17, 2017, at 10:28 AM, Tom Hendrikx <t...@whyscream.net> wrote:
> 
> On 17-07-17 16:00, Robert Kudyba wrote:
>> 
>>> On Jul 17, 2017, at 9:39 AM, Antony Stone
>>> <antony.st...@spamassassin.open.source.it
>>> <mailto:antony.st...@spamassassin.open.source.it>> wrote:
>>> 
>>> On Monday 17 July 2017 at 14:25:17, Robert Kudyba wrote:
>>> 
>>>>> On Jul 14, 2017, at 4:00 AM, Matus UHLAR - fantomas
>>>>> <uh...@fantomas.sk <mailto:uh...@fantomas.sk>>
>>> wrote:
>>>>>> Robert Kudyba <rkud...@fordham.edu <mailto:rkud...@fordham.edu>> wrote:
>>>>>>> Over the past few days sending mail via SquirrelMail has become
>>>>>>> glacial. The load on the server is under 1. I've restarted the SA,
>>>>>>> sendmail and dovecot processes several times. Here are some logs I can
>>>>>>> provide any settings if desired.
>>>>> 
>>>>> tried to run a message through "spamassassin -D" ?
>>>>> that should give you debug/timing info.
>>>> 
>>>> OK here is the pastebin of spamassassin -D < gtube.txt:
>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__pastebin.com_iZtm2hhy=DwIFAw=aqMfXOEvEJQh2iQMCb7Wy8l0sPnURkcqADc2guUW8IM=X0jL9y0sL4r4iU_qVtR3lLNo4tOL1ry_m7-psV3GejY=wV3-oZ_3m8NtSuw_6UTtdU1WptL8Pl1vNOok-EXrcZo=802-414zeT59KVCIFVa_uxfSq0XezT7e4OVZibWbIwc=
>>>> 
>>> 
>>> 
>>> Jul 16 09:01:42.796 [29903] dbg: dns: entering helper-app run mode
>>> Jul 16 09:01:47.806 [29903] dbg: dns: leaving helper-app run mode
>>> Jul 16 09:01:47.806 [29903] dbg: razor2: razor2 check timed out after 5
>>> seconds
>> 
>> OK so I ran: /var/spool/amavisd/.razor
>> 
>> ls -l /var/spool/amavisd/.razor
>> total 100
>> -rw-r- 1 amavis amavis 72420 Dec 22  2014 razor-agent.log
>> -rw-r- 1 amavis amavis   998 Jul 17 09:49 server.c301.cloudmark.com.conf
>> -rw-r- 1 amavis amavis   998 Jul 17 09:46 server.c302.cloudmark.com.conf
>> -rw-r- 1 amavis amavis   995 Dec 20  2014 server.c303.cloudmark.com.conf
>> -rw-r- 1 amavis amavis57 Jul 17 09:49 servers.catalogue.lst
>> -rw-r- 1 amavis amavis30 May 23  2013 servers.discovery.lst
>> -rw-r- 1 amavis amavis76 Jul 17 09:49 servers.nomination.lst
>> 
>> New pastebin: 
>> https://urldefense.proofpoint.com/v2/url?u=https-3A__pastebin.com_9RWEYuSt=DwIC-g=aqMfXOEvEJQh2iQMCb7Wy8l0sPnURkcqADc2guUW8IM=X0jL9y0sL4r4iU_qVtR3lLNo4tOL1ry_m7-psV3GejY=QspHQBi1X_n1ZQylsERsyborPsWRSy3cHQlXJ8FUf7c=DZ63JGDSr9nTI6HaajZtLRvUf0ao4tBA4dKtq_77Xlg=
>>  
>> 
>> Still taking 15 seconds.
>> 
>> Jul 17 09:55:28 storm spamd[28111]: spamd: clean message (-103.4/5.0)
>> for spamd:1001 in 15.0 seconds, 1843 bytes.
>> Jul 17 09:55:28 storm spamd[28111]: spamd: result: . -103 -
>> ALL_TRUSTED,BAYES_00,FROM_IS_TO,USER_IN_WHITELIST
>> scantime=15.0,size=1843,user=spamd,uid=1001,required_score=5.0,rhost=localhost,raddr=::1,rport=53074,mid=<32889a456ed9c9911ff0034513796858.squirrel@ourdomain>,bayes=0.00,autolearn=no
>> autolearn_force=no
>> Jul 17 09:55:28 storm spamd[28041]: prefork: child states: II
>> 
> 
> The error is still the same. Do you even have access to those cloudmark
> razor servers? Does razor work outside of spamassassin/amavisd?

Is that supposed to be a paid service? This test seems successful. 

razor-check -d <   /usr/share/doc/spamassassin/sample-spam.txt
 Razor-Log: Computed razorhome from env: /root/.razor
 Razor-Log: Found razorhome: /root/.razor
 Razor-Log: read_file: 15 items read from /root/.razor/razor-agent.conf
Jul 17 10:38:14.467263 check[20932]: [ 2] [bootup] Logging initiated 
LogDebugLevel=9 to stdout
Jul 17 10:38:14.467525 check[20932]: [ 5] computed razorhome=/root/.razor, 
conf=/root/.razor/razor-agent.conf, ident=/root/.razor/identity
Jul 17 10:38:14.467584 check[20932]: [ 2]  Razor-Agents v2.84 starting 
razor-check -d
Jul 17 10:38:14.467707 check[20932]: [ 8] reading straight RFC822 mail from 

Jul 17 10:38:14.467809 check[20932]: [ 6] read 1 mail
Jul 17 10:38:14.467905 check[20932]: [ 8] Client supported_engines: 4 8
Jul 17 10:38:14.468110 check[20932]: [ 8]  prep_mail done: mail 1 headers=293, 
mime0=616
Jul 17 10:38:14.468241 check[20932]: [ 6] skipping whitelist file (empty?): 
/root/.razor/razor-whitelist
Jul 17 10:38:14.468383 check[20932]: [ 5] read_file: 1 items read from 
/root/.razor/servers.discovery.lst
Jul 17 10:38:14.468528 check[20932]: [ 5] read_file: 0 items read from 
/root/.razor/servers.nomination.lst
Jul 17 10:38:14.468671 check[20932]: [ 5] read_file: 3 items r

Re: reason why sendmail w/ SA3.4.1 scantime=15.0, delay=00:01:06 w/ SquirrelMail?

2017-07-17 Thread Robert Kudyba

> On Jul 17, 2017, at 9:39 AM, Antony Stone 
> <antony.st...@spamassassin.open.source.it> wrote:
> 
> On Monday 17 July 2017 at 14:25:17, Robert Kudyba wrote:
> 
>>> On Jul 14, 2017, at 4:00 AM, Matus UHLAR - fantomas <uh...@fantomas.sk> 
> wrote:
>>>> Robert Kudyba <rkud...@fordham.edu> wrote:
>>>>> Over the past few days sending mail via SquirrelMail has become
>>>>> glacial. The load on the server is under 1. I've restarted the SA,
>>>>> sendmail and dovecot processes several times. Here are some logs I can
>>>>> provide any settings if desired.
>>> 
>>> tried to run a message through "spamassassin -D" ?
>>> that should give you debug/timing info.
>> 
>> OK here is the pastebin of spamassassin -D < gtube.txt:
>> https://urldefense.proofpoint.com/v2/url?u=https-3A__pastebin.com_iZtm2hhy=DwIFAw=aqMfXOEvEJQh2iQMCb7Wy8l0sPnURkcqADc2guUW8IM=X0jL9y0sL4r4iU_qVtR3lLNo4tOL1ry_m7-psV3GejY=wV3-oZ_3m8NtSuw_6UTtdU1WptL8Pl1vNOok-EXrcZo=802-414zeT59KVCIFVa_uxfSq0XezT7e4OVZibWbIwc=
>>  
> 
> 
> Jul 16 09:01:42.796 [29903] dbg: dns: entering helper-app run mode
> Jul 16 09:01:47.806 [29903] dbg: dns: leaving helper-app run mode
> Jul 16 09:01:47.806 [29903] dbg: razor2: razor2 check timed out after 5 
> seconds

OK so I ran: /var/spool/amavisd/.razor

ls -l /var/spool/amavisd/.razor
total 100
-rw-r- 1 amavis amavis 72420 Dec 22  2014 razor-agent.log
-rw-r- 1 amavis amavis   998 Jul 17 09:49 server.c301.cloudmark.com.conf
-rw-r- 1 amavis amavis   998 Jul 17 09:46 server.c302.cloudmark.com.conf
-rw-r- 1 amavis amavis   995 Dec 20  2014 server.c303.cloudmark.com.conf
-rw-r- 1 amavis amavis57 Jul 17 09:49 servers.catalogue.lst
-rw-r- 1 amavis amavis30 May 23  2013 servers.discovery.lst
-rw-r- 1 amavis amavis76 Jul 17 09:49 servers.nomination.lst

New pastebin: https://pastebin.com/9RWEYuSt <https://pastebin.com/9RWEYuSt>

Still taking 15 seconds.

Jul 17 09:55:28 storm spamd[28111]: spamd: clean message (-103.4/5.0) for 
spamd:1001 in 15.0 seconds, 1843 bytes.
Jul 17 09:55:28 storm spamd[28111]: spamd: result: . -103 - 
ALL_TRUSTED,BAYES_00,FROM_IS_TO,USER_IN_WHITELIST 
scantime=15.0,size=1843,user=spamd,uid=1001,required_score=5.0,rhost=localhost,raddr=::1,rport=53074,mid=<32889a456ed9c9911ff0034513796858.squirrel@ourdomain>,bayes=0.00,autolearn=no
 autolearn_force=no
Jul 17 09:55:28 storm spamd[28041]: prefork: child states: II
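
(Given the repeated "razor2 check timed out after 5 seconds", two local.cf knobs
are worth trying; this is only a sketch, assuming the Razor2 plugin is loaded
from v310.pre, and the values are placeholders:

razor_timeout 10
# or take Razor out of the picture while testing:
use_razor2 0
)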





Re: reason why sendmail w/ SA3.4.1 scantime=15.0, delay=00:01:06 w/ SquirrelMail?

2017-07-17 Thread Robert Kudyba

> On Jul 14, 2017, at 4:00 AM, Matus UHLAR - fantomas <uh...@fantomas.sk> wrote:
> 
>> Robert Kudyba <rkud...@fordham.edu> wrote:
>>> Over the past few days sending mail via SquirrelMail has become glacial. 
>>> The load on the server is under 1. I've restarted the SA, sendmail and 
>>> dovecot processes several times. Here are
>>> some logs I can provide any settings if desired.
> 
> tried to run a message through "spamassassin -D" ?
> that should give you debug/timing info.

OK here is the pastebin of spamassassin -D < gtube.txt: 
https://pastebin.com/iZtm2hhy






Re: reason why sendmail w/ SA3.4.1 scantime=15.0, delay=00:01:06 w/ SquirrelMail?

2017-07-13 Thread Robert Kudyba
> n*5s delay *may* indicate unresponsive DNS host(s)/resolver(s) in /etc/hots
> [ at least it should be ruled out ]
>

Nah both are university DNS servers that are current.

>
> How long does it take to get SMTP greeting message when you start
> "/usr/sbin/sendmail -bs" as a non root user?
> [ Is it sendmail startup or message processing? ]
>

The latter; sendmail itself starts up and restarts quickly, in a couple of seconds.

>
> > [...]
> > Jul 13 23:04:05 storm spamd[13378]: spamd: processing message <
> 9ca00a710c6bfad3d60dd424cd79ac19.squirrel@our-domain> for root:1001
> > Jul 13 23:04:20 storm spamd[13378]: spamd: clean message (-101.5/5.0)
> for root:1001 in 15.0 seconds, 1193 bytes.
> > Jul 13 23:04:20 storm spamd[13378]: spamd: result: . -101 -
> ALL_TRUSTED,BAYES_00,PYZOR_CHECK,USER_IN_WHITELIST
> >   scantime=15.0 [...]
>
> Hitting PYZOR_CHECK is scary.


So I can try disabling it to see if there's a difference.

Sounds like we should also try a local DNS server too...

>
>
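
(For the Pyzor side the equivalent local.cf knobs would be, as a sketch with
placeholder values and assuming the Pyzor plugin is active:

use_pyzor 0
# or keep it but cap the network wait:
pyzor_timeout 5
)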


reason why sendmail w/ SA3.4.1 scantime=15.0, delay=00:01:06 w/ SquirrelMail?

2017-07-13 Thread Robert Kudyba
Over the past few days sending mail via SquirrelMail has become glacial.
The load on the server is under 1. I've restarted the SA, sendmail and
dovecot processes several times. Here are some logs I can provide any
settings if desired.


Jul 13 23:03:24 storm sendmail[14504]: v6E33EOQ014504:
Authentication-Warning: our-domain: apache set sender to me@our-domain
using -f

Jul 13 23:03:39 storm sendmail[14504]: v6E33EOQ014504: from=me@our-domain,
size=535, class=0, nrcpts=1,
msgid=<9ca00a710c6bfad3d60dd424cd79ac19.squirrel@our-domain>,
relay=apache@localhost

Jul 13 23:04:05 storm sendmail[14629]: v6E33ddm014629: from=,
size=779, class=0, nrcpts=1,
msgid=<9ca00a710c6bfad3d60dd424cd79ac19.squirrel@our-domain>, proto=ESMTP,
daemon=MTA-loopback, relay=localhost [127.0.0.1]

Jul 13 23:04:05 storm sendmail[14629]: v6E33ddm014629: Milter insert (1):
header: X-Virus-Scanned: clamav-milter 0.99.2 at our-domain

Jul 13 23:04:05 storm sendmail[14629]: v6E33ddm014629: Milter insert (1):
header: X-Virus-Status: Clean

Jul 13 23:04:05 storm spamd[13378]: spamd: connection from localhost
[::1]:48316 to port 783, fd 5

Jul 13 23:04:05 storm spamd[13378]: spamd: using default config for root:
/home/spamd/user_prefs

Jul 13 23:04:05 storm spamd[13378]: spamd: processing message
<9ca00a710c6bfad3d60dd424cd79ac19.squirrel@our-domain> for root:1001

Jul 13 23:04:20 storm spamd[13378]: spamd: clean message (-101.5/5.0) for
root:1001 in 15.0 seconds, 1193 bytes.

Jul 13 23:04:20 storm spamd[13378]: spamd: result: . -101 -
ALL_TRUSTED,BAYES_00,PYZOR_CHECK,USER_IN_WHITELIST
scantime=15.0,size=1193,user=root,uid=1001,required_score=5.0,rhost=localhost,raddr=::1,rport=48316,mid=<9ca00a710c6bfad3d60dd424cd79ac19.squirrel@our-domain>,bayes=0.00,autolearn=no
autolearn_force=no

Jul 13 23:04:20 storm sendmail[14629]: v6E33ddm014629: Milter add: header:
X-Spam-Status: No, score=-101.5 required=5.0
tests=ALL_TRUSTED,BAYES_00,\n\tPYZOR_CHECK,USER_IN_WHITELIST autolearn=no
autolearn_force=no version=3.4.1

Jul 13 23:04:20 storm sendmail[14629]: v6E33ddm014629: Milter add: header:
X-Spam-Checker-Version: SpamAssassin 3.4.1 (2015-04-28) on\n\tour-domain

Jul 13 23:04:20 storm sendmail[14504]: v6E33EOQ014504: to=me@our-domain,
ctladdr=me@our-domain (16836/16836), delay=00:01:06, xdelay=00:00:41,
mailer=relay, pri=30535, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0,
stat=Sent (v6E33ddm014629 Message accepted for delivery)

Jul 13 23:04:20 storm spamd[13309]: prefork: child states: II

Jul 13 23:04:20 storm spamd[13378]: spamd: connection from localhost
[::1]:48320 to port 783, fd 5

Jul 13 23:04:20 storm spamd[13378]: spamd: using default config for spamd:
/home/spamd/user_prefs

Jul 13 23:04:20 storm spamd[13378]: spamd: processing message
<9ca00a710c6bfad3d60dd424cd79ac19.squirrel@our-domain> for spamd:1001


Re: mail slipped by with forged/spoofed from: in our domain

2017-06-19 Thread Robert Kudyba
> I don't believe sendmail has any default setting for rejecting HELO names.  
> You should probably add "localdomain" to your access table.
> 

Yep been like this for years:
# By default we allow relaying from localhost...
Connect:localhost.localdomain   RELAY
Connect:localhost   RELAY
Connect:127.0.0.1   RELAY



Re: mail slipped by with forged/spoofed from: in our domain

2017-06-19 Thread Robert Kudyba

> On Jun 19, 2017, at 4:02 PM, Kevin A. McGrail <kevin.mcgr...@mcgrail.com> 
> wrote:
> 
> On 6/19/2017 3:27 PM, Robert Kudyba wrote:
>> 
>> Well this user has his sendmail account from our subdomain forward to his 
>> university Gmail account so that’s where the SPF kicks in. But how come 
>> those first IPs in the mail header pass?
> 
> I don't know, it's hard to tell with a forwarded email.

Do the logs help?

Jun 17 15:53:32 storm sendmail[30146]: v5HJqkrB030146: 
from=<le...@cis.fordham.edu>, size=1019, class=0, nrcpts=1, 
msgid=<857f17b790446b08d4d19827b3e51...@cis.fordham.edu>, bodytype=8BITMIME, 
proto=ESMTP, daemon=MTA, relay=oi66.grupocartonpa
ck.com [189.30.23.66]
Jun 17 15:53:32 storm sendmail[30146]: v5HJqkrB030146: Milter insert (1): 
header: X-Virus-Scanned: clamav-milter 0.99.2 at storm.cis.fordham.edu
Jun 17 15:53:32 storm sendmail[30146]: v5HJqkrB030146: Milter insert (1): 
header: X-Virus-Status: Clean
Jun 17 15:53:32 storm spamd[2840]: spamd: connection from localhost [::1]:59804 
to port 783, fd 5
Jun 17 15:53:32 storm spamd[2840]: spamd: using default config for root: 
/home/spamd/user_prefs
Jun 17 15:53:32 storm spamd[2840]: spamd: processing message 
<857f17b790446b08d4d19827b3e51...@cis.fordham.edu> for root:1001
Jun 17 15:53:37 storm sendmail[30355]: STARTTLS=client, 
relay=aspmx.l.google.com., version=TLSv1.2, verify=FAIL, 
cipher=ECDHE-RSA-AES128-GCM-SHA256, bits=128/128
Jun 17 15:53:38 storm sendmail[30197]: v5HJr7gx030197: 
unassigned.nodeoutlet.com [103.208.244.235] (may be forged) did not issue 
MAIL/EXPN/VRFY/ETRN during connection to MTA
Jun 17 15:53:38 storm sendmail[30196]: v5HJr7qA030196: 
unassigned.nodeoutlet.com [103.208.244.235] (may be forged) did not issue 
MAIL/EXPN/VRFY/ETRN during connection to MTA
Jun 17 15:53:38 storm sendmail[30195]: v5HJr7K1030195: 
unassigned.nodeoutlet.com [103.208.244.235] (may be forged) did not issue 
MAIL/EXPN/VRFY/ETRN during connection to MTA
Jun 17 15:53:47 storm spamd[2840]: spamd: clean message (0.2/5.0) for root:1001 
in 15.0 seconds, 1429 bytes.
Jun 17 15:53:47 storm spamd[2840]: spamd: result: . 0 - 
BAYES_00,FROM_IS_TO,PYZOR_CHECK,RCVD_NUMERIC_HELO,T_SPF_HELO_TEMPERROR,T_SPF_TEMPERROR
 
scantime=15.0,size=1429,user=root,uid=1001,required_score=5.0,rhost=localhost,raddr=::1,rport=59804,mid=<857f17b790446b08d4d19827b3e51...@cis.fordham.edu>,bayes=0.001885,autolearn=no
 autolearn_force=no
Jun 17 15:53:47 storm sendmail[30146]: v5HJqkrB030146: Milter add: header: 
X-Spam-Status: No, score=0.2 required=5.0 
tests=BAYES_00,FROM_IS_TO,\n\tPYZOR_CHECK,RCVD_NUMERIC_HELO,T_SPF_HELO_TEMPERROR,T_SPF_TEMPERROR\n\tautolearn=no
 autolearn_force=no version=3.4.1
Jun 17 15:53:47 storm sendmail[30146]: v5HJqkrB030146: Milter add: header: 
X-Spam-Checker-Version: SpamAssassin 3.4.1 (2015-04-28) 
on\n\tstorm.cis.fordham.edu
Jun 17 15:55:08 storm sendmail[30476]: v5HJqkrB030146: 
to=<le...@cis.fordham.edu>, ctladdr=<le...@cis.fordham.edu> (15746/1500), 
delay=00:01:36, xdelay=00:01:21, mailer=local, pri=31610, dsn=2.0.0, stat=Sent
Jun




Re: mail slipped by with forged/spoofed from: in our domain

2017-06-19 Thread Robert Kudyba
> The biggest issue I see is the SPF approval:
> ARC‐Authentication‐Results: i=1; mx.google.com;
> 
>spf=pass (google.com: best guess record for domain of
> le...@cis.fordham.edu  designates 150.108.68.26 
> as permitted sender)
> 
> Perhaps a compromised account?

Well this user has his sendmail account from our subdomain forward to his 
university Gmail account so that’s where the SPF kicks in. But how come those 
first IPs in the mail header pass?



mail slipped by with forged/spoofed from: in our domain

2017-06-19 Thread Robert Kudyba
We use sendmail-8.15.2-8.fc25 on Fedora 25 with spamassassin-3.4.1-9. Can 
anyone explain how this email got through with a forged from: address? 
https://pastebin.com/L7NKCK3E 

The 1st received IP is not on any real time blacklist as of this moment:

Received: from 167.249.16.132

The 2nd IP in the mail header trail now shows up in BACKSCATTER, BLOCKLIST.DE 
and MAILSPIKE BL

Received: from embacelsga.localdomain (oi66.grupocartonpack.com [189.30.23.66])

But shouldn’t the default settings in sendmail.mc/cf check for spoofing of the 
HELO?

Re: version 3.4.1 with block TLD

2017-06-13 Thread Robert Kudyba

> On Jun 12, 2017, at 9:44 PM, Joseph Brennan <bren...@columbia.edu> wrote:
> 
> 
> 
> --On June 8, 2017 at 12:07:43 PM -0400 Robert Kudyba <rkud...@fordham.edu> 
> wrote:
> 
> I would like
>> to block *@*.us but allow the cities and schools that use them so allow
>> examples like @ci.boston.ma.us and corunna.k12.mi.us. I don’t think
>> this can be done with the access.db file in sendmail.
> 
> 
> Sendmail access.db? It's easy:
> 
> From:us REJECT
> From:ci.boston.ma.us OK
> From:corunna.k12.mi.usOK
> 
> Or name the states:
> 
> From:us   REJECT
> From:ma.usOK
> From:mi.usOK

Thanks for the suggestion; I didn’t realize it worked with the top-level domain as 
well as with the prefix before the TLD. Does the order matter? Meaning, should the:
From:us REJECT 
come before the “OK” lines?

Also, where can I add a rejection for the hostname colocrossing.com when it 
appears in the relay line like this:
v5CB8tYp022453: ruleset=check_mail,  relay=198-23-201-144-host.colocrossing.com 
[198.23.201.144] (may be forged), 

I tried:
colocrossing.comREJECT

But that doesn’t seem to work.
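
(A sketch of what usually works here, assuming FEATURE(`access_db') is enabled:
entries that should match the connecting client are tagged Connect:, and since
sendmail marks that name "(may be forged)" the IP prefix is the more reliable
key. Remember to rebuild the map with makemap afterwards.

Connect:colocrossing.com        REJECT
Connect:198.23.201              REJECT
)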

Re: version 3.4.1 with block TLD

2017-06-08 Thread Robert Kudyba

> i just upgrade to the lates version 3.4.1,  
> 
> understand this version help to combat top level domain spam mail.
> 
> 
> so how to block some of the  domain , using black_list or custom rules ?
> 
> black_list_from  *@*.top
> black_list_from *@*.us
> 
> or 
> 
> custom rules ?

There was a pretty big thread about this suggesting to do this at the MTA 
level: https://lists.gt.net/spamassassin/users/198207/?page=1; 


What I’ve noticed is the *@*.us has picked up steam over the past couple of 
days. Since we use sendmail, it uses regular expressions and something called 
LOCAL_CONFIG and LOCAL_RULESETS macros, see 
http://www.xiitec.com/blog/2009/02/25/using-regular-expressions-in-sendmail/ 
 
but negative look-aheads aren’t supported (basic POSIX). I would like to block 
*@*.us but allow the cities and schools that use them so allow examples like 
@ci.boston.ma.us and corunna.k12.mi.us. I don’t think this can be done with the 
access.db file in sendmail.
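
(If it has to be done in SA rather than in sendmail, a sketch using the stock
list directives; note the spelling blacklist_from / unblacklist_from, and that a
hit carries the very heavy USER_IN_BLACKLIST score:

blacklist_from    *@*.us
unblacklist_from  *@*.ma.us
unblacklist_from  *@*.k12.mi.us
)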

Re: lots of missed spam/false negatives from .info TLD being marked with URIBL_RHS_DOB

2017-05-30 Thread Robert Kudyba
> For the past few days lots of missed spam has been getting through, running
>>> SA 3.4.1 on Fedora 25 with sendmail. I see that they are being tagged with
>>> URIBL_RHS_DOB, i.e.,  domains registered in the last five days. Since we
>>> are not running our own DNS server (yet--need permission from our CISO)
>>> URIBL_BLOCKED is also being triggered. Is there a way to update this?
> 
>> Update what how?

You answered below…thanks.

> 
>> I note that message hit BAYES_00. If content like that is getting a 
>> "strong ham" Bayes score, you should review your training processes and 
>> Bayes corpora - you *do* keep copies of messages you train Bayes with, 
>> right? :)

Yes just re-synced.


> If you trust URIBL_RHS_DOB to not hit your ham, you can increase the score 
>> of URIBL_RHS_DOB in your local rules file.
> 
>> If you'd prefer a more-focused solution, use a meta rule; perhaps:
> 
>>meta  LCL_DOB_FROM_INFO   __FROM_DOM_INFO && URIBL_RHS_DOB
>>score LCL_DOB_FROM_INFO   2.500  # or whatever you're comfortable with


Great trying this now.
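
(One caveat: __FROM_DOM_INFO is not a stock sub-rule, so if --lint later
complains about an undefined dependency it needs a companion definition next to
the meta. A sketch, the pattern being only a guess at the intent:

header __FROM_DOM_INFO   From:addr =~ /\.info$/i
)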
> 
>> But: fixing your Bayes and getting a non-forwarding DNS server for your 
>> mail system so that you're not hitting RBL query limits are the biggest 
>> things you need to do to address this.

It’s enabled and looks like it’s working, based on this output and on use_bayes 1 in 
local.cf:
sa-learn --dump magic
0.000  0  3  0  non-token data: bayes db version
0.000  0688  0  non-token data: nspam
0.000  0  80012  0  non-token data: nham
0.000  0 164827  0  non-token data: ntokens
0.000  0 1485101489  0  non-token data: oldest atime
0.000  0 1496149547  0  non-token data: newest atime
0.000  0  0  0  non-token data: last journal sync atime
0.000  0 1496152035  0  non-token data: last expiry atime
0.000  0   11059200  0  non-token data: last expire atime delta
0.000  0  99547  0  non-token data: last expire reduction 
count

> 
>>> I have't seen an update in sa-update since 03-May-2017 01:52:05:
> 
>> Masscheck and updates are *almost* back.

Great I’ll keep an eye out.

> 
>>> Here's a typical mail header & message content:
>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__pastebin.com_Rw1S7mWe=DwIFAw=aqMfXOEvEJQh2iQMCb7Wy8l0sPnURkcqADc2guUW8IM=X0jL9y0sL4r4iU_qVtR3lLNo4tOL1ry_m7-psV3GejY=bpKADIzstZa5G-g1qsGBa7gWKq4zTcrA_-E0jGYOsdo=_uJa-KDGfZ2CN8vjSlDNEmfotigbWHyD9TZaKnJwzNM=
>>>  
>>> 
>>>  
> 
>> Thanks for that.


Looks like the IP is being picked up on a few RBLs now.

> 
> Do you have any RBLs setup in sendmail?  You need
> to use bb.barracudacentral.org  and 
> zen.spamhaus.org 
> at a minimum.  Hopefully your DNS server situation
> can get fixed soon so you can use BLs successfully.
> 
Indeed we do plus spamcop:
FEATURE(`dnsbl', `b.barracudacentral.org', `', `"550 Mail from " 
$&{client_addr} " refused. Rejected for bad WHOIS info on IP of your SMTP 
server " in http://www.barracudacentral.org/lookups "')dnl
FEATURE(`dnsbl',`zen.spamhaus.org')dnl
FEATURE(`enhdnsbl', `bl.spamcop.net', `"Spam blocked see: 
http://spamcop.net/bl.shtml?"$&{client_addr}', `t')dnl

> If you switched to Postfix, there are many benefits
> to using Postscreen with weighted RBLs.  I have over
> 20 RBLs working together for best accuracy and low
> false positives.

We have several mailing lists and users past & present and the transition would 
be a bit painful.


> SpamAssassin is primarily going to be a content filter
> with some reputation checks.  Setup the MTA to be
> primarily reputation checks with DNS (i.e. make sure
> the sending IP has a PTR record [RDNS_NONE]) and
> RBL lookups.
> 
> The MTA should be blocking the majority of spam
> before it gets to SpamAssassin.

That’s what I thought, and we have even more filters in place, including the 
suggestion in 
https://www.autonarcosis.com/2015/10/14/vanity-top-level-domains-how-to-block-them-using-sendmail/
 

 to use the access file to block all of those vanity top level domains. I even 
have a regex to block anysubdomain.anydomain.us|info. And we have 
clamav-unofficial-sigs from extremeshok enabled.

Anything else to check?

lots of missed spam/false negatives from .info TLD being marked with URIBL_RHS_DOB

2017-05-29 Thread Robert Kudyba
For the past few days lots of missed spam has been getting through, running
SA 3.4.1 on Fedora 25 with sendmail. I see that they are being tagged with
URIBL_RHS_DOB, i.e.,  domains registered in the last five days. Since we
are not running our own DNS server (yet--need permission from our CISO)
URIBL_BLOCKED is also being triggered. Is there a way to update this? I
haven't seen an update in sa-update since 03-May-2017 01:52:05:
SpamAssassin: Update processed successfully. Here's a typical mail header &
message content:
https://pastebin.com/Rw1S7mWe


Re: URIBL_BLOCKED on 2 Fedora 25 servers with working dnsmasq, w/ NetworkManager service

2017-05-19 Thread Robert Kudyba
>
> Wiki page updated and simplified.
>
> https://wiki.apache.org/spamassassin/CachingNameserver


For Fedora, since NetworkMangler (as many are fond to call it) is enabled
by default, it might be worthwhile to mention the comment below, but note that
/etc/resolv.conf will be managed by the dnssec-trigger daemon:
https://fedoraproject.org/wiki/Changes/Default_Local_DNS_Resolver#How_to_get_Unbound_and_dnssec-trigger_running
"If you use NetworkManager, configure it to use unbound. Add the following
line into /etc/NetworkManager/NetworkManager.conf
dns=unbound"


Re: URIBL_BLOCKED on 2 Fedora 25 servers with working dnsmasq, w/ NetworkManager service

2017-05-18 Thread Robert Kudyba
On May 18, 2017 5:11 PM, "Reindl Harald" <h.rei...@thelounge.net> wrote:



Am 18.05.2017 um 23:05 schrieb Robert Kudyba:

>
> On May 18, 2017, at 4:41 PM, David Jones <djo...@ena.com> wrote:
>>
>> From: Robert Kudyba <rkud...@fordham.edu <mailto:rkud...@fordham.edu>>
>>>
>>
>> Am 18.05.2017 um 22:30 schrieb Reindl Harald:
>>>>
>>>>> "with working dnsmasq" says all - DNSMASQ DON'T DO RECURSION - IT CAN#T
>>>>> you are forwarding to some other nameserver and you are not the only
>>>>> one
>>>>>
>>>>
>> But the nameserver I’m forwarding to is in our university.
>>>
>>
>> Your server needs to do it's on full recursive DNS lookups.
>>
>
> So dnsmasq is no longer an option?
>

it was never - no dns software which needs another nameserver for it's job
is suiteable on a inbound spamfilter

I will fix this wiki page now…
>>
>
> I see there’s rbldnsd. On Fedora and one of our 2 servers, we run NIS &
> ypbind. One runs NetworkManager and the other just the network service. I
> guess I’m looking for the best recommendation and easy configuration
> without conflicts. The link to http://njabl.org/rsync.html is broken
> at the moment
>

rbldnsd is a completly different thing and supposed to host your *own*
dnsbl zones

what you you need is a *basic* namesever just donig recursion and tell your
mailserver just use it

* get rid of other crap
* dnf install unbound
* systemctl enable unbound
* systemctl start unound
* just use your unbound on 127.0.0.1


It looks like I'll have to

   - Add the following line into /etc/NetworkManager/NetworkManager.conf

dns=unbound

or ask the idiot maintaining "I'm forwarding to is in our university" why
he is forwarding queries outside your university to google instead doing
recursion


Probably because the university uses gmail. Our department does not.
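
(For what it's worth, once a local recursive resolver such as unbound is
answering on 127.0.0.1, a quick sanity check is:

dig +short TXT test.uribl.com.multi.uribl.com @127.0.0.1

The permanent test point should then resolve instead of coming back with
"Query Refused ... Your DNS IP: <some Google resolver>".)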


Re: URIBL_BLOCKED on 2 Fedora 25 servers with working dnsmasq, w/ NetworkManager service

2017-05-18 Thread Robert Kudyba

> On May 18, 2017, at 4:41 PM, David Jones <djo...@ena.com> wrote:
> 
>> From: Robert Kudyba <rkud...@fordham.edu>
> 
>>> Am 18.05.2017 um 22:30 schrieb Reindl Harald:
>>>> "with working dnsmasq" says all - DNSMASQ DON'T DO RECURSION - IT CAN#T
>>>> you are forwarding to some other nameserver and you are not the only one
> 
>> But the nameserver I’m forwarding to is in our university.
> 
> Your server needs to do it's on full recursive DNS lookups.

So dnsmasq is no longer an option?

> 
>>> /etc/resolv.dnsmasq
>>> search subdomain.ourschool.edu ourschool.edu
>>> nameserver 150.108.x.yy
>>> nameserver 150.108.y.xx
>>> 
>>> seriously - what do you think happens?
>>> you and everybody else on planet earth using 150.xx.xx.xx are coming with
>> the same IP to the DNSBL/URIBL hosts
> 
> He's being rude but he's right.  You can't guarantee that all of the other DNS
> queries being made through your university DNS servers isn't going over the
> free limit on the URIBL DNS servers.
> 
>> Isn’t the point of enabling dnsmasq to cache DNS calls? I’m just following 
>> the
>> instructions at  
>> https://urldefense.proofpoint.com/v2/url?u=https-3A__wiki.apache.org_spamassassin_CachingNameserver-23=DwIFEA=aqMfXOEvEJQh2iQMCb7Wy8l0sPnURkcqADc2guUW8IM=X0jL9y0sL4r4iU_qVtR3lLNo4tOL1ry_m7-psV3GejY=Xfhs5TxObQNstiygWZx6rtuJIMJ_Q65ueMPfIdG6MPw=YjlCBF15mxOWWMeVSUh_L9Jz1s8o454zFPqUC_5chAU=
>>  
>> Installing_dnsmasq_as_a_Caching_Nameserver which BTW has a broken
>> link to instructions.
> 
> I will fix this wiki page now…

I see there’s rbldnsd. On Fedora and one of our 2 servers, we run NIS & ypbind. 
One runs NetworkManager and the other just the network service. I guess I’m 
looking for the best recommendation and easy configuration without conflicts. 
The link to http://njabl.org/rsync.html <http://njabl.org/rsync.html> is broken 
at the moment. 



Re: URIBL_BLOCKED on 2 Fedora 25 servers with working dnsmasq, w/ NetworkManager service

2017-05-18 Thread Robert Kudyba

> Am 18.05.2017 um 22:30 schrieb Reindl Harald:
>> "with working dnsmasq" says all - DNSMASQ DON'T DO RECURSION - IT CAN#T
>> you are forwarding to some other nameserver and you are not the only one

But the nameserver I’m forwarding to is in our university.

> /etc/resolv.dnsmasq
> search subdomain.ourschool.edu ourschool.edu
> nameserver 150.108.x.yy
> nameserver 150.108.y.xx
> 
> seriously - what do you think happens?
> you and everybody else on planet earth using 150.xx.xx.xx are coming with the 
> same IP to the DNSBL/URIBL hosts

Isn’t the point of enabling dnsmasq to cache DNS calls? I’m just following the 
instructions at 
https://wiki.apache.org/spamassassin/CachingNameserver#Installing_dnsmasq_as_a_Caching_Nameserver
 which BTW has a broken link to instructions.



URIBL_BLOCKED on 2 Fedora 25 servers with working dnsmasq, w/ NetworkManager service

2017-05-18 Thread Robert Kudyba
I know this has been covered before, e.g., 
https://lists.gt.net/spamassassin/users/198845/?page=1;mh=-1; & 
https://lists.gt.net/spamassassin/users/199135 as well as off list at Ubuntu at 
https://serverfault.com/questions/644707/uribl-blocked-on-ubuntu-14-04-server-with-working-dnsmasq.
 Here’s what we’re getting on 2 Fedora 25 servers:

host -tTXT test.uribl.com.multi.uribl.com
test.uribl.com.multi.uribl.com descriptive text "127.0.0.1 -> Query Refused. 
See http://uribl.com/refused.shtml for more information [Your DNS IP: 
74.125.19.15]"
[root@storm audit]# 

Note the DNS IP is a Google IP and always changes when I run the command.

I just want to make sure I’m not missing something. NetworkManager and network 
service are running and here you can see dnsmasq running with NM:

NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; 
vendor preset: enabled)
   Active: active (running) since Wed 2017-05-17 17:07:27 EDT; 17h ago
 Docs: man:NetworkManager(8)
 Main PID: 24310 (NetworkManager)
Tasks: 4 (limit: 4915)
   CGroup: /system.slice/NetworkManager.service
   ├─24310 /usr/sbin/NetworkManager --no-daemon
   └─24468 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground 
--no-hosts --bind-interfaces --pid-file=/var/run/NetworkManager/dnsmasq.pid 
--listen-address=127.0.0.1 --cache-size=400 --conf-file=/dev/null 
--proxy-dnssec --enable-dbus=org.free

Some logs to show dnsmasq in use:
May 17 14:23:32 ourserver dnsmasq[2336]: reading /etc/resolv.conf
May 17 14:23:32 ourserver dnsmasq[2336]: using nameserver 150.108.x.yy#53
May 17 14:23:32 ourserver dnsmasq[2336]: using nameserver 150.108.x.zz#53
May 17 14:23:32 ourserver dnsmasq[2336]: reading /etc/resolv.conf
May 17 14:23:32 ourserver dnsmasq[2336]: using nameserver 127.0.0.1#53

cat /etc/resolv.conf
# Generated by NetworkManager
search subdomain.ourdomain.edu
nameserver 127.0.0.1

dns=dnsmasq is set in the [main] section of 
/etc/NetworkManager/NetworkManager.conf 

And some digs to show before/after:
dig www.google.co.nz

; <<>> DiG 9.10.4-P8-RedHat-9.10.4-5.P8.fc25 <<>> www.google.co.nz
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50850
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;www.google.co.nz.  IN  A

;; ANSWER SECTION:
www.google.co.nz.   299 IN  A   172.217.10.67

;; Query time: 20 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Thu May 18 10:52:59 EDT 2017
;; MSG SIZE  rcvd: 61

[root@storm audit]# dig www.google.co.nz

; <<>> DiG 9.10.4-P8-RedHat-9.10.4-5.P8.fc25 <<>> www.google.co.nz
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53814
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.google.co.nz.  IN  A

;; ANSWER SECTION:
www.google.co.nz.   297 IN  A   172.217.10.67

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Thu May 18 10:53:01 EDT 2017
;; MSG SIZE  rcvd: 61


host -tA 2.0.0.127.multi.uribl.com
2.0.0.127.multi.uribl.com has address 127.0.0.1

/etc/dnsmasq.conf
port=0
resolv-file=/etc/resolv.dnsmasq
strict-order
no-dhcp-interface=enp7s0f0
bind-interfaces
listen-address=127.0.0.1,150.108.xx.yy,127.0.1.1
interface=enp7s0f0
domain=ourdomain.ourschool.edu

/etc/resolv.dnsmasq 
search subdomain.ourschool.edu ourschool.edu
nameserver 150.108.x.yy
nameserver 150.108.y.xx

 /etc/resolv.conf
# Generated by NetworkManager
search subdomain.ourschool.edu
nameserver 127.0.0.1

Am I missing something?

Re: Strict/Relaxed DKIM alignment possible with SA?

2017-05-07 Thread Robert Schetterer
Am 07.05.2017 um 13:08 schrieb Matus UHLAR - fantomas:
>>> > > On 07.05.17 00:46, Thore Boedecker wrote:
>>> > > > Thanks for all the great advice so far.
>>> > > >
>>> > > > Currently I'm playing around with opendkim->opendmarc->amavisd
>>> on my
>>> > > > testserver.
>>> > > >
>>> > > > My current postfix setup is using spampd as proxy and thus any
>>> > > > opendkim/opendmarc milters won't work in cojunction.
>>>
>>> > > > I've been planning to switch to amavis and use it as a milter for
>>> > > > quite some time now so maybe I should get on with it.
>>> [...]
>>> > > > Compiling opendmarc against libspf2 makes the opendmarc
>>> internal SPF
>>> > > > checker functional and now the SA SPF checks (triggered by
>>> amavis) are
>>> > > > firing as well.
>>>
>>> > On 07.05.17 - 11:46, Matus UHLAR - fantomas wrote:
>>> > > I would like to note that SPF can be used without openDMARC, and
>>> imho should
>>> > > work in SA itself.
>>> > >
>>> > > Did you (try to) make SPF working on valhalla.nano-srv.net?
>>>
>>> On 07.05.17 12:05, Thore Boedecker wrote:
>>> > It seems that I simply forgot the load the SPF module in my
>>> > spamassassin config.
>>> >
>>> > A few test mails from different servers are now hitting at least
>>> > the SPF_HELO_PASS rule but nothing else so far.
> 
>> On 07.05.17 - 12:27, Matus UHLAR - fantomas wrote:
>>> try running spamassassin -D on a mail, if you get something like:
>>>
>>> May  6 22:38:47.009 [30327] dbg: spf: relayed through one or more
>>> trusted
>>> relays, cannot use header-based Envelope-From, skipping
>>>
>>> it may be caused by postfix forwarding mail via localhost
>>> - it's better to know if spampd (or later amavisd) can work around that.
>>>
>>> SPF_PASS, SPF_NEUTRAL, SPF_NONE, SPF_SOFTFAIL and SPF_FAIL will indicate
>>> that SPF works as expected.
> 
> On 07.05.17 12:46, Thore Boedecker wrote:
>> I have played around with it and SA is not performing actual SPF
>> queries/validations due to the use of spampd on localhost as a proxy.
> 
> that's why I recommended trying policyd-spf on valhalla.nano-srv.net
> - it could be able to push Received-SPF: header SA could use after...
> 
>> The only way around this, that I know of, would be to switch to amavis
>> as it can be used as a milter.
>> Or is there a way to make SA work as a milter in postfix?
> 
> spamass-milter should be supported.

has been working perfectly here for years
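
(A sketch of the usual postfix wiring for spamass-milter in main.cf; the socket
path varies by distro/build and is only a placeholder here:

smtpd_milters = unix:/run/spamass-milter/postfix/sock
non_smtpd_milters = $smtpd_milters
milter_default_action = accept
)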

> I run amavisd-milter on one machine.
> 



Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: sa-compile will not configure

2017-04-21 Thread Robert Steinmetz AIA
No I don't. I have not used Gentoo and am not on that mailing list. I 
hope my questions do not appear too newbie, since I've been running Unix 
systems for a long time, although this particular one had me stumped. 
Thelma is the name of one of my servers, another is Louise. That 
location has servers with all female names our other location has all 
male names.


Ian Zimmerman wrote:

On 2017-04-20 17:31, Robert Steinmetz AIA wrote:


thelma@thelma:~$ echo $PATH

BTW, do you have any connection to the Thelma who's asking a constant
stream of close-to-newbie questions in the Gentoo user mailing list?

It's not that common a name, so forgive me for the short-circuit in my
brain :-)


--

Robert Steinmetz AIA
Principal
Steinmetz & Associates

New Orleans & Atlanta



Re: sa-compile will not configure

2017-04-20 Thread Robert Steinmetz AIA

Thank you Bill,

I checked all of the permissions at every level; they were all 755 
except as noted, which I changed to 755.

It works now.

I'll re-check this in the morning and run security scans to make sure 
everything is tied down.


I appreciate your help.


Bill Cole wrote:

On 20 Apr 2017, at 16:16, Robert Steinmetz AIA wrote:


Thank you Bill,

That has given me a clue. I ran the commands below:


thelma@thelma:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11:/usr/games:/usr/local/games:/snap/bin 



thelma@thelma:~$ ls -ld /usr/local/sbin
drwxr-xr-x 2 root root 48 Mar 11  2007 /usr/local/sbin

thelma@thelma:~$ ls -ld /usr/local/bin
drwxr-xr-x 2 root root 48 Mar 11  2007 /usr/local/bin


OK, it MAY be that perl is also looking up the tree. If /usr/local is 
world-writable, then these 2 effectively are also (since they could be 
renamed and replaced with evil twins.)




thelma@thelma:~$ ls -ld /usr/sbin
drwxr-xr-x 2 root root 11752 Apr 18 13:06 /usr/sbin

thelma@thelma:~$ ls -ld /usr/bin
drwxr-xr-x 4 root root 72872 Apr 18 16:44 /usr/bin


The above note about /usr/local also applies to /usr


thelma@thelma:~$ ls -ld /usr/sbin
drwxr-xr-x 2 root root 11752 Apr 18 13:06 /usr/sbin


I think you meant /sbin here.


thelma@thelma:~$ ls -ld /bin
drwxrwxrwx 3 root root 4352 Apr 15 19:06 /bin


That's a problem. Enough of a problem that if this system has any 
other users who could have logged in OR any remote-accessible services 
that might be attack paths, you should reload from bare metal. Having 
/bin (or /sbin, /usr/bin, or /usr/sbin) world-writable essentially 
hands over control to anyone who knows about the flaw, wants the 
machine, and has some way to get in.


I have no idea how /bin could have become world-writable short of 
administrative malpractice or a prior malicious system compromise.



 ls -ld /usr/bin/X11
lrwxrwxrwx 1 root root 1 Mar 11  2007 /usr/bin/X11 -> .


That's a weird Ubuntu (or Debian?) quirk. It shouldn't be necessary 
but it probably shouldn't be fiddled with either, except maybe to 
'chmod -h o-w /usr/bin/X11' (to remove the world-writable permission 
from the symlink.)



ls -ld /usr/games
drwxr-xr-x 2 root root 784 Apr 15 18:17 /usr/games

ls -ld /usr/local/games
drwxr-xr-x 2 root root 48 Mar 11  2007 /usr/local/games

ls -ld /snap/bin
ls: cannot access '/snap/bin': No such file or directory

Note that /snap/bin doesn't exist and  /usr/bin/X11 links to "."

I added  /snap/bin as an empty directory but it still fails


I've not seen that in a default $PATH. Is it something you use 
locally? Here again, the permissions of directories up to the root MAY 
be taken into account by perl for untainting purposes, I'm not sure. 
There's  no sound reason for /, /usr, or any directory whose contents 
you care about to be world-writable without the "sticky" bit set (as 
with /tmp and /var/tmp) so you could safely do this:


   chmod -h o-w $( echo $PATH | tr ':' ' ' )





--
Rob



Re: sa-compile will not configure

2017-04-20 Thread Robert Steinmetz AIA

  
  
Reindl Harald wrote:


  just ask your distribution how they broke your environment
  
  
  this is *not* a spamassassin issue and all the stuuf you do abvoe
  is not supposed to make things better - how do you imagine "I
  deleted the /usr/bin/X11 link and created a new directory
  /usr/bin/X11 but it still failed" has any relation to
  spamassassin?
  

While not a direct SpamAssassin issue, SpamAssassin is the only
package this problem affects; every other package configures
properly. It is ultimately a perl issue, but as Bill Cole helpfully
wrote:
The sa-compile script DOES use a SA utility
  function to untaint the whole %ENV hash, but there's a special
  catch for $ENV{'PATH'}: if any directories included are not
  absolute (e.g. commonly '.' and '~/bin') OR are writable by more
  than their owning user & group, $ENV{'PATH'} remains tainted
  and won't be used or passed to child processes. Often a bad
member directory is unobvious because it is a symlink name and
symlinks are usually technically mode 777 because the system
  doesn't use the mode of a symlink itself.
  

I was checking to see if all of the directories in $PATH were OK. I
posted the entire results of those tests to allow someone more
knowledgeable than I am about Spamassassin and perl to see if there
was another problem. 

The reason for deleting and reinstalling /usr/bin/X11 was to test
Bill Cole's suggestion that a symbolic link might be the cause
of the issue; I think I've shown it's not.

Which links back to specific SpamAssassin code. Determining why that
code is not functioning correctly is the route to solving the whole
problem. If it turns out the SpamAssassin code is to blame, then it
is a SpamAssassin issue.


  




Re: sa-compile will not configure

2017-04-20 Thread Robert Steinmetz AIA

Thank you Bill,

That has given me a clue. I ran the commands below:


thelma@thelma:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11:/usr/games:/usr/local/games:/snap/bin

thelma@thelma:~$ ls -ld /usr/local/sbin
drwxr-xr-x 2 root root 48 Mar 11  2007 /usr/local/sbin

thelma@thelma:~$ ls -ld /usr/local/bin
drwxr-xr-x 2 root root 48 Mar 11  2007 /usr/local/bin

thelma@thelma:~$ ls -ld /usr/sbin
drwxr-xr-x 2 root root 11752 Apr 18 13:06 /usr/sbin

thelma@thelma:~$ ls -ld /usr/bin
drwxr-xr-x 4 root root 72872 Apr 18 16:44 /usr/bin

thelma@thelma:~$ ls -ld /usr/sbin
drwxr-xr-x 2 root root 11752 Apr 18 13:06 /usr/sbin

thelma@thelma:~$ ls -ld /bin
drwxrwxrwx 3 root root 4352 Apr 15 19:06 /bin

 ls -ld /usr/bin/X11
lrwxrwxrwx 1 root root 1 Mar 11  2007 /usr/bin/X11 -> .

ls -ld /usr/games
drwxr-xr-x 2 root root 784 Apr 15 18:17 /usr/games

ls -ld /usr/local/games
drwxr-xr-x 2 root root 48 Mar 11  2007 /usr/local/games

ls -ld /snap/bin
ls: cannot access '/snap/bin': No such file or directory

Note that /snap/bin doesn't exist and  /usr/bin/X11 links to "."

I added  /snap/bin as an empty directory but it still fails

thelma@thelma:/usr/bin$ ls -ld /snap/bin
drwxr-xr-x 2 root root 48 Apr 20 15:55 /snap/bin

I was a little concerned about what to do with /usr/bin/X11.

I deleted the /usr/bin/X11 link and created a new directory /usr/bin/X11 
but it still failed.


I deleted the directory and remade the link.

I'd also prefer not to modify sa-compile since the next time there is an 
update it will likely be overwritten.


Hopefully someone can offer a clue.

Bill Cole wrote:


Inside a perl script, the execution environment is available in the 
%ENV hash, with variable names as keys, so the execution search path 
"PATH" is "$ENV{'PATH'}". The %ENV hash is considered "tainted" as 
untrustworthy input by perl, so if the interpreter is run with the 
"-T" option, any subprocess launched by perl won't get any environment 
variables unless the script has done something to "untaint" members of 
that hash. The sa-compile script DOES use a SA utility function to 
untaint the whole %ENV hash, but there's a special catch for 
$ENV{'PATH'}: if any directories included are not absolute (e.g. 
commonly '.' and '~/bin') OR are writable by more than their owning 
user & group, $ENV{'PATH'} remains tainted and won't be used or passed 
to child processes. Often a bad member directory is unobvious because 
it is a symlink name and symlinks are usually technically mode 777 
because the system doesn't use the mode of a symlink itself.


What I expect is happening is that there's a problematic directory in 
the $PATH that perl gets when executed, so the blind untainting of 
$ENV{'PATH'} that sa-compile does won't work. The best fix is to find 
the insecure member of $PATH and remove it before trying to run 
sa-compile.
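
(A quick way to spot such a member; this is only a rough sketch of what the
taint logic cares about, listing $PATH directories that are group- or
world-writable, with symlinks dereferenced:

echo "$PATH" | tr ':' '\n' | xargs -r ls -ldL 2>/dev/null | grep -E '^d....w|^d.......w'
)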







Re: Spam from .br TLDs

2017-04-20 Thread Robert Schetterer
Am 20.04.2017 um 15:57 schrieb RW:
> On Wed, 19 Apr 2017 17:37:42 +0200
> Heinrich Boeder wrote:
> 
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>>
>> Hi all,
>>
>> I tried the Ruleset Robert Schetterer suggested but I still get Spam.
>> The rules barely get a hit. I am pretty sure that the rules are
>> outdated.

Hm, any chance to add the rest of the spam content to the rules?
Any Brazilian list users alive here? What about getting the rules up to date?

>>
>> I´d also like to avoid excluding Portuguese Mails or Spanish Mails by
>> using ok_languages because I get (wanted) mails with english text and
>> spanish signatures for example. Or would you say that the TextCat
>> Plugin works so well that the chances of a false positive are really
>> low?
> 
> TextCat's UNWANTED_LANGUAGE_BODY rule requires that one or more
> languages are found in the text and none of them is in the ok_languages
> list.  TextCat isn't actually very good, but for me it fails by missing
> spam rather than hitting ham. Anyway, it's only 2.8 points, so rule FPs
> will rarely translate into an overall FP, but it's enough to combine
> with BAYES_95 or above to get over the 5.0 threshold.
> 



Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: sa-compile will not configure

2017-04-19 Thread Robert Steinmetz
Robert Steinmetz wrote:
  
Responding to my own post with new information.

I think I've confirmed that the problem is the $PATH, or the perl
equivalent. I added the full path name where the specific commands were
called and that removed that error. Some of the commands seem to be
called from other perl scripts, so the problem seems to be my perl
setup. But I'm not clear how perl actually sets the path.

The user's login shell is /bin/sh.

I often sudo bash if I am doing a lot of admin work, rather than typing
sudo for each command.

The script begins with #!/usr/bin/perl -T -w, which invokes perl.

When I invoke sudo sh I get the same results, although every interactive
shell I have tried includes /bin, /sbin, /usr/bin and /usr/sbin:

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin

Some have a couple more options.

> BTW, plain text (not HTML) would be appreciated.

I wasn't sure of the protocol here but I thought I had. I'll try to
remember.
  



-- 
Robert Steinmetz, AIA
Principal
Steinmetz & Associates



Re: Spam from .br TLDs

2017-04-18 Thread Robert Schetterer
Am 18.04.2017 um 21:32 schrieb Heinrich Boeder:
> Hi Folks,
> 
> I am getting a lot of Brazilian/Portuguese Spam lately and I was
> wondering if it is just me or if you guys noticed an increase in Spam
> from .br TLD Domains, too. The text in those emails is in Portuguese or
> Spanish Language (sorry but I cant really tell)  so my SA doesnt really
> work well because it is trained mostly for German and English language
> mails (Most Spam is filtered by Postscreen but the ones which pass
> postscreen usually pass SA also). Anyone any good ruleset for Mails in
> Portuguese Language?
> 
> Cheers,
> 
> - heinrich
> 
> key: 0xC15DAD56 -- 363D 5BC3 9C45 9D09 3D78  1C28 DB68 F047 C15D AD56
> 

http://www.lafraia.com.br/spambr/

no idea if they are working fine


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: sa-compile will not configure

2017-04-18 Thread Robert Steinmetz

Ian Zimmerman wrote:

On 2017-04-18 10:17, Robert Steinmetz wrote:


 tty is in /usr/bin

But it is stty, not tty, which fails to be found.  And stty is
(normally) in /bin.  So it looks a lot like /bin (and probably /sbin) is
missing from the PATH.

Yes thanks stty is in /bin

This could be related to the long-advertised switch to a unified /usr
tree.  Perhaps Ubuntu went ahead with that switch but some packages
haven't been updated to reflect it?

I'm not familiar with that.

One other thing which springs to mind is the distinction between login,
interactive, and other shells.  Double-check in which shell startup file
you set the PATH.

The user's login shell is /bin/sh.
I often sudo bash if I am doing a lot of admin work, rather than typing
sudo for each command.

The script begins with #!/usr/bin/perl -T -w, which invokes perl.
When I invoke sudo sh I get the same results, although every interactive
shell I have tried includes /bin, /sbin, /usr/bin and /usr/sbin:

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin

Some have a couple more options.


> BTW, plain text (not HTML) would be appreciated.

I wasn't sure of the protocol here but I thought I had. I'll try to
remember.

--
Robert Steinmetz, AIA
Principal
Steinmetz & Associates



Re: sa-compile will not configure

2017-04-18 Thread Robert Steinmetz
RW wrote:

> On Mon, 17 Apr 2017 16:37:35 -0400 Robert Steinmetz wrote:
>
>> I upgraded my working Ubuntu 14.04 LTS to 16.04 LTS, SpamAssassin
>> version 3.4.1. Something happened during the upgrade and I am now
>> unable to get sa-compile to configure properly.
>>
>> Here is the message:
>>
>> root@thelma:~# dpkg --configure sa-compile
>> Setting up sa-compile (3.4.1-3) ...
>> Running sa-compile (may take a long time)
>> Can't exec "rm": No such file or directory at /usr/bin/sa-compile
>> line 374, <$fh> line 1.
>> make: chmod: Command not found
>
> This is likely an Ubuntu/Debian problem. On the face of it, it looks
> like sa-compile is being run without a properly set PATH variable.
>
> Note that you do need to run sa-update after changing versions of
> spamassassin as it will be looking for rules in a 3.4.1-specific
> directory.


Thanks for the response.
It is a problem from a failed upgrade. I posted the problem on the
Ubuntu forum; so far no response.

I agree it looks like the $PATH is not set correctly. Where in
spamassassin or sa-compile would that be set?
I ran the command as superuser. I would expect that sa-compile would
use the user's $PATH, which definitely includes "rm" and "chmod", so
somewhere sa-compile or spamassassin must reset the $PATH or run as
another user with an incorrect $PATH.

I found the entry below in /etc/passwd:
debian-spamd:x:136:144::/var/lib/spamassassin:/bin/sh
I ran sa-update and it ran without error.

I ran sa-compile again and this was the output:
root@thelma:~# sa-update
  root@thelma:~# sa-compile
  Apr 18 09:27:38.942 [8741] info: generic: base extraction
  starting. this can take a while...
  Apr 18 09:27:38.942 [8741] info: generic: extracting from rules of
  type body_0
  Can't exec "stty": No such file or directory at
  /usr/share/perl5/Mail/SpamAssassin/Util/Progress.pm line 158.
  100% [===] 1246.47
  rules/sec 00m00s DONE
  Can't exec "stty": No such file or directory at
  /usr/share/perl5/Mail/SpamAssassin/Util/Progress.pm line 158.
  100% [===] 254.60
  bases/sec 00m09s DONE
  Apr 18 09:27:48.738 [8741] info: body_0: 1146 base strings
  extracted in 10 seconds
  cd /tmp/.spamassassin8741LlVWi0tmp
  reading bases_body_0.in
  Can't exec "rm": No such file or directory at /usr/bin/sa-compile
  line 374, <$fh> line 1.
  cd Mail-SpamAssassin-CompiledRegexps-body_0
  re2c -i -b -o scanner1.c scanner1.re
  re2c -i -b -o scanner2.c scanner2.re
  re2c -i -b -o scanner3.c scanner3.re
  re2c -i -b -o scanner4.c scanner4.re
  re2c -i -b -o scanner5.c scanner5.re
  re2c -i -b -o scanner6.c scanner6.re
  /usr/bin/perl Makefile.PL
  PREFIX=/tmp/.spamassassin8741LlVWi0tmp/ignored
  INSTALLSITEARCH=/var/lib/spamassassin/compiled/5.022/3.004001
  Generating a Unix-style Makefile
  Writing Makefile for Mail::SpamAssassin::CompiledRegexps::body_0
  Writing MYMETA.yml and MYMETA.json
  make
  make: chmod: Command not found
  Makefile:400: recipe for target
  'blib/lib/Mail/SpamAssassin/CompiledRegexps/.exists' failed
  make: *** [blib/lib/Mail/SpamAssassin/CompiledRegexps/.exists]
  Error 127
  command failed: exit 2
  root@thelma:~#

tty is in /usr/bin
rm is in /bin
chmod is in /bin
sa-compile is in /usr/bin

root@thelma:/usr/bin# ls -ld sa-compile
  -rwxr-xr-x 1 root root 22014 Nov 10  2015 sa-compile


Looking at sa-compile, it seems $PATH is set here; this looks to me like
it overwrites the search path:

    if (!$modname) {
      $modname = "Mail::SpamAssassin::CompiledRegexps::$ruleset_name";
    }

    our $PATH = $modname;
    $PATH =~ s/::/-/g;
    $PATH =~ s/[^-_A-Za-z0-9\.]/_/g;

rm seems to be used without an absolute path at line 374 below:

    $force and system("rm -rf $PATH");
I am not a perl expert; I hardly know anything about it. Perhaps someone
can shed some light on this. I could edit sa-compile and add /bin/rm and
/usr/bin/tty, then track down the chmod and add /bin/chmod where it
occurs later. Somehow that seems the wrong way to fix it.
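
Tying this together with the later follow-ups (earlier in this archive): the
$PATH assigned inside sa-compile is just a module directory name derived from
the package name, not the shell search path; the search path perl uses lives
in $ENV{PATH}, which is what the taint check sanitizes. So rather than
hard-coding /bin/rm and friends, a minimal sketch of the likely fix, assuming
the drwxrwxrwx /bin shown in the follow-up listing really is the culprit:

ls -ld /bin /sbin /usr/bin /usr/sbin   # look for anything not drwxr-xr-x
sudo chmod 0755 /bin                   # only if /bin really shows extra write bits
sudo dpkg --configure sa-compile       # re-run the failed postinst step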


-- 
Robert Steinmetz, AIA
Principal
Steinmetz & Associates


sa-compile will not configure

2017-04-17 Thread Robert Steinmetz
I upgraded my working Ubuntu 14.04 LTS to 16.04 LTS, SpamAssassin version
3.4.1. Something happened during the upgrade and I am now unable to get
sa-compile to configure properly.


Here is the message:


root@thelma:~# dpkg --configure sa-compile
Setting up sa-compile (3.4.1-3) ...
Running sa-compile (may take a long time)
Can't exec "rm": No such file or directory at /usr/bin/sa-compile line 
374, <$fh> line 1.

make: chmod: Command not found
make: *** [blib/lib/Mail/SpamAssassin/CompiledRegexps/.exists] Error 127
command 'make >>/tmp/.spamassassin15863LFnIj0tmp/log' failed: exit 2
dpkg: error processing package sa-compile (--configure):
 subprocess installed post-installation script returned error exit 
status 25

Errors were encountered while processing:
 sa-compile
I have found some references to this error message but can't find a
solution.
I have removed and reinstalled SpamAssassin. I have removed, purged, and
reinstalled sa-compile with no result.

I haven't found anything in the logs to tell me anything new.

I'm hoping someone here can give me a clue as to where to start.

--
Rob Steinmetz

Re: Yahoo - Can't figure out a server is down?

2017-03-05 Thread Robert Schetterer
Am 05.03.2017 um 13:09 schrieb Groach:
> Its called "NOLISTING" - but does it work?

Everyone has their own spam; nobody can say what's best at your site.
Analyse your logs and then choose what works best. You may follow best
practice, but greylisting and nolisting are very old practices; there
are better ones now, like postscreen etc. You should avoid using
greylisting and nolisting because of their many design disadvantages.
However, this was discussed extensively before; search the list archive
to catch the pros and cons.

> 
> An experiment was carried out on a small throughput server.  Here is the
> conclusion: https://www.hmailserver.com/forum/viewtopic.php?p=185262#p185262
> 
> (You'll be surprised).
> 
> 
> On 05/03/2017 06:32, Rob Gunther wrote:
>> We have run our servers with a decoy, our MX records have been like
>> this for 10+ years:
>>
>> mx0.example.com <http://mx0.example.com>
>> mx1.example.com <http://mx1.example.com>
>> mx2.example.com <http://mx2.example.com>
>>
>> mx1 & mx2 are real servers.  mx0 is nothing, it points to an IP
>> address that is controlled by us but there is no server.
>>
>> The concept being that some spammers attempt that server, get nothing
>> and don't bother trying any other server.
>>
>> This has been fine for a decade.



Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: Filtering outbound mail

2017-02-16 Thread Robert Schetterer
Am 16.02.2017 um 11:07 schrieb David Jones:
> My mail filters also do a lot of outbound relaying from hundreds
> of customer mail servers.  Compromised accounts happen and I
> have some methods for detecting most of them and block the
> sender at the MTA within a few minutes to prevent my server
> IPs from becoming listed on RBLs.
> 
> Customer mail servers are currently trusted by IPs on our own
> network ranges and have a slight bias toward trust by being in
> the trusted_networks.  This allows for the proper RBL checks
> of the sender IP as long as the customer mail server adds the
> proper X-Originating-IP or Received: header of the client.
> 
> The goal is to be able to block most outbound spam with the
> usual rules, network tests, and Bayesian scores.  However,
> these compromised accounts often contain zero-hour email
> that score low.
> 
> A common factor for most of these emails is sending with a
> high number of recipients often to FREEMAIL recipients.
> 
> Would it make sense for me to setup/manage my own custom
> rules for checking the To: header or could the FreeMail plugin
> be extended to add new rules like FREEMAIL_TO?
> 
> I understand that the To: header is not the same as the
> RCPT TO and the MTA will split emails based on destination.
> In this situation, the sending MTA is smarthosted to my
> relays and these are compromised accounts on legit MTAs
> where headers can be considered reliable.  I do see patterns
> with sorted recipients and multiple FREEMAIL recipients
> that I would like to score on.  Then I have a database with
> this information that I run SQL queries against to determine
> frequency of certain rule hits to find compromised accounts
> and block them quickly.
> 
> Thanks,
> Dave
> 

clamav-milter with Sanesecurity works fine and fast on outbound mail,
but better to get an intelligent milter across the outbound SMTP servers
which is able to identify hacked accounts; e.g. it counts From and To
addresses, and if that deviates from normal traffic, action should be
taken. Such things exist, but not as freeware, and they must certainly
be fitted to your needs.


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein
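
On the FREEMAIL_TO idea from the question above: the stock FreeMail plugin can
evaluate arbitrary headers, so a local rule may already be enough. A sketch
(rule name, score and file path are illustrative; this assumes the FreeMail
plugin is loaded and freemail_domains is populated on your install):

sudo tee -a /etc/mail/spamassassin/local.cf <<'EOF'
header   LOCAL_FREEMAIL_TO  eval:check_freemail_header('To')
describe LOCAL_FREEMAIL_TO  To: header contains a freemail address
score    LOCAL_FREEMAIL_TO  0.5
EOF
spamassassin --lint   # sanity-check the config before reloading spamd

As the question itself notes, this looks at the To: header rather than the
RCPT TO envelope, so it is only a weak signal best combined with other rules
in a meta.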


Re: Useful and simple script to reduce high spam load at mta level, what do you think

2016-10-27 Thread Robert Schetterer
Am 27.10.2016 um 18:32 schrieb John Hardin:
> On Thu, 27 Oct 2016, Christian Grunfeld wrote:
> 
>> fail2ban with custom filter.
> 
> I tarpit them...
> 
> http://www.impsec.org/~jhardin/antispam/spammer-firewall
> 
> 
>> 2016-10-27 10:38 GMT-03:00 Nicola Piazzi <nicola.pia...@gruppocomet.it>:
>>
>>> This script can be used if you have mailscanner in mysql database that
>>> record results of spamassassin activity and postfix as mta
>>>
>>> # postban.sh
>>> # Temporary Ban SpamOnly Ip
>>> # -
>>> #
>>> # This script create a table for postfix that ban IPs that made high
>>> spam
>>> results only
>>> #
>>> # 1) Put this script anywhere and set your parameters
>>> # 2) Put in crontab a line like this to run every 15 minutes :
>>> # 0/15 * * * * /batch/postban.sh
> 

https://sys4.de/de/blog/2015/11/07/abwehr-des-botnets-pushdo-cutwail-ehlo-ylmf-pc-mit-iptables-string-recent-smtp/

https://sys4.de/de/blog/2014/03/27/fighting-smtp-auth-brute-force-attacks/

https://sys4.de/de/blog/2012/12/28/botnets-mit-rsyslog-und-iptables-recent-modul-abwehren/

but these solutions may not fit "your" problem exactly

fail2ban is a good, well-tested solution

so you should always decide, by deep log analysis, which way to go




Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein
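
For reference, a stripped-down sketch of the iptables "recent" approach the
sys4 links above describe (the limits are examples only; as Robert says,
fail2ban remains the better-tested route):

# Record new SMTP connections per source IP, then drop sources that open
# more than 10 new connections within 60 seconds.
iptables -A INPUT -p tcp --dport 25 -m conntrack --ctstate NEW \
    -m recent --set --name SMTP
iptables -A INPUT -p tcp --dport 25 -m conntrack --ctstate NEW \
    -m recent --update --seconds 60 --hitcount 10 --name SMTP -j DROP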


Re: Greymail and marketing junk

2016-09-30 Thread Robert Schetterer
Am 30.09.2016 um 11:35 schrieb Robert Schetterer:
> Am 30.09.2016 um 02:28 schrieb Alex:
>> Hi all,
>>
>> Has anyone given any thought to special rules or methods designed to
>> catch greymail? That is, mail that perhaps may be opt-in, but abusive,
>> like marketing mailing lists or newsletters?
>>
>> This might include mail with List-Unsubscribe headers, but that's not
>> necessarily enough to use to block an email.
>>
>> I've written a handful of rules based on Received headers for mail
>> servers like 'businesswatchnetwork.com' or 'list-manage.net' etc, but
>> there's obviously just too many of them and it's time-consuming.
>>
>> Any ideas for improving this process?
>>
>> Any thoughts on how the typical marketing email should be scored with bayes?
>>
>> Perhaps there's a DNSBL or other RBL out there whose purpose is to
>> identify marketing domains?
>>
>> Is anyone interested in sharing resources to start such a thing?
>>
> 
> from tec side there is not really a difference
> between marketing and other mails, dealing with their domains
> might never end , but you can always use your own a reject list on smtp
> level cause why should you do expensive content filter to known not
> wanted domains.
> 
> At the end ,at a server with many different users you will see that some

sorry typo

> marketing is really wanted by some users, but others like "not" to see it


> so best way for this are users own white/blacklists after you filtered
> the most bad things global.
> 
> 
> Best Regards
> MfG Robert Schetterer
> 



Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: Greymail and marketing junk

2016-09-30 Thread Robert Schetterer
Am 30.09.2016 um 02:28 schrieb Alex:
> Hi all,
> 
> Has anyone given any thought to special rules or methods designed to
> catch greymail? That is, mail that perhaps may be opt-in, but abusive,
> like marketing mailing lists or newsletters?
> 
> This might include mail with List-Unsubscribe headers, but that's not
> necessarily enough to use to block an email.
> 
> I've written a handful of rules based on Received headers for mail
> servers like 'businesswatchnetwork.com' or 'list-manage.net' etc, but
> there's obviously just too many of them and it's time-consuming.
> 
> Any ideas for improving this process?
> 
> Any thoughts on how the typical marketing email should be scored with bayes?
> 
> Perhaps there's a DNSBL or other RBL out there whose purpose is to
> identify marketing domains?
> 
> Is anyone interested in sharing resources to start such a thing?
> 

From the technical side there is not really a difference between
marketing and other mails. Dealing with their domains might never end,
but you can always use your own reject list at SMTP level, because why
should you run expensive content filtering for domains you already know
you don't want?

In the end, on a server with many different users you will see that some
marketing is really wanted by some users, but others would rather not
see it, so the best way to handle this is per-user white/blacklists
after you have filtered the worst things globally.


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein
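
A concrete sketch of the "reject known marketing senders at SMTP level" part,
using Postfix access maps (the two domains are the examples Alex gave; the
file name and restriction placement are illustrative, and this matches the
envelope sender domain rather than Received headers):

sudo tee -a /etc/postfix/sender_access <<'EOF'
list-manage.net            REJECT unsolicited marketing
businesswatchnetwork.com   REJECT unsolicited marketing
EOF
sudo postmap /etc/postfix/sender_access
sudo postconf -e 'smtpd_sender_restrictions = check_sender_access hash:/etc/postfix/sender_access'
sudo postfix reload

Per the reply above, such a global reject list only makes sense for domains
nobody on the server wants; anything some users may have opted into belongs
in per-user white/blacklists instead.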


Re: backport 3.4.0 Ubuntu 12.04 TLS

2016-09-16 Thread Robert Schetterer
Am 16.09.2016 um 13:41 schrieb Marcus Schopen:
> Hi Robert,
> 
> Am Freitag, den 16.09.2016, 13:02 +0200 schrieb Robert Schetterer:
>> Am 16.09.2016 um 12:48 schrieb Marcus Schopen:
>>> Hi Patrick,
>>>
>>> Am Donnerstag, den 15.09.2016, 22:02 -0400 schrieb Patrick Domack:
>>>> Sounds like a lot of work for an old spamassassin version.
>>>>
>>>> https://launchpad.net/%7Epatrickdk/+archive/ubuntu/production/+sourcepub/5219815/+listing-archive-extra
>>>
>>> H ... do you think better backporting 3.4.1 from Xenial? Does it run
>>> on Ubuntu 12.04 LTS and 14.04 LTS?
>>>
>>> Ciao!
>>>
>>>
>>
>> tested and running with recompile debian way 3.4.1 from wily 15.04 does
>> run in 14.04
> 
> I just backported version 3.4.1-3 from Xenial, seems to be fine. 
> 
> Did you change any code or did you just backported it with
> dpkg-buildpackage?
> 
> Ciao
> Marcus
> 
> 

Time has passed since then, but as I remember I made no changes to the
deb source.

Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein
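
For anyone wanting to repeat the "recompile the Debian way" backport mentioned
here, the rough sequence is something like the following (a sketch; the
release and version strings are examples, and you need a deb-src line for the
source release plus the usual build tooling installed):

apt-get source spamassassin=3.4.1-3      # fetch the Xenial source package
sudo apt-get build-dep spamassassin      # pull in the build dependencies
cd spamassassin-3.4.1
dpkg-buildpackage -us -uc -b             # build unsigned binary packages
sudo dpkg -i ../spamassassin_3.4.1-*.deb ../spamc_3.4.1-*.deb ../sa-compile_3.4.1-*.deb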


Re: backport 3.4.0 Ubuntu 12.04 TLS

2016-09-16 Thread Robert Schetterer
Am 16.09.2016 um 12:48 schrieb Marcus Schopen:
> Hi Patrick,
> 
> Am Donnerstag, den 15.09.2016, 22:02 -0400 schrieb Patrick Domack:
>> Sounds like a lot of work for an old spamassassin version.
>>
>> https://launchpad.net/%7Epatrickdk/+archive/ubuntu/production/+sourcepub/5219815/+listing-archive-extra
> 
> H ... do you think better backporting 3.4.1 from Xenial? Does it run
> on Ubuntu 12.04 LTS and 14.04 LTS?
> 
> Ciao!
> 
> 

Tested and running: recompiled the Debian way, 3.4.1 from wily 15.04 does
run on 14.04.


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


block attachments via plugin

2016-09-02 Thread Robert Boyl
Hi, guys

Recently I saw this.

http://jrs-s.net/2013/06/14/block-common-trojans-in-spamassassin/

My idea was to create a rule in the way mentioned on this site: for
example, if a message has a certain attachment file type (such as HTML
or ZIP) and a certain subject, score the message.

The rule works. But I found that it causes false positives for emails that
have HTML in the body and not necessarily attached (internally, I guess it's
the same, right?).

Example

--_000_2C3280CB5B1A584F8E4B3E0E263D843251617ACAMBXTB921Cvcarem_
Content-Type: text/html; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable



Re: Possible ignore CRLF?

2016-08-26 Thread Robert Boyl
Hi,

Thanks for reply. Hehe, sorry :))

Rule

describe BRF_TEST123  test
body     BRF_TEST123  \bSe você não deseja mais receber nossos e-mails,
cancele\b/i
score    BRF_TEST123  0.1

See here the message that qmail can't catch due to a CRLF in the middle of
the text (right after the word "se"), but IceWarp can catch even with the CRLF.

If I remove the CR LF my qmail catches it (SA).

http://pastebin.com/gyeDcA3H

Thanks
Rob



2016-08-26 10:50 GMT-03:00 Axb <axb.li...@gmail.com>:

> On 08/26/2016 03:46 PM, Robert Boyl wrote:
>
>> Hi, everyone!
>>
>> Just curious if anyone has had this issue before.
>>
>> We have a customer SA rule that catches certain text "se voce nao deseja
>> mais receber..."
>>
>> We have an icewarp mail server where our rule hits just fine, DESPITE a
>> CRLF after word "SE".
>>
>> See imagem showing that CRLF http://screenpresso.com/=e406e
>>
>> But our qmail with SA does not hit the rule due to the CRLF.
>>
>> I removed CRLF, refed the message as such http://screenpresso.com/=6Zqke
>>
>> Then I got the hit...
>>
>> So question is, is there a way to make SA ignore CRLF?
>>
>> Thanks!
>> Rob
>>
>>
> And where is the rule you created?
>
> can you pastebin the sample message?
> Tests on a screenshot don't work .-)
>
> Guys - screenshots are for grannies
> Use copy/paste & pastebin!!!
>


Possible ignore CRLF?

2016-08-26 Thread Robert Boyl
Hi, everyone!

Just curious if anyone has had this issue before.

We have a customer SA rule that catches certain text "se voce nao deseja
mais receber..."

We have an icewarp mail server where our rule hits just fine, DESPITE a
CRLF after word "SE".

See image showing that CRLF: http://screenpresso.com/=e406e

But our qmail with SA does not hit the rule due to the CRLF.

I removed the CRLF and re-fed the message, as shown here: http://screenpresso.com/=6Zqke

Then I got the hit...

So question is, is there a way to make SA ignore CRLF?

Thanks!
Rob


Re: Is greylisting effective? (was Re: Using Postfix and Postgrey - not scanning after hold)

2016-08-04 Thread Robert Schetterer
Am 04.08.2016 um 22:30 schrieb Chris:
> Greylisting is just one of several tools available to a system
> administrator for filtering out spam

As described multiple times, it does not.


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


detect if html attachment without plugin

2016-08-04 Thread Robert Boyl
Hi, everyone

Quick question. We have a SpamAssassin installation where the mail server's
implementation doesn't permit any SA plugins, so I can't use
Plugin::MIMEHeader or the like.

To be able to detect that an email has an HTML attachment, such as this
message: http://pastebin.com/raw/TieFEiZi

I tried this, but it didn't work.

describe TEST_HTML
rawbody TEST_HTML  /bContent-Type: text\/html\b/i
score TEST_HTML 0.1

Any ideas how to achieve this via a rule that scans the body (or headers)? I tried both.

Thanks.
Rob


scan an HTML file, possible?

2016-08-03 Thread Robert Boyl
Hi, everyone

I have a very nice regex a friend passed me that catches those emails that
have an HTML file attached with an HTML redirect to some malicious website.

He has some tool in Exim that scans text in attachments. But I wanted to
use a spamassassin rule.

Is there some plugin or way in SpamAssassin to scan the text of an HTML attachment?

Thanks!
Rob


Re: Is greylisting effective? (was Re: Using Postfix and Postgrey - not scanning after hold)

2016-08-02 Thread Robert Schetterer
Am 02.08.2016 um 20:04 schrieb Reindl Harald:
> 
> 
> Am 02.08.2016 um 20:00 schrieb John Hardin:
>> On Tue, 2 Aug 2016, Bill Cole wrote:
>>
>>> What's special about the postscreen delay is:
>>>
>>> 1. It delays only the last line of a multi-line greeting, so it
>>> catches MANY more bots than a simple delay.
>>>
>>> 2. It caches PASS results so even the very short (6s by default) delay
>>> that it imposes only hits the first encounter with a client that
>>> connects frequently. This is critically important in high-volume
>>> situations where the difference between mean session lengths of 0.5s
>>> and 6s is the difference between 2 and 12 MX boxes in a cluster.
>>>
>>> Combined, this is why Sendmail and other MTA greeting delays are less
>>> spectacularly effective than they used to be and less effective than
>>> postscreen. The resource cost of prolonging every session to 6s is
>>> untenable for busy machines, so bots that have adapted can get
>>> through. Back in the early days of Sendmail's GreetPause a value of 3s
>>> would catch most bots but over the years some bots have adapted by
>>> doing their own hard delays and others have learned to wait for
>>> anything from the server. Few (if any) have adapted by actually
>>> parsing the greeting and making sure that they've seen the end of a
>>> multi-line greeting before talking.
>>
>> That all sounds great.
>>
>> Is there any way to use postscreen as a frontend filter for a sendmail
>> MTA?
> 
> no - postscreen is not a smtp proxy
> 
> in fact the connection is handed over from postcreen to the smtpd
> process after a client has passed the tests
> 

You may use a complete Postfix server including postscreen etc. "before"
sendmail, but then it might be better to simply change to Postfix
entirely; such setups are often used with MS Exchange.


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein
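
Since postscreen keeps coming up in these threads as the replacement for
plain greylisting, a minimal sketch of enabling it (parameter values are
illustrative, not a recommendation from the thread; master.cf also needs the
postscreen/smtpd "pass" service entries from Postfix's POSTSCREEN_README,
which are not shown here):

sudo postconf -e \
    'postscreen_greet_action = enforce' \
    'postscreen_dnsbl_sites = zen.spamhaus.org*2' \
    'postscreen_dnsbl_threshold = 2' \
    'postscreen_dnsbl_action = enforce'
sudo postfix reload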


eval:check_uridnsbl to check subdomains

2016-08-02 Thread Robert Boyl
Hi, everyone

We are trying to query subdomains of a DNSBL in body of message, but
learned that the default plugin we use, used by URIBL, caps off subdomains.

This is the rule we based ourselves on... it works fine, except for
subdomains... it considers the domain part...

urirhssub   URIBL_GREY  multi.uribl.com.A   2
bodyURIBL_GREY  eval:check_uridnsbl('URIBL_GREY')
describeURIBL_GREY  Contains an URL listed in the URIBL greylist
tflags  URIBL_GREY  net
score   URIBL_GREY  0.25

Explained here

http://www.gossamer-threads.com/lists/spamassassin/users/194077

How can I make it work with subdomains also?

Perhaps adapt the plugin? Or use some other plugin that is able to check
subdomains and doesn't cap them off?

Thanks a lot,
Robert


Re: Is greylisting effective? (was Re: Using Postfix and Postgrey - not scanning after hold)

2016-07-31 Thread Robert Schetterer
Am 30.07.2016 um 13:10 schrieb Kim Roar Foldøy Hauge:
> On Sat, 30 Jul 2016, Robert Schetterer wrote:
> 
>> Am 30.07.2016 um 03:34 schrieb Reindl Harald:
>>>
>>>
>>> Am 29.07.2016 um 22:48 schrieb Dianne Skoll:
>>>> On Fri, 29 Jul 2016 22:39:15 +0200
>>>> Robert Schetterer <r...@sys4.de> wrote:
>>>>
>>>>>> I don't use postfix or postscreen.
>>>>> hm.. that does not fit the subject..why did you involved yourself ?
>>>>
>>>> I am sorry.  I should have changed the thread subject.
>>>>
>>>>> you may get that quite better, i see
>>>>> a lot of server greylisting useless ,only filling up others queues
>>>>> waiting for a second slot ,so it may only cheap for you but not for
>>>>> your partners
>>>>> Dont slow down communication if you dont need to
>>>>
>>>> So what I didn't mention is that in our implementation, once an IP
>>>> address successully passes greylisting, we no longer greylist it for
>>>> the next 45 days.  (It would probably be pointless... if an IP passes
>>>> greylisting once, it probably will keep passing it.)
>>>
>>> that's nothing special and postgrey does the same, the whole point of
>>> greylisting is that badly written bots don't try again (the same happens
>>> if they connect to a backup-MX responding with 4xx)
>>>
>>> also it don't help for clients which *do not* pass like large senders
>>> with outbound clusters coming each time from a different IP
>>>
>>> hence you skip greylisting based on DNSWL and spf-policyd because that
>>> big legit senders hit DNSWL or have a proper SPF while random bots of
>>> infected machines don't and this ones are your target for greylisting
>>>
>>>
>>>
>>
>> Harald is right, the goal has to be "reject" spam asap, not to tell
>> "come again later", i.e i had 4 bot cons per second, this will run out
>> the system of smtp slots rapidly which means any good sender isnt able
>> to sent mail too, greylisting makes such situations more worst.
>>
> 
> I'm no expert here, but postgrey is usually a purely local test. It
> should terminate with a "currently busy, try again later" message very
> quickly. SPF checks and white listing require dns lookups that can
> potentially take much longer. Several orders of magnitude longer.
> 
> Efficient handling of spam is all about doing the least expensive tests
> first in terms of cpu/time. Caching DNS can probably help a bit, but it
> will still require the occasional lookup now and then that take a lot
> longer than a good greylisting implementation should ever do.
> 
> Doing an expensive test on every mail when it's not needed is badly
> designed setup.
> 
> Many of the dns based lists also limit the amount of checks per day.
> Worst case scenario, you stop getting results from lists due to over
> use. If you use google's 8.8.8.8 servers for dns lookups one can quickly
> run into that problem, I did. A high volume of dns checks could force
> you into having to pay for the amount of traffic you cause.
> 
> Many expensive network (takes a long time) checks will probably make you
> run out of slots a lot faster than the reconnects due to greylisting
> will do due to the time spent waiting for the lookups to finish.
> 
> If speed of delivery is important, you could lower the amount of time
> mail stays greylisted. Ideally you'd like the mail delivered the first
> time a server tries to send it again. If a server tries to resend once,
> it will most likely try more than once anyway. Having a minimum time of
> 300 seconds, the default of postgrey, is probably a bit excessive.
> 

Greylisting was invented as an idea against bots. It's based on the idea
that bots "fire and forget" when they see a temporary error and don't
come back.

This idea has been criticized for design flaws since it came into
existence (Harald and I explained it in detail), but it was acceptable
in the absence of better ideas at the time.

But that's historic; bots have been recoded and better anti-bot
techniques have been invented. The only problem now is that people still
believe in the historic stuff.


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: Is greylisting effective? (was Re: Using Postfix and Postgrey - not scanning after hold)

2016-07-30 Thread Robert Schetterer
Am 30.07.2016 um 03:34 schrieb Reindl Harald:
> 
> 
> Am 29.07.2016 um 22:48 schrieb Dianne Skoll:
>> On Fri, 29 Jul 2016 22:39:15 +0200
>> Robert Schetterer <r...@sys4.de> wrote:
>>
>>>> I don't use postfix or postscreen.
>>> hm.. that does not fit the subject..why did you involved yourself ?
>>
>> I am sorry.  I should have changed the thread subject.
>>
>>> you may get that quite better, i see
>>> a lot of server greylisting useless ,only filling up others queues
>>> waiting for a second slot ,so it may only cheap for you but not for
>>> your partners
>>> Dont slow down communication if you dont need to
>>
>> So what I didn't mention is that in our implementation, once an IP
>> address successully passes greylisting, we no longer greylist it for
>> the next 45 days.  (It would probably be pointless... if an IP passes
>> greylisting once, it probably will keep passing it.)
> 
> that's nothing special and postgrey does the same, the whole point of
> greylisting is that badly written bots don't try again (the same happens
> if they connect to a backup-MX responding with 4xx)
> 
> also it don't help for clients which *do not* pass like large senders
> with outbound clusters coming each time from a different IP
> 
> hence you skip greylisting based on DNSWL and spf-policyd because that
> big legit senders hit DNSWL or have a proper SPF while random bots of
> infected machines don't and this ones are your target for greylisting
> 
> 
> 

Harald is right, the goal has to be to "reject" spam asap, not to say
"come again later". E.g. I had 4 bot connections per second; this will
rapidly run the system out of SMTP slots, which means no good sender is
able to send mail either. Greylisting makes such situations even worse.





Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: Using Postfix and Postgrey - not scanning after hold

2016-07-29 Thread Robert Schetterer
Am 29.07.2016 um 22:22 schrieb Dianne Skoll:
> On Fri, 29 Jul 2016 22:21:04 +0200
> Robert Schetterer <r...@sys4.de> wrote:
> 
>> now compare with pure postscreen
> 
> I don't use postfix or postscreen.  

hm.. that does not fit the subject..why did you involved yourself ?

> All I'm showing is that greylisting
> stops a lot of mail, quite cheaply.

Hopefully not *g*, I think you mean spam mails...

You may do quite a bit better than that. I see a lot of servers
greylisting uselessly, only filling up others' queues while waiting for
a second slot, so it may only be cheap for you but not for your
partners. Don't slow down communication if you don't need to.

> And hardly anyone notices it.
> 
> This is a production system filtering email for hundreds of thousands
> of people. :)  It's not something I'm going to run experiments on
> by turning off greylisting just to see what happens.

It's up to you to use more up-to-date technology; rest assured postscreen
is used in setups comparable to yours and is well tested.

> 
> Regards,
> 
> Dianne.
> 



Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: Using Postfix and Postgrey - not scanning after hold

2016-07-29 Thread Robert Schetterer
Am 29.07.2016 um 21:35 schrieb Ryan Coleman:
> Apparently you missed the rest of the thread as it was bypassing the
> scanning the SA would do.
> 
> But you’re jumping in 11 days (and 42 messages) after the thread started.

Hopefully it will now come to an end; it was not very informative.

> 
> 
>> On Jul 29, 2016, at 1:28 PM, Robert Schetterer <r...@sys4.de
>> <mailto:r...@sys4.de>> wrote:
>>
>> the subject Using Postfix and Postgrey - not scanning after hold
>> does not match spamassassin list theme
>>
>> however no need to flame in any case
> 



Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: Using Postfix and Postgrey - not scanning after hold

2016-07-29 Thread Robert Schetterer
Am 29.07.2016 um 22:15 schrieb Dianne Skoll:
> On Fri, 29 Jul 2016 21:13:56 +0200
> Robert Schetterer <r...@sys4.de> wrote:
> 
>> so i.e measure mails tagged as spam by spamassassin
>> with pure greylisting setup running before tagging ,perhaps for one
>> week, then stop greylisting ,do the same with pure postscreen setup,
>> compare results, this way you may given direction if you still need
>> greylisting.
> 
> See the attached reports.  The first one shows that the vast majority of
> greylisted messages are not retried.
> 
> The second shows that greylisting stops a pretty high percentage of
> messages compared to spam detection (the "Spam" and "Quarantined"
> categories) and is therefore an effective technique that can greatly
> reduce the processing burden.
> 
> The Y-axis indicates messages per hour.
> 
> Regards,
> 
> Dianne.
> 

Half done; now compare with pure postscreen.


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: Using Postfix and Postgrey - not scanning after hold

2016-07-29 Thread Robert Schetterer
Am 29.07.2016 um 20:45 schrieb Dianne Skoll:
> On Fri, 29 Jul 2016 20:36:51 +0200
> Robert Schetterer <r...@sys4.de> wrote:
> 
>> Am 29.07.2016 um 20:07 schrieb Dianne Skoll:
>>> I don't agree.  Greylisting done properly is very effective and has
>>> minimal impact.  We have it on by default on our spam-filtering
>>> service and very few people have even noticed it.
> 
>> show evidence, dont speculate ,measure
> 
> What evidence do you want?  Signed affidavits from our customers that they
> haven't noticed greylisting?  I'm not sure what measurements or evidence
> you seek.

Hopefully you have a permanent log analyser, as mail admins should have
in any case; however, you can also use grep etc. on the logfiles.

So, e.g., measure mails tagged as spam by SpamAssassin with the pure
greylisting setup running before tagging, perhaps for one week, then
stop greylisting, do the same with a pure postscreen setup, and compare
the results. This way you may get an indication of whether you still
need greylisting.

Happy customers are good for business, but they are not a counter you
should use and trust when posting recommendations about the
effectiveness of a technical procedure on a technical list.



> 
> Regards,
> 
> Dianne.
> 



Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein
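
A rough way to get the before/after numbers Robert is asking for (a sketch;
log locations and log strings vary with the distro, the greylisting policy
service and how spamd logs, so adjust the patterns to your setup):

grep -c 'action=greylist' /var/log/mail.log         # postgrey deferrals
grep -c 'spamd: identified spam' /var/log/mail.log  # messages SA tagged as spam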


Re: Using Postfix and Postgrey - not scanning after hold

2016-07-29 Thread Robert Schetterer
Am 29.07.2016 um 20:07 schrieb Dianne Skoll:
> I don't agree.  Greylisting done properly is very effective and has
> minimal impact.  We have it on by default on our spam-filtering
> service and very few people have even noticed it.

Show evidence; don't speculate, measure. I've done it over the years: if
you use postscreen you will see the greylisting rate go down to a
minimal need. However, if done right, you can still always combine both.
Can we now return to SpamAssassin here and take that theme to the
postfix list?

Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: Using Postfix and Postgrey - not scanning after hold

2016-07-29 Thread Robert Schetterer
Am 29.07.2016 um 20:06 schrieb John Hardin:
> On Fri, 29 Jul 2016, Reindl Harald wrote:
> 
>>
>>
>> Am 29.07.2016 um 18:15 schrieb John Hardin:
>>>  On Fri, 29 Jul 2016, Reindl Harald wrote:
>>>
>>> >  Am 29.07.2016 um 03:30 schrieb Ryan Coleman:
>>> > > >   On Jul 28, 2016, at 2:49 PM, Reindl Harald
>>> > >  <h.rei...@thelounge.net> >  wrote:
>>> > > > >   Am 28.07.2016 um 21:36 schrieb Ryan Coleman:
>>> > > > >   I have eliminated postgrey from the installation and things
>>> are
>>> > > back > >   to “normal”
>>> > > > >   in other words you burried a problem by remove something
>>> instead
>>> > >  fix the >  reason while on every sane setup greylisting comes long
>>> > >  before any >  content scanner
>>> > > > >   No, asshole. I fixed it by removing postgrey from the
>>> equation.
>>> > >  asshole?
>>> >  just look in your mirror!
>>>
>>>  *SIGH*
>>>
>>>  Harald, please try to be more polite, and cut your fuse longer
>>
>> seriously - i find it interesting that you tell that me instead the
>> creature which starts calling others names
> 
> I was considering the entire exchange, not just your final response.
> Your comment about removing postgrey was abusive. The conversation past
> that point predictably deteriorated.
> 
> You *don't have* to respond to name-calling. Ryan bears blame for
> engaging in name-calling, but you generate a lot more traffic (and heat)
> here than he does.
> 

The subject "Using Postfix and Postgrey - not scanning after hold" does
not match the SpamAssassin list theme.

However, there is no need to flame in any case.

If done right, postgrey can be used with any Postfix setup; it's best to
do it selectively, with dynamic IPs, after postscreen and RBLs. Also use
several milters: opendmarc, opendkim, clamav-milter, spamass-milter, or
an amavis framework.

Note: in many EU countries it is forbidden to silently discard spam, so
milters as pre-queue filters mostly match this requirement perfectly, so
it's wise to use them. There are also many milter frameworks that have
greylisting already included.

As many bots have a complete SMTP engine these days, greylisting has
widely lost its effectiveness, so depending on your server it may make
no sense to use it anymore.

Content filters like SpamAssassin are "expensive"; always try to put
them at the very last stage in your filter chain.

Best-practice configs can be found via Google; the rest is looking at
your logs and fixing things in your filter chain when needed.


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: Fwd: too many missed spams/false negatives w/ SA 3.4.1 on sendmail, help w config?

2016-07-23 Thread Robert Kudyba
>
> :0:
> * ? formail -x"From:" -x"From" -x"Sender:" | egrep -is -f $HOME/.whitelist
> $ORGMAIL
>

>>>I assume you checked his explicit whitelisted senders file

Indeed only 2 addresses:

redac...@comcast.net

redac...@pegasus.rutgers.edu

>>>

> :0fw:
> | /usr/bin/spamc
>

...

:0fw: spamassassin.lock
> * < 256000
> | spamassassin
>

You pass it through spamc, and if spamc doesn't score it as spam you then
pass it through spamassassin?

Why the duplication?>>>

This is what I walked into a month ago and why I'm posting here. I'm
looking for advice on best practice here to get it right. Also, doesn't the
user's .procmailrc take precedence and skip the other configuration files?




> :0
> * ^^rom[ ]
> {
>  LOG="*** Dropped F off From_ header! Fixing up. "
>  :0 fhw
>  | sed -e '1s/^/F/'
> }
>

This should probably be before you attempt delivery to CaughtSpam,
otherwise you might be corrupting that folder.

Thanks, I moved it just above the Caughtspam rule.

>>>To echo Reindl, it doesn't look like that message was scanned by SA at
all.>>>

So what else can I check?
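
One way to narrow this down (a sketch; missed.eml stands for a saved copy of
one of the false negatives): feed the message to SpamAssassin by hand, outside
procmail, and compare with what was delivered.

spamc -R < missed.eml                    # score/threshold plus the rules that hit
spamassassin -t < missed.eml | tail -40  # same check via the standalone script

If the hand run tags the message but the delivered copy has no X-Spam headers
at all, the problem is in the procmail chain (for example the .whitelist
recipe delivering to $ORGMAIL before spamc ever runs), not in SA's scoring.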


Re: too many missed spams/false negatives w/ SA 3.4.1 on sendmail, help w config?

2016-07-23 Thread Robert Kudyba
Forgot to include the hook to procmailrc:

cat /etc/procmailrc

DROPPRIVS=yes

PATH=/bin:/usr/bin:/usr/local/bin

SHELL=/bin/sh


# Spamassassin

INCLUDERC=/etc/mail/spamassassin/spamassassin-spamc.rc

:0fw

* <300 000

|/usr/bin/spamassassin

[root@dsm ~]# cat /etc/mail/spamassassin/spamassassin-spamc.rc

# send mail through spamassassin

:0fw

| /usr/bin/spamc


On Sat, Jul 23, 2016 at 9:31 PM, Robert Kudyba <rkud...@fordham.edu> wrote:

> Sorry forgot to reply all.
>
> Sendmail has a .mc file which creates a .cf file here's ours:
>
> include(`/usr/share/sendmail-cf/m4/cf.m4')dnl
>
> VERSIONID(`setup for linux')dnl
>
> OSTYPE(`linux')dnl
>
> dnl #
>
> dnl # Do not advertize sendmail version.
>
> dnl #
>
> dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl
>
> dnl #
>
> dnl # default logging level is 9, you might want to set it higher to
>
> dnl # debug the configuration
>
> dnl #
>
> dnl define(`confLOG_LEVEL', `9')dnl
>
> dnl #
>
> dnl # Uncomment and edit the following line if your outgoing mail needs to
>
> dnl # be sent out through an external mail server:
>
> dnl #
>
> dnl define(`SMART_HOST', `smtp.your.provider')dnl
>
> dnl #
>
> define(`confDEF_USER_ID', ``8:12'')dnl
>
> dnl define(`confAUTO_REBUILD')dnl
>
> define(`confTO_CONNECT', `1m')dnl
>
> define(`confTRY_NULL_MX_LIST', `True')dnl
>
> define(`confDONT_PROBE_INTERFACES', `True')dnl
>
> define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl
>
> define(`ALIAS_FILE', `/etc/aliases')dnl
>
> define(`STATUS_FILE', `/var/log/mail/statistics')dnl
>
> define(`UUCP_MAILER_MAX', `200')dnl
>
> define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl
>
> define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
>
> define(`confAUTH_OPTIONS', `A')dnl
>
> dnl #
>
> dnl # The following allows relaying if the user authenticates, and
> disallows
>
> dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links
>
> dnl #
>
> dnl define(`confAUTH_OPTIONS', `A p')dnl
>
> dnl #
>
> dnl # PLAIN is the preferred plaintext authentication method and used by
>
> dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do
>
> dnl # use LOGIN. Other mechanisms should be used if the connection is not
>
> dnl # guaranteed secure.
>
> dnl # Please remember that saslauthd needs to be running for AUTH.
>
> dnl #
>
> dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
>
> dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5
> LOGIN PLAIN')dnl
>
> dnl #
>
> dnl # Rudimentary information on creating certificates for sendmail TLS:
>
> dnl # cd /etc/pki/tls/certs; make sendmail.pem
>
> dnl # Complete usage:
>
> dnl # make -C /etc/pki/tls/certs usage
>
> dnl #
>
> dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl
>
> dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl
>
> dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl
>
> dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl
>
> dnl #
>
> dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's
>
> dnl # slapd, which requires the file to be readble by group ldap
>
> dnl #
>
> dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl
>
> dnl #
>
> dnl define(`confTO_QUEUEWARN', `4h')dnl
>
> dnl define(`confTO_QUEUERETURN', `5d')dnl
>
> dnl define(`confQUEUE_LA', `12')dnl
>
> dnl define(`confREFUSE_LA', `18')dnl
>
> define(`confTO_IDENT', `0')dnl
>
> dnl FEATURE(delay_checks)dnl
>
> FEATURE(`no_default_msa', `dnl')dnl
>
> FEATURE(`smrsh', `/usr/sbin/smrsh')dnl
>
> FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl
>
> FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
>
> FEATURE(redirect)dnl
>
> FEATURE(always_add_domain)dnl
>
> FEATURE(use_cw_file)dnl
>
> FEATURE(use_ct_file)dnl
>
> FEATURE(`dnsbl',`relays.ordb.org', `"550 5.7.1 Access denied(O):
> Unsolicited e-mail from " $&{client_addr} " refused. "',`t')dnl
>
> dnl #FEATURE(`dnsbl',`dnsbl.sorbs.net',`"554 Rejected " $&{client_addr} "
> found in dnsbl.sorbs.net"', `t')dnl
>
> FEATURE(`dnsbl', `b.barracudacentral.org', `', `"550 Mail from "
> $&{client_addr} " refused. Rejected for bad WHOIS info on IP of your SMTP
> server " in http://www.barracudacentral.org/lookups "')dnl
>
> FEATURE(`dnsbl',`zen.spamhaus.org')dnl
>
> FEATURE(`dnsbl',`l2.apews.org')
>
> FEATURE(`dnsbl',`bl.spamcop.net')
>
> FEATURE(`dnsbl', 

Re: too many missed spams/false negatives w/ SA 3.4.1 on sendmail, help w config?

2016-07-23 Thread Robert Kudyba
dnl #

FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl

FEATURE(`access_db', `hash -T -o /etc/mail/access.db')dnl

FEATURE(`blacklist_recipients')dnl

EXPOSED_USER(`root')dnl

dnl #

dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery
uncomment

dnl # the following 2 definitions and activate below in the MAILER section
the

dnl # cyrusv2 mailer.

dnl #

dnl define(`confLOCAL_MAILER', `cyrusv2')dnl

dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl

dnl #

dnl # The following causes sendmail to only listen on the IPv4 loopback
address

dnl # 127.0.0.1 and not on any other network devices. Remove the loopback

dnl # address restriction to accept email from the internet or intranet.

dnl #

dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl

dnl #

dnl # The following causes sendmail to additionally listen to port 587 for

dnl # mail from MUAs that authenticate. Roaming users who can't reach their

dnl # preferred sendmail daemon due to port 25 being blocked or redirected
find

dnl # this useful.

dnl #

dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl

dnl #

dnl # The following causes sendmail to additionally listen to port 465, but

dnl # starting immediately in TLS mode upon connecting. Port 25 or 587
followed

dnl # by STARTTLS is preferred, but roaming clients using Outlook Express
can't

dnl # do STARTTLS on ports other than 25. Mozilla Mail can ONLY use STARTTLS

dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps

dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1.

dnl #

dnl # For this to work your OpenSSL certificates must be configured.

dnl #

dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl

dnl #

dnl # The following causes sendmail to additionally listen on the IPv6
loopback

dnl # device. Remove the loopback address restriction listen to the network.

dnl #

dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl

dnl #

dnl # enable both ipv6 and ipv4 in sendmail:

dnl #

dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6')

dnl #

dnl # We strongly recommend not accepting unresolvable domains if you want
to

dnl # protect yourself from spam. However, the laptop and users on computers

dnl # that do not have 24x7 DNS do need this.

dnl #

dnl FEATURE(`accept_unresolvable_domains')dnl

dnl #

dnl FEATURE(`relay_based_on_MX')dnl

dnl #

dnl # Also accept email sent to "localhost.localdomain" as local email.

dnl #

dnl LOCAL_DOMAIN(`localhost.localdomain')dnl

dnl #

dnl # The following example makes mail from this host and any additional

dnl # specified domains appear to be sent from mydomain.com

dnl #

MASQUERADE_AS(`our domain')dnl

dnl #

dnl # masquerade not just the headers, but the envelope as well

dnl #

FEATURE(masquerade_envelope)dnl

dnl #

dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as
well

dnl #

dnl FEATURE(masquerade_entire_domain)dnl

dnl #

dnl MASQUERADE_DOMAIN(localhost)dnl

dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl

dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl

dnl MASQUERADE_DOMAIN(mydomain.lan)dnl


# SMTP greet delay may deter spam, as per

#   https://wiki.apache.org/spamassassin/OtherTricks

# agw 22 June 2014 (H0.5BDAGW)

FEATURE(`greet_pause', `1')


MAILER(smtp)dnl

MAILER(procmail)dnl

dnl MAILER(cyrusv2)dnl


LOCAL_RULE_3

# custom S3 begin ... courtesy of Andrzej Filip <a...@bigfoot.com>

R$-/FACULTY/FIRE $@ $>3 $1@ ourdomain

R$-/GUEST/FIRE $@ $>3 $1@ ourdomain

R$-/STAFF/FIRE$@ $>3 $1@ ourdomain

R$-/STUDENTS/FIRE $@ $>3 $1@ourdomain

# custom S3 end


On Sat, Jul 23, 2016 at 8:55 PM, Reindl Harald <h.rei...@thelounge.net>
wrote:

> STAY ON LIST
>
> Am 24.07.2016 um 02:50 schrieb Robert Kudyba:
>
>> OK then the next question is why would some messages not be getting
>> scanned whilst others are? What else can I check? Could another config
>> file be bypassing? There's nothing in the whitelist unless I'm not
>> checking all the possible paths to whitelists?
>>
>
> i don't see how spamassassin is supposed to be called in your setup at
> all, in my setups with spamass-milter (postfix) talking to spamd it's
> impossible to skip it at all
>
> On Sat, Jul 23, 2016 at 8:44 PM, Reindl Harald <h.rei...@thelounge.net
>> <mailto:h.rei...@thelounge.net>> wrote:
>>
>>
>> Am 24.07.2016 um 02:14 schrieb Robert Kudyba:
>>
>> sample header of a missed spam/false negative:
>>
>> http://txt.do/5em14
>>
>>
>> there are no spamassassin headers - so what is your evidence that
>> this message ever went through spamassassin?
>>
>
>


Fwd: too many missed spams/false negatives w/ SA 3.4.1 on sendmail, help w config?

2016-07-23 Thread Robert Kudyba
We have a user who has about a 50% missed rate on spam detection. I'm
wondering if his user prefs or something is preventing scanning of all
messages?

SpamAssassin version 3.4.1, running on Perl version 5.20.3, sendmail
Version 8.15.2


The contents of the user_prefs file:


# How many points before a mail is considered spam.

# required_score 5


# Whitelist and blacklist addresses are now file-glob-style patterns, so

# "fri...@somewhere.com", "*@isp.com", or "*.domain.net" will all work.

# whitelist_from some...@somewhere.com

blacklist_from localde...@amazon.com

blacklist_from *@lormaneducation.net

blacklist_from *ncnet2.org

blacklist_from  *salesengineintl.com

blacklist_from *@shedsplansstart.com

blacklist_from *@multibriefs.com

blacklist_from pimsleur_approach@*

blacklist_from HSIAlert@*


# Add your own customised scores for some tests below.  The default scores
are

# read from the installed spamassassin rules files, but you can override
them

# here.  To see the list of tests and their default scores, go to

# http://spamassassin.apache.org/tests.html .

#

# score SYMBOLIC_TEST_NAME n.nn


# Speakers of Asian languages, like Chinese, Japanese and Korean, will
almost

# definitely want to uncomment the following lines.  They will switch off
some

# rules that detect 8-bit characters, which commonly trigger on mails using
CJK

# character sets, or that assume a western-style charset is in use.

#

# score HTML_COMMENT_8BITS 0

# score UPPERCASE_25_50 0

# score UPPERCASE_50_75 0

# score UPPERCASE_75_100 0

# score OBSCURED_EMAIL  0


# Speakers of any language that uses non-English, accented characters may
wish

# to uncomment the following lines.   They turn off rules that fire on

# misformatted messages generated by common mail apps in contravention of
the

# email RFCs.


# score SUBJ_ILLEGAL_CHARS  0


his .procmailrc file:


## only turn these on for debugging

##

##VERBOSE=on

##MAILDIR=$HOME/mail

##LOGFILE=$MAILDIR/from


##

:0:

* ? formail -x"From:" -x"From" -x"Sender:" | egrep -is -f $HOME/.whitelist

$ORGMAIL


## Silently drop all Asian language mail


:0:

*
^Subject:.*=\?(iso-2022-jp|ISO-2022-JP|iso-2022-kr|ISO-2022-KR|euc-kr|EUC-KR|gb2312|GB2312|ks_c_5601-1987|KS_C_5601-1987|koi8-r|KOI8-R)

/dev/null


:0:

* ^Content-Type:.*charset="?
?(iso-2022-jp|ISO-2022-JP|iso-2022-kr|ISO-2022-KR|euc-kr|EUC-KR|gb2312|GB2312|ks_c_5601-1987|KS_C_5601-1987|koi8-r|KOI8-R)

/dev/null


:0:

*
^X-Coding-System:.*charset="?(iso-2022-jp|ISO-2022-JP|iso-2022-kr|ISO-2022-KR|euc-kr|EUC-KR|gb2312|GB2312|ks_c_5601-1987|KS_C_5601-1987|koi8-r|KOI8-R)

/dev/null


## Chinese spam filter

:0:

* ^Subject:.*=\?utf-8\?B\?[56]

mail/Unreadable


:0:

* ^Content-Type:.*charset="?windows-1250

/dev/null


:0:

* ^Subject: Auto-discard notification

/dev/null


:0:

* ^Subject: (DELIVERY FAILURE:|failure notice$)

SpamSpoofing


:0:

* ^Subject: .*[Aa]cai.*

Caughtspam


:0:

* ^Subject: ACH payment report

Caughtspam


:0:

* ^Subject: \[SPAM\].*

Caughtspam


:0fw:

| /usr/bin/spamc

:0:

* ^X-Spam-Status: Yes

Caughtspam


:0HB:

* ? /usr/bin/bogofilter -p

Caughtspam


:0:

* ^From: Vitale

Caughtspam


##

#

# The condition line ensures that only messages smaller than 250 kB

# (250 * 1024 = 256000 bytes) are processed by SpamAssassin. Most spam

# isn't bigger than a few k and working with big messages can bring

# SpamAssassin to its knees.

#

# The lock file ensures that only 1 spamassassin invocation happens

# at 1 time, to keep the load down.

#

:0fw: spamassassin.lock

* < 256000

| spamassassin


# Mails with a score of 15 or higher are almost certainly spam (with 0.05%

# false positives according to rules/STATISTICS.txt). Let's put them in a

# different mbox. (This one is optional.)

:0:

* ^X-Spam-Level: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*

almost-certainly-spam


# All mail tagged as spam (eg. with a score higher than the set threshold)

# is moved to Caughtspam

:0:

* ^X-Spam-Status: Yes

Caughtspam


# Work around procmail bug: any output on stderr will cause the "F" in
"From"

# to be dropped.  This will re-add it.

:0

* ^^rom[ ]

{

  LOG="*** Dropped F off From_ header! Fixing up. "


  :0 fhw

  | sed -e '1s/^/F/'

}


# :0:

# $DEFAULT
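
An aside on the file above: it pipes mail through /usr/bin/spamc only after a
number of whitelist/drop/sort recipes, and then runs the much heavier
spamassassin script a second time further down, so anything delivered by an
earlier recipe is never scanned and much of the rest is scanned twice. A
minimal sketch of an ordering that scans first and sorts afterwards (the spamc
path, the size limit and the folder name are illustrative, not a drop-in
replacement):

# Scan early, sort afterwards.
:0fw
* < 512000
| /usr/bin/spamc

# Anything tagged as spam goes straight to the spam folder.
:0:
* ^X-Spam-Status: Yes
Caughtspam

# ...site-specific whitelist/drop/sort recipes would follow here...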



default /root/.spamassassin/user_prefs file:


# SpamAssassin user preferences file.  See 'perldoc Mail::SpamAssassin::Conf'

# for details of what can be tweaked.

###


# How many points before a mail is considered spam.

# required_score 5


# Whitelist and blacklist addresses are now file-glob-style patterns, so

# "fri...@somewhere.com", "*@isp.com", or "*.domain.net" will all work.

# whitelist_from some...@somewhere.com


# Add your own customised scores for some tests below.  The default scores are
# read from the installed spamassassin rules files, but you can override them
# here.  To see the list of tests and their default scores, go to

# 

Re: Using Postfix and Postgrey - not scanning after hold

2016-07-19 Thread Robert Schetterer
On 19.07.2016 at 06:44, Ryan Coleman wrote:
> How do I get Spamassassin configured with Postfix to have the email checked 
> there FIRST before running it through Postgrey?
> 
> Or how do I get it to dump back into the queue after the hold time and scan 
> through SpamAssassin?
> 
> I’m watching all my log files and emails that are clearing PostGrey are 
> definitely not going to SpamAssassin next; and they never get there in the 
> first place because of Postgrey.
> 
> I have a theory that I can fix my massive spam issue (250-750 emails/day to 
> my mailboxes alone) if I can get them switched or to play together.
> 
> Thanks!
> 

I use postscreen, clamav-milter (with the sanesecurity signatures), opendkim,
opendmarc, policyd-spf, spamass-milter, postgrey (only selectively), and lots
of other restrictions.

There are a few "best practices" for setting this up, but in the end you have
to analyse your logs to see what works best at your site, because everyone has
their own "unwanted stuff". Greylisting has not been a big factor in my setups
for years; too many bots now implement full SMTP engines and get past
greylisting. Content filters like SpamAssassin are expensive, so reject as
much as possible with cheaper checks before handing mail to SpamAssassin.
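
As a rough illustration of that ordering in Postfix (the parameter values, the
DNSBL and the postgrey port are assumptions, not taken from this thread): the
cheap rejects and the greylisting policy check run at RCPT TO time, so a
rejected client never gets as far as spamass-milter or any other content
filter seeing the message body.

# main.cf fragment, illustrative only
postscreen_greet_action = enforce
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    reject_non_fqdn_sender,
    reject_unknown_sender_domain,
    reject_rbl_client zen.spamhaus.org,
    check_policy_service inet:127.0.0.1:10023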


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: SPF should always hit? SOLVED

2016-07-11 Thread Robert Fitzpatrick

Robert Fitzpatrick wrote:

Joe Quinn wrote:

On 6/9/2016 11:23 AM, Robert Fitzpatrick wrote:

Excuse me if this is too lame a question, but I have the SPF plugin
enabled and it hits a lot. Should SPF_ something hit on every message
if the domain has an SPF record in DNS?

Furthermore, a message found as Google phishing did not get a hit on an
email address where the domain has SPF set up. Not sure if it would
fail anyway if the envelope from is the culprit.


In a perfect world, every message you scan will hit one of the following:
SPF_HELO_NONE
SPF_HELO_NEUTRAL
SPF_HELO_PASS
SPF_HELO_FAIL
SPF_HELO_SOFTFAIL
T_SPF_HELO_PERMERROR
T_SPF_HELO_TEMPERROR

And additionally one of the following:
SPF_NONE
SPF_NEUTRAL
SPF_PASS
SPF_FAIL
SPF_SOFTFAIL
T_SPF_PERMERROR
T_SPF_TEMPERROR



I finally was able to get SPF checks to be more reliable by making sure 
Postfix SPF policies were in place. Here is a good read 


https://github.com/mail-in-a-box/mailinabox/issues/698
Excerpt: It's worth noting that lack of postfix's spf checker renders 
spamassassin's flagging impaired because without it spamassassin in my 
case is only adding helo_pass and that's all regarding spfs.


Once we got the Postfix SPF checks set up using the Python version, with
rejects disabled in its config, we now have headers we can be sure are
handled by our custom rules in addition to any SA checks.
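
For anyone wanting to reproduce this, a sketch of the usual pypolicyd-spf
hookup follows; the binary path, config location, user name and time limit are
assumptions that vary by distribution, so check your package's documentation.
With rejects turned off, the policy daemon only prepends a Received-SPF header,
which SpamAssassin's SPF plugin (see ignore_received_spf_header in its
documentation) or your own header rules can then evaluate.

# master.cf, illustrative only
policyd-spf  unix  -  n  n  -  0  spawn
    user=policyd-spf argv=/usr/bin/policyd-spf

# main.cf, illustrative only
policyd-spf_time_limit = 3600
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    check_policy_service unix:private/policyd-spf

# policyd-spf.conf: header-only mode, no SMTP-time rejects
HELO_reject = False
Mail_From_reject = False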


--
Robert



Re: SPF should always hit?

2016-06-09 Thread Robert Fitzpatrick

Joe Quinn wrote:

On 6/9/2016 11:23 AM, Robert Fitzpatrick wrote:

Excuse me if this is too lame a question, but I have the SPF plugin
enabled and it hits a lot. Should SPF_ something hit on every message
if the domain has an SPF record in DNS?

Furthermore, a message found as Google phishing did not get a hit on an
email address where the domain has SPF set up. Not sure if it would
fail anyway if the envelope from is the culprit.


In a perfect world, every message you scan will hit one of the following:
SPF_HELO_NONE
SPF_HELO_NEUTRAL
SPF_HELO_PASS
SPF_HELO_FAIL
SPF_HELO_SOFTFAIL
T_SPF_HELO_PERMERROR
T_SPF_HELO_TEMPERROR

And additionally one of the following:
SPF_NONE
SPF_NEUTRAL
SPF_PASS
SPF_FAIL
SPF_SOFTFAIL
T_SPF_PERMERROR
T_SPF_TEMPERROR

In practice, there are almost certainly a few edge cases where messages
can avoid getting one in either category. For purposes of writing your
own metas against these, the rules that matter most for measuring
spamminess are the none, pass, and fail/softfail results. The rest are
for total coverage of the results that an SPF query can yield, for
debugging and documentation purposes.

Also, none of these will hit at all if you disable network tests.
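
For illustration only (the rule name and score below are invented for this
example and would need tuning against your own mail flow), a meta built on
those results might look like:

meta     LOCAL_SPF_BOTH_FAIL  (SPF_FAIL || SPF_SOFTFAIL) && (SPF_HELO_FAIL || SPF_HELO_SOFTFAIL)
describe LOCAL_SPF_BOTH_FAIL  Both envelope-from and HELO SPF checks failed
score    LOCAL_SPF_BOTH_FAIL  2.0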


Yes, network tests are on. I have lots of messages hitting; as you suggested,
it is harder to find one that doesn't. However, I can find several among our
database of 280K cached messages which do not hit any of these rules. So,
what would be a reason they didn't hit?


The only custom rule I have with SPF_* combines SPF_FAIL with the absence of
a valid DKIM signature to give a higher score:


meta WT_FORGED_SENDER (SPF_FAIL && !DKIM_VALID)
describe WT_FORGED_SENDER To score high when SPF fails without valid DKIM
score    WT_FORGED_SENDER 8.0

Here is the score for this particular example:

2.095   FREEMAIL_FORGED_REPLYTO        Freemail in Reply-To, but not From
1.000   XPRIO_SHORT_SUBJ               (No description provided)
0.250   FREEMAIL_REPLYTO_END_DIGIT     Reply-To freemail username ends in digit
0.001   HTML_MESSAGE                   HTML included in message
0.001   HEADER_FROM_DIFFERENT_DOMAINS  (No description provided)
0.000   RCVD_IN_DNSWL_NONE             Sender listed at http://www.dnswl.org/, low trust
-1.900  BAYES_00                       Bayesian spam probability is 0 to 1%
-5.000  RCVD_IN_JMF_W                  (No description provided)

--
Robert


SPF should always hit?

2016-06-09 Thread Robert Fitzpatrick
Excuse me if this is too lame a question, but I have the SPF plugin 
enabled and it hits a lot. Should SPF_ something hit on every message if 
the domain has an SPF record in DNS?


Furthermore, a message found as Google phishing did not get a hit on an
email address where the domain has SPF set up. Not sure if it would fail
anyway if the envelope from is the culprit.


--
Robert


