Re: procmail (was Re: Spam messages bypassing SA)
On Fri, 24 Oct 2014 08:43:41 -0400, David F. Skoll d...@roaringpenguin.com wrote:

> Procmail is also unmaintained abandonware, as far as I can tell.
> If you use SpamAssassin, you probably like Perl, so I would
> recommend Email::Filter instead. It's far more flexible than
> procmail and lets you write readable filters.
> Since procmail is still the default LDA on Debian, this is my .procmailrc:
>
>     :0
>     | /usr/bin/perl /home/dfs/.mail-filter.pl >> /home/dfs/.mail-filter.log 2>&1
>
> And excerpts from my filter look something like this:

Or you could run dovecot and its sieve plugin. Sieve is a real standard (RFC 5228), which procmail never was.

--
Please *no* private copies of mailing list or newsgroup messages.
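For the archives, here is a minimal sketch of the Email::Filter approach David describes. The rules, folder names, and conditions below are made up for illustration; they are not excerpts from his actual filter, just the shape a procmail-invoked Perl filter can take.

```perl
#!/usr/bin/perl
# Hypothetical ~/.mail-filter.pl: procmail pipes the raw message to STDIN,
# and Email::Filter parses it and delivers it. Rules are illustrative.
use strict;
use warnings;
use Email::Filter;

my $mail = Email::Filter->new();   # reads the message from STDIN

# File list mail by subject tag (accept() delivers and exits).
$mail->accept("$ENV{HOME}/Mail/spamassassin")
    if $mail->subject =~ /\[SA-list\]/i;

# Arbitrary Perl conditions work here, e.g. time-of-day routing,
# which is hard to express in procmail or plain Sieve.
my $hour = (localtime)[2];
$mail->accept("$ENV{HOME}/Mail/after-hours")
    if $hour < 8 || $hour >= 18;

# Everything else goes to the default mailbox.
$mail->accept();
```

Since the whole filter is ordinary Perl, rules like "who has the support pager this week" are just another subroutine call in a condition.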
Re: spf: lookup failed: addr is not a string
Hey Mark,

> Do you have a firewall in place that tries to do deep packet inspection
> on DNS UDP packets but does not understand EDNS0 (the OPT RR)?

Thanks for the suggestion! Unfortunately, the network is not the culprit. I tried to apply my chef recipes to a virtual machine on my desktop computer (different network) and get the same message from spamassassin. But when I install debian wheezy, spamassassin and unbound manually, I'm not able to reproduce this issue anymore. There must be some package or configuration option in my chef recipes that causes spamassassin to fail the DNS lookup. I'll investigate further tomorrow.

Thanks!
Thomas
Re: procmail (was Re: Spam messages bypassing SA)
On Mon, 27 Oct 2014 23:50:20 -0700 Ian Zimmerman i...@buug.org wrote:

> Or you could run dovecot and its sieve plugin. Sieve is a real standard
> (RFC 5228) which procmail never was.

It may be a standard, but it's nowhere near as flexible as Perl. I have very unusual filtering requirements (for example, rules that change depending on time-of-day or depending on who has the support pager that week) that are best expressed with a proper programming language.

Regards,

David.
Re: spf: lookup failed: addr is not a string
Hey!

On Oct 28, 2014, at 10:51 AM, Thomas Preißler tho...@preissler.me wrote:

> Hey Mark,
>
>> Do you have a firewall in place that tries to do deep packet inspection
>> on DNS UDP packets but does not understand EDNS0 (the OPT RR)?
>
> Thanks for the suggestion! Unfortunately, the network is not the culprit.
> I tried to apply my chef recipes to a virtual machine on my desktop
> computer (different network) and get the same message from spamassassin.
> But when I install debian wheezy, spamassassin and unbound manually, I'm
> not able to reproduce this issue anymore. There must be some package or
> configuration option in my chef recipes that causes spamassassin to fail
> the DNS lookup. I'll investigate further tomorrow.

It looks like the problem appears only with the package libmail-dkim-perl and some nameservers. If I uninstall this package, or use 8.8.8.8 as the DNS server, I don't get the message "spf: lookup failure" anymore.

The responses of the two nameservers seem to be pretty much the same. There is just a very small difference if you ask for the DNSSEC signature:

    dig @<ip> SPF mail.sys4.de +dnssec

    156.154.70.1 shows "EDNS: version: 0, flags: do; udp: 4096"
    8.8.8.8 shows "EDNS: version: 0, flags: do; udp: 512"

Finally, I'm able to reproduce this issue on a plain debian wheezy system:

- install debian wheezy
- enable backports and run apt-get update
- apt-get -t wheezy-backports install spamassassin
- apt-get install libmail-dkim-perl
- set 156.154.70.1 as the only nameserver in /etc/resolv.conf
- run spamassassin -D < mail.eml

But removing libmail-dkim-perl is not really a solution. This package provides the Mail::DKIM module, which is required to check DKIM signatures.

Thanks!
Thomas
Re: procmail
David F. Skoll d...@roaringpenguin.com wrote: On Mon, 27 Oct 2014 23:50:20 -0700 Ian Zimmerman i...@buug.org wrote: Or you could run dovecot and its sieve plugin. Sieve is a real standard (RFC 5228) which procmail never was. It may be a standard, but it's nowhere near as flexible as Perl. I have very unusual filtering requirements (for example, rules that change depending on time-of-day or depending on who has the support pager that week) that are best expressed with a proper programming language. Do you keep sharp knives away from children? :-) -- A. Filip
Re: procmail
On Tue, 28 Oct 2014 13:28:19 +0100 Andrzej A. Filip andrzej.fi...@gmail.com wrote: It may be a standard, but it's nowhere near as flexible as Perl. I have very unusual filtering requirements (for example, rules that change depending on time-of-day or depending on who has the support pager that week) that are best expressed with a proper programming language. Do you keep sharp knives away from children? :-) Sure, but that doesn't mean a consummate chef need fear them! Regards, David.
Re: Spam messages bypassing SA
From: Bob Proulx b...@proulx.com
Date: Mon, 27 Oct 2014 18:37:35 -0600

In the first email:

    # The lock file ensures that only 1 spamassassin invocation happens
    # at 1 time, to keep the load down.
    #
    :0fw: spamassassin.lock
    * 40
    | spamc -x

Kevin A. McGrail wrote:
> geoff.spamassassin140903 wrote:
>> Kevin A. McGrail wrote:
>>> Using procmail without MTA glue is OK for many uses. I am wondering
>>> how many spamd connections you allow and if you have checked your
>>> logs? I also cannot remember, but the use of a lock file seems odd for
>>> something that can thread. Anyone know if it is a good idea to remove?
>> I wonder if you could explain in simple terms what the lockfile
>> achieves in this situation? Is it even possible that it could cause
>> messages to bypass SA?
> I don't think a lockfile achieves anything because it's a call to a
> program. Procmail has some weird syntax, so hopefully someone with some
> procmail-fu can tell us if a lock on a procmail system call does
> anything.

Well... The comment in the example explains what the lock is attempting to do. I think that comment got missed in the follow-ups. The lock will restrict spamassassin invocations to one at a time to prevent a high system load from running too many spamassassin processes all at once. It will serialize spamassassin invocations to one at a time instead of many in parallel.

Normally the MTA will receive incoming messages and will fork a process for each incoming connection. If the outside world connects and sends 100 messages all at once, then there will be 100 MTA processes running in parallel. If 10,000 all at once, then probably some MTA process limit will prevent forking that many, depending upon your configuration. Each of those will try to send the message through procmail and spamassassin in parallel too. Running 10,000 procmail processes in parallel probably won't be a problem since it is lightweight. However, running perl spamassassin 100 or 1,000 times in parallel all at once can be quite a resource hit on a moderate system!

By putting the lock in the procmail rule, it prevents more than one perl spamassassin process from running at a time. This keeps the system from being overloaded due to a spike from the outside world. I want to emphasize that the outside world impacts the system and can have the effect of a DDoS just by overwhelming the system with external connections. The MTA has limits to prevent this, but while those are tuned for normal delivery, the MTA maintainers won't know if you are running each message through spamassassin and causing a higher load because of it. The default MTA limits are probably too high when considering running the message through spamassassin too.

The procmail example comes from the wiki page example: http://wiki.apache.org/spamassassin/UsedViaProcmail

The wiki page example is launching spamassassin, not spamc. That is an important difference in this case. Someone has changed that to spamc in the above and preserved all else, including the serialization lock. spamc talks to a spamd, and so the number of parallel processes spamd can handle depends upon the spamd configuration. In the spamc case I would be inclined to remove the serialization lock. Let it be throttled at the spamd side of things instead. That would make the most sense to me. Then tune spamd's limits as needed.

In summary, I suggest removing the serialization lock from the spamc recipe. Give it a try and monitor system resource utilization. Start tuning at spamd. Tune other things as needed afterward.

    :0fw
    | spamc -x

    :0e
    { EXITCODE=$? }

Bob

I agree with everything you wrote, but only when bayes autolearning is turned off. Bayes learning holds an exclusive lock on the bayes database, particularly during expiration. If spamc does bayes autolearning and starts an expiration, then other spamc runs for that user will be locked out of bayes. At some point you start getting timeouts at different points in the email delivery chain. I have a separate sa-learn (or spamc -L) procmail recipe that has a serialization lock.

-jeff
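Putting Bob's and Jeff's suggestions together, a hypothetical ~/.procmailrc might look like the sketch below: no serialization lock on the spamc call (spamd throttles concurrency itself), but a local lockfile around the learning recipe, since Bayes learning takes an exclusive lock on the database. The size condition and the learn-as-ham example are illustrative assumptions, not anyone's posted configuration; what and when to learn is site policy.

```
# Score the message via spamd; no lockfile on this recipe.
# The size guard (messages under ~256 KB) is a common precaution,
# similar to the wiki recipe's condition.
:0fw
* < 256000
| spamc -x

# If spamc failed, preserve its exit code for the MTA.
:0e
{ EXITCODE=$? }

# Separate learning recipe on a copy (c flag), serialized with a
# local lockfile so only one learner touches the Bayes DB at a time.
# Learning everything as ham is ONLY a placeholder for site policy.
:0c: salearn.lock
| sa-learn --ham > /dev/null
```

The point of the shape: delivery-path scoring stays parallel and is throttled by spamd, while the slow, lock-holding Bayes writes are serialized off to the side.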
Re: Spam messages bypassing SA
On Tue, 28 Oct 2014, Jeff Mincy wrote:

> I agree with everything you wrote, but only when bayes autolearning is
> turned off. Bayes learning holds an exclusive lock on the bayes
> database, particularly during expiration. If spamc does bayes
> autolearning and starts an expiration, then other spamc runs for that
> user will be locked out of bayes. At some point you start getting
> timeouts at different points in the email delivery chain.

Automatic expiry is strongly discouraged for this reason; there should be a scheduled cron job that does the expiry.

--
John Hardin KA7OHZ                    http://www.impsec.org/~jhardin/
jhar...@impsec.org    FALaholic #11174     pgpk -a jhar...@impsec.org
key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C AF76 D822 E6E6 B873 2E79
---
...the Fates notice those who buy chainsaws... -- www.darwinawards.com
---
3 days until Halloween
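Concretely, the scheduled-expiry setup John describes uses two real knobs: the `bayes_auto_expire` option in local.cf and `sa-learn --force-expire` from cron. The schedule and paths below are illustrative.

```
# In local.cf: turn off opportunistic expiry during scanning, so only
# the cron job ever takes the long-running expiry lock:
bayes_auto_expire 0

# Example crontab entry (run as the user owning the Bayes DB),
# expiring the token database nightly at a quiet hour:
30 3 * * *  sa-learn --force-expire >/dev/null 2>&1
```

With this arrangement, delivery-time scans never trigger an expiry run, so they can't be locked out of Bayes mid-delivery.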
Re: spamassassin rule to combat phishing
On Mon, Oct 27, 2014 at 4:55 PM, John Hardin jhar...@impsec.org wrote:

> On Mon, 27 Oct 2014, francis picabia wrote:
>
>>> uri URI_EXAMPLE_EXTRA m;^https?://(?:www\.)?example\.com[^/?];i
>>
>> However, another spoofed message was received today and the rule did
>> not capture it. If I want to detect something in the form of:
>>
>>     random_server.example.com.junk
>>
>> I need to wildcard the first bit. Would that be:
>>
>>     uri URI_EXAMPLE_EXTRA m;^https?://(?:.*\.)?example\.com[^/?];i
>>
>> I don't understand what the question mark and colon do inside the ( ).
>> I thought it followed an optional char or expression. Should it be
>> like this?
>>
>>     uri URI_EXAMPLE_EXTRA m;^https?://(.*\.)?example\.com[^/?];i
>
> (?:) means "group, don't remember the match". () remembers what's
> matched for future use in the RE (e.g. to check for repeated strings
> like abcabcabcabc).
>
> Try this:
>
>     uri URI_EXAMPLE_EXTRA m;^https?://(?:[^./]+\.)*example\.com[^/?];i

Once again, thanks for the RE coding. I found a false positive it captured with my attempt at this:

    <a href="http://www.newslettersite.com/redirectnewsletter_login.asp?URL=http://www.secondsite.com/PYB/contact_us.asp&loginemail=u...@example.com&logincode=123456&utm_source=Articles_Air_01112014&utm_medium=email&utm_campaign=newsletter&utm_content=contactus">

I've tested your rule with that and it does not tag for the above. Great. Hopefully useful to others facing domain spoofs in phishing.
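For the archives, a quick check of John's two points: capturing vs. non-capturing groups, and why the final pattern matches the spoofed host but not the legitimate one. SA rules are Perl regexes, but these particular constructs behave the same in Python's `re`, which is used here only as a convenient test bench; the URLs are made up.

```python
import re

# (?:...) groups without capturing; (...) captures for later reuse.
m = re.match(r"(?:abc)+", "abcabcabc")
assert m.group(0) == "abcabcabc"
assert m.groups() == ()                   # nothing was captured

m = re.match(r"(abc)\1+", "abcabcabc")    # \1 reuses the captured "abc"
assert m.group(1) == "abc"

# Host pattern from John's final rule: any run of dot-separated labels
# before example.com, and the character after the domain must NOT start
# a path or query -- i.e. the real domain keeps going (a spoof).
host = re.compile(r"^https?://(?:[^./]+\.)*example\.com[^/?]", re.I)

# Spoof: example.com is not the registered domain, ".junk" follows it.
assert host.search("http://random_server.example.com.junk/login")

# Legitimate: the domain is followed by "/" (a path), so no match.
assert not host.search("http://www.example.com/login")
```

The `[^/?]` tail is what distinguishes `example.com.junk` (next char is `.`) from a genuine `example.com/...` URL (next char is `/` or `?`).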
CYA .link
Patience quota exceeded. What a weird way to get a new TLD's ROI...

    if (version >= 3.004000)
      blacklist_uri_host link
    endif
Re: procmail
On 2014-10-28 06:09, David F. Skoll wrote:

> On Tue, 28 Oct 2014 13:28:19 +0100 Andrzej A. Filip
> andrzej.fi...@gmail.com wrote:
>
>> Do you keep sharp knives away from children? :-)
>
> Sure, but that doesn't mean a consummate chef need fear them!
>
> Regards, David.

Nonetheless, one should keep bare knife switches away from said chef, lest he forget that being a consummate expert in one field does not make him even barely competent in other fields.

1) If it ain't broke, don't fix it.
2) Clumsy is not broken.

Think about it a little.

{^_^}
Re: Spam messages bypassing SA
On 10/27/2014 8:37 PM, Bob Proulx wrote: In the first email: # The lock file ensures that only 1 spamassassin invocation happens # at 1 time, to keep the load down. Thanks, that was my thought as well and your analysis on using spamc and removing the lock was EXACTLY where my thought process was going! Regards, KAM
Re: Is this really the SpamAssassin list? (was Re: unsubscribe)
On Tue, 28 Oct 2014 04:27:14 +0100 Karsten Bräckelmann guent...@rudersport.de wrote: On Mon, 2014-10-27 at 19:44 -0700, jdebert wrote: Redirecting them makes people lazy. Better than annoying but they don't learn anything except to repeat their mistakes. Your assumption, the list moderators (aka owner, me being one of them) would simply and silently obey and dutifully do the un-subscription for them, is flawed. ;) This assumption is unwarranted. I did not say that. Did you read the rest of the message?
Re: procmail
On Tue, 28 Oct 2014 10:24:37 -0700 jdow j...@earthlink.net wrote: Sure, but that doesn't mean a consummate chef need fear them! Nonetheless one should keep bare knife switches away from said chef lest he forget that being an consummate expert in one field does not make him even barely competent in other fields. Yes, well. I've spent the last 13 years of my life creating a company whose products are all in the email security space with most of the critical logic written in Perl. I may be barely competent in some fields, but I do claim a certain competence in using Perl to mess with email. :) I also suspect that most SpamAssassin admins probably have some competence with perl. Anyway, we are drifting OT here I guess... Regards, David.
Re: procmail
On 2014-10-28 11:24, David F. Skoll wrote:

> On Tue, 28 Oct 2014 10:24:37 -0700 jdow j...@earthlink.net wrote:
>
>> Nonetheless one should keep bare knife switches away from said chef
>> lest he forget that being a consummate expert in one field does not
>> make him even barely competent in other fields.
>
> Yes, well. I've spent the last 13 years of my life creating a company
> whose products are all in the email security space, with most of the
> critical logic written in Perl. I may be barely competent in some
> fields, but I do claim a certain competence in using Perl to mess with
> email. :)
>
> I also suspect that most SpamAssassin admins probably have some
> competence with perl. Anyway, we are drifting OT here I guess...
>
> Regards, David.

That is hardly a compelling reason to change from procmail to perl, for me or others with working procmail systems. You seem to be advocating handing me perl and turning me loose after ripping procmail out of my hands. That does not endear you to me. It isn't broken. So why fix it? There is a tremendous amount of experience out there setting it up and using it. Is that a reason to discard it for something new? We're seeing the fruits of that sort of divisiveness with the systemd controversy. If "fix" means better and still 100% compatible, it is an easy sell. If "fix" means 0% compatible, being better is not good for people with better things upon which to spend their time than learning a new way shoved down their throats. In the abstract you are right. In the practical, that rightness appears to tarnish.

In the case in point early on in this discussion, it is quite easy to tell procmail to add a new header, X-been-through-my-spamfilter. Then look for that header before feeding to spamassassin in the procmail script. If it appears, merely deliver the mail. If it doesn't exist, filter it and feed it to the next step. That is NOT a huge effort in procmail if somebody has already embraced learning the damnfool thing. Why condemn the poor sod to learning something new rather than fixing other aspects of his system?

Not all of us here are email administrators. Many are working with smaller systems and manage the entire system more or less single-handed, or with limited help because their employer is cheap.

{^_^}   Joanne   {o.o}
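For the archives, Joanne's mark-and-skip idea might be sketched in procmail roughly as below. The header name comes from her message; the recipe shapes (and the use of formail, which ships with procmail, to stamp the header) are an assumed illustration, not her actual rc file.

```
# Already been through the filter?  Just deliver to the default mailbox.
:0
* ^X-been-through-my-spamfilter: yes
$DEFAULT

# Otherwise run it through spamc, stamping the marker header on the way,
# so a re-traversal of this rc file won't filter it twice.
:0fw
| spamc -x | formail -A "X-been-through-my-spamfilter: yes"
```

The first recipe's condition is a plain header match, so the double-filtering guard costs essentially nothing on the fast path.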
Re: Is this really the SpamAssassin list? (was Re: unsubscribe)
On 10/27/2014 5:37 PM, Karsten Bräckelmann wrote:

>> header __KAM_SA_BLOCK_UNSUB1 Subject =~ /unsubscribe/i
>
> Ouch. Would you please /^anchor$/ that beast? Unless you actually intend
> this sub-thread to be swept off the list, too. ;)

I was trying to stay broad but see your point.

Regards,
KAM
Re: spamassassin rule to combat phishing
On Tue, Oct 28, 2014 at 11:47 AM, francis picabia fpica...@gmail.com wrote:

> On Mon, Oct 27, 2014 at 4:55 PM, John Hardin jhar...@impsec.org wrote:
>
>> (?:) means "group, don't remember the match". () remembers what's
>> matched for future use in the RE (e.g. to check for repeated strings
>> like abcabcabcabc).
>>
>> Try this:
>>
>>     uri URI_EXAMPLE_EXTRA m;^https?://(?:[^./]+\.)*example\.com[^/?];i
>
> Once again, thanks for the RE coding. I found a false positive it
> captured with my attempt at this:
>
>     <a href="http://www.newslettersite.com/redirectnewsletter_login.asp?URL=http://www.secondsite.com/PYB/contact_us.asp&loginemail=u...@example.com&logincode=123456&utm_source=Articles_Air_01112014&utm_medium=email&utm_campaign=newsletter&utm_content=contactus">
>
> I've tested your rule with that and it does not tag for the above.
> Great. Hopefully useful to others facing domain spoofs in phishing.

I thought this was a representative test case, but apparently there is something triggering a false positive when the email is a newsletter which embeds a user's email within URLs. In the sample I've seen, there are 34 such possible links which may have triggered the issue, but I don't know which.

I ran the quarantined sample through spamassassin -D and it shows:

    Oct 28 16:24:01.391 [28945] dbg: rules: ran uri rule URI_MYDOMAIN_PHISH ======> got hit: "http://example.com"

On prior lines in the trace I see other uri rules getting hits, but they seem to be about different URLs. The entire body of the email is base64 encoded. Extracting that part and running base64 -d, I am not finding the hit described by the SA trace. This is my method:

    zcat spam-jUVZBDml0wS5.gz | grep 'http://example.com'

So the URL is not in the non-base64 part.

    zcat spam-jUVZBDml0wS5.gz > /tmp/spamfull
    cp /tmp/spamfull /tmp/spam64
    vi /tmp/spam64                                  (to remove headers)
    base64 -d /tmp/spam64 | grep 'http://example.com'   (no matches)

Double checked with:

    spamassassin -D --lint < /tmp/spamfull 2>&1 | grep 'http://example.com'

Nothing is output except the line above with URI_MYDOMAIN_PHISH. Is there any suggestion on how to nail down where the match is happening?
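Two things may help here. First, SpamAssassin runs uri rules against decoded bodies, so manual base64 extraction has to decode every MIME part, not just the one you spotted; a small script is less error-prone than vi plus base64. Second, SA can also derive URIs that never appear verbatim in any part — for instance, its redirector-pattern support can extract the target of a `URL=http://...` query parameter — which could explain a hit that grep cannot find at all. The sketch below (Python, stdlib only; the sample message in it is invented) covers the first case: decode every part and search it.

```python
import base64
import email

def find_in_parts(raw_bytes, needle):
    """Decode every MIME part (undoing base64/QP per its headers) and
    report the content types of parts containing `needle`.  SA matches
    uri rules against decoded bodies, so grepping the raw file on disk
    can miss the URL it reports hitting."""
    msg = email.message_from_bytes(raw_bytes)
    hits = []
    for part in msg.walk():
        payload = part.get_payload(decode=True)  # None for multipart containers
        if payload is not None and needle in payload:
            hits.append(part.get_content_type())
    return hits

# Tiny self-contained demo: a base64-encoded text part, standing in for
# the quarantined sample (message content here is made up).
raw = (b"From: sender@example.org\r\n"
       b"MIME-Version: 1.0\r\n"
       b"Content-Type: text/plain\r\n"
       b"Content-Transfer-Encoding: base64\r\n"
       b"\r\n" +
       base64.b64encode(b"visit http://example.com now"))

print(find_in_parts(raw, b"http://example.com"))  # prints ['text/plain']
```

Against the real sample you would read /tmp/spamfull in binary mode and pass its bytes in; if this still finds nothing, the redirector-extraction explanation becomes the prime suspect.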
Re: procmail
On Tue, 28 Oct 2014 11:43:04 -0700 jdow j...@earthlink.net wrote:

jdow> That is hardly a compelling reason to change from procmail to
jdow> perl, for me or others with working procmail systems. You seem to
jdow> be advocating handing me perl and turning me loose after ripping
jdow> procmail out of my hands. That does not endear you to me. It isn't
jdow> broken. So why fix it? There is a tremendous amount of experience
jdow> out there setting it up and using it. Is that a reason to discard
jdow> it for something new? We're seeing the fruits of that sort of
jdow> divisiveness with the systemd controversy. If fix means better and
jdow> still 100% compatible it is an easy sell. If fix means 0%
jdow> compatible being better is not good for people with better things
jdow> upon which to spend their time than learning a new way shoved down
jdow> their throats. In the abstract you are right. In the practical,
jdow> that rightness appears to tarnish.

You sound like you're replying more to me than to David.

How do you match non-ASCII From: in procmail? Note that the encoding may differ, even for the same sender, depending on which MUA he's using ATM.

_Some_ old stuff deserves to be replaced.
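Ian's point, concretely: a non-ASCII From: arrives as RFC 2047 "encoded-words", and different MUAs legitimately encode the same name differently (base64 "B" vs. quoted-printable "Q", varying charset capitalization), so a procmail regex on the raw header chases moving bytes. A real language can normalize first. A sketch using the Python standard library; the address is invented, and the display name is borrowed from this thread only as an example of a non-ASCII sender.

```python
from email.header import decode_header, make_header

# Two valid RFC 2047 encodings of the same sender, as two different
# MUAs might produce them (address is hypothetical).
raw_b = "=?UTF-8?B?VGhvbWFzIFByZWnDn2xlcg==?= <t@example.org>"
raw_q = "=?utf-8?Q?Thomas_Prei=C3=9Fler?= <t@example.org>"

def normalize(raw):
    # Decode all encoded-words into one Unicode string; after this,
    # a single pattern matches regardless of the wire encoding.
    return str(make_header(decode_header(raw)))

assert normalize(raw_b) == normalize(raw_q)
assert "Preißler" in normalize(raw_b)
```

Matching on `normalize(header)` instead of the raw header is the step procmail simply has no way to express.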
Re: CYA .link
On 10/28/2014 12:06 PM, Axb wrote:

> Patience quota exceeded. What a weird way to get a new TLD's ROI...
>
>     if (version >= 3.004000)
>       blacklist_uri_host link
>     endif

So we added this, wanting to play with this command, and had no change in behavior for an email with this From header:

    From: Notification notification@22notification-munge.linkmunge

But no use of .linkmunge domains in the body. Expected, yes?

regards,
KAM
Re: CYA .link
On 10/28/2014 10:13 PM, Kevin A. McGrail wrote:

> On 10/28/2014 12:06 PM, Axb wrote:
>
>>     if (version >= 3.004000)
>>       blacklist_uri_host link
>>     endif
>
> So we added this, wanting to play with this command, and had no change
> in behavior for an email with this From header:
>
>     From: Notification notification@22notification-munge.linkmunge
>
> But no use of .linkmunge domains in the body. Expected, yes?

It should tag blubber[[.]]link

Are you sure you're using an updated RegistrarBoundaries.pm?
Re: CYA .link
On 10/28/2014 5:19 PM, Axb wrote:

> It should tag blubber[[.]]link

Interesting. OK, definitely not hitting on just a From with a [[.]]link email address for me. Can you test that?

> Are you sure you're using an updated RegistrarBoundaries.pm?

Running trunk with your TLD updates, yes.
Re: procmail (was Re: Spam messages bypassing SA)
On Oct 28, 2014 at 07:40 -0400, David F. Skoll wrote:
=On Mon, 27 Oct 2014 23:50:20 -0700
=Ian Zimmerman i...@buug.org wrote:
=
= Or you could run dovecot and its sieve plugin. Sieve is a real
= standard (RFC 5228) which procmail never was.
=
=It may be a standard, but it's nowhere near as flexible as Perl.
=I have very unusual filtering requirements (for example, rules that
=change depending on time-of-day or depending on who has the support
=pager that week) that are best expressed with a proper programming
=language.

This is for the archives, not to change Ian or David's opinions...

Check out some of the sieve extensions. You (general) might be surprised that sieve can do what is described above.

--
*** Derek Diget                    Office of Information Technology
Western Michigan University - Kalamazoo Michigan USA - www.wmich.edu/ ***
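Also for the archives: one of the extensions Derek alludes to is the Sieve date extension (RFC 5260), whose `currentdate` test covers the time-of-day case David mentioned. A hedged sketch (the folder name and hours are made up; whether your Sieve implementation supports "date" and "relational" is implementation-specific):

```
require ["date", "relational", "fileinto"];

# File messages delivered outside 08:00-18:00 into a separate folder,
# using the RFC 5260 currentdate test with the "hour" date-part.
if anyof (currentdate :value "lt" "hour" "08",
          currentdate :value "ge" "hour" "18") {
    fileinto "after-hours";
}
```

The "who has the pager this week" case is harder in pure Sieve, since it needs external state; some deployments regenerate the script from cron for that.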
Re: CYA .link
On 10/28/2014 10:30 PM, Kevin A. McGrail wrote:

> On 10/28/2014 5:19 PM, Axb wrote:
>
>> It should tag blubber[[.]]link
>
> Interesting. OK, definitely not hitting on just a From with a [[.]]link
> email address for me. Can you test that?
>
>> Are you sure you're using an updated RegistrarBoundaries.pm?
>
> Running trunk with your TLD updates, yes.

it should not hit mailto:, but I can't see it hitting even a full blah[dot]link domain... hmmm
Re: CYA .link
--On Tuesday, October 28, 2014 6:06 PM +0100 Axb axb.li...@gmail.com wrote:

> Patience quota exceeded. What a weird way to get a new TLD's ROI...
>
>     if (version >= 3.004000)
>       blacklist_uri_host link
>     endif

Testing this on my MTA's now...

--Quanah

--
Quanah Gibson-Mount
Server Architect
Zimbra, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
Re: CYA .link
On 10/28/2014 11:16 PM, Quanah Gibson-Mount wrote:

> Testing this on my MTA's now...

Not sure what is broken; don't see the eval hitting in -D rules mode either.
Re: CYA .link
--On Tuesday, October 28, 2014 4:16 PM -0700 Quanah Gibson-Mount qua...@zimbra.com wrote:

> Testing this on my MTA's now...

Doesn't seem to work.

    Oct 28 17:22:35 edge02 amavis[35776]: (35776-08) spam-tag, fallenrollmenti...@vdsc.100web-hostingplusonline.link -> x...@zimbra.com, Yes, score=6.7 tagged_above=-10 required=3 tests=[BAYES_50=0.8, DCC_CHECK=3.5, RP_MATCHES_RCVD=-0.8, URIBL_BLACK=3.2] autolearn=no autolearn_force=no

This is with the updated RegistrarBoundaries.pm file.

--Quanah
Re: CYA .link
On 10/28/2014 11:28 PM, Quanah Gibson-Mount wrote:

> Doesn't seem to work.
>
>     Oct 28 17:22:35 edge02 amavis[35776]: (35776-08) spam-tag, fallenrollmenti...@vdsc.100web-hostingplusonline.link -> x...@zimbra.com, Yes, score=6.7 tagged_above=-10 required=3 tests=[BAYES_50=0.8, DCC_CHECK=3.5, RP_MATCHES_RCVD=-0.8, URIBL_BLACK=3.2] autolearn=no autolearn_force=no
>
> This is with the updated RegistrarBoundaries.pm file.

I think I've found the issue... SA source is missing the eval rules *blush*

With the eval rule it looks like:

    blacklist_uri_host link

    * 100 URI_HOST_IN_BLACKLIST BODY: domain is in the URL's black-list
    *     [URI: www.bupahif.link (link)]

will commit in a few...
Re: CYA .link
On 10/28/2014 10:30 PM, Kevin A. McGrail wrote:

> Interesting. OK, definitely not hitting on just a From with a [[.]]link
> email address for me. Can you test that?
>
> Running trunk with your TLD updates, yes.

before I commit please test with (BEWARE LINE BREAKS IN RULES!!!)

    body      URI_HOST_IN_BLACKLIST   eval:check_uri_host_in_blacklist()
    describe  URI_HOST_IN_BLACKLIST   domain is in the URL's black-list
    tflags    URI_HOST_IN_BLACKLIST   userconf noautolearn
    score     URI_HOST_IN_BLACKLIST   100.0

    body      URI_HOST_IN_WHITELIST   eval:check_uri_host_in_whitelist()
    describe  URI_HOST_IN_WHITELIST   domain is in the URL's white-list
    tflags    URI_HOST_IN_WHITELIST   userconf noautolearn
    score     URI_HOST_IN_WHITELIST   -100.0

    header    HEADER_HOST_IN_BLACKLIST  eval:check_uri_host_listed('BLACK')
    describe  HEADER_HOST_IN_BLACKLIST  Whitelisted header host or domain
    tflags    HEADER_HOST_IN_BLACKLIST  userconf noautolearn
    score     HEADER_HOST_IN_BLACKLIST  100.0

    header    HEADER_HOST_IN_WHITELIST  eval:check_uri_host_listed('WHITE')
    describe  HEADER_HOST_IN_WHITELIST  Blacklisted header host or domain
    tflags    HEADER_HOST_IN_WHITELIST  userconf noautolearn
    score     HEADER_HOST_IN_WHITELIST  -100.0

you should see something like

    * 100.0 HEADER_HOST_IN_BLACKLIST Host or domain found in URI is
    *       blacklisted
    *       [URI: www.bupahif.link (link)]
    * 100 URI_HOST_IN_BLACKLIST BODY: domain is in the URL's black-list
    *       [URI: www.bupahif.link (link)]

thanks

Axb
Re: CYA .link
On 10/29/2014 12:23 AM, Jeff Mincy wrote: From: Axb axb.li...@gmail.com Date: Wed, 29 Oct 2014 00:00:39 +0100 before I commit please test with describe HEADER_HOST_IN_BLACKLIST Whitelisted header host or domain describe HEADER_HOST_IN_WHITELIST Blacklisted header host or domain These two are backwards? fixed... thanks (it's late for me :)
Re: Is this really the SpamAssassin list? (was Re: unsubscribe)
On Tue, 2014-10-28 at 11:19 -0700, jdebert wrote:

> On Tue, 28 Oct 2014 04:27:14 +0100 Karsten Bräckelmann
> guent...@rudersport.de wrote:
>
>> Your assumption, the list moderators (aka owner, me being one of them)
>> would simply and silently obey and dutifully do the un-subscription
>> for them, is flawed. ;)
>
> This assumption is unwarranted. I did not say that.

You said that the unsubscribe-to-list posting user would not learn and would get lazy when those posts get redirected to the owner rather than hitting the list.

Not learning: False. As I said, moderators would respond with explanation and instructions. In particular, learning about his mistake and how to properly (and in future) unsubscribe does make him learn. Since we'd not just unsub him, the user will even have to prove that he learned, by following the procedures and unsubscribing himself.

Getting lazy: People are lazy. But since there's absolutely nothing we would simply do for them, there's no potential in the process to get lazy over. They will have to read and understand how to do it. And they will have to follow every step of the unsub procedure themselves.

So if my assumption was really that unwarranted, please explain what else you meant with those two sentences.

> Did you read the rest of the message?

Yes. And quite frankly, catching unsub messages and bouncing them with a note, as you mentioned, is almost identical to the proposed "redirect them to owner" handling. The latter, involving moderators, has the advantage that we can and will offer additional help if need be.
Re: procmail
I think that one of the things that up and coming Linux admins are supposed to do is write a Procmail is dead article and post it somewhere. It sure seems like it there's enough of them out there. Procmail isn't dead. However, the Procmail website is simply in an awful and atrocious state. It has been at least a half a decade since the server the website is on stopped hosting the distro, which is frankly ridiculous. OK I get that the domain owner doesn't want to spring the money for the bandwidth but there are better ways to handle it than an HTTP error. I also get that the domain owner isn't interested in fixing his HTML. OK whatever. There's a lot of distributions that include Procmail and lots and lots of people using it. There's still people writing patches for it for their distros. Yes it lacks a maintainer which is a shame. But it is even more shameful that RedHat and the other pay distributions aren't stepping up and picking up maintenance of it. I use procmail, but I don't use it to call SpamAssassin. Nor do I use it with any extensive recipes. Ted On 10/28/2014 11:43 AM, jdow wrote: On 2014-10-28 11:24, David F. Skoll wrote: On Tue, 28 Oct 2014 10:24:37 -0700 jdow j...@earthlink.net wrote: Sure, but that doesn't mean a consummate chef need fear them! Nonetheless one should keep bare knife switches away from said chef lest he forget that being an consummate expert in one field does not make him even barely competent in other fields. Yes, well. I've spent the last 13 years of my life creating a company whose products are all in the email security space with most of the critical logic written in Perl. I may be barely competent in some fields, but I do claim a certain competence in using Perl to mess with email. :) I also suspect that most SpamAssassin admins probably have some competence with perl. Anyway, we are drifting OT here I guess... Regards, David. 
That is hardly a compelling reason to change from procmail to perl, for me or others with working procmail systems. You seem to be advocating handing me perl and turning me loose after ripping procmail out of my hands. That does not endear you to me. It isn't broken. So why fix it? There is a tremendous amount of experience out there setting it up and using it. Is that a reason to discard it for something new? We're seeing the fruits of that sort of divisiveness with the systemd controversy. If fix means better and still 100% compatible, it is an easy sell. If fix means 0% compatible, being better is not good for people with better things upon which to spend their time than learning a new way shoved down their throats. In the abstract you are right. In the practical, that rightness appears to tarnish. In the case in point early on in this discussion, it is quite easy to tell procmail to add a new header, X-been-through-my-spamfilter. Then look for that header before feeding to spamassassin in the procmail script. If it appears, merely deliver the mail. If it doesn't exist, filter it and feed it to the next step. That is NOT a huge effort in procmail if somebody has already embraced learning the damnfool thing. Why condemn the poor sod to learning something new rather than fixing other aspects of his system? Not all of us here are email administrators. Many are working with smaller systems and manage the entire system more or less single-handed, or with limited help because their employer is cheap. {^_^} Joanne {o.o}
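jdow's loop-guard idea can be sketched in a few lines of procmail (an untested sketch; the header name is the one mentioned above, while the spamc path and the use of formail are my assumptions):

```procmail
# If our marker header is absent, run the message through
# SpamAssassin and stamp it; otherwise it falls through to delivery.
:0
* ! ^X-been-through-my-spamfilter:
{
  # filter through spamc (path is an assumption)
  :0 fw
  | /usr/bin/spamc

  # stamp the message so we never filter it twice
  :0 fw
  | formail -A "X-been-through-my-spamfilter: yes"
}
```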
Re: procmail
Am 29.10.2014 um 01:23 schrieb Ted Mittelstaedt: I think that one of the things that up and coming Linux admins are supposed to do is write a Procmail is dead article and post it somewhere. It sure seems like there are enough of them out there. Procmail isn't dead. However, the Procmail website is simply in an awful and atrocious state. It has been at least half a decade since the server the website is on stopped hosting the distro, which is frankly ridiculous. OK, I get that the domain owner doesn't want to spring the money for the bandwidth, but there are better ways to handle it than an HTTP error. I also get that the domain owner isn't interested in fixing his HTML. OK, whatever. There are a lot of distributions that include Procmail and lots and lots of people using it. There are still people writing patches for it for their distros. Yes, it lacks a maintainer, which is a shame. But it is even more shameful that RedHat and the other pay distributions aren't stepping up and picking up maintenance of it. Frankly, in times of LMTP and Sieve there is hardly a need to use procmail - it is used because I know it and it just works - so why should somebody step in and maintain it, while nobody is forced to use it?
Re: procmail
On 10/28/2014 5:31 PM, Reindl Harald wrote: Am 29.10.2014 um 01:23 schrieb Ted Mittelstaedt: [...] Frankly, in times of LMTP and Sieve there is hardly a need to use procmail - it is used because I know it and it just works - so why should somebody step in and maintain it, while nobody is forced to use it? From my understanding (as I don't use Dovecot and Sieve) you cannot pipe mail from the Sieve implementation into other programs; once Sieve is done with it, that's it. Right there that's a non-starter for me, I'm afraid. I'm a Unixy brat. Procmail is unixy; some of these more recent Linux bits of software are more Windows than Unix. Ignoring pipes is very bad; if I wanted greasy kid stuff software I'd run Windows on my servers. As for why should someone maintain it, people already maintain it - the distros maintain their versions. You want a single maintainer to coordinate patches so people aren't re-inventing the wheel. Ted
Re: procmail
Am 29.10.2014 um 01:39 schrieb Ted Mittelstaedt: On 10/28/2014 5:31 PM, Reindl Harald wrote: [...] Frankly, in times of LMTP and Sieve there is hardly a need to use procmail - it is used because I know it and it just works - so why should somebody step in and maintain it, while nobody is forced to use it? From my understanding (as I don't use Dovecot and Sieve) you cannot pipe mail from the Sieve implementation into other programs; once Sieve is done with it, that's it. Right there that's a non-starter for me, I'm afraid. True - on the other hand, I don't use Dovecot except as proxy and SASL provider, and have not had any need for procmail in over 6 years now. Sieve is a standard and not Dovecot specific. I'm a Unixy brat. Procmail is unixy; some of these more recent Linux bits of software are more Windows than Unix. Ignoring pipes is very bad; if I wanted greasy kid stuff software I'd run Windows on my servers.
Me too - but I don't use pipes in the context of untrusted input, while incoming mail is exactly that sort of traffic, and at least recent issues prove that right. As for why should someone maintain it, people already maintain it - the distros maintain their versions. You want a single maintainer to coordinate patches so people aren't re-inventing the wheel. Wrong answer, or wrong question: convince someone to take over upstream, or accept that it is dead; until that happens, be happy it is still shipped by distributions instead of being dropped as abandonware altogether.
Re: spf: lookup failed: addr is not a string
On 2014-10-28 13:25, Thomas Preißler wrote: Finally, I’m able to reproduce this issue on a plain debian wheezy system: - install debian wheezy - enable backports and run apt-get update - apt-get -t wheezy-backports install spamassassin - apt-get install libmail-dkim-perl - set 156.154.70.1 as the only nameserver in /etc/resolv.conf - run spamassassin -D mail.eml But removing libmail-dkim-perl is not really a solution. This package provides the Mail::DKIM module which is required to check the DKIM signature. Thanks, I can now reproduce this. I'm beginning to understand what is going on here. Because you have an older version of Mail::DKIM, spamassassin is unable to provide it with its own resolver, so Mail::DKIM does its own resolving by directly calling Net::DNS, which uses IO::Socket::INET, while the rest of SpamAssassin's DNS resolving goes through IO::Socket::IP. For some reason a TCP DNS request by Net::DNS affects the socket used by IO::Socket::IP, making a variable holding a string also get a numerical component, and moreover it becomes tainted. In the end getnameinfo() hits the snag. Weird... Mark
Re: procmail
On Wed, 29 Oct 2014 01:31:51 +0100 Reindl Harald h.rei...@thelounge.net wrote: frankly in times of LMTP and Sieve there is hardly a need to use procmail - it is used because i know it and it just works - so why should somebody step in and maintain it while nobody is forced to use it I use Email::Filter, not procmail, but tell me: Can LMTP and Sieve do the following? 1) Cc: mail containing a specific header to a certain address, but only between 08:00-09:00 or 17:00-21:00. 2) Archive mail in a folder called Received-Archive/YYYY-MM. 3) Take mail to a specific address, shorten it by replacing things like four with 4, this with dis, etc., and send as much of the result as possible as a 140-character SMS message? Oh, and only do this if the support calendar says that I am on the support pager that week. 4) Take the voicemail notifications produced by our Asterisk software and replace the giant .WAV attachment with a much smaller .MP3 equivalent. These are all real-world requirements that my filter fulfills. And it does most of them without forking external processes. (Item 3 actually consults a calendar program to see who's on support, but the rest are all handled in-process.) Regards, David.
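For flavor, the decision logic behind items 1 and 2 might be factored out like this in Perl (a sketch, not David's actual filter; only the time windows and the folder pattern come from the text, and the Email::Filter hook-up shown in the comment is an assumption):

```perl
#!/usr/bin/perl
# Sketch of the logic behind items 1 and 2 above; everything except
# the time windows and the folder pattern is invented for illustration.
use strict;
use warnings;
use POSIX qw(strftime);

# Item 1: Cc only between 08:00-09:00 or 17:00-21:00.
sub in_cc_window {
    my ($hour) = @_;
    return ($hour >= 8 && $hour < 9) || ($hour >= 17 && $hour < 21);
}

# Item 2: archive folder named Received-Archive/YYYY-MM.
sub archive_folder {
    my @when = @_;    # a localtime()-style list
    return strftime('Received-Archive/%Y-%m', @when);
}

# Inside an Email::Filter script these would hook in roughly as:
#   $mail->pipe(...) if $mail->header('X-Some-Header') && in_cc_window((localtime)[2]);
#   $mail->accept(archive_folder(localtime));

print in_cc_window(8)  ? "cc\n" : "no cc\n";       # 08:xx falls inside the window
print in_cc_window(12) ? "cc\n" : "no cc\n";       # noon falls outside
print archive_folder(0, 0, 12, 15, 9, 114), "\n";  # Oct 2014 -> Received-Archive/2014-10
```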
Re: Is this really the SpamAssassin list? (was Re: unsubscribe)
On Wed, 29 Oct 2014 00:33:04 +0100 Karsten Bräckelmann guent...@rudersport.de wrote: On Tue, 2014-10-28 at 11:19 -0700, jdebert wrote: On Tue, 28 Oct 2014 04:27:14 +0100 Karsten Bräckelmann guent...@rudersport.de wrote: On Mon, 2014-10-27 at 19:44 -0700, jdebert wrote: Redirecting them makes people lazy. Better than annoying but they don't learn anything except to repeat their mistakes. Your assumption, the list moderators (aka owner, me being one of them) would simply and silently obey and dutifully do the un-subscription for them, is flawed. ;) This assumption is unwarranted. I did not say that. You said that the unsubscribe-to-list posting user would not learn and get lazy, when those posts get redirected to the owner rather than hitting the list. Not exactly what I said. Not learning: False. As I said, moderators would respond with explanation and instructions. In particular learning about his mistake and how to properly (and in future) unsubscribe, does make him learn. Since we'd not just unsub him, the user will even have to proof that he learned, by following procedures unsubscribing himself. False as evidenced by how the same people repeat the same thing on the same list and on other lists. Got it. Getting lazy: People are lazy. But since there's absolutely nothing we would simply do for them, there's no potential in the process to get lazy over. They will have to read and understand how to do it. And they will have to follow every step of the unsub procedure themselves. The long form of saying we're agreed. And one of the reasons to automate the process. Did you read the rest of the message? Yes. And quite frankly, catching unsub messages and bouncing them with a note as you mentioned is almost identical to the proposed redirect them to owner to handle it. With the latter involving moderators, having the advantage, that we can and will offer additional help if need be. 
Having the listserver catch the messages and handle them is almost identical to redirecting them to the owner for manual handling? I could see that if list owners still managed lists manually. But there's this nifty new software that manages lists automatically, freeing the list owners from all that drudge work. Your assumption is that I am telling you to do all this manually. You seemed to be ambivalent about this, not preferring to do it manually but seeming to prefer to do it manually. My assumption was expecting it to occur to everyone that it might be done automatically. I really did not expect to have to write to ISO-9002 standards on a user list. jd
Re: procmail
On 10/28/2014 7:10 PM, David F. Skoll wrote: On Wed, 29 Oct 2014 01:31:51 +0100 Reindl Harald h.rei...@thelounge.net wrote: frankly in times of LMTP and Sieve there is hardly a need to use procmail - it is used because i know it and it just works - so why should somebody step in and maintain it while nobody is forced to use it I use Email::Filter, not procmail, but tell me: Can LMTP and Sieve do the following? 1) Cc: mail containing a specific header to a certain address, but only between 08:00-09:00 or 17:00-21:00. Yes - it would be ugly, but you could do this from cron. Just make up 2 procmail recipes and have the cron job copy over the one on tap. 2) Archive mail in a folder called Received-Archive/YYYY-MM. There's a milter for that. (I use Sendmail myself.) 3) Take mail to a specific address, shorten it by replacing things like four with 4, this with dis, etc. and send as much of the result as possible as a 140-character SMS message? Oh, and only do this if the support calendar says that I am on the support pager that week. Probably - but I wouldn't want to write it either in Perl or Procmail, nor would I want to read the result! So what happens if the mailserver that you're running the SMS text email through bites the dust? 4) Take the voicemail notifications produced by our Asterisk software and replace the giant .WAV attachment with a much smaller .MP3 equivalent.
This is what I use for that (no Procmail in this):

tedm-voicemail: "|/usr/local/bin/wavmail-to-mp3mail.pl | /usr/sbin/sendmail -i -f v...@example.com t...@example.com"

#!/usr/bin/perl
#
use MIME::Parser;
use IO::File;

$p = new MIME::Parser;
$p->output_to_core(1);
$e = $p->parse(\*STDIN);
$f = "/tmp/$$";
#
# If running on FBSD then this might be simpler:
# open(PIPE, "|/usr/local/bin/ffmpeg -i - -f mp3 $f");
#
# SOX can read the compressed ADPCM since it uses different sound libraries, but it cannot change
# the bitrate, and the bitrate that the Panasonic puts out is unusual - while it can be put into
# an .mp3 by sox, nothing can read it. Instead we convert the wav to regular PCM which lame
# can understand.
#
open(PIPE, "|/usr/local/bin/sox -t wav - -s -t wav - | /usr/local/bin/lame --preset phone -v -q 0 -V 9 --quiet - $f");
print PIPE $e->parts(1)->bodyhandle->as_string;
close PIPE;

if( open(PIPE, "<:bytes", $f) ) {
    if( $fh = $e->parts(1)->bodyhandle->open("w") ) {
        my $buffer;
        while( read(PIPE, $buffer, 10240) > 0 )   ### read chunks of 10KB at a time
        {
            $fh->print($buffer);
        }
        $fh->close;
    }
    close PIPE;
}

$e->parts(1)->head->replace("Content-type", "audio/mp3");
$e->parts(1)->head->mime_attr("content-type.name" => "voice-mail.mp3");
$e->parts(1)->head->replace("Content-Disposition", "attachment");
$e->parts(1)->head->mime_attr("content-disposition.filename" => "voice-mail.mp3");
$e->parts(0)->bodyhandle->as_string =~ m/\((\d+)\)/m;
$e->sync_headers(Length => 'COMPUTE');
$e->print;
unlink($f);

I'd be interested in seeing how you do it with your filter. Ted

These are all real-world requirements that my filter fulfills. And it does most of them without forking external processes. (Item 3 actually consults a calendar program to see who's on support, but the rest are all handled in-process.) Regards, David.
Re: procmail
On Tue, 2014-10-28 at 22:10 -0400, David F. Skoll wrote: frankly in times of LMTP and Sieve there is hardly a need to use procmail - it is used because i know it and it just works - so why should somebody step in and maintain it while nobody is forced to use it I use Email::Filter, not procmail, but tell me: Can LMTP and Sieve do the following? Dammit, this is just too teasing... Sorry. ;) procmail can do all of those. (Yeah, not your question, but still...) 1) Cc: mail containing a specific header to a certain address, but only between 08:00-09:00 or 17:00-21:00. Sure. Limiting to specific days or hours can be achieved without an external process by recipe conditions based on our own SMTP server's Received header, which we can trust to be correct. 2) Archive mail in a folder called Received-Archive/YYYY-MM. Trivial. See man procmailex. 3) Take mail to a specific address, shorten it by replacing things like four with 4, this with dis, etc. and send as much of the result as possible as a 140-character SMS message? Oh, and only do this if the support calendar says that I am on the support pager that week. Yep. Completely internal, given there's an email-to-SMS gateway (flashback 15 years ago), calling an external process for SMS delivery otherwise. 4) Take the voicemail notifications produced by our Asterisk software and replace the giant .WAV attachment with a much smaller .MP3 equivalent. Check. Calling an external process, but I doubt procmail and ffmpeg / avconv is worse than Perl and the modules required for that audio conversion. Granted, in this case I'd need some rather skillful sed-fu in the pipe, or a little help from an external Perl script using MIME-tools... ;) These are all real-world requirements that my filter fulfills. And it does most of them without forking external processes. (Item 3 actually consults a calendar program to see who's on support, but the rest are all handled in-process.)
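The Received-header trick Karsten describes might look roughly like this (untested sketch; the hostname and Cc address are placeholders, and the hour regex only approximates the 08:00-09:00 / 17:00-21:00 windows, assuming the timestamp sits on a matched Received: line rather than a folded continuation):

```procmail
# Carbon-copy when our own MTA's Received: header shows an hour of
# 08, 17, 18, 19 or 20.
:0 c
* ^Received: .*by mail\.example\.com
* ^Received:.* (08|1[789]|20):[0-9][0-9]:[0-9][0-9]
! support-cc@example.com
```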
That said, and all joking apart: Do you guys even remember when this got completely off topic?
Who is ISIPP IADB why are they vouching for spammers?
While grubbing thru messages in one of my spam traps I came across one that had negative scores from: -2.2 RCVD_IN_IADB_VOUCHED RBL: ISIPP IADB lists as vouched-for sender -0.5 KHOP_RCVD_TRUST DNS-Whitelisted sender is verified Since it also hit RAZOR2_CF_RANGE_E8_51_100 RAZOR2_CF_RANGE_51_100 it didn't get learned as ham, but it still generated a FP. Is this worth reporting to somebody? Should that IADB be trustworthy, or should I contribute this sort of spam to the scoring engine to get that -2.2 adjusted down? It is kind of interesting to track the history of spamtrap fodder. These are addresses that were mutations of legit business addresses that I noticed regularly bouncing spam. So I created a catchall (luser relay) handler for them and started tracking the spam fodder. At first it was clearly just garbage spam but gradually mutated as spammers sold their address lists to others, and now it's gotten up to legit-looking businesses (Verizon, ATT, PayPal, etc) throwing their stuff into this spamtrap (i.e. drank the Kool-Aid). -- Dave Funk University of Iowa dbfunk (at) engineering.uiowa.edu College of Engineering 319/335-5751 FAX: 319/384-0549 1256 Seamans Center Sys_admin/Postmaster/cell_admin Iowa City, IA 52242-1527 #include std_disclaimer.h Better is not better, 'standard' is better. B{
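If an admin decides the IADB vouch is not trustworthy for their own mail stream, the stock score can be overridden locally without waiting for the project to rescore it - a sketch for local.cf (the path and the -0.5 value are just examples, not recommendations):

```
# /etc/mail/spamassassin/local.cf (path varies by distro)
# Shrink the bonus granted to IADB-vouched senders.
score RCVD_IN_IADB_VOUCHED -0.5
```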
Re: Who is ISIPP IADB why are they vouching for spammers?
David B Funk dbf...@engineering.uiowa.edu writes: While grubbing thru messages in one of my spam traps I came across one that had negative scores from: -2.2 RCVD_IN_IADB_VOUCHED RBL: ISIPP IADB lists as vouched-for sender -0.5 KHOP_RCVD_TRUST DNS-Whitelisted sender is verified Since it also hit RAZOR2_CF_RANGE_E8_51_100 RAZOR2_CF_RANGE_51_100 it didn't get learned as ham, but it still generated a FP. Is this worth reporting to somebody? Should that IADB be trustworthy They say they are. But they all say that (it's how they earn money). If you want to go any further, you should read the mail and decide for yourself how you classify it. Obviously someone thought it was spam and reported it to Razor, but the sender has been paying ISIPP, and they think they are legitimate. Best regards, Olivier or should I contribute this sort of spam to the scoring engine to get that -2.2 adjusted down? It is kind of interesting to track the history of spamtrap fodder. These are addresses that were mutations of legit business addresses that I noticed regularly bouncing spam. So I created a catchall (luser relay) handler for them and started tracking the spam fodder. At first it was clearly just garbage spam but gradually mutated as spammers sold their address lists to others and now it's gotten up to legit looking businesses (Verizon, ATT, PayPal, etc) throwing their stuff into this spamtrap (i.e. drank the Kool-Aid). --
Re: Spam messages bypassing SA
Jeff Mincy wrote: I agree with everything you wrote, but only when bayes autolearning is turned off. Bayes learning holds an exclusive lock on the bayes database, particularly during expiration. But the example was calling spamc. Bayes autolearning would be occurring on the spamd side of things. The spamc shouldn't need to know about it. The spamd side worries about that. That is rather the entire point of using the client-server model. Otherwise one would simply run the full perl spamassassin there instead. (There are other reasons for the client-server model too. And yet more for running the full perl spamassassin inline. There is no canonical correct way.) For my personal mail I run the full perl spamassassin. For mailing lists I run it through spamc->spamd. And as John noted it is much better to run sa-learn --expire as a separate process, probably cron driven, and not inline with the SA run. If spamc does bayes autolearning and starts an expiration then other spamc runs for that user will be locked out of bayes. At some point you start getting timeouts at different points in the email delivery chain. Any time supply (spamd) can't keep up with demand (spamc) there may be timeouts and other failures. The question is where those might occur. In the suggested recipe a timeout between spamc and spamd would cause the spamc to exit with EX_TEMPFAIL (75), which would cause procmail to exit the same, which would cause the MTA to requeue and retry the message later. For spamc->spamd use I pump the mail off through spamc and let spamd queue and process as fast as it can. In my environment I am not experiencing timeouts. But if for some reason the resources of supply did not keep up with demand then the message would simply queue for retry later when resources may be available. If the system was overloaded then that is about the best that can be done anyway. If that happened often then increasing the compute resources on the spamd side would allow it to keep up better.
Serializing spamc will definitely give spamd plenty of time between messages so that the system won't be overloaded. But if the system is dedicated to handling mail and anti-spam then it won't be able to be highly utilized. Running more in parallel will usually utilize resources more efficiently. In my environment spamc and spamd are on different systems. Therefore making use of parallel compute resources is a good thing. But if in your environment everything happens on one single system then serialization may be best. It is your judgement call and every environment is different. I have a separate sa-learn (or spamc -L) procmail recipe that has a serialization lock. I run sa-learn --expire from cron. I run spamc --learntype=spam otherwise, using a different invocation not involving procmail. Bob
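Bob's serialized learning recipe might look something like this (untested sketch; the lockfile name and the sa-learn path are arbitrary). The local lockfile after the second colon is what makes procmail queue concurrent deliveries instead of running sa-learn in parallel:

```procmail
# Learn the message as spam, one delivery at a time.
:0 c:salearn.lock
| /usr/bin/sa-learn --spam
```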
Re: Is this really the SpamAssassin list? (was Re: unsubscribe)
On Tue, 2014-10-28 at 19:56 -0700, jdebert wrote: On Wed, 29 Oct 2014 00:33:04 +0100 Karsten Bräckelmann guent...@rudersport.de wrote: Redirecting them makes people lazy. Better than annoying but they don't learn anything except to repeat their mistakes. Your assumption, the list moderators (aka owner, me being one of them) would simply and silently obey and dutifully do the un-subscription for them, is flawed. ;) This assumption is unwarranted. I did not say that. You said that the unsubscribe-to-list posting user would not learn and get lazy, when those posts get redirected to the owner rather than hitting the list. Not exactly what I said. In the part you snipped from my previous post, I asked you to explain what you did mean, if not what I discussed in detail. This response is neither helpful nor constructive. Not learning: False. As I said, moderators would respond with explanation and instructions. In particular, learning about his mistake and how to properly (and in future) unsubscribe does make him learn. Since we'd not just unsub him, the user will even have to prove that he learned, by following the procedure to unsubscribe himself. False as evidenced by how the same people repeat the same thing on the same list and on other lists. Got it. Show me an example of one subscriber repeating this mistake on this list. Show me an example of one subscriber repeating this mistake on this list, after the proposed and discussed redirect-to-owner procedure is in effect, which is meant to help with the issue. You cannot possibly show the latter, since it is not yet in effect. So there is no evidence as you just claimed. Moreover, there is absolutely no basis to your evidence claim that directly approaching those subscribers by moderators would not make them learn. You'll have a really hard time showing the first, too. Got it. (Not a native English speaker, what's that supposed to mean in the context of your quote? Equivalent of a foot-stomp?)
Getting lazy: People are lazy. But since there's absolutely nothing we would simply do for them, there's no potential in the process to get lazy over. They will have to read and understand how to do it. And they will have to follow every step of the unsub procedure themselves. The long form of saying we're agreed. And one of the reasons to automate the process. Fun research project for you in strong favor of automation: How many such posts did this list get in the last month? Statistically irrelevant spike. Last 6 months? Last year? Two years? I am a moderator of this list. I do know that handling those bad unsub requests manually would be barely noticeable compared to the general moderation load. Which isn't high either. Did you read the rest of the message? Yes. And quite frankly, catching unsub messages and bouncing them with a note as you mentioned is almost identical to the proposed redirect them to owner to handle it. With the latter involving moderators, having the advantage, that we can and will offer additional help if need be. Having the listserver catch the messages and handle them is almost identical to redirecting them to the owner for manual handling? I could see that if list owners still managed lists manually. But there's this nifty new software that manages lists automatically, freeing the list owners from all that drudge work. I am very sorry, but it appears you have absolutely no clue what nursing mailing lists today means. Yes, all subscription (and un-subscription) is handled automatically. No owner intervention, not even notices. Automation. What we mostly do face is posts by non-subscribers. Mostly spam (just ignore), but also a non-negligible amount of valid posts by non-subscribers, or list-replies by subscribers using a wrong address. The latter outweighs by far the amount of non-subscribers. Unsub posts to the list? About the same as non-subscriber posts. Very limited. 
Almost negligible, if some rare samples won't trigger an on-list shitstorm. With the proposed process in place, I would have spent less time managing and resolving the last 12 months' bad unsub requests than it took me arguing with you about something that really does not concern you. Your assumption is that I am telling you to do all this manually. You seemed to be ambivalent about this, not preferring to do it manually but seeming to prefer to do it manually. No. I know from experience that doing this manually is the easiest, least time consuming solution. And with no word did I imply you are telling me to do all this manually. Quite the contrary. My assumption was expecting it to occur to everyone that it might be done automatically. I really did not expect to have to write to ISO-9002 standards on a user list. Exactly, *might*. Not the best solution in this case.