RE: pyzor problem.
I thought that was the purpose of the pyzor discover command? Who maintains 82.94.255.100, as it doesn't get listed with pyzor discover?

-----Original Message-----
From: User for SpamAssassin Mail List [mailto:[EMAIL PROTECTED]
Sent: Monday, July 30, 2007 6:56 PM
To: Gary V
Cc: users@spamassassin.apache.org
Subject: Re: pyzor problem.

On Mon, 30 Jul 2007, Gary V wrote:

> >We noticed pyzor latency/timeouts last week and had to disable it.
> >
> >User for SpamAssassin Mail List wrote:
> > > Hello,
> > >
> > > I've noticed a big jump in spam here and looking through logs it
> > > looks like my system is not getting pyzor to respond.
> > >
> > > When I do a "spamassassin --lint -D"
> > >
> > > I show:
> > >
> > > debug: Pyzor is available: /usr/bin/pyzor
> > > debug: Pyzor: got response: 66.250.40.33:24441 TimeoutError:
> > > debug: Pyzor: couldn't grok response "66.250.40.33:24441 TimeoutError: "
> > >
> > > Has something changed with pyzor as of late?
> > >
> > > Anyone have any clues?
> > >
> > > Thanks,
> > >
> > > Ken
> >
> >--
> >Joel Nimety
>
> I think the main server has been overloaded for a couple of years now.
> Find the .../.pyzor/servers file and replace 66.250.40.33:24441 with
> 82.94.255.100:24441
>
> It should help.
>
> Gary V

Gary,

That server "82.94.255.100:24441" solved the problem. The next problem was how to change that IP address in the ~/.pyzor/servers files for all the customers, so I put together a script to do just that. Here is that script in case others want to do the same thing.

Thanks,

Ken

You must put a servers file containing 82.94.255.100:24441 in the /etc/skel/.pyzor directory.

Script follows:

#!/bin/sh
#
# This script changes the pyzor server in each user's home directory to
# the server that is listed in /etc/skel/.pyzor/servers.
# This became a problem when the primary server stopped responding.
# - knr - 7-07
#
USERNAME=""
cd /home
for USERNAME in `ls -d *`; do
  if [ -d /home/${USERNAME}/.pyzor ]; then
    if [ -f /home/${USERNAME}/.pyzor/servers ]; then
      cp /etc/skel/.pyzor/servers /home/${USERNAME}/.pyzor/servers;
      chown ${USERNAME}:users /home/${USERNAME}/.pyzor/servers;
    fi
  fi
done
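The same repair can be sketched as a reusable POSIX shell function (the function name is mine, and this variant is untested against a production /home): using find(1) instead of `ls -d *` avoids word-splitting surprises, and taking the base directory and master file as arguments makes it easy to dry-run against a test tree first.

```shell
#!/bin/sh
# Sketch of the same pyzor-servers repair as a function. Arguments:
#   $1 - base directory holding the user homes (e.g. /home)
#   $2 - master servers file (e.g. /etc/skel/.pyzor/servers)
update_pyzor_servers() {
    base="$1"; master="$2"
    find "$base" -mindepth 2 -maxdepth 2 -type d -name .pyzor |
    while IFS= read -r dir; do
        # like the original script, only touch users who already
        # have a servers file
        [ -f "$dir/servers" ] || continue
        cp "$master" "$dir/servers"
        owner=$(basename "$(dirname "$dir")")
        # chown fails harmlessly when not run as root
        chown "$owner":users "$dir/servers" 2>/dev/null || true
    done
}
```

Usage would then be `update_pyzor_servers /home /etc/skel/.pyzor/servers`.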
SA Rule based on checks
Is it possible to have a rule that looks at the SA checks already performed and scores based on that? For example, I'm thinking about a rule that offsets a negative Bayes/CRM114 value if DCC and Razor or some other checks have tripped.

-=B
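This is what SpamAssassin's meta rules are for: a meta test fires on a boolean combination of other tests' results. A hypothetical local.cf sketch (the LOCAL_* name and the 2.0 score are invented; BAYES_00, DCC_CHECK, RAZOR2_CHECK and PYZOR_CHECK are the stock SA 3.x test names):

```
# Hypothetical meta rule: claw back points when Bayes says ham but two
# independent network checksum tests disagree with it.
meta     LOCAL_NET_VS_BAYES  (BAYES_00 && DCC_CHECK && (RAZOR2_CHECK || PYZOR_CHECK))
describe LOCAL_NET_VS_BAYES  Bayes says ham but DCC plus Razor/Pyzor say spam
score    LOCAL_NET_VS_BAYES  2.0
```

Note that meta rules can only see other rules' hit/no-hit state, not their scores, so the offset has to be a fixed value.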
RE: pyzor finally dead?
I just checked my logs because I was surprised to hear this, and it looks like 82.94.255.100:24441 is what I'm still using, and my MailScanner-SpamAssassin log entries are still showing that SA rule being tripped by some transactions: 263 so far today and 1740 yesterday. The average number of SA-checked messages is around 40-50 thousand a day.

I believe pyzor was supposed to be an answer for those who wanted to run their own razor server but didn't want to pay the licensing fees, so you could just run your own pyzor server instead.

-----Original Message-----
From: Martin.Hepworth [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 13, 2008 7:39 AM
To: [EMAIL PROTECTED]
Subject: pyzor finally dead?

All

We've been using the following pyzor server for many years now, as the 'official' one seemed not to update:

82.94.255.100:24441

Now this one seems to be dead as well (people on the MailScanner IRC channel also report no activity from it). Any news or good alternates?

--
Martin Hepworth
Snr Systems Administrator
Solid State Logic
Tel: +44 (0)1865 842300

**
Confidentiality: This e-mail and any attachments are intended for the addressee only and may be confidential. If they come to you in error you must take no action based on them, nor must you copy or show them to anyone. Please advise the sender by replying to this e-mail immediately and then delete the original from your computer.

Opinion: Any opinions expressed in this e-mail are entirely those of the author and unless specifically stated to the contrary, are not necessarily those of the author's employer.

Security Warning: Internet e-mail is not necessarily a secure communications medium and can be subject to data corruption. We advise that you consider this fact when e-mailing us.

Viruses: We have taken steps to ensure that this e-mail and any attachments are free from known viruses but in keeping with good computing practice, you should ensure that they are virus free.
Red Lion 49 Ltd T/A Solid State Logic Registered as a limited company in England and Wales (Company No:5362730) Registered Office: 25 Spring Hill Road, Begbroke, Oxford OX5 1RU, United Kingdom **
RE: SORBS_DUL
It does make sense that they would list unused/unowned netblocks in APNIC in their database, probably because of the probability that such blocks would get assigned to an ISP which would more than likely offer them up as dynamic. I haven't looked there in a while, but I thought their site explained the conditions for IPs and netblocks to be in the DUL database, and I thought it said listings were based on info published by the ISP as well as reverse lookup records.

Over the years of my use of SORBS_DUL, I've seen maybe a dozen or so .coms that had their static ISP-assigned address in SORBS_DUL because of their PTR records. Once they contacted their ISP and changed their PTR records so that they didn't look dynamic (IP embedded), SORBS removed the IP from the DUL database.

-----Original Message-----
From: James Gray [mailto:[EMAIL PROTECTED]
Sent: Monday, March 24, 2008 4:57 PM
To: users@spamassassin.apache.org
Subject: SORBS_DUL

Why are rules that look up against this list still in the base of SpamAssassin?? The SORBS dynamic list is so poorly maintained that it's practically useless, and if you are an unfortunate who ends up incorrectly listed in it, good luck getting off it!

Case at hand: the company I work for purchased a /19 address block directly from APNIC before anyone else had it (IOW, we were the first users of that block). We now have both our external mail IPs listed in SORBS_DUL despite the fact that the /24 they belong to, and the /24s on either side, have NEVER been part of a dynamic pool. SORBS refuse to delist them as our MX records are different to these outgoing mail servers! FFS - we run managed services for a number of ISPs; why the hell would we *want* to munge all our inbound and outbound mail through the same IPs?!?

Seriously folks, can we make SORBS_DUL optional and not "on by default" in the general distribution?

Cheers,

James
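For admins who agree with James, no change to the distribution is needed: any stock rule can be disabled locally. A minimal local.cf sketch (RCVD_IN_SORBS_DUL is the stock rule name in SA 3.x):

```
# Setting a rule's score to 0 disables it without editing the shipped
# rule files (which rule updates would overwrite anyway).
score RCVD_IN_SORBS_DUL 0
```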
BATV and whitelisting
I'm starting to see BATV use increasing. Has anyone thought about how this affects whitelists, MTA ACLs, etc.? It looks like such things are broken, because if an end-user whitelists [EMAIL PROTECTED] and BATV has the MAIL FROM as [EMAIL PROTECTED], then that whitelisting has no effect. And since the BATV signature changes, they can't whitelist that even if they knew what the BATV-signed address was for that sender.

Any thoughts about how to resolve this? I was thinking of stripping out the BATV stuff to get the sender's address for matching, but I see different kinds of prvs= addresses out there. Some have [EMAIL PROTECTED] and others have [EMAIL PROTECTED]

Bobby
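One way to make a whitelist BATV-tolerant is to normalize the envelope sender before matching. A shell sketch of the stripping step (the function name is mine, and the two prvs= layouts handled are assumptions standing in for the masked examples above: the draft-style `prvs=TAG=user@domain` and an older `user=prvs=TAG@domain` variant; real deployment would hook this into the MTA or glue layer):

```shell
# Strip a BATV "prvs" tag so the bare sender can be matched against a
# whitelist. Two assumed layouts are handled:
#   prvs=TAG=user@example.com   (draft BATV form, tag first)
#   user=prvs=TAG@example.com   (older variant, tag last)
# Anything else passes through unchanged.
strip_batv() {
    printf '%s\n' "$1" | sed \
        -e 's/^prvs=[0-9a-zA-Z]*=//' \
        -e 's/=prvs=[0-9a-zA-Z]*@/@/'
}
```

For example, `strip_batv "prvs=1234abcdef=bob@example.com"` yields the bare address, which can then be looked up in the whitelist.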
Fuzzy 2.3b and PNG
What am I missing? I updated, but PNG still isn't working. If I switch to debug logging level 2, I see the following in the log when I run the sample through:

[2006-08-26 18:16:40] Debug mode: Analyzing file with content-type "image/png"
[2006-08-26 18:16:40] Debug mode: Image type not recognized, unknown format. Skipping this image...

Thanks
Bobby
RE: I'm thinking about suing Microsoft
But Windows patches are free. Even if you are using an illegal copy of Windows, you can still manually download and install the patches; it's Microsoft Update where they mostly have the Genuine Windows verification code. Even Redhat forces you to pay subscriptions for their autoupdate management stuff.

-----Original Message-----
From: Marc Perkel [mailto:[EMAIL PROTECTED]
Sent: Monday, October 23, 2006 3:59 PM
To: Jo
Cc: Duane Hill; users@spamassassin.apache.org
Subject: Re: I'm thinking about suing Microsoft

Popularity is a factor. But the real vulnerability is that Windows can be more secure if it has the patches. If Linux, for example, restricted its security patches to only licensed users, they would have the same problem. I'm not saying either that MS should be compelled to distribute any upgrades for free. Just security fixes.
RE: mail bounce warning for the list
So what you're saying is that the rule that people running list servers should maintain valid recipients who want to receive messages from the list shouldn't be followed just because it's a list about an antispam product? The last time I checked, the most common reason for spamcop listings is messages being sent to their spam traps.

What's the point of even having rules in SA for spamcop and other DNSBLs if you don't have a certain level of trust in them? SA is more resource intensive than an MTA block, which is why so many still use MTA-level blocking. I know that over 20k messages a day trip the SORBS DUL rule here and around 10k trip spamhaus. You can pretty much bet it's all spam, so I can understand why people would rather use those lists at their MTAs based on their observations of the mail flow for their domains.

There have been messages posted to this list that can have very positive SA scores simply due to the content. So based on that, I guess everyone should whitelist users@spamassassin.apache.org, and spammers reading the list can just turn around and use that as their return address, because then the argument could be made that anyone who doesn't deserves not to get mail from the SA lists.

I believe the correct process here is that the moderators of the SA list server investigate why the list server got listed on Spamcop. If it is a case where there are spamtrap addresses on the list, then maybe the list needs to send out opt-in verification messages to weed them out.

-=B

From: Mike Kenny [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 07, 2006 3:15 AM
To: users@spamassassin.apache.org
Subject: Re: mail bounce warning for the list

On 11/7/06, Derek Harding <[EMAIL PROTECTED]> wrote:

Gary W. Smith wrote:
>> Was the SA group listed by spamcop last month?
>> I just now received this for messages from October 26th.

Who cares?

>> <[EMAIL PROTECTED]>:
>> 209.209.82.24 does not like recipient.
>> Remote host said: 554 5.7.1 Service unavailable; Client host
>> [140.211.11.2] blocked using bl.spamcop.net; Blocked - see
>> http://www.spamcop.net/bl.shtml?140.211.11.2
>> Giving up on 209.209.82.24.
>>
>> Gary Wayne Smith

Anyone dumb enough to block outright on the spamcop BL deserves whatever they don't get.

Derek

Is this not part of the problem? That many of these people who 'deserve whatever they don't get' are operating under the mistaken belief that these spam vigilantes are protecting them from spam and allowing legitimate mail through? We can enter into a pointless argument about whether this is due to the stupidity of their administrators or the arrogance of the knowledgeable administrators, but the fact is that this is happening. This is evidenced by the number of complaints from people claiming either not to have received legitimate email or to have had it bounced by spamcop or some such site.

Blocking mail based solely on the IP address (whether because it is a dynamic address or has at some time in the past sent a mail to a spamtrap) is akin to shooting the postman because yesterday you received an advertisement. The only way to kill spam is to inspect the mail using a tool such as SA and then reach an intelligent decision based on the results (the interpretation of the results will vary from site to site). Blocking IP addresses will not kill spam; it kills the mail system. The spammer will move to another IP, and the poor innocent user doesn't know what to do and either accepts that his mail may not reach all recipients or reverts to licking stamps.

mike
SA Rule help question
Does anyone know how a rule can be written to compare two headers for similar info? I don't think SA can do variable storage, so I was thinking maybe a regex rule that normalizes what I want to focus on from one header into the regex search of another header. For example, let's say that I wanted to see if the same email address is in both the X-Envelope-From header and the Reply-To header. That's not exactly what I'm looking to do, but it's similar.

Thanks for any suggestions

Bobby Rose

This document may include proprietary and confidential information of Wayne State University Physician Group and may only be read by those person(s) to whom it is addressed. If you have received this e-mail message in error, please notify us immediately. This document may not be reproduced, copied, distributed, published, modified or furnished to third parties, without prior written consent of Wayne State University Physician Group. Thank you.
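SA's static header rules can't reference one header's value from inside another header's regex (that needs a plugin with an eval check), but the comparison itself is simple. A shell sketch of the idea, run against a raw message file (the function name is mine; it naively scans the whole file rather than just the header block, and it compares raw values, so display-name forms like `Bob <bob@example.com>` would need extra normalization):

```shell
# Return success when X-Envelope-From and Reply-To carry the same value.
same_envfrom_replyto() {
    msg="$1"
    # take the first matching header line and drop the "Name: " prefix
    env_from=$(grep -i '^X-Envelope-From:' "$msg" | head -1 | sed 's/^[^:]*:[[:space:]]*//')
    reply_to=$(grep -i '^Reply-To:' "$msg" | head -1 | sed 's/^[^:]*:[[:space:]]*//')
    [ -n "$env_from" ] && [ "$env_from" = "$reply_to" ]
}
```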
RE: New free blacklist: BRBL - Barracuda Reputation Block List
I had the same issue and found that the system relaying those confirmation emails (216.129.105.40) doesn't have a PTR record. You'd think someone selling an antispam/email appliance would be familiar with the RFCs.

-----Original Message-----
From: Justin Piszcz [mailto:[EMAIL PROTECTED]
Sent: Monday, September 22, 2008 10:15 AM
To: Daniel J McDonald
Cc: users@spamassassin.apache.org
Subject: Re: New free blacklist: BRBL - Barracuda Reputation Block List

On Mon, 22 Sep 2008, Daniel J McDonald wrote:

Hmm, I signed up for this 1-2 days ago but never got a confirmation e-mail from them? What is the RBL name?

Justin.
RE: New version of iXhash plugin available
Has anyone who switched to 1.5 of iXhash received any hits? I haven't seen any since switching. One thing that I've noticed is that if I pass the same message through SA using the old iXhash, the hash is computed via methods 1 and 2; if I use 1.5 of iXhash, it's only computed using method 2.

On one box I have SA with 1.5 and on another I have SA with 1.01, and the box with the 1.01 version has tripped about 5 messages since switching it back.

-----Original Message-----
From: Dirk Bonengel [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 29, 2008 8:55 AM
To: Arthur Dent; users@spamassassin.apache.org
Subject: Re: New version of iXhash plugin available

Arthur Dent schrieb:
[CUT]
> Hmmm.. OK I tried another one. This one actually triggered iXhash when I got
> it originally. You can see the original mail (including headers showing iXhash
> report) here:
>
> http://pastebin.ca/1269211
>
> Running it through now it doesn't generate the error that the
> other message did, BUT... it doesn't trigger iXhash anymore!

That mail's somewhat older, isn't it? So it may simply have been removed from Ctyme's (and the others') data. It doesn't trigger here either.
> Here's the debug output:
>
> [EMAIL PROTECTED] SpamSamples]$ spamassassin -D IXHASH < watchtv.txt
> [15993] dbg: IXHASH: Using iXhash plugin
> [15993] dbg: IXHASH: IxHash querying ctyme.ixhash.net
> [15993] dbg: IXHASH: Hash value #1 not computed, requirements not met
> [15993] dbg: IXHASH: Computed hash-value e82057ad6c568847f62357498f0dd4db via method 2, using perl exclusively
> [15993] dbg: IXHASH: Now checking e82057ad6c568847f62357498f0dd4db.ctyme.ixhash.net
> [15993] dbg: IXHASH: Hash value #3 not computed, requirements not met
> [15993] dbg: IXHASH: IxHash querying hosteurope.ixhash.net
> [15993] dbg: IXHASH: Hash value #1 not computed, requirements not met
> [15993] dbg: IXHASH: Now checking e82057ad6c568847f62357498f0dd4db.hosteurope.ixhash.net
> [15993] dbg: IXHASH: Hash value for method #2 found in metadata, re-using that one
> [15993] dbg: IXHASH: Now checking e82057ad6c568847f62357498f0dd4db.hosteurope.ixhash.net
> [15993] dbg: IXHASH: Hash value #3 not computed, requirements not met
> [15993] dbg: IXHASH: IxHash querying generic.ixhash.net
> [15993] dbg: IXHASH: Hash value #1 not computed, requirements not met
> [15993] dbg: IXHASH: Now checking e82057ad6c568847f62357498f0dd4db.generic.ixhash.net
> [15993] dbg: IXHASH: Hash value for method #2 found in metadata, re-using that one
> [15993] dbg: IXHASH: Now checking e82057ad6c568847f62357498f0dd4db.generic.ixhash.net
> [15993] dbg: IXHASH: Hash value #3 not computed, requirements not met
> [15993] dbg: IXHASH: IxHash querying ix.dnsbl.manitu.net
> [15993] dbg: IXHASH: Hash value #1 not computed, requirements not met
> [15993] dbg: IXHASH: Now checking e82057ad6c568847f62357498f0dd4db.ix.dnsbl.manitu.net
> [15993] dbg: IXHASH: Hash value for method #2 found in metadata, re-using that one
> [15993] dbg: IXHASH: Now checking e82057ad6c568847f62357498f0dd4db.ix.dnsbl.manitu.net
> [15993] dbg: IXHASH: Hash value #3 not computed, requirements not met
> From ...
> [Headers and spam report removed]

This all looks OK to me. Just keep an eye on your spam flow and see if you get any hits (they would most likely hit at ix.dnsbl.manitu.net; Heise's list would be the biggest one).
RE: Bug in iXhash plugin - fixed version available
I just tried again with this 1.5.2 version, and on one box it times out querying, and on another it seems to run but gets no hits again. Both my boxes are SA 3.2.5. Does anyone have a message that is known to have hashes on any of the iXhash hosts?

-----Original Message-----
From: Giampaolo Tomassoni [mailto:[EMAIL PROTECTED]
Sent: Wednesday, December 03, 2008 12:49 PM
To: 'Marc Perkel'; 'Dirk Bonengel'
Cc: users@spamassassin.apache.org
Subject: RE: Bug in iXhash plugin - fixed version available

> -----Original Message-----
> From: Marc Perkel [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, December 03, 2008 12:04 AM
>
> it's WORKING

Well, it hangs my SA 3.2.4 setup waiting for a reply from ctyme.ixhash.net. The strange thing is that it consumes a lot of CPU while hanging... Some problem on the ctyme.ixhash.net side? Is anybody else experiencing the same?

Giampaolo

> Dirk Bonengel wrote:
> > OK, I found the bug.
> >
> > I just released a fixed release. Thanks to Lars Uhlmann for finding
> > the culprit and delivering a fix.
> > Problem was the regular expression checking whether the IP returned
> > belongs to the 127.x.x.x range.
> >
> > Hmm, I had this working before...
> >
> > Sorry again for the trouble
> >
> > Dirk
RE: Bug in iXhash plugin - fixed version available
The old version will still work. 1.5.2 is working for me, except that since starting to use it, I'm seeing more SA timeouts than before. So on one box I've gone back to 1.01 to confirm that it is iXhash 1.5.2 causing the timeouts.

-----Original Message-----
From: RobertH [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 04, 2008 12:59 AM
To: users@spamassassin.apache.org
Subject: RE: Bug in iXhash plugin - fixed version available

is there anything wrong with still using an older pre-1.5.x version of iXhash?

is there a problem that makes an upgrade recommended?

OR

is there a problem that forces us to upgrade?

- rh
RE: Bug in iXhash plugin - fixed version available
Yep. Timeouts have stopped on the node that I switched back to iXhash 1.0.1.

-----Original Message-----
From: Rose, Bobby [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 04, 2008 8:22 AM
To: users@spamassassin.apache.org
Subject: RE: Bug in iXhash plugin - fixed version available

The old version will still work. 1.5.2 is working for me, except that since starting to use it, I'm seeing more SA timeouts than before. So on one box I've gone back to 1.01 to confirm that it is iXhash 1.5.2 causing the timeouts.

-----Original Message-----
From: RobertH [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 04, 2008 12:59 AM
To: users@spamassassin.apache.org
Subject: RE: Bug in iXhash plugin - fixed version available

is there anything wrong with still using an older pre-1.5.x version of iXhash?

is there a problem that makes an upgrade recommended?

OR

is there a problem that forces us to upgrade?

- rh
RE: SpamAssassin 3.0.5 RELEASED
Is anyone else having problems getting to www.apache.org? I've tried from work and from home. The site acts like it's trying to load and then eventually gives the generic "cannot find server or DNS error". It's not DNS, because the FQDN resolves.

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 06, 2005 10:10 PM
To: dev@spamassassin.apache.org; users@spamassassin.apache.org
Cc: Warren Togami
Subject: SpamAssassin 3.0.5 RELEASED

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

(NOTE: this is a maintenance release of the 3.0.x branch. If you are already running the more up-to-date, stable 3.1.0, pay no attention! This is only for people who are stuck on 3.0.x for some reason.)

We got enough votes for those tarballs we voted on last week, so it's an official release now. Here are the checksums:

md5sum of archive files:

0d6066561db3e4efff73f00c34584cb8 Mail-SpamAssassin-3.0.5.tar.bz2
12c9f14ffaeb5cb3b5801cc5b5231cdd Mail-SpamAssassin-3.0.5.tar.gz
e0d0e556d5929bb209aedc91ccdb2358 Mail-SpamAssassin-3.0.5.zip

sha1sum of archive files:

30dcfce390a311dfff9430c1b00ae4f7e4357ca8 Mail-SpamAssassin-3.0.5.tar.bz2
99051775deb4566077fdca57a274531bade19bc8 Mail-SpamAssassin-3.0.5.tar.gz
7632e774d111764f041efb9e42453fc38885a1c2 Mail-SpamAssassin-3.0.5.zip

And they're available at http://www.apache.org/dist/spamassassin/ .
Abbreviated changelog:

- bug 4464: Trivial doco change
- bug 4346: Skip large messages in sa-learn
- bug 4570: Optimize a regexp that was blowing perl stack trying to parse very long headers
- bug 4275: Fix some incorrectly case-insensitive URL parsing regexps
- bug 3712: more efficient parsing of messages with lots of newlines in header
- bug 4065: Recognize new outlook express msgid format
- bug 4390: Recognize URLs obfuscated using backslashes
- bug 4439: Fix removal of markup when there are DOS newlines
- bug 4565: new Yahoo server naming is causing FORGED_YAHOO_RCVD false positives
- bug 4522: URI parsing with JIS encoding
- bug 4655: fix redhat init script for spamd to be smarter about stopping processes
- bug 4190: race condition in round-robin forking algorithm
- bug 4535: parse mime content boundary with -- correctly
- bug 3949: fix ALL_TRUSTED misfires

--j.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.1 (GNU/Linux)
Comment: Exmh CVS

iD8DBQFDllKcMJF5cimLx9ARAicsAJ9scH3eWPq7rf3g2usGIPjZnf5cQQCglK8g
WdqjzNMaHzszmTI5xT8nHjk=
=aU+H
-----END PGP SIGNATURE-----
Apache SpamAssassin 3.2.0 using older version of ImageInfo
The ImageInfo packaged with 3.2.0 isn't the latest version from SARE, as it's missing the image_name_regex method.

-=B
SA 3.2.0 and Undisclosed recipients?
Does anyone know why the UNDISC_RECIPS test was removed from 20_head_tests.cf? I searched the dev lists and it's mentioned in the context of being obsolete when run against the corpus, but I've seen a lot of spam that is sent to undisclosed-recipients (aka BCC). I've added it to my local.cf, but it's odd it was removed when I'm seeing it as a common factor in a lot of recent spam coming to my domain after upgrading.

-=Bobby
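For anyone else wanting the same behavior back, a local.cf sketch of the idea (this is a hypothetical re-creation, NOT the rule text that shipped in older releases; the LOCAL_* name and 1.0 score are invented, and ToCc is SA's stock pseudo-header covering both To and Cc):

```
# Hypothetical re-creation of an undisclosed-recipients test.
header   LOCAL_UNDISC_RECIPS  ToCc =~ /^undisclosed[-\s]+recipients?\b/i
describe LOCAL_UNDISC_RECIPS  To/Cc appears to be "undisclosed recipients"
score    LOCAL_UNDISC_RECIPS  1.0
```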
RE: ANNOUNCE: Apache SpamAssassin 3.2.1 available
I'm seeing the same kind of messages mentioned after compiling from source on Redhat ES4 and running make test.

-----Original Message-----
From: Daniel J McDonald [mailto:[EMAIL PROTECTED]
Sent: Monday, June 11, 2007 6:35 PM
To: users@spamassassin.apache.org
Subject: Re: ANNOUNCE: Apache SpamAssassin 3.2.1 available

On Mon, 2007-06-11 at 21:14 +0100, Justin Mason wrote:
> Apache SpamAssassin 3.2.1 is now available! This is a maintenance and
> security release of the 3.2.x branch. It is highly recommended that
> people upgrade to this version from 3.2.0.

Whilst compiling the RPM for Mandriva Corporate Server 4:

t/spamc_optC    Not found: reported spam = Message successfully reported/revoked
# Failed test 2 in t/SATest.pm at line 635
Output can be examined in: log/d.spamc_optC/out.1
t/spamc_optC    NOK 2
Not found: revoked ham = Message successfully reported/revoked
# Failed test 4 in t/SATest.pm at line 635 fail #2
Output can be examined in: log/d.spamc_optC/out.1 log/d.spamc_optC/out.3
t/spamc_optC    NOK 4
Not found: failed to report spam = Unable to report/revoke message
[...]
Output can be examined in: log/d.spamc_optC/out.1 log/d.spamc_optC/out.3 log/d.spamc_optC/out.5 log/d.spamc_optC/out.7
t/spamc_optC    FAILED tests 2, 4, 6, 8
Failed 4/9 tests, 55.56% okay
t/spamc_optL    # Failed test 1 in t/spamc_optL.t at line 20
Not found: learned spam = Message successfully un/learned
[...]
t/spamc_optL    FAILED tests 1-16
Failed 16/16 tests, 0.00% okay

Failed Test                 Total  Fail   Failed  List of Failed
----------------------------------------------------------------
t/spamc_optC.t                  9     4   44.44%  2 4 6 8
t/spamc_optL.t                 16    16  100.00%  1-16
t/spamd_allow_user_rules.t      5     1   20.00%  4
t/spamd_plugin.t                6     2   33.33%  4 6

17 tests skipped.
Failed 4/129 test scripts, 96.90% okay. 23/1981 subtests failed, 98.84% okay.
make: *** [test_dynamic] Error 255
error: Bad exit status from /var/tmp/rpm-tmp.45769 (%check)

Any thoughts?
--
Daniel J McDonald, CCIE # 2495, CISSP # 78281, CNX
Austin Energy
http://www.austinenergy.com
URI Tests and Japanese Chars
I have a user who is of Japanese origin and who converses with other individuals in Japan in his same field of study. The messages they send are in Japanese and trip the URI_SBL rule. These people are in different .jp domains, and I really don't want to get into the administrative overhead of whitelisting. I don't see anything in the message bodies that even looks like a URI. Has anyone else run into this?

Bobby Rose
Wayne State University School of Medicine
RE: URI Tests and Japanese Chars (solved)
I figured out the problem: it was an individual's email address in the message body (even though not a mailto). Their email domain isn't listed at spamhaus.org, but it turns out one of their ISP's DNS servers is, which they are using as a secondary. This makes the second time I've come across this. The last time it was an ISP's (pipex.net) DNS server in the U.K. that was tripping the URIBL_SBL rule.

This time the user is in med.juntendo.ac.jp (Juntendo Univ Med School), whose ISP is cwidc.net, and the DNS server ns03.cwidc.net (154.33.17.212) is the one in spamhaus.org, which they say is hosting a long-time spammer.
http://www.spamhaus.org/sbl/sbl.lasso?query=SBL17240

Does URI checking really need to be so thorough? Obviously there must be some bias at spamhaus if the big-name ISPs don't get their name servers listed, because we know that they provide services to spammers. Any idea on how to limit the scope to just the URI at its face value?

-----Original Message-----
From: Rose, Bobby [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 15, 2005 2:14 PM
To: users@spamassassin.apache.org
Subject: URI Tests and Japanese Chars

I have a user that is of Japanese origin and who converses with other individuals in Japan in his same field of study. The messages they send are in Japanese and trip the URI_SBL rule. These people are in different .jp domains and I really don't want to get into the administrative overhead of whitelisting. I don't see anything in the message bodies that even looks like a URI. Has anyone else ran into this?

Bobby Rose
Wayne State University School of Medicine
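Short of changing how uridnsbl resolves the name servers of domains it finds, the practical local workaround is to reduce the rule's weight. A local.cf sketch (the 0.1 value is arbitrary; setting it to 0 disables the rule entirely):

```
# De-weight URIBL_SBL locally if SBL name-server listings keep hitting
# mail from legitimate correspondents.
score URIBL_SBL 0.1
```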
RE: URI Tests and Japanese Chars (solved)
This is an excerpt that I used in trying to track it down. No real mailto URI, unless there is some translation going on with email addresses embedded in the body by the email client on send. At first I just thought it might be a bug, since the messages were using the ISO-2022-JP character set, but if I sent just a plain text message with just the [EMAIL PROTECTED] in the body, then URIBL_SBL was tripped.

*
----- Original Message -----
From: "user1" <[EMAIL PROTECTED]>
To: "user2" <[EMAIL PROTECTED]>
Sent: Friday, March 11, 2005 11:14 AM
Subject: Re: $BFb;[EMAIL PROTECTED](J
***

-=B

-----Original Message-----
From: Jeff Chan [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 16, 2005 7:52 AM
To: users@spamassassin.apache.org
Subject: Re: URI Tests and Japanese Chars (solved)

On Wednesday, March 16, 2005, 3:55:52 AM, Bobby Rose wrote:

> I figured out the problem, it was an individual's email address in
> the message body (even though not a mailto). Their email domain isn't
> listed at spamhaus.org but it turns out one of their ISP's DNS servers
> is, which they are using as secondary. This makes the second time
> I've come across this. The last time it was an ISP's (pipex.net) DNS
> server in the U.K. that was tripping the URIBL_SBL rule.
>
> This time the user is in med.juntendo.ac.jp (Juntendo Univ Med
> School), whose ISP is cwidc.net, and the DNS server ns03.cwidc.net
> (154.33.17.212) is the one in spamhaus.org which they say is hosting a
> long-time spammer.
> http://www.spamhaus.org/sbl/sbl.lasso?query=SBL17240
>
> Does URI checking really need to be so thorough? Obviously there must
> be some bias at spamhaus if the big-name ISPs don't get their name
> servers listed because we know that they provide services to spammers.
> Any idea on how to limit the scope to just the URI at its face value?
uridnsbl, used in the default rule URIBL_SBL, does check domain name servers against SBL, but I'm kind of surprised to hear it triggering on email addresses. It should definitely be checking web sites and the like. Can you give a sample of the text it hit? Was it in URI form, like:

  mailto://[EMAIL PROTECTED]

That said, I agree that the SBL listings are at times overbroad. Name servers for gov.ru and spb.ru for example are listed (ns.rtcomm.ru and ns1.relcom.ru respectively). Listings like those can cause false positives, and I personally object to deliberately harming innocent bystanders to "pressure" ISPs.

Jeff C.
--
Jeff Chan
mailto:[EMAIL PROTECTED]
http://www.surbl.org/
RE: URI Tests and Japanese Chars (solved)
But in my test messages the email address wasn't in the form of a URI; it was just the bare email address. I even used pine for a test to make sure it wasn't a GUI client doing some reformatting on send.

Do we know if it's possible to tell whether the results from SBL are for the domain of the URI being queried, or whether they are due to some association with the domain being queried? If so, then we could ignore any results other than those for the domain being queried, or weigh the results differently so long as they aren't cumulative points for each occurrence. Otherwise, the points would add up the more that person's email address appears in the email.

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 17, 2005 5:26 PM
To: Daryl C. W. O'Shea
Cc: List Mail User; [EMAIL PROTECTED]; users@spamassassin.apache.org
Subject: Re: URI Tests and Japanese Chars (solved)

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Daryl C. W. O'Shea writes:
> List Mail User wrote:
> > Jeff,
> >
> > RFC 1630 makes pretty clear that an email address in either a "mailto:"
> > or "cid:" clause *is* a URI. It does not address whether a bare
> > email address would count (it seems that it doesn't fit the RFC
> > definition, but does fit some others I found via Google).
> >
> > I could be convinced either way on a bare address (as it stands
> > now; maybe someone else has something to add). But a "mailto:", "mail:" or "cid:"
> > clause should (in my opinion) be looked up by the URI rules - they
> > are URI, not URL rules (though URLs are clearly the most common form of URIs).
> >
> > I was surprised to see that, from the RFC, even "Msg-Id:" clauses
> > are URIs.
> >
> > Paul Shupak
> > [EMAIL PROTECTED]
>
> I'd agree with Paul, what's the difference between doing the lookup of
> the domain listed in a mailto: link and an http: link -- both of which
> are often found in someone's signature?
> Eliminating the mailto: domain lookup could lead to spam such as
> "email us at [EMAIL PROTECTED] for all the junk you don't really want".

However, it's an impedance mismatch between what's going into the backends (the SBL and SURBL uribls) and what we're matching on the other end.

At least for SBL, it's definitely problematic, since an SBL escalation (of mail relays) will blocklist mail that *mentions* that domain!

--j.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.5 (GNU/Linux)
Comment: Exmh CVS

iD8DBQFCOgPeMJF5cimLx9ARAsyZAJ9ZiuOa2Lo6iK8Xflh6G+FdddUUcACeIbrA
YxiICu7MFD6uG8eKB9YK5tw=
=BHlZ
-----END PGP SIGNATURE-----
RE: URI Tests and Japanese Chars (solved)
Correct. I had done this not fully understanding the rule, as I associated it with SURBL, which is stated to have lower false positives. After finding out through this issue that URIBL_SBL is not SURBL, I changed the score back to the default.

-----Original Message-----
From: Alan Premselaar [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 17, 2005 8:30 PM
To: List Mail User
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]; users@spamassassin.apache.org
Subject: Re: URI Tests and Japanese Chars (solved)

List Mail User wrote:
> Justin,
>
>> [RFC 1630 discussion of mailto:/cid: clauses and URI vs. URL rules,
>> quoted in full earlier in this thread; snipped]
>> However, it's an impedance mismatch between what's going into the
>> backends (the SBL and SURBL uribls) and what we're matching on the other end.
>>
>> At least for SBL, it's definitely problematic, since an SBL escalation
>> (of mail relays) will blocklist mail that *mentions* that domain!
>
> That's not true in general. Since the SBL is an IP-based list, a mail
> server escalation would have no effect on any other domain, only on
> messages relayed through those servers.
>
> The more common case where an SBL escalation will affect other domains
> (the typical kind I've noticed) is when they list all corporate
> servers and some otherwise innocent domains use name servers within
> that space (this was the Russian government/Rostelecom earlier this week).
>
> Still, you are correct, there is a big difference between the SURBL
> policy of zero FPs and the SBL policy, which I can best state as "kill
> the spammers". SURBLs rarely have `collateral' damage and their
> default scores reflect that. URIBL_SBL is only assigned scores of
> "0 0.629 0 0.996" in 3.0.2 -- only URIBL_AB_SURBL with set 3 and
> URIBL_WS_SURBL with set 1 are ever assigned lower scores than
> URIBL_SBL. All the other SURBLs have significantly higher scores;
> URIBL_SC_SURBL is many times what URIBL_SBL is.
>
> (You may not know, but I even proposed adding back the SPEWS lists,
> though with low scores, and I do use all the rfci lists with
> relatively low scores except for bogusmx, which may be the best single
> indicator I have ever found, and I still assign it fewer points than
> URIBL_SC_SURBL.)
>
>> --j.
>> [snipped PGP SIGNATURE]
>
> Paul Shupak
> [EMAIL PROTECTED]
>
> P.S. I understand the political problems with the particular FPs that
> SPEWS generates, but I do hope the rfci lists make it into the URIBL rulesets.

Since you mentioned the scores, please note that Bobby Rose, the original poster of this issue, had modified the score for URIBL_SBL from its default to 10 ...
I had suggested that he reduce the score (possibly setting it back to the default). While it doesn't negate the issues surrounding the way the URI lookups work (or should possibly work) ... it's obvious that there is enough FP potential to warrant not scoring it so high.

alan
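As a concrete example of putting the score back, a line like the following in local.cf would pin URIBL_SBL to the stock 3.0.2 values quoted in the thread (a sketch; check the defaults shipped with your own SpamAssassin version):

```
# local.cf -- restore URIBL_SBL to its stock 3.0.2 scores.
# The four numbers are the scores for SpamAssassin's four scoring
# sets (network tests and Bayes on/off combinations).
score URIBL_SBL 0 0.629 0 0.996
```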
RE: DCC License Change
But doesn't the licensing change have more to do with people setting up their own private databases of hashes, and not so much with querying the public databases, which is what most SA people are doing?

-----Original Message-----
From: Greg Allen [mailto:[EMAIL PROTECTED]
Sent: Saturday, March 19, 2005 2:43 PM
To: users@spamassassin.apache.org
Subject: RE: DCC License Change

I read through some of these postings at rhyolite.com. It sounds to me like DCC should be off in SA by default going forward, or possibly completely removed from future SA versions, so users don't accidentally get into a license/legal dispute without their knowledge.

For instance, jump two years into the future and you receive a registered letter: "Dear Mr. SA user, our logs show that you used DCC for 12 months two years ago, so you now owe Rhyolite (or whoever) $10,000. Have a nice day."

Also, if someone were to create a 'Commtouch-owned IP address RBL', I would surely install it on my gateway today. :-)

-----Original Message-----
From: Bob Proulx [mailto:[EMAIL PROTECTED]
Sent: Saturday, March 19, 2005 1:36 PM
To: users@spamassassin.apache.org
Subject: Re: DCC License Change

Matt Kettler wrote:
> Justin Mason wrote:
> > Well, I guess this gives us a good reason to finally get around to
> > writing our own hashing subsystem...
>
> Unfortunately that might not be a workable option, Justin. The reason
> DCC is changing license is that it's infringing on a broad patent on
> using hashes to automatically detect spam based on the volume of duplicates.
> It's not that the author really wants to change the license; it's
> ultimately because he HAS to change it.
>
> http://www.rhyolite.com/pipermail/dcc/2004/002468.html
>
> The license change is part of an agreement with the patent owner, so
> any similar system implemented by SA would end up going down the same
> path as DCC.
>
> You might be able to do a razor-ish system of listing based on
> reports, but you might find this patent still applies, or some other
> patent applies.
> Run it through ASF legal, and proceed accordingly.

I am reading the archive and I can't agree completely with that statement, although I agree the patent is involved.

http://www.rhyolite.com/pipermail/dcc/2005/002570.html

I see several important points there.

1. "I have some other ideas, but they depend on things that cost money like a feed of the (formerly free) SBL from Spamhaus."

2. "The new ideas can't be free because they are likely to cost money in fees to third parties."

3. "The agreement includes a promise to me to not sue or try to collect royalties Patent 6,330,590 from organizations covered by the new, restricted license."

I agree that he sounds like he does not want the license change and feels forced into it. It could not have been an easy decision for him. But previously he stated that it was not infringing:

http://www.rhyolite.com/pipermail/dcc/2004/002465.html

There he states that he does not believe DCC infringes the patent. Note, however, that the date of that message is well before the license change, so it is possible he was convinced otherwise. It is also possible that his plans included code that would in the future need a license. I am just speculating.

In any case it is a shame to see things take this turn. DCC will be missed.

Bob
RE: ZDNET redirecting to spammer websites?
Wouldn't this just be something that SURBL should take care of? If this URL is the source of spam, then it should be in SURBL regardless of whether it's in the zdnet.com domain. Right!?

-----Original Message-----
From: Rosenbaum, Larry M. [mailto:[EMAIL PROTECTED]
Sent: Monday, March 21, 2005 10:35 AM
To: users@spamassassin.apache.org
Subject: ZDNET redirecting to spammer websites?

We received a drug spam containing the following URL:

http://chkpt.zdnet.com/chkpt/supposedtoallow/fdl%2ev%69%61%67%73.co%6d/p/b/kmioa

This URL will actually take you to fdl.viags.com (which then goes to www.simply-rx.net). As far as I know, the SA SURBL check will check zdnet.com, not the spammer domain viags.com. What is going on here, and what should we do about it?

Larry
RE: ZDNET redirecting to spammer websites?
Even though zdnet.com shouldn't be in SURBL, wouldn't it help to have chkpt.zdnet.com (the actual site doing the redirect) in SURBL?

-----Original Message-----
From: Jeff Chan [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 22, 2005 12:38 AM
To: users@spamassassin.apache.org
Cc: SURBL Discuss
Subject: Re: ZDNET redirecting to spammer websites?

On Monday, March 21, 2005, 11:32:45 AM, Bobby Rose wrote:
> Wouldn't this just be something that SURBL should take care of? If
> this URL is the source of spam then it should be in SURBL regardless
> of whether it's in the zdnet.com domain. Right!?

Which domain are you referring to? zdnet.com should not be in SURBLs because it has too many legitimate uses; if we listed zdnet.com, that would surely result in false positives. On the other hand, viags.com and simply-rx.net should be listed in SURBLs, *and they are*.

What's needed is for applications like SpamAssassin to parse the redirection correctly and check both zdnet.com and viags.com. zdnet.com should not match SURBLs, but viags.com should. QED.

Jeff C.

> [Larry's original message, quoted earlier in this thread; snipped]

Jeff C.
--
Jeff Chan
mailto:[EMAIL PROTECTED]
http://www.surbl.org/
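Parsing the redirection the way Jeff suggests can be sketched in a few lines (a hypothetical helper, not SpamAssassin's implementation): percent-decode the redirector's path and pull out any hostname hidden inside it, so that domain can be checked against SURBL in addition to the redirector's own domain.

```python
from urllib.parse import unquote, urlparse

# Hypothetical sketch: given a redirector URL like the zdnet.com one
# in this thread, decode the path ("%2e" -> ".", "%69" -> "i", ...)
# and return the first path segment that looks like a hostname.
def hidden_host(url):
    path = unquote(urlparse(url).path)
    for part in path.split('/'):
        if '.' in part and ' ' not in part:  # crude hostname heuristic
            return part.lower()
    return None

url = ("http://chkpt.zdnet.com/chkpt/supposedtoallow/"
       "fdl%2ev%69%61%67%73.co%6d/p/b/kmioa")
# hidden_host(url) yields "fdl.viags.com", the domain that actually
# needs the SURBL lookup.
```

A real implementation would also need to handle hostnames in query strings and nested redirects, but the point stands: both chkpt.zdnet.com and the decoded target should be queried.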