[sniffer] Re: What is your oldest production CPU?

2013-12-27 Thread Matt
On a VMware ESXi 5.x box with virtual machine version 8 and physical 
E5-2689 CPUs, I see the following:


On a Windows 2003 32-bit host, Device Manager shows that it is x86 
family 6 model 45.
On a Windows 2008 R2 64-bit host, Device Manager shows that it is 
Intel64 family 6 model 45.


Windows does report the processor version as well, though I believe 
that's just meta information and may not be reliable.  There are of course 
several other popular flavors of virtualization that I am not familiar with.
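
For reference, here is a minimal sketch (not from the thread) of reading the family/model a guest OS sees, using the MSVC __cpuid intrinsic; on the E5-2689 host above it should report family 6, model 45, matching Device Manager:

    // Minimal sketch: read the displayed CPU family/model via CPUID leaf 1.
    #include <intrin.h>
    #include <cstdio>

    int main() {
        int regs[4] = {0};
        __cpuid(regs, 1);                             // leaf 1: version info in EAX
        unsigned eax      = static_cast<unsigned>(regs[0]);
        unsigned family   = (eax >> 8) & 0xF;
        unsigned model    = (eax >> 4) & 0xF;
        unsigned extModel = (eax >> 16) & 0xF;
        unsigned extFam   = (eax >> 20) & 0xFF;
        if (family == 0xF) family += extFam;          // displayed family
        if (family == 0x6 || family == 0xF)
            model += extModel << 4;                   // displayed model
        printf("family %u model %u\n", family, model);
        return 0;
    }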


Matt





On 12/27/2013 3:58 PM, Pete McNeil wrote:

On 2013-12-27 15:45, Matt wrote:
Intel 5400 series Xeon here.  But don't forget virtualization.  I'm 
not sure what CPU virtualization does to targeting your code.


That's a good point. The processor should be specified in the VM 
profile, and if I recall correctly it typically defaults to the 
processor of the VM host. I should look closer at this -- but would 
like some feedback.


Thanks,

_M








[sniffer] Re: What is your oldest production CPU?

2013-12-27 Thread Matt
Intel 5400 series Xeon here.  But don't forget virtualization.  I'm not 
sure what CPU virtualization does to targeting your code.


Matt




On 12/27/2013 9:43 AM, Pete McNeil wrote:

Hello Sniffer Folks,

We would like to know what your oldest production CPU is.

When building new binaries of SNF or its utilities, we would like to 
select the newest CPU we can without leaving anybody behind.


We're also evaluating whether we should split binaries into a 
"compatible" version base on Intel i686 (or equivalent AMD), and a 
"current" version based on Intel Core2 (or equivalent AMD).


Please respond here.

Thanks for your time!!

_M








[sniffer] Re: Slow processing times, errors

2013-06-28 Thread Matt
I just looked through my history of the SNFclient.exe.err log and I was 
a bit off.  There were two small occurrences before 5/22, and both were 
so small they were without a doubt unnoticeable.  Here's my history 
since installation on 4/10/2012 with the number of occurrences of "Could 
Not Connect!" noted:


   11/19/2012 (949 occurrences) - * No backup in excess of 2 minutes
   occurred
   4/26/2012 (181 occurrences) - * No backup in excess of 2 minutes
   occurred
   5/22 (3,568 occurrences) - * No backup in excess of 2 minutes occurred
   5/31 (13,745 occurrences)
   6/3 (2,630 occurrences) - * No backup in excess of 2 minutes occurred
   6/11 (26,194 occurrences)
   6/13 (28,633 occurrences)
   6/14 (83,342 occurrences)
   6/17 (5,952 occurrences) - * No backup in excess of 2 minutes occurred
   6/21 (10,894 occurrences)
   6/27 (34,959 occurrences)
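
For reference, the per-day counts above can be tallied with something like the following minimal sketch, assuming the SNFclient.exe.err format shown later in this thread (a 14-digit timestamp such as 20130627183807, then the message text):

    // Minimal sketch: count "Could Not Connect!" lines per day in SNFclient.exe.err.
    #include <fstream>
    #include <iostream>
    #include <map>
    #include <string>

    int main(int argc, char *argv[]) {
        const char *path = (argc > 1) ? argv[1] : "SNFclient.exe.err";
        std::ifstream in(path);
        std::map<std::string, long> perDay;              // "YYYYMMDD" -> count
        std::string line;
        while (std::getline(in, line)) {
            if (line.size() < 8) continue;
            if (line.find("Could Not Connect!") == std::string::npos) continue;
            perDay[line.substr(0, 8)]++;                 // date prefix of the timestamp
        }
        for (const auto &d : perDay)
            std::cout << d.first << "  " << d.second << " occurrences\n";
        return 0;
    }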

At peak times, we are probably doing between 30,000 and 40,000 messages 
per hour.  When this hit at peak hour yesterday, the server was backed 
up 2 minutes within 10 minutes of this starting.  If it happens outside 
of business hours, it's likely not seen at all.  The backup was resolved 
within 1 hour using different Declude settings, but Sniffer wasn't 
restarted for 2 hours and 45 minutes after the problem started.  So 
those ~35,000 occurrences were from 165 minutes of email.


After looking through my logs, it seems that I hardly rebooted at any 
of these times.  I tend not to want to take the server down outside 
of rush hour.  I do tweak Declude for rush processing when backed up, so 
Declude is restarted, which stops sending files to Sniffer for a 
period of time.  It seems that Sniffer is mostly healing itself without 
restarting its service, though maybe the updates will cause a service 
restart/reset?


I also checked and found that we added that larger client on 6/1, and 
outside of that, customer counts have been fairly consistent.


Matt


On 6/28/2013 2:56 PM, e...@protologic.com wrote:

Matt:
Coincidentally (I hope) this happened to us on the 22nd also. It did 
not stop working completely although we didn't get the throughput you 
did. We also saw the messages indicating it was not able to open the 
file. Pretty much the same message as in your first post and not one 
I've seen before.

Eric




Sent using SmarterSync Over-The-Air sync for iPad, iPhone, BlackBerry 
and other SmartPhones. May use speech to text. If something seems odd 
please don't hesitate to ask for clarification. E.&O.E.


On 2013-06-28, at 11:39 AM, Matt wrote:

> Eric,
>
> I'm guessing based on what you were seeing, that it was unrelated to 
what I was seeing. Sniffer never actually died, it just got over 100 
times slower, and 1/8th of the time it timed out. This never happened 
before 5/22, and this same server has been there for years, and the 
same installation of Sniffer for 2 years or so. I would think that if 
the issue was I/O (under normal conditions), it would have happened 
before 5/22 as there were clearly bursty periods often enough that my 
own traffic didn't change dramatically enough so that it happened 4 to 
5 times in one month.

>
> The server itself could have some issues that could be causing this. 
Maybe the file system is screwy, or Windows itself, or memory errors, 
or whatever.

>
> Matt
>
>
> On 6/28/2013 2:12 PM, E. H. (Eric) Fletcher wrote:
>> Matt:
>>
>> I mentioned in a previous post that we had experienced something 
similar at
>> about that time and resolved it a day or so later by re-installing 
sniffer
>> when service restarts, reboots and some basic troubleshooting did 
not give

>> us the results we needed. At this point that still seems to have been
>> effective (about 5 days now).
>>
>> At the time, we did move things around to see whether it was 
related to the
>> number of items in the queue or anywhere else within the structure 
of the
>> mail system and found it made no difference. A single item arriving 
in an

>> empty Queue was still not processed. CPU utilization was modest (single
>> digit across 4 cores) and disk I/O was lighter than usual as it 
took place

>> over a weekend. Memory utilization was a little higher than I'd like to
>> see, we are addressing that now.
>>
>> Following a suggestion from another ISP, we moved the spool folders 
onto a
>> RAM drive a couple of months ago. That has worked well for us, we 
did rule
>> it out as the source of the problem by moving back onto the 
conventional
>> hard disk during the last part of the troubleshooting and for the 
first hour
>> or two following the reload. We are processing on the Ramdisk now 
and have

>> been for over 4 days again.
>>
>> For what it's worth . . .
>>
>> Eric
>>
>>
>&g

[sniffer] Re: Slow processing times, errors

2013-06-28 Thread Matt

Eric,

I'm guessing based on what you were seeing, that it was unrelated to 
what I was seeing.  Sniffer never actually died, it just got over 100 
times slower, and 1/8th of the time it timed out.  This never happened 
before 5/22, and this same server has been there for years, and the same 
installation of Sniffer for 2 years or so.  I would think that if the 
issue was I/O (under normal conditions), it would have happened before 
5/22 as there were clearly bursty periods often enough that my own 
traffic didn't change dramatically enough so that it happened 4 to 5 
times in one month.


The server itself could have some issues that could be causing this.  
Maybe the file system is screwy, or Windows itself, or memory errors, or 
whatever.


Matt


On 6/28/2013 2:12 PM, E. H. (Eric) Fletcher wrote:

Matt:

I mentioned in a previous post that we had experienced something similar at
about that time and resolved it a day or so later by re-installing sniffer
when service restarts, reboots and some basic troubleshooting did not give
us the results we needed.  At this point that still seems to have been
effective (about 5 days now).

At the time, we did move things around to see whether it was related to the
number of items in the queue or anywhere else within the structure of the
mail system and found it made no difference. A single item arriving in an
empty Queue was still not processed.   CPU utilization was modest (single
digit across 4 cores) and disk I/O was lighter than usual as it took place
over a weekend.  Memory utilization was a little higher than I'd like to
see, we are addressing that now.

Following a suggestion from another ISP, we moved the spool folders onto a
RAM drive a couple of months ago.  That has worked well for us, we did rule
it out as the source of the problem by moving back onto the conventional
hard disk during the last part of the troubleshooting and for the first hour
or two following the reload.  We are processing on the Ramdisk now and have
been for over 4 days again.

For what it's worth . . .

Eric


-Original Message-
From: Message Sniffer Community [mailto:sniffer@sortmonster.com] On Behalf
Of Matt
Sent: Friday, June 28, 2013 10:32 AM
To: Message Sniffer Community
Subject: [sniffer] Re: Slow processing times, errors

Pete,

Just after the restart of the Sniffer service, times dropped back down into
the ms from 30+ seconds before, so what I am saying is that if I/O was the
issue, it was merely the trigger for something that put the service in a bad
state when it started.  I/O issues are not persistent, but could happen from
time to time I'm sure. Restarting Sniffer with a backlog of 2,500 messages
and normal peak traffic will not re-trigger the condition, and I press
Declude to run up to 300 messages at a time in situations like that, and the
CPUs are pegged until the backlog clears.  In the past, I restarted the
whole system, not knowing why it worked.  During normal peak times (without
bursts), Declude is processing about 125 messages at a time, which take
an average of 6 seconds to fully process, and therefore Sniffer is probably
handling only about 10 messages at a time (at peak).

Since 5/22 I have seen 4 or 5 different events like this, and I confirmed
that they are all present in the SNFclient.exe.err log.

Matt



On 6/28/2013 12:41 PM, Pete McNeil wrote:

On 2013-06-28 12:10, Matt wrote:

I am looking to retool presently just because it's time.  So if you
are convinced that this is due to low resources, don't concern
yourself with it.

Ok. It makes sense that the ~200 messages all at once could have
happened at the restart. SNFClient will keep trying for 30-90 seconds
before it gives up and spits out its error file. That's where your
delays are coming from. SNF itself was clocking only about 100-800ms
for all of the scans.

The error result you report is exactly the one sent by SNF -- that it
was unable to open the file.

I am very sure this is resource related -- your scans should not be
taking the amount of time they are and I suspect most of that time is
eaten up trying to get to the files. The occasional errors of the same
type are a good hint that IO is to blame.

The new spam that we've seen often includes large messages -- so
that's going to put a higher load on IO resources -- I'll bet that the
increased volume and large message sizes are pushing IO over the edge
or at least very close to it.

Best,

_M









[sniffer] Re: Slow processing times, errors

2013-06-28 Thread Matt
I'll certainly look more closely next time.  Hopefully I'll be migrated 
before this happens again :)


Matt


On 6/28/2013 1:44 PM, Darin Cox wrote:
How about running performance monitor to watch disk I/O, mem, cpu, 
page file, etc. over time in the hopes of catching one of the events?

Darin.

From: Matt <mailto:for...@mailpure.com>
Sent: Friday, June 28, 2013 12:10 PM
To: Message Sniffer Community <mailto:sniffer@sortmonster.com>
Subject: [sniffer] Re: Slow processing times, errors
Pete,

I'm near positive that it's not system resources that are causing 
Sniffer to not be able to _access the files_.  I believe these errors 
are a symptom and not the cause.


You have to keep in mind that even the messages that don't throw errors 
were taking 30-90 seconds to scan, but immediately after a restart it 
was under 1 second.  The system stayed the same; it was just the state 
of the service that was off in a bad way.


I did add a larger client about a month ago around the time that this 
started, which did inch up load by between 1% and 5% I figure, but I 
can't say for sure that the two things are connected.  I've seen much 
bigger changes, however, in spam volumes from single spammers.  I have 
looked at my SNFclient.exe.err log and found that the previous 
slowdowns were all represented in this file, and little else besides a 
smattering of other entries in 2012.  I believe that I/O, or general 
system load, could be the trigger, but the error state in the service 
that misses opening some files, and is otherwise 100 times slower than 
normal, persists after everything else is fine again.  I figure 
that this is all triggered by a short-term lack of resources or a 
killer message type of issue that does something like run away with 
memory.  Certainly there were no recent changes on the server prior to 
this starting to happen, including Sniffer itself which has been 
perfectly solid up until 5/22.


Regarding the ERROR_MSG_FILE batch that I sent you in that log, it did 
happen exactly when I restarted Sniffer, and in fact the 
SNFclient.exe.err log showed a different error while this was 
happening, and maybe this will point you to something else?  That log 
says "Could Not Connect!" when the regular Sniffer log shows 
"ERROR_MSG_FILE" about 1/8th of the time while in a bad state.  When I 
restarted the Sniffer service, the regular log showed a bunch of 
"ERROR_MSG_FILE" in a row, but the SNFclient.exe.err log below shows 
"XCI Error!: FileError snf_EngineHandler::scanMessageFile() 
Open/Seek".  You can match the message ID's with the other log that I 
provided. I believe that block of messages was already called to 
SNFclient.exe, but the Sniffer service hadn't yet responded, and so 
they were dumped as a batch into both logs during shut down of the 
service.


20130627183807, arg1=F:\\proc\work\D862600e64269.smd : Could
Not Connect!
20130627183808, arg1=F:\\proc\work\D86440177431f.smd : Could
Not Connect!
20130627183808, arg1=F:\\proc\work\D861200ce41ce.smd : Could
Not Connect!
20130627183809, arg1=F:\\proc\work\D864401734321.smd : Could
Not Connect!
20130627183809, arg1=F:\\proc\work\D861400da41e3.smd : Could
Not Connect!
20130627183810, arg1=F:\\proc\work\D862600d7425f.smd : Could
Not Connect!
20130627183811, arg1=F:\\proc\work\D864a00e94346.smd : Could
Not Connect!
20130627183811, arg1=F:\\proc\work\D8615019b41f4.smd : Could
Not Connect!
20130627183813, arg1=F:\\proc\work\D862900e94282.smd : Could
Not Connect!
20130627183815, arg1=F:\\proc\work\D863d01584306.smd : Could
Not Connect!
20130627183817, arg1=F:\\proc\work\D86030158416f.smd : Could
Not Connect!
20130627183818, arg1=F:\\proc\work\D862300e94255.smd : Could
Not Connect!
20130627183819, arg1=F:\\proc\work\D862900e64281.smd : Could
Not Connect!
20130627183819, arg1=F:\\proc\work\D864b00d74357.smd : XCI
Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
20130627183819, arg1=F:\\proc\work\D864800d7433c.smd : XCI
Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
20130627183819, arg1=F:\\proc\work\D861901734205.smd : XCI
Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
20130627183819, arg1=F:\\proc\work\D861d01774230.smd : XCI
Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
20130627183819, arg1=F:\\proc\work\D8641016d4310.smd : XCI
Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
20130627183819, arg1=F:\\proc\work\D865000e64363.smd : XCI
Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
20130627183819, arg1=F:\\proc\work\D865000e14361.smd : XCI
Error!: FileError snf_EngineHandler::scanMessageFile() O

[sniffer] Re: Slow processing times, errors

2013-06-28 Thread Matt

Pete,

Just after the restart of the Sniffer service, times dropped back down 
into the ms from 30+ seconds before, so what I am saying is that if I/O 
was the issue, it was merely the trigger for something that put the 
service in a bad state when it started.  I/O issues are not persistent, 
but could happen from time to time I'm sure. Restarting Sniffer with a 
backlog of 2,500 messages and normal peak traffic will not re-trigger 
the condition, and I press Declude to run up to 300 messages at a time 
in situations like that, and the CPUs are pegged until the backlog 
clears.  In the past, I restarted the whole system, not knowing why it 
worked.  During normal peak times (without bursts), Declude is 
processing about 125 messages at a time, which take an average of 6 
seconds to fully process, and therefore Sniffer is probably handling 
only about 10 messages at a time (at peak).


Since 5/22 I have seen 4 or 5 different events like this, and I 
confirmed that they are all present in the SNFclient.exe.err log.


Matt



On 6/28/2013 12:41 PM, Pete McNeil wrote:

On 2013-06-28 12:10, Matt wrote:
I am looking to retool presently just because it's time.  So if you 
are convinced that this is due to low resources, don't concern 
yourself with it.


Ok. It makes sense that the ~200 messages all at once could have 
happened at the restart. SNFClient will keep trying for 30-90 seconds 
before it gives up and spits out its error file. That's where your 
delays are coming from. SNF itself was clocking only about 100-800ms 
for all of the scans.


The error result you report is exactly the one sent by SNF -- that it 
was unable to open the file.


I am very sure this is resource related -- your scans should not be 
taking the amount of time they are and I suspect most of that time is 
eaten up trying to get to the files. The occasional errors of the same 
type are a good hint that IO is to blame.


The new spam that we've seen often includes large messages -- so 
that's going to put a higher load on IO resources -- I'll bet that the 
increased volume and large message sizes are pushing IO over the edge 
or at least very close to it.


Best,

_M








[sniffer] Re: Slow processing times, errors

2013-06-28 Thread Matt
d : XCI
   Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
   20130627183820, arg1=F:\\proc\work\D8638016d42db.smd : XCI
   Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
   20130627183820, arg1=F:\\proc\work\D862b00e64292.smd : XCI
   Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
   20130627183820, arg1=F:\\proc\work\D8638017342e0.smd : XCI
   Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
   20130627183820, arg1=F:\\proc\work\D865d01804393.smd : XCI
   Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
   20130627183820, arg1=F:\\proc\work\D8631017742b1.smd : XCI
   Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek
   20130627183820, arg1=F:\\proc\work\D85b100e33f7f.smd : XCI
   Error!: FileError snf_EngineHandler::scanMessageFile() Open/Seek


I am looking to retool presently just because it's time.  So if you are 
convinced that this is due to low resources, don't concern yourself with it.


Matt


On 6/28/2013 10:36 AM, Pete McNeil wrote:

On 2013-06-27 20:01, Matt wrote:
I'm attaching a snippet of my log.  About 100 lines past the start, 
where you see a smattering of error messages, you then see a large 
block of them while the Sniffer service is restarting, and then after 
that no errors at all.  There have in fact been no errors at all in 
several hours since this restart of Sniffer. 


I can promise you that the error you're reporting comes directly from 
a problem with the file system. Here is the code where that error is 
generated. Put simply, the code tries to open the file and determine 
its size. If that doesn't work, it throws the ERROR_MSG_FILE exception 
in one of two forms -- that's what ends up in the log.


    try {                                                      // Try opening the message file.
        MessageFile.open(MessageFilePath.c_str(), ios::in | ios::binary);  // Open the file, binary mode.
        MessageFile.seekg(0, ios::end);                        // Find the end of the file,
        MessageFileSize = MessageFile.tellg();                 // read that position as the size,
        MessageFile.seekg(0, ios::beg);                        // then go back to the beginning.
        MyScanData.ScanSize = MessageFileSize;                 // Capture the message file size.
    }
    catch(...) {                                               // Trouble? Throw FileError.
        MyRulebase->MyLOGmgr.logThisError(                     // Log the error.
            MyScanData, "scanMessageFile().open",
            snf_ERROR_MSG_FILE, "ERROR_MSG_FILE"
        );
        throw FileError("snf_EngineHandler::scanMessageFile() Open/Seek");
    }

    if(0 >= MessageFileSize) {                                 // Handle zero length files.
        MessageFile.close();                                   // No need to keep this open.
        MyRulebase->MyLOGmgr.logThisError(                     // Log the error.
            MyScanData, "scanMessageFile().isFileEmpty?",
            snf_ERROR_MSG_FILE, "ERROR_MSG_FILE"
        );
        throw FileError("snf_EngineHandler::scanMessageFile() FileEmpty!");
    }

Another clue is that in the log snippet you provide, there are hints 
of a problem brewing when there are sporadic instances of this error. 
Then, when there is a large block -- virtually all requests to open 
the files for scan are rejected by the OS. Either something made those 
files unavailable, or the OS was unable to handle the request. I find 
it interesting also that the time required to report the error started 
at about 172 milliseconds and continued to climb to 406, 578, and then 
656 before the restart.


SNF does not make log entries in the classic log during a restart, by 
the way.


Note also the timestamps associated with these events and you can see 
that the event was precipitated by a dramatic rise in message rates. 
The first part of your log seems to indicate about 7-10 messages per 
second. During the large block of errors, the message rate appears to 
have been in excess of 120 (I counted approximately 126 at timestamp 
20130627183819). That's an increase of at least an order of magnitude 
over the rate that was causing sporadic errors.


I suspect based on the data you have provided that something on your 
system generated a very large spike of activity that your IO subsystem 
was unable to manage and this caused snf scans to fail because snf was 
unable to open the files it was asked to scan.


Your restart of SNF apparently coincided with the event, but since all 
of the SMD file names are unique during the event, and since SNF has 
no way to generate scan requests on its own, SNF does not appear to 
have been the cause of the event in any way. It was able to record the 
event, none the less.


So the question in my mind now is:

* Is there a way to improve your IO subsystem so that it can gain some 
headroom above 10 msg/sec?

* What caused the sudden dramatic spike th

[sniffer] Re: Slow processing times, errors

2013-06-27 Thread Matt
This is an automated message from the mailing list software.  In order 
to unsubscribe you must issue the command "please unsubscribe".






[sniffer] Re: Slow processing times, errors

2013-06-27 Thread Matt

Darin,

I'm not seeing that sort of thing.  With 3.x, there doesn't appear to be 
any extraneous file creation in the Sniffer program directory, and never 
any TMP files in my spool.  I do not have Sniffer modifying headers, so 
that may be different on our systems.


Matt


On 6/27/2013 5:25 PM, Darin Cox wrote:
When we had sluggish performance similar to yours, resulting in 
numerous sniffer .tmp files in the spool, the cause was eventually 
traced to a proliferation of files in the sniffer directory.  Clearing 
them out brought performance back up to normal.

Darin.

From: e...@protologic.com <mailto:e...@protologic.com>
Sent: Thursday, June 27, 2013 5:17 PM
To: Message Sniffer Community <mailto:sniffer@sortmonster.com>
Subject: [sniffer] Re: Slow processing times, errors
We were experiencing this several days ago and couldn't find a fix 
that worked or worked for long. We uninstalled SNF and reinstalled and 
have not detected a problem since. I will check the logs and report 
back if I see anything intermittent.





Sent using SmarterSync Over-The-Air sync for iPad, iPhone, BlackBerry 
and other SmartPhones. May use speech to text. If something seems odd 
please don't hesitate to ask for clarification. E.&O.E.


On 2013-06-27, at 2:06 PM, Matt wrote:

> Pete,
>
> I've had many recent incidences where, as it turns out, 
SNFclient.exe takes 30 to 90 seconds to respond to every message with 
a result code (normally less than a second), and as a result backs up 
processing. Restarting the Sniffer service seems to do the trick, but 
I only tested that for the first time today after figuring this out.

>
> I believe the events are triggered by updates, but I'm not sure as 
of yet. Updates subsequent to the slow down do not appear to fix the 
situation, so it seems to be resident in the service. When this 
happens, my SNFclient.exe.err log fill up with lines like this:

>
> 20130627155608, arg1=F:\\proc\work\D6063018a2550.smd : Could Not 
Connect!

>
> At the same time, my Sniffer logs start showing frequent 
"ERROR_MSG_FILE" results on about 1/8th of the messages.

>
> I'm currently using the service version 3.0.2-E3.0.17. It's not 
entirely clear to me what the most current one is.

>
> Any suggestions as to the cause or solution?
>
> Thanks,
>
> Matt
>
>
>





[sniffer] Slow processing times, errors

2013-06-27 Thread Matt

Pete,

I've had many recent incidents where, as it turns out, SNFclient.exe 
takes 30 to 90 seconds to respond to every message with a result code 
(normally less than a second), and as a result backs up processing.  
Restarting the Sniffer service seems to do the trick, but I only tested 
that for the first time today after figuring this out.


I believe the events are triggered by updates, but I'm not sure as of 
yet.  Updates subsequent to the slow down do not appear to fix the 
situation, so it seems to be resident in the service.  When this 
happens, my SNFclient.exe.err log fills up with lines like this:


20130627155608, arg1=F:\\proc\work\D6063018a2550.smd : Could 
Not Connect!


At the same time, my Sniffer logs start showing frequent 
"ERROR_MSG_FILE" results on about 1/8th of the messages.


I'm currently using the service version 3.0.2-E3.0.17.  It's not 
entirely clear to me what the most current one is.


Any suggestions as to the cause or solution?

Thanks,

Matt





[sniffer] Re: Sniffer Helper App?

2008-07-01 Thread Matt

Steve,

Since this hasn't yet been mentioned, try Alligate (www.alligate.com).  
It does selective greylisting (only greylists things that look spammy), 
and also will validate your users' addresses and do things like country 
blocking/tarpitting/greylisting.  Only one zombie spammer survives 
greylisting, and after you dump all of that plus validate addresses, you 
will reduce your traffic down to a point where it is only 1/3 spam.  If 
you only reject bad addresses and clear abuse (many bad addresses in one 
connection for instance), you can do this with 99.% accuracy.  I'm 
not lying about that either.  The only things that fail selective 
greylisting will be black boxes that don't spool E-mail, and if you give 
a wide retry time, you will likely allow future attempts from a black 
box that happens to get greylisted.


Selective greylisting is far superior to regular greylisting since it is 
rarely triggered against legitimate E-mail.  I dump around 93% of all 
connections to my servers and I don't need to falsely trust a single 
source of data such as SpamCop to achieve those results.  I then leave 
the heavy lifting to a secondary filtering system.  Alligate requires 
almost no resources, though you should dedicate a box to it so that 
other things don't step on its feet.


Matt



Steve Guluk wrote:


Hello, 

I run iMail 9.0 and would like a program that can do GeoIP to 
screen foreign countries before they even get to iMail. I used to use 
MXGuard (still have an active license) but my server could not handle 
the CPU draw. I moved to eWall which really has some great potential 
as it is a nice light gateway client that works with Sniffer but it 
also crashes and has a few other problems (this program also 
introduced me to GeoIP).



Any other suggestions? I am beat after trying to get some decent 
spam relief as well as relief from an aging server. My server is an 
AMD 2.0 with Raid  and 2 gigs of Ram.  It's fared well over the 
last couple of years, but the spam levels ramping up are starting to take 
their toll and I don't want to move to a new server just yet.



eWall got me spoiled on the GeoIP feature, where it polls a DB for 
country info based on the incoming IP and can delete emails before 
they reach iMail.  



Any suggestions on what I should consider to help with spam and also 
use Sniffer? Is Declude worthwhile? Some other light gateway like eWall?



Thanks in advance for any suggestions, 




*Steve Guluk*

SGDesign

(949) 661-9333

ICQ: 7230769












[sniffer] Re: It's official. SNF Version 3.0 is Ready!

2008-06-26 Thread Matt

Pete,

Glad you got the joke.  I'll allow you a little time to take your mind 
off of the future :)


Thanks,

Matt



Pete McNeil wrote:

Hello Matt,

Thursday, June 26, 2008, 4:21:42 PM, you wrote:

  

Pete,



  
Now that you got that taken care of, can you give us an idea when you 
expect 4.0 to be released?



Hehehe.

We're not close enough to that to be remotely accurate, but I will
tell you that I'd like it to be within something approaching a year.

There are a lot of continuous upgrades to do on the back-end that will
unleash and enhance V3's power and provide additional tools and
products along the way -- plus plenty of work helping new third party
products get off the ground w/ SNF inside... So we'll be plenty busy
and we'll keep you posted.

_M


  


[sniffer] Re: It's official. SNF Version 3.0 is Ready!

2008-06-26 Thread Matt

Pete,

Now that you got that taken care of, can you give us an idea when you 
expect 4.0 to be released?


Matt



Pete McNeil wrote:

Hello Sniffer folks,

Back in Q1 we were sure we'd be ready with the new SNF after nearly a
year of testing on both large and small systems. What a surprise!

After publishing the first release candidate we went from version 1-5
to version 2-27 at a breathtaking pace!

Thank you to everyone who has tested, poked, prodded, and twisted the
new SNF -- not to mention keeping up with all of those updates during
the final phase of testing. I can't imagine getting to this point
without your patience, trust, attention to detail, and persistence!
Bravo!



Without further fanfare: Today the latest release candidate becomes
the official production release of Message Sniffer (SNF) Version 3.0.

The changes:

-- Minor updates to readme files.

-- Changed the build / version information and recompiled.

-- Removed redundant comments from the configuration file.

We have been bug free for more than 2 months with several hundred
systems using the new engine.

You can download the latest distributions from this page:

http://www.armresearch.com/products/index.jsp

You may also notice that we've published our new web site! There are a
few bits of documentation still under construction here and there, but
we're well on our way to filling those in along with a stream of
continuous improvements and additions based on our work with you!

Once again, Thanks to everyone for a fantastic job!

Thanks for all of your support, comments, and efforts!

As always, we're here to help. Now, onward to the next upgrade...
always work to do ;-)

Cheers!

_M

  







[sniffer] Re: XYNTService -- Any Problems?

2008-05-09 Thread Matt
I'm sure that I don't speak for everyone, but I would tend to avoid 
third-party service systems, and this would also expose Sniffer to the 
potential pitfalls of that software.


You could provide directions on how to install SRVANY, and then have a 
script that completes the process once the executables are on the 
system.  That would be my short-term recommendation.  In the long-term, 
I would do your own service as opposed to use someone else's container.


I would not recommend distributing XYNTService until you have trialled 
that for several months with a range of systems.  The work of properly 
testing this is possibly more work than creating your own service.


All IMO of course.

Matt



Pete McNeil wrote:

Hello Matt,

Friday, May 9, 2008, 3:57:42 PM, you wrote:

  

SRVANY works perfectly and is free with Windows.  Why not use that?



We can't redistribute SRVANY with our installer and we can't be sure
it will be present on all systems. I've been using SRVANY every time I
do an install for somebody... but we want something that we can
deliver with the installer so it can be a (more or less) one click
process.

_M

  


[sniffer] Re: XYNTService -- Any Problems?

2008-05-09 Thread Matt

SRVANY works perfectly and is free with Windows.  Why not use that?

Matt



Pete McNeil wrote:

Hello Sniffer Folks,

We are working on an installer for the command-line version of SNF
V3.0.

We are considering re-distributing XYNTService to setup the
SNFServer.exe as part of the installation. There are some rumblings
here and there about XYNTService not working the same way on some
versions of Windows.

If any of you have experience with XYNTService then we would sure like
to hear about it.

If anyone knows of an alternative that we can bundle with our
installer then please let us know that too...

(Why don't you just write something?? -- ) We'd prefer not to reinvent
that particular wheel right now -- not that it's hard, just that it's
not necessary and we'd rather do other important stuff.

Thanks!

_M

  







[sniffer] Re: Excessive amounts of Foreign language spam

2007-12-26 Thread Matt
I recommend not doing this.  There is enough legitimate discussion and 
data transmitted by E-mail that could contain this string that you would 
be better off being more specific about the encoding of the actual 
message.  Target the header charset tag and you should be good to go if 
you want to ban most of it, but keep in mind that legitimate E-mails 
from Cyrillic-based systems can still send messages written in English 
using this and other character sets, so make sure there is nothing 
legitimate at all coming from countries that would use this before 
using it.
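
To illustrate being specific to the header rather than matching anywhere, here is a minimal sketch (a hypothetical stand-alone check, not a Declude filter) that flags a message only when a Content-Type header line declares a Cyrillic charset:

    // Minimal sketch: look for a Cyrillic charset in the Content-Type header only.
    // Folded (continuation) header lines are not handled in this sketch.
    #include <algorithm>
    #include <cctype>
    #include <fstream>
    #include <iostream>
    #include <string>

    static std::string lower(std::string s) {
        std::transform(s.begin(), s.end(), s.begin(),
                       [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
        return s;
    }

    int main(int argc, char *argv[]) {
        if (argc < 2) { std::cerr << "usage: charsetcheck <message file>\n"; return 2; }
        std::ifstream in(argv[1]);
        std::string line;
        while (std::getline(in, line) && !line.empty() && line != "\r") {  // headers end at blank line
            std::string l = lower(line);
            if (l.find("content-type:") == 0 &&
                (l.find("koi8-r") != std::string::npos ||
                 l.find("windows-1251") != std::string::npos)) {
                std::cout << "CYRILLIC-CHARSET\n";       // a caller could add weight here
                return 1;
            }
        }
        return 0;
    }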


Matt



Kim W. Premuda wrote:

We use this in Declude to filter messages containing Cyrillic (Russian)
characters...

 ANYWHERE   10  CONTAINS  koi8-r

Kim W. Premuda
FastWave Internet Services, Inc.
858-487-1414
[EMAIL PROTECTED]



-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED] On Behalf
Of Rick Hogue
Sent: Wednesday, December 26, 2007 7:20 AM
To: Message Sniffer Community
Subject: [sniffer] Excessive amounts of Foreign language spam

 
I have been getting a large amount of foreign-language email that appears to be
Russian, but I'm not sure as I do not speak it. What is being done about this
kind of spam?
I am on Imail 8.21 and I am not using the latest beta of Sniffer. I do not
use methods other than a couple from my Declude engine, including my
own blacklist, but there are so many different IP addresses on this that it is
very hard to keep up with. Help!

 


Simplified Association Management Systems
Rick Hogue
P.O. Box 18304
Louisville, KY 40261
1-502-459-3100 office
http://www.samprogram.com





---
[This E-mail scanned for viruses by Declude Virus]


---
[This E-mail scanned for viruses by Declude Virus]





  







[sniffer] Re: re subscriptions to list

2007-11-29 Thread Matt

All auto-responders should be burnt in hell

Have a nice day :)

Matt



Pete McNeil wrote:

Regarding this thread and to nobody in particular:

I would like to say a word or two before this gets out of hand.

Our policy on this list is to provide the answers needed no matter how
obvious or well posted those answers may be.

Emotionally negative responses are discouraged and generally not
useful.

RTFM type answers should be emotionally neutral, should summarize a
quick answer, and should provide a link to TFM.

For whatever reason, these kinds of requests are made and these kinds
of questions are asked. The folks who make these requests or ask these
questions - no matter how obvious - need help. The best thing we can
do is provide that help.

Keep in mind also that these messages are archived so that they remain
searchable on the 'web. This means that any solutions we post here,
including references to obvious or well posted answers, serve to make
those answers easier to find.

Please: Be kind and helpful, or stay away from the send button. I
can't remember the number of times something simple and obvious
baffled me when I needed it least -- and I'm sure many of us have had
similar moments.

A simple answer to an obvious question can go a long way in a positive
direction.

Please help us keep this forum active, positive, and informative.

Thanks,

_M

  







[sniffer] Re: Greeting Malware Spike Graph

2007-06-29 Thread Matt




Pete,

The first one of these hit our system at 8:45 a.m. ET yesterday (linked
Malware).

FYI, this is the same guy that is sending the PDF stock spam, and was
responsible for the 3 other large virus seedings in the last 6 months.

Matt



Pete McNeil wrote:

  Hello Sniffer Folks,

Vertical Wall Of Spam


  





[sniffer] Re: Spammers turning to PDF attachments?

2007-06-21 Thread Matt
I saw some AFF (419/Nigerian) stuff that came in PDF format.  That one 
is going to be a real pain if they keep it up since those messages come 
from legitimate webmail providers, and lack all but a few words of text 
in the body.  It's hard to get a consistent pattern out of that.  I'm 
not nearly as worried about the zombies.


Matt



Colbeck, Andrew wrote:

See this article at the Internet Storm Center:

http://isc.sans.org/diary.html?storyid=3012


Pump and dump scams now in PDF
Published: 2007-06-20,
Last Updated: 2007-06-20 21:33:39 UTC
by Maarten Van Horenbeeck (Version: 1)

Apparently the groups behind what we know as pump and dump spam have
found a new way to bypass spam filters. As of yesterday, we've been
observing e-mails with bogus text, often in german, each with a PDF in
attachment...



Andrew.








  







[sniffer] Re: Dead Sniffer processes piling up.

2007-06-14 Thread Matt

Pete,

I have left all of those processes active for troubleshooting, and they 
are still there and definitely Sniffer.  Process Explorer even shows 
what command line the executable was run with so I was able to do some 
digging in the logs for specifics.


I found that Declude was recording errors related to Sniffer, and 
Sniffer was not logging the messages or associated events at all.  
Here's what Declude is showing:


   06/14/2007 09:11:01.665 q3d2b00d49610.smd ERROR: External 
program SNIFFER-IP didn't finish quick enough; terminating.
   06/14/2007 09:11:01.665 q3d2b00d49610.smd Couldn't get external 
program exit code


Now it could be that Declude is causing issues by trying to terminate 
Sniffer.


FYI, I don't have a stack up of any additional files in my Sniffer 
directory, and the service is started and everything seems fine outside 
of this one period of time when my server got walloped.  What happened 
was that one customer with over 1,000 addresses received 950 E-mails 
from a ConstantContact customer in a spam run from harvested addresses.  
They can deliver fast enough that it likely stressed my system for a 
moment, and triggered this behavior.  It could also be that the content 
of these messages caused an issue with Sniffer.  My server has 8 cores 
in it, and if it reached 100% CPU, it only did so for a moment in time.


This very likely could be associated with heap issues, but I did double 
my heap memory the other day, and normally it doesn't cause processes 
like this to just hang in the background doing nothing.  That other 
application that hung about 10 times during this period is what suggests 
that it could be a heap issue because I know that app to be the first to 
go under stress (it is not a service).  That app does an average of over 
50 DNS lookups and has a lot more latency than Sniffer does, so it is 
remarkable that Sniffer hung 100 times and that app only hung 10 times.  
That suggests to me that maybe something better could be done in terms 
of cleaning up these processes.


I'll keep the server in this state until the evening in the event that 
you want to take a look at it.


Thanks,

Matt



Pete McNeil wrote:


Hello Matt,


Thursday, June 14, 2007, 12:44:32 PM, you wrote:





>




I also had about 10 errors waiting to be cleared from another 
application, but probably because of the way that Sniffer works (as a 
service or something related), the Sniffer processes are just hung 
without a prompt.  I saw this last week also.



I have Declude set for 200 processes, so it probably reached 300 when 
the first 100 hung, and then it stayed with those 100 hung.  Is there 
anything that can be done in Sniffer to kill off these hung processes 
in an automated and proactive manner?  I recently upgraded to the 
latest version and I was probably a version or two behind, and I don't 
recall this happening before.



It seems very unlikely that SNF instances would be hung -- they will 
either time-out themselves or be killed off by Declude. Please let us 
know if there are any errors in your SNF log.



Also - check the SNF working directory to make sure you don't have a 
lot of old job files hanging around. That can cause SNF instances to 
relax their timing based on the assumption there is a high system load 
-- with relaxed timing they will stay around longer waiting for results.



If you find that you do have a lot of old job files hanging around 
then you should clean them out to get things going normally again.



Stop SMTP

Wait for all jobs to finish

Stop your persistent instance

Remove all left-over job files (QUE, WRK, FIN, ABT, XXX, SVR)

Restart your persistent instance

Restart SMTP


Also, presuming you have a persistent instance - make sure that is 
still running. If that had failed for some reason then you might be 
running now in peer-server mode which will be a bit slower than 
persistent mode.



Hope this helps,


_M


--

Pete McNeil

Chief Scientist,

Arm Research Labs, LLC.




  


[sniffer] Dead Sniffer processes piling up.

2007-06-14 Thread Matt




Pete,

This morning I found an instance where the number of processes on my
system suddenly shot from around 50 to as many as 300, and after that
peak it settled down and hovered around 150.  All of the hung
processes are Sniffer being called by Declude.




I also had about 10 errors waiting to be cleared from another
application, but probably because of the way that Sniffer works (as a
service or something related), the Sniffer processes are just hung
without a prompt.  I saw this last week also.

I have Declude set for 200 processes, so it probably reached 300 when
the first 100 hung, and then it stayed with those 100 hung.  Is there
anything that can be done in Sniffer to kill off these hung processes
in an automated and proactive manner?  I recently upgraded to the
latest version and I was probably a version or two behind, and I don't
recall this happening before.
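
For what it's worth, a hung-process reaper can also be automated. Here is a minimal sketch (a hypothetical admin watchdog, not part of Sniffer) that terminates client instances older than ten minutes; the executable name and age threshold are assumptions -- licensed installs use a renamed per-license .exe, so adjust accordingly:

    // Minimal sketch: kill Sniffer client processes that have been running too long.
    #include <windows.h>
    #include <tlhelp32.h>
    #include <wchar.h>
    #include <cstdio>

    static ULONGLONG toUll(const FILETIME &ft) {
        ULARGE_INTEGER u; u.LowPart = ft.dwLowDateTime; u.HighPart = ft.dwHighDateTime;
        return u.QuadPart;                               // 100-ns units since 1601
    }

    int wmain() {
        const ULONGLONG maxAge = 10ULL * 60 * 10000000;  // 10 minutes in 100-ns units
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        if (snap == INVALID_HANDLE_VALUE) return 1;

        PROCESSENTRY32W pe = { sizeof(pe) };
        for (BOOL ok = Process32FirstW(snap, &pe); ok; ok = Process32NextW(snap, &pe)) {
            if (_wcsicmp(pe.szExeFile, L"SNFClient.exe") != 0) continue;   // assumed name
            HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_TERMINATE,
                                   FALSE, pe.th32ProcessID);
            if (!h) continue;
            FILETIME created, exited, kernel, user, now;
            if (GetProcessTimes(h, &created, &exited, &kernel, &user)) {
                GetSystemTimeAsFileTime(&now);
                if (toUll(now) - toUll(created) > maxAge) {
                    wprintf(L"Killing stale PID %lu\n", pe.th32ProcessID);
                    TerminateProcess(h, 1);              // last resort only
                }
            }
            CloseHandle(h);
        }
        CloseHandle(snap);
        return 0;
    }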

Thanks,

Matt






[sniffer] Re: Error Messages since WeightGate

2007-06-10 Thread Matt
Here's a better page from someone at Microsoft all about the desktop 
heap.  This one suggests that you can raise the limit from 48 MB to as 
much as 450 MB.  You will normally not need more than the total number 
of processes that Declude can use times the amount of memory allocated 
per session, so if you have 512 KB/session and 100 processes defined in 
Declude, you would need about 50 MB, but adding something like 
WeightGate to an app that has latency could very well increase the 
needs even more.


   
http://blogs.msdn.com/ntdebugging/archive/2007/01/04/desktop-heap-overview.aspx
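
The arithmetic works out roughly like the sketch below (the 48 MB total, ~10 MB overhead, and 512 KB-per-process figures are the assumptions quoted in this thread, not measurements):

    // Minimal sketch: back-of-the-envelope desktop heap budget.
    #include <cstdio>

    int main() {
        const double totalMB    = 48.0;   // total desktop heap
        const double overheadMB = 10.0;   // used for other things
        const double perProcMB  = 0.5;    // 512 KB default third SharedSection value

        int maxProcesses = static_cast<int>((totalMB - overheadMB) / perProcMB);
        printf("Roughly %d service-started processes before the heap runs out\n",
               maxProcesses);             // ~76, in line with the 77 quoted below

        printf("100 Declude processes at 512 KB each need about %.0f MB\n",
               100 * perProcMB);          // ~50 MB, as in the paragraph above
        return 0;
    }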


Matt


Matt wrote:

Keith,

When I looked at this several years ago, this is what I came up with:

Windows allows a total of 48 MB in the heap, and each service
started process uses the third setting in the chain, or 512 KB by
default, and there is about 10 MB that gets used for other
things.  Based on what Scott Perry wrote concerning this in a
obscure page on the Declude site about Declude Queue, there can
only be a total of 77 service started processes before having
issues, and you can assume that there will be one for Declude up
to my limit of 40, and also often times another process whether it
is a virus scanner or external filter application in JunkMail.

Windows apparently starts to barf when the limit is reached, and
applications can go into a bad state, only partially launching and
becoming corrupted.  This has a high association with load, but
the true association seems to be the number of processes, which
typically correspond to load but not necessarily.  This is
probably also what has caused McAfee to barf on occasion on my
server with similar errors.  McAfee has a decent amount of latency
compared to most other things that Declude launches except of
course for Eradispam due to the timeout issues.

There are two camps on what to do with the mystery heap, aka
desktop heap.  Some have indicated on the IMail and Declude lists
in the past that setting it to 2048 would resolve some issues with
IMail's SMTP and also the 16 bit version of F-Prot when run from
Declude which is awfully slow and CPU intensive.  That change
however would reduce the number of service started processes that
were possible by a factor of 4.  Scott suggests that reducing it
to maybe 256 would help in high traffic servers, though this is a
limit that you wouldn't want to pass because it could cause
instability.

FYI, the error messages will contribute to heap usage, so these must 
be cleared, and when you have a bunch of these, it will limit what you 
can run, and in fact make the problem worse.


If you are using Declude as a service, that certainly takes one 
process off the top that used to count towards the heap, but it's 
likely what is running concurrently that is causing the issue along 
with error dialogs.  Weightgate certainly adds to this issue, as well 
as other plugins and virus scanners.  The best solution for a high 
volume server that wants to do weight skipping would be for either 
Sniffer or Declude to skip based on both a high and a low weight 
within the config.  I have been asking for over three years for this 
and have even recently documented a solution for Declude that would be 
backwards compatible with current configs should they opt to do this.


Here's a quote from the old Declude site authored by Scott:

*Flaw #1 - Server crashing: Microsoft's Mystery Heap*
Fortunately, not many people experience this problem. However, it
is listed first because it is more serious than the other flaw.
This one can back up mail for hours/days, and crash the server.

The problem here is that each process that is started by a service
uses a certain (unknown) amount of an undocumented type of memory
that Windows allocates. Without knowing how much of the mystery
heap is used, or how much is left, or how much is available when
the system starts, it's impossible to know when you will run out.

When you DO run out, Windows does a *terrible* job in handling it.
Instead of preventing the program from loading and recording an
error to the event log, Windows will keep the program half-loaded
(the error almost always occurs while loading .DLLs) and pop up an
error message saying that it can't start the program.

When this happens, unless you happen to be at the server, you
won't have a chance to close the box. So, another one will soon
pop up as another SMTP process is started. By the time you find
out, there could be hundreds or thousands of the pop-up boxes.
Since Microsoft doesn't clear them automatically, when the
original 30 SMTP processes end, there still isn't enough of this
mystery heap left, because Microsoft is using it to display these
error messages. So

[sniffer] Re: Error Messages since WeightGate

2007-06-10 Thread Matt
hat
   can run before the mystery heap is depleted. However, some people
   have reported better results by raising the value to 2048KB --
   that's one of the problems with undocumented resources (there's no
   way to know for sure which value is better or why).

   We recommend going to
   http://support.microsoft.com/default.aspx?scid=kb;EN-US;q142676 and
   changing the registry entry to use a value of "256" or "2048" (NOTE:
   Microsoft recommends 512 in that article; if you use 512, make sure
   not to have IMail's MaxQueProc registry entry set to more than 30).

Matt




Keith Johnson wrote:

Darrell,

Did you alter the 3rd entry of your heap size?  If so, did you go to 1024 or 
something else?  I found this article by coming across a Declude page; it appears 
to be what I need to go after.

http://support.microsoft.com/default.aspx?scid=kb;EN-US;q142676

-Keith

  _

From: Message Sniffer Community on behalf of Darrell ([EMAIL PROTECTED])
Sent: Sun 6/10/2007 2:31 PM
To: Message Sniffer Community
Subject: [sniffer] Re: Error Messages since WeightGate



After looking into it I am on board with what Pete said about the "heap"
issue.  It makes sense to me that it's the heap issue since we're
launching WeightGate -> SNF, effectively doubling the number of
processes being launched.

Darrell
---
Check out http://www.invariantsystems.com for utilities for Declude,
Imail, mxGuard, and ORF.  IMail/Declude Overflow Queue Monitoring,
SURBL/URI integration, MRTG Integration, and Log Parsers.


Keith Johnson wrote:
  

Darrell,

You are right, a reboot will take care of it for a season, then it comes back 
out of the blue.  Very strange indeed.

Keith

  _

From: Message Sniffer Community on behalf of Darrell ([EMAIL PROTECTED])
Sent: Sat 6/9/2007 9:36 PM
To: Message Sniffer Community
Subject: [sniffer] Re: Error Messages since WeightGate



Keith,

I was having the same problems last week.  Just came out of the blue and
was across several of our servers as well.  Same error verbatim.  FWIW -
I also use weightgate.  I rebooted the servers I was seeing this issue
on and the problem has not returned.

Very odd you mentioned that as I thought this was isolated to just me.

Darrell
---
Check out http://www.invariantsystems.com for utilities for Declude,
Imail, mxGuard, and ORF.  IMail/Declude Overflow Queue Monitoring,
SURBL/URI integration, MRTG Integration, and Log Parsers.


Keith Johnson wrote:


It appears since installing WeightGate we have been receiving a lot of the 
below Application PopUps indicating an error:

The application failed to initialize properly 0xc142. Click on OK to 
terminate the application

The application entry is our Sniffer .exe.  Today alone I saw over 300.   I 
thought it was an isolated issue.  However, it is happening across all our 
servers.  We are running the latest Sniffer in Persistent mode.  We never saw 
these prior to WeightGate.  Has anyone seen this before?  Below is the actual 
entry in Event Log.

-Keith

Event Type: Information
Event Source: Application Popup
Event Category: None
Event ID: 26
Date:  6/9/2007
Time:  12:12:35 AM
User:  N/A
Computer: NAIMAIL2
Description:
Application popup: rrctp2ez.exe - Application Error : The application failed to 
initialize properly (0xc0000142). Click on OK to terminate the application.



#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>

  

--
---
Check out http://www.invariantsystems.com for utilities for Declude, Imail,
mxGuard, and ORF.  IMail/Declude Overflow Queue Monitoring, SURBL/URI
integration, MRTG Integration, and Log Parsers.


#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>






[sniffer] Re: Appriver issue

2007-05-18 Thread Matt

I have something that I would also like to clear up.

When I indicated that AppRiver had removed its contact page, it likely 
just wasn't operating at the time that I was attempting to access it.  
Considering their issues, it would not be a surprise to see other issues 
like this caused, but it seemed suspicious since their home page was 
working and not their contact page.  I did note that it was working by 
the time that it was pointed out that it was up.


In no way did I ever believe that Pete or Sniffer had any direct 
involvement in the system that created these problems, and in no way 
should this reflect badly on Pete or Sniffer as far as I am concerned.


I was slightly miffed after getting off the phone with them where their 
reaction quite clearly indicated that they were aware of the issue.  I 
suggested that they take their servers off-line due to the issues that 
were being caused, but I was probably barking up the wrong tree.  The 
servers weren't taken off line for another hour or so, or maybe this is 
when the delivery servers caught up with the queued E-mail destined for 
my client.  I'm not sure why they didn't act on this sooner.  When you 
have a loop, it is important to stop it, and their multi-homing made it 
difficult for others to block.  One user received about 500 copies of 
the same message (and also called them), and there were other examples 
that we saw which were much more limited.  I do hope that they didn't 
choose to introduce new software at 11 a.m. ET on the busiest E-mail day 
of the week, and that this was only when the problems surfaced...


Everyone that deals with significant volumes of E-mail has issues from 
time to time, and I wouldn't draw conclusions about AppRiver based on 
just this one circumstance.  I would imagine that it is hard to plan for 
how to deal with a broad scale looping issue, and I'm sure this was a 
learning experience for them.


Matt




David Moore wrote:

I think what Peter is try to say is that Sort monster is hosted at Appriver
and Appriver had an issue and therefore so did Sort monster.

http://www.dnsstuff.com/tools/dnsreport.ch?&domain=sortmonster.com
 



Regards David Moore
[EMAIL PROTECTED]

J.P. MCP, MCSE, MCSE + INTERNET, CNE.
www.adsldirect.com.au for ADSL and Internet www.romtech.com.au for PC sales

Office Phone: (+612) 9453 1990
Fax Phone: (+612) 9453 1880
Mobile Phone: +614 18 282 648

POSTAL ADDRESS:
PO BOX 190
BELROSE NSW 2085
AUSTRALIA.

-

This email message is only intended for the addressee(s) and contains
information that may be confidential, legally privileged and/or copyright.
If you are not the intended recipient please notify the sender by reply
email and immediately delete this email. Use, disclosure or reproduction of
this email, or taking any action in reliance on its contents by anyone other
than the intended recipient(s) is strictly prohibited. No representation is
made that this email or any attachments are free of viruses. Virus scanning
is recommended and is the responsibility of the recipient.


-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED] On Behalf
Of Kevin Rogers
Sent: Saturday, 19 May 2007 11:59 AM
To: Message Sniffer Community
Subject: [sniffer] Re: Appriver issue

Thanks for the explanation, and I wasn't trying to blame you - just wanted
more info is all.

We use Sniffer, but not Appriver.  You said that if we don't use Appriver,
we shouldn't have been affected, but you also seemed to say that if one of
the recipients of my user's email uses Appriver that might've caused a
problem.  And also that *some* Sniffer users might have experienced the
problem as well. 


It sounds like things are still being worked out.  I just wanted some kind
of verification that they were aware of the problem, were working on it,
that they were in some way sorry about what happened...you know - the usual
stuff.  And I know that you are not an official rep of Appriver or anything,
but presently you're all we have in that role ;)

Thanks

Kevin




Pete McNeil wrote:
  

Hello Kevin,

Friday, May 18, 2007, 8:52:47 PM, you wrote:

  

Pete - Thanks for the reply, but I guess I don't understand what 
you're saying.  "Some packet loss" and "rulebase downloads to slow 
down for a time" don't reflect what happened to me yesterday and 
apparently not what happened to one of the other posters either when 
he said that Appriver was having a problem "with sending messages 
over and over again".  I received over (at last count) 35,000 
messages (almost all of which were bounced replies, from one email 
from one of our users who sent an email to about 70 people) yesterday.

  
  

And I had already gone to http://www.armresearch.com/  yesterday and 
there was nothing there.  There

[sniffer] Re: Downloads are not working....

2007-05-17 Thread Matt

Pete McNeil wrote:

I'm not sure what the actual issue is (I will get that data later),
however I've just been informed that it should be resolved in the next
20 minutes or so.
  


The issue was that they were redelivering messages over and over again.  
One customer got one message over 500 times in an hour!  I suspect that 
our gateways were blocking some of this automatically, and I also tried 
to block it at the router level but it kept popping out of other address 
blocks.


Matt


[sniffer] Re: Downloads are not working....

2007-05-17 Thread Matt
Appriver, who is somehow involved with Sniffer, is having a ridiculous 
problem with sending messages over and over again (once every few 
seconds).  They pulled their contact information from their site but 
didn't take down their servers.  I suspect this is putting strain on 
them and if Sniffer uses their bandwidth for downloads, that could 
explain things.


Matt

Chuck Schick wrote:

Speeds are really slow and the connection is lost before
completion.  Everything checks out good on our end.  Is something going on
with the sortmonster end of things?

Chuck Schick
Warp 8, Inc.
(303)-421-5140
www.warp8.com


#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



  





[sniffer] Re: How to incorporate a white list?

2007-04-03 Thread Matt

Pete,

CBL has a proven 99.97% accuracy and on some systems over a 40% hit rate 
on traffic, yet their methods are rather simple and easy to implement.


If an IP hits your spamtrap and it has either no reverse DNS entry or a 
dynamic reverse DNS entry, it is added; otherwise it isn't.  They have a 
few other mechanisms that I am aware of, but the 
above will take care of almost everything related to spam zombies.  Your 
current whitelisting method will take care of the few exceptions to 
this.  There is rather simple code that can test for standard types of 
dynamic reverse DNS entries with both numbers and hex encoded values, 
and exceptions for names that might include things like "mail" or "mx#" 
in the names.
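
To illustrate the kind of check I mean, here is a rough sketch of a "does 
this reverse DNS name look dynamic?" test.  The patterns are examples I 
made up in the same spirit, not CBL's actual rules:

    # Rough sketch of a "does this rDNS look dynamic/generic?" check.
    # The patterns below are illustrative, not CBL's real rules.
    import re

    DYNAMIC_HINTS = [
        r"\d+[-.]\d+[-.]\d+[-.]\d+",                          # embedded dotted/dashed IP
        r"[0-9a-f]{8}",                                       # hex-encoded address
        r"\b(dyn|dynamic|dhcp|pool|ppp|dsl|cable|dialup)\b",
    ]
    SERVER_HINTS = [r"\bmail\b", r"\bsmtp\b", r"\bmx\d*\b"]

    def looks_dynamic(rdns):
        """Return True when a PTR name looks like an end-user address."""
        if not rdns:                       # no reverse DNS entry at all
            return True
        name = rdns.lower()
        if any(re.search(p, name) for p in SERVER_HINTS):
            return False                   # "mail", "mx1", etc. get a pass
        return any(re.search(p, name) for p in DYNAMIC_HINTS)

    print(looks_dynamic("cpe-24-123-45-67.columbus.res.rr.com"))  # True
    print(looks_dynamic("mx1.example.net"))                       # False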


If you want to expand this to static spammers, you merely introduce 
other pre-qualifications such as having a Mail From domain or HELO that 
matches the payload domain in the body.  I figure that for the most part, 
however, you are tagging static spammers with other rules that take 
precedence over the IP rules, and that this would be minimally 
beneficial in comparison to spam zombies.


The source of the false positives hitting your spam traps is most 
likely AFF (Advance Fee Fraud) and some phishing, which use free 
accounts on legitimate servers to send their spam, and an increasing 
prevalence of hacked E-mail accounts being used by zombie spammers.  The 
first method would avoid listing such servers in almost every 
circumstance, and we certainly wouldn't ever see things like yahoo.com, 
gmail.com and rr.com mail servers listed like we see with some degree of 
regularity under the current method.


Matt


Pete McNeil wrote:

Hello Andy,

Tuesday, April 3, 2007, 5:15:12 PM, you wrote:

  

Hi Jonathan:



  

That's exactly the problem. These particular rules were blocking Google mail
servers - NOT specific content.



To clarify, it was blocking precisely one IP. The F001 bot only tags a
single IP at a time (not ranges, ever), and only after repeated
appearances at clean spamtraps where the message also fails other
tests (often including content problems like bad headers, obfuscation,
heuristic scam text matching etc.)

The rule was in place from 20070326. The first reported false
positives arrived today (just after midnight). The rule was removed
just less than 12 hours from that report (due to scheduling and heavy
spam activity this morning that required my immediate attention). The
report was ordinary (not a rule panic).

As is the case with all FPs, the rule cannot be repeated (without
special actions).

  

Obviously, as already discussed in the past, it IS necessary that these
IP-based blocks are put under a higher scrutiny. I'm not suggesting that the
"automatic" bots should be disabled, I'm just proposing that "intelligence"
must be incorporated that will use RevDNS and WHOIS to identify POSSIBLY
undesirable blocks and to flag those for human review by Sniffer personnel
so that they don't end up poisoning mail severs of all their clients.



While interesting, these mechanisms are neither foolproof nor trivial to
implement. Also - our prior research has taught us that direct human
involvement in IP rule evaluation leads to far more errors than we can
allow. Once upon a time, IP rules were created in very much the way
you describe -- candidate IPs were generated from spamtraps and the
live spam corpus and then reviewed (manually and automatically)
against RevDNS, WHOIS, and other tools. At that time, IP rules had the
absolute worst reliability of any test mechanism we provided. Upon
further R&D, we created the F001 bot that is in place and now the
error rate has been significantly reduced and our people are able to
focus on things that computers can't do better.

Please don't get me wrong, I'm definitely not saying that the F001 bot
can't be improved - it certainly can, and will if it survives. What I
am saying is that it is accurate enough now that any improvements in
accuracy would be non-trivial to implement.

Our current development focus is on developing the suite of
applications and tools that will allow us to complete the alpha and
beta testing of the next version of SNF*. That work has priority, and
given that these events are rare and easily mitigated we have not
deemed it necessary to make enhancements to the F001 bot a higher
priority.

The following factors make it relatively easy to mitigate these IP FP
events (however undesirable): Rule panics can make these rules
immediately inert, FP report/response times are sufficiently quick,
The IP rule group is sequestered at the lowest priority so that it can
easily be weighted lower than other tests.

Also, it is likely that the F001 bot and IP rules group will be
eliminated once the next SNF version is sufficiently deployed because
one of the major enhancements of the new engine is a mul

[sniffer] Re: How to incorporate a white list?

2007-04-03 Thread Matt
Agreed, however reverse DNS is not a universal solution as things like 
RR accounts will come from the same base domain as RR spam zombies, and 
you would otherwise have to track down each unique reverse DNS entry.


I would test a connection to the SMTP server instead.  Most of these 
servers will at least respond.  So if a domain like yahoo.com, 
gmail.com, rr.com, etc. is found in the reverse DNS for a new IP rule, 
you would then check to see if it at least responded to a port 25 
connection, and if it did, skip that rule.
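
That check is only a few lines of code; something along these lines, with 
the timeout and the "skip the rule" decision obviously being up to whoever 
runs it:

    # Quick "does this host accept connections on port 25?" probe.
    import socket

    def smtp_reachable(host, timeout=10):
        """Return True if a TCP connection to port 25 succeeds in time."""
        try:
            with socket.create_connection((host, 25), timeout=timeout):
                return True
        except OSError:
            return False

    # Example: skip a candidate IP rule when the address answers on port 25.
    candidate_ip = "192.0.2.10"   # placeholder address
    if smtp_reachable(candidate_ip):
        print("Responds on port 25 - likely a real mail server, skip the rule")
    else:
        print("No SMTP listener - keep the candidate rule")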


Note that I score IP rules at half the weight of the others.  There are 
more common issues with international ISP's and webmail providers than 
with things like yahoo.com, gmail.com, rr.com, etc.  Many don't get a 
lot of international traffic so they don't notice it.


Matt


Andy Schmidt wrote:

Hi,

Unless I'm mistaken, rule 1370762 was targeting the same address range.

If I may make a suggestion:
Before the spam-trap robots are allowed to block major, well-known and
easily recognizable email providers, how about the robot script pulls a
WHOIS and a Reverse DNS and runs that data against a table of "can't block"
entities - or at least spits those out for "human review".

If that can't be done, then how about the robots issue an hourly report of
"suspect" IPs. A no-brainer script can pull matching WHOIS and RevDNS for
quick human review and overriding (if necessary).

I would rather those obvious bad rules are caught before or very quickly
after they go live. There is always some delay before I get first reports
until I realize that this is a "real" problem. Then I have to try to get
headers from end-users before I can dig into logs... Hours and hours pass
(especially if it's overnight events). In the meantime the problem escalates
all around me.

Thanks,
Andy

-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED] On Behalf
Of Pete McNeil
Sent: Tuesday, April 03, 2007 11:09 AM
To: Message Sniffer Community
Subject: [sniffer] Re: How to incorporate a white list?

Hello Andy,

Tuesday, April 3, 2007, 9:36:17 AM, you wrote:

  

Hi Phil,



  

Yes, it seems as if some Sniffer rules, e.g., 1367683, are broadly
targeting Google's IPs.



  

I've submitted 3 false positive reports since last night, at least two of
them were Google users, one located in the U.S. and the other in the
Netherlands!



This IP rule has been pulled.

FP processing will happen shortly.

_M



#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>






  


[sniffer] Re: Integration with Mailenable

2007-03-17 Thread Matt
There is in fact a Domain Keys plug-in for SmarterMail listed on their 
downloads page:


   http://www.smartertools.com/Products/SmarterMail/DL/v4.aspx

Personally I'm not a fan of any present sender identification 
implementation.  Both SPF and Domain Keys are primarily associated with 
spam by volume, and SPF can cause one's customers issues when they do 
things like use alternative SMTP servers or find themselves behind an 
SMTP proxy at a hotel or T-Mobile HotSpot...but I digress.


I think that both IMail and SmarterMail are decent products, but neither 
one of them is perfect.  SmarterMail certainly has a lower cost of 
entry.  I would trust Jay's experience with MailEnable considering his 
extensive experience.


Matt



Jay Sudowski - Handy Networks LLC wrote:

Hi Phil -

Good question.  We integrate Sniffer into SmarterMail via Declude.
However, SmarterMail does have the capability to run a program against a
message before it is delivered.  We have some customers that use a batch
file to call f-prot and get virus scanning integrated into their mail
server on the cheap.  I believe it would likely be possible to make use
of the same functionality to call Sniffer directly, and thus avoid
having to purchase Declude.  I have just never had a need to attempt
this.
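
If someone does want to attempt it, the glue could be as small as the
sketch below.  This assumes a scanner command that takes the spooled
message path as its only argument and reports a match through its exit
code; the install path, the exact SNFClient invocation, and the meaning of
its result codes should all be checked against the SNF documentation
before relying on it:

    # Minimal wrapper a mail server could run against a spooled message.
    # The scanner path, calling convention, and exit-code meaning are
    # assumptions for illustration only.
    import subprocess
    import sys

    SCANNER = r"C:\SNF\SNFClient.exe"   # hypothetical install path

    def scan(message_path):
        """Run the scanner on one message file and return its exit code."""
        result = subprocess.run([SCANNER, message_path], capture_output=True)
        return result.returncode

    if __name__ == "__main__":
        code = scan(sys.argv[1])
        print(f"scanner returned {code}")
        sys.exit(code)   # let the calling mail server act on the result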

As for domain keys, I don't believe so.  However, you can set up
SPF for your domains simply by adding the appropriate DNS records to
said domains' zone files.

-Jay

-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED] On
Behalf Of Phillip Cohen
Sent: Friday, March 16, 2007 12:01 PM
To: Message Sniffer Community
Subject: [sniffer] Re: Integration with Mailenable


Jay,

Thanks for the heads up on Mailenable. I took a look at SmarterMail 
and it looks pretty good. How does it interface with Message Sniffer 
or does it require and external gateway such as EWall? How has 
support been with it and how have they been as far as updates. Also 
does it have "domain keys" capability and SPF support for sending 
mail to yahoo.com etc...


Thanks,

Phil


At 07:26 PM 3/15/2007, you wrote:
  

Stay Away From MailEnable.

There are so many exploits out there for MailEnable, and there are more
exploits found monthly, if not weekly.  At one particular interval,
MailEnable had to re-release the same patch several times in the *same*
week because it kept on not actually fixing the root of the issue.  If
you run MailEnable, odds are that you will end up exploited, even if you
stay on top of the patches.

On top of that, MailEnable is just simply a CPU and IO hog, much more so
than any other mail server I have ever seen.  By default, they use
entirely text based configuration files, which on occasion get truncated
to zero during periods of high activity on the server.

In the past year, we have assisted our customers move 20,000+ mailboxes
away from MailEnable, mostly all to SmarterMail.  Do not waste your time
and money with MailEnable.

-Jay

-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED] On
Behalf Of Phillip Cohen
Sent: Thursday, March 15, 2007 12:22 PM
To: Message Sniffer Community
Subject: [sniffer] Integration with Mailenable


We are finally going to replace our old Vopmail server. Looking at
Mailenable Enterprise. Will Sortmonster work with that program? Is
anyone using Mailenable? If so how is it and if it works with
Sortmonster how did you use them together.

THanks,

Phil


#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



  


[sniffer] Re: Integration with Mailenable

2007-03-15 Thread Matt

Yeah, filtering services suck!

Matt



Chris Bunting wrote:

Merak mail server has been great for us, we have 10,000 users, and have
not had any problems with it over the 5+ years we have been using it...
It's been rock-solid. Don't waste your money on the anti-virus/anti-spam
filtering services... just use message sniffer with a content filter and
you are set.

Thank You,
Chris Bunting
Lancaster Networks
Direct: 717-278-6639
Office: 888-LANCNET x703
MS Certified Systems Engineer
IP Telephony Expert

Lancaster Networks
1085 Manheim Pike 
Lancaster PA 17601 
www.lancasternetworks.com

--
Corporate Technology Solutions...
Specializing in 3com NBX Telephony Solutions
IT Services - Phone Systems - Digital CCTV
--
The information in this e-mail is confidential and may be privileged or
subject to copyright. It is intended for the exclusive use of the
addressee(s). 
If you are not an addressee, please do not read, copy, distribute or
otherwise act upon this email. If you have received the email in error, 
please contact the sender immediately and delete the email. The

unauthorized use of this email may result in liability for breach of
confidentiality, privilege or copyright.


-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED] On
Behalf Of Jay Sudowski - Handy Networks LLC
Sent: Thursday, March 15, 2007 10:27 PM
To: Message Sniffer Community
Subject: [sniffer] Re: Integration with Mailenable

Stay Away From MailEnable.  


There are so many exploits out there for MailEnable, and there are more
exploits found monthly, if not weekly.  At one particular interval,
MailEnable had to re-release the same patch several times in the *same*
week because it kept on not actually fixing the root of the issue.  If
you run MailEnable, odds are that you will end up exploited, even if you
stay on top of the patches.

On top of that, MailEnable is just simply a CPU and IO hog, much more so
than any other mail server I have ever seen.  By default, they use
entirely text based configuration files, which on occasion get truncated
to zero during periods of high activity on the server.

In the past year, we have assisted our customers move 20,000+ mailboxes
away from MailEnable, mostly all to SmarterMail.  Do not waste your time
and money with MailEnable.  


-Jay

-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED] On
Behalf Of Phillip Cohen
Sent: Thursday, March 15, 2007 12:22 PM
To: Message Sniffer Community
Subject: [sniffer] Integration with Mailenable


We are finally going to replace our old Vopmail server. Looking at 
Mailenable Enterprise. Will Sortmonster work with that program? Is 
anyone using Mailenable? If so how is it and if it works with 
Sortmonster how did you use them together.


THanks,

Phil


#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



[sniffer] Re: FTP server / firewall issues - Resolved.

2007-01-05 Thread Matt

Darin,

There are many people with firewall or client configuration issues that 
cause problems with FTP, however HTTP rarely experiences issues and is 
definitely easier to support.  As far as efficiency goes, since the 
rulebases will all be zipped, there is little to be gained from 
on-the-fly improvements to FTP (and there are some for HTTP as well).  
In such a case, I would consider it to be effectively a wash, nothing 
gained, nothing lost (measurably).


Matt



Darin Cox wrote:

Thanks, Pete.  Appreciate you taking the time to explain what's happening in
more detail.

I'm curious as to why FTP is more difficult than HTTP to debug, deploy,
secure, and scale, though. I tend to think of them on equal footing, with
the exception of FTP being faster and more efficient to transfer files in my
experience.

Thanks for the link to save some time.  Much appreciated.

Darin.


- Original Message - 
From: "Pete McNeil" <[EMAIL PROTECTED]>

To: "Message Sniffer Community" 
Sent: Friday, January 05, 2007 9:47 PM
Subject: [sniffer] Re: FTP server / firewall issues - Resolved.


Hello Darin,

Friday, January 5, 2007, 6:23:22 PM, you wrote:

  

Hi Pete,



  

Why the change?



Many reasons. HTTP is simpler to deploy and debug, simpler to scale,
less of a security problem, etc...

Also, the vast majority of folks get their rulebase files from us with
HTTP - probably for many of the reasons I mentioned above.

  

FTP is more efficient for transferring files than HTTP.



Not necessarily ;-)

  

Can we request longer support for FTP to allow adequate time for everyone
to schedule, test, and make the change?



I'm not in a hurry to turn it off at this point, but I do want to put
it out there that it will be turned off.

  

I remember trying HTTP initially when this was set up, but it wasn't
working reliably, plus FTP is more efficient, so we went that way.  wget
may work better when we have time to try it.



  

Also, what's this about gzip?  Is the rulebase being changed to a .gz file?
Compression is a good move to reduce bandwidth, but can we put in a plug for
a standard zipfile?



Gzip is widely deployed and an open standard on all of the platforms
we support. We're not moving to a compressed file -- the plan is to
change the scanning engine and the rulebase binary format to allow for
incremental updates before too long - so for now we will keep the file
format as it is.

Apache easily compresses files on the fly when the connecting client
can support a compressed format. The combination of wget and gzip
handle this task nicely. As a result, most achieve the benefits of
compression during transit almost automatically.
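
For anyone rolling their own updater rather than using the posted scripts,
any HTTP client that sends Accept-Encoding: gzip gets the same benefit.  A
rough sketch of the idea (the URL and file name here are placeholders, not
a real download location):

    # Fetch a rulebase file over HTTP, accepting gzip transfer compression.
    # The URL and output name are placeholders for illustration.
    import gzip
    import urllib.request

    URL = "http://www.example.com/rulebases/yourlicense.snf"

    req = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        data = resp.read()
        if resp.headers.get("Content-Encoding") == "gzip":
            data = gzip.decompress(data)   # undo the transfer compression

    with open("yourlicense.snf", "wb") as out:
        out.write(data)
    print(f"wrote {len(data)} bytes")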

  

Do you have scripts already written to handle downloads the way you want
them now?  If so, how about a link?



We have many scripts on our web site:

http://kb.armresearch.com/index.php?title=Message_Sniffer.TechnicalDetails.AutoUpdates

My personal favorite is:

http://www.sortmonster.com/MessageSniffer/Help/UserScripts/ImailSnifferUpdateTools.zip

I like it because it's complete as it is, deploys in minutes with
little effort, generally folks have no trouble achieving the same
results, and an analog of the same script is usable on *nix systems
where wget and gzip are generally already installed.

There are others of course.

Hope this helps,

_M


  


[sniffer] Re: Uploading problems

2006-12-07 Thread Matt

Try WPUT

   http://sourceforge.net/projects/wput/

Matt



K Mitchell wrote:

At 11:16 PM 12/7/2006 -0700, Jay Sudowski - Handy Networks LLC wrote:
  

Give this a try: http://www.ncftp.com/download/



  Just did about 5 minutes ago. It won't run without specifying a
destination directory, and sortmonster ftp won't allow any directory settings.


Thanks though  :o)



  


[sniffer] Re: Increase in spam

2006-10-25 Thread Matt




My connection traffic doubled over the course of two weeks.  This
started on about the 9th, and seems to have peaked and leveled off last
week.  The increase was mostly due to a new brute force spammer (a.k.a.
dictionary attack), but static spam seems to have also increased by
about 20%.  I believe the static spam increase is Scott Richter
lighting his companies back up, mostly at carolina.net, in addition to
the brute force zombie spam and massive increase in backscatter that
these attacks are causing.  About 10% of connections to our gateways
are the result of backscatter, with 30% of that coming from FrontBridge
alone (bigfish.net).

The new brute force spamming is not just simply higher volume, it also
has more targets than before.  This has helped to expose several
customer's accounts that had a domain aliased with a catch-all and
forwarded to an account that we were protecting.  Previously, we never
saw this since those domains weren't being attacked.  I have no clue as
to why anyone is still providing catch-alls, especially mail forwarding
services like BulkRegister.  It just seems like a good way to cut the
capacity of a server by 75% or more.

Matt



Pete McNeil wrote:

  Hello Andrew,

Wednesday, October 25, 2006, 1:33:20 PM, you wrote:

  
  
For another organization's graph of spam trends as received by them,
check out the updated graphs at TQM cubed:

  
  
  
  
http://tqmcube.com/tide.php

  
  
  
  
Their graph shows a sharp uptick at the end of June 2006.

  
  
...and a new upward trend around 0917.

That's consistent with what we've seen.

_M


#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



  





[sniffer] Re: Version 2-3.5 Release -- Faster Engine

2006-10-23 Thread Matt

Kudos Pete!  Just wanted to say thanks.

Matt



Pete McNeil wrote:

Hello SNF Folks,

The plan was to hold off until the next major release, however in
light of recent increases in spam traffic we are pushing out a new
version with our faster engine included. All other upgrades will
wait for the major release ;-)

The scanning engine upgrade results in a 2x speed increase that
hopefully will help with the higher volumes we are seeing now.

Version 2-3.5 also rolls up 2-3.2i1 which included the timing and file
locking upgrades.

You can find version 2-3.5 here:

http://kb.armresearch.com/index.php?title=Message_Sniffer.GettingStarted.Distributions

Thanks,

_M

  



#
This message is sent to you because you are subscribed to
 the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



[sniffer] Re: SPAM Problems

2006-10-23 Thread Matt
Sorry about the OT here, but I feel compelled to add just a little 
follow up on the topic of pre-scanning and Alligate.


Alligate is IMO definitely the way to go.  As Paul pointed out, 
greylisting everything (i.e. ORF) has drawbacks and I wouldn't use a 
solution that greylisted everything.  I worked with Brian Milburn of 
Alligate for months to help him create a method of providing selective 
greylisting so that most legitimate E-mail is not greylisted.  I also 
helped him create a method of storing triplicates for use with 
greylisting that only track base domains and not the full sender and 
recipient, thus substantially reducing what needs to be greylisted if it 
does trigger selective greylisting.  I received nothing in return except 
for a very capable product that benefited my system greatly.  Brian is 
also a lot like Pete and R. Scott Perry.
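
To make the triplet idea concrete, here is a minimal sketch of that kind 
of store.  This is only the general technique as I described it, not 
Alligate's code, and the base-domain extraction is deliberately naive:

    # Minimal greylisting store keyed on (client IP, sender base domain,
    # recipient base domain).  Illustrative only, not Alligate's code.
    import time

    GREYLIST = {}               # triplet -> first-seen timestamp
    DEFER_SECONDS = 15 * 60     # how long a brand-new triplet is deferred

    def base_domain(address):
        """Naive base domain: last two labels of the address's domain."""
        domain = address.rsplit("@", 1)[-1].lower()
        return ".".join(domain.split(".")[-2:])

    def should_defer(client_ip, mail_from, rcpt_to, now=None):
        now = now or time.time()
        key = (client_ip, base_domain(mail_from), base_domain(rcpt_to))
        first_seen = GREYLIST.setdefault(key, now)
        return (now - first_seen) < DEFER_SECONDS

    # First attempt is deferred; a retry after the window gets through.
    print(should_defer("203.0.113.5", "ad@example.com", "user@mydomain.com"))
    print(should_defer("203.0.113.5", "ad@example.com", "user@mydomain.com",
                       now=time.time() + DEFER_SECONDS + 1))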


Setting things up optimally is not going to be an out of the box type of 
experience.  I have both offered some free assistance in private and 
public to those that are dealing with Alligate, and Brian can also 
provide some support for new setups.  There is of course a limit to my 
time for things like this.  I have also occasionally consulted on such 
things at the request of others.


So while it can be a hard nut to crack, especially if one is not 
familiar with the architecture or concepts of a pre-scanning gateway, 
there is help out there, and it is definitely worth while.  I formerly 
used ORF for tarpitting and address validation, but going to Alligate 
for this was the best move that I have made since picking up Declude and 
Sniffer.


Note that Alligate Gateway is not a replacement for Sniffer, Declude or 
any other deep scanning solution, it is merely a tool for handling 
validation and some blocking of the most obvious and easiest to detect 
spam, primarily with passive means of blocking (greylisting and 
tarpitting), and without needing to throw a lot of CPU at it.  I handle 
over 1 million connections per day and Alligate averages about 5% CPU at 
peak times.  Only 7% of the connections result in delivery of a message 
to my deep-scanning layer using a configuration that is not aggressive.  
There is only one zombie spammer at present that will survive greylisting.


Matt



Dave Marchette wrote:

I agree with the pre-scanning concept.  IMgate, ORF and Alligate are all
good, but it just depends upon your level of comfort with each type of
environment these run in.  Each takes several days of fine tuning and
log babysitting (even though the vendors tell you it is plug and play-
it's not).  We've tested all three and prefer Alligate (thanks Matt!)
but any way you look at it, if you are running even moderate volume then
pre-scanning is the next step in the evolution of protection.   


-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED] On
Behalf Of Technical Support
Sent: Monday, October 23, 2006 7:28 AM
To: Message Sniffer Community
Subject: [sniffer] Re: SPAM Problems


We also use ORF by VamSoft on IIS to pre-process. 


We do not use the grey listing. We tried it, and it is great at
eliminating
spam, but it can delay mail for hours, which is a problem for most email
users. 


Instead of grey listing, we have found ORF's tar-pitting very effective.


We set some tests at the ORF level, but don't block on them (because there
is no "weighting"). We also have some spam trap email addresses. Fail a test
or hit a spam trap and we tar-pit. Instead of sending us 100 spams a minute
they can only send one per minute. 


We can pick up x-records with Declude and not have to re-run the tests on
the iMail server, still using Declude to score the messages based on the
prior tests. 

ORF even has a built-in interface for sniffer. 


It is simpler and preferable to process everything on the iMail server, but
when you want to off-load processing to stretch your iMail / Declude
investment, this arrangement can do the trick. 


Paul Fuhrmeister
[EMAIL PROTECTED]


-Original Message-
From: Message Sniffer Community [mailto:[EMAIL PROTECTED] On
Behalf
Of David Waller
Sent: Monday, October 23, 2006 5:15 AM
To: Message Sniffer Community
Subject: [sniffer] Re: SPAM Problems

Filippo,

We had a similar problem. Due to the huge volumes of spam we found our
mail
server becoming less able to deal with email. Imail/Declude/Sniffer is
expensive in processor terms when processing email, and we found the best
approach was to pre-filter mail using greylisting (we used Vamsoft on IIS
SMTP, but others exist). This has dramatically reduced the load on our
server and seems to stop the bulk of spammers and mail harvesters.

Hope this helps.

David



#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PRO

[sniffer] Re: Increase in spam

2006-10-18 Thread Matt




I have a moderately large number of domains and accounts that I
protect, and in the past, whenever someone indicated an increase in
spam, my system was always at normal levels and I chalked it up to some
change on the end of the one reporting the issue (such as a nobody
alias or being baptized by a brute force /dictionary spam attack). 
Over the last two weeks though, my connection traffic to my gateway is
up about 90%, and what gets through my gateway to Declude has risen
around 20%, yet there has been no noticeable increase in good E-mail. 
I'm guessing that one of the zombie spammers has become more
aggressive, at least with tarpitting avoidance and retries, while
clearly there are a lot of new static spam blocks (Scott Richter for
instance is back with a vengeance).

In relation to yesterday's reported leakage, I am guessing that Sniffer
isn't the only protection, and that RBL's are a part of that system. 
If so, the Network Solutions issues yesterday did cause issues with
resolving off of blacklists as it has been reported, and that could
explain the extra leakage.

Matt




Darin Cox wrote:

  We saw a sudden ~50% increase on July 16th, but only fluctuations and
moderate growth since then.  On weekdays we're now at 80% spam, 95% or
better on weekends.

Darin.


- Original Message - 
From: "Pete McNeil" <[EMAIL PROTECTED]>
To: "Message Sniffer Community" 
Sent: Wednesday, October 18, 2006 9:23 AM
Subject: [sniffer] Re: Increase in spam


Hello K,

Wednesday, October 18, 2006, 8:52:17 AM, you wrote:

  
  
  I've been seeing a massive increase in spam over the last 2 days getting
through with minimal scores. Could this be due to the drawback of the
filter involved with false positives, or something else?

  
  
It's hard to pin down, but not likely to be the pulled rule. We have
seen a relative increase in new spam campaigns over the past 2 days
preceded by a lull. That may be what you're noticing.

I've attached a graph to illustrate.

_M

  





[sniffer] Re: Significant increase in false positives

2006-10-16 Thread Matt




There is no doubt that having Declude handle xhdr files would be
optimal.  I might add that an option to exclude the header on non-hits
would also be wise.  David Barker appears open to some feature requests
of late, and I would think that you could make this happen.  Not
everyone has capacity limitations, so the internal functionality would
probably suit the needs of many of your users also, and cover all
non-SA systems instead of just Declude.

Regarding this rule, the binary segment is non-searchable.  My only
solution would be to write some VBScript that parsed the Sniffer log
for hits and moved the files from my CopyFile directory back into
Declude's Proc.  I'm guessing that someone could also do some grepping
for this, but that ain't a strength of mine.  I could do this in
minutes though if I had headers to search on.  Thankfully this rule
only hit about 1,000 times this weekend as a final match (I'm ignoring
those that weren't final matches since those would have hit anyway). 
My gateway gets rid of most image spams, so I would expect a comparably
higher rate for others.
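
The recovery script I have in mind is nothing fancier than the sketch
below.  The log-line layout here is a guess (I just look for the rule id
on any line that also names a D*.SMD file), and the paths are
hypothetical, so both would need to be adjusted to the actual Sniffer log
format and local directories:

    # Sketch: find messages a given rule matched and move them back for
    # reprocessing.  Log layout and paths are assumptions; adjust to taste.
    import os
    import shutil

    RULE_ID = "1174356"
    LOG_FILE = r"C:\SNF\sniffer.log"          # hypothetical paths
    COPYFILE_DIR = r"C:\Declude\CopyFile"
    PROC_DIR = r"C:\IMail\spool\proc"

    moved = 0
    with open(LOG_FILE, "r", errors="replace") as log:
        for line in log:
            if RULE_ID not in line:
                continue
            # Assume the first D*.SMD token on the line names the message.
            for token in line.split():
                if token.upper().startswith("D") and token.upper().endswith(".SMD"):
                    src = os.path.join(COPYFILE_DIR, os.path.basename(token))
                    if os.path.exists(src):
                        shutil.move(src, PROC_DIR)
                        moved += 1
                    break
    print(f"moved {moved} messages back for reprocessing")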

Regarding false positives in general.  I don't expect Sniffer to be
perfect due to the way that rules are generated, but I have two
suggestions.

1) One would be to test all new rules on a small sub-set of E-mail that
covers the most common patterns such as attachments and E-mail/webmail
clients with various formats including forwards and replies.  This
would likely stop the worst of the worst in terms of FP issues like the
one earlier this year that was hitting on most base64 code.  I envision
hundreds of test messages and not thousands, so this should be
practical.

2) The second suggestion is one that I have mentioned many times before
in private involving being able to tag messages on multiple types of
hits for a stronger result.  The separation would need to be on the
rule type so that all rule types would be isolated from one another.
For instance, phrase, pattern, IP and domain rules could be put in
different codes and allowed to be scored in combination.  It would also
be equally as important to treat rules from user submissions differently
from those generated from spam traps, since those rules are not nearly
as universal.  I currently average just under 3 matches per message
that Sniffer hits, and I would imagine that there is a lot of mixing
between these types.  This would allow many that are scoring Sniffer
lower than our block weight to then score these multiple-classification
hits higher.  This wouldn't be useful though unless it was separated by
types like I listed, since I often find multiple hits under the current
rulebase format.  A minimal sketch of this kind of combined scoring
follows below.
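
Here is the kind of thing I mean, reduced to a toy example.  The class
names and weights are made up; the point is only that hits from different
rule classes add together while repeats within one class do not, with a
little extra credit when a spam-trap rule corroborates another class:

    # Toy example of scoring hits by rule class rather than per rule.
    # Class names and weights are invented for illustration.
    CLASS_WEIGHTS = {
        "phrase":  5,
        "pattern": 5,
        "domain":  5,
        "ip":      3,   # scored lower, as I do with IP rules today
    }

    def combined_score(hits):
        """hits: iterable of (rule_class, origin) tuples for one message."""
        classes_seen = set()
        origins_seen = set()
        for rule_class, origin in hits:
            classes_seen.add(rule_class)
            origins_seen.add(origin)
        score = sum(CLASS_WEIGHTS.get(c, 0) for c in classes_seen)
        if len(classes_seen) > 1 and "spamtrap" in origins_seen:
            score += 4      # bonus for corroboration across classes
        return score

    # Three hits, only two distinct classes, one class from a spam trap:
    print(combined_score([("phrase", "spamtrap"),
                          ("phrase", "user"),
                          ("ip", "spamtrap")]))   # prints 12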

Thanks,

Matt





Pete McNeil wrote:

  
  
  
  
Hello Matt,

Monday, October 16, 2006, 10:03:04 PM, you wrote:

Pete,

Would you please clarify this a bit.  Declude of course doesn't record
the rule in the headers, so this is difficult to figure out.  Knowing
the pattern may help identify the problematic messages.  Also knowing
the start time and end time of the rule would also help.

The rule was coded for a binary segment in an image file. Here is
the rule information:

    Rule - 1174356
    Name:        image spam binary segment as text !1AQaq"2
    Created:     2006-10-14
    Source:      !1AQaq"2
    Hidden:      false
    Blocked:     false
    Origin:      Spam Trap
    Type:        Simple Text
    Created By:  [EMAIL PROTECTED]
    Owner:       [EMAIL PROTECTED]
    Strength:    3.20638481603822
    False Re
  False Re

[sniffer] Re: Significant increase in false positives

2006-10-16 Thread Matt




Pete,

Would you please clarify this a bit.  Declude of course doesn't record
the rule in the headers, so this is difficult to figure out.  Knowing
the pattern may help identify the problematic messages.  Also knowing
the start time and end time of the rule would also help.

It would be nice too if you talked with Declude about allowing for the
insertion of headers, or even if you did this on your own.  I believe
the D* file may be editable when the external app is launched.  That
would make recovery of this so much easier for me (minutes instead of
hours of work).

Thanks,

Matt



Pete McNeil wrote:

  
  
  
  
Hello Darin,

Monday, October 16, 2006, 5:17:26 PM, you wrote:

Anyone else seeing a sudden increase in FPs?  We normally report a few
each day, but we're seeing a 10x increase in FPs for the past three days.

Not sure if this is it, but there was an image segment rule that
went in over the weekend and resulted in an unusual number of false
positives today. The rule was removed. IIRC the rule id was: 1174356

Hope this helps,

_M

-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.

#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>




  





[sniffer] Re: New SPAM pain

2006-07-26 Thread Matt




Pete surely won't mind after you post your observations :)

Matt



Darrell ([EMAIL PROTECTED]) wrote:

If Pete doesn't mind I will post my observations in regards to the
product.  I run both products (CommTouch and Sniffer). 
Darrell
  
---
  
Check out http://www.invariantsystems.com for utilities for Declude,
Imail, mxGuard, and ORF.  IMail/Declude Overflow Queue Monitoring,
SURBL/URI integration, MRTG Integration, and Log Parsers. 
  
  
John Shacklett writes: 
I'm dying to start a thread and talk about Sniffer's stance on CommTouch,
but I can resist.
Instead, I would like to point out that eight clearly spam messages have
made it through to my Inbox [or Outlook Junk Folder] so far this week that
appear to have skinned clear through Sniffer. First ones I've seen in
Are we undergoing a new phase or campaign that I can make
adjustments for? 

-- 

John  
 


#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>
  
  
  
  





[sniffer] Re: [sniffer]Re[2]: [sniffer]WeightGate source, just in case...

2006-06-08 Thread Matt




Pete,

My understanding was that Declude treats different arguments to an
executable as just being other forms of that executable so it only
processes it once.  I'm not positive one way or another.  It's worth
testing though.

Matt



Pete McNeil wrote:

  Hello Matt,

Wednesday, June 7, 2006, 11:52:56 PM, you wrote:

  
  
Pete,

  
  
  
  
Just two more cents for the masses...

  
  
  
  
If people use this for two different external tests in Declude, they 
need to create two differently named executables because Declude will 
assume the calling executable to be part of the same test and only run
it once (or possibly create an error depending on one's configuration).
This may not be necessary if you have different test types defined, i.e.
nonzero, weight, external, and bitmask, but better safe than sorry.

  
  
I think this might not be correct. IIRC, the design spec for that
feature was that if the command line was different in the test then it
would be executed again and if the command line was identical it would
not.

This was to allow for calling the same program with different
parameters.

I'm pretty sure that's how it works --- it might be worth a few tests
if you're sure it's not that way, but I strongly suspect that if one
of the parameters are different in the test line (inside the quotes)
then it will be executed again as a different test.

  
  
Also, I noted that the Subjects on this list are being repeated.  I saw
that you changed to a new server, but I also noted that there is no 
space after "[sniffer]" in the Subject and thought that maybe this is 
what is throwing things off.  Maybe adding that space will correct the
issue???

  
  
It does look a little weird. Sometimes it's normal though. I'll see if
I can identify anything odd in the settings.

_M

  





[sniffer] Re: [sniffer][Fwd: Re: [sniffer]FP suggestions]

2006-06-08 Thread Matt




Darin,

Thunderbird allows you to choose the default forwarding method as
either inline or as attachment.  It might actually default to inline, I
can't remember, but whenever it does message/rfc822 attachments, it is
as a whole unlike some other clients that edit it down to the bare
minimum of what the consider to be useful like addressing, subject date
and MIME stuff if appropriate.  I'm definitely guilty of being a
Netscape diehard, and I'm very happy that the Mozilla project brought
things back to life again.

I fully understand the attachment trick with Outlook thanks to the
confirmations.  This will be easier than having people cut and paste
the headers in.  This doesn't happen much, but there is nothing worse
than getting a spam report without header info.

I also understand the encoding issues with forwarding in Outlook/OE. 
It's a shame that this happens.  Maybe having a copy of Thunderbird
around for this purpose might fit in where this is an issue.  Sounds
like adding Sniffer headers would be the best solution for this issue
on a wider basis since you definitely can't convince every admin not to
submit using Outlook/OE.

Soon I'm going to code up my Sniffer FP reports to be automatically
triggered when a message is reprocessed from my spam review system, so
I won't have to even bother with the source any more.  That should only
take a couple of hours, and it would be time well spent.  I always fix
issues and whitelist locally where appropriate, but I also report to
Sniffer for the benefit of all in addition to making sure that a FP
rule will not tag something outside of the scope of what I whitelisted,
and I have to report in order to be able to see what the content of the
rule was.  Customers do most of the reprocessing now, I just do the
back end stuff.

Matt



Darin Cox wrote:

  
Thunderbird and Netscape just takes the full original source and
attaches it as a message/rfc822 attachment.  I forwarded this message
back to the list by just pressing "Forward".

  
  
Interesting that they include the headers with a simple forward, without
specifying forward as attachment.  I haven't ever seen that behaviour before
in a mail client.  Seems like a few forwards would create a very bloated
message with all of the old headers.

  
  
I'm pretty sure that
Outlook Express works simply by just pressing Forward As Attachment, or
at least it gives me enough of the original, including the full headers,
to determine how to block the spam.

  
  
Yes it does.  However you've missed the point.  The issue is not how to get
the headers.  It is how to keep an email client from encoding the message
and headers differently, so that Sniffer can properly identify the rule that
caught the message.

  
  
Please excuse me for wanting more detail about the Outlook attachment
trick, but would you mind attaching this message to a response so that I
could look at the headers and such?

  
  
Sorry, I don't use Outlook.  But I can tell you the steps to take in Outlook
2003 (other versions are almost exactly the same).  I have my Outlook users
follow these with no problem.

1. Create a new email message
2. Click the arrow beside the paperclip icon, select item instead of file
from the dropdown
3. Browse mailboxes from the popup dialog to select the message to attach.
4. Viola, original message and headers attached.

  
  
There was a discussion about Outlook's behavior with Scott some time
ago.  Apparently Microsoft was pressured by customers to remove headers
when forwarding because they felt that they were a security/privacy
risk.  No one told them that Outlook was a security/privacy risk on its
own :)  ...but that's another story.  I would probably feel differently if
I had the need for groupware though, but digs at Microsoft are
irresistible sometimes.

  
  
I don't remember that discussion, and am not sure we're talking about the
same thing.  If you attach the original message via the steps above, you get
the full original message, headers and body.  We have a number of customers
who send spam reports this way, mostly on Outlook 2002 and 2003.

Darin



#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



  





[sniffer][Fwd: Re: [sniffer]FP suggestions]

2006-06-07 Thread Matt

Darin,

Thunderbird and Netscape just takes the full original source and 
attaches it as a message/rfc822 attachment.  I forwarded this message 
back to the list by just pressing "Forward".  I'm pretty sure that 
Outlook Express works simply by just pressing Forward As Attachment, or 
at least it gives me enough of the original, including the full headers, 
to determine how to block the spam.  I have been telling Outlook users 
to copy and paste the headers into a forwarded message.


Please excuse me for wanting more detail about the Outlook attachment 
trick, but would you mind attaching this message to a response so that I 
could look at the headers and such?


There was a discussion about Outlook's behavior with Scott some time 
ago.  Apparently Microsoft was pressured by customers to remove headers 
when forwarding because they felt that they were a security/privacy 
risk.  No one told them that Outlook was a security/privacy risk on it's 
own :)  ...but that's another story.  I would probably feel different if 
I had the need for groupware though, but digs at Microsoft are 
irresistible sometimes.


Matt
--- Begin Message ---



Of course I'm sending the full message as an 
attachment.  You can do that with Outlook by attaching an item, then 
browsing your mail folders for the message to attach.  And yes, that's how 
you do it with Outlook Express as well.  I don't use Thunderbird or 
Netscape mail, but I would assume you still need to attach the original message 
to avoid the headers being lost.
 
What I was referring to was a little more involved 
than that... namely the possibility of it not matching a rule because the 
attachment was encoded differently.  For example, I've seen mail go 
through that base64 encoded an attached email that was not originally 
base64 encoded.
 
From Pete's responses, it sounded like "no rule 
found" really did mean no rule was matched.  Especially since he has a 
separate code for "rule already removed".  FPs we send are always from same 
day, or, at the very least, within 24 hours.
Darin.
 
 
- Original Message - 
From: Matt 
To: Message Sniffer Community 
Sent: Wednesday, June 07, 2006 11:46 PM
Subject: Re: [sniffer]FP suggestions
Darin,

Outlook will strip many of the headers when forwarding.  Outlook Express
needs to forward the messages using "Forward As Attachment" in order to
insert the full original headers.  Thunderbird/Netscape Mail will work
just by forwarding.  If you paste the full source in a message, you
should send as plain text.

I have many FP's that come back as having no rules found, but these are
more likely to be from rules that were already removed.  So I wouldn't
jump to a conclusion that the rule was not found because of formatting
unless you are not sending the full unadulterated original message
source.  I would imagine that it would mostly be IP rules that aren't
found when not forwarding the full original source.

Matt

Darin Cox wrote: 

  It is unclear - we receive FPs that have traveled through all sorts of
clients, quarantine systems, changed hands various numbers of times,
or not (to all of those)... Right now I don't want to make that
research project a high priority.

Understood.

  
  That's true it wouldn't change, but submitting the message directly
would not be correct - the dialogue is with you, and in any case,
additional trips through the mail server also modify parts of the
header and sometimes parts of the message (tag lines, disclaimers,
etc)...

Hmmm... with attaching the original message, I guess it still makes more
sense to deliver to us first for now.  Just looking for an alternative that
gets you the message as close as possible to the original form as possible.
Maybe we'll write a script to copy and forward the D*.SMD file as an
attachment to you for FPs at some point in the future.




#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



  
--- End Message ---
#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



Re: [sniffer]WeightGate source, just in case...

2006-06-07 Thread Matt

Pete,

Just two more cents for the masses...

If people use this for two different external tests in Declude, they 
need to create two differently named executables because Declude will 
assume the calling executable to be part of the same test and only run 
it once (or possibly create an error depending on one's configuration).  
This may not be necessary if you have different test types defined, i.e. 
nonzero, weight, external, and bitmask, but better safe than sorry.
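
For example (hypothetical test names and paths, borrowing the WeightGate 
syntax shown later in this thread), two independently gated external tests 
would use two renamed copies of the same program:

TEST-A   external   nonzero   "c:\tool\WeightGate-A.exe -50 %WEIGHT% 30 c:\SNF\sniffer.exe authenticationxx"   10   0
TEST-B   external   nonzero   "c:\tool\WeightGate-B.exe -10 %WEIGHT% 20 c:\tool\othertest.exe"   5   0

where WeightGate-A.exe and WeightGate-B.exe are just byte-for-byte copies 
of the same executable.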


Also, I noted that the Subjects on this list are being repeated.  I saw 
that you changed to a new server, but I also noted that there is no 
space after "[sniffer]" in the Subject and thought that maybe this is 
what is throwing things off.  Maybe adding that space will correct the 
issue???


Matt



Pete McNeil wrote:


Hello Sniffer Folks,

   The WeightGate utility I posted was done in a real hurry, so I
   thought I'd post source here just in case anybody wants to review
   it and in case of any trouble ;-) Also for any educational value
   it may have for others interested in writing similar kinds of
   utilities.

   Source attached in HTML form.

   Thanks,

   _M

 

 




#
This message is sent to you because you are subscribed to
 the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



Re: [sniffer]FP suggestions

2006-06-07 Thread Matt




Darin,

Outlook will strip many of the headers when forwarding.  Outlook
Express needs to forward the messages using "Forward As Attachment" in
order to insert the full original headers.  Thunderbird/Netscape Mail
will work just by forwarding.  If you paste the full source in a
message, you should send as plain text.

I have many FP's that come back as having no rules found, but these are
more likely to be from rules that were already removed.  So I wouldn't
jump to a conclusion that the rule was not found because of formatting
unless you are not sending the full unadulterated original message
source.  I would imagine that it would mostly be IP rules that aren't
found when not forwarding the full original source.

Matt




Darin Cox wrote:

  
It is unclear - we receive FPs that have traveled through all sorts of
clients, quarantine systems, changed hands various numbers of times,
or not (to all of those)... Right now I don't want to make that
research project a high priority.

  
  
Understood.

  
  
That's true it wouldn't change, but submitting the message directly
would not be correct - the dialogue is with you, and in any case,
additional trips through the mail server also modify parts of the
header and sometimes parts of the message (tag lines, disclaimers,
etc)...

  
  
Hmmm... with attaching the original message, I guess it still makes more
sense to deliver to us first for now.  Just looking for an alternative that
gets you the message as close as possible to the original form as possible.
Maybe we'll write a script to copy and forward the D*.SMD file as an
attachment to you for FPs at some point in the future.




#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



  





Re: [sniffer]Re[2]: [sniffer]Re[2]: [sniffer]Re[2]: [sniffer]FP suggestions

2006-06-07 Thread Matt




Pete,

I think that you just broke Scott's record with his two hour feature
request with your own two hour program :)

Anyone remember those days???

Thanks,

Matt



Pete McNeil wrote:

  Hello Matt,

Wednesday, June 7, 2006, 4:22:05 PM, you wrote:

  
  
   
 Pete,
 
 Since the %WEIGHT% variable is added by Declude, it might make
sense to have a qualifier instead of making the values space
delimited.

  
  
I don't want to mix delimiters... everything so far is using spaces,
so it makes sense to continue that way IMO.

  
  
  Errors in Declude could cause values to not be inserted,
and not everyone will want to skip at a low weight.  I haven't seen
any bugs with %WEIGHT% since shortly after it was introduced, but
you never know.  I have seen some issues with other Declude inserted variables though.

  
  
Well, errors are always a possibility, but in this case it _should_ be
reasonably safe. For example, if this is used to gate SNF, then a
missing %WEIGHT% would result in trying to launch a program with the
same name as the authentication string, and it is highly unlikely that
would be found, so the result would be the "program not found" error
code. That's not perfect because it's a nonzero result, but it is safe
in that it is not likely to launch another program.

  
  
 One other thing that I came across with the way that Declude calls
external apps...you can't delimit the data with things like quotes. 
There is no mechanism for escaping a functional quote from a quote
that should appear in the data that you pass to it...so don't use
quotes as delimiters :)

  
  
Not a problem...

I just whipped together a utility called WeightGate.exe that can be
downloaded here (for now):

http://www.messagesniffer.com/Tools/WeightGate.exe

Suppose you wanted to use it in Declude to skip running SNF if your
weight was already ridiculously low (perhaps white listed) or already
so high that you want to save the extra cycles. Then you might do
something like this:

SNF external nonzero "c:\tool\WeightGate.exe -50 %WEIGHT% 30 c:\SNF\sniffer.exe authenticationxx" 10 0

(hopefully that didn't wrap, and if it did you will know what I meant ;-)

To test this concept out you might first create a copy of
WeightGate.exe called ShowMe.exe (case matters!) and then do
something like this:

SNF external nonzero "c:\tool\ShowMe.exe -50 %WEIGHT% 30 c:\SNF\sniffer.exe authenticationxx" 10 0

The result of that would be the creation of a file c:\ShowMe.log that
contained all of the parameters ShowMe.exe was called with -- that way
you wouldn't have to guess if it was correct. ShowMe.exe ALWAYS
returns zero, so this _should_ be safe ;-)

If you run WeightGate on the command line without parameters it will
tell you all about itself and its alter ego ShowMe.exe.

That description goes like this (I may fix the typo(s) later):

WeightGate.exe
(C) 2006 ARM Research Labs, LLC.

This program is distributed AS-IS, with no warranty of any kind.
You are welcome to use this program on your own systems or those
that you directly support. Please do not redistribute this program
except as noted above, however feel free to recommend this program
to others if you wish and direct them to our web site where they
can download it for themselves. Thanks! www.armresearch.com.

This program is most commonly used to control the activation of
external test programs from within Declude (www.declude.com) based
on the weight that has been calculated thus far for a given message.

As an added feature, if you rename this program to ShowMe.exe then
it will emit all of the command line arguments as it sees
them to a file called c:\ShowMe.log so that you can use it
as a debugging aid.

If you are seeing this message, you have used this program
incorrectly. The correct invocation for this program is:

WeightGate <min> <weight> <max> <program> <arg1>, <arg2>,... <argN>

Where:
   <min> = a number representing the lowest weight to run <program>.
   <weight> = a number representing the actual weight to evaluate.
   <max> = a number representing the highest weight to run <program>.
   <program> = the program to be activated if <weight> is in range.
   <arg1>, <arg2> = arguments for <program>.

If <weight> is in the range [<min>,<max>] then WeightGate will run
<program> and pass all of <arg1>, <arg2>,... <argN> to it. Then
WeightGate will collect the exit code of <program> and return it as
WeightGate's exit code.

If WeightGate gets the wrong number of parameters it will display
this message and return FAIL_SAFE (zero) as its exit code.

If <weight> is not in range (less than <min> or greater than <max>)
then WeightGate will NOT launch <program> and will return FAIL_SAFE
(zero) as its exit code.

As a debugging aid, I was called with the following arguments:

arg[0]  = WeightGate

  





Re: [sniffer]Re[2]: [sniffer]Re[2]: [sniffer]FP suggestions

2006-06-07 Thread Matt




Pete,

Since the %WEIGHT% variable is added by Declude, it might make sense to
have a qualifier instead of making the values space delimited.  Errors
in Declude could cause values to not be inserted, and not everyone will
want to skip at a low weight.  I haven't seen any bugs with %WEIGHT%
since shortly after it was introduced, but you never know.  I have seen
some issues with other Declude inserted variables though.

One other thing that I came across with the way that Declude calls
external apps...you can't delimit the data with things like quotes. 
There is no mechanism for escaping a functional quote from a quote that
should appear in the data that you pass to it...so don't use quotes as
delimiters :)

Matt



Pete McNeil wrote:

  Hello Matt,

Wednesday, June 7, 2006, 3:37:36 PM, you wrote:

  
  
   
 Pete,
 
 An X-Header would be very, very nice to have.  I understand the
issues related to waiting to see if something comes through, and
because of that, I would maybe suggest moving on your own.

  
  
I've got it on the list to have a message rewriting option... it's
just not as high as some others. I hadn't thought about the weight
gating utility - though that seems like something that would be useful
in general for external tests...

"weightgate -5 %WEIGHT% 20 <program>" 5 0

<program> is executed if %WEIGHT% is in the range [-5,20]
and the exit code of <program> is returned.

That seems like a pretty simple utility to knock out - perhaps I will
;-)

Also, on the FP reporting links idea, that would break the process -
it's important for us to see the message for many reasons, and it's
important for the FP resolution process to be interactive.

_M


  





Re: [sniffer]Re[2]: [sniffer]FP suggestions

2006-06-07 Thread Matt




Pete,

An X-Header would be very, very nice to have.  I understand the issues
related to waiting to see if something comes through, and because of
that, I would maybe suggest moving on your own.

Sniffer doesn't need to be run on every single message in a Declude
system.  Through weight based skipping, many administrators (especially
the ones that could make the most use of this) could skip processing
Sniffer once a certain weight is reached, and in turn that would save
enough load that it should easily make up for needing to re-write the
message to the disk with the modified headers.  On external tests that
allow for weight skipping on my system, I was skipping around 50% of
messages before lightening the load with pre-scanning.

Sniffer could do weight skipping with Declude by accepting the %WEIGHT%
variable in the command line.

SNIFFER-IP   external   063   "C:\IMail\Declude\Sniffer\customer-code.exe license-code WH=26 WL=-5 CW=%WEIGHT%"   5   0
...etc.

The WH setting says don't run if equal to or greater than, the WL says
don't run if equal to or less than, and the CW passes in the weight
from Declude at the time of calling Sniffer.  It still launches
Sniffer, but it could be stopped immediately before any heavy lifting
is done.

The best solution of course would be for Declude to allow for
weight-based skipping in the config without calling the executable, but
I started asking about that back in the Scott days and I am not holding
out hope for that happening soon considering.  The most realistic
option would seem to then have Sniffer do the heavy lifting of
rewriting itself, and save some CPU and disk I/O by improving
efficiencies with something as simple as weight-based skipping.  I'm
pretty sure the net result would be less CPU and disk I/O overall if
both were done.

Another alternative may be to create a separate executable (with
weight-based skipping) that would only deal with adding headers from
the text file that Sniffer drops in the directory.  There would be less
benefit overall to keeping this all in one app, but it would target the
primary need.  This could easily be written by one of us in _vbscript_ as
a proof of concept.  I have considered doing this before, but it isn't
at the top of my priorities.
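
A rough proof-of-concept of that idea, sketched in Python rather than 
VBScript for brevity.  The .xhdr side file is mentioned by Pete below; the 
command-line interface, file paths, and weight threshold here are my own 
placeholders, not SNF or Declude conventions.

# merge_xhdr.py -- rough proof-of-concept only: prepend the X- headers that
# Sniffer drops in a side file onto the spool message, but skip the rewrite
# entirely when the Declude weight is already past a threshold.

import sys

SKIP_ABOVE = 30  # placeholder: don't bother rewriting once the weight is this high

def merge(message_path, xhdr_path, weight):
    if weight >= SKIP_ABOVE:
        return
    with open(xhdr_path, "r") as xh:
        extra_headers = xh.read().rstrip("\n") + "\n"
    with open(message_path, "r") as msg:
        original = msg.read()
    with open(message_path, "w") as msg:
        msg.write(extra_headers + original)   # headers go at the very top

if __name__ == "__main__":
    # usage (hypothetical file names): python merge_xhdr.py D1234.SMD D1234.SMD.xhdr 12
    merge(sys.argv[1], sys.argv[2], float(sys.argv[3]))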

BTW, you could maybe even encode links in the headers for FP reporting
through a Web interface, completely removing the forwarding mechanism
from the mix, though you wouldn't have the opportunity to see the
messages which may not be good as a whole.

Matt





Pete McNeil wrote:

  Hello Scott,

Wednesday, June 7, 2006, 10:08:58 AM, you wrote:

  
  
  
 
For me the pain of false positives submissions is  the research
that happens when I get a "no rule found" return.
 
 
 
I then need to find the queue-id of the original  message and then
find the appropriate Sniffer log and pull out the log lines  from
there and then submit it. Almost always in these cases, a rule is  removed.
 
 
 
If this process could be improved that would really  be a time saver.

  
  
This depends on the email system you are using. On some systems
(MDaemon, and postfix, for example) X- headers from SNF can be emitted
into the message. When we see these we can identify the rules directly
without asking for the extra research.

It would be nice if Declude would offer a mechanism to pick up the
optional .xhdr file SNF can generate and include it in the X headers
that it already adds to the message.

I know this begs the question, why not have SNF add the headers for
SmarterMail and IMail platforms, and the reason is that it would
require writing an additional copy of the message to disk. Since these
systems tend to be io bound already (Declude/IMail anyhow) the
performance penalty would be prohibitive. If Declude picks up .xhdr
from SNF directly then it can be included in the ONE rewrite Declude
makes anyway.

I've asked them about this and other improved integration
opportunities for a while now (many months), and I get favorable
responses, but no action so far. I guess we will see :-)

_M

  





Re: [sniffer]FP suggestions

2006-06-06 Thread Matt
d reverse DNS entries for bulk mailers so
that I can treat them differently on my system.  Some I do block by
default, but others that send a majority of legitimate messages I
balance in my system so that they don't fail technical tests (such as
Declude's BADHEADERS, SPAMHEADERS and HELOBOGUS) since they are not
appropriate for non-zombies, and at least two hits on things like
Sniffer, SURBL and SpamCop are required before something gets blocked. 
The essence of the issue here is that while one mailing might be spam,
it doesn't make sense to paint all of the provider's mailings as spam. 
Sniffer still hits a lot of what passes through my system, but they are
not blocked nearly as often as before.  I recall reporting places like
roving.com, icebase.net, cheetahmail.com and other well known providers
to Sniffer as false positives in the past.  I recognize that Sniffer
has a lot of clients that don't care if such things get blocked, and do
care a lot if spam leaks through, and it is tough to target individual
lists as opposed to the entire provider.  Another subset of this is
third-party tracking services that sometimes get tagged based on the
fact that some of their clients do spam, and then there is the subset
that advertises third-parties with direct links to their domains where
those third-parties have spammed, i.e. Entertainment Books, Omaha
Steaks, University of Phoenix Online, etc.  You clearly pulled those
domains from spam samples, but they cross-contaminate opt-in lists
through advertising links.  Based on past discussions, we may well
differ on our opinions about how to deal with this, but it isn't
workable to continually report such things if they continually get
listed in other ways, and that's why I decided sometime ago to track
these providers myself.

Regarding those things that are submitted to Sniffer as spam that I
don't consider to be spam, that's another very tough cookie to crack. 
If one customer reports E-mails from Harry & David to Sniffer as
spam, and I report it as ham...who is right?  I think that this comes
down to having an official definition of spam and communicating it. 
Spamhaus has a definition that isn't workable in the real-world because
it requires affirmative confirmation of being put on a bulk mail list
by companies that you do business with, and as we know, virtually no
one follows this so closely, yet many want these E-mails and everyone
has the ability to unsubscribe.  My definition of spam, like Spamhaus,
is that it is both bulk and unsolicited, however we differ when it
comes to defining unsolicited.  I try to allow any first-party
communications, advertising or otherwise, so long as they are directly
related to the actions that created the relationship (i.e. no
third-party offers), and so long as they honor unsubscriptions without
jumping through hoops (like remembering obscure logins to stop
messages).

There are of course some grey areas around the edges such as lists that
mix opt-ins with harvested/bought lists, but they are fairly rare and I
can't suggest a pure way to deal with such things outside of hard work
and manual review, though I would discourage against collection methods
that can cause pollution such as automated submissions which for
instance can report things like Harry & David because the admin
blacklisted them locally, and then they show up as blocked spam that
Sniffer didn't hit and are submitted.  I think this may be a
contributing factor.  I would prefer that people manually report, and
that they know the rules for what to report (i.e. the spam definition).

I didn't intend to draw you into this discussion within another thread
or at this time, but I do think that Sniffer would benefit from some
more focus on the FP issues.
I hope this helps and I am willing to lend some more ideas or opinions
if you want to bounce some of your own off of me or the list.

Thanks,

Matt





Re: [sniffer]A design question - how many DNS based tests?

2006-06-06 Thread Matt
I have 46 RBL's configured, though 16 are configured to score 
differently on last hop and prior hops.  I would say that more than 35 
of these are things that I would not like to lose.


I weight most RBL's at around half of my Hold weight in Declude.  False 
positives on my system typically hit about 5 different tests of various 
types before they get enough weight to be blocked.  Sniffer is the test 
most often a part of false positives, being a contributing factor in 
about half of them.  About 3/4 of all FP's (things that are blocked by 
my system) are some form of automated or bulk E-mail.  That's not to say 
that other tests are more accurate; they are just scored more 
appropriately and tend to hit less often, but the FP issues with Sniffer 
have grown due to cross checking automated rules with other lists that I 
use, causing two hits on a single piece of data.  For instance, if SURBL 
has an FP on a domain, it is possible that Sniffer will pick that up too 
based on an automated cross reference, and it doesn't take but one 
additional minor test to push something into Hold on my system.


IMO, the more tests, the better.  It's the best way to mitigate FP's.  I 
don't look to Sniffer as anything more than a contributor to the overall 
score.  Sniffer can't block a message going to my system on its own due 
to its weighting.  I think it's more important to be accurate than to 
hit more volume, and handling false positive reports with Sniffer is 
cumbersome for both me and Sniffer.  I would hope that any changes seek 
to increase accuracy above all else.  Sniffer does a very good job of 
keeping up with spam, and its main issues with leakage are caused by not 
being real-time, but that's ok with me.  At the same time, as noted above, 
Sniffer is the test most often a part of false positives on my system, and 
those FP issues have grown both from cross checking automated rules with 
other lists that I use (causing two hits on a single piece of data) and 
from the growth of the Sniffer userbase, which has become more likely to 
report first-party advertising as spam, either manually or through an 
automated submission mechanism.


Matt




Pete McNeil wrote:


Hello Sniffer Folks,

I have a design question for you...

How many DNS based tests do you use in your filter system?

How many of them really matter?

Thanks!

_M

 




#
This message is sent to you because you are subscribed to
 the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



[sniffer] Message loop

2006-04-19 Thread Matt




Pete,

I tried replying to some FP reports and I received back some loop
reports from your gateway:



Failed to deliver to '[EMAIL PROTECTED]'
mail loop: too many hops (too many 'Received:' header fields)






Reporting-MTA: dns; server75.appriver.com

Original-Recipient: rfc822;<[EMAIL PROTECTED]>
Final-Recipient: system;<[EMAIL PROTECTED]>
Action: failed
Status: 5.0.0





Received: from [10.238.11.79] (HELO inbound.appriver.com)
  by server75.appriver.com (CommuniGate Pro SMTP 5.0.6)
  with ESMTP id 204450520 for [EMAIL PROTECTED]; Wed, 19 Apr 2006 16:36:23 -0400
Received: by inbound.appriver.com (CommuniGate Pro PIPE 5.0.6)
  with PIPE id 40116463; Wed, 19 Apr 2006 16:36:22 -0400
Received: from [69.20.119.203] (HELO server70.appriver.com)
  by inbound.appriver.com (CommuniGate Pro SMTP 5.0.6)
  with ESMTP id 40116469 for [EMAIL PROTECTED]; Wed, 19 Apr 2006 16:36:18 -0400
Received: by server70.appriver.com (CommuniGate Pro PIPE 5.0.6)
  with PIPE id 20584892; Wed, 19 Apr 2006 16:31:39 -0400
Received: from [12.6.93.226] (HELO exchange01.apprivercorp.com)
  by server70.appriver.com (CommuniGate Pro SMTP 5.0.6)
  with ESMTP id 20584879 for [EMAIL PROTECTED]; Wed, 19 Apr 2006 16:31:35 -0400
Received: from server75.appriver.com ([207.97.224.142]) by exchange01.apprivercorp.com with Microsoft SMTPSVC(6.0.3790.1830);
	 Wed, 19 Apr 2006 15:32:57 -0500
Received: from [10.238.11.110] (HELO inbound.appriver.com)
  by server75.appriver.com (CommuniGate Pro SMTP 5.0.6)
  with ESMTP id 204446070; Wed, 19 Apr 2006 16:31:35 -0400
Received: by inbound.appriver.com (CommuniGate Pro PIPE 5.0.6)
  with PIPE id 43046685; Wed, 19 Apr 2006 16:31:33 -0400
Received: from [207.97.227.84] (HELO www03.appriver.com)
  by inbound.appriver.com (CommuniGate Pro SMTP 5.0.6)
  with ESMTP id 43046648; Wed, 19 Apr 2006 16:31:27 -0400
Received: from outbound.appriver.com [10.238.11.133] by www03.appriver.com with ESMTP
  (SMTPD32-8.15) id ADFB2787015E; Wed, 19 Apr 2006 16:30:51 -0400
Received: from [10.238.11.92] (HELO server87.appriver.com)
  by outbound.appriver.com (CommuniGate Pro SMTP 5.0.2)
  with SMTP id 75970524 for [EMAIL PROTECTED]; Wed, 19 Apr 2006 16:30:43 -0400
Received: by server87.appriver.com (CommuniGate Pro PIPE 5.0.6)
  with PIPE id 153351626; Wed, 19 Apr 2006 16:30:43 -0400
Received: from [216.88.36.96] (HELO mnr1.microneil.com)
  by server87.appriver.com (CommuniGate Pro SMTP 5.0.6)
  with ESMTP id 153351606 for [EMAIL PROTECTED]; Wed, 19 Apr 2006 16:30:34 -0400
Received: by mnr1.microneil.com (Postfix, from userid 93)
	id 578433F20C0; Wed, 19 Apr 2006 16:30:34 -0400 (EDT)
X-SortMonster-MessageSniffer-Rules: snfrv2r3-v2-3.2
	0-69638-850--m
	0-113972-1237-3710-m
	0-113972-1237-12801-m
	0-113972-1237-23632-m
	0-113972-1237-31312-m
	0-69638-850--f
X-SortMonster-MessageSniffer-Result: 0
Received: from SortMonster.com (unknown [216.88.36.181])
	by mnr1.microneil.com (Postfix) with ESMTP id 0DFE03F20BE
	for <[EMAIL PROTECTED]>; Wed, 19 Apr 2006 16:30:34 -0400 (EDT)
Received: from outbound.appriver.com [207.97.229.125] by SortMonster.com with ESMTP
  (SMTPD32-6.05) id ADE1CCC009A; Wed, 19 Apr 2006 16:30:25 -0400
Received: from [10.238.11.52] (HELO inbound.appriver.com)
  by outbound.appriver.com (CommuniGate Pro SMTP 5.0.2)
  with ESMTP id 75970346 for [EMAIL PROTECTED]; Wed, 19 Apr 2006 16:30:24 -0400
Received: by inbound.appriver.com (CommuniGate Pro PIPE 5.0.6)
  with PIPE id 98719021; Wed, 19 Apr 2006 16:30:22 -0400
Received: from [66.109.52.12] (HELO mailpure.com)
  by inbound.appriver.com (CommuniGate Pro SMTP 5.0.6)
  with ESMTP id 98718972 for [EMAIL PROTECTED]; Wed, 19 Apr 2006 16:30:10 -0400
Received: from [192.168.0.100] [24.29.42.95] by mailpure.com with ESMTP
  (SMTPD32-8.15) id ADD0457300A2; Wed, 19 Apr 2006 16:30:08 -0400
Message-ID: <[EMAIL PROTECTED]>
Date: Wed, 19 Apr 2006 16:31:16 -0400
From: Matthew Bramble <[EMAIL PROTECTED]>
Organization: MailPure
User-Agent: Mozilla Thunderbird 1.0 (Windows/20041206)
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: Sniffer FP Support <[EMAIL PROTECTED]>
Subject: Re: hb064pkq - [EMAIL PROTECTED]
References: <[EMAIL PROTECTED]> <[EMAIL PROTECTED]>
In-Reply-To: <[EMAIL PROTECTED]>
Content-Type: multipart/alternative;
 boundary="040105010502030206010001"
X-MailPure: 
X-MailPure: Spam Score: 0
X-MailPure: Scan Time: 19 Apr 2006 at 16:30:09 EST
X-MailPure: Spool File: D9DD0457300A2541E.SMD
X-MailPure: Server Name: 
X-MailPure: SMTP Sender: [EMAIL PROTECTED]
X-MailPure: Received From: (Private IP) [192.168.0.100]
X-MailPure: Country Chain: 
X-MailPure: 
X-MailPure: Spam and virus blocking services provided by MailPure.com
X-MailPure: 
X-Policy: sortmonster.com
X-Note: This Email was scanned by AppRiver SecureTide
X-Note: Spam Tests Failed: 
X-Country-Path: PRIV

Re: [sniffer] New RuleBot F002 Online

2006-03-13 Thread Matt

Pete,

I would definitely like to see rules classified for what they are based 
on instead of the content, but certainly I don't expect to see that 
without a major new release.


Rules such as those based on phrases, IP's, domains, patterns, and viruses 
all have different accuracies and issues.  If you were also to group 
them in a similar way, we could tag multiple rules for a single message 
so that for instance a phrase and a domain both hit on the same 
message.  My logs show that I average 3 matches for every final result.  
If this becomes a plan, I would proceed very carefully since doing it in 
a way that could cause a lot of cross-over pollution would make comboing 
such things for a higher score unwise.  I would in fact recommend 
creating something like 4 groups;


   1) IP's,
   2) Domains, E-mail addresses & Links,
   3) Patterns (like domain patterns and obfuscation), and
   4) Content.

There shouldn't be any crossover of FP's in such a thing, so multiple 
hits would be stronger.


In relation to the placement of RuleBot F002 results, I would just favor 
pretty much anything but the 60 and 63 groups because they are scored 
lower due to FP's on my system, and it has generally been said by others 
that this is the case on theirs as well.  F002 has the appearance of 
being hyper-accurate, and it would help if it was placed in a group with 
other hyper accurate results.  Even placing it in 61 (Experimental) 
would be preferred over 60.


Thanks,

Matt


Pete McNeil wrote:


On Friday, March 10, 2006, 3:41:00 PM, Darin wrote:

DC> Totally agree.  I'd like to see some separation between rules created by
DC> newer rulebots and preexisting rules.  That way if there becomes an issue
DC> with a bot, we can turn off one group quickly and easily.

There is no way to do this without completely reorganizing the result
codes or defeating the competitive ranking mechanisms.

If you feel strongly about it I can move these rule groups to lower
numbers on your local rulebase or make some other numbering scheme -
but I don't recommend it. Moving these rule groups to lower numbers
would cause them to win competitions with other rules where they would
normally not win.

At some point in the future we might renumber the rule groups again,
but I like to avoid this since there are so many folks that just don't
get the message (no matter what we do to publish it) when we make
changes like this and so any large scale changes tend to cause
confusion for very long periods.

For example: I still, on occasion, have questions about the
gray-hosting group which has not existed for quite a long time.

So far there has not been one FP reported on bot F002 and extremely
few on F001 - the vast majority of those associated with the very
first group of listings prior to the last two upgrades for the bot.

_M



This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 




This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] New RuleBot F002 Online

2006-03-10 Thread Matt

Pete,

In light of current and prolonged issues, this seems like a good and 
safe tactic.  I would appreciate it however if maybe you could place the 
rules in another result code since this result code is not as accurate 
as some others are and some of us weight it lower than others.


Thanks,

Matt



Pete McNeil wrote:


Hello Sniffer Folks,

 Rulebot F002 has been placed online.

 This rulebot captures and creates geocities web links from the
 "chatty" campaigns. This is largely a time saver for us humans... we
 will focus our attention more on abstracts for these campaigns now
 that F002 will be capturing the raw links.

 Rules from F002 will produce a 60 result code (Ungrouped).

 The engine is following a standard protocol that we have used for
 months. I expect no false positives from this one.

Thanks,
_M

Pete McNeil (Madscientist)
President, MicroNeil Research Corporation
Chief SortMonster (www.sortmonster.com)
Chief Scientist (www.armresearch.com)


This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 




This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] New rulebase compilers online.

2006-03-06 Thread Matt

Pete,

Does this mean that you are somehow supporting incremental rule base 
updates, or is it that the compiler is just much faster so we will get 
the same number of updates, but generally get them 40-120 minutes 
earlier in relation to the data that generated them?


Either way, definitely an improvement.  The closer to real-time we can 
get, the better.


Thanks,

Matt



Pete McNeil wrote:


Hello Sniffer Folks,

 I have just completed work to upgrade the rulebase compiler bots.
 They are now significantly more efficient. As a result you will be
 seeing updates more frequently.

 Previous lag was between 40-120 minutes.

 Current lag (sustained) is < 5 minutes.

 More timely updates should equate to lower spam leakage for new
 spam.

 You do not need to take any action on this. This note is for your
 information only.

Thanks,

_M

Pete McNeil (Madscientist)
President, MicroNeil Research Corporation
Chief SortMonster (www.sortmonster.com)
Chief Scientist (www.armresearch.com)


This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 




This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] New Rulebot F001

2006-03-06 Thread Matt
Lowest result code wins with Sniffer, 63 is the highest score currently, 
and these rules are going in a place where formerly they were only 
IP's, so you shouldn't need to adjust anything.  I would imagine that 
refinement should improve accuracy in the IP rules, though I don't 
believe that it will be near perfect.


I do however want to voice my general and ongoing concern about 
automation and extracting IP's from spamtraps.  This can be done, but 
one must be very careful to remove legitimate or compromised hosts, and 
most that don't bother to do so are even worse than SpamCop when it 
comes to listing ISP's and the like.


For a good picture of whether a host is spammy, one should also look at 
all of the good traffic, and make sure that there is a huge sample of 
data to work with.  Alternatively, one should be checking for things 
such as the host being legitimate (does it answer with a name that 
matches the reverse DNS or HELO that it gave you, does it have "mail" in 
the name, etc.).  Also, it makes sense to have different qualification 
mechanisms for zombie spam and static spammers since their heuristics 
are quite different and can be targeted more effectively and more 
accurately with mechanisms built to their patterns.


I do fear that automation of this sort, unless it is done in a very 
reserved manner (throwing out what can't be almost absolutely 
confirmed), will result in foreign hosts being caught, and large 
ISP's/E-mail providers much in the same way as they have been.  CBL 
takes the reserved approach and is therefore much, much more accurate 
than SpamCop, yet their results aren't that far off the last I checked.  
CBL primarily targets zombies with their methods, and they do this 
because it is much easier to find a sign of an illegitimate host (that 
also hit a spamtrap).


Matt



Jay Sudowski - Handy Networks LLC wrote:


There's been at least one FP ;)

--
Rule - 861038
Name    F001 for Message 2888327: [216.239.56.131]
Created 2006-03-02
Source  216.239.56.131
Hidden  false
Blocked false
Origin  Automated-SpamTrap
Type    ReceivedIP
Created By  [EMAIL PROTECTED]
Owner   [EMAIL PROTECTED]
Strength    2.08287379496965
False Reports   0

From Users  0

[FPR:B]

The rule is below threshold, and/or badly or broadly coded so it will be
removed from the core rulebase.


My concern with automated IP rule coding is that we use Sniffer because
it's extremely accurate.  Coding rules linked to IPs, particularly IPs
that are used by google or any large ISP to send large amounts of
(mostly legitimate) email is contrary to what Sniffer is great at, which
is tagging spam that no one else is.

Is response code 63 going to be utilized for any other purposes?  If
not, I will let Declude know to weight these responses lower than normal
Sniffer.

- Jay 
-Original Message-

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Pete McNeil
Sent: Monday, March 06, 2006 3:00 PM
To: sniffer@sortmonster.com
Subject: [sniffer] New Rulebot F001

Hello Sniffer folks,

 The first of the new rulebots is coming online.

 Rulebot F001 creates IP rules for sources that consistently fail
 many tests while also reaching the cleanest of our spamtraps.

 The rules will appear in group 63.

 The bot is playing catchup a bit (since there have been few IP rules
 at all since we disabled the old bots).

 The algorithms used in this bot have been tested manually for 2
 weeks with no false positives.

 Expect an increase in your rulebase size while F001 catches up with
 current spamtrap data.

Thanks,

_M

Pete McNeil (Madscientist)
President, MicroNeil Research Corporation
Chief SortMonster (www.sortmonster.com)
Chief Scientist (www.armresearch.com)


This E-Mail came from the Message Sniffer mailing list. For information
and (un)subscription instructions go to
http://www.sortmonster.com/MessageSniffer/Help/Help.html



This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 




This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Bad Rule - 828931

2006-02-07 Thread Matt

Pete,

The overflow directory disappeared when 3.x was introduced.  I posted a 
follow up on the Declude list about how to do this.


Matt



Pete McNeil wrote:


On Tuesday, February 7, 2006, 8:14:53 PM, David wrote:

DS> Hello Pete,

DS> Tuesday, February 7, 2006, 8:11:50 PM, you wrote:

DS>>> Not sure, can anyone think of a way to cross check this? What if I put
DS>>> all the released messages back through sniffer?

PM>> That would be good -- new rules were added to correctly capture the
PM>> bad stuff. I almost suggested something more complex.

DS> That said...anyone know specifics of reprocessing messages through
DS> Declude on Imail? I know that in 1.x Declude would drop some kind of
DS> marker so that q/d's copied into spool would not be reprocessed but I
DS> don't remember what it was and don't know if it works same in 3.x.

DS> Posted question on Declude JM list but no answer so far.

IIRC messages in the spool under scan would be locked until declude
was done with them. After that, placing the Q and D files into the
spool would mean that normal IMail processes would deliver them on the
next sweep.

The way around this was to place the messages back in the overflow
folder (I'm not sure which parts - I think the Q goes in overflow and
the D stays in spool -- someone will know for sure).

The theory there is that messages sent to the overflow folder are sent
there before they are scanned in order to backlog the extra processing
load. So, messages coming out of the overflow folder would naturally
be scanned ( for the first time - thinks the robot ).

_M


This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 




This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Bad Rule - 828931

2006-02-07 Thread Matt

Pete,

Gotcha.  Basically anything that I trapped that is over 10 KB may have 
failed this (because that would be indicative of having an attachment in 
base64).  It is much less likely to have hit on things without 
attachments, but it of course would be possible, and the bigger it was, 
the more likely that it could have failed.


I also searched my Sniffer logs for the rule number and found no hits.  
It appears that I missed the bad rulebase.


Thanks,

Matt



Pete McNeil wrote:


On Tuesday, February 7, 2006, 6:15:13 PM, David wrote:

DS> Sorry, wrong thread on the last post.

DS> Add'l question. Pete, what is the content of the rule?

The rule info is:

Rule - 828931
Name    C%+I%+A%+L%+I%+S%+V%+I%+A%+G%+R%+A
Created 2006-02-07
Source  C%+I%+A%+L%+I%+S%+V%+I%+A%+G%+R%+A
Hidden  false
Blocked false
Origin  User Submission
Type    Manual
Created By  [EMAIL PROTECTED]
Owner   [EMAIL PROTECTED]
Strength    3.84258274153269
False Reports   0
From Users  0


Rule belongs to following groups
[252] Problematic

The rule was an attempt to build an abstract matching two ed pill
names (you can see them in there) while compensating for heavy
obfuscation. The mistake was in using %+ through the rule.

The rule would match the intended spam (and there was a lot of it, so
22,055 most likely includes mostly spam).

Unfortunately it would also match messages containing the listed
capital letters in that order throughout the message. Essentially, if
the text is long enough then it will probably match. A greater chance
of FP match if the text of the message is in all caps. Also if there
is a badly coded base64 segment and file attachment (badly coded
base64 might not be decoded... raw base64 will contain many of these
letters in mixed case and therefore increase the probability of
matching them all).

Hope this helps,

_M






This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 




This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Bad Rule - 828931

2006-02-07 Thread Matt
Yes, knowing what the bad rule was will help me quickly narrow down the 
FP's that this might have caused.  I can't search my held E-mail for a 
rule number, and I don't have the tools set up or the knowledge of grep 
yet to do a piped query of Sniffer's logs to extract the spool file names.
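
For what it's worth, here is a rough sketch of that kind of query in 
Python (a hypothetical helper, not an official SNF tool).  It assumes the 
log is tab-delimited with the rule ID appearing as its own field, as 
suggested by the Baregrep expression David quotes below; the field layout 
may need adjusting for your log format.

# find_fps.py -- sketch only: print the log lines that reference a given
# Sniffer rule ID so the spool file names can be picked out for FP review.

import sys

def matching_lines(log_path, rule_id):
    with open(log_path, "r", errors="replace") as log:
        for line in log:
            fields = line.rstrip("\n").split("\t")
            if rule_id in fields:          # e.g. "828931" appearing as a field
                yield line.rstrip("\n")

if __name__ == "__main__":
    # usage: python find_fps.py sniffer.log 828931
    for hit in matching_lines(sys.argv[1], sys.argv[2]):
        print(hit)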


BTW, David, it is generally better not to hold or block on one single 
test, especially one that automates such listings (despite whatever 
safeguards there might be).


Thanks,

Matt



David Sullivan wrote:


Sorry, wrong thread on the last post.

Add'l question. Pete, what is the content of the rule?

Tuesday, February 7, 2006, 6:05:53 PM, you wrote:

DS> Somebody please tell me I'm doing something wrong here. I use this
DS> expression in Baregrep "Final\t828931" and it yields 22,055 matching
DS> lines across 3 of my 4 license's log files.

DS> Since this is set to my hold weight, I'm assuming that means I've had
DS> 22,055 holds on this rule?




 




This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Watch out... SURBL & SORBS full of large ISPs and Antispamprovidres.

2006-01-17 Thread Matt

Pete,

I reviewed my Hold range going back to Monday morning and I wasn't able 
to find anything out of the ordinary.  I also searched my logs from my 
URIBL tool that queries SURBL among other things, and I wasn't able to 
find any hits for those domains that you pointed out.  I guess that I 
wasn't affected.


As far as promoting such domains to Sniffer through automated means 
goes, I believe that this helps substantiate the need for adding extra 
qualifications.  For instance, the chances of a 2 letter dot-com domain 
being a legitimately taggable spam domain are almost zero.  To a lesser 
extent the same is true as you add on more characters.  Also, it would 
be very helpful for such situations and false positives in general if 
you were to track long-standing domains that appear in ham and don't add 
these automatically by cross checking these blacklists.  There are many 
different ways to accomplish this.  I have found over time that foreign 
free E-mail services can get picked up by Sniffer, and because these 
services are frequently forged and legitimate traffic is low enough that 
people don't often either notice/report false positives, that these 
rules stay high in strength and live a very long time.  You can in fact 
prevent this from happening to a large extent with further validation.  
SURBL is subject to false positives on such things, but they expire such 
rules using different techniques that prevent them from being long-term 
issues, but these cross-checked false positives can have a life of their 
own on Sniffer sometimes.


Thanks,

Matt



Pete McNeil wrote:


On Tuesday, January 17, 2006, 7:21:11 AM, Matt wrote:

M> Pete,

M> w3.org would be a huge problem because Outlook will insert this in the
M> XML headers of any HTML generated E-mail.

M> If you could give us an idea of when this started and possibly ended, 
M> that would help in the process of review.


Indications are that the rule was in our system for only a couple of
hours this morning before we caught what was going on. Many folks
won't have ever seen the rule... though it may still be in surbl.

In fact, all of these rules that we know of followed very much the
same profile. Two of us were working in the rulebase at the time due
to heavy outscatter from a fake ph.d campaign and several new variants
of chatty_watches, chatty_drugs, and druglist.

We're continuing to look for any rules that might have entered our
system this way and we haven't found any new ones since about the time
I wrote my first post on it.

I'm about to run through false positives to see what might have been
reported and remove those.

Hope this helps,

_M



This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 




This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Watch out... SURBL & SORBS full of large ISPs and Antispamprovidres.

2006-01-17 Thread Matt

Pete,

w3.org would be a huge problem because Outlook will insert this in the 
XML headers of any HTML generated E-mail.


If you could give us an idea of when this started and possibly ended, 
that would help in the process of review.


Thanks,

Matt



Pete McNeil wrote:


Hello Sniffer Folks,

 Watch out for false positives. This morning along with the current
 spam storm we discovered that SURBL and SORBs are listing a large
 number of ISP domains and anti-spam service/software providers.

 As a result, many of these were tagged by our bots due to spam
 arriving at our system with those domains and IPs. Most IPs and
 domains for these services are coded with "nokens" in our system to
 prevent this kind of thing, but a few slipped through.

 We are aggressively hunting any more that might have arrived.

 You may want to temporarily reduce the weight of the experimental IP
 and experimental ad-hoc rule groups until we have identified and
 removed the bad rules we don't know about yet.

 Please also do your best to report any false positives that you do
 identify so that we can remove any bad rules. I don't expect that
 there will be too many, but I do want to clear them out quickly if
 they are there.

 Please also, if you haven't already, review the false positive
 procedures: 
http://www.sortmonster.com/MessageSniffer/Help/FalsePositivesHelp.html

 Pay special attention to the rule-panic procedure and feature in
 case you are one of the services hit by these bad entries.

 An example of some that we've found in SURBL for example are
 declude.com, usinternet.com, and w3.org

 It's not clear yet how large the problem is, but I'm sure it will be
 resolved soon.

 Hope this helps,

Thanks,
_M

Pete McNeil (Madscientist)
President, MicroNeil Research Corporation
Chief SortMonster (www.sortmonster.com)
Chief Scientist (www.armresearch.com)


This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 




This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Rash of false positives

2005-11-09 Thread Matt




John,

The mystery heap issue is a memory issue with Windows where it only
reserves so much memory for running things like Declude, Sniffer, other
external tests and your virus scanners.  If you have something that is
hanging, running slowly, or taking too long, it can gobble up all of
the memory available to these launched processes and then result in
errors.  Generally speaking, you can only get about 40 or so processes
of these types to run at one time before you could start seeing these
errors.  Declude counts as one process, and often there is one other
process that Declude launches that counts toward this total (external tests 
and virus scanners are all run in serial, so only one can be launched at 
a time by a single Declude process).  If you have something like a
virus scanner that crashes and then pops up a window on your next
login, this can count towards the number of open processes.

You can specify in Declude how many processes to run before Declude
starts dumping things into an overflow, either the overflow folder in
2.x and before, or something under proc in 3.x.  If you create a file
called Declude.cfg and place in it "PROCESSES   20" that should protect
you from hitting the mystery heap's limitations unless something is
crashing and hanging.  You might want to check Task Manager for
processes to verify if things are hanging since not everything will pop
up a window.
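
For example, a minimal Declude.cfg (placed wherever your Declude 
installation reads its config from) would contain just the one line:

PROCESSES   20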

I believe that running Sniffer in persistent mode will help to
alleviate this condition, but it's only one part and if the mystery
heap is the cause, it might just cause the errors to be triggered on
other IMail launched processes including Declude.exe and your virus
scanners.

Matt



John Moore wrote:

  
  
  
  
  
  


  
  
  

  
  
We have not run snf2check on the updates.  And it may be a coincidence or 
bad timing that sniffer appears to be the culprit.  But we have stopped 
sniffer (commented out in the declude global.cfg) for an observed period of 
time and the mail never stops (and had never stopped before sniffer) and 
conversely, it only stops when sniffer is running.

We have not gone the extra steps of putting sniffer in persistent mode.

We are looking at moving the imail/declude/sniffer setup to a newer box 
with more resources.  Currently on a dell 2450 dual 833 and 1 gig of ram 
and raid 5.  Volume of email is less than 10,000 emails per day.

J
   
  
  
  
  From:
[EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On
Behalf Of Darin Cox
  Sent: Wednesday,
November 09, 2005
1:47 PM
  To: sniffer@SortMonster.com
  Subject: Re: Re[4]:
[sniffer] Rash
of false positives
  
   
  
Are corrupted rulebase files the culprit?  How do you update... and do you 
run snf2check on the updates?

Just wondering if the rulebase file is the problem, if the problem occurs 
during the update, or if you are running into obscure errors with the EXE 
itself.

Darin.
  
  
   
  
  
   
  
  
  - Original
Message - 
  
  From: John Moore
  
  
  
  To: sniffer@SortMonster.com
  
  
  
  Sent: Wednesday,
November 09, 2005 12:42 PM
  
  
  Subject: RE: Re[4]:
[sniffer] Rash of false positives
  
  
  
   
  
  We had this
same thing happen.
  It has been
happening more frequently
recently and we are looking into disabling sniffer as it seems to be
the
culprit each time.
  John Moore
305 Spin
   
  
  
  
  From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On
Behalf Of Richard Farris
  Sent: Wednesday,
November 09, 2005
11:38 AM
  To: sniffer@SortMonster.com
  Subject: Re: Re[4]:
[sniffer] Rash
of false positives
  
   
  
  This
morning my server quit sending mail and my tech said the Dr.
Watson error on the server was my Sniffer file...I rebooted and thought
it was
OK but quit again..I had a lot of mail back logged...so I updated a new
rule
base but it did not seem to help.  I reinstalled Imail and things seem
OK but
slow since there is such a back log of mail.  If things don't get back
to
normal I will be back..
  
  
  
Richard Farris
Ethixs Online
1.270.247. Office
1.800.548.3877 Tech Support
"Crossroads to a Cleaner Internet"
  
  

-
Original Message - 


From: Pete
McNeil 


To: Darin
Cox



Sent:
Tuesday, November 08, 2005 3:03 PM


Subject: Re[4]:
[sniffer] Rash of false positives


 

On
Tuesday, November 8, 2005, 3:25:20
PM, Darin wrote:
 


  

  
  > 
  
  
  Hi Pete,
   
  There was a
consistent stream of false positives over the mentioned time period,
not just a blast at a particular time.  They suddenly started at 5pm
(shortly after a 4:30pm rulesbase update), and were fairly evenly
spread from 5pm - 11pm and 6am - 10am today (not many legitimate emails
came in between 11pm and 6am)...spanning 4 other rulebase updates at
8:40pm, 12am, 3am, and 6:20am.  There were a number of diffe

Re: [sniffer] Large amounts of spam still getting through

2005-10-14 Thread Matt
This is a form of graylisting, but instead of returning a temporary SMTP 
error, it spools the E-mail for processing later.


The better way around this is to use two pieces of information and use 
DNS as the database.  You would capture the Mail From and IP address and 
the Reverse DNS address.  On the first match of either the Mail From + 
IP or the Mail From + Reverse DNS base domain, you would add a record to 
the zone and spool the message for scanning X minutes later.  The 
lookups would use a MailFromBL format where you replace the @ with a dot, 
remove characters that aren't allowed in a zone name, and replace double 
dots with single dots.  For instance:


   Mail From: [EMAIL PROTECTED]
   Reverse DNS: mnr1.microneil.com
   IP: 216.88.36.96
   Lookups:
      microneil.com.sniffer-owner.SortMonster.com.graylist.example.com
      96.36.88.216.sniffer-owner.SortMonster.com.graylist.example.com
   A Records:
      *.microneil.com.sniffer-owner.SortMonster.com   A   127.0.0.2
      96.36.88.216.sniffer-owner.SortMonster.com      A   127.0.0.2


With Declude you could then do a lookup like so:

   SEENBEFORE1   dnsbl   %REVDNS%.%MAILFROMBL%.graylist.example.com   127.0.0.2   0   0
   SEENBEFORE2   dnsbl   %IP4R%.%MAILFROMBL%.graylist.example.com     127.0.0.2   0   0


Note that the reverse DNS base domain has a wildcard in it. This is 
important because of issues with graylisting where multiple servers can 
send messages for the same senders, and you don't want to be continually 
delaying them so you just wildcard the base domain.
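
Here is a rough sketch of that name-mangling in Python (a hypothetical 
helper, not part of Declude or Message Sniffer; the sample values are the 
ones from the example above, and the base-domain guess is deliberately 
crude):

# graylist_keys.py -- sketch of the lookup-name construction described above.

import re

def mailfrom_bl(mail_from):
    # '@' becomes '.', characters invalid in a zone name are dropped,
    # and runs of dots collapse to a single dot.
    name = mail_from.replace("@", ".")
    name = re.sub(r"[^A-Za-z0-9.-]", "", name)
    name = re.sub(r"\.{2,}", ".", name).strip(".")
    return name

def ip4r(ip):
    # Reverse the octets of an IPv4 address (standard DNSBL style).
    return ".".join(reversed(ip.split(".")))

def lookups(mail_from, rev_dns, ip, zone="graylist.example.com"):
    key = mailfrom_bl(mail_from)
    base_domain = ".".join(rev_dns.split(".")[-2:])   # crude base-domain guess
    return [base_domain + "." + key + "." + zone,
            ip4r(ip) + "." + key + "." + zone]

if __name__ == "__main__":
    for q in lookups("sniffer-owner@SortMonster.com", "mnr1.microneil.com", "216.88.36.96"):
        print(q)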


You could then use Declude's COPYFILE action to copy the file to a 
holding directory and then a separate process that would move pairs of 
files older than X minutes back into Declude's overflow or proc 
(depending on the version).  The program that moves the files could also 
extract the IP, Reverse DNS and Mail From information and add it 
automatically to a zone on every run.
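
A rough sketch of that mover process follows (Python rather than VBScript, 
purely to illustrate; the directory names, Q/D pairing, and the age 
threshold are placeholders, not Declude conventions):

# requeue_held.py -- sketch only: move Q*/D* spool file pairs that have sat
# in a holding directory for more than AGE_MINUTES back into the directory
# Declude rescans.  All paths and the age threshold are placeholders.

import os
import shutil
import time

HOLD_DIR = r"c:\imail\declude\hold"     # hypothetical holding directory
PROC_DIR = r"c:\imail\declude\proc"     # hypothetical reprocessing directory
AGE_MINUTES = 120                       # hypothetical delay before rescanning

def old_enough(path, now):
    return (now - os.path.getmtime(path)) >= AGE_MINUTES * 60

def requeue():
    now = time.time()
    for name in os.listdir(HOLD_DIR):
        if not name.upper().startswith("Q"):
            continue                            # drive the move from the Q file
        q_path = os.path.join(HOLD_DIR, name)
        d_path = os.path.join(HOLD_DIR, "D" + name[1:])
        if not (os.path.isfile(d_path) and old_enough(q_path, now)):
            continue
        # Move the pair together so a half-queued message is never left behind.
        shutil.move(d_path, os.path.join(PROC_DIR, os.path.basename(d_path)))
        shutil.move(q_path, os.path.join(PROC_DIR, os.path.basename(q_path)))

if __name__ == "__main__":
    requeue()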


This has merit, but there are a couple of issues.  First, Declude's 
MailFromBL needs some tweaking so that it doesn't use characters that 
are invalid in zones, and it needs to replace double dots which are also 
invalid.  Secondly, I know that this isn't a good global solution 
because some people don't want their messages delayed, and to give 
things like Sniffer and SpamCop enough time to react, it might mean 
holding for several hours and that isn't optimal for all 
users/customers, though some might take the bad with the good.


I could do this fairly easily in plain VBScript, and it only needs one 
script since Declude could handle the spooling of the message with 
existing functionality.


Matt




Mike Nice wrote:

getting much better at what they do.  When a spammer uses Geocities 
links, hijacks real accounts on major providers to send spam through, 
and changes their techniques every few hours, it makes it difficult 
for Sniffer to proactively block them, and the delay between rulebase 
updates means a delay in catching things that have been tagged.



 This brings to mind a technique with optional adaptive delay - 
enabled by the user. Each mail is assigned a 'triplicate': (To_Email, 
From_Email, and domain_of_sending_server).  Previously unknown 
triplicates are held for a period of time before being examined for 
spam.  The delay is long enough that SpamCop, Sniffer, and InvURIBL 
mailtraps see copies of the spam and update the blacklists.


  This would be hard to do with the stock IMail, but possibly could be 
done by Declude with the V3 architecture and a database.


  It still doesn't provide a good answer to the problem of spammers 
hijacking a computer and sending spam through legitimate servers.



This E-Mail came from the Message Sniffer mailing list. For 
information and (un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html





This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] POP Approach

2005-10-14 Thread Matt
Funny, I sent a message about this last night, and since I didn't yet 
reply to Pete, I'll do it here.


IMO, false positives are way under reported, and reporting spam has a 
negligible effect on capture performance.  It seems to me that people 
should be more focused on reporting false positives, and that Sniffer 
should be more focused in correcting them, and allowing for POP 
retrieval would be a step in that direction.  I disagree that POP 
retrieval would impede the interactivity of the process, it would just 
aid the first step in reporting it.  It is much easier for me to copy 
false positives that hit Sniffer to a mail box instead of manually 
reporting them.  This however is something that many won't bother doing 
and that might be a good enough reason to not bother with it.  I am 
still going to automate my own reports.  I have reprocess links in our 
spam review accounts, and I will just need to end up writing something 
complicated that takes the old message and creates a new one with it 
attached.  This is a must for me because of the time involved.
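
The "wrap the old message in a new one" step is roughly this in Python 
(a sketch only, assuming the false positive sits on disk as a raw 
message file; the addresses and paths are placeholders):

# fp_report.py -- rough sketch; the addresses and SMTP host are placeholders.
import smtplib
from email import policy
from email.message import EmailMessage
from email.parser import BytesParser

def build_fp_report(original_path, report_to="false-positives@example.com"):
    with open(original_path, "rb") as f:
        original = BytesParser(policy=policy.default).parse(f)
    report = EmailMessage()
    report["From"] = "postmaster@example.com"
    report["To"] = report_to
    report["Subject"] = "False positive report"
    report.set_content("The attached message was tagged incorrectly.")
    report.add_attachment(original)   # attaches the original as message/rfc822
    return report

# smtplib.SMTP("localhost").send_message(build_fp_report(r"C:\review\DEXAMPLE.SMD"))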


And just to clarify the issue with spam submissions being mostly 
ineffective.  I have found from maintaining my own blacklists that 
reactive methods of blocking provide only a short-term gain when 
combined with the other components in the system.  I could literally sit 
here blacklisting things 24 hours a day and get less than a 0.1% gain in 
spam blocking, and it's better that I focus on other things with more 
reward.  Sniffer will pick up almost everything that we see without 
effort, especially the big zombie spam campaigns and new static spam 
blocks.  The only things that Sniffer won't generally see without our 
help are the niche spam, low-volume spam campaigns, and some foreign 
spam.  I'm sure that Pete likes to have the feedback of what is getting 
through, but I don't know that it does very much for capturing a 
measurably larger amount of spam.


Matt



Pete McNeil wrote:


On Friday, October 14, 2005, 11:18:18 AM, Daniel wrote:

DB> Hello Pete,

DB> Are you going to implement something similar for false positives?

No.

The false positive process is very interactive, so each case is
handled individually until it is resolved. This works best as it is
currently described because a new email thread is created for each new
case and that thread can be followed to ground.

In contrast, spam submissions are treated anonymously without any
further interaction so it is appropriate for us to pick up the
messages and move on with our processing.

Hope this helps,

_M



This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 



This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Large amounts of spam still getting through

2005-10-14 Thread Matt

Chuck,

The issue is not that Sniffer is lapsing, but that the spammers are 
getting much better at what they do.  When a spammer uses Geocities 
links, hijacks real accounts on major providers to send spam through, 
and changes their techniques every few hours, it makes it difficult for 
Sniffer to proactively block them, and the delay between rulebase 
updates means a delay in catching things that have been tagged.  These 
drug spams have also rendered tools like URIBL useless, and most here 
can only rely on blacklists, but the volume is high enough, and the IP's 
are clean enough that some still get through.  Sniffer mostly relies on 
pseudo-URIBL functionality, but it can detect such things given that 
rules have been written.


The spammers are definitely exploiting many weaknesses, and it's a very 
difficult issue.  Personally I was only able to catch these by 
programming a whole application that parses messages and analyzes 
patterns, and I still have to add new patterns for this stuff as it 
morphs.  Pete would call what I do a neural net, or a process that works 
on multiple levels to create a pattern.  Sniffer wasn't designed to 
operate like this, and that worked great for the most part until spam 
blockers raised the bar high enough that the spammers have improved 
their techniques.  I don't expect for it to get any better, in fact I 
expect that spammers will start hacking AUTH to send through legitimate 
servers and exploiting things like free hosting sites in much greater 
numbers than they are now.  When Sniffer goes real-time, it will allow 
it to keep up with the spammers, but for the moment, some of these guys 
are leading the way.


Matt



Chuck Schick wrote:


Pete:

Thanks.  I am just frustrated by the continued spam growth.

Chuck Schick
Warp 8, Inc.
(303)-421-5140
www.warp8.com

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Pete McNeil
Sent: Friday, October 14, 2005 9:08 AM
To: Chuck Schick
Subject: Re: [sniffer] Large amounts of spam still getting through


On Friday, October 14, 2005, 10:59:05 AM, Chuck wrote:

CS> We are seeing a lot of the drug spam getting through.  Anyway that 
CS> sniffer could start catching these.  And yes I am forwarding them 
CS> all.


There are a number of new campaigns launched today with some heavy bandwidth
behind them. We have rules in place for most (if not all) of the new stuff,
however there is a delay before these rules might get to you - during that
window some of these will get through.

Over the past few months we have increased the rate at which we send out
updates - nearly cutting the time in half. Updates are now sent every 180
minutes or so. We are also working on the next version which will allow for
nearly instantaneous updates.

In the mean time we will continue to work on speeding things up as much as
we can.

Hope this helps,

_M



This E-Mail came from the Message Sniffer mailing list. For information and
(un)subscription instructions go to
http://www.sortmonster.com/MessageSniffer/Help/Help.html


This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 



This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Sniffer working now

2005-10-11 Thread Matt




Pete,

You're one of those "Reply-All" people aren't you :)

FYI, I had a customer press Reply-All on a message with 1,880
recipients on Thursday...he still can't use his account.  The number of
recipients uncovered a bug in more than one E-mail server where it
couldn't figure out if it had bounced the message successfully, so it
kept retrying and retrying.  These messages were also rather large, and
our URIBL component went nuts while trying to process every address
with DNS lookups so it was almost a denial of service for us as the
recipient.

Just figured I would give you or anyone else a kick out of the
Reply-All habit :)

Matt





Pete McNeil wrote:

  
  
  
  GREAT!
  
  
  _M
  
  
  On Tuesday, October 11, 2005, 10:58:10 AM, Stephen wrote:
  
  
  
  

  



Pete, 
It's working fine now...






agseap2t  20051011023411  d249f017a01ec.smd  0   16  Match  29721   60  1549  1571  63
agseap2t  20051011023411  d249f017a01ec.smd  0   16  Final  67573   52  0     3954  63
agseap2t  20051011030147  d2b19014f0222.smd  15  0   White  74524   0   1     621   51
agseap2t  20051011030147  d2b19014f0222.smd  15  0   Final  74524   0   0     2854  51
agseap2t  20051011031628  d2e89017a0237.smd  15  16  Match  506947  61  20    32    58
agseap2t  20051011031628  d2e89017a0237.smd  15  16  Match  183021  61  1975  1984  58
agseap2t  20051011031628  d2e89017a0237.smd  15  16  Final  506947  61  0     6622  58
agseap2t  20051011032054  -INITIALIZING-  0  0  ERROR_RULE_AUTH  73  0  0  0  0
agseap2t  20051011032054  -INITIALIZING-  0  0  ERROR_RULE_AUTH  73  0  0  0  0
agseap2t  20051011032054  -INITIALIZING-  0  0  ERROR_RULE_AUTH  73  0  0  0  0
agseap2t  20051011032055  -INITIALIZING-  0  0  ERROR_RULE_AUTH  73  0  0  0  0

--- Continued ---

agseap2t  20051011033829  -INITIALIZING-  0  0  ERROR_RULE_AUTH  73  0  0  0  0
agseap2t  20051011033829  -INITIALIZING-  0  0  ERROR_RULE_AUTH  73  0  0  0  0
agseap2t  20051011033829  -INITIALIZING-  0  0  ERROR_RULE_AUTH  73  0  0  0  0
agseap2t  20051011033830  -INITIALIZING-  0  0  ERROR_RULE_AUTH  73  0  0  0  0
agseap2t  20051011033830  -INITIALIZING-  0  0  ERROR_RULE_AUTH  73  0  0  0  0
agseap2t  20051011034716  d35c001550253.smd  0   94  Clean  0       0   0     2529  57
agseap2t  20051011043007  d3fc80174028a.smd  16  16  Match  232386  63  1     46    60
agseap2t  20051011043007  d3fc80174028a.smd  16  16  Match  459995  58  6557  6599  60
agseap2t  20051011043007  d3fc80174028a.smd  16  16  Final  459995  58  0     6727  60
agseap2t  20051011043140  d401d017a028c.smd  0   31  Match  510479  52  217   280   57
agseap2t  20051011043140  d401d017a028c.smd  0   31  Match  496805  58  5770  5782  57
agseap2t  20051011043140  d401d017a028c.smd  0   31  Match  465550  59  7180  7218  57
agseap2t  20051011043140  d401d017a028c.smd  0   31  Final  510479  52  0     7335  57
agseap2t  20051011043619  d413f018f028f.smd  16  16  Match  506947  61  20    32    50

--- Continued ---

agseap2t  20051011145215  dd19b0195073f.smd  0   16  Clean  0       0   0     6736   66
agseap2t  20051011145219  dd19f018f0741.smd  0   31  Match  48763   61  1     44     43
agseap2t  20051011145219  dd19f018f0741.smd  0   31  Match  462947  61  1598  1613   43
agseap2t  20051011145219  dd19f018f0741.smd  0   31  Final  48763   61  0     15686  43
agseap2t  20051011145415  dd213016d0744.smd  0   47  White  257246  0   0     163    50
agseap2t  20051011145415  dd213016d0744.smd  0   47  Final  257246  0   0     13308  50




Today's log of a new upload:






--07:43:37--  http://www.sortmonster.net/Sniffer/Updates/agseap2t.snf
           => `agseap2t.new.gz'
Resolving w

Re: [sniffer] YAhoo mails failing sniffer?

2005-09-21 Thread Matt

Quick follow-up.  The bad rule appears to be 497585.

Matt



Marc Catuogno wrote:


I'm seeing a few legit e-mails from Yahoo failing sniffer.  Anyone else?

---
[This E-mail scanned for viruses by Declude Virus]


This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 



This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] YAhoo mails failing sniffer?

2005-09-21 Thread Matt
I have noted a few.  I think that this has something to do with some 
Phishing rules that are hitting on content in combination with the Yahoo 
inserted footer that is advertising donations for Hurricane Katrina.


I haven't reported my latest batch of FP's yet, but I will do so now.

Matt



Marc Catuogno wrote:


I'm seeing a few legit e-mails from Yahoo failing sniffer.  Anyone else?

---
[This E-mail scanned for viruses by Declude Virus]


This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


 



This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Sniffer taking a long time?

2005-08-02 Thread Matt




Dan,

Think about the fact that the entire rulebase and the executable don't
have to be read except on occasion when running as a service, and
memory doesn't need to be allocated to the application each time it is
run.  This should also help with the mystery heap issues that can occur
on an overloaded IMail/Declude server under very heavy load.  If you
ever get bursts of traffic, this can come in handy.

Matt



Dan Horne wrote:

  So basically, what you are saying is that my volume is really too low to take advantage of the persistent sniffer (and such may actually decrease my performance), and I should stick with the non-service version.  Is that right?  That is about what I thought (without the details of how sniffer works, I just wanted to be sure).

Thanks, Pete.

Dan Horne

  
  
-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED]] On Behalf Of Pete McNeil
Sent: Tuesday, August 02, 2005 4:09 PM
To: Dan Horne
Subject: Re[2]: [sniffer] Sniffer taking a long time?

After following through all of this and looking at the .stat 
file, I think I see what's going on.

Now that it is running and producing a .stat file, the flow 
rate is very low. According to the stat data, about 6 msgs / minute.

Note the poll and loop times are in the 450 - 550 ms range.

SNF with the persistent engine is built for high throughput, 
but it's also built to play nice.

The maximum poll time gets up to 2 seconds or so (sound familiar?)

If there are no messages for a while, then everything slows 
down until the first message goes through. For that first 
message, the SNF client will probably wait about 2 seconds 
before looking for its result because that's what the stat 
file will tell it to do.

Since the next message probably won't come around for a few 
seconds, that next message will probably wait about 2 seconds also.

If you were doing 6 messages a second then all of the times 
would be much lower and so would the individual delays.

When you turn off the persistent instance, each new message 
causes a client to look and see if there are any other peers 
acting as servers... Since the messages are few and far 
between, the client will elect to be a server (momentarily), 
will find no work but its own, will process its own message 
and leave. -- This is the automatic peer-server mode. It will 
always work like this unless more than one message is being 
processed at the same moment.

In peer-server mode, since there is nothing else going on and 
no persistent instance to coordinate the operations, each 
message will get processed as fast as the rulebase can be 
loaded and then the program will drop.

When the persistent instance is introduced, it sets the pace 
- and since there are no other messages, each client will 
wait about 2 seconds (or half a second or so with the .stat 
file contents you show) before it begins looking for its results.

The server instance will also wait a bit before looking for 
new jobs so that the file system isn't constantly being scanned.

Of course, if a burst of messages come through then the 
pacing will speed up as much as necessary to keep up with the volume.
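
Stripped of the details, the pacing described above amounts to something 
like this (an illustrative sketch only, not SNF's actual code; the 
2-second cap and the one-poll-per-expected-message formula are 
assumptions drawn from the description):

# Illustrative pacing loop -- not SNF's actual code.  The poll delay
# stretches toward a cap when traffic is light and shrinks as the
# message rate rises.
MAX_POLL_SECONDS = 2.0    # "maximum poll time gets up to 2 seconds or so"
MIN_POLL_SECONDS = 0.05

def next_poll_delay(msgs_per_minute):
    if msgs_per_minute <= 0:
        return MAX_POLL_SECONDS
    # Roughly one poll per expected message, clamped to the configured bounds.
    return max(MIN_POLL_SECONDS, min(MAX_POLL_SECONDS, 60.0 / msgs_per_minute))

# At ~6 msgs/minute the delay sits at the 2-second cap; at 6 msgs/second
# (360/minute) it drops to roughly 170 ms.
print(next_poll_delay(6))     # 2.0
print(next_poll_delay(360))   # ~0.17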

Hope this helps,

_M

On Tuesday, August 2, 2005, 3:38:52 PM, Dan wrote:

DH> No, I followed your instructions exactly (and not for the first 
DH> time).  I didn't add those extra values until today.  Prior to  
DH> adding the AppDirectory value, the service was taking a minute to 
DH> scan emails;  after adding it the scan time went to around 2 
DH> seconds.  I can't get it any  lower than that.  Initially 
mine was 
DH> set up exactly as you said, with only  "Application" 
containing the 
DH> path, authcode and persistent.  Today after  hearing no 
suggestions 
DH> from the list, and based on recent list messages 
mentioning the home 
DH> directory for the service, I looked at the srvany.exe 
doco  to find 
DH> out how to give it a home directory.
DH> That's when I added  AppDirectory.  I also saw and added 
DH> AppParameters at the same time and  added those as well, 
though they 
DH> seem not to be needed.
DH>  
DH> Prior to adding the AppDirectory value, I never got any 
.stat file 
DH> or any .SVR file in my sniffer dir.  After adding that value and  
DH> starting the service those files appeared.
DH>  
DH>  


DH> From: [EMAIL PROTECTED]
DH> [mailto:[EMAIL PROTECTED]] On  Behalf Of Matt
DH> Sent: Tuesday, August 02, 2005 3:24  PM
DH> To: sniffer@SortMonster.com
DH> Subject: Re: [sniffer]  Sniffer taking a long time?


  

DH> Dan,

DH> There is no AppDirectory value on my server either.  The
DH> Parameters key has only one value under it besides Default   
DH> which is "Application", and it contains exactly what I provided
DH> below. Could it be that you tried to hard to get everything
DH> right by 

Re: [sniffer] Sniffer taking a long time?

2005-08-02 Thread Matt




Dan,

There is no AppDirectory value on my server either.  The Parameters key
has only one value under it besides Default which is "Application", and
it contains exactly what I provided below.  Could it be that you tried
too hard to get everything right by tweaking these additional keys?

Something else.  Did you make sure that the Sniffer service that you
created was started?  No doubt it will work if you follow those
directions to a T, and there aren't any issues with your server apart
from this.

Matt



Dan Horne wrote:

  
  
  I removed the AppParameters
value and put the authcode and persistent back in the Application value
where it was before.  It didn't make any difference at all in the
processing time, still right around 2 seconds.  I don't know how your
setup is working without at least the AppDirectory value, because mine
didn't start working until I put that in, but if it is, I can't argue. 
My server load isn't anywhere near yours, so I don't see what the
problem could be with mine.  Oh well, unless Pete responds with a
suggestion, I guess I'll just keep using the non-service version.
   
  Thanks anyway.
  
  

 From:
[EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Matt
Sent: Tuesday, August 02, 2005 2:37 PM
To: sniffer@SortMonster.com
Subject: Re: [sniffer] Sniffer taking a long time?


Dan,

I seem to recall trying to use the AppParameters key and having
difficulty with it.  I think that you might want to try removing that
key and putting everything in the Parameters key, or at least that
works for me.  If you change
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\Sniffer\Parameters in
RegEdit to the following it might fix the issue that you are having:
C:\IMail\Declude\Sniffer\***RULEBASE-NAME***.exe
  ***AUTH-CODE*** persistent

You should of course adjust the path and service name as well.

The directions that I provided are working perfectly on my server so
far as I can tell.  I'm running dual 3.2 Ghz 1 MB cache Xeons with 5 x
15,000 RPM drives in RAID 5.  The following three debug log entries
show between 300 ms and 550 ms per message:
08/02/2005 14:19:47.113 QB93D976201222A43 [2616]
SNIFFER-IP: External program started: C:\IMail\Declude\Sniffer\executable.exe
  auth-code F:\\DB93D976201222A43.SMD
  08/02/2005 14:19:47.676 QB93D976201222A43 [2616]
SNIFFER-IP: External program reports exit code of 61
  -
  08/02/2005 14:19:47.488 QB9418A4800EC2A49 [6196]
SNIFFER-IP: External program started: C:\IMail\Declude\Sniffer\executable.exe
  auth-code F:\\DB9418A4800EC2A49.SMD
  08/02/2005 14:19:47.770 QB9418A4800EC2A49 [6196]
SNIFFER-IP: External program reports exit code of 51
  -
  08/02/2005 14:19:49.879 QB943711501382A4D [6388]
SNIFFER-IP: External program started: C:\IMail\Declude\Sniffer\executable.exe
  auth-code F:\\DB943711501382A4D.SMD
  08/02/2005 14:19:50.176 QB943711501382A4D [6388]
SNIFFER-IP: External program reports exit code of 59

My stat file shows the following:
TicToc: 1122992104
Loop: 154
Poll: 0
Jobs: 118392
Secs: 155137
Msg/Min: 45.7887
Current-Load: 24.4275   
Average-Load: 23.8719  

I'm not sure why people use FireDaemon for this.  My experience with
SRVANY.exe has been absolutely flawless since I integrated this, and it
has worked on both Win2k and Windows 2003.

Matt





Dan Horne wrote:

  OK, I have managed to get SOMETHING working, but it still seems too slow
and something is still not right.  I originally set up the persistent
sniffer using the instructions from this post:

http://www.mail-archive.com/sniffer@sortmonster.com/msg00169.html

This uses SRVANY.exe.  I conjectured that possibly the service needed a
home directory, so I added an AppDirectory value to the sniffer
service's "Parameters" key in the registry.  This value is set to the
directory sniffer resides in.  I also (based on my reading of the
srvany.exe documentation) added another value to the same key called
AppParameters.  This is set to my auth code followed by a space,
followed by the word persistent.

Now when I start the service, the time spent processing a single message
goes down to something around 2 seconds, but is still far longer than
the non-service version.  I also still had no .stat file in my sniffer
directory.  I did get a *.SVR file, which I never got before.

So then I'm thinking, let's just make sure that I have the latest
version of sniffer.  I downloaded that, did the necessary renaming of
the files and then started the service.  NOW there is a
*.persistent.stat file.  However, the scan time is still at around 2
seconds.

Average Scan times (based on average scan times of 5 emails each):
Without sniffer service running: .033 seconds
With sniffer service running: 2.244 seconds

The *.persistent.stat file has 

Re: [sniffer] Sniffer taking a long time?

2005-08-02 Thread Matt




You are correct.  My bad.

Matt



Nick Hayer wrote:

  
Without regard to content I believe the edits would be made in
CurrentControlSet - not in ControlSetxxx - the latter are the backups.
-Nick
  
Matt wrote:
  

Dan,

I seem to recall trying to use the AppParameters key and having
difficulty with it.  I think that you might want to try removing that
key and putting everything in the Parameters key, or at least that
works for me.  If you change
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\Sniffer\Parameters in
RegEdit to the following it might fix the issue that you are having:
 C:\IMail\Declude\Sniffer\***RULEBASE-NAME***.exe ***AUTH-CODE*** persistent 

You should of course adjust the path and service name as well.

The directions that I provided are working perfectly on my server so
far as I can tell.  I'm running dual 3.2 Ghz 1 MB cache Xeons with 5 x
15,000 RPM drives in RAID 5.  The following three debug log entries
shows between 300 ms and 550 ms per message:
 08/02/2005 14:19:47.113 QB93D976201222A43 [2616]
SNIFFER-IP: External program started: C:\IMail\Declude\Sniffer\
executable .exe auth-code F:\\DB93D976201222A43.SMD 
08/02/2005 14:19:47.676 QB93D976201222A43 [2616] SNIFFER-IP:
External program reports exit code of 61 
- 
08/02/2005 14:19:47.488 QB9418A4800EC2A49 [6196] SNIFFER-IP:
External program started: C:\IMail\Declude\Sniffer\ executable .exe
auth-code F:\\DB9418A4800EC2A49.SMD 
08/02/2005 14:19:47.770 QB9418A4800EC2A49 [6196] SNIFFER-IP:
External program reports exit code of 51 
- 
08/02/2005 14:19:49.879 QB943711501382A4D [6388] SNIFFER-IP:
External program started: C:\IMail\Declude\Sniffer\ executable .exe
auth-code F:\\DB943711501382A4D.SMD 
08/02/2005 14:19:50.176 QB943711501382A4D [6388] SNIFFER-IP:
External program reports exit code of 59 

My stat file shows the following:
 TicToc: 1122992104
Loop: 154
Poll: 0
Jobs: 118392
Secs: 155137
Msg/Min: 45.7887
Current-Load: 24.4275   
Average-Load: 23.8719  

I'm not sure why people use FireDaemon for this.  My experience with
SRVANY.exe has been absolutely flawless since I integrated this, and it
has worked on both Win2k and Windows 2003.

Matt





Dan Horne wrote:

  OK, I have managed to get SOMETHING working, but it still seems too slow
and something is still not right.  I originally set up the persistent
sniffer using the instructions from this post:

http://www.mail-archive.com/sniffer@sortmonster.com/msg00169.html

This uses SRVANY.exe.  I conjectured that possibly the service needed a
home directory, so I added an AppDirectory value to the sniffer
service's "Parameters" key in the registry.  This value is set to the
directory sniffer resides in.  I also (based on my reading of the
srvany.exe documentation) added another value to the same key called
AppParameters.  This is set to my auth code followed by a space,
followed by the word persistent.

Now when I start the service, the time spent processing a single message
goes down to something around 2 seconds, but is still far longer than
the non-service version.  I also still had no .stat file in my sniffer
directory.  I did get a *.SVR file, which I never got before.

So then I'm thinking, let's just make sure that I have the latest
version of sniffer.  I downloaded that, did the necessary renaming of
the files and then started the service.  NOW there is a
*.persistent.stat file.  However, the scan time is still at around 2
seconds.

Average Scan times (based on average scan times of 5 emails each):
Without sniffer service running: .033 seconds
With sniffer service running: 2.244 seconds

The *.persistent.stat file has the following contents:

  TicToc: 1122990610
Loop: 512
Poll: 445
Jobs: 34
Secs: 303
 Msg/Min: 6.73267
Current-Load: 8.69565   
Average-Load: 10.6371 

Any suggestions? 

Thanks, 
Dan Horne

This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


  


-- 
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
  


-- 
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=




Re: [sniffer] Sniffer taking a long time?

2005-08-02 Thread Matt




Dan,

Just two thoughts at this moment.  One would be permissions.  Maybe
setting up SRVANY by installing the Resource Kit would be the way to go
(if that isn't what you did).  I would imagine that if the service
starts, all should be fine.  The other would be to check your Sniffer's
CFG file to make sure that nothing funky is in there.

Matt


Dan Horne wrote:

  
  
  No, I followed your instructions
exactly (and not for the first time).  I didn't add those extra values
until today.  Prior to adding the AppDirectory value, the service was
taking a minute to scan emails; after adding it the scan time went to
around 2 seconds.  I can't get it any lower than that.  Initially mine
was set up exactly as you said, with only "Application" containing the
path, authcode and persistent.  Today after hearing no suggestions from
the list, and based on recent list messages mentioning the home
directory for the service, I looked at the srvany.exe doco to find out
how to give it a home directory.  That's when I added AppDirectory.  I
also saw and added AppParameters at the same time and added those as
well, though they seem not to be needed.
   
  Prior to adding the AppDirectory
value, I never got any .stat file or any .SVR file in my sniffer dir. 
After adding that value and starting the service those files appeared.
   
   
  
  
  From:
[EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Matt
  Sent: Tuesday, August 02, 2005 3:24 PM
  To: sniffer@SortMonster.com
  Subject: Re: [sniffer] Sniffer taking a long time?
  
  
   Dan,

There is no AppDirectory value on my server either.  The Parameters key
has only one value under it besides Default which is "Application", and
it contains exactly what I provided below.  Could it be that you tried
too hard to get everything right by tweaking these additional keys?

Something else.  Did you make sure that the Sniffer service that you
created was started?  No doubt it will work if you follow those
directions to a T, and there aren't any issues with your server apart
from this.

Matt



Dan Horne wrote:

  
  I removed the AppParameters
value and put the authcode and persistent back in the Application value
where it was before.  It didn't make any difference at all in the
processing time, still right around 2 seconds.  I don't know how your
setup is working without at least the AppDirectory value, because mine
didn't start working until I put that in, but if it is, I can't argue. 
My server load isn't anywhere near yours, so I don't see what the
problem could be with mine.  Oh well, unless Pete responds with a
suggestion, I guess I'll just keep using the non-service version.
   
  Thanks anyway.
  
  
    
 From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]
On Behalf Of Matt
Sent: Tuesday, August 02, 2005 2:37 PM
To: sniffer@SortMonster.com
Subject: Re: [sniffer] Sniffer taking a long time?


Dan,

I seem to recall trying to use the AppParameters key and having
difficulty with it.  I think that you might want to try removing that
key and putting everything in the Parameters key, or at least that
works for me.  If you change
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\Sniffer\Parameters in
RegEdit to the following it might fix the issue that you are having:
C:\IMail\Declude\Sniffer\***RULEBASE-NAME***.exe
  ***AUTH-CODE*** persistent

You should of course adjust the path and service name as well.

The directions that I provided are working perfectly on my server so
far as I can tell.  I'm running dual 3.2 Ghz 1 MB cache Xeons with 5 x
15,000 RPM drives in RAID 5.  The following three debug log entries
shows between 300 ms and 550 ms per message:
08/02/2005 14:19:47.113 QB93D976201222A43
[2616] SNIFFER-IP: External program started: C:\IMail\Declude\Sniffer\executable.exe
  auth-code
F:\\DB93D976201222A43.SMD
  08/02/2005 14:19:47.676 QB93D976201222A43 [2616]
SNIFFER-IP: External program reports exit code of 61
  -
  08/02/2005 14:19:47.488 QB9418A4800EC2A49 [6196]
SNIFFER-IP: External program started: C:\IMail\Declude\Sniffer\executable.exe
  auth-code
F:\\DB9418A4800EC2A49.SMD
  08/02/2005 14:19:47.770 QB9418A4800EC2A49 [6196]
SNIFFER-IP: External program reports exit code of 51
  -
  08/02/2005 14:19:49.879 QB943711501382A4D [6388]
SNIFFER-IP: External program started: C:\IMail\Declude\Sniffer\executable.exe
  auth-code
F:\\DB943711501382A4D.SMD
  08/02/2005 14:19:50.176 QB943711501382A4D [6388]
SNIFFER-IP: External program reports exit code of 59

My stat file shows the following:
TicToc: 1122992104
Loop: 154
Poll: 0
Jobs: 118392
Secs: 155137
Msg/Min: 45.7887
Current-Load: 24.427

Re: [sniffer] Sniffer taking a long time?

2005-08-02 Thread Matt




Dan,

I seem to recall trying to use the AppParameters key and having
difficulty with it.  I think that you might want to try removing that
key and putting everything in the Parameters key, or at least that
works for me.  If you change
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\Sniffer\Parameters in
RegEdit to the following it might fix the issue that you are having:
C:\IMail\Declude\Sniffer\***RULEBASE-NAME***.exe
  ***AUTH-CODE*** persistent

You should of course adjust the path and service name as well.
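
For anyone who would rather script that registry edit than click through 
RegEdit, something along these lines should work (a sketch only: the 
service name, rulebase path, and auth code are placeholders, and it 
writes to CurrentControlSet):

# set_service_params.py -- sketch only.  The service name, rulebase path,
# and auth code below are placeholders for whatever your own setup uses.
import winreg

SERVICE = "Sniffer"
APPLICATION = r"C:\IMail\Declude\Sniffer\RULEBASE-NAME.exe AUTH-CODE persistent"

key_path = r"SYSTEM\CurrentControlSet\Services\%s\Parameters" % SERVICE
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    # srvany reads the command line for the wrapped program from "Application".
    winreg.SetValueEx(key, "Application", 0, winreg.REG_SZ, APPLICATION)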

The directions that I provided are working perfectly on my server so
far as I can tell.  I'm running dual 3.2 Ghz 1 MB cache Xeons with 5 x
15,000 RPM drives in RAID 5.  The following three debug log entries
show between 300 ms and 550 ms per message:
08/02/2005 14:19:47.113 QB93D976201222A43 [2616]
SNIFFER-IP: External program started: C:\IMail\Declude\Sniffer\executable.exe
  auth-code F:\\DB93D976201222A43.SMD
  08/02/2005 14:19:47.676 QB93D976201222A43 [2616] SNIFFER-IP:
External program reports exit code of 61
  -
  08/02/2005 14:19:47.488 QB9418A4800EC2A49 [6196] SNIFFER-IP:
External program started: C:\IMail\Declude\Sniffer\executable.exe
  auth-code F:\\DB9418A4800EC2A49.SMD
  08/02/2005 14:19:47.770 QB9418A4800EC2A49 [6196] SNIFFER-IP:
External program reports exit code of 51
  -
  08/02/2005 14:19:49.879 QB943711501382A4D [6388] SNIFFER-IP:
External program started: C:\IMail\Declude\Sniffer\executable.exe
  auth-code F:\\DB943711501382A4D.SMD
  08/02/2005 14:19:50.176 QB943711501382A4D [6388] SNIFFER-IP:
External program reports exit code of 59

My stat file shows the following:
TicToc: 1122992104
Loop: 154
Poll: 0
Jobs: 118392
Secs: 155137
Msg/Min: 45.7887
Current-Load: 24.4275   
Average-Load: 23.8719  

I'm not sure why people use FireDaemon for this.  My experience with
SRVANY.exe has been absolutely flawless since I integrated this, and it
has worked on both Win2k and Windows 2003.

Matt





Dan Horne wrote:

  OK, I have managed to get SOMETHING working, but it still seems too slow
and something is still not right.  I originally set up the persistent
sniffer using the instructions from this post:

http://www.mail-archive.com/sniffer@sortmonster.com/msg00169.html

This uses SRVANY.exe.  I conjectured that possibly the service needed a
home directory, so I added an AppDirectory value to the sniffer
service's "Parameters" key in the registry.  This value is set to the
directory sniffer resides in.  I also (based on my reading of the
srvany.exe documentation) added another value to the same key called
AppParameters.  This is set to my auth code followed by a space,
followed by the word persistent.

Now when I start the service, the time spent processing a single message
goes down to something around 2 seconds, but is still far longer than
the non-service version.  I also still had no .stat file in my sniffer
directory.  I did get a *.SVR file, which I never got before.

So then I'm thinking, let's just make sure that I have the latest
version of sniffer.  I downloaded that, did the necessary renaming of
the files and then started the service.  NOW there is a
*.persistent.stat file.  However, the scan time is still at around 2
seconds.

Average Scan times (based on average scan times of 5 emails each):
Without sniffer service running: .033 seconds
With sniffer service running: 2.244 seconds

The *.persistent.stat file has the following contents:

  TicToc: 1122990610
Loop: 512
Poll: 445
Jobs: 34
Secs: 303
 Msg/Min: 6.73267
Current-Load: 8.69565   
Average-Load: 10.6371 

Any suggestions? 

Thanks, 
Dan Horne

This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


  


-- 
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=




Re: [sniffer] New Spam/Virus?

2005-06-06 Thread Matt




FYI,

This virus appears to be using multiple forms of infection.  One seems
to link to the IP where you are prompted to run/download the infected
program and the others have infected attachments in the E-mail itself.

Based on reviewing my logs and spam capture file, it appears that
initially they were all mass mailed from 66.251.60.35 including the
linked IP in the body that everyone was seeing.  Then when I stopped
seeing these in my Hold/review range about 2 hours ago, I started
seeing E-mails come in with attachments that were being blocked by at
least McAfee.  I'm thinking that 66.251.60.35 was being used to seed
the virus using a link to the payload and now the infected computers
from this seeding run are sending the actual virus out as an attachment.

Matt



Pete McNeil wrote:

  New rule - 369676 under Malware.

New experimental rule on message structure: 369677

_M

On Monday, June 6, 2005, 6:13:23 PM, Dave wrote:

DM> New target ip:  205.138.199.146

DM> -Original Message-
DM> From: [EMAIL PROTECTED]
DM> [mailto:[EMAIL PROTECTED]] On Behalf Of Jim Matuska
DM> Sent: Monday, June 06, 2005 3:01 PM
DM> To: sniffer@SortMonster.com
DM> Subject: Re: Re[2]: [sniffer] New Spam/Virus?


DM> Thanks Pete,
DM> What Return code will this be under?

DM> Jim Matuska Jr.
DM> Computer Tech2, CCNA
DM> Nez Perce Tribe
DM> Information Systems
DM> [EMAIL PROTECTED]
DM> - Original Message - 
DM> From: "Pete McNeil" <[EMAIL PROTECTED]>
DM> To: "Dave Koontz" 
DM> Sent: Monday, June 06, 2005 3:00 PM
DM> Subject: Re[2]: [sniffer] New Spam/Virus?


  
  

  On Monday, June 6, 2005, 5:50:38 PM, Dave wrote:

DK> Same exact IP  here!

We've got a couple of rules for this now -- making the rounds as new
compiles go out.

_M



This E-Mail came from the Message Sniffer mailing list. For 
information
and (un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html

  

  
  

DM> This E-Mail came from the Message Sniffer mailing list. For information
DM> and (un)subscription instructions go to
DM> http://www.sortmonster.com/MessageSniffer/Help/Help.html

DM> This E-Mail came from the Message Sniffer mailing list. For
DM> information and (un)subscription instructions go to
DM> http://www.sortmonster.com/MessageSniffer/Help/Help.html


This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


  


-- 
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=




Re: [sniffer] New Spam Storm

2005-05-17 Thread Matt




Pete,

Your memory fails you :)  I reported one just yesterday, however it was
understandable.  The rule is below (slightly obfuscated for public
consumption).

  MB> Final
MB> RULE 349776-055: User Submission, 13 days, 3.1979660500
MB> NAME: Account and Password Information are attached!%+account_info(dot)zip
MB> CODE: Account and Password Information are attached!%+account\_info\(dot)zip
MB> No prior False Positive Reports.

This was in a virus advisory sent out by McAfee.  It makes sense that
sometimes these rules will hit discussions of spam and viruses.

I rarely see FP's for the Malware group since the greeting card sites
were removed or expired last year (former purveyors of spyware infected
greeting cards), but they also don't hit very often on my system.

I think like everything, including virus scanners themselves, there's
always a chance of human error.  I get the impression that this group
is almost exclusively if not exclusively manually encoded.  I'm fairly
conservative when it comes to blocking on just one test, but if you
aren't otherwise protected from the neo-Nazi propaganda, I see no
problem with raising the weights on this result code so that it is
blocked automatically, just not necessarily deleted.

The point of where the rule should be classified is a bit unclear,
however.  Since this mailing was likely associated with the virus
writer, many consider it to be part of the virus, but virtually
every zombie-sent piece of spam has a similar degree of association. 
For now this is definitely a special case due to its success in
getting through systems early on, the lack of a legitimate payload link
(the links all belong to uninvolved third parties) and the volume seen.  It's
scary what someone can do if they prepare properly for such a thing.

Matt







Pete McNeil wrote:

  On Tuesday, May 17, 2005, 2:57:44 PM, Jim wrote:

JM> Thanks Pete, would you be able to provide the current false positive rates
JM> for the return codes?

This is not something that we are formally capturing at present,
however anecdotally I can't recall the last time we had an FP
submitted for the malware group.

_M

PS: We will eventually build some instrumentation to capture these
statistics. We've done a few spot analyses and each time we have found
very low volume, widely distributed results -- with each analysis
showing peaks and valleys on different groups. As a result, the data
we currently have about this is too "noisy" for any conclusive
statements.


This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


  


-- 
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=




Re: [sniffer] Rule 353039 - .comcast.net

2005-05-10 Thread Matt
Pete,
My config file was completely unedited, i.e. every setting was commented 
out.  I verified that one and a half hours after the config change this 
rule was still hitting until I had restarted the service.  Maybe there 
is a bug in the persistent engine reloading the config without 
intervention...

Thanks,
Matt
Pete McNeil wrote:
On Tuesday, May 10, 2005, 12:45:53 PM, Computer wrote:
CHS> Mail from Comcast is still getting caught, even with the panic rule in
CHS> place.  Any suggestions?
* be sure you have updated .cfg
* be sure your entry is in the correct format. You will find examples
at the bottom of your .cfg file with each example commented out. The
easiest way to make the entry is to change the number in one of the
examples and remove the # and any spaces in front of it. An active
rule-panic entry will begin on the first character of the line.
The persistent engine should reload and pick up your change within no
more than 10 minutes unless you have altered your timing settings.
For immediate results you should issue ".exe reload" from
your command line, Or you could restart your persistent instance
service.
Hope this helps,
_M


This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html
 

--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Rule 353039 - .comcast.net

2005-05-10 Thread Matt
See my message below...restart your Sniffer service and it should work.
Matt

Computer House Support wrote:
Mail from Comcast is still getting caught, even with the panic rule in 
place.  Any suggestions?

Mike Stein
This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html
 

--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Rule 353039 - .comcast.net

2005-05-10 Thread Matt
Warning!
When you add a RulePanic entry and are running Sniffer in persistent 
mode, you have to restart the service for it to take effect.  I changed 
this earlier and it had no effect until I restarted the service on my 
box.  Maybe I'm wrong about this, but just changing my config file had 
no effect on its own.

Pete, when you send out these notifications, would you please add a few 
instructions to them, including the file name that needs to be modified, 
i.e. .cfg, the format of the line, and the instructions to 
restart the service.  Another important piece of information would be 
the time that the bad rule was created, otherwise we need to search our 
logs for it.  My first hit on this was yesterday at 9 p.m. EST, but some 
probably hit it earlier by up to a couple of hours I would imagine.

Thanks,
Matt

Pete McNeil wrote:
Hello Sniffer Folks,
 A rule was created today by one of the robots which targets
 .comcast.net -- This happened when a number of blacklists including
 SBL listed comcast IPs causing the robot to be convinced that a
 message in the spamtrap warranted tagging the domain.
 The rule has been removed and I am pushing out a new rulebase
 compilation as quickly as possible. Please do not rush to download
 your rulebase file in response to this --- wait for the update
 notification, or else your file is not updated.
 I believe we've caught this quickly enough that most of you will not
 be affected. However, if you suspect that you do have the bad rule
 in your rulebase you can temporarily eliminate the rule by adding
 353039 to your Rule-panic entries in your configuration file.
 The rule cannot be recreated once removed.
 We are very sorry for the confusion.
Thanks,
_M
Pete McNeil (Madscientist)
President, MicroNeil Research Corporation
Chief SortMonster (www.sortmonster.com)

This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html
 

--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Latest medication campaign

2005-04-14 Thread Matt
Quick update.  I found a few false positives (about 1 in 50,000 
messages) and as a result I modified things a little and added a few 
more checks for supposedly rather unique patterns.  Unless there is a 
problem I probably won't update it any more, but I felt that it was a 
good idea to share the update to prevent the possibility of problems.  
The new version is attached.

Matt
Matt wrote:
Attached is something that I coded up last night for this guy.  It's 
designed to be not totally dependent on one pattern so that it might 
have some longevity.  His forging of a Microsoft format is quite good, 
but he does make mistakes and does leave patterns, some of which can 
be tagged with a standard Declude filter, but VBScript could do it 
even better and even less specifically.  Nevertheless, this filter 
hits 100% of the time right now, levies very heavy points despite 
being variable, and I haven't seen a false positive yet due to the way 
that it was designed to operate.  Note, the scores are based on a 
system that holds at a score of 10.

Matt
--- Global.cfg ---
FORGEDPILLSPAMMER   filter  C:\IMail\Declude\Filters\ForgedPillSpammer.txt  x   5   0

--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
# FORGEDPILLSPAMMER v1.0.1

SKIPIFWEIGHT    40
MINWEIGHTTOFAIL 5

# Disable when it comes from an IP that is in the MX record just for safety since this targets zombies.
TESTSFAILED END NOTCONTAINS IPNOTINMX

# Prerequisites for spam pattern.  Note that the spammer is near perfect for the headers.
HEADERS END NOTCONTAINS X-MimeOLE: Produced By Microsoft MimeOLE V
HEADERS END NOTCONTAINS To: "
HEADERS END NOTCONTAINS From: "
BODY    END NOTCONTAINS - Original Message -

# Dead giveaway for Pharmacy spam (non-obfuscated part).
BODY    3   CONTAINS    yByMail
BODY    3   CONTAINS    By-Mail
BODY    3   CONTAINS    ByMAlL
BODY    1   CONTAINS    By MAIL S

# This line is too long for Outlook in quoted-printable format.
BODY    3   CONTAINS    

# Subject is always Re:.
HEADERS 1   CONTAINS    Subject: Re: 

# Body does text/html as us-ascii.
BODY    1   CONTAINS    Content-Type: text/html; charset="us-ascii"

# Quoted-printable line ended too early in body
BODY    3   CONTAINS    > Hello, = Would

# Text or code patterns uncommon in Outlook generated E-mails
BODY    1   CONTAINS    save up to
BODY    1   CONTAINS    on the Net!
BODY    1   CONTAINS    size=3D4> C
BODY    1   CONTAINS     and many 
BODY    1   CONTAINS    

Re: [sniffer] Latest medication campaign

2005-04-13 Thread Matt
Attached is something that I coded up last night for this guy.  It's 
designed to be not totally dependent on one pattern so that it might 
have some longevity.  His forging of a Microsoft format is quite good, 
but he does make mistakes and does leave patterns, some of which can be 
tagged with a standard Declude filter, but VBScript could do it even 
better and even less specifically.  Nevertheless, this filter hits 100% 
of the time right now, levies very heavy points despite being variable, 
and I haven't seen a false positive yet due to the way that it was 
designed to operate.  Note, the scores are based on a system that holds 
at a score of 10.

Matt
--- Global.cfg ---
FORGEDPILLSPAMMER   filter  C:\IMail\Declude\Filters\ForgedPillSpammer.txt  x   5   0

--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
# FORGEDPILLSPAMMER v1.0.0

SKIPIFWEIGHT    40
MINWEIGHTTOFAIL 5

# Disable when it comes from an IP that is in the MX record just for safety since this targets zombies.
TESTSFAILED END NOTCONTAINS IPNOTINMX

# Prerequisites for spam pattern.  Note that the spammer is near perfect for the headers.
HEADERS END NOTCONTAINS X-MimeOLE: Produced By Microsoft MimeOLE V
HEADERS END NOTCONTAINS To: "
HEADERS END NOTCONTAINS From: "
BODY    END NOTCONTAINS - Original Message -

# Dead giveaway for Pharmacy spam (non-obfuscated part).
BODY    3   CONTAINS    yByMail
BODY    3   CONTAINS    By-Mail

# This line is too long for Outlook in quoted-printable format.
BODY    3   CONTAINS    

# Subject is always Re:.
HEADERS 1   CONTAINS    Subject: Re: 

# Body does text/html as us-ascii.
BODY    1   CONTAINS    Content-Type: text/html; charset="us-ascii"

# Body contains empty Style tags.
BODY    1   CONTAINS    


Re: [sniffer] Persistent Sniffer

2005-04-01 Thread Matt
Keith,
Windows DNS service will handle over a million lookups a day without 
blinking.  There should be no reason to switch to a different DNS 
server.  It hardly even registers any CPU load on my boxes.  The biggest 
CPU hog is the virus scanners, and choosing your virus scanners 
carefully will have a great benefit.  F-Prot is the champ followed by 
ClamAV in daemon mode (the non-daemon is a hog), followed by McAfee at a 
distant third, though there are many others that are far worse.  The 
AVAFTERJM switch will stop most messages from being virus scanned and 
hence the magic there, however if you don't delete any messages with 
JunkMail there is no real advantage.  I'm not clear on whether or not 
the ROUTETO will bypass scanning, but you could create some filters 
using VBScript to tag messages with attachments associated with viruses 
and handle them differently.
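
A rough sketch of that attachment-tagging idea (in Python here rather 
than VBScript; the extension list is illustrative only):

# attachment_check.py -- illustrative only; the extension list is a stand-in.
from email import policy
from email.parser import BytesParser

RISKY_EXTENSIONS = {".exe", ".scr", ".pif", ".bat", ".com", ".zip"}

def has_risky_attachment(message_path):
    with open(message_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if any(name.endswith(ext) for ext in RISKY_EXTENSIONS):
            return True
    return False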

Personally, I haven't found a huge impact from running Sniffer in 
persistent mode, but it does have a slightly measurable effect on my 
server.  If you are hurting for disk I/O or memory, this could help 
immensely.  If you are running into an issue with disk I/O, it could 
back things up significantly.  Also, if you have any domains where the 
addresses aren't validated (nobody aliases or gateway domains), this 
could easy be attacked in such a way so that it overwhelmed your 
server.  We are presently only validating for about 2/3 of our customer 
base and this morning the address validation software/service failed an 
automatic restart and it allowed everything through to IMail/Declude and 
it pegged our server at 100% until it was turned back on.  Normally at 
that time of day, our server runs at an average of about 25% (and it 
will get better when the other 1/3 becomes validated).

BODY and ANYWHERE filters in Declude can also be huge hogs if you don't 
limit them to a reasonable level.  I probably have about 1,500 lines of 
BODY filters and that isn't causing me any real issues but I am also 
using SKIPIFWEIGHT and other methods of skipping such filters when it 
isn't beneficial to run them.  Managing my Declude filtering better 
definitely helped me steal back some CPU.

Placing Sniffer in persistent mode definitely shouldn't cause things to 
slow down unless maybe it was configured improperly.  I use the same 
SRVANY setup that you said that you are using and it has worked 
flawlessly for me since the day that Pete released that functionality.  
I am thinking that you might want to scrutinize your setup.

Hope that this helps.
Matt

Keith Johnson wrote:
Pete,
Wow, thank you for the explanation.  I did let the persistent
server run for 30 min after I restarted the services.  However, I did
stop the services, then started Sniffer service, then restart Imail
services.  I could have gotten a backlog of retries at that moment that
pegged the CPU as you stated.  We have batted around running BIND for
NT/2000 on the local machine, but my fear was overhead of another major
process running.  I don't have any good stats on how much CPU/Memory
BIND on an Imail Server requires, thus, we have a SUN/BIND box local to
the switch.  Are you aware of any stats on this?
	We don't run the AVAFTERJM switch.  This is done in part because
so many of our customers still look at their spam email from time to
time.  We heavily use the ROUTETO and MAILBOX commands, thus, if I let a
virus go through to their mailbox, they could potentially open a
virus spam email and hurt themselves.  

We defrag each partition every night using Diskeeper and it
works great.  I regularly look at the Sniffer directory to ensure no
left over .fin files and others that could cause server load.  I will
retry it again tonight and see what type of results I get and post them
here.  It could be as you say, I am on the far side :)
Thanks again,
Keith 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Pete McNeil
Sent: Friday, April 01, 2005 2:16 PM
To: Keith Johnson
Subject: Re[4]: [sniffer] Persistent Sniffer
On Friday, April 1, 2005, 11:44:07 AM, Keith wrote:
KJ> Pete,
KJ> Thanks for the reply.  

KJ> Running on an IBM Xseries 225 Dual Xeon 2.4Ghz w/ 1GB RAM - 
KJ> running IBM's ServerRAID 5i in IBM's RAID 10 config (4 73GB 10K 
KJ> drives)
KJ> - O/S is Windows 2000 Standard Server SP4

KJ> Running Imail 8.15HF1 with Declude JM/Virus 1.82 - BIND DNS 
KJ> Server is 1 hop away (on switch backbone).  I had to drop back to 
KJ> the non-persistent mode, thus the .stat file disappeared.  I will 
KJ> run it again tonight and copy the file away and post it here
tonight.

KJ> Thanks again for the time and aid.
I don't see any problems with this setup.
Your description sounds like your server is fairly heavily loaded
(35-55% cpu in peer-server mode), though I would expect more from the
hardware yo

Re: [sniffer] Porn Spam again

2005-03-28 Thread Matt
Just an FYI from my perspective.  As things stand, Sniffer false 
positives on dirty language are one of the top 5 types of FP's that I see 
with Sniffer.  It's not a huge problem, but I definitely wouldn't want 
to see any more of it.  While some companies do not have an issue with 
blocking dirty language even if legitimate, this is not wise to do in a 
global sense.  Thankfully Pete does allow for rulebase customization so 
that customers that want this type of blocking can have it.

Due to the variability of the messages, it is also generally better to 
tag the URL or another pattern rather than the phrases that might be 
used.  I'm generally happy with how Sniffer picks up new URL's and 
updates rulebases to block this stuff, but they will get through on 
occasion no matter what you do because as soon as you start tagging word 
patterns, the spammer changes those patterns or obfuscates in some other 
way.  No matter what however, every piece of spam needs a payload, which 
is generally a link, E-mail address or phone number.
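
To make the payload point concrete, pulling candidate payloads out of a 
body is only a few regular expressions (the patterns below are 
deliberately simplified for illustration):

# Crude payload extraction -- URLs, e-mail addresses, and phone numbers.
# The patterns are intentionally simple; they only illustrate the point.
import re

URL_RE = re.compile(r"https?://[^\s\"'>]+", re.IGNORECASE)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")
PHONE_RE = re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b")

def extract_payloads(body):
    return {
        "urls": URL_RE.findall(body),
        "emails": EMAIL_RE.findall(body),
        "phones": PHONE_RE.findall(body),
    }

sample = "Call 800-555-0199 or visit http://pills.example.com now, reply to win@example.net"
print(extract_payloads(sample))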

Matt

Pete McNeil wrote:
On Monday, March 28, 2005, 2:09:52 PM, Heimir wrote:
HE> Anyway that sniffer could trigger on this type of stuff?

Yes. The bad news is that this stuff is highly variable and so more of
it gets through than we would like. The good news is that we are
developing filters to deal with it by capturing small fragments and
phrases so that they cannot be reused. For example, I created 7 new
rules based on the note you sent - each containing 2-5 word phrases
and fragments.
The hard part is to avoid blocking legitimate messages - so we can't
generally code on single words. For example hardcore and its
variations have a high porn spam score, but it is also widely used in
current language. The word suck by itself is not a workable solo and
neither is the random combination of hardcore and suck (though you
might be tempted)... A quick look at any extreme sports article
readily yields many of these words.
You could opt to create some black rules that contain simple
combinations or even single words like these if you have a
sufficiently narrow demographic on your system.
In the mean time we will continue to aggressively create rules for the
safe combinations we can spot and/or predict. Of course, we always
capture URI in these cases when available.
Hope this helps,
_M


This E-Mail came from the Message Sniffer mailing list. For information and 
(un)subscription instructions go to 
http://www.sortmonster.com/MessageSniffer/Help/Help.html
 

--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


RE: [sniffer] Money, drugs, and sex

2005-03-22 Thread Matt Day
You truly are a mad scientist - But we love ya! :) 

Matt

MaxNett Ltd
T.08701 624 989
F.08701 624 889
www.maxnett.co.uk

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Pete McNeil
Sent: 23 March 2005 00:37
To: Colbeck, Andrew
Subject: Re: [sniffer] Money, drugs, and sex

On Tuesday, March 22, 2005, 4:47:30 PM, Andrew wrote:

CA> http://www.sophos.com/spaminfo/articles/spamwords.html

CA> Interesting, but a pity they didn't publish a list of, say, their 
CA> 1,000 most popular obfuscations.

If you do the math then 1000 wouldn't even scratch it. One way to attack
this ( at least one of the ways we do it in Message Sniffer ) is to apply
some obfuscation algorithms to each word in the list using some generic
expansion patterns -- this helps to simplify the problem a bit.

For example, one obfuscation algorithm is to insert a single extra character
in the word. If you take the word obfuscation and apply this expansion
algorithm you get something like:

o~bfuscation
ob~fuscation
obf~uscation
...
obfuscatio~n

where ~ represents any random character.

Then think about adding two characters...

...
ob~fusc~ation
...

Then think about breaking the word with an empty anchor at any of the places
where you would insert a character...

...
obfus<a href="http://yo-mama.it"></a>cation
...

and so on...
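
As a minimal sketch of that expansion idea (illustrative Python of my own, not Sniffer's actual rule compiler), one could generate the single-insert and empty-anchor variants of a word like this:

# Illustrative only: expand one word into the "insert one character" and
# "break the word with an empty anchor" patterns described above.
import re

def single_insert_patterns(word):
    # One regex per interior position; "." stands in for the random character.
    return [re.escape(word[:i]) + "." + re.escape(word[i:])
            for i in range(1, len(word))]

def anchor_split_patterns(word):
    # Allow an (empty) HTML tag to break the word at any interior position.
    return [re.escape(word[:i]) + r"(?:<[^>]*>)*" + re.escape(word[i:])
            for i in range(1, len(word))]

print(single_insert_patterns("obfuscation")[:2])  # ['o.bfuscation', 'ob.fuscation']
print(anchor_split_patterns("obfuscation")[4])    # obfus(?:<[^>]*>)*cation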

Of course, you can't simply apply all of the possible obfuscation
algorithms, and you can't completely exercise each one that you do try...
you have to pick and choose and learn as you go because otherwise you would
simply never finish the job. ***

If you iterate through all of the permutations and count them, the
numbers become astronomical... as in "viagra can be obfuscated (and detected
by their fine software) more than 5,600,000,000 different ways."
That's market speak for "look how powerful our software is - whoooah!"
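
To get a rough sense of how fast those counts grow (my own back-of-the-envelope arithmetic in Python, not the vendor's figures): a 6-letter word like viagra has 7 slots for a single inserted character, and with roughly 94 printable ASCII characters that is already 7 x 94 = 658 variants; allowing a few more insertions quickly reaches the billions quoted above.

# Rough count of "insert k arbitrary printable characters" variants of a word.
# Back-of-the-envelope only; some near-duplicate strings are counted twice.
from math import comb

def rough_variants(word_len, k, alphabet=94):
    # Choose which k of the word_len + k final positions hold inserted
    # characters, then fill each with any of `alphabet` characters.
    return comb(word_len + k, k) * alphabet ** k

for k in range(1, 5):
    print(k, rough_variants(6, k))   # word_len = 6 for "viagra"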

This is similar to a lot of other AI problems too and it's probably why I'm
involved since I love AI work. In most AI problems if you add up all of the
possible solutions to the problem you usually come up with a number you
couldn't possibly write down without writing the formula instead. That is,
the number would be so large that you would probably die of old age before
you actually finished writing all the digits. In the AI world we talk about
this huge sea of possibilities as a "solution space".

If you tried to check every possible solution one by one until you found the
best answer it would take you forever. This is called a brute force attack.
It's also what makes the big numbers seem impressive, and what makes most
encryption schemes work.###

Since we don't usually have "forever", we do something else in the AI world.
We use algorithms to "search the solution space" for the best answer. That
is, rather than just going through the possible solutions one at a time as
we come to them (brute force) we try to figure out which ones to look at and
which ones to skip. The way we make that decision is to use an algorithm
that leverages special "rules of thumb" (heuristics) to help us search the
solution space more efficiently. This effectively "reduces the solution
space" and makes it possible to come up with an answer that is good
enough+++ within the time we have.

So, when they talk about recognizing more than 5 billion different
obfuscated forms of the word viagra they are really just estimating how many
of the permutations their heuristics are able to eliminate from the solution
space. (A more accurate way to think about it might be that a single
heuristic for a particular obfuscated word covers a large amount of the
solution space all at once. Since it's already been covered it doesn't have
to be searched -- the extra work is eliminated as compared to a brute-force
attack.)

For example: Suppose you have a sandbox into which someone has thrown a
marble. If you have to find the marble then you could estimate all of the
grains of sand you would have to examine in order to find it.
Let's see... for a sandbox that is 3 meters on a side and 10 cm deep that
would be... (scratches head, punches on calculator, looks at watch, gives
up...) a "Sagan" of sand grains. (I fondly remember Dr.
Carl Sagan talking about astronomical numbers like this when talking about
the cosmos.)

So, to find the marble in the sandbox without individually picking up each
grain of sand then we'll need some tools (algorithms) to help us reduce the
problem. We could also use a heuristic to help us reduce the problem
further...

Let's do this:

1) Get a bucket, a screen with holes smaller than a marble but larger than a
few grains of sand, and a shovel.

2) Use a stick to draw lines on the sandbox and divide it into a grid where
each square is about the size of our bucket.

3) Skipping all of the squares where the sand 

Re: [sniffer] RAID Levels for Spool Folder

2005-03-16 Thread Matt
d it had absolutely no issues.  That server also ran MS SMTP
as the primary gateway along with VamSoft's ORF doing address
validation, so in effect, each message was being received twice by the
same machine.  If I can do this off of three slower drives and a bottom
of the line mainstream controller, you can surely do the same off of 5
faster drives and better controller and reap the benefits of the extra
redundancy as well as the simplicity of managing one array.  There's no
reason to get any more complicated than a single array in this case. 
If your system is not heavily fragmented (and it won't be under my
config if you move the logs periodically), then it would be near
impossible for you to run into a wall with disk I/O before you ran out
of CPU.  I have seen over 100,000 messages a day run on a single IDE
hard drive, single partition, with IMail/Declude/F-Prot and about 7,000
accounts, though it was starting to stress the server at that point and
needed to be addressed.
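
(For scale, and purely as my own back-of-the-envelope arithmetic: 100,000 messages a day averages out to about 100,000 / 86,400 = 1.2 messages per second, so even a peak an order of magnitude above that average is only on the order of a dozen messages per second.)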

Matt




Goran Jovanovic wrote:

Matt,

I think that you sort of answered the question that I did not really ask. I was really trying to get information on the different performance levels of S/W vs. H/W RAID for an "ideal" scanning-only box. So let me try this out and people can comment.

All SCSI 15K drives with HW RAID controller:

2 x 36 GB drives, RAID 1 on first channel (36 GB usable)
    C – Windows, 10 GB
    D – IMail/SmarterMail/Declude files, Declude filters & per-domain configs, banned files (5 days only), 20 GB
    P – Page volume, 3 GB

3 x 36 GB drives, RAID 5 on second channel (72 GB usable)
    L – Logs for JM, Virus, IMail/SmarterMail, Sniffer, invURIBL, et al., 10 GB
    S – Storage for all daily logs, 60 GB

1 x 36 GB hot spare drive

From what we have discussed here, drive L will get hit a lot. If you create a process like the one Matt is describing to move the active logs from L to S, you should not worry about running out of space on the L drive.

Now, looking back, I am not sure if I have crafted this well, since the SPOOL files for IMail will end up on D. Is there a way to move them for SmarterMail, as there does not seem to be a way to move them in IMail? The good part of this config is that the spool files, which have a lot of read/write, are on a different volume/channel from the other log files. I am not sure how much space you should allocate to a server that would process 100,000+ messages a day?

Anyone have comments on this config?

Thanx

Goran Jovanovic
The LAN Shoppe

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Matt
Sent: Wednesday, March 16, 2005 3:49 PM
To: sniffer@SortMonster.com
Subject: Re: [sniffer] RAID Levels for Spool Folder

IMO, Software RAID is not the way to go on a busy machine.  You will save a measurable amount of overhead by going with hardware-based RAID of any sort, since the controller should handle the processes associated with the RAID.  Note that this isn't the case with inexpensive RAID controllers such as the cheaper IDE and SATA controllers, which still place a fair burden on the OS/processor.  True RAID cards also offer additional cache which can speed up performance on reads, and also on writes if you are battery backed up (otherwise don't use write caching, because you could lose or corrupt data during a power outage).

There are also several common misconceptions about what is proper to do for a mail server.  RAID 5 is the best choice under almost all conditions.  The trick here is that while RAID 10 offers both redundancy in mirroring and speed in striping, most servers have a limited amount of space for disks.  So a server with 6 disks will operate with the speed of 3 disks spanned in a RAID 10 configuration, but 6 disks in RAID 5 will operate as 5 disks spanned plus a little bit of overhead, though not nearly enough for it to fall short of the performance of just 3 disks in a simple span.  Therefore RAID 5 should be the default choice for speed in such an environment.

Another misconception is that data is always striped in RAID 0 or RAID 5.  This depends on the file size and the stripe size.  Most stripes are 64 KB (configurable in most setups).  If you have some form of striping for your spool drive, most messages fall far under 64 KB and will only get written to one disk (parity will also get written in RAID 5).  Therefore, for a spool folder, RAID 5 with 3 drives (the minimum) will perform rather closely to RAID 5 with 10 drives, since most files will only land on one disk (with the other corresponding stripes containing no data).  The MFT for a drive with a lot of files, however, will grow to be quite large and benefits from having multiple disks, and opening very large files such as logs will also benefit from having many disks.  There is also an advantage to seek times when having multiple disks
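
To put a number on the stripe point above, here is a small illustrative calculation (my own sketch, assuming the 64 KB stripe unit mentioned earlier and ignoring file-system metadata and parity placement):

# How many data stripe units does a file of a given size touch with a
# 64 KB stripe unit?  Illustrative only.
STRIPE_UNIT = 64 * 1024

def stripe_units(file_size_bytes):
    return max(1, -(-file_size_bytes // STRIPE_UNIT))  # ceiling division

for size_kb in (4, 32, 63, 65, 512):
    print(size_kb, "KB ->", stripe_units(size_kb * 1024), "stripe unit(s)")

A typical message of a few KB therefore lands entirely within one stripe unit on one data disk, which is why adding spindles does little for the spool files themselves.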

Re: [sniffer] RAID Levels for Spool Folder

2005-03-16 Thread Matt
ill improve performance
in a RAID configuration, though that will change over time. 
Essentially you should think of SCSI as both a protocol as well as a
mark of component quality.

With that said, if performance isn't an issue with a single drive,
mirroring it in Windows might be a perfectly fine solution.  I would
still lean towards a cheap RAID card for this however.

Matt






Andy Schmidt wrote:

  Uh, sorry, I had thought that discussion was RAID-5 vs. RAID-1?

If someone is running RAID-5, I assume that it's hardware based. If so, then
that person could use the same hardware to configure a RAID-1 array instead
- so why even bother with software RAID then?

If the discussions is software RAID-1 vs. no-raid, then the answer is: Sure,
software RAID is a cost effective solution if the system has sufficient
head-room to deal with whatever possible overhead that may cause. However,
if we are talking about a machine that is already taxed, then I would
suggest plugging in a RAID controller instead of adding software RAID to the
mix.

I have several (older) systems running Windows 2000 RAID-1. At least ONE of
the servers I later upgraded to Hardware RAID.  I can't say that I've
noticed any difference (but then again, I have not run benchmarks and the
server was not really taxed before either.)

From what I understand, there are many factors involved and much depends
on your system's configuration. CPU availability is critical. A server that
is already CPU taxed may suffer if software RAID is added.  Having the
drives split on two SCSI controllers should also help with software RAID-1.
Doing software RAID-1 with a master/slave ATA drive, however, may slow
things down.

There may not be a single answer...

Best Regards
Andy 


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
On Behalf Of Goran Jovanovic
Sent: Wednesday, March 16, 2005 02:05 PM
To: sniffer@SortMonster.com
Subject: RE: Re[2]: [sniffer] Moving Sniffer to Declude/SmarterMail


OK that is for hardware level RAID. I had thought that you would offset the
extra processing time by being able to write less to each drive.

Now does anyone know how much overhead Windows 2000/2003 software RAID 1 on
dynamic disks produces over hardware level RAID 1?


This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


  


-- 
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=




Re: [sniffer] Seperate Lists?

2005-02-19 Thread Matt
Pete,
Being guilty of being 'chatty' myself, I still second this idea.  I 
would much prefer to pick through an occasional message dealing with 
global announcements regarding the service than picking through both 
discussions as well as announcements.  I'm not always up to date on this 
list and others that I follow and I probably will pay less attention to 
them over time as I get buried in other things as many of us do.

Please consider a separate announcement-only list for this purpose.
Alternatively, having both mixed will both overwhelm users not 
interested in the more general discussion and it may also cause those of 
us that are more 'chatty' to quiet down which stifles the discussion and 
the benefit that can be gleaned from it.

Thanks,
Matt

Pete McNeil wrote:
On Saturday, February 19, 2005, 1:28:14 PM, Dave wrote:
DK> I am all in favor of a SUPPORT list to announce timely
DK> notifications of problems. solutions and/or changes to your
DK> product or services.  However, the threads I"ve been seeing here
DK> lately are 'iMail' specific or involve theoretical discussion of
DK> improving iMail performance via a Gateway product using IIS's SMTP
DK> engine.  These discussions have absolutely nothing to do with the
DK> "support" of your product in its current offerings, nor my server
DK> of choice.
DK> As a new customer Might I request that you setup a separate list for
DK> discussion, development, beta, theoretical or iMail issues?  If not, then I
DK> too will also ask to be removed from this list.
DK> Thanks in advance for your consideration.
On this list I hope to provide not only support and announcements but
also to foster a community around all of the related issues - often
there are things to be learned from other platforms or related
discussions - even the theoretical ones.
From my perspective the IIS SMTP engine discussion covers a wide range
and is on topic. For example, this is how MS Exchange could be
supported directly. It is my goal to see SNF available and used well
on as many platforms as possible.
That said, we do have many IMail users here so those topics are
frequent as are Declude discussions.
In the end though, there are a lot of people here - most of whom are
dealing with problems and issues that are related to spam, email
services, and related technologies such as DNS, server performance,
networks & security, etc. In my view, this community is invaluable and
provides an enriched level of support on all issues by sharing
experiences and additional perspectives.
I would be sorry to see you go, but certainly I understand if this is
not a forum you want to follow.
Instructions for (un)subscribing to this list are on our help page as
indicated in the tag line of these messages.
http://www.sortmonster.com/MessageSniffer/Help/Help.html
If you do decide to leave this list then please remember to check our
web site regularly for any important announcements - we put them in
our news section.
Support is also available through our support@ address, of course,
though I may refer back to this list if I get stumped ;-).
Thanks,
_M


--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] IIS SMTP Integration

2005-02-18 Thread Matt




Yeah, I mixed up some words earlier in my reply to Sandy's post.  I
should have said that it needed to be paired with or run as a
protocol/OnInBound sink that also does address validation.  That's
probably what confused you as to the meaning of what I had said
earlier.  I'm only roughly familiar with the terminology.

Matt



Andy Schmidt wrote:

  
  
  
  Uh, I see, you are not against the protocol sink
in principal - you are only against it IF there is no means of doing
address validation (and possible some other checks) at the same time.
   
  Yes, I have other protocol sinks in place
(including ORF) that allow me to do protocol rejections on the other
items (and have been sitting on my relay customers to give me access to
their user base as well). So in my case, Sniffer will ONLY check a
small percentage of emails (those to valid recipients that didn't have
more than two false recipients and didn't have a HELO with my IP and
didn't use SMTP AUTH and who didn't fail certain various *proxy DNSBLs.)
   
  Once I have my last two customers' LDAP
information integrated, I'll block off my Imail/Declude server
altogether and use two IIS SMTP servers as incoming gateways. Ideally I
want to move my Sniffer license to the IIS SMTP server and then buy an
extra license for the second IIS SMTP server.
   
  With ORF's 2.0 graylisting and tarpitting,
things will become pretty solid - and Sniffer integration was/is the
missing brick in the wall.
   
  PS: Let's not forget, there is no reason why
Sniffer couldn't be configured to check either at the protocol level
OR the transport level. ORF currently does that.
  But I think it's important that protocol is
offered as one choice.
  
  
  Best Regards
  Andy Schmidt
  
  Phone:  +1 201 934-3414 x20
(Business)
Fax:    +1 201 934-9206 
  
-Original Message-
    From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Matt
Sent: Friday, February 18, 2005 10:33 PM
To: sniffer@SortMonster.com
Subject: Re: [sniffer] IIS SMTP Integration


I guess you essentially got my point and what appears to be Sandy's. 
Once you take an Exchange server (or any other server) and insert such
a gateway, you lose your ability to do address validation.  Nowadays
this is vital due to real world circumstances as you have yourself
experienced.  If Sniffer was introduced with some form of MS SMTP
integration and was unable to do address validation during the RCPT TO,
then it could very well create issues beyond what it solves
(backscatter and potentially drowning the CPU).

There will be a solution created for this at some point within the next
year I'm sure.  As to how far it goes in terms of spam blocking, I
don't know.  I suppose the best solution would be to have a full
Declude installation bound to MS SMTP doing both OnInBound and
OnArrival sinks.  The market potential for this would be rather large
in comparison to targeting specific mail servers as they do now, though
it appears that it would be somewhat more complicated.

Matt



Andy Schmidt wrote:

>> The idea being that you don't want any more content searching than is
necessary, particularly when a recipients-dictionary-attack is underway. <<

Okay, but if you wait until the message is stored in the queue and NOW you
have to scan each one with a command-line process - how is THAT better
(that's the transport sink scenario).

What you want to do is:

A) upon connection, check DNS BLs - if matches, add points

B) upon HELO, check HELO rules - if matches, add points

C) upon MAIL FROM - check for <>, if it matches, set a flag (there should
only be ONE recipient)
   check DNS BLs for blacklisted recipients, if matches, add points

D) upon RCPT TO - check for valid recipient - if more than 2 invalid
recipients, drop connection.
   If sender is <> and more than 1 recipient, drop connection
   If recipient is Postmaster@ or Abuse@ or Root@ (etc) and more than 1
recipient, drop connection (with proper return code "too many recipients")

E) at EOD (after the CR.CR), 
   - check for SMTP AUTH (so you can skip scanning)
   - otherwise scan the content with Sniffer (and Virus Scanner) - add
points
   
If the points exceed your threshold at ANY time, drop connection.  No bounce
message necessary, no need to store the message in the queue, etc.

Whenever you drop connection, add IP to your "tarpit/graylist" list.  Use
that for subsequent "upon connections"
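
For what it's worth, the flow in A) through E) can be compressed into a few lines of pseudo-code; the sketch below is my own illustration in Python (hypothetical names, not ORF's or Sniffer's actual API), just to show how the running score and the drop decision fit together:

# Hypothetical sketch of protocol-stage scoring as outlined in A) - E) above.
THRESHOLD = 10

class DropConnection(Exception):
    """Raised to signal that the SMTP session should be dropped."""

class Session:
    def __init__(self, ip):
        self.ip = ip
        self.score = 0
        self.null_sender = False
        self.rcpts = 0
        self.bad_rcpts = 0

    def add(self, points):
        self.score += points
        if self.score >= THRESHOLD:
            raise DropConnection(self.ip)

def on_connect(s, dnsbl_points):            # A) IP-based DNSBL checks
    s.add(dnsbl_points)

def on_helo(s, helo_points):                # B) HELO rules
    s.add(helo_points)

def on_mail_from(s, sender, sender_bl_points):   # C) flag <> sender, sender BLs
    s.null_sender = (sender == "<>")
    s.add(sender_bl_points)

def on_rcpt_to(s, recipient_is_valid):      # D) recipient validation limits
    s.rcpts += 1
    if not recipient_is_valid:
        s.bad_rcpts += 1
    if s.bad_rcpts > 2 or (s.null_sender and s.rcpts > 1):
        raise DropConnection(s.ip)

def at_end_of_data(s, authenticated, content_points):   # E) skip scan on SMTP AUTH
    if not authenticated:
        s.add(content_points)               # e.g. Sniffer / virus scanner result

Dropping the connection is also the point where the sending IP would be added to the tarpit/greylist list mentioned above.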


  
  

>> Me, I like the idea of accruing a weight against the sending IP when a
recipient lookup fails. <<

You can do that by processing the log file.



Best Regards
Andy Schmidt

Phone:  +1 201 934-3414 x20 (Business)
Fax:+1 201 934-9206 



-Original Messag

Re: [sniffer] IIS SMTP Integration

2005-02-18 Thread Matt




I guess you essentially got my point and what appears to be Sandy's. 
Once you take an Exchange server (or any other server) and insert such
a gateway, you lose your ability to do address validation.  Nowadays
this is vital due to real world circumstances as you have yourself
experienced.  If Sniffer was introduced with some form of MS SMTP
integration and was unable to do address validation during the RCPT TO,
then it could very well create issues beyond what it solves
(backscatter and potentially drowning the CPU).

There will be a solution created for this at some point within the next
year I'm sure.  As to how far it goes in terms of spam blocking, I
don't know.  I suppose the best solution would be to have a full
Declude installation bound to MS SMTP doing both OnInBound and
OnArrival sinks.  The market potential for this would be rather large
in comparison to targeting specific mail servers as they do now, though
it appears that it would be somewhat more complicated.

Matt



Andy Schmidt wrote:

>> The idea being that you don't want any more content searching than is
necessary, particularly when a recipients-dictionary-attack is underway. <<

Okay, but if you wait until the message is stored in the queue and NOW you
have to scan each one with a command-line process - how is THAT better
(that's the transport sink scenario).

What you want to do is:

A) upon connection, check DNS BLs - if matches, add points

B) upon HELO, check HELO rules - if matches, add points

C) upon MAIL FROM - check for <>, if it matches, set a flag (there should
only be ONE recipient)
   check DNS BLs for blacklisted recipients, if matches, add points

D) upon RCPT TO - check for valid recipient - if more than 2 invalid
recipients, drop connection.
   If sender is <> and more than 1 recipient, drop connection
   If recipient is Postmaster@ or Abuse@ or Root@ (etc) and more than 1
recipient, drop connection (with proper return code "too many recipients")

E) at EOD (after the CR.CR), 
   - check for SMTP AUTH (so you can skip scanning)
   - otherwise scan the content with Sniffer (and Virus Scanner) - add
points
   
If the points exceed your threshold at ANY time, drop connection.  No bounce
message necessary, no need to store the message in the queue, etc.

Whenever you drop connection, add IP to your "tarpit/graylist" list.  Use
that for subsequent "upon connections"


  
  

>> Me, I like the idea of accruing a weight against the sending IP when a
recipient lookup fails. <<

You can do that by processing the log file.



Best Regards
Andy Schmidt

Phone:  +1 201 934-3414 x20 (Business)
Fax:+1 201 934-9206 



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
On Behalf Of Colbeck, Andrew
Sent: Friday, February 18, 2005 08:06 PM
To: sniffer@SortMonster.com
Subject: RE: Re[2]: [sniffer] IIS SMTP Integration


Pete, Matt was specifically referring to envelope rejection (as well as
other info gathering actions) based on address validation (and any other
criteria based on information as it can be tested, like a local blacklist
against the sending IP).

The idea being that you don't want any more content searching than is
necessary, particularly when a recipients-dictionary-attack is underway.

Me, I like the idea of accruing a weight against the sending IP when a
recipient lookup fails.  I get a lot of spam that is a single message which
is CC'ed and BCC'ed against a lot of addresses that are simply guessed, and
I want to punish those, and ideally, share that news with other mailservers.

Andrew 8)

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
On Behalf Of Pete McNeil
Sent: Friday, February 18, 2005 4:33 PM
To: Matt
Subject: Re[2]: [sniffer] IIS SMTP Integration


On Friday, February 18, 2005, 7:23:03 PM, Matt wrote:

M> Sanford Whiteman wrote:

   Incidentally, it is a transport sink, not a protocol sink, meaning
   that envelope rejection is not possible. I can't defend this as solely
   a choice made for stability, as it was also a choice necessitated by
   my prototyping in VB (and, though it's been in production, it's not
   much more than a prototype due to the lack of docs).

M> Yes, that really is a key issue.  It needs to be a transport sink, or
M> at least work with one in order to prevent ongoing issues with brute
M> force spam floods.  I'm not sure that Peter from VamSoft understands
M> the large market out there for non-Exchange based setups, or even for
M> going the extra mile that is necessary for this stuff, though that
M> might be an issue with resources and not just simply understanding.

Pl

Re: [sniffer] IIS SMTP Integration

2005-02-18 Thread Matt
Sanford Whiteman wrote:
Incidentally,  it  is  a  transport sink, not a protocol sink, meaning
that envelope rejection is not possible. I can't defend this as solely
a  choice  made for stability, as it was also a choice necessitated by
my  prototyping  in  VB (and, though it's been in production, it's not
much more than a prototype due to the lack of docs).
 

Yes, that really is a key issue.  It needs to be a transport sink, or at 
least work with one in order to prevent ongoing issues with brute force 
spam floods.  I'm not sure that Peter from VamSoft understands the large 
market out there for non-Exchange based setups, or even for going the 
extra mile that is necessary for this stuff, though that might be an 
issue with resources and not just simply understanding.

Matt
--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Sniffer and SURBL

2005-01-10 Thread Matt
Pete,
Not that I necessarily expect for this to happen, but rather something 
to consider as things progress...

With the cross-checking of SURBL data, one needs to be careful to not be 
double scoring tests that target the same piece of data.  An example of 
this would be scoring each DUL hit separately as opposed to scoring DUL 
hits, one or many, as a single scoreable hit.  Unfortunately we don't 
know exactly what triggers a hit in Sniffer when it happens.  In 
reality, the very high correlation between SURBL hits and Sniffer can be 
attributed both to the +95% spam hit rates in Sniffer and to the 
cross-referencing.  If I were able to tell that Sniffer hit on 
some other form of content, and SURBL hit on the URL, the combination of 
hits would be stronger as a group, but not knowing, and understanding 
that there would be a fair amount of hits for the same piece of data, 
the combination of hits should be treated as weaker as a group than the 
sum of scores.

I guess the only way to modify Sniffer to such needs might be to 
reclassify rules based on the type of data that the rule targets instead 
of the type of content the rule was generated from.  As you are aware, 
Subject rules are weaker than body domain name hits, and body domain 
name hits are weaker than full URL hits.  Some current result codes are 
highly suggestive of the types of rules that are contained, such as 
obfuscation rules, but porn rules are rather wide open.  I guess as an 
administrator, I would prefer to know the classification based on the 
reliability of the data used as opposed to the genre of spam from which 
the rule was created (and this is not perfectly consistent in subsequent 
hits).

I fear that this would mostly benefit power users who construct 
combination filters from this data and would benefit from classifying 
such hits, though some benefit could come by way of the simple weighting 
of Sniffer in isolation from other such things where there are notable 
differences in the reliability of the data.

This might also be somewhat impractical, and certainly not expected 
outside of a large change in the way that the app behaved.  So again, 
just something to chew on, and I'm sure it has crossed your mind before.

Thanks,
Matt

Pete McNeil wrote:
On Monday, January 10, 2005, 7:17:29 PM, Andrew wrote:
CA> Pete, I thought that you had said at one point that SortMonster fetches
CA> one or more SURBL zones and incorporates those as spam data for Message
CA> Sniffer?
CA> It seems like a great idea to me.  But then, from my distance, a lot of
CA> things look like a good idea for someone else to implement!
That's not exactly how it works -
What we do is that our robots will look at some of the messages that
hit our spamtraps and if they find a URI that looks like a good choice
they will cross check it with SURBL.
More often than not we've already got the URI coded from our manual
work, but this robotic mechanism allows the rulebase to keep up minute
by minute - and since the email triggering this work has come in
through one of our spamtraps, it acts like an extra check - so those
listings that we do have tend to be very solid.
At some point we may bolt on some additional real-time lookups like
SURBL etc... but we don't have plans for that just yet, and most
installations already have these tools employed in other mechanisms
they are running, so it would be redundant for us to add it - at least
at this point.
Hope this helps,
_M


--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Still having problems

2005-01-10 Thread Matt




I just wanted to add some stats that I thought might be of some use
here.  I gathered info on my block rates over the past three days and
compared my Sniffer hits to them.  There has been no measurable change
to my system with an average of 96% of spam getting tagged by Sniffer. 
I'm at least not seeing any issues.
FRIDAY
======
Blocked:  89.45% of Total Message Volume
Sniffer:  85.74% of Total Message Volume
----------------------------------------
Sniffer Capture Rate on Spam: 95.85%

SATURDAY
========
Blocked:  96.57% of Total Message Volume
Sniffer:  92.55% of Total Message Volume
----------------------------------------
Sniffer Capture Rate on Spam: 95.84%

SUNDAY
======
Blocked:  96.19% of Total Message Volume
Sniffer:  92.60% of Total Message Volume
----------------------------------------
Sniffer Capture Rate on Spam: 96.26%


The way that I generated these stats was to assume that my "Hold"
weight in Declude was an accurate approximate delineation between ham
and spam.  Then the total for the Sniffer tests was added together and
divided by the block rate in order to calculate the "Sniffer Capture
Rate on Spam".
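
(Taking Friday as a worked example: 85.74 / 89.45 = 0.9585, which is the 95.85% capture rate shown above.)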

Hope this helps.

Matt




Pete McNeil wrote:

  On Monday, January 10, 2005, 12:38:45 AM, Kirk wrote:


KM>   I would like to attack this more aggressively. The increase we've seen in
KM> spam getting through over the last week has brought on a dramatic increase
KM> in customer complaints. What different approaches might I be able to take?

I'm sorry to hear that. Spam is an increasing problem.

I have adjusted your rulebase to the new rule strength threshold 0.5.

Earlier today I coded a number of rules that are based on some of the
subjects you submitted.

If you can think of any black rules that you would feel comfortable
coding on your system please let me know and I will add them. For
example, you may be willing to accept single words or word pairs that
we could not normally code into the core rulebase.

I am open to any ideas you have and I will help you to create rules
that meet your criteria.

Hope this helps,
_M




This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


  


-- 
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=




Re: [sniffer] Tweaking our rule base

2005-01-06 Thread Matt
If this person is using Declude, Mdaemon or SpamAssassin, they might 
want to consider just using the blackholes.us zones that list every 
known IP delegated to certain countries that are known to have spam 
problems (in addition to some providers as well).  These zones can be set 
up as simple IP4R tests and weighted accordingly in one's config.

   http://www.blackholes.us/
Matt

Pete McNeil wrote:
On Thursday, January 6, 2005, 3:42:21 PM, Jeff wrote:
JW> Hi,
JW> Whats the procedure for tweaking our rule base?  We would
JW> like to catch anything from foreign domains. If thats not
JW> possible, I  saw you had an option for catching the foreign
JW> character sets.
I can work with you to create aggressive black rules for your
rulebase. I will contact you off-line soon to talk about this.
Thanks,
_M
 



--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] reporting spam in bulk

2005-01-05 Thread Matt
Pete,
I've been meaning to add a link to a script from within Killer WebMail 
that will allow me to report things to you with a single click.  If I do 
this, am I correct in assuming that I should just use something like 
CDONTS to construct a mail and place the original source as the body?  
If not, what would be the preferred method?

Note that I have original D*.SMD files for everything in the range of 
E-mails that I would consider reporting (using Declude's COPYFILE).  
Generally speaking, this would be a customized setup, although 
achievable by anyone with IMail and Declude.  The hack to KWM is just 
some JavaScript to extract the spool data file name from my message 
headers that I insert (full headers must be turned on in Web mail), and 
this links to an ASP script on my server that handles everything else.

Matt

Pete McNeil wrote:
On Wednesday, January 5, 2005, 4:03:28 PM, Rick wrote:
RR> 100's of spams a problem, LOL!
RR> Before sniffer I was facing around 10 thousand spams a day. But then I'm
RR> coordinating 1000's of domains, so on a per domain basis, it's actually very
RR> small.
RR> I think what I'll do is route a combined spam report email to a server
RR> script which will break it down and resubmit individual messages to your
RR> spam@ address. However, this will still be sent to you as an attachment. The
RR> advantage is that the original header info will be in place, the
RR> disadvantage is that you might still be ignoring messages with attachments,
RR> right?
Not necessarily. If they are not encoded we usually get good use out
of them even if they are attachments. The trick is that they will be
one message per message - so our automated tools will help us see what
we need to see.
It would be better to see them as a redirect, followed by a simple
forward, then as a last resort an attachment. As long as they are one
at a time we should be in good shape. I'm sure Gonzo is watching and
I'll talk to him about it. Once this starts happening we'll coordinate
and give you some feedback.
RR> If you don't take spam report messages with attachments, how would you be
RR> able to get the original internet header mail info?
The trick is that unless the message comes from a clean spamtrap we
don't trust the headers anyway. Under "abuse" rules, the entire
message is always suspect, so we will only dig into the headers if we
have good reason to trust what we're looking and, and we know what we
are looking for.
Spamtrap rules are different because the delivery chain is mapped and
consistent - so we know where the goodguy headers stop and the
questionable headers begin.
Thanks!
_M
PS: I've had one other call for this mechanism - a script that will
split multiple spam attachments and forward them to us. I would be
interested to see what you develop just in case it's applicable in
other places - or perhaps adaptable as a service in some way.


--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] new spam storm?

2005-01-04 Thread Matt
I've noted that dictionary attack type spam is generally of this 
variety, and while you are probably blocking a great deal of this, the 
sheer volume makes it look like you aren't doing that well against it.

I've also noted that the domains that they use are frequently changed, 
thus escaping both SURBL and Sniffer for periods of time.  I am under 
the impression that these spammers have taken to using multiple domains 
at once and segmenting the domains that they attack with them so that if 
one domain gets listed in SURBL (or Sniffer for a select group), then it 
won't affect their entire campaign.  Some of these campaigns are so high 
in volume that there is no way that the domains could otherwise escape 
being listed for more than 15 minutes.

This technique would fall under the guise of "if I was a spammer, this 
would be what I would do."  Generally these guys are only underachievers 
because spam prevention generally sucks and even if blocked, the 
anti-social characteristics of hijacking computers and pummeling others 
with their garbage has enough redeeming value (from their perspective) 
to keep them happy.  They are, however, capable of finding ways around 
almost every method that we use; for the most part they just don't 
bother to try, though they are definitely trying harder than before.

Something else that I have noted recently is that they seem to be going 
after DUL space overseas instead of exclusively crawling well known and 
well tagged IP space in North America.  It seems that the majority of 
zombie generated spam that gets through or is scored low on my system is 
originating from overseas.

Maybe applicable in your case, maybe not.
I believe that Pete's plans for incremental updates will help to address 
such issues by making Sniffer even more real-time than it already is.

Matt

Kirk Mitchell wrote:
 Seems like I've been getting a ton of spam in the last few days that's
been scored as either LOW or CLEAN, many of them for cheap drugs, watches
or my cheating wife. I have AutoSNF running every 2 hours, so it shouldn't
be due to outdated rulesets. Is anyone else seeing this, or could I be
missing something?
Thanks,
 

--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Sniffer Notifications now failing declude spamheaderstest

2005-01-03 Thread Matt




Jim,

See the Declude list, it is a Declude problem

In short, turn off SPAMHEADERS by commenting out the test.  It has a
bug with 2005 years in the date header.  They should be coming out with
a fix shortly.

Matt



Jim Matuska wrote:

Has anything changed recently in the format of the sniffer notification messages?  I am noticing all the notifications for the last few days have been failing Declude's SPAMHEADERS test; this hasn't happened before.

Jim Matuska Jr.
Computer Tech2, CCNA
Nez Perce Tribe
Information Systems
[EMAIL PROTECTED]


-- 
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=




Re: [sniffer] Triggered rulebase update instructions

2004-12-29 Thread Matt
MB> get people to use the automated system and compressed files,
MB> and this adds complexity to the setup.  My thought here would
MB> be to create a "chaining" option that could be used to kick

LW> Again, this script is focused only on IMail users.  If we
LW> follow  your suggestion in section 4 above, then why move the
LW> e-mail report out of the  basic script?
IMO there is a lot of complexity here.
The notification scheme could expand into a menagerie of logging,
email, and action mechanisms.
To simplify this then why not simply prepare a stub that optionally
calls a notification script with a variable. The variable indicates
either success or failure.
The notification mechanism can then evolve on its own as a separate
problem. ?
 

From the newbie's standpoint, I completely agree about the complexity 
(hence my recommendation).  From me personally, this isn't an issue.

I figured that eliminating tertiary tasks would simplify the 
instructions, the script, and therefore the entire implementation.  The 
goal is to make it easy for as many people as possible to implement.  
The hook to call another script I don't believe should be documented 
anywhere but the script itself in order to simplify, and if it was my 
script and Sniffer was my business, I would probably personally choose 
to let the modders (power users) take care of it for themselves.


LW> Let see what Pete and others on the list think.  If we can
LW> come up  with a basic consensus, then we can run with it. 
LW> Whatever we decide, I  would welcome your help.
 

IMO, the goal here isn't necessarily to reach a consensus, it's to make 
things easier, more accessible to the novice, and more widely 
implemented.  If this was based on my own personal needs, as I stated 
before things would be different.  I wouldn't expect a consensus to 
necessarily reflect the best choices under these conditions.  Maybe it's 
just a matter of establishing 'agreement', and primarily so according to 
Pete's direction.

Once an agreement is made, I would be happy to test the scripts, and 
also work on any other piece that you wish to ask of me, so just ask 
when the time comes.  Naturally, very few ever have to ask for my 
opinions :)

Matt
--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Triggered rulebase update instructions

2004-12-28 Thread Matt




Bill,

I think that this is overwhelmingly much better (the whole thing), but
I have a few suggestions to add.
1) The commenting in the CMD file seemed a bit excessive
and that made it a little hard to follow.  It might be nice to arrange
all of the tweakable variables in a single section instead of
separating each one out, and then block coding the main program with a
standard amount of commenting.  I think that would make the script more
readable for both programmers as well as beginners.
  
2) I personally find it to be a bit messy to have everything running
from within my Sniffer directory.  After all of the other CMD files,
old rulebases, service related files, logs, etc., it's not obvious what
is needed or not.  I would suggest coding this up with a default
directory structure of using a subdirectory called "updates".  This
would require a separation of variables for the updates directory and
the destination directory I believe.
  
3) I think it would be a good idea to consider a different default
directory structure.  With Sniffer evolving to support other platforms,
IMail effectively abandoning us, and Declude moving to SmarterMail and
possibly others, I could very well see Sniffer establishing a
non-dependant directory structure.  I would suggest that the default
recommendation become "C:\Sniffer", which might also necessitate a
change in some of Pete's other documentation.  Keep in mind that it is
confusion and convolution that contributes to the lack of efficient
rulebase downloads and not the lack of resources or help.  IMO, things
would benefit from standardization of this sort, and it should all be
done with purpose.
  
4) Since this setup is targeted specifically at IMail, I would
recommend that different packages be provided for different platforms,
and these should probably be in separate zip's so that one doesn't get
all sorts of extra stuff.  This could be "Rulebase_Updater_IMail.zip",
but there should also be a Linux, MDaemon and SmarterMail updater added
to the list.
  
5) I'm thinking that including the notification process within this
script might be too much.  The primary goal is to get people to use the
automated system and compressed files, and this adds complexity to the
setup.  My thought here would be to create a "chaining" option that
could be used to kick off any script, not necessarily IMail1.exe.  You
could then include this separate notification script in the package and
have it configured from within that file, leaving only the optional
chaining command within the primary script and stripping out the rest
of the stuff.  I do know that from interface design there is a basic
tenet where you don't want to overwhelm the viewer/visitor, otherwise
they retain even less than they would with a smaller group of things. 
Programming is often at odds with this tenet, which is fine for
programmers because the functionality necessitates complication, but
the issue being addressed here is really ease of use for the lowest
common denominator, and the primary goal is just the downloads.  You
should consider that this whole thing will be used by people with very
little administration experience, no programming experience, and in
some cases, English will be a second language to them (or only
translated by a tool of some sort).
  

Most of this stuff is somewhat minor taken in isolation from each
other, but I believe that it could be a bit tighter in one way or
another for a better result.  I'll volunteer my own services if you
would like for me to provide examples of any one of these things, but
I'll wait for your direction before doing so.  I think the most
important thing would be for Pete to provide some guidance for the
preferred directory structure (independent of the app), so that this
could be used for the default settings in this and other scripts.

Matt


Landry William wrote:

  Attached is an updated instructions file to fix some typos and missed
information.  I'll send out another update after receiving feedback from
others.

Bill



---
This message and any included attachments are from Siemens Medical Solutions 
USA, Inc. and are intended only for the addressee(s).  
The information contained herein may include trade secrets or privileged or 
otherwise confidential information.  Unauthorized review, forwarding, printing, 
copying, distributing, or using such information is strictly prohibited and may 
be unlawful.  If you received this message in error, or have reason to believe 
you are not authorized to receive it, please promptly delete this message and 
notify the sender by e-mail with a copy to [EMAIL PROTECTED] 

Thank you
  

==
Sniffer triggered rulebase update instructions
==
	By [EMAIL PROTECTED]

These are in

Re: [sniffer] Downloads are slow...

2004-12-28 Thread Matt




Yep.  Despite the fact that one could design a process to work properly
with the -N option (also leaving the old file for comparison), since
this is generally scheduled by users on the hour, it would still
produce a run on the bandwidth at the top of some or even every hour. 
Enforcing a time bracket is not realistic.

Using the program alias is the best way all around for now, and I
believe that this should be promoted as the only option for IMail users
at least.  It appears that Pete times his notifications so that it
doesn't produce backups, and I assume that notifications are sent
immediately upon publishing the new customized rulebases, so it is also
the fastest method to achieving an update.

The code is there, but I just think that it can be better variablized
to adjust for different directories and codes, so it fits appropriately
in everyone's config.  Packing this together with gzip and including
that in the default setup would also be seemingly preferable.  Throwing
together a how-to that was written for the lowest common denominator
would enhance the ease of use for many (pictures are nice where
appropriate), and would help with reducing support.

Matt



Woody G Fussell wrote:

  Why do you not use a "program alias" and only download when you receive
notification that a new rule base is available? If everyone used gzip and
only downloaded when notified the bandwidth could be controlled by
staggering the notifications. 

Woody Fussell
Wilbur Smith Associates


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
On Behalf Of Jim Matuska
Sent: Tuesday, December 28, 2004 12:49 PM
To: sniffer@SortMonster.com
Subject: Re: [sniffer] Downloads are slow...

I agree that something needs to be done about the update scripts that are 
inadvertently downloading the full rulebase all the time.  I didn't even 
know it but we were doing this until I went through our update script again 
this morning and found it didn't have the -N option in Wget, so we were 
downloading the entire rulebase whether we needed it or not.  The gzip 
compression is cool, and I will likely implement it soon, but I think the 
major problem is everyone that is using scripts that keep downloading the 
same file over and over again tying up the bandwidth.

I would recommend 2 things to help alleviate this problem:
1.  Monitor connections to rulebase downloads to see who is downloading the 
rulebase everytime they connect on a schedule to determine who has their 
scripts setup wrong, and contact them to correct it.  It took me under a 
minute to add the -N option to wget, it should be a no brainer.

2.  Correct the scripts posted on the Sniffer website to include date 
checking, and possibly gzip compression, I used one of those scripts for our

system and assumed it would be setup correctly, but it was not.

Jim Matuska Jr.
Computer Tech2, CCNA
Nez Perce Tribe
Information Systems
[EMAIL PROTECTED]

- Original Message - 
From: "Matt" <[EMAIL PROTECTED]>
To: 
Sent: Monday, December 27, 2004 10:03 PM
Subject: Re: [sniffer] Downloads are slow...


  
  
I agree entirely.  If bandwidth has become an issue, it would be resolved 
with a focus on producing very tight and easily customizable scripts (a 
variables section in the top of the scripts).  I believe that going the 
VBScript route might be the best way to go, or at least I believe that more
of us can hack a more involved VBScript than a batch or CMD file. 
Enforcing compressed downloads and checking for timestamps prior to 
downloading should be done in these scripts as well.

Right now the script examples assume a familiarity with scripting, and 
while local participants can mostly handle that stuff, the non-vocal ones 
are most likely to not even be aware of the issues or how to fix them, and
might have scripted timed downloads because it is definitely the easiest 
way to go.  This is probably the majority of the customer base.  There is 
an impression for instance with Declude's user base that +80% use 
primarily the default config which most of us know is severely lacking in 
comparison to the potential that exists by tweaking the settings.

With better script examples and a careful step-by-step readme promoted in 
a mailing to your customers, I believe that this issue could go away, or 
at least theoretically it should.

Personally, I have mine tied to the E-mails, I download the zipped 
versions, I don't bother checking on the status, and have never noticed 
any issues as a result.  It would be a small shame if I was missing 
downloads due to timeouts, but not that big of a deal if this has never 
caused a noticeable problem.

Matt




Andy Schmidt wrote:



  Pete,

With all due respect - I think the download problem is "self-inflicted",
because your web site is providing unsuitable examples to your customers!
Even wi

Re: [sniffer] Downloads are slow...

2004-12-27 Thread Matt
I agree entirely.  If bandwidth has become an issue, it would be 
resolved with a focus on producing very tight and easily customizable 
scripts (a variables section in the top of the scripts).  I believe that 
going the VBScript route might be the best way to go, or at least I 
believe that more of us can hack a more involved VBScript than a batch 
or CMD file.  Enforcing compressed downloads and checking for timestamps 
prior to downloading should be done in these scripts as well.

Right now the script examples assume a familiarity with scripting, and 
while local participants can mostly handle that stuff, the non-vocal 
ones are most likely to not even be aware of the issues or how to fix 
them, and might have scripted timed downloads because it is definitely 
the easiest way to go.  This is probably the majority of the customer 
base.  There is an impression for instance with Declude's user base that 
+80% use primarily the default config which most of us know is severely 
lacking in comparison to the potential that exists by tweaking the settings.

With better script examples and a careful step-by-step readme promoted 
in a mailing to your customers, I believe that this issue could go away, 
or at least theoretically it should.

Personally, I have mine tied to the E-mails, I download the zipped 
versions, I don't bother checking on the status, and have never noticed 
any issues as a result.  It would be a small shame if I was missing 
downloads due to timeouts, but not that big of a deal if this has never 
caused a noticeable problem.

Matt

Andy Schmidt wrote:
Pete,
With all due respect - I think the download problem is "self-inflicted",
because your web site is providing unsuitable examples to your customers!
Even with moderate bandwidth, your server would be able to handle tens of
thousands of hits a day.  Checking if an updated file exists should barely
be noticeable - as long as it doesn't result in an unnecessary download.  

You probably suffer TWO problems:
A) Most of your customers are downloading rules based on a schedule, even if
no rules exists. Potential savings: 100% per download attempt.
B) Your customers are not downloading "compressed" rule files. 
Potential savings: about 66%, but that's not bad either.

One likely explanation is that at least THREE of your sample scripts do an
unconditional and uncompressed download!  Here are the 3 URLs you list on your
web site and the WGET commands they are using:

http://www.sortmonster.com/MessageSniffer/Help/UserScripts/david_snifferUpdateMethod.zip
wget http://www.sortmonster.net/Sniffer/Updates/.snf -O .new --http-user=username --http-passwd=password

http://www.sortmonster.com/MessageSniffer/Help/UserScripts/Hank_SnifferScripts.zip
wget http://www.sortmonster.net/Sniffer/Updates/.snf -O .new --http-user=sniffer --http-passwd=ki11sp8m

http://www.sortmonster.com/MessageSniffer/Help/UserScripts/Michiel_AutoUpdate.zip
wget http://sniffer:[EMAIL PROTECTED]/Sniffer/Updates/12345678.snf -O .tst

My recommendation: Replace these with examples that implement conditional,
compressed downloading.
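
One way to act on that recommendation, sketched in Python with only the standard library (illustrative only: the URL and file names are placeholders, HTTP authentication is omitted, and it assumes the server honors If-Modified-Since and offers a gzip-compressed copy of the rulebase):

# Conditional, compressed rulebase download sketch (illustrative only).
import gzip
import os
import time
import urllib.error
import urllib.request

URL = "http://www.sortmonster.net/Sniffer/Updates/LICENSEID.snf.gz"   # placeholder
LOCAL = "LICENSEID.snf"                                               # placeholder

req = urllib.request.Request(URL)
if os.path.exists(LOCAL):
    # Only fetch if the server copy is newer than the file we already have.
    req.add_header("If-Modified-Since",
                   time.strftime("%a, %d %b %Y %H:%M:%S GMT",
                                 time.gmtime(os.path.getmtime(LOCAL))))
try:
    with urllib.request.urlopen(req) as resp:
        data = gzip.decompress(resp.read())    # compressed transfer saves ~66%
    with open(LOCAL + ".new", "wb") as f:
        f.write(data)
    os.replace(LOCAL + ".new", LOCAL)          # swap the new rulebase in place
    print("rulebase updated")
except urllib.error.HTTPError as err:
    if err.code == 304:
        print("rulebase unchanged, nothing downloaded")
    else:
        raise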
Best Regards
Andy Schmidt
H&M Systems Software, Inc.
600 East Crescent Avenue, Suite 203
Upper Saddle River, NJ 07458-1846
Phone:  +1 201 934-3414 x20 (Business)
Fax:+1 201 934-9206
http://www.HM-Software.com/
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Pete McNeil
Sent: Monday, December 27, 2004 08:10 AM
To: Chuck Schick
Subject: Re: [sniffer] Downloads are slow...
On Monday, December 27, 2004, 1:17:21 AM, Chuck wrote:
CS> Pete:
CS> It appears on weekends the sniffer downloads are really slow. I am 
CS> downloading at 14 minutes past the hour and I am about 1/20 th of 
CS> the normal speed.

That is an unusual observation - I don't think weekends have anything to do
with making things slower. I will look at the logs to see if I can figure
out what heppened.
You're not manually downloading I hope?
_M


--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html


Re: [sniffer] Sniffer updates...

2004-12-22 Thread Matt

Scott Fosseen wrote:

  I may have missed this if it was discussed.  But my last conversation
  with IPSwitch is that as a current user of IMail I can continue to
  purchase support and keep getting updates to the IMail portions without
  going to the new product.  The person told me that the Collaboration
  Suite will still use the IMail core so IMail will continue to be
  developed as a product.  I just can not purchase IMail by itself anymore.

  So my understanding is that IMail will still be updated for existing
  users.

...sure, for a 40% increase in the cost of your support contract, and
absolutely no guarantee that they won't again cancel the product like
they did a couple of months ago, only to offer a concession at this huge
increase in price.  They have indicated clearly that the product won't be
marketed to anyone but existing customers, and that new purchases would
have to be negotiated by calling them on the phone and proving to them
that you are worth their time.

This was nothing but a way to recover some of the money that they were
clearly going to lose by not offering the option at all.  Don't be a
sucker for their game; at least know what you are getting.  This is the
same company that claimed that their customer base was "clamoring" for
a collaboration suite and mandatory bundling with Symantec AntiVirus
that required a repurchase of the software for $6,000 (unlimited users)
and a yearly support contract of $4,000 after that.

Please don't get me started :)

Matt
-- 
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=




Re: [sniffer] Sniffer updates...

2004-12-22 Thread Matt

Joe,

In their defense, I don't think that they necessarily knew any better
than to have approached it this way.  I don't get the sense that the new
ownership has worked on the IT side of the business before or understands
security and trust the way a corporate administrator would; in fact,
Barry comes from the marketing side of the business, and I'm afraid that
this is a bit of trial-by-fire.  I expect (hope) that he will get the
message and change their ways before this is released in final form.
Scott didn't have the resources to enforce licensing, and as a business,
this is critical to their success.  I have no qualms with that goal.
They didn't intend to violate privacy or functionality; they just
overlooked it.

The whole IMail debacle is a different story.  Most everyone using
Declude on that platform will eventually be switching, and Declude has
been more than fair by offering free migrations of their license to a
different platform, starting with SmarterMail, which is very reasonably
priced and seemingly quite responsive to its customers.

Matt



Joe Wolf wrote:

  I'm currently using Sniffer via Imail and Declude.  We all know that
  Ipswitch has lost their mind and is abandoning the small ISP, and now
  it seems that Declude has lost their way.  The new version of Declude
  is tied to a single MAC address.  That counts me out since I run
  multiple NIC's in the same machine and am multi-homed.  Their spyware
  "phone home" system is a violation of our security policies as well.

  That leads me to Sniffer.  I love the product.

  Does anyone have a complete list of mail servers that have direct
  support for Sniffer?  The Imail / Declude thing is too much to deal
  with and I'm going to make a change.

  Thanks,
  Joe


-- 
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=




Re: [sniffer] Change in coding policies

2004-12-21 Thread Matt
FYI,
I'm still debating what to do with this stuff.  I'm hoping 
that it will go away, albeit slowly, and I presently rarely take action 
to correct any issues with this E-mail, though I do reprocess some 
individual messages.  It seems that many of the C/R providers have gotten 
better at filtering spam prior to the challenge, and that also 
helps...but they're still idiots :)

If I implement something, I will probably make it an optional filter for 
my domains.  I already isolate the bounce stuff, which is held in its own 
folder for each client.  The NDR issue has grown much bigger recently 
because of one single spammer that will use the same real address for 
about a week at a time (sporadically), and I have been contacted by 
clients 3 or 4 times in the past two months about high volumes of 
bounces to single accounts.  We do catch most of it already, but not 
the stuff as generic as what IMail would send (no content), and there 
is unfortunately no good way to tell the good from the bad.  Right 
now I can only offer to block all NDR's, but I suggest that they just 
wait a week and the issue will clear up, and thankfully it always has so 
far.
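
For illustration only -- this is not taken from the actual setup described
in this thread -- here is a rough sketch of the kind of heuristic that
isolating bounce traffic implies.  The header patterns, subject strings,
and folder paths below are all assumptions:

  #!/bin/sh
  # Hypothetical backscatter triage (a sketch only).  Flags a message file
  # as a probable bounce or challenge when the envelope sender is null and
  # the subject looks like an NDR or a C/R challenge.
  is_backscatter() {
      msg="$1"
      # Look only at the header block (everything up to the first blank line).
      headers=$(sed -e '/^$/q' "$msg")
      echo "$headers" | grep -qi '^Return-Path: <>' || return 1
      if echo "$headers" | grep -Eqi '^Subject: .*(Undeliver|Delivery (Status|Failure)|Returned mail|failure notice|Please confirm)'; then
          return 0
      fi
      return 1
  }

  # Example: sweep bounce-like messages into a per-client holding folder
  # instead of blocking them outright.
  for f in /var/spool/mail/incoming/*.msg; do
      [ -f "$f" ] || continue
      is_backscatter "$f" && mv "$f" /var/spool/mail/bounces/
  done

Note that this only isolates the traffic for review; as described above,
there is no reliable way to separate legitimate NDR's from joe-job
backscatter on content alone.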

Matt

Pete McNeil wrote:
On Tuesday, December 21, 2004, 1:13:15 PM, Matt wrote:
M> Given that the precision is difficult to assign under the single result
M> framework, I don't doubt the choice.  Might I suggest creating a 
M> sub-group for the three main types of backscatter so that individuals
M> can turn them off as a group instead of one rule at a time.  Note that
M> the three groups that I have defined personally are as follows:

M> - Joe-Job NDR's.
M> - Challenge/Response Idiots
M> - AntiVirus Notifications
I'm thinking in this direction - but I'm not sure yet what our coding
rules will allow since there tends to be a lot of overlap.
I don't think we'll be creating C/R related rules, at least not yet.
That's a category where zealots are everywhere and we have learned to
avoid filtering anything that even looks like part of a C/R system
unless specifically asked to do so.
If we get a lot of call for it then I _might_ create a project to code
those rules as an optional add-in on request.
For now we're going to stick to the safe stuff that can be applied
broadly.
_M


--
=
MailPure custom filters for Declude JunkMail Pro.
http://www.mailpure.com/software/
=
This E-Mail came from the Message Sniffer mailing list. For information and (un)subscription instructions go to http://www.sortmonster.com/MessageSniffer/Help/Help.html

