[sniffer] How to deal with False Positives and other Documentation Issues

2008-10-07 Thread Andy Schmidt
Hi,

 

1.   I read this page:

http://www.armresearch.com/support/articles/procedures/falsePositives.jsp

and it seems to be the same.

 

However, should this chapter be expanded to contain information about what
to do if some of the new technologies are responsible for the false
positive? The "panic rule" instructions don't really apply in cases like
this where there IS no rule:

Instead, shouldn't there be a ready-made sample showing how to exempt an
IP that has ended up on the Truncate list, or at least how to move it to the
"caution" list?

 

2.   The explanation of the Log files is incomplete:
http://www.armresearch.com/support/articles/software/snfServer/logFiles/activityLogs.jsp

As you can see from the log snippet I posted, there is a node s:r=0.
However, s:r is not in the documentation.

 

Best Regards,

Andy



[sniffer] Re: How to deal with False Positives and other Documentation Issues

2008-10-07 Thread Pete McNeil




Hello Andy,

Thanks for this -- I will address the documentation issues shortly.

Regarding GBUdb FP issues-- to date we've not had a truncate (result code 20) false positive report from any system that was configured properly.

Are you reporting such an FP?

Depending upon the circumstances you may want to add the IP to your ignore list.

You can drop the record for the IP from GBUdb with SNFClient -drop <ip>, but if the system is not configured properly then the IP will quickly rise back into the truncate list.

If that is being caused by a pattern rule then you need to discover the pattern rule from logs first and then panic that rule and report the FP.
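Those recovery steps can be sketched as a short shell fragment. This is a sketch only: 12.34.56.78 is a hypothetical placeholder IP, and the SNFClient invocation is built and echoed as a command string rather than executed, since availability and exact syntax depend on your installation.

```shell
# Sketch only: 12.34.56.78 stands in for the false-positive source IP.
IP="12.34.56.78"

# Step 1: drop the GBUdb record for the IP.
# (It will climb back into truncate if pattern rules keep matching its mail.)
DROP_CMD="SNFClient -drop $IP"
echo "$DROP_CMD"

# Step 2 (alternative): if the source is a trusted gateway, add the IP to your
# ignore list instead -- not shown here because the mechanism is install-specific.
```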

Hope this helps,

_M

-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.



#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



[sniffer] Re: GBUdb False Positives vs. Rule IDs

2008-10-07 Thread Andy Schmidt
Hi Pete,

>> You can drop the record for the IP from GBUdb with SNFClient -drop <ip>,
but if the system is not configured properly then the IP will quickly rise
back into the truncate list. <<

The IP address in question was a third party IP address, not related to us,
not a gateway. It was not in the ignore list and shouldn't be - does that
qualify as "configured properly"?

>> If that is being caused by a pattern rule then you need to discover the
pattern rule from logs first and then panic that rule and report the FP.<<

Hm - so if we have such a GBUdb FP issue, we would first need to go into the
log for the message ID in question and locate the IP address. THEN we have
to search the log files to find where this IP address may have occurred
(possibly several days of logs, before someone noticed that legitimate email
was missing), in hopes of eventually finding some log entry that relates to
the original rule ID, before we can add it to the panic list?

I suppose it would be technically impossible to include the underlying rule
in the GBUdb, so that it can be properly reported when messages are blocked?


>> Are you reporting such an FP?<<

Yes, your FP support identified the underlying rule and reported it back to
me. Of course, I need to have a panic procedure in place that doesn't rely
on outside assistance. It doesn't happen often, but better to ask the
questions now than when the brown matter hits the air-circulation enhancer.

Best Regards,

Andy



[sniffer] Re: GBUdb False Positives vs. Rule IDs

2008-10-07 Thread Pete McNeil




Hello Andy,

Tuesday, October 7, 2008, 2:40:01 PM, you wrote:




> Hi Pete,
>
> >> You can drop the record for the IP from GBUdb with SNFClient -drop <ip>, but if the system is not configured properly then the IP will quickly rise back into the truncate list. <<
>
> The IP address in question was a third party IP address, not related to us, not a gateway. It was not in the ignore list and shouldn't be - does that qualify as "configured properly"?





Yes.




> >> If that is being caused by a pattern rule then you need to discover the pattern rule from logs first and then panic that rule and report the FP. <<
>
> Hm - so if we have such a GBUdb FP issue, we would first need to go into the log for the message ID in question and locate the IP address. THEN we have to search the log files to find where this IP address may have occurred (possibly several days of logs, before someone noticed that legitimate email was missing), in hopes of eventually finding some log entry that relates to the original rule ID, before we can add it to the panic list?





Yes -- more or less.

It's not as bad as it seems, though.

In order for an IP to get into the truncate range in GBUdb, it has to consistently send messages that match pattern rules. That is, 95% of the time a message sent from this IP must match a pattern rule, AND it has to send a bunch of them.

If the messages come in over separate days, the statistics condense every day -- so on any given day, a number of messages would likely have to come in and match pattern rules.

That means a message matching the offending pattern rule is likely to be listed in the same day's log file, and in previous days' (if any).

It also means that if you find that IP in that log, you are virtually guaranteed that the message you find will have either matched the pattern rule or been truncated.

In this case the probability figure is 1, indicating that all messages from this IP have matched pattern rules. GBUdb override results (caution, black, truncate) do not change IP statistics... so the only way for an IP to get into the truncate range is by consistently producing messages that match pattern rules.

Presumably if substantially all messages from this legitimate source were to be tagged as spam then they would be reported as false positives.

Even if they were not immediately reported as false positives then the daily condensation of GBUdb statistics would force the IP out of the truncate range until more messages were tagged by the pattern rule -- and presumably one or more of those would be reported as false positives.

Bottom line -- it should not be difficult to find log records associated with this IP that are also associated with the pattern rules that pushed it into the truncate range.
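That search can be sketched like this. Everything below is illustrative: the log file name, the sample lines, and the attribute layout are hypothetical stand-ins, not the documented SNF activity-log format, so adjust the grep/sed patterns to whatever your real logs contain.

```shell
# Hypothetical placeholder IP for the false-positive source.
IP="12.34.56.78"

# Illustrative stand-in for an SNF activity log; real entries differ.
cat > sample.log <<'EOF'
<s s='20' ip='12.34.56.78' r='1234567'/>
<s s='0' ip='98.76.54.32' r='0'/>
EOF

# Pull every record for the IP, then extract the rule IDs it matched.
grep -F "ip='$IP'" sample.log | sed "s/.*r='\([0-9]*\)'.*/\1/"
```

The extracted rule ID is what goes into the panic list and the FP report.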




> I suppose it would be technically impossible to include the underlying rule in GBUdb, so that it can be properly reported when messages are blocked?





Yes. The GBUdb engine stores only the statistics about the IPs and the data needed to index and access those records quickly. However, as I've said, information on the pattern rules should be relatively easy to find -- especially for truncate cases.




> >> Are you reporting such an FP? <<
>
> Yes, your FP support identified the underlying rule and reported it back to me. Of course, I need to have a panic procedure in place that doesn't rely on outside assistance. It doesn't happen often, but better to ask the questions now than when the brown matter hits the air-circulation enhancer.





This case is somewhat unique. The pattern rule has been around for a very long time -- so it is extremely unlikely that a similar case would arise again.

A short-term and immediate fix for such a case -- while figuring out what is really going on -- is to reset the statistics on the IP so that it is no longer in the truncate range and so that it would take a large effort to get it back there.

For example, you could run: SNFClient -set <ip> ugly 0 32

This would move the IP's statistics far toward the white, so that a truly large number of hits would be required to push it back into truncate even if every message matched a pattern rule. In the meantime the IP would be in the "normal" range.

This gives you immediate relief with a "fire and forget" command. The GBUdb statistics for the IP will eventually return to the correct value, and by the time that happens you will have resolved the underlying pattern rule issue or made some other decision regarding the IP.
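The reset described above can be sketched as follows. The IP is a hypothetical placeholder, and the command string is only built and echoed rather than executed, since it mirrors the example in the text and requires a real SNF installation to run.

```shell
# Sketch only: 12.34.56.78 stands in for the false-positive source IP.
IP="12.34.56.78"

# "ugly 0 32" pushes the IP's statistics far toward white, per the text above,
# so the IP leaves the truncate range and would need many new pattern-rule
# hits before it could return there.
SET_CMD="SNFClient -set $IP ugly 0 32"
echo "$SET_CMD"
```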

Hope this helps,

_M

-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.






[sniffer] Re: GBUdb False Positives vs. Rule IDs

2008-10-07 Thread Andy Schmidt
Thanks Pete - I'll save that command.

I also suggest including some of these instructions in the documentation
chapters on how to deal with false positives.


[sniffer] Re: Update Script - Choice of WGET Parameter Prevents TimeStamping

2008-10-07 Thread Andy Schmidt
PS: 

 

And, for bonus points: to correctly support the sub-directory feature in
your sample script, you would use the -P parameter, e.g.:
 

wget http://www.sortmonster.net/Sniffer/Updates/%LICENSE_ID%.snf -N -P %RULEBASE_PATH% --header=Accept-Encoding:gzip --http-user=sniffer --http-passwd=ki11sp8m

 

 

From: Andy Schmidt [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 08, 2008 12:41 AM
To: 'Message Sniffer Community'
Subject: Update Script - Choice of WGET Parameter Prevents TimeStamping

 

Hi,

 

I've spent some time over the last few days trying to integrate the "new"
sniffer update scheme into my current scripts, and kept hitting a wall
because the update script was downloading the rulebase file with the CURRENT
date/time instead of your web server's date/time. In the past I had used CURL
instead of WGET, but I'm trying to stick with the provided samples as best
I can (to make future upgrades easier).

 

I finally figured out why the downloaded files were timestamped incorrectly
(and why the "conditional" download that I had working with CURL was not
working with WGET). The reason was the choice of WGET parameters in your
sample.

 

You currently are using:

 

wget http://www.sortmonster.net/Sniffer/Updates/(licensecode).snf -S -O (licensecode).snf --header=Accept-Encoding:gzip --http-user=sniffer --http-passwd=ki11sp8m

 

However, the -O parameter is not a simple "output file" parameter; it
concatenates ALL downloaded documents into a single file, a behavior
intended for cases where MORE than one file is downloaded from the server.
Because the downloads are combined into one file, the file date is simply
set to the current time. Clearly, this scenario does NOT apply to the
rulebase download!

 

Worse, this overrides the normal handling of downloads, where the output
filename is controlled by the server AND the timestamp of the local file is
set to the "Last-Modified" header from your web server. The effect is that
downloaded files have the "wrong" timestamp, which prevents employing a
"conditional" download scheme when the local file already exists with the
correct size and timestamp.

 

The "normal" command (and the one intended for YOUR application) would be:

 

wget http://www.sortmonster.net/Sniffer/Updates/(licensecode).snf -S -N --header=Accept-Encoding:gzip --http-user=sniffer --http-passwd=ki11sp8m

 

This will:

 

a)  Download the file, maintain the filename, and (by omitting -O)
inherit the original timestamp from the web server - as it should be.

b)  The -N parameter further improves the situation: if the local
file already exists with the correct file size and timestamp, the
unnecessary download will be skipped!

 

Again, I know you are only providing your script as a "sample" - but the
closer your sample tracks "reality", the fewer customers will see a need
to adapt it, reducing YOUR tech support effort when customer modifications
lead to errors.

 

Best Regards
Andy Schmidt

Phone:  +1 201 934-3414 x20 (Business)
Fax:+1 201 934-9206 



[sniffer] Re: Update Script - Choice of WGET Parameter Prevents TimeStamping

2008-10-07 Thread Pete McNeil




Hello Andy,

Wednesday, October 8, 2008, 12:50:23 AM, you wrote:




> PS:
>
> And, for bonus points: to correctly support the sub-directory feature in your sample script, you would use the -P parameter, e.g.:
>
> wget http://www.sortmonster.net/Sniffer/Updates/%LICENSE_ID%.snf -N -P %RULEBASE_PATH% --header=Accept-Encoding:gzip --http-user=sniffer --http-passwd=ki11sp8m







Thanks for your help. 

We have to cover a lot of ground, so we often get solutions from our customers and others who just want to help out. We do our best to test and edit.

I will see that your suggestions / corrections are reviewed and included in our updates.

In any case they will be in the mailing list archives ;-)

THANKS!

_M


-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.






[sniffer] Re: Update Script - Choice of WGET Parameter Prevents TimeStamping

2008-10-07 Thread Andy Schmidt
Hi Pete,

Thanks for giving it your consideration. If you decide to revise these
parameters, it will require an extra command in your script (because the
WGET command will output the compressed file as .SNF).

If you don't insist on using WGET, then CURL (also free/open software)
actually has more flexible parameters that will simplify your script,
because it lets you compare the timestamp of the unzipped local .SNF file
against the server timestamp, e.g.:

curl http://www.sortmonster.net/Sniffer/Updates/(licensecode).snf -o (licensecode).snf.gz -s -S -R -z (licensecode).snf -H "Accept-Encoding:gzip" -u sniffer:ki11sp8m

Best Regards,
Andy



[sniffer] Re: Update Script - Choice of WGET Parameter Prevents TimeStamping

2008-10-07 Thread Pete McNeil




Hello Andy,

Wednesday, October 8, 2008, 1:13:50 AM, you wrote:




> Hi Pete,
>
> Thanks for giving it your consideration. If you decide to revise these parameters, it will require an extra command in your script (because the WGET command will output the compressed file as .SNF).





There is actually a bit more to it than that -- the existing script generally works, even though it doesn't preserve the server's timestamp, because:

1. It is usually triggered from SNFServer when SNF detects a newer rulebase file.

2. Any rulebase file recently downloaded is guaranteed to be newer, provided the local server's clock is correct (or close to it).

Also -- are you saying that with the parameters you've provided, WGET would decompress the file on its own, so that we wouldn't need to do that in our script? If so, how does it know for sure where to find GZIP? If not, then it would be a little dangerous to have a .snf file around that looked correct but was in fact not yet decompressed.

Another consideration: if the file name is going to collide with the existing rulebase file, we would want to download into another location so that we don't stomp on the existing rulebase file until we've tested the new one.

It would be preferable to keep using WGET, since there's nothing wrong with it and we've been using it long enough that most SNF folks already have it.

That doesn't mean you shouldn't provide an alternate script that works with CURL, in case someone has a preference.

Best,

_M


-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.






[sniffer] Re: Update Script - Choice of WGET Parameter Prevents TimeStamping

2008-10-07 Thread Pete McNeil




Hello Andy,

Wednesday, October 8, 2008, 1:35:41 AM, you wrote:






> Also -- are you saying that with the parameters you've provided, WGET would decompress the file on its own, so that we wouldn't need to do that in our script? If so, how does it know for sure where to find GZIP?







Sorry -- that was just me hitting the button before my brain caught up with my fingers. You did say that the compressed file would be output as .SNF.

_M

-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.






[sniffer] Re: Update Script - Replace WGET and GZIP with CURL

2008-10-07 Thread Andy Schmidt
Hi Pete,

Agreed, with WGET it gets quite a bit more complicated (because WGET really
doesn't understand the GZ format). That's why you currently have to override
the filename, call it .gz, then call GZIP to unzip it. I've come to the
conclusion that it's not worth the trouble with WGET (as you surmised, it
would make the script more complicated - so forget that approach).

The reason to switch to CURL is that it behaves like a true HTTP application
with GZIP support. You don't need to ship an extra GZIP on top of WGET in
your distribution. CURL requests the file from your server, asks for it to
be GZ-compressed during transport, receives it, decompresses it before
saving it - and sets the timestamp from the server. All that in ONE command.

Basically, you would replace these TWO lines in your current script:

wget http://www.sortmonster.net/Sniffer/Updates/%LICENSE_ID%.snf -O %RULEBASE_PATH%\%LICENSE_ID%.new.gz --header=Accept-Encoding:gzip --http-user=sniffer --http-passwd=ki11sp8m
if exist %RULEBASE_PATH%\%LICENSE_ID%.new.gz gzip -d -f %RULEBASE_PATH%\%LICENSE_ID%.new.gz

with this single line:

curl http://www.sortmonster.net/Sniffer/Updates/%LICENSE_ID%.snf -R -o %RULEBASE_PATH%\%LICENSE_ID%.new -z %RULEBASE_PATH%\%LICENSE_ID%.snf --compressed -u sniffer:ki11sp8m

Best Regards,

Andy
