[sniffer] Re: FW: Memory Usage of MessageSniffer 3

2008-08-01 Thread Pete McNeil
Hello Peer-to-Peer,

Friday, August 1, 2008, 10:49:52 AM, you wrote:



> I also have a scheduled reboot every night since we did
> confirm w/ Arvel at MDaemon there is a memory leak in MDaemon.exe (if
> heavily utilizing their Gateway feature).  Have yet to hear anything from
> AltN regarding a fix on the MDaemon.exe leak.

> In any case, do you think lowering the upper limit will help the
> St9bad_alloc error, or am I fishing in the wrong area?

That will help your memory leak issue because it will leave more room
for the leak to expand before causing allocation failures.

You shouldn't see a significant drop-off in GBUdb performance after
you reduce your upper RAM limit, because your message rates are low
enough that GBUdb should be able to function quite well with fewer
entries. Also, there is a "shared memory" effect that emerges from the
interaction of GBUdb nodes and the cloud... When records are condensed
they are more likely to be bounced off the cloud and get new data, so
what you might lose in fewer records you will gain in more frequent
reflections.

Hope this helps,

_M

-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.


#
This message is sent to you because you are subscribed to
  the mailing list .
To unsubscribe, E-mail to: <[EMAIL PROTECTED]>
To switch to the DIGEST mode, E-mail to <[EMAIL PROTECTED]>
To switch to the INDEX mode, E-mail to <[EMAIL PROTECTED]>
Send administrative queries to  <[EMAIL PROTECTED]>



[sniffer] Re: FW: Memory Usage of MessageSniffer 3

2008-08-01 Thread Peer-to-Peer (Support)
Hmm... sorry, just before posting my question last night I lowered the upper
limit to 100 MB, which is why you're now seeing more normal numbers on your
end.  Six servers were at 150 MB last night, and today the numbers are half
that size.

Here's an example from server#1 (LAST NIGHT):

   [telemetry snippet not preserved in the archive]
Here's an example from server#1 (TODAY):

   [telemetry snippet not preserved in the archive]

I lowered the upper limit because, since installing 3.0, I'm now seeing a
dramatic increase in St9bad_alloc (out of memory) errors on a daily basis
again.  As you know, when that error occurs all mail is allowed to pass
unfiltered, so my server reboots automatically when the St9bad_alloc
error occurs.  I also have a scheduled reboot every night since we did
confirm w/ Arvel at MDaemon there is a memory leak in MDaemon.exe (if
heavily utilizing their Gateway feature).  Have yet to hear anything from
AltN regarding a fix on the MDaemon.exe leak.


In any case, do you think lowering the upper limit will help the
St9bad_alloc error, or am I fishing in the wrong area?


Thanks,
--Paul



-----Original Message-----
From: Message Sniffer Community [mailto:[EMAIL PROTECTED]] On
Behalf Of Pete McNeil
Sent: Friday, August 01, 2008 10:04 AM
To: Message Sniffer Community
Subject: [sniffer] Re: FW: Memory Usage of MessageSniffer 3


Hello Peer-to-Peer,

Thursday, July 31, 2008, 10:05:15 PM, you wrote:

> Would it be correct to say the higher we can increase the size-trigger
> 'megabytes' value, the better filtering results (accuracy) we will achieve?
> In other words, would it be beneficial for us to purchase more memory on our
> server (say an additional 2GB), then increase the 'megabytes' value to 400
> or 800?

> Several of our servers are hitting the upper limit (159,383,552) 150 MB

I don't think so. A quick look at your telemetry indicates that your
systems are typically rebooted once per day. This is actually
preempting your daily condensation.

One result of this is that many of your GBUdb nodes only condense when
they reach their size limit. From what I can see, when this happens a
significant portion of your GBUdb data is dropped. For example,
several of the systems I looked at have not condensed in months. Here
is some data from one of them:

   [GBUdb telemetry data not preserved in the archive]
This one has not condensed since 200804, most likely due to restarts
that prevented the daily condensation timer from expiring.

If this is the case with your other systems as well, it is likely that
they are occasionally condensing when they reach their size threshold,
but if they were allowed to condense daily they would never reach that
limit.

In that case, adding additional memory for GBUdb would probably not
improve performance significantly.

The default settings are conservative even for very large message
loads. For example, our spamtrap processing systems typically handle
3000-4000 msg/minute continuously and typically have timer & GBUdb
telemetry like this:

   [timer & GBUdb telemetry not preserved in the archive]
Note that this SNF node has not been restarted since 20080717 and that
its last condensation was in the early hours today-- most likely due
to its daily timer.

Note also that its GBUdb size is only 117 MBytes. It is unlikely that
this system will reach 150 MBytes before the day is finished.

Since most systems we see are handling traffic rates significantly
smaller than 4.75M/day, it is safe to assume that most systems would
also be unlikely to reach their default GBUdb size limit during any
single day... So, the default of 150 MBytes is likely more than
sufficient for most production systems.

---

All that said, if you want to intentionally run larger GBUdb data sets
on your systems there is no harm in that. Your system will be more
aware of habitual bot IPs etc at the expense of memory. Since all
GBUdb nodes receive reflections on IP encounters within one minute, it
is likely that the benefit would be the ability to reject the first
message from a bad IP more frequently... Subsequent messages from bad
IPs would likely be rejected by all GBUdb nodes based on reflected
data.

It is likely that increasing the amount of RAM you assign to your
GBUdb nodes will have diminishing returns past the defaults currently
set... but it might be fun to try it and see :-)

---

If you are looking for better capture rates you may be able to achieve
those more readily by adjusting your GBUdb envelopes. The default
envelopes are set to avoid false positives on large filtering systems
with a diverse client base.

It is likely that more restricted systems could afford to use more
aggressive envelopes without creating false positives because their
traffic would be more specific to their systems.

In a hypothetical case: If your system generally never receives
legitimate messages from Russian or Chinese ISPs, then it is likely
that your system would begin to learn very negative statistics for IPs
belonging to those ISPs. A slight adjustment to your black-range GBUdb
envelope might be just enough to capture those IPs without creating
false positives for other ISPs where you do receive legitimate
messages.

[sniffer] Out of Office

2008-08-01 Thread rick
I'm currently out of the office until August 11. If you require immediate
assistance, please email Kevin Mahoney at [EMAIL PROTECTED] or call us at
360.527.9111.

Thanks!








[sniffer] Re: FW: Memory Usage of MessageSniffer 3

2008-08-01 Thread Pete McNeil
Hello Peer-to-Peer,

Thursday, July 31, 2008, 10:05:15 PM, you wrote:

> Would it be correct to say the higher we can increase the size-trigger
> 'megabytes' value, the better filtering results (accuracy) we will achieve?
> In other words, would it be beneficial for us to purchase more memory on our
> server (say an additional 2GB), then increase the 'megabytes' value to 400
> or 800?

> Several of our servers are hitting the upper limit (159,383,552) 150 MB

I don't think so. A quick look at your telemetry indicates that your
systems are typically rebooted once per day. This is actually
preempting your daily condensation.

One result of this is that many of your GBUdb nodes only condense when
they reach their size limit. From what I can see, when this happens a
significant portion of your GBUdb data is dropped. For example,
several of the systems I looked at have not condensed in months. Here
is some data from one of them:

   [GBUdb telemetry data not preserved in the archive]
This one has not condensed since 200804, most likely due to restarts
that prevented the daily condensation timer from expiring.

If this is the case with your other systems as well, it is likely that
they are occasionally condensing when they reach their size threshold,
but if they were allowed to condense daily they would never reach that
limit.

In that case, adding additional memory for GBUdb would probably not
improve performance significantly.
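The interplay between the daily timer and the size limit can be sketched as follows. This is a hypothetical model for illustration only -- the function and constant names are assumptions, not the actual SNF implementation:

```python
# Hypothetical sketch of GBUdb condensation triggering -- names and the
# trigger logic are assumptions, not the actual SNF source.
TIME_TRIGGER_SECONDS = 24 * 3600   # daily condensation timer
SIZE_TRIGGER_BYTES = 150 * 2**20   # default 150 MByte upper limit

def should_condense(seconds_since_last_condense, db_size_bytes):
    """Condense when the daily timer expires OR the size limit is hit.

    A nightly reboot resets the clock, so the timer never gets a chance
    to expire and the node only condenses when it finally hits the size
    limit -- dropping a significant portion of its data at once.
    """
    return (seconds_since_last_condense >= TIME_TRIGGER_SECONDS
            or db_size_bytes >= SIZE_TRIGGER_BYTES)

# Node rebooted every 12 hours, just under the limit: never condenses.
assert not should_condense(12 * 3600, 149 * 2**20)
# Same node once it reaches the 150 MB limit: forced condensation.
assert should_condense(12 * 3600, 150 * 2**20)
# Node left running past a full day: normal timer-driven condensation.
assert should_condense(25 * 3600, 10 * 2**20)
```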

The default settings are conservative even for very large message
loads. For example, our spamtrap processing systems typically handle
3000-4000 msg/minute continuously and typically have timer & GBUdb
telemetry like this:

   [timer & GBUdb telemetry not preserved in the archive]
Note that this SNF node has not been restarted since 20080717 and that
its last condensation was in the early hours today-- most likely due
to its daily timer.

Note also that its GBUdb size is only 117 MBytes. It is unlikely that
this system will reach 150 MBytes before the day is finished.

Since most systems we see are handling traffic rates significantly
smaller than 4.75M/day, it is safe to assume that most systems would
also be unlikely to reach their default GBUdb size limit during any
single day... So, the default of 150 MBytes is likely more than
sufficient for most production systems.
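The 4.75M/day figure follows directly from the quoted per-minute rates; a quick sanity check, assuming the mid-range rate:

```python
# Sanity check of the quoted rates: 3000-4000 msg/minute works out to
# roughly 4.75M messages/day at an assumed mid-range rate of 3300/min.
msgs_per_minute = 3300
msgs_per_day = msgs_per_minute * 60 * 24
assert msgs_per_day == 4_752_000   # ~4.75M/day
```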

---

All that said, if you want to intentionally run larger GBUdb data sets
on your systems there is no harm in that. Your system will be more
aware of habitual bot IPs etc at the expense of memory. Since all
GBUdb nodes receive reflections on IP encounters within one minute, it
is likely that the benefit would be the ability to reject the first
message from a bad IP more frequently... Subsequent messages from bad
IPs would likely be rejected by all GBUdb nodes based on reflected
data.

It is likely that increasing the amount of RAM you assign to your
GBUdb nodes will have diminishing returns past the defaults currently
set... but it might be fun to try it and see :-)

---

If you are looking for better capture rates you may be able to achieve
those more readily by adjusting your GBUdb envelopes. The default
envelopes are set to avoid false positives on large filtering systems
with a diverse client base.

It is likely that more restricted systems could afford to use more
aggressive envelopes without creating false positives because their
traffic would be more specific to their systems.

In a hypothetical case: If your system generally never receives
legitimate messages from Russian or Chinese ISPs, then it is likely
that your system would begin to learn very negative statistics for IPs
belonging to those ISPs. A slight adjustment to your black-range GBUdb
envelope might be just enough to capture those IPs without creating
false positives for other ISPs where you do receive legitimate
messages.
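The adjustment described above amounts to moving the edge of a reject region in probability/confidence space. A minimal sketch of the idea -- the real SNF envelope geometry and parameter names differ, so everything here is an assumption for illustration only:

```python
# Hypothetical sketch of a "black" GBUdb-style envelope test. The names
# p_edge/c_edge and the rectangular region are assumptions, not the
# actual SNF envelope model.
def in_black_envelope(probability, confidence, p_edge=0.6, c_edge=0.3):
    """Flag an IP as bad when its bad-probability and the confidence in
    that statistic both clear the envelope's edges. Lowering an edge
    makes the envelope more aggressive, at the risk of false positives
    on systems with diverse legitimate traffic."""
    return probability >= p_edge and confidence >= c_edge

# Conservative default edges leave a borderline habitual source alone...
assert not in_black_envelope(0.55, 0.4)
# ...while a slightly lower probability edge captures it.
assert in_black_envelope(0.55, 0.4, p_edge=0.5)
```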

In any case, since the default ranges are extremely conservative and
tuned for large scale filtering systems it is worth experimenting with
them to boost your capture rates on nodes that have a more restricted
client base.

If you have a larger system and you use a clustering deployment
methodology then you might still take advantage of these statistics by
grouping similar clients on the same node(s) based on where they get
their messages. Even if you don't adjust your envelopes this
clustering will have the effect of "increasing the signal to noise
ratio" for GBUdb as it learns which IPs to trust and which ones to
suspect.

Hope this helps,

_M

-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.

