Folks: As you're aware, InterMapper sends ping and SNMP packets, and uses the fraction of returned packets to compute packet loss statistics. We're looking for some advice about the algorithm:
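For concreteness, here's a minimal sketch (in Python; all names and structure are my own, not InterMapper's actual code) of the loss calculations under discussion: the cumulative sent/received counters, the variant that freezes the counters while a device is down, and a last-100-packets short-term window.

```python
from collections import deque

DOWN_THRESHOLD = 3   # assumed default: consecutive losses before "down"
WINDOW = 100         # short-term history size

class LossTracker:
    """Sketch of both long-term behaviors plus the short-term window."""

    def __init__(self):
        self.sent = self.received = 0        # old-style (4.4) counters
        self.sent45 = self.received45 = 0    # new-style (4.5) counters
        self.consecutive_losses = 0
        self.window = deque(maxlen=WINDOW)   # history of last 100 probes

    def record(self, got_reply: bool):
        # Old behavior: every probe counts, so an outage inflates loss.
        self.sent += 1
        self.received += got_reply

        # Short-term statistic: remember only the last WINDOW probes.
        self.window.append(got_reply)

        # New behavior: after DOWN_THRESHOLD consecutive losses, stop
        # updating the counters until a response arrives.
        if got_reply:
            self.consecutive_losses = 0
            self.sent45 += 1
            self.received45 += 1
        else:
            self.consecutive_losses += 1
            if self.consecutive_losses <= DOWN_THRESHOLD:
                self.sent45 += 1

    def loss_old(self):
        return 1 - self.received / self.sent if self.sent else 0.0

    def loss_new(self):
        return 1 - self.received45 / self.sent45 if self.sent45 else 0.0

    def short_term_loss(self):
        return (self.window.count(False) / len(self.window)
                if self.window else 0.0)
```

With 10 replies, then a 10-probe outage, then 5 replies, the old statistic climbs to 40% while the new one reports about 17% (3 counted losses out of 18 counted probes), illustrating how the 4.5 scheme keeps an outage from swamping the pre-failure loss rate.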
1) InterMapper 4.5 introduces a Short-term Packet Loss statistic. InterMapper remembers the history of the last 100 packets, and the fraction of lost packets is displayed in the device's Status Window.

2) Version 4.5 also implements a new way to compute packet loss. Here's a description of the current (4.4) method, followed by the new (4.5) one.

Earlier versions of InterMapper counted the number of pings and SNMP queries sent and the number of responses received, then did the simple division on these values to get the percentage loss. InterMapper 4.5 does essentially the same thing, except that it does not update the percent-loss statistics while the device is down. That is, after a sufficient number of consecutive lost packets (the default is 3), the packet loss statistics are not updated until InterMapper hears a response from the device.

The argument for the old behavior is that it's simple to explain. The difficulty with it is that an outage causes the reported error rate to creep upward as dropped packets accumulate. The 4.5 behavior attempts to preserve the packet loss statistics from before the outage, so that a network administrator can determine what the packet loss was before the failure.

The questions for the InterMapper-Talk list are:

- Which behavior is more useful?
- Are there circumstances where ignoring lost packets during outages would give a "wrong" result?
- Should that behavior apply to both short-term and long-term packet loss?

Many thanks!

Rich Brown
[EMAIL PROTECTED]
Dartware, LLC                http://www.dartware.com
10 Buck Road, PO Box 130     Telephone: 603-643-9600
Hanover, NH 03755-0130 USA   Fax: 603-643-2289

____________________________________________________________________
List archives: http://www.mail-archive.com/intermapper-talk%40list.dartware.com/
To unsubscribe: send email to: [EMAIL PROTECTED]
