On 12/16/2012 10:44 AM, Rogier Wolff wrote:

Hi,

The variance, which is used to calculate the stdev, is stored in a 64-bit
integer.

However, what we store there are the squares of the differences from the
average. So if you (sometimes) have a 70-second ping time, the square of 70000
milliseconds becomes 4900 million! Quite a lot, but unlikely to overflow a
64-bit value.... However, the calculation is done in microseconds. Thus your
70 seconds is 70 million microseconds, and squaring that gives 4900 trillion
(4.9 * 10^15) added to the running total every second or so (as long as the
average remains around zero). That can overflow a 64-bit variable in
human-observable time.
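
To put rough numbers on that, here is a back-of-the-envelope sketch in C
(the names are mine, not the ones in the actual source; it only assumes a
signed 64-bit running sum of squared differences from the average):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* One 70-second sample while the running average stays near zero: */
        int64_t diff_usec = 70LL * 1000 * 1000;    /* 7.0e7 microseconds      */
        int64_t sq_usec   = diff_usec * diff_usec; /* 4.9e15 added per sample */

        /* Samples that fit in a signed 64-bit accumulator before it wraps;
         * at one such sample per second that is roughly half an hour.       */
        printf("~%lld samples to overflow\n",
               (long long)(INT64_MAX / sq_usec));  /* ~1882 */
        return 0;
    }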

The case at hand was only about 60000 milliseconds, but yes, that would explain
the problem.

The fact that I've seen 70000-millisecond "Worst" times before without hitting
this problem would then be explained by those sessions not lasting as long;
IIRC they ran for about two weeks at most, while this one is over six.

I've modified the code to do the calculations in milliseconds from now on.
This should buy us a factor of a million in margin. :-)

Not a 100% fix in theory, but it should hide the problem for pretty much any
case that's actually reasonable to support.
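
For comparison, repeating the same back-of-the-envelope check with
milliseconds (again just my own sketch, not the actual code) suggests the
accumulator now lasts on the order of decades rather than minutes:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t sq_msec = 70000LL * 70000LL;   /* 4.9e9 added per sample */
        int64_t samples = INT64_MAX / sq_msec; /* ~1.9e9 samples         */
        printf("~%lld samples, ~%lld years at one probe per second\n",
               (long long)samples,
               (long long)(samples / (365LL * 24 * 3600)));  /* ~59 years */
        return 0;
    }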

Sounds good to me; thanks for the prompt response!

