Richard Price wrote:
> Thanks David,
> 
> Yes, it makes sense, but an algorithm that doubles and halves the
> interval may never converge on the actual timeout value.
> 
> You could slowly increase the interval and then rapidly decrease it upon
> a timeout, as in the TCP congestion control algorithm.

Generally it is faster and more accurate to find a scaling or
tipping point by stepping in decreasing increments of 10, 5, 3, and 1.
In a case like this, one would step up from the starting point Y in
increments of 10X (X being the unit of measure) until failure, then
back off from the failure point in steps of 5X until success. Next,
ramp up from that success point in steps of 3X until failure, and
finally back off in steps of 1X until the percentage of successful
operations is what you need; in this case the result is the optimal
timeout/keepalive value.

If one were measuring milliseconds, one might start at 10 ms and
step up in 10 ms increments, then back off in 5 ms increments, and
so on.
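
Here is a rough sketch of that search in Python; probe(), the unit,
the trial count, and the required success rate are placeholder
assumptions rather than a real API, with probe(t) standing for
whatever measurement returns True when a timeout/keepalive of t works:

    def step_search(probe, start, unit, trials=5, required=0.8):
        # Coarse-to-fine search for the largest timeout/keepalive
        # value that still works.
        t = start
        while probe(t):              # ramp up in 10X steps until failure
            t += 10 * unit
        while not probe(t):          # back off in 5X steps until success
            t -= 5 * unit
        while probe(t):              # ramp up in 3X steps until failure
            t += 3 * unit
        # back off in 1X steps until enough repeated probes succeed
        while sum(probe(t) for _ in range(trials)) / trials < required:
            t -= unit
        return t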

If the failure point were 500 milliseconds and the success point
300 milliseconds, then one could reset the starting point for the
next run to 200 milliseconds, avoiding the long ramp-up through
values that are already known to work.
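
Continuing the sketch above, the next run could be warm-started just
below the previously observed success point (the numbers and the
100 ms margin are only illustrative, and probe() is the same
placeholder as before):

    last_success = 300                 # last run: worked at 300 ms
    next_start = last_success - 100    # start the next run at 200 ms
    value = step_search(probe, start=next_start, unit=1)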

One would likely find that, over a number of test runs, the
timeout/keepalive value varies within a range, and that with more
runs this range becomes better defined, until a safe range to use
emerges.
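
For example, one could collect the result of each run and take a
conservative value from the low end of the observed range (the run
count and the 10% margin below are arbitrary choices):

    results = sorted(step_search(probe, start=200, unit=1)
                     for _ in range(20))
    observed_range = (results[0], results[-1])   # spread across runs
    safe_value = int(results[0] * 0.9)           # stay below the low end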

If one were doing this dynamically in the background, one could
plot the changing safe point over time and study it, perhaps to
find what is affecting it and whether it shifts.
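
A minimal way to do that is to log a timestamped safe point whenever
the background search completes and plot the log later (the file
name and CSV format here are just an example):

    import csv, time

    def record_safe_point(path, safe_value):
        # Append a timestamped sample so drift in the safe point can
        # be plotted and correlated with network changes later.
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([int(time.time()), safe_value])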

Allen

