On Jan 29, 2008, at 1:40 PM, Matthew Toseland wrote:

> Should we use per-peer or overall ping times when considering whether
> to accept a request? Just because the code to use per-peer ping times
> was reverted doesn't mean it's wrong.
>
> Per-peer:
> - Don't accept requests from slow nodes, because they may not
> recognise our Accepted within the timeout?

What is the main reason for not accepting requests from slow nodes?
Because beyond a threshold they are most likely severely overloaded, or
have such network latency that Freenet's hard-coded timeouts would not
work (if only because of that measurable latency). More importantly,
though, we should not route requests *to* them, and at present this is
done only after a timeout (accepted/sendSync/fatal/ etc.).
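In code terms, what I am suggesting is roughly the following. This is a
sketch only: PeerNode, averagePingMillis() and MAX_PEER_PING_MS are
illustrative names, not the identifiers actually in the tree.

    // Rough sketch only: PeerNode, averagePingMillis() and
    // MAX_PEER_PING_MS are illustrative names, not the real ones.
    interface PeerNode {
        double averagePingMillis(); // smoothed round-trip time to this peer
    }

    class PingGate {
        // Beyond this threshold, assume the peer is overloaded or so
        // distant that our hard-coded timeouts cannot be met.
        static final double MAX_PEER_PING_MS = 1500.0;

        // Check the peer's own ping *before* routing to it, rather than
        // discovering the problem via an accepted/sendSync/fatal timeout.
        static boolean shouldRouteTo(PeerNode peer) {
            return peer.averagePingMillis() < MAX_PEER_PING_MS;
        }

        // The same per-peer figure can gate whether we accept from it.
        static boolean shouldAcceptFrom(PeerNode peer) {
            return peer.averagePingMillis() < MAX_PEER_PING_MS;
        }
    }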
> - More accurate for a specific node.
>
> Overall:
> - Good indication of overall network and CPU load. For example, it
> tends to spike during startup while we are deserialising our inserts
> and reloading our requests from the datastore.
> - More accurate overall.

These are two sides of the same coin. For an overloaded node (with all
of its ping times quite high), the effect is expected to be about the
same; the slow node is expected to reject requests in any event.

Overall averaging suffers from:
- odd calculation variances with backed-off peers (as backed-off peers
  are not counted, for some reason)
- disproportionate statistics when running with few peers (or when the
  'self' ping time is not counted)
- (as per the above) possibly cutting off all traffic

Per-node averaging suffers from:
- unsmoothed/jittery rejection rates (at least, as presented; see the
  sketch below)
- unfairly rejecting local traffic
- possibly cutting off all traffic to/from a particular node
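To make that contrast concrete, here is a minimal sketch of the two
calculations as I understand them; the class and field names are mine,
not the node's, but the failure modes match the lists above.

    import java.util.List;

    // Minimal sketch of the two averaging strategies; names are mine.
    class PingStats {
        static class Peer {
            double lastPingMillis;     // raw per-sample value: jittery
            double smoothedPingMillis; // exponentially smoothed average
            boolean backedOff;
        }

        // Overall average: backed-off peers are skipped and the 'self'
        // ping is never counted, so with few peers one outlier dominates
        // the statistic, and a high result can cut off all traffic.
        static double overallAveragePing(List<Peer> peers) {
            double sum = 0.0;
            int counted = 0;
            for (Peer p : peers) {
                if (p.backedOff)
                    continue; // source of the odd calculation variances
                sum += p.smoothedPingMillis;
                counted++;
            }
            return counted == 0 ? 0.0 : sum / counted;
        }

        // Per-peer rejection: accurate for that node, but keyed off the
        // raw sample the rejection rate jitters request to request.
        static boolean rejectFrom(Peer p, double thresholdMillis) {
            return p.lastPingMillis > thresholdMillis;
        }

        // Exponential smoothing would steady the per-peer rate:
        static void recordPing(Peer p, double sampleMillis) {
            final double alpha = 0.1; // smoothing factor, illustrative
            p.lastPingMillis = sampleMillis;
            p.smoothedPingMillis = alpha * sampleMillis
                    + (1.0 - alpha) * p.smoothedPingMillis;
        }
    }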
--
Robert Hailey