On Tuesday 29 January 2008 21:24, Robert Hailey wrote:
> 
> On Jan 29, 2008, at 1:40 PM, Matthew Toseland wrote:
> 
> > Should we use per-peer or overall ping times when considering whether
> > to accept a request? Just because the code to use per-peer ping times
> > was reverted doesn't mean it's wrong.
> >
> > Per-peer:
> > - Don't accept requests from slow nodes, because they may not
> > recognise our Accepted within the timeout? What is the main reason for
> > not accepting requests from slow nodes?
> 
> Because beyond a threshold they are most likely severely overloaded, or
> have such network latency that the hard-coded timeouts for freenet would
> not work (if only because of this measurable latency).

Yes, but we are handling the request, so if we complete it, we send it back
to them ... it might time out on their side due to slow transfers or
something?

> More importantly though, we should not route requests *to* them, and
> this is presently done only after a timeout
> (accepted/sendSync/fatal/etc.).

You think we should consider the ping time directly? This might make sense...
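
Roughly what I have in mind, as a sketch only (the "Peer" interface and its
methods are stand-ins for whatever per-peer state we actually keep, not the
real PeerNode API):

    // Rough sketch, not a patch: skip peers whose measured latency is
    // already over the limit, instead of waiting for a timeout/backoff
    // to tell us they are slow.
    interface Peer {
        boolean isConnected();
        double averagePingTime();   // smoothed per-peer round trip, in ms
        double distanceToTarget();  // routing distance for the current key
    }

    final class PingAwareRouting {
        static Peer routeTo(Peer[] candidates, double maxPingMillis) {
            Peer best = null;
            for (Peer p : candidates) {
                if (!p.isConnected()) continue;
                if (p.averagePingTime() > maxPingMillis) continue;
                if (best == null || p.distanceToTarget() < best.distanceToTarget())
                    best = p;
            }
            return best; // null -> treat as no usable route, as now
        }
    }
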
> 
> > - More accurate for a specific node.
> >
> > Overall:
> > - Good indication of overall network and CPU load. For example, it
> > tends to spike during startup while we are deserialising our inserts
> > and reloading our requests from the datastore.
> > - More accurate overall.
> 
> These are two sides of the same coin. For an overloaded node (with all
> the ping times quite high), the effect is expected to be about the same;
> the slow node is expected to reject requests in any event.

Long term, the expectation is that the slow node recognises it is slow and
only asks for a few requests. But then it wouldn't be so overloaded, so
this mechanism would rarely come into play.
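
Something like this, purely as an illustration (the numbers and names are
made up, not the actual token/window code):

    // Illustrative only: a slow node scales back how many requests it
    // originates, based on its own overall ping estimate.
    final class SelfThrottle {
        static int maxLocalRequests(double overallPingMillis, int normalLimit) {
            if (overallPingMillis <= 500) return normalLimit;
            // Shrink the limit exponentially as measured latency grows.
            int steps = (int) (overallPingMillis / 500);
            return Math.max(1, normalLimit >> steps);
        }
    }
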
> 
> Overall averaging suffers from:
> - odd calculation variances w/ backed off peers (as backed off peers  
> are not counted for some reason)

Because we already know they are overloaded?
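
For reference, roughly how I understand the overall figure to be built today
(illustrative only, not the actual code; the interface is a stand-in):

    // Backed-off peers are simply skipped when averaging (the exclusion
    // being questioned above).
    interface BackoffAwarePeer {
        boolean isConnected();
        boolean isRoutingBackedOff();
        double averagePingTime();
    }

    final class OverallPing {
        static double averagePing(BackoffAwarePeer[] peers) {
            double sum = 0;
            int n = 0;
            for (BackoffAwarePeer p : peers) {
                if (!p.isConnected() || p.isRoutingBackedOff()) continue;
                sum += p.averagePingTime();
                n++;
            }
            return n == 0 ? 0 : sum / n;
        }
    }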

> - disproportionate statistics when running w/ few peers

True.

> (or not counting 'self' ping time)

Huh?

> - (as per above) possible cut off of all traffic

Not due to a single node - the median would have to be high.
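
i.e. something along these lines (sketch only):

    import java.util.Arrays;

    // With a median over the per-peer pings, one slow peer cannot drag
    // the overall figure up by itself; roughly half the peers would have
    // to be slow before requests start being rejected.
    final class MedianPing {
        static double median(double[] peerPings) {
            if (peerPings.length == 0) return 0;
            double[] sorted = peerPings.clone();
            Arrays.sort(sorted);
            int mid = sorted.length / 2;
            return (sorted.length % 2 == 1)
                    ? sorted[mid]
                    : (sorted[mid - 1] + sorted[mid]) / 2;
        }

        static boolean rejectOnOverallPing(double[] pings, double maxPingMillis) {
            return median(pings) > maxPingMillis;
        }
    }
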
> 
> Per-node suffers from:
> - Unsmoothed/jittery rejection rates (at least, as presented)

Possibly yes.
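
If we did use per-peer ping, the jitter could probably be handled by
smoothing the samples first, e.g. an exponential moving average (sketch
only; one of the existing running-average classes could presumably be
reused instead):

    // A single slow round trip then nudges the estimate rather than
    // flipping the accept/reject decision outright.
    final class SmoothedPing {
        private double avg = -1;
        private final double alpha; // e.g. 0.05 for heavy smoothing

        SmoothedPing(double alpha) { this.alpha = alpha; }

        void report(double pingMillis) {
            avg = (avg < 0) ? pingMillis : alpha * pingMillis + (1 - alpha) * avg;
        }

        double current() { return avg; } // -1 until the first sample
    }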

> - Unfairly rejecting local traffic

Yes: it means different accept/reject logic for local requests, which is
bad for security reasons too. Originally this mechanism was simply a proxy
for directly measuring CPU load (because we can't do that).

> - possible cut off of all traffic to/from a particular node
- different accept/reject logic for each peer. Okay, we already have this
to a degree because of the send queue vs send rate logic, but this would
take it further. IMHO this is bad if it can be avoided without too much
impact on load management.