On Jan 10, 2008, at 5:20 PM, Matthew Toseland wrote:

> I'm also not entirely against what you've done here, but IMHO the code up to
> this commit is way too latency sensitive. If you just used ACCEPTED_TIMEOUT
> for example it would be a perfectly valid optimisation.

I think that was my major misunderstanding. While it is certainly true
that the faster requests are processed, the "better" the network at
large, I was operating under the assumption that if the
most-recently-routed-to node did not process the request within
FETCH_TIMEOUT, the world was over and nothing would ever work.

Concerning the first request made to a node, both sendSync and  
sendAsync seem interestingly wrong for this use.

* Surely if we use sendAsync, once the average send-queue delay reaches
ACCEPTED_TIMEOUT/2, the network is disastrously flooded, as the nodes
all send every request to all their nodes while waiting on none of them.

* Intuitively, sendSync *seems* to be better. But again, once the
average send-queue delay reaches ACCEPTED_TIMEOUT, the accepted packet
BACK to the originating node will be lost (and it will continue), and
the requesting node will go about its business spreading the request.

That is how I came up with the conditionalSend, but at best it only  
attacks half the problem (exactly as you said).
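
For concreteness, this is roughly the shape of check I mean (a sketch
only; ConditionalSender, PeerSendQueue, estimatedDrainTimeMillis and the
ACCEPTED_TIMEOUT value here are made-up names and numbers, not the
actual classes): refuse to queue a request behind a backlog that would
already eat most of the accept window, and route elsewhere instead.

// Hypothetical sketch, not the real API: illustrates the "conditional
// send" idea of not queuing a request behind a backlog that would lose
// the race against ACCEPTED_TIMEOUT.
public final class ConditionalSender {

    static final long ACCEPTED_TIMEOUT_MS = 10_000; // assumed value, for illustration only

    /**
     * Try to send a request to the chosen peer, but only if the peer's
     * send queue is expected to drain before the accept would time out.
     *
     * @return true if the message was queued, false if the caller
     *         should route to a different peer instead.
     */
    public boolean conditionalSend(PeerSendQueue peer, byte[] requestMessage) {
        long backlogMillis = peer.estimatedDrainTimeMillis();
        if (backlogMillis > ACCEPTED_TIMEOUT_MS / 2) {
            // Queuing here would almost certainly lose the race against
            // ACCEPTED_TIMEOUT, so don't bother; reroute instead.
            return false;
        }
        peer.sendAsync(requestMessage);
        return true;
    }

    /** Minimal stand-in for a per-peer outgoing queue. */
    public interface PeerSendQueue {
        long estimatedDrainTimeMillis(); // queued bytes / estimated bandwidth
        void sendAsync(byte[] message);
    }
}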

> I'd be happy to prioritise accepteds over data transfer in order to get an
> accepted more quickly; what priority e.g. requests should take is another
> question. When calculating such things we have to bear in mind that 95% of
> connections are asymmetrical, so the send queue and the receive queue may not
> be closely correlated.


Prioritizing accepted messages over bulk data may help this case, but
the more general solution would be to give data a lower priority than
control packets, no? Find the data fast (Request/DF/DNF/RNF/...),
transfer it slow (packetTransmit/???). Then this issue would only come
back up if the time to transmit the control messages in the queue was
more than ACCEPTED_TIMEOUT/etc. I don't see any issue with the requests
being at the same (higher) priority, as even with a hugely backlogged
send queue of data the standard reject mechanism would still be
effective (send queue length/threadLimit); we would just know about it
much sooner.

In fact... I *REALLY* like that solution.
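
To sketch what I mean by "find the data fast, transfer it slow" (again,
invented names; PrioritizedSendQueue is just an illustration, not a real
class): control messages always drain ahead of bulk transfer packets, so
a deep data backlog can no longer delay them past ACCEPTED_TIMEOUT,
while the existing reject mechanism can still look at the data backlog.

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the "control before data" idea, not the real
// message queue: control messages (requests, DF/DNF/RNF, accepteds, ...)
// always drain ahead of bulk transfer packets.
public final class PrioritizedSendQueue {

    private final Deque<byte[]> controlQueue = new ArrayDeque<>();
    private final Deque<byte[]> dataQueue = new ArrayDeque<>();

    public synchronized void enqueueControl(byte[] message) {
        controlQueue.addLast(message);
    }

    public synchronized void enqueueData(byte[] packet) {
        dataQueue.addLast(packet);
    }

    /** Called by the packet sender: control traffic is always sent first. */
    public synchronized byte[] nextToSend() {
        if (!controlQueue.isEmpty()) return controlQueue.pollFirst();
        return dataQueue.pollFirst(); // null if both queues are empty
    }

    /** Rough size-based signal the usual reject mechanism could still use. */
    public synchronized int queuedDataPackets() {
        return dataQueue.size();
    }
}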

--
Robert Hailey
