Just because our queue of data to send to a node is full doesn't mean its 
queue of data to send to us is full. Queue-based load management would have 
to take the latter into account - in fact, it would have to take into account 
not only the data currently queued but also the data that is likely to be 
queued soon. The liability limiting code works on this principle. Various 
proposals for load management work similarly, by passing back information on 
how many requests can be made in the near future - but these will not be 
deployed until they have been simulated.
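Roughly the idea, as a minimal sketch (the class and the "window" message 
field below are hypothetical, not anything we actually have): the peer 
periodically tells us how many requests it expects to accept in the near 
future, and we reserve a slot from that window before routing to it, rather 
than looking only at our own outbound queue.

import java.util.concurrent.atomic.AtomicInteger;

final class PeerLoadWindow {
    // Requests the peer says it can still accept soon; updated whenever it
    // advertises new capacity (hypothetical message field).
    private final AtomicInteger remaining = new AtomicInteger(0);

    void onWindowUpdate(int requestsAcceptableSoon) {
        remaining.set(requestsAcceptableSoon);
    }

    // Try to reserve one slot before routing a request to this peer.
    boolean tryReserve() {
        while (true) {
            int r = remaining.get();
            if (r <= 0) return false;              // peer is (about to be) overloaded
            if (remaining.compareAndSet(r, r - 1)) return true;
        }
    }
}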

So please show me that this has no effect on routing, or revert it.

I'd be happy to prioritise Accepteds over data transfer in order to get an 
Accepted back more quickly; what priority e.g. requests should take is 
another question. When calculating such things we have to bear in mind that 
95% of connections are asymmetrical, so the send queue and the receive queue 
may not be closely correlated.
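For what it's worth, the kind of ordering I mean would look something like 
this (the enum and classes are purely illustrative, not our actual message 
queue): small control messages such as Accepted jump ahead of bulk transfer 
data, so they are not stuck behind megabytes of payload on a slow upstream.

import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

enum MsgPriority { CONTROL, REQUEST, BULK_DATA }   // lower ordinal is sent first

record OutgoingMessage(MsgPriority priority, byte[] payload) {}

final class SendQueue {
    private final PriorityBlockingQueue<OutgoingMessage> queue =
        new PriorityBlockingQueue<>(64, Comparator.comparing(OutgoingMessage::priority));

    void enqueue(OutgoingMessage m) { queue.put(m); }

    OutgoingMessage nextToSend() throws InterruptedException { return queue.take(); }
}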

I'm also not entirely against what you've done here, but IMHO the code up to 
this commit is way too latency-sensitive. If you just used ACCEPTED_TIMEOUT, 
for example, it would be a perfectly valid optimisation.
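To make that concrete, roughly what I have in mind (Peer and Message here are 
stand-ins, and the timeout value is made up): queue the request without 
blocking on the link, and let the single ACCEPTED_TIMEOUT bound the whole 
exchange, instead of also racing a short SEND_TIMEOUT against the peer's send 
queue.

interface Message { boolean isAccepted(); boolean isRejected(); }

interface Peer {
    void sendAsync(Message m);                     // queue without blocking
    Message pollReply(long timeoutMs) throws InterruptedException; // null on timeout
}

final class AcceptedWait {
    static final long ACCEPTED_TIMEOUT_MS = 10_000; // assumed value, for illustration

    static boolean sendAndWaitForAccepted(Peer next, Message req) throws InterruptedException {
        next.sendAsync(req);                       // don't block on the link itself
        long deadline = System.currentTimeMillis() + ACCEPTED_TIMEOUT_MS;
        long left;
        while ((left = deadline - System.currentTimeMillis()) > 0) {
            Message reply = next.pollReply(left);
            if (reply == null) break;              // ran out of time
            if (reply.isAccepted()) return true;
            if (reply.isRejected()) return false;
        }
        return false;                              // counts as an ACCEPTED_TIMEOUT
    }
}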

I apologise for being away recently; some local problems came up.

On Monday 07 January 2008 20:45, robert at freenetproject.org wrote:
> Author: robert
> Date: 2008-01-07 20:45:19 +0000 (Mon, 07 Jan 2008)
> New Revision: 16960
> 
> Modified:
>    trunk/freenet/src/freenet/node/RequestSender.java
> Log:
> comment general theory of conditionalSend versus SEND_TIMEOUT
> 
> 
> Modified: trunk/freenet/src/freenet/node/RequestSender.java
> ===================================================================
> --- trunk/freenet/src/freenet/node/RequestSender.java 2008-01-07 20:07:30 UTC (rev 16959)
> +++ trunk/freenet/src/freenet/node/RequestSender.java 2008-01-07 20:45:19 UTC (rev 16960)
> @@ -208,9 +208,27 @@
>              long timeSentRequest = System.currentTimeMillis();
>                       
>              try {
> -             //This is the first contact to this node
> -             //async is preferred, but makes ACCEPTED_TIMEOUT much more likely for long send queues.
> -                             //using conditionalSend this way might actually approximate Q-routing load balancing accross the network.
> +             //This is the first contact to this node, so it is more likely to time out
> +                             /*
> +                              * using sendSync could:
> +                              *   make ACCEPTED_TIMEOUT more accurate (as it is measured from the send-time),
> +                              *   use a lot of the time we have to fulfill this request (simply waiting on the send queue, or longer if the node just went down),
> +                              * using sendAsync could:
> +                              *   make ACCEPTED_TIMEOUT much more likely,
> +                              *   leave many hanging-requests/unclaimedFIFO items,
> +                              *   potentially make overloaded peers MORE overloaded (we make a request and promptly forget about them).
> +                              * using conditionalSend could:
> +                              *   make ACCEPTED_TIMEOUT as accurate as sendSync (as it too waits for transmission)
> +                              *   reduce general latency around peers which have slow network links
> +                              *   not needlessly overload nodes w/ forgotten requests (as conditionalSend will try to withdraw the request if it times out)
> +                              *!!!make us skip peers which would otherwise have the data (they are closer, but slower)
> +                              *
> +                              * To avoid the pitfall of conditionalSend (potentially skipping a good peer), we will come back to them when it is
> +                              * apparent that we cannot fill the request quickly. Using conditionalSend this way might actually approximate
> +                              * Q-routing (for load balancing/latency) across the network; if SEND_TIMEOUT is too high... this reduces to
> +                              * using sendSync w/ a good error catch, and if SEND_TIMEOUT is too low... this reduces to creating a cache-backbone
> +                              * of fast links in the network which will always be queried before general nodes in the network are.
> +                              */
>               if (!next.conditionalSend(req, this, sendTimeout)) {
>                                       if (usingBusyPeer)
>                                               continue;
> 
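To illustrate the withdraw-on-timeout behaviour that comment relies on, here 
is a stripped-down sketch under assumed types (this is only the shape of 
conditionalSend, not the real implementation, and the queue interface is 
invented): the message is queued, we wait up to the send timeout for it to 
actually be transmitted, and if it hasn't gone out we pull it back so the 
slow peer isn't left holding a request we've already given up on.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

interface OutboundQueue {
    // Enqueue a payload; the returned latch is counted down once it is transmitted.
    CountDownLatch enqueue(byte[] payload);
    // Remove a not-yet-transmitted payload; returns false if it already went out.
    boolean withdraw(byte[] payload);
}

final class ConditionalSender {
    // Returns true only if the payload was actually transmitted within sendTimeoutMs.
    static boolean conditionalSend(OutboundQueue queue, byte[] payload, long sendTimeoutMs)
            throws InterruptedException {
        CountDownLatch sent = queue.enqueue(payload);
        if (sent.await(sendTimeoutMs, TimeUnit.MILLISECONDS))
            return true;                           // went out in time; wait for Accepted as usual
        // Too slow: withdraw if still queued; if withdraw fails it went out after all.
        return !queue.withdraw(payload);
    }
}

If this returns false the caller can move on to the next peer and, as the 
comment suggests, come back to the slow one only when the faster candidates 
can't fill the request.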