We might want to wait until we have finished with the opennet connection limit 
changes, but IMHO this is a good idea too.

Basically, requests would have a flag, which is either bulk or realtime.
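
Something like this, roughly (the enum and its names are made up for 
illustration, not actual fred identifiers):

    // Illustrative only - not an actual fred class.
    public enum RequestClass {
        BULK,     // optimise for throughput: generous transfer deadline
        REALTIME  // optimise for latency: strict transfer deadline
    }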

Currently, we only allow new requests if our current requests - assuming they 
all succeed - can be completed within 90 seconds at the available bandwidth. It 
is very rare that they all succeed; in fact a lot of requests fail. But across 
a route spanning 10 hops, it is likely that at least one node is bogged down 
with lots of transfers. 

For bulk requests, we increase the transfer threshold to 120 seconds or maybe 
even 300 seconds. This will optimise throughput.
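
Roughly, the acceptance check could look like the sketch below. All names are 
made up, the real load limiter is more involved, and the 20-second realtime 
figure anticipates the next paragraph:

    public class LoadLimiter {
        private static final int BULK_DEADLINE_SECONDS = 120;     // or even 300
        private static final int REALTIME_DEADLINE_SECONDS = 20;  // see below

        // Accept a new request only if everything currently transferring,
        // plus the new transfer, could complete within the class's deadline
        // at the available bandwidth - assuming every request succeeds.
        public boolean canAccept(RequestClass clazz,
                                 long bytesInFlight,
                                 long newTransferBytes,
                                 long availableBytesPerSecond) {
            int deadline = (clazz == RequestClass.REALTIME)
                    ? REALTIME_DEADLINE_SECONDS
                    : BULK_DEADLINE_SECONDS;
            double secondsToDrain =
                    (double) (bytesInFlight + newTransferBytes)
                            / availableBytesPerSecond;
            return secondsToDrain <= deadline;
        }
    }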

For realtime requests, we reduce the transfer threshold to maybe 20 seconds, 
severely limiting the number of requests but ensuring they all complete fast. 
Any incoming realtime transfer that takes more than 20 seconds is turtled (at 
which point it becomes a bulk request). Data blocks for realtime requests take 
precedence over data blocks for bulk requests. We would need to ensure that 
the data for the bulk requests does eventually get transferred, i.e. that the 
realtime requests don't constantly starve the bulk requests. This would 
require a token bucket or something similar to limit the proportion of 
bandwidth used by realtime requests, and that proportion would need to be 
relative to the bandwidth actually available/used, not necessarily to the 
configured limit; see the sketch below.
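
For example, a token bucket refilled as a fraction of the bandwidth we are 
actually using might look like this - again only a sketch, with made-up names:

    public class RealtimeShareLimiter {
        private final double realtimeFraction; // e.g. 0.5: at most half of used bandwidth
        private final double maxTokens;        // cap, to bound burstiness
        private double tokens;                 // bytes of realtime credit available

        public RealtimeShareLimiter(double realtimeFraction, double maxTokens) {
            this.realtimeFraction = realtimeFraction;
            this.maxTokens = maxTokens;
        }

        // Call periodically with the bytes actually sent since the last call,
        // so the refill tracks used bandwidth rather than the configured limit.
        public synchronized void onBytesSent(long bytesSentSinceLastCall) {
            tokens = Math.min(maxTokens,
                    tokens + bytesSentSinceLastCall * realtimeFraction);
        }

        // A realtime data block may jump ahead of bulk blocks only while credit
        // remains; otherwise it queues behind them, so bulk is never starved.
        public synchronized boolean trySendRealtimeBlock(int blockSizeBytes) {
            if (tokens >= blockSizeBytes) {
                tokens -= blockSizeBytes;
                return true;
            }
            return false;
        }
    }

Tying the refill to bytes actually sent, rather than to the configured limit, 
means realtime traffic gets throttled proportionally even when the node is 
using only a fraction of its limit.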

Fproxy would use realtime requests. Persistent downloads would use bulk 
requests. Big files being fetched in fproxy after asking the user might use 
bulk requests.

All this assumes the probability of a CHK request succeeding (in the region of 
10% at the moment) doesn't dramatically rise with Bloom filter sharing. Maybe 
we should put it off until after that?
