Since I'm about to write this code, I thought I'd describe what the old
whiterose core called tasking, and get people's opinions on whether it's a
good thing.

Basically, if multiple requests came in for the same key, only 1 would be
forwarded. The reply to the forwarded message would be duplicated to all the
nodes that requested the key. An incoming request could also 'join' a tunnel
that was currently active.

* Keeps bandwidth and node load down
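
To make that concrete, here's a minimal sketch of the bookkeeping I mean,
in Java. The names (TaskTable, PeerNode) are made up, String stands in for
the key type, and send() is a placeholder for however replies actually get
written out:

  import java.util.*;

  // One active task per key; later requests for the same key join it.
  interface PeerNode { void send(Object dataReply); }

  class TaskTable {
      private final Map<String, List<PeerNode>> active = new HashMap<>();

      // Returns true if the caller should forward one DataRequest
      // upstream, false if this request merely joined a tunnel that is
      // already in flight.
      synchronized boolean request(String key, PeerNode requester) {
          List<PeerNode> waiters = active.get(key);
          if (waiters == null) {
              waiters = new ArrayList<>();
              active.put(key, waiters);
              waiters.add(requester);
              return true;
          }
          waiters.add(requester);
          return false;
      }

      // When the reply comes back, duplicate it to every node that asked.
      synchronized void reply(String key, Object dataReply) {
          List<PeerNode> waiters = active.remove(key);
          if (waiters == null) return;
          for (PeerNode n : waiters)
              n.send(dataReply);
      }
  }

The point is just that there's one table entry per key, and the reply loop
is where the duplication happens.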

Issues with doing it:
* What to do when a request with a different HTL comes in? I'd suggest
flagging that the HTL of the request has increased and, if the request
fails, replying QueryRestarted and trying again with the same node (or,
with a certain probability, selecting the next best). See the first sketch
after this list.
* What to do if the Store doesn't want to cache it? The Store might not
want to cache for a number of reasons (data too big, etc.) - but you can
end up with one incoming stream and many client nodes. You could choose
one (at random, or the one with the biggest Depth, thus most likely to go
through a node that will cache it) and RequestFail all the others; the
second sketch after this list shows that. Or you could limit all the
streams to the speed of the slowest.
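
For the HTL point, roughly what I mean - the 0.25 probability and all the
names here are just illustration, not anything the old core actually did:

  import java.util.Random;

  class HtlHandling {
      enum Action { FAIL, RETRY_SAME_NODE, RETRY_NEXT_BEST }

      private boolean htlIncreased;  // a joiner arrived with a bigger HTL
      private int forwardedHtl;      // HTL the original request went out with
      private int maxSeenHtl;
      private static final double PICK_NEXT_BEST = 0.25;  // made up

      HtlHandling(int initialHtl) { forwardedHtl = maxSeenHtl = initialHtl; }

      // A later request for the same key joins with its own HTL.
      void onJoin(int incomingHtl) {
          maxSeenHtl = Math.max(maxSeenHtl, incomingHtl);
          if (incomingHtl > forwardedHtl) htlIncreased = true;
      }

      // Called when the forwarded request fails; the caller replies
      // QueryRestarted to all waiting nodes before retrying.
      Action onUpstreamFailure(Random rng) {
          if (!htlIncreased) return Action.FAIL;
          forwardedHtl = maxSeenHtl;   // retry with the larger HTL
          htlIncreased = false;
          return rng.nextDouble() < PICK_NEXT_BEST
                  ? Action.RETRY_NEXT_BEST
                  : Action.RETRY_SAME_NODE;
      }
  }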
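
And for the Store point, picking the deepest requester could be as dumb as
this (Waiter and its fields are made up for illustration):

  import java.util.*;

  class PickOne {
      static class Waiter {
          final String nodeAddress;
          final int depth;
          Waiter(String nodeAddress, int depth) {
              this.nodeAddress = nodeAddress;
              this.depth = depth;
          }
      }

      // Returns the one waiter to keep streaming to; the caller sends
      // RequestFailed to everyone else in the list.
      static Waiter choose(List<Waiter> waiters) {
          return Collections.max(waiters,
                  Comparator.comparingInt((Waiter w) -> w.depth));
      }
  }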

Issues with not doing it:
* If you have an active tunnel that is caching in the Store and another
request comes in, you can't start caching with the second request. If the
first request dies, do you leave a half-finished document in the store and
try to 'pick up' where it left off?

Compromise?
Maybe every request for the same data should be forwarded (to different
nodes?), but everyone jumps onto the first tunnel to be created. Or maybe
new requests can only join active tunnels that are being cached?
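
As a join rule that could look something like this (again, just a sketch of
one reading of it - the names are made up):

  class JoinRule {
      // Forward every request yourself until a tunnel exists; once one
      // does, later requests latch onto it - or, in the stricter variant,
      // only if the Store is actually caching that tunnel.
      static boolean mayJoin(boolean tunnelExists, boolean tunnelIsCaching,
                             boolean strictVariant) {
          if (!tunnelExists) return false;
          return !strictVariant || tunnelIsCaching;
      }
  }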

Ideas please

AGL

-- 
The difference between genius and stupidity is that genius has its limits.