Jim Duey <jim at weathercom.com> wrote:
> In the case where a single node starts multiple searches in parallel,
> the nodes most affected would be those where these parallel search
> requests meet.  In that case, you'd have two or more requests for the
> same file pending on the same node with different IDs.  If each request
> that this node forwarded were successful, it would have to receive the
> same document multiple times.  To prevent this, when a node receives a
> request, it checks that key against the keys of the requests it has
> pending.  If it finds a match, it does not forward that request but
> merely puts it into its pending list.  If the request it did forward for
> that key is successful, it fulfills all pending requests for that
> document.  If it wasn't, it can forward a request for that document to
> the second-best node once, using the maximum HopsToLive from the
> requests pending for that key.

I think this is a great idea from an efficiency point of view, quite
apart from the security aspect.  Essentially it means every node will be an
aggregating cache for pending requests as well as fulfilled ones.  After
all, why should a node send out a new request for a key it's already
waiting for?
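
For concreteness, here's a minimal sketch of how I picture the pending
table (Java, since that's what the node is written in; all class and
member names below are my own invention, not taken from the actual
source):

import java.util.*;

// Per-key coalescing of pending requests.  All names are hypothetical.
class RequestCoalescer {

    // A request we are holding on behalf of an upstream node.
    static final class PendingRequest {
        final String requesterId;  // whom to send the document back to
        final int hopsToLive;      // remaining HopsToLive on this request
        PendingRequest(String requesterId, int hopsToLive) {
            this.requesterId = requesterId;
            this.hopsToLive = hopsToLive;
        }
    }

    // All locally pending requests, grouped by the key they ask for.
    private final Map<String, List<PendingRequest>> pending = new HashMap<>();

    // Returns true if the caller should forward the request onward,
    // false if a search for the same key is already in flight and we
    // merely queued this request behind it.
    boolean receive(String key, PendingRequest req) {
        List<PendingRequest> waiters = pending.get(key);
        if (waiters != null) {
            waiters.add(req);          // coalesce onto the in-flight search
            return false;
        }
        waiters = new ArrayList<>();
        waiters.add(req);
        pending.put(key, waiters);
        return true;                   // first request for this key: forward
    }

    // On success, every waiter for the key is fulfilled with the same
    // document; the entry is cleared.
    List<PendingRequest> fulfill(String key) {
        List<PendingRequest> waiters = pending.remove(key);
        return waiters == null ? List.of() : waiters;
    }

    // On failure, the single retry toward the second-best node spends
    // the maximum HopsToLive among the requests still waiting on the key.
    int retryHopsToLive(String key) {
        int max = 0;
        for (PendingRequest r : pending.getOrDefault(key, List.of()))
            max = Math.max(max, r.hopsToLive);
        return max;
    }
}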

> Such parallel searches would cause a
> document to reside on many different nodes.

Actually it's not a parallel search.  It does the opposite: it combines
ongoing parallel searches into a single search.  There is no effect on the
number of nodes involved, because the multiple searches being aggregated
would all have gone down the same path anyway.
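
To make the aggregation concrete with the sketch above: the second
arrival for a key never produces a second outgoing search, it just
queues behind the first one.

    RequestCoalescer c = new RequestCoalescer();
    boolean first  = c.receive("some-key", new RequestCoalescer.PendingRequest("nodeA", 10));
    boolean second = c.receive("some-key", new RequestCoalescer.PendingRequest("nodeB", 15));
    // first == true  -> forward one search; second == false -> just queued.
    // If the forwarded search fails, the retry uses
    // c.retryHopsToLive("some-key"), which here is 15.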

theo

