On Tuesday 22 July 2003 06:05 pm, Toad wrote:
> The main argument put forward at the time was to prevent the network
> from dividing into islands, or to stitch it back together when they did
> form. However, randomly routing every request at the origin node is not the
> only way to deal with it - one reasonable possibility would be to have a
> (very low - way under 1/HTL) probability to fork the request at any
> node, sending the second request to a random route. This would not give
> away the privileged information of whether the request started on that
> node, so it seems safer. It might introduce a slight performance bias,
> which is why similar proposals weren't implemented in the past - but
> NGRouting introduces a massive performance bias, completely swamping the
> effects of occasionally forking requests.
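
For concreteness, here is a minimal Java sketch of the forking rule described
above. Every class and method name in it is invented for illustration; this is
not Fred's actual routing code:

    import java.util.List;
    import java.util.Random;

    // Hypothetical types standing in for the real node internals.
    interface Peer { void send(Request r); }
    interface Router { Peer selectBestPeer(byte[] key, List<Peer> peers); }

    class Request {
        final byte[] key;
        Request(byte[] key) { this.key = key; }
        Request copyWithNewId() { return new Request(key); }
    }

    class ForkingRouter {
        // Kept well below 1/HTL (HTL ~ 20 gives 1/HTL = 0.05), so a fork
        // is rare at every hop and reveals nothing about the origin node.
        private static final double FORK_PROBABILITY = 0.005;
        private final Random random = new Random();

        void route(Request request, List<Peer> peers, Router router) {
            // The primary copy follows the normal routing decision.
            Peer best = router.selectBestPeer(request.key, peers);
            best.send(request);

            // Occasionally fork a second copy down a random route. If the
            // random pick happens to be the best peer, just skip the fork.
            if (peers.size() > 1 && random.nextDouble() < FORK_PROBABILITY) {
                Peer randomPeer = peers.get(random.nextInt(peers.size()));
                if (randomPeer != best) {
                    randomPeer.send(request.copyWithNewId());
                }
            }
        }
    }

Since the same rule runs at every hop, a node that forks looks no different
from any other node on the path, which is the point.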

Another way to keep the network from dividing would be to try to get
connections to nodes that are not currently near you in the network. This
could be done without slowing routing. I came up with a ridiculously complex
way to do this a while back, when people were trying to make Freenet operate
more like BitTorrent:
http://hawk.freenetproject.org:8080/pipermail/devl/2003-June/006628.html
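
That post has the details. To illustrate just the basic goal - getting a node
reference from a part of the network far from your own neighbourhood - here is
a hypothetical sketch (not the scheme from the post) based on a short random
walk. All names are invented, and the loop stands in for what would really be
message passing between nodes:

    import java.util.List;
    import java.util.Random;

    class ReferenceHarvester {
        private static final int WALK_LENGTH = 10; // hops before harvesting
        private final Random random = new Random();

        // Hypothetical node interface for the sketch.
        interface Node {
            List<Node> peers();
            NodeRef reference();
        }
        interface NodeRef {}

        // Follow a uniformly random peer at each hop; the node where the
        // walk ends hands back its own reference, which is then unlikely
        // to be near the starting node in routing terms.
        NodeRef harvest(Node self) {
            Node current = self;
            for (int hop = 0; hop < WALK_LENGTH; hop++) {
                List<Node> peers = current.peers();
                if (peers.isEmpty()) break;
                current = peers.get(random.nextInt(peers.size()));
            }
            return current.reference();
        }
    }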

I realize that this is way overkill for this problem, and that there are much
more important things that need to be done in the near term. However, I think
it is worthwhile, not just because you can acquire node references from all
over the network, but because it would allow Freenet to route data based on
both the last node to provide it and the last node to request/proxy it,
without the two stepping on each other (see the sketch below). I have never
received any feedback on this idea. What does everyone think?
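
To make the "stepping on each other" point concrete, here is a hypothetical
sketch of a per-key table that keeps the two hints in separate fields, so
recording a requester never overwrites the provider hint (again, invented
names, not actual Fred code):

    import java.util.HashMap;
    import java.util.Map;

    class DualBiasTable {
        static final class Entry {
            String lastProvider;  // last peer that supplied the data
            String lastRequester; // last peer that requested/proxied it
        }

        private final Map<String, Entry> table = new HashMap<String, Entry>();

        void recordProvided(String key, String peer) {
            entry(key).lastProvider = peer;
        }

        void recordRequested(String key, String peer) {
            entry(key).lastRequester = peer;
        }

        // Prefer the provider hint, falling back to the requester hint;
        // either way, neither event has destroyed the other's information.
        String suggestRoute(String key) {
            Entry e = table.get(key);
            if (e == null) return null;
            return (e.lastProvider != null) ? e.lastProvider : e.lastRequester;
        }

        private Entry entry(String key) {
            Entry e = table.get(key);
            if (e == null) {
                e = new Entry();
                table.put(key, e);
            }
            return e;
        }
    }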