On Jan 26, 2011, at 1:15 PM, Matthew Toseland wrote:

> The following is relevant to Robert's thread. It's close to what he  
> was talking about. TheSeeker is the other big contributor.
>
> On a node which does a lot of downloads, the peers approach a flat  
> circle: They are more or less evenly distributed.
>
> On a node which just routes we get a very strong small world effect.

Wow, that makes sense. I think you have found the direct cause.

> PRO:
> - Routing is no longer disproportionately optimised for local  
> requests. Hopefully higher overall performance.
> - Prevents the trivial opennet path-folding-originator-sampling  
> attack.
>
> CON:
> - No direct theoretical backing at present. We need to talk to a  
> theoretician.
> - We should not deploy this while there are other big network things  
> happening.

Theoretical backing?

Facts...

(1) Path folding presently operates across local & remote requests

Axioms...

(1) local requests tend to be evenly distributed,
(2) remote requests tend to be highly specialized,
(3) some nodes have disproportionately many local requests (downloaders),
(4) some nodes have disproportionately many remote requests (browsers).

I would expect...

(5) downloaders to have a routable view of the network,
(6) browsers to have a routable view of their local network, though they  
*might* over-specialize,
(7) when browsers "start browsing again", the node might quickly latch  
onto the nearest downloader b/c it can satisfy the requests,
(8) when browsers go idle, any long connections or connections to  
downloaders will start dropping off.

The FOAF routing scheme probably makes behaviours #7 & #8 much more  
pronounced.

I recall the whole 'run a spider and freenet starts working' argument  
and now wonder if it was more routing than caching.

As an aside, my understanding is that opennet routing would  
theoretically work even if every node simply held onto its two  
nearest connections (left & right; neglecting load concerns entirely  
for the moment); this is the circular Kleinberg model which the  
simulators use.
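To make the aside concrete, here is a toy sketch (my own illustration, not Freenet or simulator code) of greedy routing on a ring where every node holds only its two nearest neighbours. Every request reaches its target, which is the sense in which the network stays "routable" with just left & right links, though it takes up to n/2 hops; the point of the Kleinberg model is that long shortcut links cut that down dramatically.

```python
def circ_dist(a, b, n):
    """Circular distance between positions a and b on an n-node ring."""
    d = abs(a - b)
    return min(d, n - d)

def ring_route(n, src, dst):
    """Greedy-route from src to dst when each node knows only its
    left and right neighbour; return the hop count."""
    hops = 0
    cur = src
    while cur != dst:
        left, right = (cur - 1) % n, (cur + 1) % n
        # Step to whichever neighbour is circularly closer to the target.
        cur = left if circ_dist(left, dst, n) < circ_dist(right, dst, n) else right
        hops += 1
    return hops

print(ring_route(100, 0, 50))  # worst case on a 100-node ring: 50 hops
print(ring_route(100, 0, 99))  # wraps the "short way" around: 1 hop
```

Routing always terminates because each hop strictly reduces the circular distance to the destination; what the nearest-neighbour-only ring lacks is speed, not reachability.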

In my earlier investigation (which I appreciate you bringing back up  
for its relevance), it seemed like my 'browser' nodes were becoming  
far too specialized; e.g. 5-7-fold redundant links to nearby nodes. It  
was this 'downloader'/'browser' role separation that I was referring  
to when I said the network might be behaving like a scale-free  
network (maybe the wrong terminology).

My original thought on improving this situation (if it does have such  
a negative effect) was to only accept path-folding requests for  
local requests (i.e. eliminate fact #1, the cause). This would surely  
cut down on the number of path folds accepted by browser nodes, but  
they would be 'healthier'... or at least their path folding would  
become the same as the downloaders', so the network would be more  
homogeneous (if nothing else). It would be interesting to hear  
theories and opinions on this change.
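For clarity, a minimal sketch of the acceptance-side rule I mean; the names here (FoldOffer, request_is_local, node_wants_peer) are mine for illustration, not Freenet's actual classes:

```python
class FoldOffer:
    """Hypothetical path-folding offer seen by a node on the request path."""
    def __init__(self, request_is_local, node_wants_peer):
        self.request_is_local = request_is_local  # did the request originate here?
        self.node_wants_peer = node_wants_peer    # existing "want a peer" decision

def accept_path_fold(offer):
    # Proposed change: remote requests no longer trigger folding at all,
    # so browser nodes only fold on their own (local) requests.
    if not offer.request_is_local:
        return False
    # Otherwise fall through to the existing acceptance logic.
    return offer.node_wants_peer

print(accept_path_fold(FoldOffer(True, True)))   # local request  -> True
print(accept_path_fold(FoldOffer(False, True)))  # remote request -> False
```

The effect is that a node's opennet peer set is shaped only by its own request distribution, which is the homogenizing property argued for above.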

> Should we only start path folding once HTL drops below the caching  
> threshold?


I don't totally understand the implications of that, but it sounds  
like you're looking for a solution on the origination side of path  
folding, and creating a new tunable. I would consider modifications on  
the acceptance side of path folding first, and nail down what it means  
(in code) to 'want' a new opennet peer enough to grab a path folding  
request.
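As one hypothetical reading of "wanting" a new opennet peer enough to grab a path fold, a node could accept when it is below its peer cap, or when the candidate is closer (in circle distance) than its current worst peer. Every name and threshold below is illustrative; this is not Freenet's actual acceptance code:

```python
def circle_distance(a, b):
    """Distance between two locations on the [0, 1) keyspace circle."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def wants_new_peer(my_location, peer_locations, candidate_location, max_peers=20):
    """Illustrative 'want' predicate: always take peers while below the cap;
    afterwards, only take a candidate that beats our worst current peer."""
    if len(peer_locations) < max_peers:
        return True
    worst = max(peer_locations, key=lambda p: circle_distance(my_location, p))
    return circle_distance(my_location, candidate_location) < \
        circle_distance(my_location, worst)

print(wants_new_peer(0.5, [0.1, 0.9], 0.4))  # below the cap -> True
```

Nailing down a predicate like this on the acceptance side would make it possible to reason about (and simulate) how browser and downloader nodes diverge, without touching the origination side at all.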

--
Robert Hailey
