On 2010/10/30, at 10:59 AM, Matthew Toseland wrote:

> On Friday 29 October 2010 17:24:30 Robert Hailey wrote:
>>
>> It needs a little more work anyway :)

I've tested the current head over the weekend and it now appears to be  
behaving as intended.

>>> We do not make significant changes to routing without detailed
>>> simulations. The Least Recently Used policy used for opennet has
>>> been extensively simulated and while it is not proven, there is also
>>> a very strong mathematical basis for it. There is every reason to
>>> think that it should perform well, in other words, and automatically
>>> establish a small world topology. Plus, it trades off performance
>>> against location in a way which is simple and avoids any need for
>>> extra layers of performance evaluation separate to optimising
>>> locations.

Is the simulator under "freenet/node/simulator"? I did not see a  
simulator for such opennet ideas.

Anyway... if you just look at the output of freenet/model/SmallWorldLinkModel,
you can see that it forms the ideal link pattern for a small-world network.
If the rest of the supporting code successfully prefers nodes in this pattern
(which is both its intent and what it is now observed to do experimentally),
there is every reason to believe that it will form a vastly superior
small-world network on the live network.
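For reference, the theoretical target here is Kleinberg's harmonic link
distribution: on a one-dimensional keyspace, choosing links with probability
proportional to 1/d makes greedy routing efficient. A minimal sketch of that
distribution (my own illustration, not the actual SmallWorldLinkModel code):

```java
// Illustrative sketch, NOT the real freenet/model/SmallWorldLinkModel:
// in Kleinberg's small-world model a node links to peers with probability
// proportional to 1/d(self, peer) on the circular [0,1) keyspace, which
// makes link distances uniform in log-space.
import java.util.Random;

public class SmallWorldSketch {

    // Circular distance on the [0,1) keyspace.
    static double distance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    // Sample one link distance d in [min, 0.5] with density ~ 1/d,
    // by inverse-transform sampling: d = min * (0.5/min)^u, u uniform.
    static double sampleLinkDistance(double min, Random rng) {
        double u = rng.nextDouble();
        return min * Math.pow(0.5 / min, u);
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        // Because log-distance is uniform, roughly half of the links drawn
        // over [1e-4, 0.5] land below 0.01 -- many short links, a few long.
        int shortLinks = 0, n = 100000;
        for (int i = 0; i < n; i++) {
            if (sampleLinkDistance(1e-4, rng) < 0.01) shortLinks++;
        }
        System.out.println("fraction of short links: " + (double) shortLinks / n);
    }
}
```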

As for LRU... my contribution also allows you to tune the network's clustering
coefficient! Can you even speculate what LRU does in this respect?

>> I think that the announcement algorithm accounts for 95% of peer  
>> selection.
>>
>> In my experience, nodes announce... get peers at the given  
>> location...
>> and then are forevermore content with the announce-gathered peers.  
>> LRU
>> would only have the effect that you state if we routinely dropped the
>> lowest peer (in such a way that they could not just reconnect).

Perhaps a picture will help illustrate what I generally see:


(link to picture instead)

How can you say this follows a small-world link pattern?

If you want a security reason to merge my fix, I'll give you one! I have
reason to believe that *many* nodes have peer patterns just like this (just
a clump). In that case, all an attacker has to do is get *two* opennet
connections to your node (one at +epsilon, and one at -epsilon), and he can
monitor 99% of all traffic coming from your node. What's more, because the
incoming requests are so specialized, he can be nearly 100% sure that the
traffic originated from your node.

On the other hand, if the node gets anywhere close to the target link  
pattern (blue dots), the most keyspace he could monitor with two  
connections would be about 33% (and it must be the keyspace far from  
the target, and he could *not* be sure it was coming from your node).
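To make the threat concrete, here is a toy greedy-routing simulation. All
peer locations, the node location of 0.5, and the epsilon value are made up
for illustration, and Freenet's real router is of course more complex than
"closest peer wins":

```java
// Toy model, NOT Freenet's actual router: assume greedy routing sends each
// request to the peer circularly closest to the key. With honest peers
// clumped around the node's own location (~0.5 here), two attacker
// connections just outside the clump capture almost every request; with
// honest peers spread around the keyspace they capture only a sliver.
public class ClumpAttackSketch {

    // Circular distance on the [0,1) keyspace.
    static double distance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    // Fraction of the keyspace (sampled on a grid) that greedy routing
    // would hand to one of the attacker peers.
    static double monitoredFraction(double[] honest, double[] attackers) {
        int n = 100000, captured = 0;
        for (int i = 0; i < n; i++) {
            double key = (double) i / n;
            double best = Double.MAX_VALUE;
            boolean toAttacker = false;
            for (double p : honest) {
                double d = distance(p, key);
                if (d < best) { best = d; toAttacker = false; }
            }
            for (double p : attackers) {
                double d = distance(p, key);
                if (d < best) { best = d; toAttacker = true; }
            }
            if (toAttacker) captured++;
        }
        return (double) captured / n;
    }

    public static void main(String[] args) {
        double[] attackers = { 0.498, 0.502 };               // node location +/- epsilon
        double[] clump  = { 0.499, 0.4995, 0.5005, 0.501 };  // "just a clump"
        double[] spread = { 0.505, 0.45, 0.55, 0.35, 0.65, 0.2, 0.8, 0.95 };
        System.out.println("clump:  " + monitoredFraction(clump, attackers));
        System.out.println("spread: " + monitoredFraction(spread, attackers));
    }
}
```

In this toy setup the clumped node hands well over 95% of its keyspace to
the two attacker connections, while the spread node hands over only a few
percent.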


(link to picture instead)

You will notice that my patch is not strict. There are still several
un-preferred opennet peers, and the peers for the "preferred slots" fall
some distance from the center (one plus-sign per slot; the cut-off line is
about half-way between the dots).
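For readers who have not seen the patch, the slot idea might be sketched
roughly like this. This is a hypothetical reconstruction: the slot count,
the halving spacing, and the log-distance fit below are my guesses for
illustration, not the patch's actual code:

```java
// Hypothetical reconstruction -- the patch's real slot logic may differ.
// Idea: define log-spaced ideal link distances ("slots"); each peer is
// credited to the slot whose ideal distance is nearest in log-space, and
// only the best-fitting peer per slot is preferred. The rest remain
// connected but un-preferred, so the scheme is not strict.
import java.util.Map;
import java.util.TreeMap;

public class SlotSketch {
    static final int SLOTS = 8;

    // Ideal distance for slot k: 0.25, 0.125, 0.0625, ...
    static double idealDistance(int k) {
        return 0.5 / (1 << (k + 1));
    }

    // Log-space error between a peer's distance and a slot's ideal.
    static double fitError(double d, int k) {
        return Math.abs(Math.log(d) - Math.log(idealDistance(k)));
    }

    // Nearest slot for a peer at circular distance d from us.
    static int slotFor(double d) {
        int best = 0;
        double bestErr = Double.MAX_VALUE;
        for (int k = 0; k < SLOTS; k++) {
            double err = fitError(d, k);
            if (err < bestErr) { bestErr = err; best = k; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] peerDistances = { 0.26, 0.24, 0.13, 0.031, 0.0021 };
        // Keep the peer that best fits each slot's ideal distance.
        Map<Integer, Double> preferred = new TreeMap<>();
        for (double d : peerDistances) {
            int k = slotFor(d);
            Double cur = preferred.get(k);
            if (cur == null || fitError(d, k) < fitError(cur, k)) {
                preferred.put(k, d);
            }
        }
        System.out.println("preferred slot -> peer distance: " + preferred);
    }
}
```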

>>> Finally, I don't believe routing is the problem limiting performance
>>> on the current network. The distribution of incoming requests is
>>> usually very specialised, for example.

I'm thinking that it is the major problem. With my patch, the incoming
distribution resembles a steep bell curve.

>> The overall CHK success rate is a better measure of network health  
>> IMO.
>
> And it's amazingly good by historical standards if you look at the  
> higher HTL numbers.

No doubt. IIRC, it was around the time of the HTL increase and the
implementation of turtle requests, no?

> IMHO the rapid decline below 16 is to be expected because a lot of  
> stuff is answered quickly.

I'm sorry, but that doesn't make any sense... a high-HTL request (even if
answered early, at hop 4) should register a success or failure based on the
incoming HTL.

My experiment shows an obvious and marked improvement in CHK success rate
(across the board), but this might be expected, because it judges which
peers fill the slots (and therefore hang on longer) based on a measurement
of CHK success rate. And this is with only one node running the fix!

(link to graph & data)
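The bookkeeping I mean is simply this (a hypothetical sketch, not the actual
stats code):

```java
// Hypothetical sketch of per-HTL success accounting: a request that
// arrives with HTL 18 and is answered early at hop 4 still counts toward
// the HTL-18 bucket, because success is registered against the *incoming*
// HTL, not the HTL at the answering hop.
public class HtlStats {
    static final int MAX_HTL = 18;
    final int[] successes = new int[MAX_HTL + 1];
    final int[] attempts = new int[MAX_HTL + 1];

    void record(int incomingHtl, boolean success) {
        attempts[incomingHtl]++;
        if (success) successes[incomingHtl]++;
    }

    double successRate(int htl) {
        return attempts[htl] == 0 ? 0.0 : (double) successes[htl] / attempts[htl];
    }

    public static void main(String[] args) {
        HtlStats stats = new HtlStats();
        stats.record(18, true);   // answered early at hop 4 -- still an HTL-18 success
        stats.record(18, false);
        stats.record(10, true);
        System.out.println("HTL 18 success rate: " + stats.successRate(18));
        System.out.println("HTL 10 success rate: " + stats.successRate(10));
    }
}
```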

> However, IMHO there are lots of possible problems that would cause  
> poor performance by that measure other than poor routing. And they  
> do. Data persistence in particular is relatively poor. And I believe  
> this is caused by poor load management - inserts rejected by the  
> nodes where the data should be stored, etc.

But what is data persistence? It may simply be a failure to find the data!

If backoff is an indication of bad load management, I'd say it has  
been doing rather well recently!

> Update Over Mandatory means auto-update will work in *almost* all  
> cases unless we *really* mess things up in e.g. the transport layer  
> or announcement.

That is good to know!

I'm sure you can understand my eagerness; IMO this may be the last major
functional holdout.

--
Robert Hailey

