On Wednesday 12 December 2007 21:14, you wrote:
> 
> On Dec 11, 2007, at 5:59 PM, Matthew Toseland wrote:
> > Ok that could be interesting. Although ideally we'd have a
> > circular-keyspace-aware averager.
> >> ...
> > Ok. I suggest you commit, I will review post-commit.
> 
> Committed as r16508, though it has changed slightly based on my
> experiments with swap-biasing (e.g. it now shows the number of
> cache/store writes as well).

Ok, I will review the commit.
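
As an aside, a circular-keyspace-aware averager along the lines mentioned
above could be sketched roughly as follows: map each location in [0.0, 1.0)
onto the unit circle, average the vectors, and map the resulting angle back.
This is only an illustration; the class and method names are mine, not
anything committed:

// Hypothetical circular-keyspace-aware averager. Locations near 0.0 and
// near 1.0 average to ~0.0 rather than ~0.5, which a plain arithmetic mean
// gets wrong on a wrapping keyspace.
public final class CircularLocationAverager {
    private double sumSin = 0.0;
    private double sumCos = 0.0;
    private long reports = 0;

    /** Report a location in [0.0, 1.0). */
    public synchronized void report(double location) {
        double angle = location * 2.0 * Math.PI;
        sumSin += Math.sin(angle);
        sumCos += Math.cos(angle);
        reports++;
    }

    /** Circular mean of all reported locations, mapped back into [0.0, 1.0). */
    public synchronized double currentValue() {
        if (reports == 0) return 0.0;
        double loc = Math.atan2(sumSin, sumCos) / (2.0 * Math.PI);
        return loc < 0.0 ? loc + 1.0 : loc;
    }
}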
> 
>  From what I have discovered (or theorized) thus far, using the  
> average location of the entire store to bias against is way too much  
> of an anchor. This is good, because it takes up way too much memory to
> remember such a running average anyway. 

It is rarely useful to implement averages that way. Most real-world
applications use Kalman-style filters (we use
freenet.support.math.BootstrappingDecayingRunningAverage, which is pretty
much the same thing).
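
For context, that kind of decaying running average can be sketched in a few
lines. The class below is a simplified illustration in the spirit of
BootstrappingDecayingRunningAverage, not its actual implementation; the names
and the maxReports mechanism are assumptions:

// Simplified decaying running average: behaves like a plain mean for the
// first maxReports reports ("bootstrapping"), then gives each new report a
// fixed weight so older history decays away. Constant memory, unlike
// remembering every sample.
public final class DecayingAverageSketch {
    private final long maxReports;
    private double currentValue;
    private long reports = 0;

    public DecayingAverageSketch(double initialValue, long maxReports) {
        this.currentValue = initialValue;
        this.maxReports = maxReports;
    }

    public synchronized void report(double value) {
        reports++;
        double weight = 1.0 / Math.min(reports, maxReports);
        currentValue = currentValue * (1.0 - weight) + value * weight;
    }

    public synchronized double currentValue() {
        return currentValue;
    }
}

Note that, as discussed above, this is not circular-keyspace-aware; averaging
raw locations this way still misbehaves around the 0.0/1.0 wrap.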

> If we end up implementing such   
> a bias, in the end, it will likely just take into account the last few  
> inserts or successes (a small constant amount). In this way, the  
> pressure of your peers can still pull you into a new location  
> (dragging the weight of the last few inserts with you; which will be  
> updated).

This won't happen without convincing simulations. The current network has a lot of
#freenet-refs connections (which are random, by no means small-world), and
therefore has quite poor topology. A longer-term problem is that many nodes
don't run 24x7.
> 
> Whereas previously I have seen the 'storeDistance' be 0.065 (IIRC),
> after running with a recently-stored-location bias, I must have pulled
> my peers closer to me as well, as now even with the patch off, I do
> not see the storeDistance go nearly so high (staying around 0.004).
> Or, since the network has no anchor, maybe I've pressured the whole
> wheel to stop turning. This is with the store valued as one peer.

Valuing the store as one peer does indeed make a lot of sense. However, we 
can't just implement something like that.
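
Purely for illustration, and with the caveat above that nothing like this
goes in without convincing simulations, a constant-memory "last few inserts"
bias could look like the sketch below. The buffer size and all names are
assumptions, not committed code:

// Keep a small fixed-size ring buffer of recently stored locations and
// expose their circular mean as a single pseudo-peer location.
public final class RecentInsertBias {
    private final double[] recent;
    private int next = 0;
    private int filled = 0;

    public RecentInsertBias(int size) {
        this.recent = new double[size];
    }

    public synchronized void reportStoredLocation(double location) {
        recent[next] = location;
        next = (next + 1) % recent.length;
        if (filled < recent.length) filled++;
    }

    /** Circular mean of the remembered locations, or -1.0 if none yet. */
    public synchronized double biasLocation() {
        if (filled == 0) return -1.0;
        double sumSin = 0.0, sumCos = 0.0;
        for (int i = 0; i < filled; i++) {
            double angle = recent[i] * 2.0 * Math.PI;
            sumSin += Math.sin(angle);
            sumCos += Math.cos(angle);
        }
        double loc = Math.atan2(sumSin, sumCos) / (2.0 * Math.PI);
        return loc < 0.0 ? loc + 1.0 : loc;
    }
}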
> 
> With these stats I have noticed what may be an odd constant location
> shift between where the node is asked to insert data and where it is
> asked to retrieve it, as if the network does not look for data in the
> exact same place it stores it?! Suspicious, but I still can't confirm
> anything yet.

Odd, it really should be the same, except for the known issue that inserts 
don't stop when they find the data, so they go significantly further than the 
average (CHK) request.
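
For clarity, the "distance" being compared here (e.g. the 0.065 vs 0.004
storeDistance figures above) is assumed to be the shortest way around the
circular keyspace, i.e. something like:

// Assumed keyspace distance: shortest way around the [0.0, 1.0) circle,
// so e.g. distance(0.95, 0.05) is 0.1, not 0.9.
public final class KeyspaceDistance {
    public static double distance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }
}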