On Tuesday 08 April 2008 00:36, Ian Clarke wrote:
> http://video.google.com/videoplay?docid=-2372664863607209585
> 
> He mentions Freenet's use of this technique about 10-15 minutes in,
> they also use erasure codes, so it seems they are using a few
> techniques that we also use (unclear about whether we were the direct
> source of this inspiration).

They use 500% redundancy in their RS codes. Right now we use 150% (including 
the original 100%). Maybe we should increase this. Even a modest increase to, 
say, 200% or 250% might give significantly better performance, despite the 
increased overhead...
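
To make the trade-off concrete, here is a rough back-of-envelope sketch (not 
Freenet code; the 128-block segment size and the 50% per-block availability 
below are assumptions for illustration only). It treats each block as 
independently retrievable and assumes any k of the n blocks are enough to 
reconstruct the segment, as with an ideal RS/FEC code:

// Back-of-envelope sketch, not Freenet code: probability that a segment of
// k data blocks, expanded to n = k * redundancy total blocks, can be
// reconstructed when each block is independently retrievable with
// probability p (assumes 0 < p < 1). Any k of the n blocks suffice.
public class RedundancyEstimate {

    // P(X >= k) for X ~ Binomial(n, p), summed in log space to avoid overflow.
    static double survivalProbability(int n, int k, double p) {
        double total = 0.0;
        for (int i = k; i <= n; i++) {
            total += Math.exp(logBinomial(n, i)
                    + i * Math.log(p) + (n - i) * Math.log(1 - p));
        }
        return Math.min(total, 1.0);
    }

    // log(C(n, i)) = sum over j of log((n - j) / (j + 1))
    static double logBinomial(int n, int i) {
        double result = 0.0;
        for (int j = 0; j < i; j++) {
            result += Math.log(n - j) - Math.log(j + 1);
        }
        return result;
    }

    public static void main(String[] args) {
        int k = 128;      // data blocks per segment (illustrative, not measured)
        double p = 0.5;   // assumed per-block availability (illustrative)
        for (double redundancy : new double[] {1.5, 2.0, 2.5}) {
            int n = (int) Math.round(k * redundancy);
            System.out.printf("redundancy %.0f%%: n=%d blocks, P(recoverable) = %.6f%n",
                    redundancy * 100, n, survivalProbability(n, k, p));
        }
    }
}

With those illustrative numbers the step from 150% to 250% takes a segment 
from essentially unrecoverable to near-certain recovery; the real benefit 
obviously depends on actual block availability on the network.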

Also, they discourage low-uptime nodes by not giving them any extra storage. 
I'm not sure exactly what we can do about this, but it's a problem we need to 
deal with.
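
Purely as a hypothetical illustration of that kind of policy (none of these 
names or thresholds exist in our code), one could only credit a peer's store 
once its observed uptime crosses a threshold:

// Hypothetical sketch only, not existing Freenet code: count a peer's
// advertised store only if its observed uptime fraction is high enough,
// so low-uptime nodes get no extra storage credited to them.
public class UptimeGatedStorage {
    static final double MIN_UPTIME = 0.8;   // assumed threshold, illustrative

    // Effective storage credited to a peer: the full advertised store above
    // the uptime threshold, nothing extra below it.
    static long effectiveStore(long advertisedBytes, double uptimeFraction) {
        return uptimeFraction >= MIN_UPTIME ? advertisedBytes : 0L;
    }

    public static void main(String[] args) {
        System.out.println(effectiveStore(10L << 30, 0.95)); // high uptime: counted
        System.out.println(effectiveStore(10L << 30, 0.30)); // low uptime: ignored
    }
}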

We should also think about randomizing locations less frequently. It can take 
a while for the network to recover, and the current code randomizes roughly 
every 13 to 22 hours. It may be useful to increase this significantly. 
Unfortunately this parameter is very dependent on the network size and so on, 
so it's not really something we can get a good value for from simulations... 
I suggest we increase it by, say, a factor of 4, and if we run into major 
location distribution issues, we can reduce it again.
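
For concreteness, a minimal sketch of what scaling the interval by that factor 
would mean (the class and method names below are made up for illustration; the 
13-22 hour range and the factor of 4 come from the paragraph above):

// Minimal sketch, not the actual Freenet scheduler: pick the delay until the
// next location randomization uniformly in a range, stretched by a factor.
import java.util.Random;

public class SwapInterval {
    static final double MIN_HOURS = 13.0;
    static final double MAX_HOURS = 22.0;

    // Hours until the next location randomization, with the range scaled by
    // the given factor (e.g. factor 4 gives roughly 52 to 88 hours).
    static double nextRandomizationHours(double factor, Random rng) {
        double min = MIN_HOURS * factor;
        double max = MAX_HOURS * factor;
        return min + rng.nextDouble() * (max - min);
    }

    public static void main(String[] args) {
        Random rng = new Random();
        System.out.printf("next randomization in %.1f hours%n",
                nextRandomizationHours(4.0, rng));
    }
}

A factor of 4 would put the interval at roughly 52 to 88 hours.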
> 
> Ian.