>
>> Ever since the discussion of file splitting and raid levels, I've been
>> casually looking around for algorithms. This looks interesting, but
>> unfortunately I'm not a member of the ACM so I can't download the paper.
>> Perhaps some kind soul would upload it to Freenet, or make it available in
>> some other manner.
>
>The problem is that an algorithm which reconstructs data from an
>incomplete selection of parts may undermine Freenet's caching.  It would
>be difficult to persuade client writers to download more parts than are
>necessary, yet this will result in the superfluous parts being lost from
>Freenet, and you are back to square one.

If the various combinations of parts which add up to a whole document are 
stored under rotations of the same text key, as suggested, maybe client 
writers could be persuaded to rotate the text key through a random number of 
steps before hashing it? This wouldn't increase the bandwidth required 
(except in the case of repeat requests), but it would ensure that, on 
average, all parts were requested equally often. A rough sketch of the idea 
follows.
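To make the suggestion concrete, here is a minimal sketch (not actual Freenet
client code) of what a client might do: rotate the text key by a random
number of characters before hashing it, so that no single rotation (and hence
no single combination of parts) is favoured. The function name, the example
key, and the use of SHA-1 are all my own assumptions for illustration.

    # Hypothetical sketch of random key rotation before hashing.
    import hashlib
    import random

    def rotated_key_hash(text_key: str) -> bytes:
        """Rotate the text key by a random number of steps, then hash it.

        If each rotation of the key names a different combination of
        parts, choosing the rotation at random spreads requests evenly
        over all the combinations instead of always fetching the same one.
        """
        steps = random.randrange(len(text_key))        # random rotation amount
        rotated = text_key[steps:] + text_key[:steps]  # rotate the key string
        return hashlib.sha1(rotated.encode("utf-8")).digest()

    # Example: different clients requesting the same document will, on
    # average, ask under different rotations of the key.
    print(rotated_key_hash("freenet:alice/report").hex())
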


Michael
