>
> Proposal for file splitting - please ignore this if you've already come up
> with a better method.
>
> If a node cannot store the entire file, it tries to store half of it, then a
> third of it, and so on until it gets to a fraction that it can store. It then
> passes on the other fractions of the file under new keys, retaining a list of
> those keys so that when the original file is requested it can request the
> parts stored elsewhere and reassemble the original file.
>
> Obviously this can happen recursively - if a large node gets a 200 MB file, it
> may store 100 MB and pass the rest on. If the next node can only store 10 MB,
> it will split its 100 MB file into ten 10 MB files and pass nine of them on.
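A minimal sketch of that splitting scheme in Python, assuming a node knows
its free capacity up front; the function name and the way remainder bytes
are ignored are illustrative assumptions, not part of the proposal:

    def split_for_storage(file_size, capacity):
        """Find the smallest n such that a 1/n fraction of the file fits
        locally, store one part, and forward the other n - 1 parts, each
        under a fresh key kept for later reassembly."""
        n = 1
        while file_size / n > capacity:   # try 1/1, 1/2, 1/3, ... of the file
            n += 1
        part = file_size // n             # sketch ignores remainder bytes
        forwarded = [part] * (n - 1)      # sizes of the parts passed on
        return part, forwarded

    # The example from the post: a 200 MB file at a node with 100 MB free,
    # then its 100 MB share at a node with only 10 MB free.
    print(split_for_storage(200, 100))    # -> (100, [100])
    print(split_for_storage(100, 10))     # -> (10, [10, 10, ...])  nine parts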
By file splitting, we meant that there would be mandatory chunk sizes for
files, such as 16k, 32k, 64k, 128k, and 256k, or perhaps higher. Files
would be padded so that they fit a given chunk size. Your proposal might
have some routing problems too, I think.
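A similarly minimal sketch of the chunk-size scheme; the zero-byte padding
and the error for oversized files are assumptions, since the message does
not say how either case is handled:

    CHUNK_SIZES = [16 * 1024, 32 * 1024, 64 * 1024, 128 * 1024, 256 * 1024]

    def pad_to_chunk(data: bytes) -> bytes:
        """Pad a file up to the smallest mandatory chunk size that holds it."""
        for size in CHUNK_SIZES:
            if len(data) <= size:
                return data + b"\0" * (size - len(data))
        raise ValueError("larger files would first be split into fixed chunks")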
Scott