> By file splitting, we meant that there would be mandatory chunk sizes
> for files such as 16k, 32k, 64k, 128k, and 256k or perhaps higher.  Files
> would be padded so that they fit a given chunk size.  Your proposal
> might have some routing problems too, I think.

(I stole this from the Freehaven project.)

Instead of splitting files into chunks, why not use Rabin's IDA?
(http://www.acm.org/pubs/citations/journals/jacm/1989-36-2/p335-rabin/)

Basically it lets you split a file of length L into n parts, where only m
parts are needed to reconstruct the file.  m <= n, and the size of each part
is L/m.  The benefits are higher reliability, since you can still
reconstruct the file even if some parts are missing, and that it is harder
to reconstruct partial contents of the file from an incomplete set of parts,
whereas with plain splitting, getting bytes 0-K of a file can already give
you useful info.
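
To make the idea concrete, here is a rough Python sketch of m-of-n
dispersal using a Vandermonde matrix over GF(257).  It only illustrates the
encode/decode shape; the field, the share numbering and the function names
are my own assumptions, not anything from Rabin's paper or from Freenet.

# Sketch of m-of-n information dispersal in the spirit of Rabin's IDA,
# using a Vandermonde matrix over the prime field GF(257).
P = 257  # smallest prime > 255, so every byte value is a field element

def _inv(a, p=P):
    # modular inverse via Fermat's little theorem
    return pow(a, p - 2, p)

def encode(data, m, n):
    """Split data into n shares; any m of them reconstruct the original.
    Each share holds roughly len(data)/m field elements."""
    assert 0 < m <= n < P
    # pad so the length is a multiple of m; a real client would also store
    # the original length somewhere, here we just return it
    padded = data + b"\0" * (-len(data) % m)
    shares = {i: [] for i in range(1, n + 1)}
    for off in range(0, len(padded), m):
        block = padded[off:off + m]
        for i in range(1, n + 1):
            # treat the block as polynomial coefficients, evaluate at x = i
            acc = 0
            for j, b in enumerate(block):
                acc = (acc + b * pow(i, j, P)) % P
            shares[i].append(acc)
    return shares, len(data)

def decode(subset, m, orig_len):
    """Rebuild the file from any m shares (dict of share id -> values)."""
    assert len(subset) >= m
    ids = sorted(subset)[:m]
    # invert the m x m Vandermonde matrix V[r][c] = ids[r]^c over GF(P)
    # with Gauss-Jordan elimination
    V = [[pow(i, c, P) for c in range(m)] for i in ids]
    A = [row[:] + [1 if r == c else 0 for c in range(m)]
         for r, row in enumerate(V)]
    for col in range(m):
        pivot = next(r for r in range(col, m) if A[r][col])
        A[col], A[pivot] = A[pivot], A[col]
        f = _inv(A[col][col])
        A[col] = [(x * f) % P for x in A[col]]
        for r in range(m):
            if r != col and A[r][col]:
                g = A[r][col]
                A[r] = [(A[r][c] - g * A[col][c]) % P
                        for c in range(2 * m)]
    Vinv = [row[m:] for row in A]
    # each column of share values decodes back into one m-byte block
    out = bytearray()
    for k in range(len(subset[ids[0]])):
        evals = [subset[i][k] for i in ids]
        for r in range(m):
            out.append(sum(Vinv[r][c] * evals[c] for c in range(m)) % P)
    return bytes(out[:orig_len])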

The problem is higher bandwidth and storage usage, but that can be balanced
against the benefits by adjusting the n/m ratio (and m == n is identical
to regular file splitting in terms of storage).
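
For example, with n = 5 and m = 3 each part is about L/3, so total storage
is 5L/3, i.e. n/m times the original; with m == n you store exactly L, the
same as plain splitting.  Using the sketch above (file contents and share
ids chosen arbitrarily):

shares, length = encode(b"some file contents", m=3, n=5)
subset = {i: shares[i] for i in (1, 3, 5)}   # any 3 of the 5 shares
assert decode(subset, m=3, orig_len=length) == b"some file contents"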

As far as implementation goes, this is a client-side issue only, so it
doesn't really require any work on the server.

-- 
Itamar S.T.  itamar at maxnm.com
Fingerprint = D365 7BE8 B81E 2B18 6534  025E D0E7 92DB E441 411C
