On Fri, Jul 28, 2000 at 03:47:32PM +0300, Itamar Shtull-Trauring wrote:
> > By file splitting, we meant that there would be mandatory chunk sizes
> > for files, such as 16k, 32k, 64k, 128k, and 256k or perhaps higher.  Files
> > would be padded so that they fit a given chunk size.  Your proposal might
> > have some routing problems too, I think.
> 
> (I stole this from the Freehaven project.)
> 
> Instead of splitting files into chunks, why not use Rabin's IDA?
> (http://www.acm.org/pubs/citations/journals/jacm/1989-36-2/p335-rabin/)
> 
> Basically it lets you split a file of length L into n parts, where only m
> parts (m <= n) are needed to reconstruct the file, and each part has size
> L/m.  The benefits are higher reliability, since the file can still be
> reconstructed even if some parts are missing, and that it is harder to
> recover partial contents of the file from a subset of parts, whereas with
> plain splitting, getting bytes 0-K of a file can already give you useful
> info.
> 
> The problem is higher bandwidth and storage usage, but that can be balanced
> against the benefits by tuning the n/m ratio (and m == n is identical to
> regular file splitting in terms of storage).
> 
> As far as implementation goes, this is a client-side issue only, so it
> doesn't really require any work on the server.
>


Ever since the discussion of file splitting and RAID levels, I've been
casually looking around for suitable algorithms. This looks interesting, but
unfortunately I'm not a member of the ACM, so I can't download the paper.
Perhaps some kind soul would upload it to Freenet, or make it available in
some other manner.
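
In the meantime, here is a rough Python sketch of the n-of-m idea, using a
Vandermonde matrix over GF(2^8), i.e. Reed-Solomon style erasure coding
rather than Rabin's exact construction (which I haven't been able to read).
All the names and parameters below are just for illustration; none of this is
Freenet code. Each of the n shares is about 1/m the size of the file, and any
m of them are enough to rebuild it:

# Sketch of n-of-m information dispersal over GF(2^8) (Reed-Solomon style).
# Any m of the n shares reconstruct the file; each share is ~len(data)/m bytes.

# --- GF(2^8) arithmetic via log/exp tables (polynomial x^8+x^4+x^3+x^2+1) ---
EXP = [0] * 510
LOG = [0] * 256
_x = 1
for _i in range(255):
    EXP[_i] = _x
    LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11D
for _i in range(255, 510):
    EXP[_i] = EXP[_i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_pow(a, e):
    return 1 if e == 0 else (0 if a == 0 else EXP[(LOG[a] * e) % 255])

def gf_inv(a):
    return EXP[255 - LOG[a]]

def gf_invert_matrix(mat):
    """Gauss-Jordan inversion of an m x m matrix over GF(2^8)."""
    m = len(mat)
    aug = [row[:] + [1 if r == c else 0 for c in range(m)]
           for r, row in enumerate(mat)]
    for col in range(m):
        piv = next(r for r in range(col, m) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        scale = gf_inv(aug[col][col])
        aug[col] = [gf_mul(v, scale) for v in aug[col]]
        for r in range(m):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [aug[r][c] ^ gf_mul(f, aug[col][c]) for c in range(2 * m)]
    return [row[m:] for row in aug]

def split(data, m, n):
    """Disperse data into n shares; any m of them are enough to rebuild it."""
    assert 1 <= m <= n <= 255
    padded = data + b"\0" * (-len(data) % m)
    shares = [(i, bytearray()) for i in range(n)]
    for off in range(0, len(padded), m):
        block = padded[off:off + m]
        for i, out in shares:
            acc = 0
            for j, b in enumerate(block):           # row i of the Vandermonde
                acc ^= gf_mul(gf_pow(i + 1, j), b)  # matrix times the block
            out.append(acc)
    return len(data), [(i, bytes(out)) for i, out in shares]

def combine(orig_len, shares, m):
    """Rebuild the original file from any m (index, bytes) shares."""
    shares = shares[:m]
    mat = [[gf_pow(i + 1, j) for j in range(m)] for i, _ in shares]
    inv = gf_invert_matrix(mat)     # any m Vandermonde rows are invertible
    out = bytearray()
    for off in range(len(shares[0][1])):
        col = [s[off] for _, s in shares]
        for j in range(m):
            acc = 0
            for k in range(m):
                acc ^= gf_mul(inv[j][k], col[k])
            out.append(acc)
    return bytes(out[:orig_len])

if __name__ == "__main__":
    data = b"Freenet test payload for n-of-m file dispersal." * 10
    length, shares = split(data, m=3, n=5)
    # lose two of the five shares; any three still reconstruct the file
    assert combine(length, [shares[0], shares[2], shares[4]], m=3) == data
    print("ok: 5 shares of", len(shares[0][1]), "bytes each for a", length, "byte file")

The n <= 255 limit just comes from working in GF(2^8), and a real
implementation would want something much faster than table lookups in Python,
but the interface (split into n, rebuild from any m) is the part that matters
on the client side.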


David Schutt

_______________________________________________
Freenet-dev mailing list
Freenet-dev@lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev
