On Sat, Jun 07, 2003 at 10:42:26AM +0200, Thomas Leske wrote:
> fish wrote:
> > On Fri, Jun 06, 2003 at 07:57:02PM +0200, Thomas Leske wrote:
> > > One could also solve the two problems without containers:
> > > 1) completeness:
> > >   Make FEC work for sets of small files. The regular files are
> > >   just inserted normally. The zip archive is only generated on the
> > >   side of the inserter or reader, for creating or using, respectively,
> > >   the error-correcting blocks, but the archive is not stored on Freenet.
> >
> > I should point out that if your container is large enough to FEC (that's
> > around a meg), then your container is too fucking big - in ideal
> > conditions (i.e. not Freenet), a 56k modem (which are very common,
> > despite what some of the broadband elite think) will take around 5-10
> > minutes to fetch your data.
> 
> But this is a shortcoming of the current FEC implementation, which was
> designed to protect large splitfiles. If you choose a smaller block size
> in the FEC algorithm, it could be used on smaller files as well. There is
> a quite flexible tool that could do the job independently of fproxy:
>  http://parchive.sourceforge.net/
> But an in-Freenet solution would be much more elegant, because the site
> can be browsed via fproxy and you can reinsert missing chunks.
> Of course you would not base the FEC encoding on the zip archive format,
> but extend the current FEC metadata to allow different sizes for each
> chunk and an arbitrary block size. The chunks are concatenated into a
> large file that is split into blocks in order to compute the correcting
> blocks. It may also be useful to allow zero padding after a chunk up to
> the next block boundary. If the block size is quite small, you would
> combine the correcting blocks into a number of larger chunks. The second
> version of parchive allows correcting chunks of increasing sizes of 2^n.
> Thus you have to download only as much error-correction information as
> you need.
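[A hypothetical sketch of the proposed metadata layout: chunks of arbitrary size are laid out in a virtual file, each optionally zero-padded to the next block boundary, and recovery blocks are grouped into chunks of roughly doubling size, loosely following parchive v2. None of these names are actual Freenet metadata fields.]

```python
def layout_chunks(sizes, block_size, pad_to_boundary=True):
    """Return (offset, size) for each chunk in the virtual concatenated
    file, plus the padded total length that gets split into FEC blocks."""
    entries, offset = [], 0
    for size in sizes:
        entries.append((offset, size))
        offset += size
        if pad_to_boundary:
            offset += (-offset) % block_size  # zero padding to boundary
    return entries, offset


def recovery_chunk_sizes(total_recovery_blocks):
    """Split N recovery blocks into chunks of 1, 2, 4, ... blocks, so a
    reader can download only as much correction data as it needs."""
    sizes, n = [], 1
    while total_recovery_blocks > 0:
        take = min(n, total_recovery_blocks)
        sizes.append(take)
        total_recovery_blocks -= take
        n *= 2
    return sizes
```

[A reader missing k blocks would fetch recovery chunks smallest-first until at least k recovery blocks are in hand.]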
> 
> > And that's of course on the low end of the FEC scale. Files this large
> > should not be allowed to be containered.
> 
> Note that there is no container in my proposal that everyone has to
> download. One can also just browse a site in the usual and unreliable
> way. Only if you want to fetch the whole site, or really need a certain
> missing file, do you use the FEC feature (and combine the chunks into a
> container in your client).

The result of this is that all the check blocks will fall out of Freenet
through never being used. This is why we always download a random m out
of n blocks in the splitfile fetcher.
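[A minimal sketch of the point above (names are illustrative, not the actual fetcher code): by requesting a uniformly random m of the n blocks, every data and check block is requested, and hence refreshed in the network, with equal probability, so check blocks do not decay from disuse.]

```python
import random


def pick_blocks(n_total, m_needed, rng=random):
    """Choose which m of the n splitfile blocks to request, uniformly at
    random, so no block is systematically left unrequested."""
    return sorted(rng.sample(range(n_total), m_needed))
```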
> 
>  Thomas

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
GPG key lost in last few weeks, new key on keyservers
ICTHUS - Nothing is impossible in him whom we trust
