On Tue, Oct 09, 2001 at 05:23:36PM -0500, thelema wrote:
> On Tue, 09 Oct 2001, Ian Clarke wrote:
> > 
> > It is nothing to do with having something published in Freenet, it is
> > the simple mathematics of it.  See GJ's freesite.
> > 
> I've read the freesite.  That argument was made when this debate first
> came up, and I'm still confused as to why it is convincing this time.
> The 10% retrieval failure rate is, in my opinion, highly exaggerated,
> and if freenet fails to retrieve a file 10% of the time you request it,
> we should fix *that*, not put more data into freenet, which will push
> chunks out faster.

It hardly matters whether per-request success is 90% or 99%: if you are
trying to retrieve 100 parts without redundancy you are still fucked
(0.9^100 is about 0.003% and 0.99^100 about 36% overall success). In a
system where retrieval cannot be guaranteed, redundancy is necessary
when splitting, to offset the exponential decay in success probability
as a file is split into many parts. It would be ridiculous to sacrifice
other aspects of the network to push the per-request failure rate down
to 1 in 1000 rather than something acceptable like 1 in 50 or 100, just
so that files split into 100 parts "only" fail about 1 time in 10.
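
To make the arithmetic concrete, here is a minimal sketch (plain Python,
nothing from the Freenet tree; the 100-of-120 split ratio is just an
assumption for illustration): without redundancy all n parts must come
back, so success decays as p^n, while with FEC-style redundancy any k of
the n stored parts suffice and the binomial tail stays near 1.

from math import comb

def success_all_parts(p, n):
    # No redundancy: every one of the n parts must be retrieved.
    return p ** n

def success_k_of_n(p, k, n):
    # With redundancy, any k of the n stored parts reconstruct the
    # file: P(at least k successes out of n) at per-request rate p.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

for p in (0.90, 0.99, 0.999):
    print("p=%.3f  plain 100 parts: %.5f  100-of-120: %.5f"
          % (p, success_all_parts(p, 100), success_k_of_n(p, 100, 120)))

Even at 90% per-request success, the 20% redundant split comes back
better than 99% of the time, which is the whole point of adding
redundancy rather than chasing ever lower per-request failure rates.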

--

Oskar Sandberg
oskar at freenetproject.org

