On 5 Nov 2006, [EMAIL PROTECTED] wrote:

> Well, as said, there's the overlap check. /dev/urandom from a single
> source would work but not from multiple - assuming they're not using
> the same /dev/urandom or there's at least one good source. I would
> think if you cranked the overlap check size up to the maximum, you
> should eventually get the correct file especially with such a huge
> amount of sources.

It is rather easy with a linear congruential random number generator;
see Knuth et al.  The file's SHA1 is a fine seed value.  If the
overlap ranges are known and the complete file is available, then
feeding valid data at the start and end of each range is not a
problem either.  Many files need only a few bytes of the header
corrupted to become unusable, so changing a single byte in the middle
of a range is enough.
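To make the point concrete, here is a minimal sketch (nothing to do
with actual gtk-gnutella code; serve_range, the seed derivation and
the Knuth MMIX constants are all illustrative) of how a hostile
source could serve a deterministic bogus byte stream.  Because every
range it serves is generated from the same seeded generator, any two
overlapping ranges agree with each other and the overlap check never
fires.

    /* Illustrative only: a hostile source seeds a linear congruential
     * generator from the file's SHA1 and serves the resulting bytes
     * instead of real data.  The stream is deterministic, so all of
     * its ranges are mutually consistent. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    static uint64_t lcg_state;

    static void lcg_seed(const uint8_t sha1[20])
    {
        /* Any fixed derivation works; first 8 digest bytes here. */
        memcpy(&lcg_state, sha1, sizeof lcg_state);
    }

    static uint8_t lcg_next(void)
    {
        /* Knuth's MMIX constants. */
        lcg_state = lcg_state * 6364136223846793005ULL
                  + 1442695040888963407ULL;
        return (uint8_t) (lcg_state >> 56);  /* high bits vary most */
    }

    /* Produce bytes [offset, offset+len) of the bogus "file". */
    static void serve_range(uint8_t *buf, const uint8_t sha1[20],
                            uint64_t offset, size_t len)
    {
        uint64_t pos;
        size_t i;

        lcg_seed(sha1);
        for (pos = 0; pos < offset; pos++)
            (void) lcg_next();               /* advance to range start */
        for (i = 0; i < len; i++)
            buf[i] = lcg_next();
    }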

Often, if you have downloaded the file three times, you can use
majority rule to reconstruct a good file; a rough sketch follows
below.  I do this manually with cut, tail, head and diff.  Since gtkg
keeps the bad files in "corrupt", it is fairly easy to locate them.
TTH would be better, imho, but majority repair is an immediate
solution.
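A rough sketch of the byte-wise majority vote, assuming three
complete copies of equal length (the file names copy1, copy2, copy3
and repaired are made up for the example, not anything gtkg writes):

    /* For every offset, keep the byte that at least two of the three
     * copies agree on; where all three disagree, keep the first
     * copy's byte and report the offset for manual inspection. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *a = fopen("copy1", "rb");
        FILE *b = fopen("copy2", "rb");
        FILE *c = fopen("copy3", "rb");
        FILE *out = fopen("repaired", "wb");
        long offset = 0;

        if (!a || !b || !c || !out) {
            perror("fopen");
            return EXIT_FAILURE;
        }

        for (;;) {
            int x = fgetc(a), y = fgetc(b), z = fgetc(c);

            if (x == EOF || y == EOF || z == EOF)
                break;                   /* assumes equal lengths */

            if (x == y || x == z)
                fputc(x, out);
            else if (y == z)
                fputc(y, out);
            else {
                fprintf(stderr, "no majority at offset %ld\n", offset);
                fputc(x, out);
            }
            offset++;
        }

        fclose(a); fclose(b); fclose(c); fclose(out);
        return EXIT_SUCCESS;
    }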

fwiw,
Bill Pringlemeir.


