Hi,

I think the default chunk sizes (min/max: 512 KiB, 10 MiB) are far
too big, especially considering swarming. I'd like to reduce them
to 64 KiB (min) and 256 or 512 KiB (max). At a later time, these
could be adjusted dynamically depending on the available bandwidth.

Anyway, I even suspect that these huge chunk limits are responsible
for the loss of downloaded data at shutdown time. For example, suppose
you're currently downloading a 10 MiB chunk from each of 4 sources and
shut down shortly before they finish. The chunks are marked as busy
and will thus be reset to empty on the next startup, causing a loss
of up to 40 MiB. That's if my assumption is correct; I haven't
verified it yet.
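The suspected lossage, and one way to avoid it, can be sketched roughly
as below. The Chunk structure and state names are purely illustrative,
not gtk-gnutella's actual data structures:

```python
# Hypothetical sketch: instead of resetting "busy" chunks to empty on
# restart, keep the bytes already received and shrink each chunk to its
# unfinished remainder, so only the tail needs re-downloading.

from dataclasses import dataclass

@dataclass
class Chunk:
    start: int      # byte offset of the chunk
    end: int        # exclusive end offset
    received: int   # bytes downloaded so far within this chunk
    state: str      # "empty", "busy", or "done"

def reset_busy_naive(chunks):
    """Suspected current behaviour: busy chunks revert to empty."""
    for c in chunks:
        if c.state == "busy":
            c.received = 0
            c.state = "empty"

def reset_busy_keep_data(chunks):
    """Keep the completed prefix; only the remainder stays to fetch."""
    for c in chunks:
        if c.state == "busy":
            c.start += c.received
            c.received = 0
            c.state = "empty" if c.start < c.end else "done"

MIB = 1024 * 1024
# Four 10 MiB chunks, each about 95% done at shutdown:
chunks = [Chunk(i * 10 * MIB, (i + 1) * 10 * MIB, int(9.5 * MIB), "busy")
          for i in range(4)]
lost = sum(c.received for c in chunks)
print(lost // MIB)  # 38 MiB would be re-downloaded under the naive reset
```

With the smaller limits proposed above, even the naive reset would bound
the loss to a few hundred KiB per source instead of 10 MiB.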

Smaller chunks would also decrease the turn-around time for
swarming, so peers could contribute sooner. If you look at
your upload screen, you'll probably see that all other vendors
use much smaller chunk sizes (though this might depend on your
connection speed if they handle it dynamically). I'm not very
familiar with PARQ, but requesting up to 10 MiB per chunk also
seems to get Gtk-Gnutella peers pushed far down in the queue.

This also means that Gtk-Gnutella occupies upload slots for
much longer than other peers do, and conversely allows them to
occupy its own slots for much longer.

-- 
Christian

Attachment: pgpOjHEeevPBh.pgp
Description: PGP signature
