>       In the long term, always dropping from the largest subqueue 
> gives you equal subqueues.

And, of course, one could have it drop them using a RED-like algorithm
to make the sessions stabilize themselves better.
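To make that concrete, here is a toy sketch of dropping from the longest subqueue with a RED-style drop probability that rises linearly between a min and max threshold, instead of a hard tail-drop. The function names, thresholds, and structure are invented for illustration; this is not the actual SFQ or RED code:

```python
import random

def red_drop_prob(qlen, min_th=5, max_th=15, max_p=0.1):
    """RED-style drop probability: 0 below min_th, rising linearly
    to max_p at max_th, and 1 beyond (thresholds are illustrative)."""
    if qlen < min_th:
        return 0.0
    if qlen >= max_th:
        return 1.0
    return max_p * (qlen - min_th) / (max_th - min_th)

def maybe_drop_from_longest(subqueues, rng=random.random):
    """Pick the longest subqueue and probabilistically drop its head,
    so the greediest session sees early, gentle pressure."""
    longest = max(subqueues, key=len)
    if longest and rng() < red_drop_prob(len(longest)):
        longest.pop(0)
```

Because the drop probability grows with queue length, the fattest session gets hit first but only occasionally near the threshold, which is what lets the TCP sessions back off gradually rather than synchronizing.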

> what they have) is better. Maybe doing it like CBQ, with an average
> packet size, would give us better results.

Calculating average packet size on the fly isn't that hard either.
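For what it's worth, a running average in the CBQ style can be kept with a single exponentially weighted moving average per queue. A sketch, with the smoothing weight chosen arbitrarily here:

```python
def ewma_avg_pkt(sizes, weight=0.25):
    """Exponentially weighted moving average of packet sizes.

    'weight' is the share given to each new sample (illustrative
    value); CBQ keeps a similar per-class average packet size
    ('avpkt') to convert byte budgets into packet counts.
    """
    avg = None
    for s in sizes:
        avg = s if avg is None else (1 - weight) * avg + weight * s
    return avg

# A burst of small ACKs pulls the average down only gradually:
print(ewma_avg_pkt([1500, 1500, 40, 40]))  # -> 861.25
```

One multiply-add per packet, no history buffer — cheap enough to do in the fast path.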

>       All of this started with the so-called Download Accelerators,
> which do parallel GETs and make TCP behave badly. 
> In the normal case TCP adjusts fast and does not create backlogs.
> But when you have to change its bandwidth, you have to create 
> a backlog to slow it down. It then clears fast.

I actually usually download pieces of the same file from multiple
mirrors if it's truly large and I want it fast :-).  Ranged downloads
make that quite possible ... The only way to equalize bandwidth
"fairly" in these scenarios still seems to be the hierarchical
approach: hash against the destination IP (the user receiving the
packets), then keep sub-queues under those queues based on ports, and
drop packets (based on RED?) within each of those sub-queues (because
they're closer to the actual TCP sessions involved).
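A rough sketch of that two-level scheme — the class name, hashing keys, and drop policy here are made up for illustration, not taken from the actual SFQ implementation:

```python
from collections import defaultdict

class TwoLevelQueue:
    """Outer queues keyed by destination IP, inner subqueues keyed by
    the port pair — so one user's parallel sessions compete with each
    other rather than with other users."""

    def __init__(self):
        # dst_ip -> (sport, dport) -> list of packets
        self.users = defaultdict(lambda: defaultdict(list))

    def enqueue(self, dst_ip, sport, dport, pkt):
        self.users[dst_ip][(sport, dport)].append(pkt)

    def drop_one(self, dst_ip):
        """Drop from the longest per-port subqueue of one user,
        i.e. penalize that user's greediest TCP session."""
        subqs = self.users[dst_ip]
        if not subqs:
            return None
        longest = max(subqs.values(), key=len)
        return longest.pop(0) if longest else None
```

With this layout a download accelerator's many parallel sessions all land under the same destination-IP queue, so its drops stay within that user's share of the link.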

>       People wanted options to tune hashsize/limits/depths, and
> the most wanted (needed) was the option to select the hash type.
> A SRC/DST port hash is in the TODO now.

And I'm glad it exists for testing at least.
-- 
Michael T. Babcock
CTO, FibreSpeed Ltd.

_______________________________________________
LARTC mailing list / [EMAIL PROTECTED]
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
