On 07/22/2005 07:55:41 AM, j knight wrote:
> Karl O. Pinc wrote:
> > Hi,
> >
> > It's been said on this list before that you can't
> > queue inbound traffic,
>
> What's the point? By the time these packets reach your box and
> jump through these hoops, they've already traversed your network
> link. Any effectiveness that your QoS strategy would've had is
> gone.

The idea is to trade off a bit of bandwidth for improved
latency when the link's bandwidth is saturated.

The point is to throttle sustained data transfers to below
the maximum data rate of the link.  (I'd start by guessing
and trying about a 5 to 10% reduction.)  This lets more new
traffic through the link than the link could otherwise
sustain, which in turn means that a larger proportion of
the datagrams reaching the QoS queues will be new traffic,
and a larger proportion of new traffic will therefore make
it end-to-end through the link.  TCP sessions that are
still climbing the slow-start curve are thus favored over
established connections already using lots of bandwidth,
so the connections sharing the link reach an equilibrium
of bandwidth utilization sooner.  That in turn should
lower the latency of "bursty" traffic over an already
saturated link.  Using an HFSC queue, RED, and maybe
other tricks should help, but I'd expect some benefit
from a straightforward throttling of the link bandwidth
alone.
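
For concreteness, here's roughly the sort of thing I have in
mind, assuming the extra 2-port box runs OpenBSD pf with ALTQ
and queues downloads on its LAN-facing interface.  The
interface names, the 9Mb figure (for a nominal 10Mb link),
and the port numbers are made-up placeholders, not a tested
configuration:

  # pf.conf fragment for the 2-port box: fxp0 faces the
  # upstream link, fxp1 faces the LAN.  Inbound (download)
  # traffic is queued on fxp1, where it is outbound from
  # this box's point of view.
  int_if = "fxp1"

  # Shape to roughly 90% of the link rate so the backlog
  # builds in these queues instead of at the far end of
  # the link.
  altq on $int_if hfsc bandwidth 9Mb queue { std, bulk }
  queue std  bandwidth 60% hfsc(default red)
  queue bulk bandwidth 30% hfsc(upperlimit 30% red)

  # pf uses last-matching-rule semantics, so the generic
  # rule comes first and the more specific one below wins.
  pass out on $int_if keep state
  # Steer known sustained transfers (keyed here on the
  # server's source port, purely as an example) into the
  # capped bulk queue; everything else lands in std.
  pass out on $int_if proto tcp from any port { 20, 873 } \
        to any keep state queue bulk

The essential bit is just the bandwidth figure sitting below
the real link rate; the separate bulk queue and RED are
refinements on top of that.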

That's the theory anyhow.  I'd like to hear whether anybody's
actually done this, for example by introducing another 2-port
box into the network, how the various parameters might be
tweaked, and what sort of improvement might be expected.
Also how it works out in practice in various real
environments; if there are never sessions with sustained
data transfers then I'd not expect to see any improvement.
And, of course, whether there's some big hole in the theory.

Karl <[EMAIL PROTECTED]>
Free Software:  "You don't pay back, you pay forward."
                 -- Robert A. Heinlein
