Calomel wrote:
On Sun, Dec 23, 2007 at 10:59:10AM +0100, Federico Giannici wrote:
Tobias Wigand wrote:
hi,
Thank you for your suggestion.
Unfortunately I have found only generic statements and not the answer
to my question: how exactly does "priority" work?
Or, from a practical point of view: does a queue with a high "priority"
parameter get ALL the bandwidth it wants at the expense of queues with lower
"priority" (except, maybe, for their "realtime" reservations)?
well, if I get you right, your question is answered here:
http://calomel.org/pf_hfsc.html
quote:
Cbq and HFSC queues with a higher priority are preferred in the case of
overload.
OK, but the word "preferred" is a little too generic for me, especially
because this sentence is used to differentiate it from the behaviour of
the priq queues: "Priq queues with a higher priority are always served
first".
So, in HFSC, queues with higher "priority" are not ALWAYS served before
queues with lower "priority"?
Maybe the "realtime" bandwidth is always served first and only then does the
"priority" parameter come into play?
Do queues with higher "priority" get ALL the bandwidth they need
regardless of the "linkshare" of queues with lower "priority"?
Bye.
Federico,
"priority" is not the amount of bandwidth a queue gets, but the order in
which the packets are queued (buffered) when leaving the interface.
Let's say we have three queues (Qone, Qtwo, Qthree) where Qone has the
highest priority and Qthree has the lowest. If the link is saturated
then Qone will have its packets put at the front of the line, ahead of
packets for Qtwo and Qthree which were already waiting. When bandwidth
becomes available the packets are sent in the order of the line.
HFSC allows us to make sure that an important queue always has the bandwidth
it needs by using the "realtime" parameter. This guarantees bandwidth to
the queue no matter what priority it is.
If you use HFSC and the link is saturated, then the highest-priority queue
with "realtime" bandwidth reserved will have its packets sent out before the
other saturated queues. If, on the other hand, the network link is saturated
_AND_ all of the "realtime" reserved bandwidth is in use, then the highest
"priority" packets will be buffered first.
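To put that into pf.conf terms, a minimal ALTQ HFSC ruleset along the lines
of the Qone/Qtwo/Qthree example might look like this (purely a sketch: the
interface, link speed and percentages are made up):

  altq on $ext_if hfsc bandwidth 10Mb queue { Qone, Qtwo, Qthree }
  queue Qone   bandwidth 30% priority 7 hfsc(realtime 20%)
  queue Qtwo   bandwidth 30% priority 4 hfsc(linkshare 30%)
  queue Qthree bandwidth 40% priority 1 hfsc(linkshare 40%, default)

Here Qone is guaranteed 20% of the link no matter what, and when the link is
saturated its remaining packets are buffered ahead of those of Qtwo and
Qthree.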
So, if I have a SINGLE queue (called "voip") with priority 7 and all
other queues at the default priority (1), the "voip" queue should get ALL
the bandwidth not used by the "realtime" reservations (at least 20% of the
entire bandwidth). Am I right?
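(For concreteness, the kind of setup I mean looks roughly like this; the
queue names, link speed and percentages are simplified stand-ins, not our
actual ruleset:

  altq on $ext_if hfsc bandwidth 10Mb queue { voip, web, bulk }
  queue voip bandwidth 20% priority 7 hfsc(realtime 20%)
  queue web  bandwidth 40% priority 1 hfsc(realtime 10%)
  queue bulk bandwidth 40% priority 1 hfsc(realtime 10%, default)

The sum of the "realtime" reservations is well below the link capacity, so
there is always spare bandwidth left over to distribute.)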
The problem is that, even though this is our new configuration, from time to
time some of the traffic in the "voip" queue is dropped. It seems to
occur when the voip traffic exceeds (even by a small amount) the
"realtime" bandwidth assigned to the "voip" queue. What is strange is that
at the same moment NO traffic is dropped from the other queues, so it seems
that the "voip" queue does NOT get ALL the bandwidth left over from the
"realtime" assignments...
We want ALL the traffic in the "voip" queue to ALWAYS be passed, but
we cannot assign ALL the bandwidth to the "realtime" of the "voip" queue
because we have to guarantee some bandwidth to the other queues too.
Do you think this is the expected behaviour?
Can you suggest some other configuration?
Thanks.