Hi John, please see inline below:

At 10:05 AM 6/27/2012, John Neiberger pronounced:
<...snip...>
> What this should be doing is just causing us to service the queue more
> frequently. That could certainly reduce/eliminate drops in the event of
> congestion, but only if there is traffic in the other queues that is also
> contending for the bandwidth.
>
> In other words, if there is only one active queue (ie only one queue has
> traffic in it), then it can & should get full unrestricted access to the
> entire link bandwidth. Can you confirm whether there's traffic in the other
> queues?
>

I'm not certain whether or not we have traffic in the other queues. In
nearly all cases, the output drops are all in one queue with zero in
the other queues. That seems to indicate that either all of our
traffic is in one queue or there just isn't a lot of traffic in the other
queues.


Unfortunately, there are no 'absolute' per-queue counters, only per-queue drop counters. So there's no easy way to determine whether the other queues are being utilized unless you just 'know' (based on your classification policies and known application mix), or those queues overflow & drop.


>
>
<...snip...>


> This suggests to me that there is traffic in other queues contending for the
> available bandwidth, and that there's periodically instantaneous congestion.
> Alternatively you could try sizing this queue bigger and using the original
> bandwidth ratio. Or a combination of those two (tweaking both bandwidth &
> queue-limit).
>
> Is there some issue with changing the bandwidth ratio on this queue (ie, are
> you seeing collateral damage)? Else, seems like you've solved the problem
> already ;)

Nope, we don't have a problem with it. That's what we've been doing.
We haven't really been adjusting the queue limit ratios, though. In
most cases, we were just changing the bandwidth ratio weights. I'm
looking at an interface right now where the 30-second weighted traffic
rate has never gone above around 150 Mbps but I'm still seeing OQDs in
one of the queues only. How do you think we should be interpreting
that?



In my opinion, it indicates that:
1. there is traffic in the other queues contending for the link bandwidth
2. there is instantaneous oversubscription that causes the problem queue to fill, as it's not being serviced frequently enough and/or is inadequately sized
3. the other queues are sized/weighted appropriately to handle the amount of traffic that maps to them (ie, even under congestion scenarios, there is adequate buffer to hold enough packets to avoid drops)

If #1 were not true, then I don't see how changing the bandwidth ratio would make any difference at all: if there is no traffic in the other queues, then the single remaining active queue would get full, unrestricted access to the entire bandwidth of the link, and no queuing would be necessary in the first place.

Supposing there is no traffic in the other queues - in that case, you could certainly still have oversubscription of the single queue, and drops, but changing the weight should have no effect in that scenario at all (while changing the q-limit certainly could).
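To make that distinction concrete, here's a toy Python model of a work-conserving weighted scheduler with tail-drop queues. This is purely an illustration of the reasoning above, not a model of NX-OS internals; the function and parameter names (simulate, weights, qlimits, etc.) are invented for the sketch.

```python
def simulate(weights, qlimits, arrivals, capacity):
    """Toy work-conserving weighted scheduler with tail-drop queues.

    Each round, every backlogged queue is first served up to its weight
    in packets; any leftover capacity then drains whatever backlog
    remains (work-conserving, in the spirit of a WRR/DWRR link
    scheduler). Packets arriving to a full queue are tail-dropped.
    Returns the per-queue drop counts.
    """
    queues = [0] * len(weights)   # current depth of each queue
    drops = [0] * len(weights)
    for burst in arrivals:        # burst[i] = packets arriving to queue i this round
        # enqueue, tail-dropping anything past the queue limit
        for i, pkts in enumerate(burst):
            room = qlimits[i] - queues[i]
            queues[i] += min(pkts, room)
            drops[i] += max(0, pkts - room)
        left = capacity
        # weighted pass: each backlogged queue gets up to `weight` packets
        for i in range(len(queues)):
            served = min(queues[i], weights[i], left)
            queues[i] -= served
            left -= served
        # work-conserving pass: leftover capacity drains remaining backlog
        for i in range(len(queues)):
            served = min(queues[i], left)
            queues[i] -= served
            left -= served
    return drops

# Only queue 0 has traffic: the weights don't matter, the queue-limit does.
single = [[10, 0]] * 5
print(simulate([1, 7], [4, 16], single, 8))   # [30, 0]
print(simulate([7, 1], [4, 16], single, 8))   # [30, 0] -- same drops
print(simulate([1, 7], [8, 16], single, 8))   # [10, 0] -- bigger q-limit helps

# Both queues have traffic: raising queue 0's weight now reduces its drops.
both = [[6, 6]] * 5
print(simulate([1, 7], [4, 8], both, 8))      # [18, 0]
print(simulate([7, 1], [4, 8], both, 8))      # [10, 6]
```

With only one active queue, the scheduler is work-conserving, so that queue gets the whole link regardless of its weight, and only the queue-limit changes the drop count; the moment a second queue competes, the weight starts to matter - which is exactly the distinction drawn above.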


2 cents,
Tim





>
> Hope that helps,
> Tim

It helps a lot! thanks!

John




Tim Stevenson, tstev...@cisco.com
Routing & Switching CCIE #5561
Distinguished Technical Marketing Engineer, Cisco Nexus 7000
Cisco - http://www.cisco.com
IP Phone: 408-526-6759
********************************************************
The contents of this message may be *Cisco Confidential*
and are intended for the specified recipients only.


_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
