We're doing some Catalyst testing to roll out QoS on our Ethernet network and have come up against a hurdle. On most of our MAN backbone links, the actual bandwidth from one CO to another is not always 100 Mbps; at times a link is only capable of hitting, say, 80 Mbps or less (we're a wireless ISP).

Since we have to use an FE port for this type of connection, do the switches assume they have 100 Mbps of bandwidth to play with when placing packets into the appropriate queues?

I'm a bit confused as to how the switches behave here. If I were using Cat5 cable or fiber, this would be simple to understand, since the bandwidth would be fixed. :)

Here's an example of the configuration I'm using on a 3550-24:


interface FastEthernet0/x
 mls qos trust dscp
 ! relative WRR weights for egress queues 1-4
 wrr-queue bandwidth 40 35 25 1
 ! CoS-to-queue mappings
 wrr-queue cos-map 1 0 1
 wrr-queue cos-map 2 2
 wrr-queue cos-map 3 3 4 6 7
 wrr-queue cos-map 4 5
 ! queue 4 is serviced as strict priority, so its WRR weight is ignored
 priority-queue out
!

The switches we use are 2950s, 3550s, 3750s, and 6524s.

With MQC and "layer 3" QoS, I'd know how to fix this: set the "bandwidth" command on the physical interface and base the output policy-map on "bandwidth percent" for each class, something like the sketch below. Layer 2 QoS doesn't seem to work this way, though.
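Here's a rough sketch of what I mean on a router (the class names, DSCP matches, and percentages are just placeholders; 80000 kbps matches the 80 Mbps example above):

class-map match-any VOICE
 match ip dscp ef
class-map match-any CRITICAL
 match ip dscp af31
!
policy-map MAN-OUT
 ! percentages are calculated from the interface "bandwidth" value
 class VOICE
  priority percent 20
 class CRITICAL
  bandwidth percent 30
 class class-default
  bandwidth percent 20
!
interface FastEthernet0/0
 ! tell IOS the real link rate in kbps so the percentages scale to it
 bandwidth 80000
 service-policy output MAN-OUT
!

Since "bandwidth percent" is derived from the configured interface "bandwidth", the per-class guarantees track whatever I tell the router the link can actually carry.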

Any help would be appreciated.

Thanks.

Jose