What about the VIP CPU load on these VIPs? Do you graph it?
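(A minimal sketch of how one might check this, assuming the VIP sits in slot 9 to match the Serial9/0/0 interfaces quoted below; exact command availability varies by IOS release, so verify on your image:

    show controllers vip 9 tech-support

For graphing over time, per-card CPU figures are also exposed via SNMP in CISCO-PROCESS-MIB (cpmCPUTotalTable) on releases that support it, so an MRTG/RRDtool-style poller can trend the VIP CPU alongside the drop counters.)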
Todd Shipway wrote:
> The multilink interface had QoS set up, but not on the individual serial
> interfaces. However, I am seeing these drops on interfaces with no QoS
> or shaping on them as well.
>
> On Feb 19, 2009, at 3:03 PM, "David Freedman"
> <david.freed...@uk.clara.net> wrote:
>
>> Todd, do you have any kind of shaping / QoS on these circuits?
>>
>> Drops by traffic management configurations are frequently shown as
>> interface drops.
>>
>> Dave.
>>
>> Todd Shipway wrote:
>>> Hi,
>>>
>>> We have multiple T1 interfaces across different cards and different
>>> types of cards in a 7513. Many interfaces are showing output drops
>>> and I can't pinpoint why. The interfaces are spread throughout the
>>> system, and I can't pinpoint a single point that could be causing the
>>> drops. Below are two interfaces showing the drops.
>>>
>>> The bandwidth is very low when the drops are occurring. These two
>>> interfaces are part of a multilink interface, which shows no drops.
>>> Any ideas as to what I should look for that could be causing this?
>>>
>>> Serial9/0/0:12 is up, line protocol is up
>>>   Hardware is cyBus T3
>>>   Description: Bonded T1
>>>   MTU 1500 bytes, BW 1540 Kbit, DLY 20000 usec,
>>>      reliability 255/255, txload 11/255, rxload 5/255
>>>   Encapsulation PPP, LCP Open, multilink Open
>>>   Link is a member of Multilink bundle Multilink66, crc 16, loopback not set
>>>   Keepalive set (10 sec)
>>>   Last input 00:00:00, output 00:00:00, output hang never
>>>   Last clearing of "show interface" counters 6d03h
>>>   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4050
>>>   Queueing strategy: fifo
>>>   Output queue: 0/100 (size/max)
>>>   5 minute input rate 33000 bits/sec, 19 packets/sec
>>>   5 minute output rate 68000 bits/sec, 18 packets/sec
>>>      3121597 packets input, 667733893 bytes, 0 no buffer
>>>      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
>>>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
>>>      2929740 packets output, 1363133541 bytes, 0 underruns
>>>      0 output errors, 0 collisions, 0 interface resets
>>>      0 output buffer failures, 0 output buffers swapped out
>>>      0 carrier transitions
>>>   no alarm present
>>>   Timeslot(s) Used: 1-24, Transmitter delay is 0 flags
>>>   non-inverted data
>>>
>>> Serial9/0/0:11 is up, line protocol is up
>>>   Hardware is cyBus T3
>>>   Description: Bonded T1
>>>   MTU 1500 bytes, BW 1540 Kbit, DLY 20000 usec,
>>>      reliability 255/255, txload 10/255, rxload 5/255
>>>   Encapsulation PPP, LCP Open, multilink Open
>>>   Link is a member of Multilink bundle Multilink66, crc 16, loopback not set
>>>   Keepalive set (10 sec)
>>>   Last input 00:00:00, output 00:00:00, output hang never
>>>   Last clearing of "show interface" counters 6d03h
>>>   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4085
>>>   Queueing strategy: fifo
>>>   Output queue: 0/100 (size/max)
>>>   5 minute input rate 31000 bits/sec, 19 packets/sec
>>>   5 minute output rate 65000 bits/sec, 17 packets/sec
>>>      3122847 packets input, 668516220 bytes, 0 no buffer
>>>      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
>>>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
>>>      2930011 packets output, 1364226428 bytes, 0 underruns
>>>      0 output errors, 0 collisions, 0 interface resets
>>>      0 output buffer failures, 0 output buffers swapped out
>>>      0 carrier transitions
>>>   no alarm present
>>>   Timeslot(s) Used: 1-24, Transmitter delay is 0 flags
>>>   non-inverted data
>>>
>>> Multilink interface for the T1s above:
>>>
>>> Multilink66 is up, line protocol is up
>>>   Hardware is multilink group interface
>>>   Description: Bonded T1
>>>   Internet address is 10.10.56.1/30
>>>   MTU 1500 bytes, BW 3080 Kbit, DLY 100000 usec,
>>>      reliability 255/255, txload 11/255, rxload 4/255
>>>   Encapsulation PPP, LCP Open, multilink Open
>>>   Open: IPCP, loopback not set
>>>   Keepalive set (10 sec)
>>>   DTR is pulsed for 2 seconds on reset
>>>   Last input 00:00:30, output never, output hang never
>>>   Last clearing of "show interface" counters 1w0d
>>>   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
>>>   Queueing strategy: Class-based queueing
>>>   Output queue: 0/40 (size/max)
>>>   5 minute input rate 58000 bits/sec, 47 packets/sec
>>>   5 minute output rate 139000 bits/sec, 49 packets/sec
>>>      7075294 packets input, 1705142684 bytes, 0 no buffer
>>>      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
>>>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
>>>      7483773 packets output, 3524642362 bytes, 0 underruns
>>>      0 output errors, 0 collisions, 0 interface resets
>>>      0 output buffer failures, 0 output buffers swapped out
>>>      0 carrier transitions
>>>
>>> Any help would be appreciated. As I said earlier, this scenario is
>>> happening on multiple interfaces throughout the system. A channelized
>>> DS3 card has been swapped out as a test, and the output queue was
>>> raised from 40 to 100, with no change in drops.
>>>
>>> -Todd
>>>
>>> _______________________________________________
>>> cisco-nsp mailing list  cisco-nsp@puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>>> archive at http://puck.nether.net/pipermail/cisco-nsp/
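
[Editor's note: one way to test David's point about traffic-management drops. On the bundle carrying the class-based queueing policy, the per-class counters show whether the service policy itself is discarding packets; the interface name below is taken from the output quoted above:

    show policy-map interface Multilink66

If the class-level drop counters there track the Total output drops on the member links, the service policy is a likely cause; if they stay at zero, look elsewhere, e.g. bursts overflowing the output hold queue.]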
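
[Editor's note: since the 5-minute averages quoted are well below line rate, short bursts are a plausible cause; FIFO output drops can happen during milliseconds of congestion that a 5-minute average hides. A small sketch to make bursts more visible, using the standard IOS interface command (30 seconds is the minimum load interval; Serial9/0/0:12 is taken from the output above):

    configure terminal
     interface Serial9/0/0:12
      load-interval 30
     end

Watching the interface rates at 30-second granularity, or polling IF-MIB ifOutDiscards via SNMP at short intervals, should show whether the drops line up with traffic spikes.]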