Hi.

We are seeing a problem with high CPU usage on a VIP4-80 card when traffic climbs above 10 Mbps. Has anyone seen this type of behavior? Details are below.


The IOS version is rsp-pv-mz.124-23.

GigabitEthernet6/0/0 is up, line protocol is up
  Hardware is cyBus GigabitEthernet Interface, address is 0003.32fd.f0c0 (bia 0003.32fd.f0c0)
  Description: feed to cat1.5/6
  Internet address is X.X.X.X/28
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
     reliability 255/255, txload 5/255, rxload 6/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, link type is autonegotiation, media type is SX
  output flow-control is XON, input flow-control is XON
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters 7w6d
  Input queue: 0/75/38707/81008 (size/max/drops/flushes); Total output drops: 10276
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 24360000 bits/sec, 4195 packets/sec
  30 second output rate 21260000 bits/sec, 4126 packets/sec
     9833124494 packets input, 4733684298220 bytes, 0 no buffer
     Received 11921351 broadcasts, 0 runts, 0 giants, 1350 throttles
     2683847777 input errors, 0 CRC, 2683489514 frame, 4395 overrun, 370684 ignored
     0 watchdog, 11568950 multicast, 0 pause input
     0 input packets with dribble condition detected
     10530489310 packets output, 7707156852297 bytes, 0 underruns
     0 output errors, 0 collisions, 4 interface resets
     6002 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     8 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out
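Some quick arithmetic on those counters may be worth doing before blaming the VIP itself. The numbers below are copied from the output above; the interpretation (frame errors dominating the input errors, which would point at optics/fiber or negotiation rather than CPU) is only a guess on my part:

```python
# Counter values copied from the "show interface" output above.
packets_input = 9833124494
input_errors  = 2683847777
frame_errors  = 2683489514

# Roughly what fraction of input traffic is errored?
error_ratio = input_errors / packets_input
print(f"input errors / input packets: {error_ratio:.1%}")  # roughly 27%

# Almost every input error is a framing error.
frame_share = frame_errors / input_errors
print(f"frame errors / input errors:  {frame_share:.3%}")
```

With better than a quarter of the input packets erroring, and nearly all of them frame errors, it might be worth ruling out the SX optic/fiber toward cat1 first.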

VIP-Slot6   10:43:52 AM Tuesday Jul 6 2010 PDT


    111111111111111111111111111111111111111111111111111111111111
    222233333222221111100000333331111100000222223333322222222221
100
 90
 80
 70
 60
 50
 40
 30
 20
 10 ************************************************************
   0....5....1....1....2....2....3....3....4....4....5....5....6
             0    5    0    5    0    5    0    5    0    5    0
               CPU% per second (last 60 seconds)


    151111351111111511852381153122211111212111161211118331181111
    415869924534233096537174664310543638877434467042630685635444
100
 90                   *   *
 80                   *   *                           *    *
 70                   *   *                    *      *    *
 60                   *   *  *                 *      *    *
 50  *     *       *  **  *  *                 *      *    *
 40  *    **       *  **  *  *                 *      ***  *
 30  *    **       *  *****  **   *     * *    *      ***  #
 20  #****## *     ***##**# *#* ***  * ****    #**  * #****#*
 10 ############################################################
   0....5....1....1....2....2....3....3....4....4....5....5....6
             0    5    0    5    0    5    0    5    0    5    0
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%
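For anyone who prefers numbers to the picture: in the standard IOS CPU-history layout, the two digit rows above each graph are the tens and ones digits of each column's value, read vertically. The per-minute maxima above decode like this (digit rows copied verbatim from the graph):

```python
# The two digit rows above the per-minute graph, copied verbatim.
# Column i, read vertically, is the maximum CPU% for minute i:
# tens digit from the first row, ones digit from the second.
tens = "151111351111111511852381153122211111212111161211118331181111"
ones = "415869924534233096537174664310543638877434467042630685635444"

max_cpu = [10 * int(t) + int(o) for t, o in zip(tens, ones)]
print("worst minute:", max(max_cpu), "%")
print("best minute: ", min(max_cpu), "%")
```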


[per-hour CPU graph garbled in transit; the surviving 100% and 90% rows show maxima reaching the high 90s at several points over the last 72 hours]
                   CPU% per hour (last 72 hours)
                  * = maximum CPU%   # = average CPU%



Thanks,

-Troy





_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
