Hi all,

I'm testing the control-plane policy in my lab, and I've found a very interesting behaviour.

I have a 6500/Sup720 with different IOS releases (SXF6, SXF10a, SXH3a). I'm sending a very large SYN flood at this router.

I'm doing this test with a clean config (erase startup-config, reload :) ).

I configured this policy:

class-map match-all synfloodgeprol
  match access-group 199
!
policy-map synflood-in
  class synfloodgeprol
    police cir 128000 bc 4000 be 4000 conform-action transmit exceed-action drop violate-action drop
!
access-list 199 remark DEFAULT
access-list 199 permit tcp any any
access-list 199 permit udp any any
access-list 199 permit icmp any any
access-list 199 permit ip any any
!
interface GigabitEthernet5/2
 ip address 10.0.0.1 255.255.255.0
 load-interval 30
 media-type rj45
 service-policy input synflood-in

I also tried attaching the service-policy to the control plane, but there was no difference.
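For reference, this is how I attached it to the control plane (standard MQC syntax, reusing the same policy-map):

```
control-plane
 service-policy input synflood-in
```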

The input interface traffic is:

  30 second input rate 155775000 bits/sec, 304249 packets/sec
  30 second output rate 128000 bits/sec, 250 packets/sec

The output rate is right: the CPU receives 128K of SYNs and answers with 128K of ACK/RST packets, because my policy is working. That is the goal in this case.
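As a quick sanity check on those counters (my own arithmetic, not router output), the reply rate works out to minimum-size packets:

```python
# Sanity check on the interface counters above:
# 128000 bits/sec output at 250 packets/sec implies 64-byte packets,
# i.e. minimum-size TCP ACK/RST replies.
output_bps = 128_000  # 30 second output rate, bits/sec
output_pps = 250      # 30 second output rate, packets/sec

avg_packet_bytes = output_bps / output_pps / 8
print(avg_packet_bytes)  # 64.0
```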

Under this flood, the CPU load is:

Router#cpu
CPU utilization for five seconds: 0%/0%; one minute: 3%; five minutes: 6%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
   3        1368      1378        992  0.55%  0.07%  0.06%   0 Exec
   5        3868       263      14707  0.00%  0.33%  0.25%   0 Check hea
  20        2624     34446         76  0.00%  0.09%  0.06%   0 IPC Seat
  43         652        27      24148  0.00%  0.02%  0.00%   0 Per-minu
 155       57572    310276        185  0.00%  1.57%  3.56%   0 IP Input
 230         368      2206        166  0.00%  0.01%  0.00%   0 CEF: IPv4
 240         528       703        751  0.07%  0.03%  0.02%   0 HIDDEN VL

The policy is working great.

But every 4 minutes the CPU load goes up:

Router#cpu
CPU utilization for five seconds: 79%/68%; one minute: 8%; five minutes: 6%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
   3        2012      1617       1244  0.31%  0.61%  0.22%   0 Exec
   5        4072       278      14647  0.00%  0.20%  0.22%   0 Check hea
  20        2812     37348         75  0.00%  0.04%  0.05%   0 IPC Seat
  27          56       555        100  0.00%  0.02%  0.00%   0 EnvMon
  43         708        29      24413  0.00%  0.03%  0.00%   0 Per-minut
 155       59732    336634        177 10.47%  1.13%  2.68%   0 IP Input
 230         400      2373        168  0.00%  0.01%  0.00%   0 CEF: IPv4
 240         568       756        751  0.00%  0.03%  0.02%   0 HIDDEN VL

A few seconds later:

Router#cpu
CPU utilization for five seconds: 99%/7%; one minute: 15%; five minutes: 7%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
   3        2100      1637       1282  1.11%  0.65%  0.23%   0 Exec
   5        4072       278      14647  0.00%  0.19%  0.22%   0 Check he
  20        2812     37348         75  0.00%  0.03%  0.05%   0 IPC Seat
  27          56       555        100  0.00%  0.02%  0.00%   0 EnvMon
  43         708        29      24413  0.00%  0.03%  0.00%   0 Per-minu
  77         252      1539        163  0.07%  0.00%  0.00%   0 Heartbeat
 155       66192    338269        195 90.71%  8.30%  4.14%   0 IP Input
 230         400      2382        167  0.07%  0.02%  0.00%   0 CEF: IPv4
 240         572       759        753  0.00%  0.03%  0.01%   0 HIDDEN VL

and again a few seconds later:

Router#cpu
CPU utilization for five seconds: 0%/0%; one minute: 2%; five minutes: 6%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
   3        2320      1730       1341  0.23%  0.08%  0.17%   0 Exec
   5        4552       308      14779  0.00%  0.25%  0.24%   0 Check hea
  20        3008     40249         74  0.00%  0.04%  0.04%   0 IPC Seat
  43         792        32      24750  0.00%  0.04%  0.00%   0 Per-minu
  77         316      1702        185  0.00%  0.01%  0.00%   0 Heartbeat
 155       68644    378964        181  0.00%  1.03%  3.26%   0 IP Input
 230         444      2639        168  0.07%  0.02%  0.00%   0 CEF: IPv4
 240         636       841        756  0.00%  0.03%  0.02%   0 HIDDEN VL



This is the CPU history:

            55555999999999944444
       333330000099999666667777711111          2222211111
100              **********
 90              **********
 80              **********
 70              **********
 60              **********
 50         ********************
 40         ********************
 30         ********************
 20         ********************
 10         ********************
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5
               CPU% per second (last 60 seconds)

      1
      0   9   9  12
    460444944394439
100   *   *   *
 90   *   *   *
 80   *   *   *
 70   *   *   *
 60   *   *   *
 50   *   *   *
 40   *   *   *
 30   *   *   *   *
 20   #   #   #   *
 10   #   #   #  **
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%


If I increase the policer from 128K to 256K, the big CPU spike comes every 2 minutes.

If I set it to 64K, the spike still comes every 4 minutes, but it peaks at ~40-50% instead of 100%.

Any idea?

Thanks

Laszlo
_______________________________________________
cisco-nsp mailing list  [email protected]
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
