Hello, folks!

We are testing PF_RING in a heavily loaded environment and hit a few bugs at ~7Mpps.

We are not using Zero Copy drivers; we use plain pf_ring with patched
network drivers.
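
For context, our capture path is roughly the standard PF_RING API
loop. Here is a minimal sketch, not our exact application; the device
name "eth3" and the snaplen are placeholders:

#include <stdio.h>
#include <stdlib.h>
#include "pfring.h"

int main(void) {
  /* Placeholder device and snaplen; the real application differs. */
  pfring *ring = pfring_open("eth3", 1536, PF_RING_PROMISC);
  if (ring == NULL) {
    fprintf(stderr, "pfring_open failed\n");
    return EXIT_FAILURE;
  }

  pfring_set_application_name(ring, "capture-test");

  if (pfring_enable_ring(ring) != 0) {
    fprintf(stderr, "pfring_enable_ring failed\n");
    pfring_close(ring);
    return EXIT_FAILURE;
  }

  u_char *buffer = NULL;
  struct pfring_pkthdr hdr;

  /* buffer_len = 0 returns a pointer into the ring; last argument = 1
   * means wait for incoming packets. */
  while (pfring_recv(ring, &buffer, 0, &hdr, 1) > 0) {
    /* Per-packet processing would go here; we just print the length. */
    printf("got packet, caplen=%u\n", hdr.caplen);
  }

  pfring_close(ring);
  return EXIT_SUCCESS;
}

(Built with something like: gcc -o capture capture.c -lpfring -lpcap)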

Please take a look at these screenshots:
https://www.dropbox.com/s/mio6fz9gz52x4fj/perftoppng.png?dl=0
https://www.dropbox.com/s/l4us5il10fjvl24/toppng.png?dl=0

Environment:
CentOS 7
3.10.0-123.6.3.el7.x86_64
PF_RING 6.0.2

PF_RING kernel module configuration: transparent_mode=2 quick_mode=1
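
For reference, we load the module roughly like this (the path is
illustrative):

insmod ./pf_ring.ko transparent_mode=2 quick_mode=1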

As you can see, PF_RING eats the whole CPU and brings the server to its knees at average load.

After a few more tests we really killed this server, and it crashed
with the following errors:

[13086.272014]
[13086.272020] CPU: 25 PID: 0 Comm: swapper/25 Tainted: GF O--------------   3.10.0-123.6.3.el7.x86_64 #1
[13086.272064] Hardware name: Dell Inc. PowerEdge R720xd/0HJK12, BIOS 2.2.2 01/16/2014
[13086.272098] task: ffff880fe8d571c0 ti: ffff880fe8d66000 task.ti: ffff880fe8d66000
[13086.272132] RIP: 0010:[<ffffffff812c6096>]  [<ffffffff812c6096>] memcpy+0x6/0x110
[13086.272169] RSP: 0018:ffff881fff383ac0  EFLAGS: 00010282
[13086.272194] RAX: ffffc90022ada036 RBX: 00000000fffffffc RCX: 00000000fffcf032
[13086.272226] RDX: 00000000fffffffc RSI: ffff881f617ba698 RDI: ffffc90022b0b000
[13086.272258] RBP: ffff881fff383b18 R08: ffffc90022ada036 R09: 0000000000000081
[13086.272290] R10: ffff881f4c3ab800 R11: ffffffffa04c8020 R12: 0000000000000000
[13086.272321] R13: 00000000fffffffc R14: ffff881fff383c4c R15: 0000000000000032
[13086.272367] FS:  0000000000000000(0000) GS:ffff881fff380000(0000) knlGS:0000000000000000
[13086.272403] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[13086.272430] CR2: ffffc90022b0b000 CR3: 00000000018d0000 CR4: 00000000001407e0
[13086.272477] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[13086.272509] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[13086.272554] Stack:
[13086.272566]  ffffffff814bf080 ffff881fe1a16000 ffff881f4c3ab800 0000000000000000
[13086.272613]  ffff881fff383af0 ffff881f4c3ab800 ffff881fe1a16000 ffff881f4c3ab800
[13086.272659]  0000000000000000 ffff881fff383c4c ffff881f4c3ab800 ffff881fff383d50
[13086.272711] Call Trace:
[13086.272725]  <IRQ>
[13086.272738]
[13086.272755]  [<ffffffff814bf080>] ? skb_copy_bits+0x60/0x290
[13086.272789]  [<ffffffffa04ba920>] skb_ring_handler+0x1600/0x1ef0 [pf_ring]
[13086.272838]  [<ffffffff8114ae64>] ? __alloc_pages_nodemask+0x174/0xb10
[13086.272871]  [<ffffffff81149868>] ? free_compound_page+0x38/0x40
[13086.272901]  [<ffffffff814be5d0>] ? build_skb+0x30/0x1d0
[13086.272936]  [<ffffffffa03fe818>] ixgbe_clean_rx_irq+0x928/0xd70 [ixgbe]
[13086.272971]  [<ffffffff810a1cc7>] ? enqueue_entity+0x237/0x890
[13086.273002]  [<ffffffffa03ffffd>] ixgbe_poll+0x46d/0x820 [ixgbe]
[13086.273033]  [<ffffffff814d02aa>] net_rx_action+0x15a/0x250
[13086.273074]  [<ffffffff81067047>] __do_softirq+0xf7/0x290
[13086.273103]  [<ffffffff815f40dc>] call_softirq+0x1c/0x30
[13086.273132]  [<ffffffff81014d25>] do_softirq+0x55/0x90
[13086.274236]  [<ffffffff810673e5>] irq_exit+0x115/0x120
[13086.275334]  [<ffffffff815f49d8>] do_IRQ+0x58/0xf0
[13086.276418]  [<ffffffff815e9b2d>] common_interrupt+0x6d/0x6d
[13086.277509]  <EOI>
[13086.277520]
[13086.278578]  [<ffffffff81483252>] ? cpuidle_enter_state+0x52/0xc0
[13086.279643]  [<ffffffff81483385>] cpuidle_idle_call+0xc5/0x200
[13086.280825]  [<ffffffff8101bcae>] arch_cpu_idle+0xe/0x30
[13086.281942]  [<ffffffff810b47b5>] cpu_startup_entry+0xf5/0x290
[13086.282983]  [<ffffffff815cff11>] start_secondary+0x265/0x27b
[13086.283989] Code: 43 58 48 2b 43 50 88 43 4e 5b 5d c3 66 0f 1f 84 00 00 00 00 00 e8 fb fb ff ff eb e2 90 90 90 90 90 90 90 90 90 48 89 f8 48 89 d1 <f3> a4 c3 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 20 4c 8b 06 4c 8b
[13086.286116] RIP  [<ffffffff812c6096>] memcpy+0x6/0x110
[13086.287102]  RSP <ffff881fff383ac0>
[13086.288042] CR2: ffffc90022b0b000

Can you fix these issues?

-- 
Sincerely yours, Pavel Odintsov