Dirk
I have just released nprobe 6.12.130315, which fixes a bug with DNA. With this 
version, on my E3-1230, I can handle over 9 Gbit/s with one core using 
variable-size packets (over 2 Mpps per core).
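
Those two figures are consistent with each other: at 2 Mpps, sustaining 9 Gbit/s implies an average on-the-wire packet size of about 562 bytes, which fits the "variable size packets" remark. A quick check (my arithmetic, not part of the original mail):

```python
# Implied average packet size from the figures quoted above:
# 9 Gbit/s carried at 2 Mpps.
rate_bps = 9e9                       # 9 Gbit/s
pps = 2e6                            # 2 Mpps (packets per second)
avg_pkt_bytes = rate_bps / (pps * 8) # bits per packet -> bytes per packet
print(avg_pkt_bytes)                 # 562.5
```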

Regards Luca

On Mar 15, 2013, at 6:09 PM, Dirk Janssens <[email protected]> wrote:

> Hi,
>  
>  
> For testing purposes, I want to measure how much traffic a single-queue 
> nprobe instance can handle. Unfortunately, I’m seeing a lot of packet loss 
> with it. I would expect the nprobe process to sit at 100% CPU if it cannot 
> keep up, but the strange part is that nprobe consumes almost no CPU (3.6%). 
> pfcount works fine (0% packet loss, 3.7% CPU usage).
>  
> What am I doing wrong?
>  
>  
>  
> The full details:
>  
> The hardware is an HP DL380 G7 (dual six-core X5670, 12 GB RAM, Intel 
> dual-port 82599EB NIC) running RHEL 6.4 x86_64.
> I installed the RPM packages of PF_RING (pfring-5.5.3-6049.x86_64) and nProbe 
> (nProbe-6.11.130311-3252.x86_64).
>  
> The pf_ring/DNA drivers are loaded as follows:
> insmod /usr/local/pfring/kernel/pf_ring.ko
> insmod /usr/local/pfring/drivers/DNA/ixgbe.ko RSS=0,0,0,0 MQ=0,0,0,0 
> LRO=0,0,0,0 FCoE=0,0,0,0
> ethtool -K dna1 lro off gso off gro off
> ip li set dna1 up
>  
>  
> If I run ‘pfcount -i dna1’ I get no packet loss:
>  
> =========================
> Absolute Stats: [8881013 pkts rcvd][8881013 pkts filtered][0 pkts dropped]
> Total Pkts=8881013/Dropped=0.0 %
> 8'881'013 pkts - 7'647'642'728 bytes [193'049.62 pkt/sec - 1'329.91 Mbit/sec]
> =========================
> Actual Stats: 198364 pkts [1'000.08 ms][198'346.74 pps/1.39 Gbps]
> =========================
>  
>  
> And the DNA stats file (in /proc/net/pf_ring/stats/) shows:
>  
> Duration: 0:00:00:55
> Packets:  10411066
> Dropped:  0
> Filtered: 10411066
> Bytes:    8861158308
>  
> The CPU utilisation of pfcount is 3.6%.
>  
>  
>  
> However, when I run ‘nprobe -b1 -w 16777216 -e0 -i dna1’ I get:
>  
> 15/Mar/2013 16:45:51 [nprobe.c:2004] Average traffic: [14.633 K pps][12 
> Mb/sec]
> 15/Mar/2013 16:45:51 [nprobe.c:2011] Current traffic: [14.283 K pps][12 
> Mb/sec]
> 15/Mar/2013 16:45:51 [nprobe.c:2017] Current flow export rate: [1349.4 
> flows/sec]
> 15/Mar/2013 16:45:51 [nprobe.c:2020] Flow drops: [export queue too 
> long=0][too many flows=0]
> 15/Mar/2013 16:45:51 [nprobe.c:2024] Export Queue: 0/16777216 [0.0 %]
> 15/Mar/2013 16:45:51 [nprobe.c:2029] Flow Buckets: 
> [active=33407][allocated=33407][toBeExported=0]
> 15/Mar/2013 16:45:51 [cache.c:744] Redis Cache [0 total/0.0 get/sec][0 
> total/0.0 set/sec]
> 15/Mar/2013 16:45:51 [cache.c:1226] LRUCache L7Cache [find: 0 operations/0.0 
> find/sec][cache miss 0/0.0 %][add: 0 operations/0.0 add/sec][tot: 0]
> 15/Mar/2013 16:45:51 [cache.c:1226] LRUCache FlowUserCache [find: 0 
> operations/0.0 find/sec][cache miss 0/0.0 %][add: 0 operations/0.0 
> add/sec][tot: 0]
> 15/Mar/2013 16:45:51 [nprobe.c:1871] Processed packets: 1316989 (max bucket 
> search: 3)
> 15/Mar/2013 16:45:51 [nprobe.c:1854] Fragment queue lenght: 2
> 15/Mar/2013 16:45:51 [nprobe.c:1880] Flow export stats: [262157752 
> bytes/350134 pkts][75054 flows/0 pkts sent]
> 15/Mar/2013 16:45:51 [nprobe.c:1890] Flow drop stats:   [0 bytes/0 pkts][0 
> flows]
> 15/Mar/2013 16:45:51 [nprobe.c:1895] Total flow stats:  [262157752 
> bytes/350134 pkts][75054 flows/0 pkts sent]
> 15/Mar/2013 16:45:51 [pro/pf_ring.c:80] Packet stats (PF_RING): 
> 1316989/14692243 [1115.6 %] pkts rcvd/dropped
> 15/Mar/2013 16:45:52 [engine.c:1921] 
> [maxBucketSearch=1][thread_id=0][idx=10508315][packet_hash=1470126107]
> 15/Mar/2013 16:45:54 [engine.c:1921] 
> [maxBucketSearch=2][thread_id=0][idx=10508233][packet_hash=1470126025]
>  
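
A note on the 1115.6 % figure in the PF_RING line above: that percentage is dropped over received, which exceeds 100 % whenever more packets are dropped than received. Expressed as a share of all packets that hit the ring, the loss is dropped / (received + dropped), roughly 91.8 %. A quick check on the counters from the log (my arithmetic, not part of the original mail):

```python
# Counters from the nprobe log line:
# "1316989/14692243 [1115.6 %] pkts rcvd/dropped"
rcvd = 1_316_989
dropped = 14_692_243

ratio_in_log = 100.0 * dropped / rcvd      # dropped/received, as nprobe prints it
loss = 100.0 * dropped / (rcvd + dropped)  # fraction of all packets lost

print(f"{ratio_in_log:.1f} %")  # 1115.6 %
print(f"{loss:.1f} %")          # 91.8 %
```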
> The /proc/net/pf_ring/stats/ file contains:
> Duration: 0:00:01:30
> Packets:  1763196
> Dropped:  19789008
>  
> The CPU utilisation of the nprobe process is 3.6%.
>  
> If nprobe CPU utilisation were the problem, how come it uses only 3.6%?
>  
>  
>  
> Kind regards,
> Dirk Janssens
>  
>  
>  
> _______________________________________________
> Ntop-misc mailing list
> [email protected]
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
