> On 13 Oct 2015, at 10:48, Lars Kulseng <[email protected]> wrote:
> 
> Hi Alfredo,
> 
> - The machine I am testing on sees about 100k pps on the interface. The 
> machine has 8 cores with HyperThreading disabled. 

100 kpps is not that much; I think some component is wasting time
somewhere.

> - Yes I am also maintaining an internal buffer when using pf_ring in the 
> capture tool. What I notice here is that, interestingly, there are almost no 
> packets in this buffer. Maybe a few thousand, compared to a couple million 
> when using af_packet. The buffer can actually hold 10M packets, but about 2M 
> are stored there during capture when using af_packet.

This means the application is not fast enough at moving packets from the ring
buffer to the internal buffer. This depends on the code doing that job.
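In case it helps, here is a minimal, hypothetical sketch of that pattern: one goroutine only dequeues and enqueues into a buffered channel, a second one does the processing. The packet source below is simulated; in the real tool the producer loop would be the pf_ring read call (e.g. gopacket's ReadPacketData), and any work done inside it delays the next dequeue.

```go
package main

import "fmt"

// Packet stands in for a captured frame; in the real tool this would be
// the []byte returned by the pf_ring read call.
type Packet []byte

func main() {
	const bufSize = 1 << 20 // internal buffer, analogous to your 10M-slot buffer
	buf := make(chan Packet, bufSize)

	// Consumer: does the (slow) processing, detached from the capture loop.
	done := make(chan int)
	go func() {
		n := 0
		for range buf {
			n++ // real processing would go here
		}
		done <- n
	}()

	// Producer: in the real tool this loop should only read from the ring
	// and enqueue; anything heavier here lets "Num Free Slots" drop to 0.
	for i := 0; i < 100000; i++ {
		buf <- Packet{byte(i)}
	}
	close(buf)
	fmt.Println(<-done)
}
```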

> Other observations:
> - I am also running suricata on the same machine, and when using pf_ring with 
> suricata, there are zero packets dropped and CPU usage is way down compared to 
> af_packet. I'm not sure of the strategy that suricata uses, but I noticed 
> that /proc/net/pf_ring/ has several ids for suricata in it that are all part 
> of the same cluster, so does this mean that suricata is establishing several 
> instances of pfring_open and attaching them to the same cluster?

Yes
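For reference, a sketch of that setup using the gopacket pfring bindings; eth5, cluster ID 99, and one ring per CPU are placeholder choices, not necessarily what suricata does. Each NewRing corresponds to one pfring_open, and SetCluster joins them all to the same cluster so the kernel load-balances flows across the readers. This needs libpfring and root, so treat it as an API sketch rather than a drop-in program.

```go
package main

import (
	"log"
	"runtime"

	"github.com/google/gopacket/pfring"
)

func main() {
	const clusterID = 99 // placeholder; any free cluster id works
	for i := 0; i < runtime.NumCPU(); i++ {
		ring, err := pfring.NewRing("eth5", 65536, pfring.FlagPromisc)
		if err != nil {
			log.Fatal(err)
		}
		// All rings join the same cluster; the kernel balances flows
		// across them (per-flow here; round-robin is also possible).
		if err := ring.SetCluster(clusterID, pfring.ClusterPerFlow); err != nil {
			log.Fatal(err)
		}
		if err := ring.Enable(); err != nil {
			log.Fatal(err)
		}
		go func(r *pfring.Ring) {
			for {
				data, _, err := r.ReadPacketData()
				if err != nil {
					return
				}
				_ = data // process packet
			}
		}(ring)
	}
	select {} // block forever while the workers run
}
```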

> Knowing that “Num Free Slots” needs to stay above zero is a good starting 
> point, I think. At least then I know that I have to speed up the application, 
> and that it is not something with the drivers etc.

Yes
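If you want to watch that value from the tool itself, here is a small sketch of parsing the field out of a /proc/net/pf_ring/&lt;id&gt; entry. The sample text is an illustrative excerpt, not the full file format; in the real tool you would read the file with os.ReadFile.

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// freeSlots extracts the "Num Free Slots" value from the text of a
// /proc/net/pf_ring/<id> entry. It returns -1 if the field is absent.
func freeSlots(procText string) int {
	sc := bufio.NewScanner(strings.NewReader(procText))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "Num Free Slots") {
			parts := strings.SplitN(line, ":", 2)
			if len(parts) == 2 {
				if n, err := strconv.Atoi(strings.TrimSpace(parts[1])); err == nil {
					return n
				}
			}
		}
	}
	return -1
}

func main() {
	// Illustrative excerpt only; the real file has many more fields.
	sample := "Appl. Name     : myTool\n" +
		"Tot Packets    : 123456\n" +
		"Num Free Slots : 4096\n"
	fmt.Println(freeSlots(sample))
}
```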

Alfredo

> 
> 
> Lars
> 
> On Tue, 13 Oct 2015 at 09:44, Alfredo Cardigliano <[email protected]> wrote:
> Hi Lars
> a few comments:
> - please note that the pcap API introduces some overhead: using this wrapper 
> on top of pf_ring causes some performance degradation. However, I need to 
> understand what rate (pps) you are talking about.
> - as for buffering, you said you are using a 2M-packet buffer with af_packet; 
> are you doing the same with pf_ring? Otherwise you should increase 
> min_num_slots in pf_ring.ko, but you will hit limits in kernel memory 
> allocation at some point.
> - if “Num Free Slots” drops to 0, it means your application is not fast enough 
> dequeueing packets from the ring buffer.
> 
> Alfredo
> 
> > On 13 Oct 2015, at 09:33, Lars Kulseng <[email protected]> wrote:
> >
> > I am authoring my own tool, written in Go (cgo, using the gopacket package 
> > from Google), that captures packets and does some processing on them. I 
> > have made it possible to choose how the tool captures packets: pcap 
> > (-lpcap), pf_ring (-lpfring), or af_packet (raw socket).
> >
> > The results I'm getting are that af_packet mode has 0 packet loss, but the 
> > application needs to keep about 2 million packets in an internal buffer to 
> > keep up. Both pf_ring mode and pcap mode drop a lot of packets, probably 
> > about 30%, according to the stats reported by pcap_stats and pfring_stats.
> >
> > I am using a pf_ring-aware version of libpcap, and have installed the 
> > pf_ring drivers for my NIC, and the pf_ring instance shows up in 
> > /proc/net/pf_ring/<id>, which is also showing me the same drop numbers.
> >
> > The tweaks I have made so far are to increase num_free_slots to 65536, which 
> > made no notable difference. I also disabled Hyper-Threading in the 
> > BIOS, which was necessary to get the af_packet mode to not drop packets.
> >
> > I tested some of the included examples, such as zcount (with options: -i eth5 
> > -c 1) and pfcount, and they seemed to work fine, with 0 packet loss. One 
> > difference I notice when comparing the numbers from pfcount with the 
> > numbers from my tool is that "Num Free Slots" shown in 
> > /proc/net/pf_ring/<id> sometimes drops to 0 with my tool.
> >
> > I have several tools that I want to run simultaneously, and so pf_ring 
> > (maybe with ZC) is probably what I want to end up with, but so far it's not 
> > working well. How can I troubleshoot this?
> >
> > - Lars
> > _______________________________________________
> > Ntop-misc mailing list
> > [email protected] <mailto:[email protected]>
> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> > <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> 
