Hi Charles
I guess you are using the standard PF_RING API; any chance you can try the ZC
API to check if there is any difference? (The passive-wait implementation is
different, and it usually performs better.)
I also pushed a change to the standard API; please update and let us know if
you see any improvement.

Thank you
Alfredo

> On 3 Nov 2017, at 15:08, Charles-Antoine Mathieu 
> <[email protected]> wrote:
> 
> I found a major difference between calling pfring_recv with the
> wait_for_incoming_packet flag set to 1 and running a busy loop with the
> flag set to 0.
> 
> Do you know why the flag induces so much latency?
> 
> With wait_for_incoming_packet set to 1
> # ./ntop
> 2017/11/03 14:54:50 Opening PF_RING capture zc:eth2@0
> 2017/11/03 14:54:51 1.031169ms
> 2017/11/03 14:54:51 3.125617ms
> 2017/11/03 14:54:51 42.012934ms
> 2017/11/03 14:54:51 13.026811ms
> 2017/11/03 14:54:52 14.241295ms
> 2017/11/03 14:54:52 53.317201ms
> 2017/11/03 14:54:52 21.21709ms
> 2017/11/03 14:54:52 60.293962ms
> 2017/11/03 14:54:53 28.195748ms
> 2017/11/03 14:54:53 67.271545ms
> 2017/11/03 14:54:53 35.182174ms
> 2017/11/03 14:54:53 3.112353ms
> 2017/11/03 14:54:54 42.200138ms
> 2017/11/03 14:54:54 10.104631ms
> 2017/11/03 14:54:54 49.172548ms
> 
> Busy loop with wait_for_incoming_packet set to 0
> # ./ntop
> 2017/11/03 14:53:54 Opening PF_RING capture zc:eth2@0
> 2017/11/03 14:53:54 112.808µs
> 2017/11/03 14:53:54 101.491µs
> 2017/11/03 14:53:54 105.697µs
> 2017/11/03 14:53:55 116.839µs
> 2017/11/03 14:53:55 92.49µs
> 2017/11/03 14:53:55 116.731µs
> 2017/11/03 14:53:55 101.466µs
> 2017/11/03 14:53:56 135.863µs
> 2017/11/03 14:53:56 112.909µs
> 2017/11/03 14:53:56 99.826µs
> 2017/11/03 14:53:56 111.789µs
> 2017/11/03 14:53:57 137.237µs
> 2017/11/03 14:53:57 191.66µs
> 2017/11/03 14:53:57 97.025µs
> 2017/11/03 14:53:57 100.77µs
> 2017/11/03 14:53:58 122.966µs
> 2017/11/03 14:53:58 91.301µs
> 2017/11/03 14:53:58 98.335µs
> 2017/11/03 14:53:58 80.595µs
> 
> 
> # ping 176.31.237.4
> PING 176.31.237.4 (176.31.237.4) 56(84) bytes of data.
> 64 bytes from 176.31.237.4: icmp_seq=1 ttl=60 time=0.185 ms
> 64 bytes from 176.31.237.4: icmp_seq=2 ttl=60 time=0.171 ms
> 64 bytes from 176.31.237.4: icmp_seq=3 ttl=60 time=0.198 ms
> 64 bytes from 176.31.237.4: icmp_seq=4 ttl=60 time=0.203 ms
> 64 bytes from 176.31.237.4: icmp_seq=5 ttl=60 time=0.201 ms
> 64 bytes from 176.31.237.4: icmp_seq=6 ttl=60 time=0.191 ms
> 64 bytes from 176.31.237.4: icmp_seq=7 ttl=60 time=0.215 ms
> 64 bytes from 176.31.237.4: icmp_seq=8 ttl=60 time=0.172 ms
> 64 bytes from 176.31.237.4: icmp_seq=9 ttl=60 time=0.205 ms
> ^C
> --- 176.31.237.4 ping statistics ---
> 9 packets transmitted, 9 received, 0% packet loss, time 7996ms
> rtt min/avg/max/mdev = 0.171/0.193/0.215/0.019 ms
> 
> 
> On Mon, 2017-10-30 at 17:25 +0100, Charles-Antoine Mathieu wrote:
>> I tried a lot of different configurations but could not figure out why
>> I get those weird latencies when taking the time right after the send
>> and recv calls.
>> 
>> I've reduced the code to the minimum and put it in this repository if
>> you want to take a quick look:
>> https://github.com/camathieu/pfring_latency#pfring_latency
>> 
>> Now I wonder if there are some tweaks in the module or driver
>> parameters that I overlooked, or if it could be related to Go, because
>> I'm out of ideas ^^.
>> 
>> On Fri, 2017-10-27 at 10:18 +0200, Charles-Antoine Mathieu wrote:
>>> 
>>> Yes:
>>> 
>>> https://github.com/google/gopacket/blob/master/pfring/pfring.go#L247
>>> 
>>> On Thu, 2017-10-26 at 19:14 +0200, Alfredo Cardigliano wrote:
>>>> 
>>>> Hi
>>>> are you setting the flush flag to 1 when calling pfring_send()?
>>>> 
>>>> Alfredo
>>>> 
>>>>> 
>>>>> On 26 Oct 2017, at 18:42, Charles-Antoine Mathieu
>>>>> <charles-antoine.[email protected]> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I'm trying to implement that, but my results look odd. I'm trying
>>>>> to ping a bunch of ~20 hosts, each with a latency of ~1.77 ms,
>>>>> every 5 seconds.
>>>>> 
>>>>> I capture the time right after pfring_send and pfring_recv. Within
>>>>> each batch I get about the same latency for every host, but across
>>>>> batches the latency ranges from 1 ms to 300 ms.
>>>>> 
>>>>> 37.187.9.32 : 11.532119ms
>>>>> 37.187.9.245 : 11.54512ms
>>>>> 37.187.9.124 : 11.525933ms
>>>>> ... 5 sec later the next batch ...
>>>>> 37.187.9.14 : 107.540483ms
>>>>> 37.187.9.17 : 107.519064ms
>>>>> 37.187.9.32 : 107.505375ms
>>>>> 
>>>>> I wonder if you are aware of some kind of buffering that might
>>>>> happen in the process to yield such results.
>>>>> 
>>>>> The ring is opened in ZC mode.
>>>>> The NIC is an Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network
>>>>> Connection (rev 01).
>>>>> 
>>>>> I'm using gopacket and Arista's nanotime package to capture a
>>>>> monotonic time, which relies on CLOCK_MONOTONIC; it seems more
>>>>> reliable and lightweight than time.Now() to me.
>>>>> https://github.com/aristanetworks/goarista/blob/master/monotime/nanotime.go
>>>>> 
>>>>> On Fri, 2017-10-06 at 16:38 +0200, Alfredo Cardigliano wrote:
>>>>>> 
>>>>>> Hi,
>>>>>> in ZC mode the kernel is bypassed, so you should send the packet
>>>>>> and call clock_gettime() immediately after in order to compute the
>>>>>> latency (the same function is used for RX in this case).
>>>>>> 
>>>>>> Alfredo
>>>>>> 
>>>>>>> 
>>>>>>> On 6 Oct 2017, at 16:34, Charles-Antoine Mathieu
>>>>>>> <charles-antoine.[email protected]> wrote:
>>>>>>> 
>>>>>>> Hello,
>>>>>>> 
>>>>>>> I have a process that sends and receives ICMP packets using
>>>>>>> PF_RING.
>>>>>>> 
>>>>>>> I'd like to capture both the sent and the received packets to be
>>>>>>> able to compute the latency. When I'm not in ZC mode I get both
>>>>>>> the sent and the received packets in the capture. However, in ZC
>>>>>>> mode I only get the received packets in the capture.
>>>>>>> 
>>>>>>> Is there a way to get the same behaviour in ZC mode?
>>>>>>> _______________________________________________
>>>>>>> Ntop mailing list
>>>>>>> [email protected]
>>>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop
