Anyway, I just tried increasing min_num_slots and everything works fine
now. But I guess the problem still fundamentally exists :)
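For anyone hitting the same thing: min_num_slots is a pf_ring kernel module parameter, so the module has to be reloaded to change it. Roughly what I ran (the module path and slot count are just examples from my setup, adjust as needed):

```shell
# Unload and reload the pf_ring kernel module with a larger ring.
# min_num_slots controls the number of slots per ring (65536 is an example value).
rmmod pf_ring
insmod ./pf_ring.ko min_num_slots=65536

# If the parameter is exported via sysfs, the active value can be checked with:
cat /sys/module/pf_ring/parameters/min_num_slots
```
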
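In case it helps anyone else debugging this, the drop counter I mentioned earlier comes from pfring_stats(). A minimal sketch of how I read it (error handling trimmed; the device name "eth1" is just an example, and this of course needs libpfring installed to build):

```c
#include <stdio.h>
#include <pfring.h>

int main(void) {
  /* Open eth1 (example device) with a 1500-byte snaplen in promiscuous mode. */
  pfring *ring = pfring_open("eth1", 1500, PF_RING_PROMISC);
  if (ring == NULL) {
    perror("pfring_open");
    return 1;
  }
  pfring_enable_ring(ring);

  /* ... the pfring_recv() receive loop would go here ... */

  /* Query the kernel-side counters: recv = packets delivered to the ring,
     drop = packets discarded because the ring was full. */
  pfring_stat stats;
  if (pfring_stats(ring, &stats) == 0)
    printf("recv=%llu drop=%llu\n",
           (unsigned long long)stats.recv,
           (unsigned long long)stats.drop);

  pfring_close(ring);
  return 0;
}
```
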

On Thu, Nov 6, 2014 at 5:36 PM, Behrooz Shafiee <[email protected]> wrote:

> And if I use the DNA drivers, then I guess NAPI is not involved, right?
> Does that solve the problem?
>
> On Thu, Nov 6, 2014 at 5:34 PM, Behrooz Shafiee <[email protected]>
> wrote:
>
>> Oh, I see. So I guess there is no way to stop the NAPI thread from doing
>> so? I mean, can't NAPI first check the ring and then read from the NIC?
>>
>> On Thu, Nov 6, 2014 at 5:28 PM, Alfredo Cardigliano <[email protected]
>> > wrote:
>>
>>> The NAPI thread is fast enough dequeueing from the NIC, but then it finds
>>> the ring full and discards the packets (drops).
>>>
>>> Alfredo
>>>
>>>
>>> On 06 Nov 2014, at 23:25, Behrooz Shafiee <[email protected]> wrote:
>>>
>>> Hi Alfredo,
>>>
>>>  I got your point, but I don't understand the point of flow control then!
>>> If I'm slow, then pfring should slow down reading from the NIC, so the NIC
>>> will notice the slowdown and send a pause frame to the sender! Is there
>>> anything wrong with this logic?
>>>
>>> Thanks,
>>>
>>> On Thu, Nov 6, 2014 at 5:21 PM, Alfredo Cardigliano <
>>> [email protected]> wrote:
>>>
>>>> Hi Behrooz,
>>>> your application is not fast enough dequeueing packets from the ring;
>>>> hence the drops.
>>>> You should try increasing the ring size (via an insmod parameter); it
>>>> helps at least with spikes.
>>>>
>>>> Alfredo
>>>>
>>>> On 06 Nov 2014, at 23:01, Behrooz Shafiee <[email protected]> wrote:
>>>>
>>>> Hello Pavel,
>>>>
>>>>  I'm not sure what you mean, but here is the output of top during the run:
>>>>
>>>> top - 16:58:48 up  2:27,  8 users,  load average: 0.25, 0.59, 0.81
>>>> Tasks: 211 total,   4 running, 207 sleeping,   0 stopped,   0 zombie
>>>> %Cpu(s):  7.3 us,  7.1 sy,  0.0 ni, 84.6 id,  0.2 wa,  0.8 hi,  0.0 si,  0.0 st
>>>> KiB Mem:  16334304 total,  5330972 used, 11003332 free,   112164 buffers
>>>> KiB Swap: 10207228 total,        0 used, 10207228 free.  2101892 cached Mem
>>>>
>>>> I just realized that, using pfring_stats, when I receive all 5000
>>>> packets the number of drops is 0, and when I don't receive all of them
>>>> the number of drops in the stats is around 2000-3000.
>>>>
>>>> On Thu, Nov 6, 2014 at 4:54 PM, Pavel Odintsov <
>>>> [email protected]> wrote:
>>>>
>>>>> Hello!
>>>>>
>>>>> Could you show top header and htop output?
>>>>>
>>>>> On Fri, Nov 7, 2014 at 12:48 AM, Behrooz Shafiee <[email protected]>
>>>>> wrote:
>>>>> > Hi everyone,
>>>>> >
>>>>> >  I have implemented a small transmission protocol over pfring. I rely
>>>>> > on Ethernet flow control, meaning that I assume I won't lose any
>>>>> > packets within the same subnet (I have no router, so no queueing...).
>>>>> > Everything was fine until I did some stress testing, as follows.
>>>>> > I have a rcvThread which blocks on the pfring_recv() function for
>>>>> > each incoming packet and processes it. I start a huge number of other
>>>>> > threads (e.g. 5000), and each of them sends a request to the server
>>>>> > through pfring_send (each response is one packet). The server then
>>>>> > replies with 5000 packets. Most of the time I receive all 5000
>>>>> > packets, but sometimes I miss some of them. For example, I reach the
>>>>> > line after pfring_recv() only 4503 times. I thought this was due to
>>>>> > overflow in the NIC, but I use an Intel Pro NIC which has both rx/tx
>>>>> > flow-control pause frames enabled, and I actually used a packet dump
>>>>> > tool (such as Wireshark) and can see the packets being received by
>>>>> > the NIC. So I assume they get lost somewhere along the line from the
>>>>> > NIC to the pfring_recv() function. Can anyone help me figure out what
>>>>> > might have gone wrong?
>>>>> >
>>>>> > PS: I use pfring in normal mode, not DNA.
>>>>> >
>>>>> >
>>>>> > Thanks,
>>>>> > --
>>>>> > Behrooz
>>>>> >
>>>>> > _______________________________________________
>>>>> > Ntop-misc mailing list
>>>>> > [email protected]
>>>>> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Sincerely yours, Pavel Odintsov
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>
>



-- 
Behrooz