No, I am getting the -1 return code from pfring_send().
When I said it didn't return an error code, I meant a
specific error code: it just always returns -1 no matter
what the problem is. Specifically, it doesn't say
whether the failure is such that a retry is appropriate.

I didn't consider opening two different rings for each
direction. Is there a specific reason to think that would
be better?
-don

-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Alfredo
Cardigliano
Sent: Tuesday, October 18, 2011 12:01 AM
To: [email protected]
Subject: Re: [Ntop-misc] Using DNA devices, pfring_send() sometimes
fails to a ring that's also receiving


On Oct 18, 2011, at 2:56 AM, Don Provan wrote:

> OK, thanks, I think I got it. Once I started playing around with it,
> I discovered that this approach works fine with mid-sized packets.
> It's only large packets (>1KB) and tiny packets (<128B) that run
> into trouble. It looks like in those cases, I was just overrunning
> the TX queue.
> 

So, if I understood correctly, with large/tiny packets you are getting a return
value != -1 from pfring_send(), but packets are not sent on the wire?

> 
> I have 16 threads. Each is assigned a unique ring for RX
> and a different unique ring for TX. So:
> 
> 1. Only one thread ever calls pfring_send() for any given
> ring.
> 
> 1a. Only one thread ever calls pfring_recv() for any given
> ring.
> 
> 2. For any given ring, one thread calls pfring_send() while
> a different thread calls pfring_recv().

With this design you can also open two rings per queue, setting the
direction:
pfring_set_direction(<dna0 rx ring>, rx_only_direction);
pfring_set_direction(<dna0 tx ring>, tx_only_direction);
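A minimal sketch of that layout, assuming the pf_ring 5.1-era four-argument pfring_open(device, promisc, caplen, reentrant) and with error handling trimmed; "dna0" and the caplen are placeholders:

```c
#include <stdio.h>
#include "pfring.h"

/* One RX-only and one TX-only ring on the same DNA queue, so the
 * receiving and sending threads never share a pfring handle. */
int main(void) {
    pfring *rx = pfring_open("dna0", 1 /* promisc */, 1518, 0);
    pfring *tx = pfring_open("dna0", 1 /* promisc */, 1518, 0);
    if (rx == NULL || tx == NULL) {
        fprintf(stderr, "pfring_open failed\n");
        return 1;
    }
    pfring_set_direction(rx, rx_only_direction);
    pfring_set_direction(tx, tx_only_direction);
    pfring_enable_ring(rx);
    pfring_enable_ring(tx);

    /* ... hand rx to the receiving thread and tx to the sending
     * thread, so each handle is only ever touched by one thread ... */

    pfring_close(tx);
    pfring_close(rx);
    return 0;
}
```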

Alfredo

> 
> -don
> 
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Alfredo
> Cardigliano
> Sent: Monday, October 17, 2011 10:55 AM
> To: [email protected]
> Subject: Re: [Ntop-misc] Using DNA devices, pfring_send() sometimes
> fails to a ring that's also receiving
> 
> 
> On Oct 17, 2011, at 6:40 PM, Don Provan wrote:
> 
>> Oh, sorry. I thought there was just some obvious quirk I was
>> overlooking.
>> 
>> I'm using an 82599 silicom card with two ports. The driver is
>> configured to present 8 RX queues for each port, all through DNA.
>> The application has
>> 16 threads, each using pfring_recv(wait_for_incoming_packet=0) to read
>> packets from one pfring, and then using pfring_send(flush_packet=1) to
>> transmit each packet to one of the *other* rings. The system has 8
>> cores,
>> and each thread is assigned to a specific core. (As it happens, the
>> two threads handling the two rings that are "working together" are
>> assigned to the same core, but I don't know if that's relevant.)
>> 
>> The basic approach for my app was just cloned from the
>> pfcount_multicast.c
>> example. I'm using "active polling", sleeping for 10us whenever
>> pfring_recv() returns nothing, then trying again.
>> 
>> On xmit, I'm generally flushing packets, but I don't think that
>> matters one way or the other.
>> 
>> As I say, this works fine when one of the ports is idle, but when
>> packets
>> are sent to both ports, all the packets are still received fine, but
>> a few
>> of the sends fail. The DNA send function that pfring_send() calls
>> doesn't
>> return any error code, so I can't tell you why it's failing.
>> 
>> These tests are all being done at high packet rates, so I'm presuming
>> that the failure has something to do with simultaneous calls into
>> pfring_recv() and pfring_send() using the same ring but from two
>> different threads. Since the DNA library is secret, I can't look at
>> it to see why that might be a problem.
> 
> Don
> to better understand, you have:
> 1. different threads simultaneously calling pfring_send() on the same
> ring
> or 2. two different threads, one calling pfring_recv() and one calling
> pfring_send(), on the same ring?
> 
> Alfredo
> 
>> 
>> I was assuming I was just missing a technical detail, so I didn't
>> experiment very much with this, but I can if you think it will help.
>> 
>> -don
>> 
>> -----Original Message-----
>> From: [email protected]
>> [mailto:[email protected]] On Behalf Of Alfredo
>> Cardigliano
>> Sent: Saturday, October 15, 2011 12:41 AM
>> To: [email protected]
>> Subject: Re: [Ntop-misc] Using DNA devices, pfring_send() sometimes
>> fails to a ring that's also receiving
>> 
>> Don
>> can you better explain your app configuration, maybe with an example,
>> in order to better understand (or try to reproduce) the issue?
>> Your previous description is a little confusing and I would like to
>> know exactly on which interface/thread you are sending/receiving
>> packets.
>> 
>> Regards
>> Alfredo
>> 
>> On Oct 15, 2011, at 12:23 AM, Don Provan wrote:
>> 
>>> I'm using a silicom 82599 based NIC (close enough?) running with 8
>>> queues per port.
>>> -don
>>> 
>>> -----Original Message-----
>>> From: [email protected]
>>> [mailto:[email protected]] On Behalf Of Luca
>>> Deri
>>> Sent: Friday, October 14, 2011 1:48 PM
>>> To: [email protected]
>>> Cc: <[email protected]>
>>> Subject: Re: [Ntop-misc] Using DNA devices, pfring_send() sometimes
>>> fails to a ring that's also receiving
>>> 
>>> Don
>>> A few questions:
>>> - what NIC do you own?
>>> - are you using the driver in single or multi queue mode?
>>> 
>>> Regards Luca
>>> 
>>> Sent from my iPad
>>> 
>>> On 14/ott/2011, at 21:02, "Don Provan" <[email protected]> wrote:
>>> 
>>>> I'm using ixgbe ports via DNA with pf_ring 5.1.0. My code uses
>>>> pfring_recv() to receive packets from one ring, then uses
>>> pfring_send()
>>>> to transmit them via another ring. The code works fine *unless* the
>>>> other ring is *also* receiving packets at the same time on another
>>>> thread. In that case, pfring_send() fails a few times out of a
>>> hundred.
>>>> Is there some ring locking requirement that I'm missing?
>>>> -don
>>>> _______________________________________________
>>>> Ntop-misc mailing list
>>>> [email protected]
>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>> 
> 

