> -----Original Message-----
> From: EXT Zoltan Kiss [mailto:zoltan.k...@linaro.org]
> Sent: Thursday, December 17, 2015 8:10 PM
> To: Savolainen, Petri (Nokia - FI/Espoo); lng-odp@lists.linaro.org
> Subject: Re: [lng-odp] [API-NEXT PATCH v5 2/7] api: pktio: added multiple
> pktio input queues
> 
> 
> 
> On 17/12/15 08:21, Savolainen, Petri (Nokia - FI/Espoo) wrote:
> >
> >
> >> -----Original Message-----
> >> From: EXT Zoltan Kiss [mailto:zoltan.k...@linaro.org]
> >> Sent: Wednesday, December 16, 2015 4:58 PM
> >> To: Savolainen, Petri (Nokia - FI/Espoo); lng-odp@lists.linaro.org
> >> Subject: Re: [lng-odp] [API-NEXT PATCH v5 2/7] api: pktio: added
> >> multiple
> >> pktio input queues
> >>
> >>
> >>
> >> On 16/12/15 12:56, Savolainen, Petri (Nokia - FI/Espoo) wrote:
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: EXT Zoltan Kiss [mailto:zoltan.k...@linaro.org]
> >>>> Sent: Tuesday, December 15, 2015 9:07 PM
> >>>> To: Savolainen, Petri (Nokia - FI/Espoo); lng-odp@lists.linaro.org
> >>>> Subject: Re: [lng-odp] [API-NEXT PATCH v5 2/7] api: pktio: added
> >>>> multiple
> >>>> pktio input queues
> >>>>
> >>>>
> >>>>
> >>>> On 15/12/15 08:24, Savolainen, Petri (Nokia - FI/Espoo) wrote:
> >>>>> Actually, this patch set was just merged, but we consider these for
> >>>>> the next set following shortly (today/tomorrow). See replies below.
> >>>>>
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: EXT Zoltan Kiss [mailto:zoltan.k...@linaro.org]
> >>>>>> Sent: Monday, December 14, 2015 8:26 PM
> >>>>>> To: Savolainen, Petri (Nokia - FI/Espoo); lng-odp@lists.linaro.org
> >>>>>> Subject: Re: [lng-odp] [API-NEXT PATCH v5 2/7] api: pktio: added
> >>>>>> multiple
> >>>>>> pktio input queues
> >>>>>>
> >>>>>> Hi,
> >>>>>>
> >>>>>> I wanted to raise some questions about this, unfortunately it fell
> >>>>>> off
> >>>>>> the radar for a while.
> >>>>>>
> >>>>>> On 26/11/15 08:35, Petri Savolainen wrote:
> >>>>>>> Added input queue configuration parameters and functions
> >>>>>>> to setup multiple input queue and hashing. Added also
> >>>>>>> functions to query the number of queues and queue handles.
> >>>>>>> Direct receive does use new odp_pktin_queue_t handle type.
> >>>>>>>
> >>>>>>> Signed-off-by: Petri Savolainen <petri.savolai...@nokia.com>
> >>>>>>> ---
> >>>>>>>      include/odp/api/packet_io.h                        | 136 +++++++++++++++++++++
> >>>>>>>      .../include/odp/plat/packet_io_types.h             |   2 +
> >>>>>>>      2 files changed, 138 insertions(+)
> >>>>>>>
> >>>>>>> diff --git a/include/odp/api/packet_io.h b/include/odp/api/packet_io.h
> >>>>>>> index 264fa75..26c9be5 100644
> >>>>>>> --- a/include/odp/api/packet_io.h
> >>>>>>> +++ b/include/odp/api/packet_io.h
> >>>>>>> @@ -19,6 +19,7 @@ extern "C" {
> >>>>>>>      #endif
> >>>>>>>
> >>>>>>>      #include <odp/api/packet_io_stats.h>
> >>>>>>> +#include <odp/api/queue.h>
> >>>>>>>
> >>>>>>>      /** @defgroup odp_packet_io ODP PACKET IO
> >>>>>>>       *  Operations on a packet Input/Output interface.
> >>>>>>> @@ -42,6 +43,11 @@ extern "C" {
> >>>>>>>       */
> >>>>>>>
> >>>>>>>      /**
> >>>>>>> + * @typedef odp_pktin_queue_t
> >>>>>>> + * Direct packet input queue handle
> >>>>>>
> >>>>>> What's the difference between an ODP_PKTIN_MODE_RECV and an
> >>>>>> ODP_PKTIN_MODE_POLL queue? Apart from using different functions for
> >>>>>> receive?
> >>>>>
> >>>>> _RECV means that the application uses only odp_pktio_recv() /
> >>>>> _recv_queue() to directly fetch packets from the interface. _POLL
> >>>>> means that the application uses only poll type queues (odp_queue_t)
> >>>>> to receive packets == odp_queue_deq() / _deq_multi(). The third type
> >>>>> is for scheduled queues (use only odp_schedule() / schedule_multi()).
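
(Interjecting a concrete sketch here, since this comes up again below: one
receive burst per input mode. Variable names are invented and handle setup
is omitted; only odp_pktio_recv_queue() is from this patch set, the rest
are existing ODP calls.)

static void rx_example(odp_pktin_queue_t pktin, odp_queue_t plain_q)
{
        odp_packet_t pkts[32];
        odp_event_t  evs[32];
        odp_queue_t  from;
        odp_event_t  ev;

        /* ODP_PKTIN_MODE_RECV: application polls the NIC HW queue directly */
        int n_pkt = odp_pktio_recv_queue(pktin, pkts, 32);

        /* ODP_PKTIN_MODE_POLL: application dequeues from a plain odp_queue_t */
        int n_ev = odp_queue_deq_multi(plain_q, evs, 32);

        /* ODP_PKTIN_MODE_SCHED: the scheduler picks the queue */
        ev = odp_schedule(&from, ODP_SCHED_NO_WAIT);

        (void)n_pkt; (void)n_ev; (void)ev;
}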
> >>>>
> >>>> I don't think that's a real difference between the _RECV and _POLL
> >>>> input modes. You just call a different function to do exactly the same
> >>>> thing. E.g. ODP-OVS calls odp_pktio_recv() at the moment to _poll_ on
> >>>> the (only) queue of the NIC, and it opens it as ODP_PKTIN_MODE_RECV.
> >>>> With this patch applied, it can call odp_pktio_recv_queue() to do the
> >>>> same (but with several queues), or open the pktio as
> >>>> ODP_PKTIN_MODE_POLL and then call odp_queue_deq[_multi]() on an
> >>>> ODP_PKTIN_MODE_POLL queue. I don't see any difference, but it's very
> >>>> confusing.
> >>>
> >>> The difference is that the application (or other blocks in the system)
> >>> can enqueue an event to an odp_queue_t. Direct packet input queues
> >>> (odp_pktin_queue_t) do not accept packets (or events) from the
> >>> application. The application can only receive from those.
> >>
> >> I think this fact MUST be emphasized very strongly where these types
> >> are defined. Currently you can only figure that out when you read the
> >> description of odp_queue_enq(), in a completely different part of the
> >> documentation.
> >> But more importantly: what happens when you enqueue an event to an
> >> ODP_PKTIN_MODE_POLL (or _SCHED) queue? Where does it go?
> >
> >
> > Yes, documentation can be improved. That said, the API definition
> > already documents (forces) it by defining only recv for
> > odp_pktin_queue_t and only send for odp_pktout_queue_t. There are no API
> > calls to enq/send to odp_pktin_queue_t, or deq/recv from
> > odp_pktout_queue_t.
> >
> > int odp_pktio_recv_queue(odp_pktin_queue_t queue, odp_packet_t packets[], int num);
> > int odp_pktio_send_queue(odp_pktout_queue_t queue, odp_packet_t packets[], int num);
> >
> > Whereas odp_queue_t has both enq and deq operations.
> >
> > After we remove ODP_QUEUE_TYPE_PKTIN, there are only POLL/SCHED type
> > odp_queue_t queues in packet input. Those odp_queue_t queues are no
> > different from user created odp_queue_t queues.
> 
> 
> > If the user enqueues an event (packet or any other event) there, he'll
> > receive it a moment later from that queue.
> I guess you mean FIFO operation. "Receive a moment later" suggests to me
> that it comes off first at the next dequeue, which is LIFO.
> 
> In that case, I think it's not possible to implement these POLL and
> SCHED queues on top of DPDK, or generally on any NIC poll mode driver
> with a ring buffer. Those are written with the assumption that only the
> hardware enqueues onto the input queue and only software dequeues.
> Changing that is probably quite an extensive modification, likely
> implying locking between SW and HW, and therefore a performance drop.
> And for ODP-DPDK it would have to be done for all the drivers there.
> 


odp_pktin_queue_t == NIC input HW queue: NIC enqueues, SW dequeues

odp_queue_t == higher level SW (or Queue manager) queue: SW/HW enqueues, SW dequeues



For example, pseudo code for odp-dpdk:

int odp_pktio_recv_queue(odp_pktin_queue_t queue, odp_packet_t packets[], int num)
{
        /* odp_pktin_queue_t carries the DPDK port and queue ids. In odp-dpdk
         * an odp_packet_t wraps a struct rte_mbuf, hence the cast. */
        return rte_eth_rx_burst(queue.port_id, queue.queue_id,
                                (struct rte_mbuf **)packets, num);
}


int odp_queue_deq_multi(odp_queue_t queue, odp_event_t events[], int num)
{
        /* The plain queue is backed by an rte_ring here; SW (the application
         * included) can also enqueue to it. */
        return rte_ring_dequeue_burst(queue.ring, (void **)events, num);
}


The implementation either sets up the HW to enqueue packets directly into the
higher layer queues (odp_queue_t), or it enqueues them there in SW. This is
also how the old API / implementation works. The pktio input mode selection
guarantees that the application and the implementation never poll the lower
layer queues simultaneously.
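
To illustrate the SW path, a sketch in the same pseudo code spirit (names
invented; public API calls stand in for the implementation-internal ones):
the implementation drains the NIC queue internally and feeds the resulting
events into the higher layer odp_queue_t, so the application only ever
touches the upper layer queue.

static void pktin_poll(odp_pktin_queue_t pktin, odp_queue_t queue)
{
        odp_packet_t pkts[32];
        int i, num;

        /* Drain the lower layer (HW) queue ... */
        num = odp_pktio_recv_queue(pktin, pkts, 32);

        /* ... and enqueue the packets as events to the upper layer queue */
        for (i = 0; i < num; i++)
                odp_queue_enq(queue, odp_packet_to_event(pkts[i]));
}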


-Petri





> 
> >
> >
> >
> >>
> >>>> Also, based on the current API definition you should be able to call
> >>>> odp_queue_enq() on a queue associated with an interface created with
> >>>> ODP_PKTIN_MODE_POLL or ODP_PKTIN_MODE_SCHED. odp_pktio_in_queues()
> >>>> will return an array of odp_queue_t type queues with
> >>>> ODP_QUEUE_TYPE_POLL or ODP_QUEUE_TYPE_SCHED. That operation doesn't
> >>>> make any sense.
> >>>
> >>>
> >>> odp_pktin_queue_t queues are the lower level, where you cannot enqueue
> >>> anything, and odp_queue_t is the upper level, where you can. Upper
> >>> level queues are useful when the application needs to synchronize
> >>> other (e.g. control or timeout) events with incoming packets.
> >>
> >> Can you give me a real world example? Also, even if enqueuing other
> >> events makes sense, enqueuing packets does not. Or does it?
> >>
> >
> >
> > The application needs to synchronize control events (e.g. flow context
> > updates and other messages from the control plane) with incoming
> > packets:
> > 1) Classify a flow into an atomic queue
> > 2) Send/forward any flow control events (context/table update messages)
> > from the control plane to the same queue
> > 3) Process events from the queue atomically (packets and control
> > messages), without a need to lock the flow.
> >
> > Why not? E.g. maybe your control messages are packets (received from
> > another interface) and you'd forward those to the per-flow queue (and
> > avoid locking again).
> >
> >
> > -Petri
> >
> >
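
To make the atomic queue example above concrete, a sketch of the worker
side (handle_packet() and handle_control() are application placeholders):
the queue is atomic, so the scheduler serializes its events and the flow
context needs no lock.

static void worker_loop(void)
{
        odp_queue_t from;
        odp_event_t ev;

        for (;;) {
                /* Atomic scheduling: events of one queue are processed
                 * one at a time across all workers */
                ev = odp_schedule(&from, ODP_SCHED_WAIT);

                if (odp_event_type(ev) == ODP_EVENT_PACKET)
                        handle_packet(odp_packet_from_event(ev));
                else
                        handle_control(ev);
        }
}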