Re: [lng-odp] [RFC] Add ipc.h

2015-05-20 Thread Benoît Ganne

Hi Ola,

thanks for your feedback. I think part of the problem is that today we 
have 2 things dubbed IPC trying to solve different problems (I think): 
yours vs Maxim and me. Maybe we should clarify that.
From my understanding, what Maxim is proposing is closer to what I was 
trying to achieve. The main differences of my proposal vs Maxim's proposal 
were:
 - use a more POSIX namespace approach for naming resources (e.g. 
/ipc/... vs ipc_...)

 - extend pktio to allow unidirectional communication to save HW resources
I agree your need is different and needs a different API. Maybe we could 
call it message bus or odp bus or something like that to disambiguate?


Other comments inline.


  - use it for control/data planes signaling but also to exchange packets to
be allowed to build packets processing pipelines

Control/data plane signaling must be reliable (per the definition in
my RFC). But do you have the same requirements for packet transfer
between pipeline stages (on different cores)? Or it just happens to be
reliable on your hardware? Is reliability (guaranteed delivery) an
inherent requirement for pipelined packet processing? (this would be
contrary to my experience).


Right, it won't be reliable: especially if the rx side is too slow to consume 
messages, tx should get EAGAIN and needs to retry later. But reliability 
can be built on top of that (buffering on the tx side, spinning...).
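
As a concrete illustration, here is a minimal sketch of layering
reliability on top of such an unreliable send. my_ipc_send() is a
hypothetical transport primitive (an assumption, not any proposed ODP
API), assumed to return 0 on success and -EAGAIN when the receiver
cannot accept more data:

#include <errno.h>
#include <stddef.h>

/* Hypothetical unreliable transport primitive (assumption) */
extern int my_ipc_send(const void *msg, size_t len);

/* Spin until the receiver accepts the message or a hard error occurs */
static int reliable_send(const void *msg, size_t len)
{
	int ret;

	do {
		ret = my_ipc_send(msg, len);
	} while (ret == -EAGAIN);

	return ret; /* 0 on success; other negative values are fatal */
}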



Also couldn't you use ODP queues as the abstraction for transferring
packets between cores/pipeline stages? Per your pktio proposal below,
if you connect input and output queues to your ipc pktio instances,
applications can just enqueue and dequeue packets from these queues
and that will cause packets to be transferred between pipeline stages
(on different cores/AS's).


Sure. But as far as I understand, queues are not associated with pools by 
themselves. We still need a way to move data from one AS to another, and 
that means from one pool to another. I need a way to identify the rx pool.



This use case does not want or need the ability to specify the
destination address for each individual send operation and this would
likely just add overhead to a performance critical operation.


It needs to identify the destination pool, as we are moving from one AS 
to another, and pools are defined per-AS.



  - IPC ops should be mapped to our NoC HW as much as possible, especially
for send/recv operations

Is there anything in the proposed API that would limit your
implementation freedom?


Not in the case of an application messaging bus as you propose. I just 
wanted to highlight the fact that we need a lower-level IPC mechanism 
where I can do unidirectional communications (to save HW resources) 
without any centralized logic.



 From your proposal, it seems to me that what you proposed is more like an
application messaging bus such as d-bus (especially the deferred lookup()
and monitor()),

Perhaps d-bus can be used as the implementation, that could save me
some work.


I didn't mean to use D-Bus nor CORBA :)
It was just to better understand what you were trying to achieve. That 
said, I think we can learn a few things from D-Bus in this case. For 
example, you did mention your need for delivery reliability, but what 
about message ordering and broadcast/multicast?



But in that case you need to be able to identify the endpoints. What I
proposed in a precedent thread was to use a hierarchical device naming:
  - /dev/device for real devices
  - /ipc/identifier for IPC
What I had in mind so far for our platform was to use {AS identifier + pool
name in the AS} as IPC identifier, which would avoid the need of a
centralized rendez-vous point.

The semantics of the IPC endpoint names are not defined in my
proposal. I think it is a mistake to impose meaning on the names.


Agreed. I was just proposing a naming scheme for pktio in my case to 
ease readability.



Also I don't envision any (active) rendez-vous point, it's just a
global shared table in my implementation for linux-generic. This is
needed since names are user-defined and arbitrary.


Hmm a global shared table looks like a rendez-vous point to me. You 
cannot implement that this way in a 0-sharing architecture. But anyway, 
I completely see the value of a messaging bus with discovery service, 
the implementation will use whatever suits it for this.



Why are these pktio instances of type IPC? They could just as well be
network interfaces on which you send and/or receive packets with the
normal semantics. No need for the IPC API I proposed. What stops you
from implementing this today?


A specific type is still useful in my opinion:
 - it eases readability
 - real pktio may have specific characteristics missing for IPC pktio 
(e.g. in our case, we have HW acceleration for packet 
classification/extraction for real pktio, but not for IPC pktio)



So we need to amend the API spec that it is OK to specify
ODP_POOL_INVALID for the pool parameter to the odp_pktio_open() call
and 

Re: [lng-odp] odp timer unit test case question

2015-05-20 Thread Ola Liljedahl
On 20 May 2015 at 06:56, Jerin Jacob jerin.ja...@caviumnetworks.com wrote:
 On Wed, May 20, 2015 at 12:25:12AM +0200, Ola Liljedahl wrote:
 On 19 May 2015 at 15:34, Jacob,  Jerin jerin.ja...@caviumnetworks.com 
 wrote:
  Ola,
 
  Is there any specific reason for following check in timer validation test ?
 
  diff --git a/test/validation/odp_timer.c b/test/validation/odp_timer.c
  index 554b353..724026e 100644
  --- a/test/validation/odp_timer.c
  +++ b/test/validation/odp_timer.c
  @@ -260,7 +260,7 @@ static void handle_tmo(odp_event_t ev, bool stale, uint64_t prev_tick)
 
   if (ttp != NULL) {
   /* Internal error */
  -CU_ASSERT_FATAL(ttp->ev == ODP_EVENT_INVALID);
  +CU_ASSERT_FATAL(ttp->ev != ODP_EVENT_INVALID);
   ttp->ev = ev;
   }
    }
 
  AFAIU, it should be CU_ASSERT_FATAL(ttp->ev != ODP_EVENT_INVALID) as
  tt[i].ev = odp_timeout_to_event(odp_timeout_alloc(tbp)) is specified while 
  preparing all timers.
 Yes the timers are still inactive and the timeout event is stored in
 the 'ev' member.

 handle_timeout() is called for received timeouts (timer has expired).
 In that case, the corresponding 'ev' member should not contain any
 timeout event.

 
  Am I missing something in the timer specification ?
 Or the timer specification is missing something?

 odp_timer_set_abs(tt[i].tim, tck, &tt[i].ev); (line 309) is supposed
 to grab the timeout event (on success) and clear the variable (write
 ODP_TIMEOUT_INVALID), that's why the timeout is passed by reference
 (&tt[i].ev).

 Possibly this is not specified clearly enough in timer.h:
  * @param[in,out] tmo_ev  Reference to an event variable that points to
  * timeout event or NULL to reuse the existing timeout event. Any existing
  * timeout event that is replaced by a successful set operation will be
  * returned here.

 The new timeout event is read from *tmo_ev. The old timeout event (if
 timer was active) or ODP_TIMEOUT_INVALID (if timer was inactive) is
 stored in *tmo_ev. I hope this is at least clear in the reference
 implementation.

 We are on the same page, except for the last notes.
 IMO, linux-generic timer implementation details leaked into the creation of the 
 test case.
Well I don't agree and I hope I can convince you.


 AFAIU, *tmo_ev should have the event that was used for _arming_ the timer so
 that the application can do some lookup after receiving the event through a queue or 
 something similar.
 What is the point of providing ODP_TIMEOUT_INVALID back to the application? 
 What use
 is it for the application?
It is possible to set an already active timer (which then is already
associated with a timeout). If the user specifies a new timeout, the
old timeout must be returned to the user (because all alloc and free
of timeouts is the responsibility of the user). So any old timeout
(for an already active timer) is returned in *tmo_ev. But it is
possible that the timer has already expired (and the timeout been
delivered) or wasn't active to start with. We want the application to
be able to distinguish between these two scenarios and we achieve this by
updating *tmo_ev accordingly. When the timer_set call returns, if
*tmo_ev != ODP_EVENT_INVALID, a timeout has been returned and the
application needs to do something with it. If *tmo_ev ==
ODP_EVENT_INVALID, no timeout was returned.

I hope you can agree with this line of thinking.
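
To make that contract concrete, here is a minimal sketch in terms of
the ODP timer API of this period (exact calls and return codes may
differ; treat it as an illustration, not the reference implementation):

#include <odp.h>

/* Arm 'tim' at absolute tick 'tick' with a fresh timeout from
 * 'tmo_pool', reclaiming any old timeout the timer still owned. */
static void arm_timer(odp_timer_t tim, uint64_t tick, odp_pool_t tmo_pool)
{
	odp_event_t ev = odp_timeout_to_event(odp_timeout_alloc(tmo_pool));

	if (odp_timer_set_abs(tim, tick, &ev) != ODP_TIMER_SUCCESS)
		return; /* set failed; 'ev' still holds our new timeout */

	if (ev != ODP_EVENT_INVALID)
		/* Timer was already active: its previous timeout came
		 * back here and is ours to free or re-use. */
		odp_timeout_free(odp_timeout_from_event(ev));
	/* else: the timer was inactive (or its old timeout was already
	 * delivered), so nothing was returned. */
}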


 IMO, there are two ways we can come to a conclusion on this issue.
 1) Remove CU_ASSERT_FATAL(ttp->ev == ODP_EVENT_INVALID) in the handle_tmo 
 function in the unit test case
 2) Or, if for some reason the application wants ODP_TIMEOUT_INVALID (for the inactive 
 case) in *tmo_ev,
 let's update the specification so that it will be clear for ODP implementers
Yes the spec needs to be clearer.


 any thought from application perspective ?



 
  Thanks,
  Jerin.
 


Re: [lng-odp] [RFC] Add ipc.h

2015-05-20 Thread Ola Liljedahl
On 20 May 2015 at 10:25, Maxim Uvarov maxim.uva...@linaro.org wrote:
 Hello Ola,

 please find some replies below.

 On 19 May 2015 at 01:03, Ola Liljedahl ola.liljed...@linaro.org wrote:

 As promised, here is my first attempt at a standalone API for IPC - inter
 process communication in a shared nothing architecture (message passing
 between processes which do not share memory).

 Currently all definitions are in the file ipc.h but it is possible to
 break out some message/event related definitions (everything from
 odp_ipc_sender) in a separate file message.h. This would mimic the
 packet_io.h/packet.h separation.

 The semantics of message passing is that sending a message to an endpoint
 will always look like it succeeds. The appearance of endpoints is
 explicitly
 notified through user-defined messages specified in the odp_ipc_resolve()
 call. Similarly, the disappearance (e.g. death or otherwise lost
 connection)
 is also explicitly notified through user-defined messages specified in the
 odp_ipc_monitor() call. The send call does not fail because the addressed
 endpoint has disappeared.

 Messages (from endpoint A to endpoint B) are delivered in order. If
 message
 N sent to an endpoint is delivered, then all messages <N have also been
 delivered. Message delivery does not guarantee actual processing by the
 recipient. End-to-end acknowledgements (using messages) should be used if
 this guarantee is important to the user.

 IPC endpoints can be seen as interfaces (taps) to an internal reliable
 multidrop network where each endpoint has a unique address which is only
 valid for the lifetime of the endpoint. I.e. if an endpoint is destroyed
 and then recreated (with the same name), the new endpoint will have a
 new address (eventually endpoints addresses will have to be recycled but
 not for a very long time). Endpoint names do not necessarily have to be
 unique.

 Signed-off-by: Ola Liljedahl ola.liljed...@linaro.org
 ---
 (This document/code contribution attached is provided under the terms of
 agreement LES-LTM-21309)

  include/odp/api/ipc.h | 261 ++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 261 insertions(+)
  create mode 100644 include/odp/api/ipc.h

 diff --git a/include/odp/api/ipc.h b/include/odp/api/ipc.h
 new file mode 100644
 index 000..3395a34
 --- /dev/null
 +++ b/include/odp/api/ipc.h
 @@ -0,0 +1,261 @@
 +/* Copyright (c) 2015, Linaro Limited
 + * All rights reserved.
 + *
 + * SPDX-License-Identifier: BSD-3-Clause
 + */
 +
 +
 +/**
 + * @file
 + *
 + * ODP IPC API
 + */
 +
 +#ifndef ODP_API_IPC_H_
 +#define ODP_API_IPC_H_
 +
 +#ifdef __cplusplus
 +extern "C" {
 +#endif
 +
 +/** @defgroup odp_ipc ODP IPC
 + *  @{
 + */
 +
 +/**
 + * @typedef odp_ipc_t
 + * ODP IPC handle
 + */
 +
 +/**
 + * @typedef odp_ipc_msg_t
 + * ODP IPC message handle
 + */
 +
 +
 +/**
 + * @def ODP_IPC_ADDR_SIZE
 + * Size of the address of an IPC endpoint
 + */
 +
 +/**
 + * Create IPC endpoint
 + *
 + * @param name Name of local IPC endpoint
 + * @param pool Pool for incoming messages
 + *
 + * @return IPC handle on success
 + * @retval ODP_IPC_INVALID on failure and errno set
 + */
 +odp_ipc_t odp_ipc_create(const char *name, odp_pool_t pool);
 +


 Will it be a separate pool or do you want to attach an existing pool to your ipc?
It should be a pool with events of type messages (ODP_EVENT_MESSAGE)
so in practice it will likely be a separate pool.
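
To illustrate the intended flow, here is a short sketch that uses only
odp_ipc_create() (quoted above) and odp_ipc_resolve() (quoted further
below in the header). The excerpt does not show a message allocation
call, so alloc_msg() below is a hypothetical placeholder:

#include <odp.h>
#include <odp/api/ipc.h> /* the proposed header */

/* Hypothetical message allocator (not part of the quoted excerpt) */
extern odp_ipc_msg_t alloc_msg(void);

static odp_ipc_t setup_endpoint(odp_pool_t msg_pool)
{
	/* msg_pool holds ODP_EVENT_MESSAGE events, per the reply above */
	odp_ipc_t ipc = odp_ipc_create("my_endpoint", msg_pool);

	if (ipc != ODP_IPC_INVALID)
		/* When "peer" appears, this message is delivered back
		 * to us with the peer endpoint as the sender. */
		odp_ipc_resolve(ipc, "peer", alloc_msg());

	return ipc;
}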




 +/**
 + * Destroy IPC endpoint
 + *
 + * @param ipc IPC handle
 + *
 + * @retval 0 on success
 + * @retval <0 on failure
 + */
 +int odp_ipc_destroy(odp_ipc_t ipc);
 +
 +/**
 + * Set the default input queue for an IPC endpoint
 + *
 + * @param ipc   IPC handle
 + * @param queue Queue handle
 + *
 + * @retval  0 on success
 + * @retval <0 on failure
 + */
 +int odp_ipc_inq_setdef(odp_ipc_t ipc, odp_queue_t queue);
 +
 +/**
 + * Remove the default input queue
 + *
 + * Remove (disassociate) the default input queue from an IPC endpoint.
 + * The queue itself is not touched.
 + *
 + * @param ipc  IPC handle
 + *
 + * @retval 0 on success
 + * @retval <0 on failure
 + */
 +int odp_ipc_inq_remdef(odp_ipc_t ipc);
 +
 +/**
 + * Resolve endpoint by name
 + *
 + * Look up an existing or future endpoint by name.
 + * When the endpoint exists, return the specified message with the endpoint
 + * as the sender.
 + *
 + * @param ipc IPC handle
 + * @param name Name to resolve
 + * @param msg Message to return
 + */
 +void odp_ipc_resolve(odp_ipc_t ipc,
 +const char *name,
 +odp_ipc_msg_t msg);
 +
 +/**
 + * Monitor endpoint
 + *
 + * Monitor an existing (potentially already dead) endpoint.
 + * When the endpoint is dead, return the specified message with the endpoint
 + * as the sender.
 + *
 + * Unrecognized or invalid endpoint addresses are treated as dead endpoints.
 + *
 + * @param ipc IPC handle
 + * @param addr Address of monitored endpoint
 + * @param msg Message to return
 + */
 +void 

Re: [lng-odp] [RFC] Add ipc.h

2015-05-20 Thread Maxim Uvarov
On 20 May 2015 at 13:21, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 20 May 2015 at 10:25, Maxim Uvarov maxim.uva...@linaro.org wrote:
  Hello Ola,
 
  please find some replies below.
 
  On 19 May 2015 at 01:03, Ola Liljedahl ola.liljed...@linaro.org wrote:
 
  As promised, here is my first attempt at a standalone API for IPC -
 inter
  process communication in a shared nothing architecture (message passing
  between processes which do not share memory).
 
  Currently all definitions are in the file ipc.h but it is possible to
  break out some message/event related definitions (everything from
  odp_ipc_sender) in a separate file message.h. This would mimic the
  packet_io.h/packet.h separation.
 
  The semantics of message passing is that sending a message to an
 endpoint
  will always look like it succeeds. The appearance of endpoints is
  explicitly
  notified through user-defined messages specified in the
 odp_ipc_resolve()
  call. Similarly, the disappearance (e.g. death or otherwise lost
  connection)
  is also explicitly notified through user-defined messages specified in
 the
  odp_ipc_monitor() call. The send call does not fail because the
 addressed
  endpoint has disappeared.
 
  Messages (from endpoint A to endpoint B) are delivered in order. If
  message
  N sent to an endpoint is delivered, then all messages <N have also been
  delivered. Message delivery does not guarantee actual processing by the
  recipient. End-to-end acknowledgements (using messages) should be used
 if
  this guarantee is important to the user.
 
  IPC endpoints can be seen as interfaces (taps) to an internal reliable
  multidrop network where each endpoint has a unique address which is only
  valid for the lifetime of the endpoint. I.e. if an endpoint is destroyed
  and then recreated (with the same name), the new endpoint will have a
  new address (eventually endpoints addresses will have to be recycled but
  not for a very long time). Endpoint names do not necessarily have to be
  unique.
 
  Signed-off-by: Ola Liljedahl ola.liljed...@linaro.org
  ---
  (This document/code contribution attached is provided under the terms of
  agreement LES-LTM-21309)
 
   include/odp/api/ipc.h | 261 ++++++++++++++++++++++++++++++++++++++++++
   1 file changed, 261 insertions(+)
   create mode 100644 include/odp/api/ipc.h
 
  diff --git a/include/odp/api/ipc.h b/include/odp/api/ipc.h
  new file mode 100644
  index 000..3395a34
  --- /dev/null
  +++ b/include/odp/api/ipc.h
  @@ -0,0 +1,261 @@
  +/* Copyright (c) 2015, Linaro Limited
  + * All rights reserved.
  + *
  + * SPDX-License-Identifier: BSD-3-Clause
  + */
  +
  +
  +/**
  + * @file
  + *
  + * ODP IPC API
  + */
  +
  +#ifndef ODP_API_IPC_H_
  +#define ODP_API_IPC_H_
  +
  +#ifdef __cplusplus
  +extern "C" {
  +#endif
  +
  +/** @defgroup odp_ipc ODP IPC
  + *  @{
  + */
  +
  +/**
  + * @typedef odp_ipc_t
  + * ODP IPC handle
  + */
  +
  +/**
  + * @typedef odp_ipc_msg_t
  + * ODP IPC message handle
  + */
  +
  +
  +/**
  + * @def ODP_IPC_ADDR_SIZE
  + * Size of the address of an IPC endpoint
  + */
  +
  +/**
  + * Create IPC endpoint
  + *
  + * @param name Name of local IPC endpoint
  + * @param pool Pool for incoming messages
  + *
  + * @return IPC handle on success
  + * @retval ODP_IPC_INVALID on failure and errno set
  + */
  +odp_ipc_t odp_ipc_create(const char *name, odp_pool_t pool);
  +
 
 
  Will it be a separate pool or do you want to attach an existing pool to your ipc?
 It should be a pool with events of type messages (ODP_EVENT_MESSAGE)
 so in practice it will likely be a separate pool.

 
 
 
  +/**
  + * Destroy IPC endpoint
  + *
  + * @param ipc IPC handle
  + *
  + * @retval 0 on success
  + * @retval <0 on failure
  + */
  +int odp_ipc_destroy(odp_ipc_t ipc);
  +
  +/**
  + * Set the default input queue for an IPC endpoint
  + *
  + * @param ipc   IPC handle
  + * @param queue Queue handle
  + *
  + * @retval  0 on success
  + * @retval <0 on failure
  + */
  +int odp_ipc_inq_setdef(odp_ipc_t ipc, odp_queue_t queue);
  +
  +/**
  + * Remove the default input queue
  + *
  + * Remove (disassociate) the default input queue from an IPC endpoint.
  + * The queue itself is not touched.
  + *
  + * @param ipc  IPC handle
  + *
  + * @retval 0 on success
  + * @retval <0 on failure
  + */
  +int odp_ipc_inq_remdef(odp_ipc_t ipc);
  +
  +/**
  + * Resolve endpoint by name
  + *
  + * Look up an existing or future endpoint by name.
  + * When the endpoint exists, return the specified message with the endpoint
  + * as the sender.
  + *
  + * @param ipc IPC handle
  + * @param name Name to resolve
  + * @param msg Message to return
  + */
  +void odp_ipc_resolve(odp_ipc_t ipc,
  +const char *name,
  +odp_ipc_msg_t msg);
  +
  +/**
  + * Monitor endpoint
  + *
  + * Monitor an existing (potentially already dead) endpoint.
  + * When the endpoint is dead, return the specified message with the endpoint
  

Re: [lng-odp] [RFC] Add ipc.h

2015-05-20 Thread Ola Liljedahl
On 20 May 2015 at 12:47, Maxim Uvarov maxim.uva...@linaro.org wrote:


 On 20 May 2015 at 13:21, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 20 May 2015 at 10:25, Maxim Uvarov maxim.uva...@linaro.org wrote:
  Hello Ola,
 
  please find some replies below.
 
  On 19 May 2015 at 01:03, Ola Liljedahl ola.liljed...@linaro.org wrote:
 
  As promised, here is my first attempt at a standalone API for IPC -
  inter
  process communication in a shared nothing architecture (message passing
  between processes which do not share memory).
 
  Currently all definitions are in the file ipc.h but it is possible to
  break out some message/event related definitions (everything from
  odp_ipc_sender) in a separate file message.h. This would mimic the
  packet_io.h/packet.h separation.
 
  The semantics of message passing is that sending a message to an
  endpoint
  will always look like it succeeds. The appearance of endpoints is
  explicitly
  notified through user-defined messages specified in the
  odp_ipc_resolve()
  call. Similarly, the disappearance (e.g. death or otherwise lost
  connection)
  is also explicitly notified through user-defined messages specified in
  the
  odp_ipc_monitor() call. The send call does not fail because the
  addressed
  endpoint has disappeared.
 
  Messages (from endpoint A to endpoint B) are delivered in order. If
  message
  N sent to an endpoint is delivered, then all messages <N have also been
  delivered. Message delivery does not guarantee actual processing by the
  recipient. End-to-end acknowledgements (using messages) should be used
  if
  this guarantee is important to the user.
 
  IPC endpoints can be seen as interfaces (taps) to an internal reliable
  multidrop network where each endpoint has a unique address which is
  only
  valid for the lifetime of the endpoint. I.e. if an endpoint is
  destroyed
  and then recreated (with the same name), the new endpoint will have a
  new address (eventually endpoints addresses will have to be recycled
  but
  not for a very long time). Endpoint names do not necessarily have to
  be
  unique.
 
  Signed-off-by: Ola Liljedahl ola.liljed...@linaro.org
  ---
  (This document/code contribution attached is provided under the terms
  of
  agreement LES-LTM-21309)
 
   include/odp/api/ipc.h | 261 ++++++++++++++++++++++++++++++++++++++++++
   1 file changed, 261 insertions(+)
   create mode 100644 include/odp/api/ipc.h
 
  diff --git a/include/odp/api/ipc.h b/include/odp/api/ipc.h
  new file mode 100644
  index 000..3395a34
  --- /dev/null
  +++ b/include/odp/api/ipc.h
  @@ -0,0 +1,261 @@
  +/* Copyright (c) 2015, Linaro Limited
  + * All rights reserved.
  + *
  + * SPDX-License-Identifier: BSD-3-Clause
  + */
  +
  +
  +/**
  + * @file
  + *
  + * ODP IPC API
  + */
  +
  +#ifndef ODP_API_IPC_H_
  +#define ODP_API_IPC_H_
  +
  +#ifdef __cplusplus
  +extern "C" {
  +#endif
  +
  +/** @defgroup odp_ipc ODP IPC
  + *  @{
  + */
  +
  +/**
  + * @typedef odp_ipc_t
  + * ODP IPC handle
  + */
  +
  +/**
  + * @typedef odp_ipc_msg_t
  + * ODP IPC message handle
  + */
  +
  +
  +/**
  + * @def ODP_IPC_ADDR_SIZE
  + * Size of the address of an IPC endpoint
  + */
  +
  +/**
  + * Create IPC endpoint
  + *
  + * @param name Name of local IPC endpoint
  + * @param pool Pool for incoming messages
  + *
  + * @return IPC handle on success
  + * @retval ODP_IPC_INVALID on failure and errno set
  + */
  +odp_ipc_t odp_ipc_create(const char *name, odp_pool_t pool);
  +
 
 
  Will it be a separate pool or do you want to attach an existing pool to your
  ipc?
 It should be a pool with events of type messages (ODP_EVENT_MESSAGE)
 so in practice it will likely be a separate pool.

 
 
 
  +/**
  + * Destroy IPC endpoint
  + *
  + * @param ipc IPC handle
  + *
  + * @retval 0 on success
  + * @retval <0 on failure
  + */
  +int odp_ipc_destroy(odp_ipc_t ipc);
  +
  +/**
  + * Set the default input queue for an IPC endpoint
  + *
  + * @param ipc   IPC handle
  + * @param queue Queue handle
  + *
  + * @retval  0 on success
  + * @retval <0 on failure
  + */
  +int odp_ipc_inq_setdef(odp_ipc_t ipc, odp_queue_t queue);
  +
  +/**
  + * Remove the default input queue
  + *
  + * Remove (disassociate) the default input queue from an IPC endpoint.
  + * The queue itself is not touched.
  + *
  + * @param ipc  IPC handle
  + *
  + * @retval 0 on success
  + * @retval <0 on failure
  + */
  +int odp_ipc_inq_remdef(odp_ipc_t ipc);
  +
  +/**
  + * Resolve endpoint by name
  + *
  + * Look up an existing or future endpoint by name.
  + * When the endpoint exists, return the specified message with the endpoint
  + * as the sender.
  + *
  + * @param ipc IPC handle
  + * @param name Name to resolve
  + * @param msg Message to return
  + */
  +void odp_ipc_resolve(odp_ipc_t ipc,
  +const char *name,
  +odp_ipc_msg_t msg);
  +
  +/**
  + * Monitor endpoint
  + *
  + * Monitor an existing (potentially already dead) 

Re: [lng-odp] [PATCHv5 0/4] IPC

2015-05-20 Thread Maxim Uvarov

On 05/19/2015 17:37, Ciprian Barbu wrote:

It seems like there is still a discussion floating on patch 2, about
including helper code in the linux-generic implementation. I will have
a look at the other patches, but I think you need to solve the issue
with the ring code before marking the patchset as ready to be merged.


Hello Ciprian, we talked about that with Mike, and the proposal will be to 
build the odp helpers as a separate
library. Then the linux-generic implementation can link that library. 
Because there are 2 C files now,
ring.c and linux.c, it will be good to do that move in a separate patch.

Thank you,
Maxim.


On Mon, May 18, 2015 at 3:55 PM, Maxim Uvarov maxim.uva...@linaro.org wrote:

If no more comments here can all reviewers add their review-by to that
patchset? So that I can merge that patches.

Thank you,
Maxim.

On 05/08/2015 12:57, Maxim Uvarov wrote:

v5: - accounted for all comments from the previous v4 review.
  - do not modify any api function, everything is done inside
linux-generic.

Please review that version.

Thank you,
Maxim.


Maxim Uvarov (4):
linux-generic: zero params for pool create
api ipc: update ring with shm proc argument
linux-generic: add ipc pktio support
ipc: example app

   configure.ac   |   1 +
   example/Makefile.am|   2 +-
   example/ipc/.gitignore |   1 +
   example/ipc/Makefile.am|   7 +
   example/ipc/odp_ipc.c  | 427 +++
   helper/include/odp/helper/ring.h   |   7 +-
   helper/ring.c  |  12 +-
   platform/linux-generic/Makefile.am |   3 +
   .../linux-generic/include/odp_buffer_internal.h|   3 +
   .../linux-generic/include/odp_packet_io_internal.h |  16 +
   .../include/odp_packet_io_ipc_internal.h   |  48 ++
   platform/linux-generic/include/odp_shm_internal.h  |  22 +
   platform/linux-generic/odp_packet_io.c |  19 +-
   platform/linux-generic/odp_packet_io_ipc.c | 603 +++++++++++++++++++++++++
   platform/linux-generic/odp_pool.c  |  23 +-
   platform/linux-generic/odp_schedule.c  |   1 +
   platform/linux-generic/odp_shared_memory.c |  10 +-
   test/validation/odp_queue.c|   1 +
   18 files changed, 1193 insertions(+), 13 deletions(-)
   create mode 100644 example/ipc/.gitignore
   create mode 100644 example/ipc/Makefile.am
   create mode 100644 example/ipc/odp_ipc.c
   create mode 100644 platform/linux-generic/include/odp_packet_io_ipc_internal.h
   create mode 100644 platform/linux-generic/include/odp_shm_internal.h
   create mode 100644 platform/linux-generic/odp_packet_io_ipc.c






[lng-odp] [PATCH] validation: pktio: do not destroy pool

2015-05-20 Thread Maxim Uvarov
Some platforms like DPDK and Cavium bare metal can not
dynamically destroy and create a pool with the same name.
Keep pool destroy only in test suite termination to
make all other pktio tests pass.

Signed-off-by: Maxim Uvarov maxim.uva...@linaro.org
---
 test/validation/odp_pktio.c | 66 ++---
 1 file changed, 39 insertions(+), 27 deletions(-)

diff --git a/test/validation/odp_pktio.c b/test/validation/odp_pktio.c
index d4bf7bf..7c1a666 100644
--- a/test/validation/odp_pktio.c
+++ b/test/validation/odp_pktio.c
@@ -56,6 +56,8 @@ odp_pool_t default_pkt_pool = ODP_POOL_INVALID;
 /** sequence number of IP packets */
 odp_atomic_u32_t ip_seq;
 
+odp_pool_t pool[MAX_NUM_IFACES] = {ODP_POOL_INVALID, ODP_POOL_INVALID};
+
 static void pktio_pkt_set_macs(odp_packet_t pkt,
   pktio_info_t *src, pktio_info_t *dst)
 {
@@ -219,29 +221,11 @@ static int default_pool_create(void)
return 0;
 }
 
-static odp_pktio_t create_pktio(const char *iface)
+static odp_pktio_t create_pktio(const char *iface, int num)
 {
-   odp_pool_t pool;
odp_pktio_t pktio;
-   char pool_name[ODP_POOL_NAME_LEN];
-   odp_pool_param_t params;
 
-   memset(&params, 0, sizeof(params));
-   params.pkt.seg_len = PKT_BUF_SIZE;
-   params.pkt.len = PKT_BUF_SIZE;
-   params.pkt.num = PKT_BUF_NUM;
-   params.type= ODP_POOL_PACKET;
-
-   snprintf(pool_name, sizeof(pool_name), "pkt_pool_%s", iface);
-
-   pool = odp_pool_lookup(pool_name);
-   if (pool != ODP_POOL_INVALID)
-   CU_ASSERT(odp_pool_destroy(pool) == 0);
-
-   pool = odp_pool_create(pool_name, ODP_SHM_NULL, &params);
-   CU_ASSERT(pool != ODP_POOL_INVALID);
-
-   pktio = odp_pktio_open(iface, pool);
+   pktio = odp_pktio_open(iface, pool[num]);
if (pktio == ODP_PKTIO_INVALID)
pktio = odp_pktio_lookup(iface);
CU_ASSERT(pktio != ODP_PKTIO_INVALID);
@@ -431,7 +415,7 @@ static void pktio_test_txrx(odp_queue_type_t q_type, int num_pkts)
io = pktios[i];
 
io->name = iface_name[i];
-   io->id   = create_pktio(iface_name[i]);
+   io->id   = create_pktio(iface_name[i], i);
if (io->id == ODP_PKTIO_INVALID) {
CU_FAIL("failed to open iface");
return;
@@ -487,7 +471,7 @@ static void test_odp_pktio_mtu(void)
 {
int ret;
int mtu;
-   odp_pktio_t pktio = create_pktio(iface_name[0]);
+   odp_pktio_t pktio = create_pktio(iface_name[0], 0);
 
mtu = odp_pktio_mtu(pktio);
CU_ASSERT(mtu > 0);
@@ -503,7 +487,7 @@ static void test_odp_pktio_mtu(void)
 static void test_odp_pktio_promisc(void)
 {
int ret;
-   odp_pktio_t pktio = create_pktio(iface_name[0]);
+   odp_pktio_t pktio = create_pktio(iface_name[0], 0);
 
ret = odp_pktio_promisc_mode_set(pktio, 1);
CU_ASSERT(0 == ret);
@@ -530,7 +514,7 @@ static void test_odp_pktio_mac(void)
unsigned char mac_addr[ODPH_ETHADDR_LEN];
int mac_len;
int ret;
-   odp_pktio_t pktio = create_pktio(iface_name[0]);
+   odp_pktio_t pktio = create_pktio(iface_name[0], 0);
 
printf("testing mac for %s\n", iface_name[0]);
 
@@ -553,7 +537,7 @@ static void test_odp_pktio_mac(void)
 
 static void test_odp_pktio_inq_remdef(void)
 {
-   odp_pktio_t pktio = create_pktio(iface_name[0]);
+   odp_pktio_t pktio = create_pktio(iface_name[0], 0);
odp_queue_t inq;
odp_event_t ev;
int i;
@@ -582,7 +566,7 @@ static void test_odp_pktio_open(void)
 
/* test the sequence open-close-open-close() */
for (i = 0; i < 2; ++i) {
-   pktio = create_pktio(iface_name[0]);
+   pktio = create_pktio(iface_name[0], 0);
CU_ASSERT(pktio != ODP_PKTIO_INVALID);
CU_ASSERT(odp_pktio_close(pktio) == 0);
}
@@ -613,7 +597,7 @@ static void test_odp_pktio_inq(void)
 {
odp_pktio_t pktio;
 
-   pktio = create_pktio(iface_name[0]);
+   pktio = create_pktio(iface_name[0], 0);
CU_ASSERT(pktio != ODP_PKTIO_INVALID);
 
CU_ASSERT(create_inq(pktio, ODP_QUEUE_TYPE_POLL) == 0);
@@ -621,12 +605,35 @@ static void test_odp_pktio_inq(void)
CU_ASSERT(odp_pktio_close(pktio) == 0);
 }
 
+static int create_pool(const char *iface, int num)
+{
+   char pool_name[ODP_POOL_NAME_LEN];
+   odp_pool_param_t params;
+
+   memset(&params, 0, sizeof(params));
+   params.pkt.seg_len = PKT_BUF_SIZE;
+   params.pkt.len = PKT_BUF_SIZE;
+   params.pkt.num = PKT_BUF_NUM;
+   params.type= ODP_POOL_PACKET;
+
+   snprintf(pool_name, sizeof(pool_name), "pkt_pool_%s", iface);
+
+   pool[num] = odp_pool_create(pool_name, ODP_SHM_NULL, &params);
+   if (ODP_POOL_INVALID == pool[num]) {
+   CU_FAIL("unable to create pool");
+   return -1;

Re: [lng-odp] [RFC] Add ipc.h

2015-05-20 Thread Maxim Uvarov

On 05/20/2015 15:33, Ola Liljedahl wrote:

On 20 May 2015 at 13:29, Benoît Ganne bga...@kalray.eu wrote:

Hi Ola,

thanks for your feedback. I think part of the problem is that today we have
2 things dubbed IPC trying to solve different problems (I think): yours vs
Maxim and me. Maybe we should clarify that.

Yes. Maybe all of us should stay away from the phrase IPC. And message
passing (which is what I am looking for) is just one type of IPC
(useful for shared nothing and distributed systems); in the past many
did IPC using shared memory and mutexes and semaphores. The horror.


yes, IPC is a very wide term.

 From my understanding, what Maxim is proposing is closer to what I was
trying to achieve. The main differences of my proposal vs Maxim's proposal
were:
  - use a more POSIX namespace approach for naming resources (e.g.
/ipc/... vs ipc_...)

I see the names of pktio interfaces as platform specific so each
platform can use whatever syntax it wants.
yes, it's platform specific. In your case you can use those names. Or send 
a patch once my version is
merged, to have the same names in linux-generic.

  - extend pktio to allow unidirectional communication to save HW resources

A slight tweak to the current packet_io API. Post a patch.


v4 was unidirectional, then I racked my brain over how to make it bidirectional. 
If we need single-direction pktio then you can just not implement recv 
or send in your specific packet i/o and not call the corresponding 
functions from the application. Maybe flags are not even needed to say 
whether it's uni- or bidirectional.


Maxim.
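
As a rough sketch of that idea (assuming the packet i/o API of this
period; the interface name "ipc:stage2" is purely hypothetical), a
transmit-only user simply never calls odp_pktio_recv():

#include <odp.h>

/* Open an ipc pktio this process will only transmit on: tx-only by
 * convention (odp_pktio_recv() is never called on it), no flag needed. */
static odp_pktio_t open_tx_only(odp_pool_t pool)
{
	return odp_pktio_open("ipc:stage2", pool);
}

/* Forward a burst to the next pipeline stage in another process/AS */
static int forward_burst(odp_pktio_t tx, odp_packet_t pkts[], int num)
{
	return odp_pktio_send(tx, pkts, num);
}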



I agree your need is different and needs a different API. Maybe we could call
it message bus or odp bus or something like that to disambiguate?

Yes. Need a nice three-letter-acronym as well...


Other comments inline.


   - use it for control/data planes signaling but also to exchange packets
to
be allowed to build packets processing pipelines

Control/data plane signaling must be reliable (per the definition in
my RFC). But do you have the same requirements for packet transfer
between pipeline stages (on different cores)? Or it just happens to be
reliable on your hardware? Is reliability (guaranteed delivery) an
inherent requirement for pipelined packet processing? (this would be
contrary to my experience).


Right, it won't be reliable, especially if the rx is too slow to consume
messages, tx should get EAGAIN and need to retry later. But reliability can
be built on top of that (buffering on the tx side, spinning...).

Well if the sender gets a notice that send didn't succeed and the data
is still around (so can be resent later), I still see this as
reliable. Unreliable is when data goes missing without notification.


Also couldn't you use ODP queues as the abstraction for transferring
packets between cores/pipeline stages? Per your pktio proposal below,
if you connect input and output queues to your ipc pktio instances,
applications can just enqueue and dequeue packets from these queues
and that will cause packets to be transferred between pipeline stages
(on different cores/AS's).


Sure. But as far as I understand, queues are not associated with pools by
themselves. We still need a way to move data from one AS to another, and it
means from one pool to another. I need a way to identify the rx pool.

Yes a queue by itself is not enough.

An egress queue leads to a (transmit-only) pktio interface which then
can magically transport packets to another (receive-only) pktio
interface in another AS. That receive pktio interface uses a pool in
the AS to allocate packet buffers from. Received buffers can be put on
an ingress queue. So the application code just sees the queues; only
the main logic needs to handle the packet pipe pktio interfaces.
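
A sketch of that wiring on the receive side, in terms of the packet i/o
and queue APIs of this period ("ipc:stageB" is a hypothetical interface
name; real naming is platform specific):

#include <odp.h>

/* Wire up the rx end of a pipeline stage: packets sent by the previous
 * stage arrive via the ipc pktio, get allocated from this AS's pool,
 * and land on an ingress queue that the workers dequeue from. */
static odp_queue_t wire_rx_stage(odp_pool_t local_pool)
{
	odp_pktio_t pktio = odp_pktio_open("ipc:stageB", local_pool);
	odp_queue_t inq = odp_queue_create("stageB-in",
					   ODP_QUEUE_TYPE_PKTIN, NULL);

	/* Workers only ever see 'inq'; the pktio plumbing stays in the
	 * main logic, as described above. */
	odp_pktio_inq_setdef(pktio, inq);
	return inq;
}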


This use case does not want or need the ability to specify the
destination address for each individual send operation and this would
likely just add overhead to a performance critical operation.


It needs to identify the destination pool, as we are moving from one AS to
another, and pools are defined per-AS.

But you don't specify the destination pool on a per-packet basis. And
the producer (sender) of packets doesn't care which pool is used. Just
enqueue the packet on a queue.


   - IPC ops should be mapped to our NoC HW as much as possible,
especially
for send/recv operations

Is there anything in the proposed API that would limit your
implementation freedom?


Not in the case of an application messaging bus as you propose. I just
wanted to highlight the fact that we need a lower-level IPC mechanism where
I can do unidirectional communications (to save HW resources) without any
centralized logic.


  From your proposal, it seems to me that what you proposed is more like
an
application messaging bus such as d-bus (especially the deferred lookup()
and monitor()),

Perhaps d-bus can be used as the implementation, that could save me
some work.


I didn't mean to use D-Bus nor CORBA :)
It was just to better understand what you were trying 

Re: [lng-odp] odp timer unit test case question

2015-05-20 Thread Jerin Jacob
On Wed, May 20, 2015 at 12:42:29PM +0200, Ola Liljedahl wrote:
 On 20 May 2015 at 06:56, Jerin Jacob jerin.ja...@caviumnetworks.com wrote:
  On Wed, May 20, 2015 at 12:25:12AM +0200, Ola Liljedahl wrote:
  On 19 May 2015 at 15:34, Jacob,  Jerin jerin.ja...@caviumnetworks.com 
  wrote:
   Ola,
  
   Is there any specific reason for following check in timer validation 
   test ?
  
   diff --git a/test/validation/odp_timer.c b/test/validation/odp_timer.c
   index 554b353..724026e 100644
   --- a/test/validation/odp_timer.c
   +++ b/test/validation/odp_timer.c
   @@ -260,7 +260,7 @@ static void handle_tmo(odp_event_t ev, bool stale, uint64_t prev_tick)
  
    if (ttp != NULL) {
    /* Internal error */
   -CU_ASSERT_FATAL(ttp->ev == ODP_EVENT_INVALID);
   +CU_ASSERT_FATAL(ttp->ev != ODP_EVENT_INVALID);
    ttp->ev = ev;
    }
    }
  
   AFAIU, it should be CU_ASSERT_FATAL(ttp->ev != ODP_EVENT_INVALID) as
   tt[i].ev = odp_timeout_to_event(odp_timeout_alloc(tbp)) is specified while 
   preparing all timers.
  Yes the timers are still inactive and the timeout event is stored in
  the 'ev' member.
 
  handle_timeout() is called for received timeouts (timer has expired).
  In that case, the corresponding 'ev' member should not contain any
  timeout event.
 
  
   Am I missing something in the timer specification ?
  Or the timer specification is missing something?
 
  odp_timer_set_abs(tt[i].tim, tck, &tt[i].ev); (line 309) is supposed
  to grab the timeout event (on success) and clear the variable (write
  ODP_TIMEOUT_INVALID), that's why the timeout is passed by reference
  (&tt[i].ev).
 
  Possibly this is not specified clearly enough in timer.h:
   * @param[in,out] tmo_ev  Reference to an event variable that points to
   * timeout event or NULL to reuse the existing timeout event. Any existing
   * timeout event that is replaced by a successful set operation will be
   * returned here.
 
  The new timeout event is read from *tmo_ev. The old timeout event (if
  timer was active) or ODP_TIMEOUT_INVALID (if timer was inactive) is
  stored in *tmo_ev. I hope this is at least clear in the reference
  implementation.
 
  We are on the same page, except for the last notes.
  IMO, linux-generic timer implementation details leaked into the creation of the 
  test case.
 Well I don't agree and I hope I can convince you.
 
 
  AFAIU, *tmo_ev should have the event that was used for _arming_ the timer so
  that the application can do some lookup after receiving the event through a queue or 
  something similar.
  What is the point of providing ODP_TIMEOUT_INVALID back to the application? 
  What use
  is it for the application?
 It is possible to set an already active timer (which then is already
 associated with a timeout). If the user specifies a new timeout, the
 old timeout must be returned to the user (because all alloc and free
 of timeouts is the responsibility of the user). So any old timeout
 (for an already active timer) is returned in *tmo_ev. But it is
 possible that the timer has already expired (and the timeout been
 delivered) or wasn't active to start with. We want the application to
 be able to distinguish between these two scenarios and we achieve this by
 updating *tmo_ev accordingly. When the timer_set call returns, if
 *tmo_ev != ODP_EVENT_INVALID, a timeout has been returned and the
 application needs to do something with it. If *tmo_ev ==
 ODP_EVENT_INVALID, no timeout was returned.


Just to understand the use case: what is the application going to do with the returned 
*tmo_ev
if the timer is active and it returned the associated timeout?
It can't free it, as that would cause a double free when it comes back in the
app main loop (it will have odp_timeout_free() there).

And the application can't use the returned associated timeout for long;
what if the event is delivered and freed in the main loop?
Typical main loop application
processing will be: check the event type, process it and free the resources.

Is this scheme a replacement for an API like odp_timer_active() to find out whether the 
timer is active or not?

I thought the reason for returning *tmo_ev in the timer set operation
is that, if the application is lazy (i.e. doesn't want to manage creating the timeout 
event), then
it can ask the implementation with 'NULL' so that the implementation
supplies an ODP timer event itself, and if the application
wants explicit control over the event (say a packet event type) then it can set it
through an explicit event in the timer set call :-)
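
For reference, the "lazy" path would look roughly like this, going by
the timer.h comment quoted earlier ("NULL to reuse the existing timeout
event"); a sketch, not the normative behaviour:

#include <odp.h>

/* Re-arm a timer without exchanging events: per the quoted timer.h
 * text, passing NULL reuses the timeout event the timer already owns. */
static void rearm_timer(odp_timer_t tim, uint64_t tick)
{
	(void)odp_timer_set_abs(tim, tick, NULL);
}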

 
 I hope you can agree with this line of thinking.
 
 
  IMO, there are two ways we can come to a conclusion on this issue.
  1) Remove CU_ASSERT_FATAL(ttp->ev == ODP_EVENT_INVALID) in the handle_tmo 
  function in the unit test case
  2) Or, if for some reason the application wants ODP_TIMEOUT_INVALID (for the inactive 
  case) in *tmo_ev,
  let's update the specification so that it will be clear for ODP implementers
 Yes the spec needs to be clearer.
 
 
  any thought from application perspective ?
 
 
 
  
   

Re: [lng-odp] [check-odp PATCH] enable optional perf testing

2015-05-20 Thread Zoltan Kiss

I've tested it as well:

Reviewed-by: Zoltan Kiss zoltan.k...@linaro.org

On 14/05/15 20:22, Mike Holmes wrote:

Signed-off-by: Mike Holmes mike.hol...@linaro.org
---
  apply-and-build.sh | 1 +
  build.sh   | 1 +
  helper/generic | 9 -
  3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/apply-and-build.sh b/apply-and-build.sh
index 9d73a2f..e41c297 100755
--- a/apply-and-build.sh
+++ b/apply-and-build.sh
@@ -40,6 +40,7 @@ usage() {
  echo -e \tFILE_EXT:\tsupported extensions in patch file names, default: ${FILE_EXT}
  echo -e \tM32_ON_64:\tenable 32 bit builds on a 64 bit host, default: 0
  echo -e \tCPP_TEST:\tenable cpp test, default: 0
+common_usage
  echo -e \tDISTCHECK:\tset to 0 to disable DISTCHECK build, default: 1
  openssl_usage
  tc_usage
diff --git a/build.sh b/build.sh
index 2f69bc0..2bf2ac3 100755
--- a/build.sh
+++ b/build.sh
@@ -93,6 +93,7 @@ usage() {
  echo -e \tCLANG:\t\t build with clang, default: 0
  echo -e \tEXIT_ON_ERROR:\t bail out on error, default: 1
  echo -e \tCPP_TEST:\t enable cpp test, default: 0
+common_usage
  echo -e \tDISTCHECK:\tset to 1 to enable DISTCHECK build, default: 0
  echo -e \tE_VALGRIND:\t run Valgrind, default: 0
  ${PLATFORM_SHORT}_usage
diff --git a/helper/generic b/helper/generic
index aa1b10a..d4be657 100644
--- a/helper/generic
+++ b/helper/generic
@@ -3,7 +3,10 @@
  export SRCDIR=${ROOT_DIR}/src
  export BUILDDIR=${ROOT_DIR}/build
  export LOGDIR=${ROOT_DIR}/log
-export EXTRA_FLAGS=${EXTRA_FLAGS} --enable-test-perf
+export PERF_TEST=${PERF_TEST:-1}
+if [[ ${PERF_TEST} -eq 1 ]]; then
+export EXTRA_FLAGS=${EXTRA_FLAGS} --enable-test-perf
+fi

  mkdir -p ${SRCDIR}
  mkdir -p ${BUILDDIR}
@@ -19,6 +22,10 @@ control_c()
  # trap keyboard interrupt (control-c)
  trap control_c SIGINT

+common_usage() {
+echo -e \tPERF_TEST:\t enable perf test, default: 1
+}
+
  remote_name(){
  retval=$(echo ${1} |tr ':/.-' '_')
  echo ${retval}
-- 2.1.4




Re: [lng-odp] [PATCH 4/4] ipc: example app

2015-05-20 Thread Ciprian Barbu
On Fri, May 8, 2015 at 12:57 PM, Maxim Uvarov maxim.uva...@linaro.org wrote:
 Simple example app creates one packet i/o to an external interface
 and one ipc pktio to another process. It then transfers packets from the
 external interface to the other process and back through the ipc queue.

 Signed-off-by: Maxim Uvarov maxim.uva...@linaro.org
 ---
  configure.ac|   1 +
  example/Makefile.am |   2 +-
  example/ipc/.gitignore  |   1 +
  example/ipc/Makefile.am |   7 +
  example/ipc/odp_ipc.c   | 427 
 
  5 files changed, 437 insertions(+), 1 deletion(-)
  create mode 100644 example/ipc/.gitignore
  create mode 100644 example/ipc/Makefile.am
  create mode 100644 example/ipc/odp_ipc.c

 diff --git a/configure.ac b/configure.ac
 index d20bad2..1ceb922 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -274,6 +274,7 @@ AC_CONFIG_FILES([Makefile
  example/Makefile
  example/classifier/Makefile
  example/generator/Makefile
 +example/ipc/Makefile
  example/ipsec/Makefile
  example/packet/Makefile
  example/timer/Makefile
 diff --git a/example/Makefile.am b/example/Makefile.am
 index 353f397..506963f 100644
 --- a/example/Makefile.am
 +++ b/example/Makefile.am
 @@ -1 +1 @@
 -SUBDIRS = classifier generator ipsec packet timer
 +SUBDIRS = classifier generator ipc ipsec packet timer
 diff --git a/example/ipc/.gitignore b/example/ipc/.gitignore
 new file mode 100644
 index 000..963d99d
 --- /dev/null
 +++ b/example/ipc/.gitignore
 @@ -0,0 +1 @@
 +odp_ipc
 diff --git a/example/ipc/Makefile.am b/example/ipc/Makefile.am
 new file mode 100644
 index 000..3da9549
 --- /dev/null
 +++ b/example/ipc/Makefile.am
 @@ -0,0 +1,7 @@
 +include $(top_srcdir)/example/Makefile.inc
 +
 +bin_PROGRAMS = odp_ipc
 +odp_ipc_LDFLAGS = $(AM_LDFLAGS) -static
 +odp_ipc_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
 +
 +dist_odp_ipc_SOURCES = odp_ipc.c
 diff --git a/example/ipc/odp_ipc.c b/example/ipc/odp_ipc.c
 new file mode 100644
 index 000..0ed5442
 --- /dev/null
 +++ b/example/ipc/odp_ipc.c
 @@ -0,0 +1,427 @@
 +/* Copyright (c) 2015, Linaro Limited
 + * All rights reserved.
 + *
 + * SPDX-License-Identifier: BSD-3-Clause
 + */
 +
 +/**
 + * @file
 + *
 + * @example odp_ipc.c  ODP IPC test application.
 + */
 +
 +#include <stdlib.h>
 +#include <string.h>
 +#include <getopt.h>
 +#include <unistd.h>
 +
 +#include <example_debug.h>
 +
 +#include <odp.h>
 +#include <odp/helper/linux.h>
 +
 +/** @def SHM_PKT_POOL_SIZE
 + * @brief Size of the shared memory block
 + */
 +#define SHM_PKT_POOL_SIZE  (512*2048)
 +
 +/** @def SHM_PKT_POOL_BUF_SIZE
 + * @brief Buffer size of the packet pool buffer
 + */
 +#define SHM_PKT_POOL_BUF_SIZE  1856
 +
 +/** @def MAX_PKT_BURST
 + * @brief Maximum number of packet bursts
 + */
 +#define MAX_PKT_BURST  16
 +
 +/** Get rid of path in filename - only for unix-type paths using '/' */
 +#define NO_PATH(file_name) (strrchr((file_name), '/') ? \
 +   strrchr((file_name), '/') + 1 : (file_name))
 +
 +/** Application argument */
 +static char *pktio_name;
 +
 +/* helper funcs */
 +static void parse_args(int argc, char *argv[]);
 +static void print_info(char *progname);
 +static void usage(char *progname);
 +
 +/**
 + * Create a pktio handle.
 + *
 + * @param dev Name of device to open
 + * @param pool Pool to associate with device for packet RX/TX
 + *
 + * @return The handle of the created pktio object.
 + * @retval ODP_PKTIO_INVALID if the create fails.
 + */
 +static odp_pktio_t create_pktio(const char *dev, odp_pool_t pool)
 +{
 +   odp_pktio_t pktio;
 +   odp_pktio_t ipc_pktio;
 +
 +   /* Open a packet IO instance */
 +   pktio = odp_pktio_open(dev, pool);
 +   if (pktio == ODP_PKTIO_INVALID)
 +   EXAMPLE_ABORT("Error: pktio create failed for %s\n", dev);
 +
 +   printf("pid: %d, create IPC pktio\n", getpid());
 +   ipc_pktio = odp_pktio_open("ipc_pktio", pool);

I had some trouble starting the example; it seemed like the second
process started but the main process would die for some reason. I
tried to debug it with gdb but then it started working. It was around
this line I think, I could see the message
"Wait for second process set mdata_offset..."

Did you see any strange behavior on your side, especially after a fresh boot?

 +   if (ipc_pktio == ODP_PKTIO_INVALID)
 +   EXAMPLE_ABORT("Error: ipc pktio create failed.\n");
 +
 +   return pktio;
 +}
 +
 +/**
 + * Packet IO loopback worker thread using bursts from/to IO resources
 + *
 + * @param arg  thread arguments of type 'thread_args_t *'
 + */
 +static void *pktio_run_loop(odp_pool_t pool)
 +{
 +   int thr;
 +   odp_pktio_t pktio;
 +   int pkts;
 +   odp_packet_t pkt_tbl[MAX_PKT_BURST];
 +   odp_pktio_t ipc_pktio;
 +   thr = odp_thread_id();
 +
 +   pktio = odp_pktio_lookup(pktio_name);
 +   if 

Re: [lng-odp] odp timer unit test case question

2015-05-20 Thread Ola Liljedahl
On 20 May 2015 at 16:16, Jerin Jacob jerin.ja...@caviumnetworks.com wrote:
 On Wed, May 20, 2015 at 12:42:29PM +0200, Ola Liljedahl wrote:
 On 20 May 2015 at 06:56, Jerin Jacob jerin.ja...@caviumnetworks.com wrote:
  On Wed, May 20, 2015 at 12:25:12AM +0200, Ola Liljedahl wrote:
  On 19 May 2015 at 15:34, Jacob,  Jerin jerin.ja...@caviumnetworks.com 
  wrote:
   Ola,
  
   Is there any specific reason for following check in timer validation 
   test ?
  
    diff --git a/test/validation/odp_timer.c b/test/validation/odp_timer.c
    index 554b353..724026e 100644
    --- a/test/validation/odp_timer.c
    +++ b/test/validation/odp_timer.c
    @@ -260,7 +260,7 @@ static void handle_tmo(odp_event_t ev, bool stale, uint64_t prev_tick)
   
     if (ttp != NULL) {
     /* Internal error */
    -CU_ASSERT_FATAL(ttp->ev == ODP_EVENT_INVALID);
    +CU_ASSERT_FATAL(ttp->ev != ODP_EVENT_INVALID);
     ttp->ev = ev;
     }
     }
  
    AFAIU, it should be CU_ASSERT_FATAL(ttp->ev != ODP_EVENT_INVALID) as
    tt[i].ev = odp_timeout_to_event(odp_timeout_alloc(tbp)) is specified while 
    preparing all timers.
  Yes the timers are still inactive and the timeout event is stored in
  the 'ev' member.
 
  handle_timeout() is called for received timeouts (timer has expired).
  In that case, the corresponding 'ev' member should not contain any
  timeout event.
 
  
   Am I missing something in the timer specification ?
  Or the timer specification is missing something?
 
   odp_timer_set_abs(tt[i].tim, tck, &tt[i].ev); (line 309) is supposed
   to grab the timeout event (on success) and clear the variable (write
   ODP_TIMEOUT_INVALID), that's why the timeout is passed by reference
   (&tt[i].ev).
 
  Possibly this is not specified clearly enough in timer.h:
   * @param[in,out] tmo_ev  Reference to an event variable that points to
   * timeout event or NULL to reuse the existing timeout event. Any existing
   * timeout event that is replaced by a successful set operation will be
   * returned here.
 
  The new timeout event is read from *tmo_ev. The old timeout event (if
  timer was active) or ODP_TIMEOUT_INVALID (if timer was inactive) is
  stored in *tmo_ev. I hope this is at least clear in the reference
  implementation.
 
   We are on the same page, except for the last notes.
   IMO, linux-generic timer implementation details leaked into the creation of the 
   test case.
 Well I don't agree and I hope I can convince you.

 
   AFAIU, *tmo_ev should have the event that was used for _arming_ the timer so
   that the application can do some lookup after receiving the event through a queue 
   or something similar.
   What is the point of providing ODP_TIMEOUT_INVALID back to the application? 
   What use
   is it for the application?
  It is possible to set an already active timer (which then is already
  associated with a timeout). If the user specifies a new timeout, the
  old timeout must be returned to the user (because all alloc and free
  of timeouts is the responsibility of the user). So any old timeout
  (for an already active timer) is returned in *tmo_ev. But it is
  possible that the timer has already expired (and the timeout been
  delivered) or wasn't active to start with. We want the application to
  be able to distinguish between these two scenarios and we achieve this by
  updating *tmo_ev accordingly. When the timer_set call returns, if
  *tmo_ev != ODP_EVENT_INVALID, a timeout has been returned and the
  application needs to do something with it. If *tmo_ev ==
  ODP_EVENT_INVALID, no timeout was returned.


 Just to understand the use case: what is the application going to do with the returned 
 *tmo_ev
 if the timer is active and it returned the associated timeout?
Either the application specified a new timeout in the timer_set call
and it is that timeout which will be delivered upon timer expiration.
If a timeout is returned (the old timeout for an already active
timer), the application should free it or re-use it.

 it can't free it, as that would cause a double free when it comes back in the
 app main loop (it will have odp_timeout_free() there).
If a timeout is returned in *tmo_ev then it is not the same timeout.
Old vs. new.
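
A sketch of the corresponding main-loop side under that ownership model
(in terms of the ODP scheduler API of this period; details may differ):

#include <odp.h>

/* Dispatch one event; received timeouts are freed here and nowhere
 * else, so the old/new distinction above never leads to a double free. */
static void handle_one_event(void)
{
	odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);

	if (odp_event_type(ev) == ODP_EVENT_TIMEOUT)
		odp_timeout_free(odp_timeout_from_event(ev));
	/* ... handle other event types ... */
}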


 and the application can't use the returned associated timeout for long;
 what if the event is delivered and freed in the main loop?
 Typical main loop application
 processing will be: check the event type, process it and free the resources.

 Is this scheme a replacement for an API like odp_timer_active() to find out whether the 
 timer is active or not?

 I thought the reason for returning *tmo_ev in the timer set operation
 is that, if the application is lazy (i.e. doesn't want to manage creating the 
 timeout event), then
 it can ask the implementation with 'NULL' so that the implementation
 supplies an ODP timer event itself, and if the application
 wants explicit control over the event (say a packet event type) then it can set it
 through an explicit event in the timer set call :-)
No, the application is always responsible for 

Re: [lng-odp] [RFC] Add ipc.h

2015-05-20 Thread Ola Liljedahl
On 20 May 2015 at 13:29, Benoît Ganne bga...@kalray.eu wrote:
 Hi Ola,

 thanks for your feedback. I think part of the problem is that today we have
 2 things dubbed IPC trying to solve different problems (I think): yours vs
 Maxim and me. Maybe we should clarify that.
 From my understanding, what Maxim is proposing is closer to what I was
 trying to achieve. The main differences of my proposal vs Maxim's proposal
 were:
  - use a more POSIX namespace approach for naming resources (e.g.
 /ipc/... vs ipc_...)
  - extend pktio to allow unidirectional communication to save HW resources
 I agree your need is different and needs a different API. Maybe we could call
 it message bus or odp bus or something like that to disambiguate?

 Other comments inline.

   - use it for control/data planes signaling but also to exchange packets
 to
 be allowed to build packets processing pipelines

 Control/data plane signaling must be reliable (per the definition in
 my RFC). But do you have the same requirements for packet transfer
 between pipeline stages (on different cores)? Or it just happens to be
 reliable on your hardware? Is reliability (guaranteed delivery) an
 inherent requirement for pipelined packet processing? (this would be
 contrary to my experience).


 Right, it won't be reliable, especially if the rx is too slow to consume
 messages, tx should get EAGAIN and need to retry later. But reliability can
 be built on top of that (buffering on the tx side, spinning...).

 Also couldn't you use ODP queues as the abstraction for transferring
 packets between cores/pipeline stages? Per your pktio proposal below,
 if you connect input and output queues to your ipc pktio instances,
 applications can just enqueue and dequeue packets from these queues
 and that will cause packets to be transferred between pipeline stages
 (on different cores/AS's).


 Sure. But as far as I understand, queues are not associated with pools by
 themselves. We still need a way to move data from one AS to another, and it
 means from one pool to another. I need a way to identify the rx pool.

 This use case does not want or need the ability to specify the
 destination address for each individual send operation and this would
 likely just add overhead to a performance critical operation.


 It needs to identify the destination pool, as we are moving from one AS to
 another, and pools are defined per-AS.

   - IPC ops should be mapped to our NoC HW as much as possible,
 especially
 for send/recv operations

 Is there anything in the proposed API that would limit your
 implementation freedom?


 Not in the case of an application messaging bus as you propose. I just
 wanted to highlight the fact that we need a lower-level IPC mechanism where
 I can do unidirectional communications (to save HW resources) without any
 centralized logic.

  From your proposal, it seems to me that what you proposed is more like
 an
 application messaging bus such as d-bus (especially the deferred lookup()
 and monitor()),

 Perhaps d-bus can be used as the implementation, that could save me
 some work.


 I didn't mean to use D-Bus nor CORBA :)
CORBA, ain't it dead yet?

Actually kdbus (D-bus re-implemented in the kernel) looks interesting:
https://lwn.net/Articles/580194/
All kernels need proper IPC/message passing support.

 It was just to better understand what you were trying to achieve. That said,
 I think we can learn a few things from D-Bus in this case. For example, you
 did mention your need for delivery reliability, but what about message
 ordering and broadcast/multicast?

 But in that case you need to be able to identify the endpoints. What I
 proposed in a precedent thread was to use a hierarchical device naming:
   - /dev/device for real devices
   - /ipc/identifier for IPC
 What I had in mind so far for our platform was to use {AS identifier +
 pool
 name in the AS} as IPC identifier, which would avoid the need of a
 centralized rendez-vous point.
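
Under such a scheme the name itself carries the routing information, so no
lookup service is needed. Purely illustrative names below ("as2" and
"pktpool" are hypothetical), reusing today's odp_pktio_open(dev, pool):

    /* "as2" = destination address space, "pktpool" = pool inside it */
    odp_pktio_t to_as2 = odp_pktio_open("/ipc/as2/pktpool", local_pool);
    odp_pktio_t eth0   = odp_pktio_open("/dev/eth0", local_pool);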

 The semantics of the IPC endpoint names are not defined in my
 proposal. I think it is a mistake to impose meaning on the names.


 Agreed. I was just proposing a naming scheme for pktio in my case to ease
 readability.

 Also I don't envision any (active) rendez-vous point, it's just a
 global shared table in my implementation for linux-generic. This is
 needed since names are user-defined and arbitrary.


 Hmm, a global shared table looks like a rendez-vous point to me. You cannot
 implement it this way on a zero-sharing architecture. But anyway, I
 completely see the value of a messaging bus with a discovery service; the
 implementation will use whatever suits it for this.

 Why are these pktio instances of type IPC? They could just as well be
 network interfaces on which you send and/or receive packets with the
 normal semantics. No need for the IPC API I proposed. What stops you
 from implementing this today?


 A specific type is still useful in my opinion:
  - it eases readability
  - real pktio may have specific 

[lng-odp] [PATCHv3 0/2] validation: init tests gathered and using common main

2015-05-20 Thread Christophe Milard
checkpatch fixes since v2.

Christophe Milard (2):
  validation: init src files in single directory
  validation: init tests using common main

 test/validation/Makefile.am |  6 ++--
 test/validation/common/odp_cunit_common.c   | 45 ++---
 test/validation/{ => init}/odp_init.c   | 37 ++--
 test/validation/{ => init}/odp_init_abort.c | 35 +-
 test/validation/{ => init}/odp_init_log.c   | 36 +--
 test/validation/odp_crypto.c| 19 
 test/validation/odp_synchronizers.c |  9 ++
 7 files changed, 96 insertions(+), 91 deletions(-)
 rename test/validation/{ => init}/odp_init.c (58%)
 rename test/validation/{ => init}/odp_init_abort.c (67%)
 rename test/validation/{ => init}/odp_init_log.c (70%)

-- 
1.9.1



[lng-odp] [PATCHv3 1/2] validation: init src files in single directory

2015-05-20 Thread Christophe Milard
The 3 tests for initialisation still generate 3 executables,
but their sources are now gathered in a single directory named after
the module itself, init.
Plus other cosmetic fixes to please check-odp.

Signed-off-by: Christophe Milard christophe.mil...@linaro.org
---
 test/validation/Makefile.am | 6 +++---
 test/validation/{ => init}/odp_init.c   | 3 ++-
 test/validation/{ => init}/odp_init_abort.c | 0
 test/validation/{ => init}/odp_init_log.c   | 2 +-
 4 files changed, 6 insertions(+), 5 deletions(-)
 rename test/validation/{ => init}/odp_init.c (95%)
 rename test/validation/{ => init}/odp_init_abort.c (100%)
 rename test/validation/{ => init}/odp_init_log.c (95%)

diff --git a/test/validation/Makefile.am b/test/validation/Makefile.am
index 7470d9d..814873c 100644
--- a/test/validation/Makefile.am
+++ b/test/validation/Makefile.am
@@ -47,9 +47,9 @@ dist_odp_classification_SOURCES = classification/odp_classification_tests.c \
 odp_crypto_CFLAGS = $(AM_CFLAGS) -I$(srcdir)/crypto
 dist_odp_crypto_SOURCES = crypto/odp_crypto_test_inp.c \
  odp_crypto.c $(ODP_CU_COMMON)
-dist_odp_init_SOURCES  = odp_init.c
-dist_odp_init_abort_SOURCES = odp_init_abort.c
-dist_odp_init_log_SOURCES = odp_init_log.c
+dist_odp_init_SOURCES  = init/odp_init.c
+dist_odp_init_abort_SOURCES = init/odp_init_abort.c
+dist_odp_init_log_SOURCES = init/odp_init_log.c
 dist_odp_queue_SOURCES = odp_queue.c $(ODP_CU_COMMON)
 dist_odp_random_SOURCES = odp_random.c $(ODP_CU_COMMON)
 dist_odp_scheduler_SOURCES = odp_scheduler.c $(ODP_CU_COMMON)
diff --git a/test/validation/odp_init.c b/test/validation/init/odp_init.c
similarity index 95%
rename from test/validation/odp_init.c
rename to test/validation/init/odp_init.c
index 82f8849..1332757 100644
--- a/test/validation/odp_init.c
+++ b/test/validation/init/odp_init.c
@@ -8,12 +8,13 @@
 #include <odp.h>
 #include <CUnit/Basic.h>
 
-#define DEFAULT_MSG_POOL_SIZE  (4*1024*1024)
+#define DEFAULT_MSG_POOL_SIZE  (4 * 1024 * 1024)
 #define DEFAULT_MSG_SIZE   (8)
 
 static void test_odp_init_global(void)
 {
int status;
+
status = odp_init_global(NULL, NULL);
CU_ASSERT_FATAL(status == 0);
 
diff --git a/test/validation/odp_init_abort.c 
b/test/validation/init/odp_init_abort.c
similarity index 100%
rename from test/validation/odp_init_abort.c
rename to test/validation/init/odp_init_abort.c
diff --git a/test/validation/odp_init_log.c 
b/test/validation/init/odp_init_log.c
similarity index 95%
rename from test/validation/odp_init_log.c
rename to test/validation/init/odp_init_log.c
index 275d343..98cc059 100644
--- a/test/validation/odp_init_log.c
+++ b/test/validation/init/odp_init_log.c
@@ -11,7 +11,7 @@
 int replacement_logging_used;
 
 ODP_PRINTF_FORMAT(2, 3)
-static int odp_init_log(odp_log_level_e level , const char *fmt, ...);
+static int odp_init_log(odp_log_level_e level, const char *fmt, ...);
 
 static void test_odp_init_global_replace_log(void)
 {
-- 
1.9.1



[lng-odp] [PATCHv3 2/2] validation: init tests using common main

2015-05-20 Thread Christophe Milard
The 3 init tests (init, abort, log) now link with
common/odp_cunit_common, like the other tests.
In main, ODP init is now performed via weak functions
which are overridden (to do nothing) by the 3 init tests.
Plus other cosmetic fixes to please check-odp.
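
For context, the weak-function mechanism relies on the GCC weak attribute;
ODP_WEAK_SYMBOL is assumed to be a wrapper like the sketch below, so that a
strong definition in a test file replaces the common default at link time:

    /* assumed definition, e.g. in a common ODP header */
    #define ODP_WEAK_SYMBOL __attribute__((__weak__))

    /* common default; a test file defining its own (strong)
     * tests_global_init() overrides this one at link time */
    ODP_WEAK_SYMBOL int tests_global_init(void)
    {
            return 0;
    }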

Signed-off-by: Christophe Milard christophe.mil...@linaro.org
---
 test/validation/Makefile.am   |  6 ++---
 test/validation/common/odp_cunit_common.c | 45 +++
 test/validation/init/odp_init.c   | 34 +--
 test/validation/init/odp_init_abort.c | 35 ++--
 test/validation/init/odp_init_log.c   | 34 +--
 test/validation/odp_crypto.c  | 19 +
 test/validation/odp_synchronizers.c   |  9 +++
 7 files changed, 93 insertions(+), 89 deletions(-)

diff --git a/test/validation/Makefile.am b/test/validation/Makefile.am
index 814873c..577063b 100644
--- a/test/validation/Makefile.am
+++ b/test/validation/Makefile.am
@@ -47,9 +47,9 @@ dist_odp_classification_SOURCES = classification/odp_classification_tests.c \
 odp_crypto_CFLAGS = $(AM_CFLAGS) -I$(srcdir)/crypto
 dist_odp_crypto_SOURCES = crypto/odp_crypto_test_inp.c \
  odp_crypto.c $(ODP_CU_COMMON)
-dist_odp_init_SOURCES  = init/odp_init.c
-dist_odp_init_abort_SOURCES = init/odp_init_abort.c
-dist_odp_init_log_SOURCES = init/odp_init_log.c
+dist_odp_init_SOURCES  = init/odp_init.c $(ODP_CU_COMMON)
+dist_odp_init_abort_SOURCES = init/odp_init_abort.c $(ODP_CU_COMMON)
+dist_odp_init_log_SOURCES = init/odp_init_log.c $(ODP_CU_COMMON)
 dist_odp_queue_SOURCES = odp_queue.c $(ODP_CU_COMMON)
 dist_odp_random_SOURCES = odp_random.c $(ODP_CU_COMMON)
 dist_odp_scheduler_SOURCES = odp_scheduler.c $(ODP_CU_COMMON)
diff --git a/test/validation/common/odp_cunit_common.c 
b/test/validation/common/odp_cunit_common.c
index 2af4410..eac2d81 100644
--- a/test/validation/common/odp_cunit_common.c
+++ b/test/validation/common/odp_cunit_common.c
@@ -39,13 +39,32 @@ int odp_cunit_thread_exit(pthrd_arg *arg)
return 0;
 }
 
-__attribute__((__weak__)) int tests_global_init(void)
+ODP_WEAK_SYMBOL int tests_global_init(void)
 {
+	if (0 != odp_init_global(NULL, NULL)) {
+		fprintf(stderr, "error: odp_init_global() failed.\n");
+		return -1;
+	}
+	if (0 != odp_init_local()) {
+		fprintf(stderr, "error: odp_init_local() failed.\n");
+		return -1;
+	}
+
return 0;
 }
 
-__attribute__((__weak__)) int tests_global_term(void)
+ODP_WEAK_SYMBOL int tests_global_term(void)
 {
+	if (0 != odp_term_local()) {
+		fprintf(stderr, "error: odp_term_local() failed.\n");
+		return -1;
+	}
+
+	if (0 != odp_term_global()) {
+		fprintf(stderr, "error: odp_term_global() failed.\n");
+		return -1;
+	}
+
return 0;
 }
 
@@ -56,18 +75,8 @@ int main(void)
 	printf("\tODP API version: %s\n", odp_version_api_str());
 	printf("\tODP implementation version: %s\n", odp_version_impl_str());
 
-	if (0 != odp_init_global(NULL, NULL)) {
-		fprintf(stderr, "error: odp_init_global() failed.\n");
+	if (0 != tests_global_init())
 		return -1;
-	}
-	if (0 != odp_init_local()) {
-		fprintf(stderr, "error: odp_init_local() failed.\n");
-		return -1;
-	}
-
-   ret = tests_global_init();
-   if (ret)
-   return ret;
 
CU_set_error_action(CUEA_ABORT);
 
@@ -83,15 +92,5 @@ int main(void)
if (0 != tests_global_term())
return -1;
 
-	if (0 != odp_term_local()) {
-		fprintf(stderr, "error: odp_term_local() failed.\n");
-		return -1;
-	}
-
-	if (0 != odp_term_global()) {
-		fprintf(stderr, "error: odp_term_global() failed.\n");
-		return -1;
-	}
-
return (ret) ? -1 : 0;
 }
diff --git a/test/validation/init/odp_init.c b/test/validation/init/odp_init.c
index 1332757..082e7c2 100644
--- a/test/validation/init/odp_init.c
+++ b/test/validation/init/odp_init.c
@@ -7,10 +7,23 @@
 #include <stdarg.h>
 #include <odp.h>
 #include <CUnit/Basic.h>
+#include "odp_cunit_common.h"
 
 #define DEFAULT_MSG_POOL_SIZE  (4 * 1024 * 1024)
 #define DEFAULT_MSG_SIZE   (8)
 
+/* overwrite common default so as not to perform odp init in main */
+int tests_global_init(void)
+{
+   return 0;
+}
+
+/* overwrite common default so as not to perform odp term in main */
+int tests_global_term(void)
+{
+   return 0;
+}
+
 static void test_odp_init_global(void)
 {
int status;
@@ -31,24 +44,3 @@ CU_SuiteInfo odp_testsuites[] = {
 	{"Init", NULL, NULL, NULL, NULL, test_odp_init},
CU_SUITE_INFO_NULL,
 };
-
-int main(void)
-{
-   int ret;
-
-	printf("\tODP API version: %s\n", odp_version_api_str());
-	printf("\tODP implementation 

Re: [lng-odp] [RFC] Add ipc.h

2015-05-20 Thread Benoît Ganne

I didn't mean to use D-Bus nor CORBA :)

CORBA, ain't it dead yet?


I am sure it is still haunting corporate systems...


Actually kdbus (D-bus re-implemented in the kernel) looks interesting:
https://lwn.net/Articles/580194/
All kernels need proper IPC/message passing support.


Sure. But do not bet all your money on kdbus yet though ;) 
http://lwn.net/Articles/641275/


ben


Re: [lng-odp] [RFC] Add ipc.h

2015-05-20 Thread Benoît Ganne

Hi Maxim,


 From my understanding, what Maxim is proposing is closer to what I was
trying to achieve. The main differences of my proposal vs Maxim's proposal
were:
  - use a more POSIX namespace approach for naming resources (e.g.
/ipc/... vs ipc_...)

I see the names of pktio interfaces as platform specific so each
platform can use whatever syntax it wants.

yes, it's platform specific. In your case you can use those names. Or send
a patch once my version is merged, to have the same names in linux-generic.


Well, does anyone (apart from me) see any value in doing that? If not, 
we will keep it completely implementation specific.



  - extend pktio to allow unidirectional communication to save HW
resources

A slight tweak to the current packet_io API. Post a patch.

v4 was unidirectional, then I racked my brain over how to make it
bidirectional. If we need a single-direction pktio then you can simply
not implement recv or send in your specific packet I/O and not call the
corresponding functions from the application. Maybe flags are not even
needed, whether it's uni- or bidirectional.


I need to know whether a pktio is read-only, write-only or read-write at 
creation (open()) because on our hardware, using a pktio for RX consumes 
HW resources. So if I only need a write-only pktio, I would like to be 
able to create it that way, because it avoids consuming HW resources 
needlessly.
In our specific case we could merge the read-write and read-only cases, 
but I think it is worth supporting all 3 cases for completeness (maybe 
another HW will have different requirements). See the hypothetical sketch 
below.
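
A sketch of what such an open-time hint could look like. Everything here
(the odp_pktio_dir_e enum and odp_pktio_open_dir()) is hypothetical, not
existing ODP API:

    /* Hypothetical direction hint at open time: a TX-only pktio lets
     * the implementation skip allocating RX hardware resources. */
    typedef enum {
            ODP_PKTIO_DIR_RX,  /* receive only */
            ODP_PKTIO_DIR_TX,  /* transmit only */
            ODP_PKTIO_DIR_RXTX /* both: today's behavior */
    } odp_pktio_dir_e;

    odp_pktio_t odp_pktio_open_dir(const char *dev, odp_pool_t pool,
                                   odp_pktio_dir_e dir);

    /* TX-only: no rx pool is needed, so none is passed */
    odp_pktio_t tx = odp_pktio_open_dir("ipc:stage2", ODP_POOL_INVALID,
                                        ODP_PKTIO_DIR_TX);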


Thanks,
ben


Re: [lng-odp] [RFC] Add ipc.h

2015-05-20 Thread Benoît Ganne

 From my understanding, what Maxim is proposing is closer to what I was
trying to achieve. The main differences of my proposal vs Maxim's proposal
were:
  - use a more POSIX namespace approach for naming resources (e.g.
/ipc/... vs ipc_...)

I see the names of pktio interfaces as platform specific so each
platform can use whatever syntax it wants.


Fair enough :).


  - extend pktio to allow unidirectional communication to save HW resources

A slight tweak to the current packet_io API. Post a patch.


Will do.


An egress queue leads to a (transmit-only) pktio interface which then
can magically transport packets to another (receive-only) pktio
interface in another AS. That receive pktio interface uses a pool in
the AS to allocate packet buffers from. Received buffers can be put on
an ingress queue. So the application code just sees the queues; only
the main logic needs to handle the packet pipe pktio interfaces.


Ah OK. Yes, my idea was to use standard pktio mechanisms (so either 
through the queue or burst APIs). We could provide helpers which set up 
those pktios and just expose queues, along the lines of the sketch below.
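
A sketch of such a helper for the receive side, assuming the 1.0-era
default-queue calls (odp_queue_create() with ODP_QUEUE_TYPE_PKTIN and
odp_pktio_inq_setdef()) and an illustrative "ipc:" device name:

    /* Hide the ipc pktio behind a plain queue: the caller only ever
     * dequeues packets and never touches the pktio itself. */
    odp_queue_t ipc_rx_queue(const char *dev, odp_pool_t pool)
    {
            odp_pktio_t pktio = odp_pktio_open(dev, pool);
            odp_queue_t inq;

            if (pktio == ODP_PKTIO_INVALID)
                    return ODP_QUEUE_INVALID;

            inq = odp_queue_create("ipc-inq", ODP_QUEUE_TYPE_PKTIN, NULL);
            if (inq == ODP_QUEUE_INVALID ||
                odp_pktio_inq_setdef(pktio, inq) != 0)
                    return ODP_QUEUE_INVALID;

            return inq;
    }

    /* usage: odp_queue_t q = ipc_rx_queue("ipc:stage1", my_pool); */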



Thanks,
ben