Re: [lng-odp] [PATCH] odp_shared_memory.h: Document default value

2014-10-22 Thread Savolainen, Petri (NSN - FI/Espoo)

The default is 0, and it covers all current and future flags. There's no point in defining
ODP_SHM_SW_AND_HW; that case is already implied whenever the flag value does not include
ODP_SHM_SW_ONLY. This is how flag parameters work in general.
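For example, a minimal sketch against the odp_shm_reserve() prototype documented in this
header (error handling omitted):

#include <odp.h>

static void shm_flags_example(void)
{
	/* 0 = default: both SW and HW access, not shared with external
	 * processes */
	void *buf = odp_shm_reserve("default_shm", 64 * 1024,
				    ODP_CACHE_LINE_SIZE, 0);

	/* anything else is expressed by OR'ing ODP_SHM_* flags together */
	void *sw_buf = odp_shm_reserve("sw_shm", 64 * 1024,
				       ODP_CACHE_LINE_SIZE,
				       ODP_SHM_SW_ONLY | ODP_SHM_PROC);

	(void)buf;
	(void)sw_buf;
}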

-Petri

> -Original Message-
> From: lng-odp-boun...@lists.linaro.org [mailto:lng-odp-
> boun...@lists.linaro.org] On Behalf Of ext Mike Holmes
> Sent: Wednesday, October 22, 2014 10:50 PM
> To: lng-odp@lists.linaro.org
> Subject: [lng-odp] [PATCH] odp_shared_memory.h: Document default value
> 
> Signed-off-by: Mike Holmes 
> ---
> Although adding the default description to the list of legal #defined
> values
> helps the reader understand what the default means, it looks like this
> could be more clearly coded as an enum.
> 
>  platform/linux-generic/include/api/odp_shared_memory.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/platform/linux-generic/include/api/odp_shared_memory.h
> b/platform/linux-generic/include/api/odp_shared_memory.h
> index d8d40dd..d2965af 100644
> --- a/platform/linux-generic/include/api/odp_shared_memory.h
> +++ b/platform/linux-generic/include/api/odp_shared_memory.h
> @@ -34,6 +34,7 @@ extern "C" {
>   */
> 
>  /* Share level */
> +#define ODP_SHM_SW_AND_HW   0x0 /**< Both SW and HW access */
>  #define ODP_SHM_SW_ONLY 0x1 /**< Application SW only, no HW access */
>  #define ODP_SHM_PROC0x2 /**< Share with external processes */
> 
> @@ -64,7 +65,7 @@ typedef struct odp_shm_info_t {
>   * @param name   Name of the block (maximum ODP_SHM_NAME_LEN - 1 chars)
>   * @param size   Block size in bytes
>   * @param align  Block alignment in bytes
> - * @param flags  Shared mem parameter flags (ODP_SHM_*). Default value is 0.
> + * @param flags  Shared mem parameter flags (ODP_SHM_*). Default value is ODP_SHM_SW_AND_HW
>   *
>   * @return Pointer to the reserved block, or NULL
>   */
> --
> 1.9.1
> 
> 
> ___
> lng-odp mailing list
> lng-odp@lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


[lng-odp] [Bug 805] ODP-OVS:virtual memory exhausted problem

2014-10-22 Thread bugzilla-daemon
https://bugs.linaro.org/show_bug.cgi?id=805

Mike Holmes  changed:

           What           |Removed |Added
 ---------------------------------------------------------
           CC             |        |lng-odp@lists.linaro.org

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


[lng-odp] [PATCH] odp_shared_memory.h: Document default value

2014-10-22 Thread Mike Holmes
Signed-off-by: Mike Holmes 
---
Although adding the default description to the list of legal #defined values
helps the reader understand what the default means, it looks like this
could be more clearly coded as an enum.
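Purely as an illustration of that (hypothetical, not part of this patch), the share
level could be expressed as:

typedef enum {
	ODP_SHM_SW_AND_HW = 0, /**< Both SW and HW access (default) */
	ODP_SHM_SW_ONLY   = 1, /**< Application SW only, no HW access */
	ODP_SHM_PROC      = 2  /**< Share with external processes */
} odp_shm_share_t;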

 platform/linux-generic/include/api/odp_shared_memory.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/platform/linux-generic/include/api/odp_shared_memory.h 
b/platform/linux-generic/include/api/odp_shared_memory.h
index d8d40dd..d2965af 100644
--- a/platform/linux-generic/include/api/odp_shared_memory.h
+++ b/platform/linux-generic/include/api/odp_shared_memory.h
@@ -34,6 +34,7 @@ extern "C" {
  */
 
 /* Share level */
+#define ODP_SHM_SW_AND_HW   0x0 /**< Both SW and HW access */
 #define ODP_SHM_SW_ONLY 0x1 /**< Application SW only, no HW access */
 #define ODP_SHM_PROC0x2 /**< Share with external processes */
 
@@ -64,7 +65,7 @@ typedef struct odp_shm_info_t {
  * @param name   Name of the block (maximum ODP_SHM_NAME_LEN - 1 chars)
  * @param size   Block size in bytes
  * @param align  Block alignment in bytes
- * @param flags  Shared mem parameter flags (ODP_SHM_*). Default value is 0.
+ * @param flags  Shared mem parameter flags (ODP_SHM_*). Default value is ODP_SHM_SW_AND_HW
  *
  * @return Pointer to the reserved block, or NULL
  */
-- 
1.9.1


___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCHv2 1/2] doxygen: add grouping

2014-10-22 Thread Mike Holmes
On 21 October 2014 17:12, Anders Roxell  wrote:

> add submodules to gather the API docs.
>
> Signed-off-by: Anders Roxell 
>

Reviewed-and-Tested-by: Mike Holmes 


> ---
>  platform/linux-generic/include/api/odp_align.h  | 8 
>  platform/linux-generic/include/api/odp_atomic.h | 8 
>  platform/linux-generic/include/api/odp_barrier.h| 7 +++
>  platform/linux-generic/include/api/odp_buffer.h | 7 +++
>  platform/linux-generic/include/api/odp_buffer_pool.h| 8 
>  platform/linux-generic/include/api/odp_byteorder.h  | 8 
>  platform/linux-generic/include/api/odp_classification.h | 9 +
>  platform/linux-generic/include/api/odp_compiler.h   | 9 +
>  platform/linux-generic/include/api/odp_config.h | 8 
>  platform/linux-generic/include/api/odp_coremask.h   | 9 +
>  platform/linux-generic/include/api/odp_crypto.h | 9 +
>  platform/linux-generic/include/api/odp_debug.h  | 9 +
>  platform/linux-generic/include/api/odp_hints.h  | 7 +++
>  platform/linux-generic/include/api/odp_init.h   | 7 +++
>  platform/linux-generic/include/api/odp_packet.h | 7 +++
>  platform/linux-generic/include/api/odp_packet_flags.h   | 9 +
>  platform/linux-generic/include/api/odp_packet_io.h  | 9 +
>  platform/linux-generic/include/api/odp_queue.h  | 7 +++
>  platform/linux-generic/include/api/odp_rwlock.h | 9 +
>  platform/linux-generic/include/api/odp_schedule.h   | 7 +++
>  platform/linux-generic/include/api/odp_shared_memory.h  | 8 
>  platform/linux-generic/include/api/odp_spinlock.h   | 7 +++
>  platform/linux-generic/include/api/odp_sync.h   | 7 +++
>  platform/linux-generic/include/api/odp_system_info.h| 6 ++
>  platform/linux-generic/include/api/odp_thread.h | 6 ++
>  platform/linux-generic/include/api/odp_ticketlock.h | 6 ++
>  platform/linux-generic/include/api/odp_time.h   | 8 
>  platform/linux-generic/include/api/odp_timer.h  | 7 +++
>  platform/linux-generic/include/api/odp_version.h| 6 ++
>  29 files changed, 222 insertions(+)
>
> diff --git a/platform/linux-generic/include/api/odp_align.h
> b/platform/linux-generic/include/api/odp_align.h
> index a93dd24..5c18b16 100644
> --- a/platform/linux-generic/include/api/odp_align.h
> +++ b/platform/linux-generic/include/api/odp_align.h
> @@ -18,6 +18,11 @@
>  extern "C" {
>  #endif
>
> +/** @addtogroup odp_compiler_optim
> + *  Macros that allow cache line size configuration, check that
> + *  alignment is a power of two etc.
> + *  @{
> + */
>
>  #ifdef __GNUC__
>
> @@ -174,6 +179,9 @@ extern "C" {
>  #define ODP_VAL_IS_POWER_2(x) ((((x)-1) & (x)) == 0)
>
>
> +/**
> + * @}
> + */
>
>  #ifdef __cplusplus
>  }
> diff --git a/platform/linux-generic/include/api/odp_atomic.h
> b/platform/linux-generic/include/api/odp_atomic.h
> index 0cc4cf4..213c81f 100644
> --- a/platform/linux-generic/include/api/odp_atomic.h
> +++ b/platform/linux-generic/include/api/odp_atomic.h
> @@ -21,6 +21,10 @@ extern "C" {
>
>  #include 
>
> +/** @addtogroup odp_synchronizers
> + *  Atomic operations.
> + *  @{
> + */
>
>  /**
>   * Atomic integer
> @@ -463,6 +467,10 @@ odp_atomic_cmpset_u64(odp_atomic_u64_t *dst, uint64_t
> exp, uint64_t src)
> return __sync_bool_compare_and_swap(dst, exp, src);
>  }
>
> +/**
> + * @}
> + */
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/platform/linux-generic/include/api/odp_barrier.h
> b/platform/linux-generic/include/api/odp_barrier.h
> index a7b3215..866648f 100644
> --- a/platform/linux-generic/include/api/odp_barrier.h
> +++ b/platform/linux-generic/include/api/odp_barrier.h
> @@ -22,6 +22,10 @@ extern "C" {
>  #include 
>  #include 
>
> +/** @addtogroup odp_synchronizers
> + *  Barrier between threads.
> + *  @{
> + */
>
>  /**
>   * ODP execution barrier
> @@ -48,6 +52,9 @@ void odp_barrier_init_count(odp_barrier_t *barrier, int
> count);
>   */
>  void odp_barrier_sync(odp_barrier_t *barrier);
>
> +/**
> + * @}
> + */
>
>  #ifdef __cplusplus
>  }
> diff --git a/platform/linux-generic/include/api/odp_buffer.h
> b/platform/linux-generic/include/api/odp_buffer.h
> index d8577fd..289e0eb 100644
> --- a/platform/linux-generic/include/api/odp_buffer.h
> +++ b/platform/linux-generic/include/api/odp_buffer.h
> @@ -22,6 +22,10 @@ extern "C" {
>  #include 
>
>
> +/** @defgroup odp_buffer ODP BUFFER
> + *  Operations on a buffer.
> + *  @{
> + */
>
>  /**
>   * ODP buffer
> @@ -83,6 +87,9 @@ int odp_buffer_is_valid(odp_buffer_t buf);
>   */
>  void odp_buffer_print(odp_buffer_t buf);
>
> +/**
> + * @}
> + */
>
>  #ifdef __cplusplus
>  }
> diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h
> b/platform/linux-generic/include/api/odp_buffer_pool.h
> index fe88898..d04abf0 100644
> --- a/platform/linux-gener

[lng-odp] [Bug 809] implicit declaration of function 'ftruncate' with -std=c99

2014-10-22 Thread bugzilla-daemon
https://bugs.linaro.org/show_bug.cgi?id=809

Mike Holmes  changed:

           What           |Removed |Added
 ---------------------------------------------------------
           CC             |        |lng-odp@lists.linaro.org

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] ipc linux-generic implementation based on pktio

2014-10-22 Thread Jerin Jacob
On Wed, Oct 22, 2014 at 06:15:37PM +0400, Maxim Uvarov wrote:
> On 10/22/2014 06:02 PM, Jerin Jacob wrote:
> >I think, you can create the queue in odp_pktio_open and attach it and then
> >expose the queues through odp_pktio_outq_getdef.
> Yes, I can go with:
> odp_pktio_outq_getdef(pktio) = ipc_queue;
> 
> In that case odp_pktio_outq_setdef is not needed.
> 
> >
> >Other point is the use of hard-coded name like "ipc_pktio" in application
> >IMO its not portable.
> >I think we should have API to enumerate all PKTIO in the given platform
> >along with capabilities(like 10G, IPC) then application can choose the
> >pktio for given capability.There can be a case of multiple IPC PKTIOs in
> >a platform as well.
> It's not hard coded. It's simple name provided to odp_pktio_open(),
> odp_pktio_lookup(),
> you can change it to any other.  In example I named it ipc_pktio. You can
> create as much pktios
> as you can but with different names. I do not see limitation here.

I agree, it's not a limitation. From an ODP platform implementer's perspective,
we would like to avoid (if possible) any platform-specific
change in the examples and test directories and maintain them separately.
I suggest at least taking a command-line argument for the pktio name,
or let's enumerate all the pktios in a given platform and use
the first pktio with IPC capability.
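Something along these lines, purely as an illustration (the enumeration names below --
odp_pktio_count(), odp_pktio_info(), ODP_PKTIO_CAP_IPC -- are hypothetical, nothing
like them exists in ODP today):

/* hypothetical sketch: pick the first pktio with IPC capability */
odp_pktio_t open_ipc_pktio(odp_buffer_pool_t pool)
{
	int i, n = odp_pktio_count();

	for (i = 0; i < n; i++) {
		odp_pktio_info_t info;

		odp_pktio_info(i, &info);
		if (info.capabilities & ODP_PKTIO_CAP_IPC)
			return odp_pktio_open(info.name, pool);
	}

	return ODP_PKTIO_INVALID;
}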

> 
> Maxim.
> 
> 

___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH 2/2] gitignore: remove odp_packet_netmap

2014-10-22 Thread Ciprian Barbu
This one is ok too, of course.

On Wed, Oct 22, 2014 at 3:40 PM, Anders Roxell  wrote:
> Signed-off-by: Anders Roxell 
> ---
>  .gitignore | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/.gitignore b/.gitignore
> index 6342e34..a721904 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -34,7 +34,6 @@ obj/
>  build/
>  odp_example
>  odp_packet
> -odp_packet_netmap
>  odp_atomic
>  odp_shm
>  odp_ring
> --
> 2.1.0
>
>
> ___
> lng-odp mailing list
> lng-odp@lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH 1/2] linux-generic: remove odp_packet_netmap.h

2014-10-22 Thread Ciprian Barbu
On Wed, Oct 22, 2014 at 3:40 PM, Anders Roxell  wrote:
> Signed-off-by: Anders Roxell 

It's trivial but I tested it anyway. You can add me as reviewer.

> ---
>  platform/linux-generic/include/odp_packet_netmap.h | 67 
> --
>  1 file changed, 67 deletions(-)
>  delete mode 100644 platform/linux-generic/include/odp_packet_netmap.h
>
> diff --git a/platform/linux-generic/include/odp_packet_netmap.h 
> b/platform/linux-generic/include/odp_packet_netmap.h
> deleted file mode 100644
> index 1ab50d0..000
> --- a/platform/linux-generic/include/odp_packet_netmap.h
> +++ /dev/null
> @@ -1,67 +0,0 @@
> -/* Copyright (c) 2013, Linaro Limited
> - * All rights reserved.
> - *
> - * SPDX-License-Identifier: BSD-3-Clause
> - */
> -
> -#ifndef ODP_PACKET_NETMAP_H
> -#define ODP_PACKET_NETMAP_H
> -
> -#include 
> -
> -#include 
> -#include 
> -#include 
> -
> -#include 
> -#include 
> -#include 
> -#include 
> -
> -#include 
> -
> -#define ODP_NETMAP_MODE_HW 0
> -#define ODP_NETMAP_MODE_SW 1
> -
> -#define NETMAP_BLOCKING_IO
> -
> -/** Packet socket using netmap mmaped rings for both Rx and Tx */
> -typedef struct {
> -   odp_buffer_pool_t pool;
> -   size_t max_frame_len; /**< max frame len = buf_size - sizeof(pkt_hdr) 
> */
> -   size_t frame_offset; /**< frame start offset from start of pkt buf */
> -   size_t buf_size; /**< size of buffer payload in 'pool' */
> -   int netmap_mode;
> -   struct nm_desc_t *nm_desc;
> -   uint32_t begin;
> -   uint32_t end;
> -   struct netmap_ring *rxring;
> -   struct netmap_ring *txring;
> -   odp_queue_t tx_access; /* Used for exclusive access to send packets */
> -   uint32_t if_flags;
> -   char ifname[32];
> -} pkt_netmap_t;
> -
> -/**
> - * Configure an interface to work in netmap mode
> - */
> -int setup_pkt_netmap(pkt_netmap_t * const pkt_nm, const char *netdev,
> -odp_buffer_pool_t pool, netmap_params_t *nm_params);
> -
> -/**
> - * Switch interface from netmap mode to normal mode
> - */
> -int close_pkt_netmap(pkt_netmap_t * const pkt_nm);
> -
> -/**
> - * Receive packets using netmap
> - */
> -int recv_pkt_netmap(pkt_netmap_t * const pkt_nm, odp_packet_t pkt_table[],
> -   unsigned len);
> -
> -/**
> - * Send packets using netmap
> - */
> -int send_pkt_netmap(pkt_netmap_t * const pkt_nm, odp_packet_t pkt_table[],
> -   unsigned len);
> -#endif
> --
> 2.1.0
>
>
> ___
> lng-odp mailing list
> lng-odp@lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] ipc linux-generic implementation based on pktio

2014-10-22 Thread Maxim Uvarov

On 10/22/2014 06:02 PM, Jerin Jacob wrote:

I think, you can create the queue in odp_pktio_open and attach it and then
expose the queues through odp_pktio_outq_getdef.

Yes, I can go with:
odp_pktio_outq_getdef(pktio) = ipc_queue;

In that case odp_pktio_outq_setdef is not needed.
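For reference, the application side would then look roughly like this (a sketch only,
assuming the current odp_pktio_open(name, pool) prototype; error handling omitted):

/* sketch: open the IPC pktio and send through the default output queue
 * that the implementation attached in odp_pktio_open() */
static int ipc_send(odp_buffer_pool_t pool, odp_buffer_t buf)
{
	odp_pktio_t ipc_pktio = odp_pktio_open("ipc_pktio", pool);
	odp_queue_t outq = odp_pktio_outq_getdef(ipc_pktio);

	return odp_queue_enq(outq, buf);
}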



The other point is the use of a hard-coded name like "ipc_pktio" in the application;
IMO it's not portable.
I think we should have an API to enumerate all PKTIOs in the given platform
along with capabilities (like 10G, IPC), so the application can choose the
pktio for a given capability. There can be a case of multiple IPC PKTIOs in
a platform as well.
It's not hard-coded. It's simply a name provided to odp_pktio_open() /
odp_pktio_lookup(); you can change it to anything else. In the example I named it
ipc_pktio. You can create as many pktios as you want, just with different names.
I do not see a limitation here.

Maxim.



___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] ipc linux-generic implementation based on pktio

2014-10-22 Thread Bill Fischofer
It's in the ODP Buffer Design doc (attached):
 ODP v1.0 Buffer Management API Design - Final

On Wed, Oct 22, 2014 at 9:04 AM, Maxim Uvarov 
wrote:

> On 10/22/2014 05:51 PM, Bill Fischofer wrote:
>
>> The currently defined syntax for odp_buffer_pool_create is:
>>
>> odp_buffer_pool_t odp_buffer_pool_create(const char *name,
>>                                          odp_buffer_pool_param_t *params,
>>                                          odp_buffer_pool_init_t *init_params);
>>
>> Any proposed changes to include an odp_shm_t, etc. should be part of the
>> odp_buffer_pool_param_t.
>>
>>
> can you point me to where is that described?
>
> Maxim.
>
>  On Wed, Oct 22, 2014 at 8:41 AM, Maxim Uvarov > > wrote:
>>
>> On 10/22/2014 05:14 PM, Savolainen, Petri (NSN - FI/Espoo) wrote:
>>
>> I think it might be better to add extend
>> odp_buffer_pool_create:
>>
>> odp_buffer_pool_t odp_buffer_pool_create(const char *name,
>>void
>> *base_addr, uint64_t size,
>>size_t
>> buf_size, size_t
>> buf_align,
>>int buf_type)
>>
>> provide base_addr as remote_pool_mapped_base address and
>> buf_type as:
>> ODP_BUFFER_TYPE_IPC
>>
>> In that case I will not allocate memory, will link pktio
>> with remote
>> base_addr.
>> Will check how is it implementable.
>>
>> Will send new version of patch.
>>
>> Maxim.
>>
>> After my latest shm changes (reference shm with handle), I was
>> going change buffer pool create to use that handle. It should
>> be done still, and after that pool create can see e.g. the shm
>> flags. No need to " remote_pool_mapped_base address", right?
>>
>> odp_buffer_pool_t odp_buffer_pool_create(const char *name,
>>   odp_shm_t shm,
>>   size_t buf_size,
>> size_t buf_align, int buf_type)
>>
>> -Petri
>>
Yes, in that case it will solve that problem. But then we will
lose the ability for the app to allocate memory for the pool. Do we need both
a handle and a pointer? Or do we need to allocate memory for the pool
from the application for any reason?
>>
>> Maxim.
>>
>>
>>
>>
>>
>> ___
>> lng-odp mailing list
>> lng-odp@lists.linaro.org 
>> http://lists.linaro.org/mailman/listinfo/lng-odp
>>
>>
>>
>
___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] ipc linux-generic implementation based on pktio

2014-10-22 Thread Maxim Uvarov

On 10/22/2014 05:51 PM, Bill Fischofer wrote:

The currently defined syntax for odp_buffer_pool_create is:

odp_buffer_pool_t odp_buffer_pool_create(const char *name,
                                         odp_buffer_pool_param_t *params,
                                         odp_buffer_pool_init_t *init_params);

Any proposed changes to include an odp_shm_t, etc. should be part of 
the odp_buffer_pool_param_t.




can you point me to where is that described?

Maxim.

On Wed, Oct 22, 2014 at 8:41 AM, Maxim Uvarov > wrote:


On 10/22/2014 05:14 PM, Savolainen, Petri (NSN - FI/Espoo) wrote:

I think it might be better to extend
odp_buffer_pool_create:

odp_buffer_pool_t odp_buffer_pool_create(const char *name,
   void
*base_addr, uint64_t size,
   size_t
buf_size, size_t
buf_align,
   int buf_type)

provide base_addr as remote_pool_mapped_base address and
buf_type as:
ODP_BUFFER_TYPE_IPC

In that case I will not allocate memory, will link pktio
with remote
base_addr.
Will check how is it implementable.

Will send new version of patch.

Maxim.

After my latest shm changes (reference shm with a handle), I was
going to change buffer pool create to use that handle. It should
still be done, and after that pool create can see e.g. the shm
flags. No need for a "remote_pool_mapped_base address", right?

odp_buffer_pool_t odp_buffer_pool_create(const char *name,
  odp_shm_t shm,
  size_t buf_size,
size_t buf_align, int buf_type)

-Petri

Yes, in that case it will solve that problem. But then we will
lose the ability for the app to allocate memory for the pool. Do we need both
a handle and a pointer? Or do we need to allocate memory for the pool
from the application for any reason?

Maxim.





___
lng-odp mailing list
lng-odp@lists.linaro.org 
http://lists.linaro.org/mailman/listinfo/lng-odp





___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] ipc linux-generic implementation based on pktio

2014-10-22 Thread Jerin Jacob
On Wed, Oct 22, 2014 at 04:36:24PM +0400, Maxim Uvarov wrote:
> On 10/22/2014 03:53 PM, Jerin Jacob wrote:
> >Some review comments,
> >
> >1) Any specific reason to choose pktio instead of queue for IPC abstraction ?
> 
> Yes. First versions I did for queues. Refer to ipc v7 patch in mailing list.
> After that most everyone complained that it has to be on packet i/o level.
> And the main reason for that is that in hardware accelerated packet i/o
> communication between different processes has to be easy. I.e. one process
> just sends packet to hw and other receives and it's hw deal how to deliver
> this packet. In hw case even shared memory is not a requirement.

OK, makes sense.
If "non-shared memory" is the requirement then PKTIO is the correct abstraction.

> 
> >2) How classification fits into the equation if we choose the IPC through 
> >pktio ?
> >API's like odp_pktio_pmr_cos..
> >for non linux-generic platform, May be we can map loopback port as IPC pktio
> >to implement complete ODP pktio capabilities.
> >But does application needs such capability ?
> >or just a "queue" is fine to meet the use cases ?
> 
> queue has to be fine. classifier is in the same process, where queue is
> shared. If we will move classifier to separate odp app then IPC is needed.
>

If we are abstracting IPC with ODP PKTIO then the platform should implement
all the ODP PKTIO capabilities (like attaching to the classifier, etc.). Is my
understanding correct? If there is any expected limitation on specific
usage then we should document it.
 
> >3) Currently odp_pktio_open creates the queue and application can get
> >the created queues through odp_pktio_outq_getdef.
> >Do we really need to introduce new "odp_pktio_outq_setdef" API now ?
> 
> I need somehow to link queue and pktio. I.e. say to queue to use specific
> output function.
> For that reason I implemented odp_pktio_outq_setdef.

I think, you can create the queue in odp_pktio_open and attach it and then
expose the queues through odp_pktio_outq_getdef.

The other point is the use of a hard-coded name like "ipc_pktio" in the application;
IMO it's not portable.
I think we should have an API to enumerate all PKTIOs in the given platform
along with capabilities (like 10G, IPC), so the application can choose the
pktio for a given capability. There can be a case of multiple IPC PKTIOs in
a platform as well.

> 
> >4) Do we really need to introduce new API "odp_shm_lookup_ipc" ?
> >Is possible to abstract through existing odp_shm_lookup API ?
> 
> That is good question. Probably we can add some flag to odp_shm_lookup where
> to search, in local table or shared memory.
> 
> >5) Assuming pool value == 0 for odp_pktio_open as IPC port is not portable.
> >Need to introduce macro or standard name for IPC
> I think it might be better to add extend odp_buffer_pool_create:
> 
> odp_buffer_pool_t odp_buffer_pool_create(const char *name,
>  void *base_addr, uint64_t size,
>  size_t buf_size, size_t buf_align,
>  int buf_type)
> 
> provide base_addr as remote_pool_mapped_base address and buf_type as:
> ODP_BUFFER_TYPE_IPC
> 
> In that case I will not allocate memory, will link pktio with remote
> base_addr.
> Will check how is it implementable.
> 
> Will send new version of patch.
> 
> Maxim.
> 
> 
> 
> 

___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] ipc linux-generic implementation based on pktio

2014-10-22 Thread Savolainen, Petri (NSN - FI/Espoo)


> -Original Message-
> From: ext Maxim Uvarov [mailto:maxim.uva...@linaro.org]
> Sent: Wednesday, October 22, 2014 4:41 PM
> To: Savolainen, Petri (NSN - FI/Espoo); Jerin Jacob
> Cc: lng-odp@lists.linaro.org
> Subject: Re: [lng-odp] [PATCH] ipc linux-generic implementation based on
> pktio
> 
> On 10/22/2014 05:14 PM, Savolainen, Petri (NSN - FI/Espoo) wrote:
> >> I think it might be better to add extend odp_buffer_pool_create:
> >>
> >> odp_buffer_pool_t odp_buffer_pool_create(const char *name,
> >>void *base_addr, uint64_t
> size,
> >>size_t buf_size, size_t
> >> buf_align,
> >>int buf_type)
> >>
> >> provide base_addr as remote_pool_mapped_base address and buf_type as:
> >> ODP_BUFFER_TYPE_IPC
> >>
> >> In that case I will not allocate memory, will link pktio with remote
> >> base_addr.
> >> Will check how is it implementable.
> >>
> >> Will send new version of patch.
> >>
> >> Maxim.
> > After my latest shm changes (reference shm with handle), I was going
> change buffer pool create to use that handle. It should be done still, and
> after that pool create can see e.g. the shm flags. No need to "
> remote_pool_mapped_base address", right?
> >
> > odp_buffer_pool_t odp_buffer_pool_create(const char *name,
> >   odp_shm_t shm,
> >   size_t buf_size, size_t
> buf_align, int buf_type)
> >
> > -Petri
> >
> Yes, in that case it will solve that problem. But after that we will
> lost ability to app allocate memory for pool. Do we need both handler
> and pointer? Or do we need to allocate memory for pool from application
> for any reason?
> 
> Maxim.
> 
> 

This would be the version where the application gives the memory. More commonly
the application could leave shm invalid and let the implementation reserve the
memory for the pool. It's safer if the memory range is under implementation
control.
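In other words, both usages would look roughly like this (a sketch only, nothing here
is merged API yet; ODP_SHM_INVALID stands for a hypothetical "no shm given" value):

/* sketch of the two usages of the proposed odp_buffer_pool_create() */
odp_buffer_pool_t create_pools(odp_shm_t app_shm)
{
	/* 1) application provides the memory via a shm handle */
	odp_buffer_pool_t pool_a =
		odp_buffer_pool_create("pool_a", app_shm, 2048,
				       ODP_CACHE_LINE_SIZE,
				       ODP_BUFFER_TYPE_PACKET);

	/* 2) more common: shm left invalid, implementation reserves memory */
	odp_buffer_pool_t pool_b =
		odp_buffer_pool_create("pool_b", ODP_SHM_INVALID, 2048,
				       ODP_CACHE_LINE_SIZE,
				       ODP_BUFFER_TYPE_PACKET);

	(void)pool_a;
	return pool_b;
}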

-Petri





___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] ipc linux-generic implementation based on pktio

2014-10-22 Thread Bill Fischofer
The currently defined syntax for odp_buffer_pool_create is:

odp_buffer_pool_t odp_buffer_pool_create(const char *name,
                                         odp_buffer_pool_param_t *params,
                                         odp_buffer_pool_init_t *init_params);

Any proposed changes to include an odp_shm_t, etc. should be part of the
odp_buffer_pool_param_t.

On Wed, Oct 22, 2014 at 8:41 AM, Maxim Uvarov 
wrote:

> On 10/22/2014 05:14 PM, Savolainen, Petri (NSN - FI/Espoo) wrote:
>
>> I think it might be better to add extend odp_buffer_pool_create:
>>>
>>> odp_buffer_pool_t odp_buffer_pool_create(const char *name,
>>>void *base_addr, uint64_t
>>> size,
>>>size_t buf_size, size_t
>>> buf_align,
>>>int buf_type)
>>>
>>> provide base_addr as remote_pool_mapped_base address and buf_type as:
>>> ODP_BUFFER_TYPE_IPC
>>>
>>> In that case I will not allocate memory, will link pktio with remote
>>> base_addr.
>>> Will check how is it implementable.
>>>
>>> Will send new version of patch.
>>>
>>> Maxim.
>>>
>> After my latest shm changes (reference shm with handle), I was going
>> change buffer pool create to use that handle. It should be done still, and
>> after that pool create can see e.g. the shm flags. No need to "
>> remote_pool_mapped_base address", right?
>>
>> odp_buffer_pool_t odp_buffer_pool_create(const char *name,
>>   odp_shm_t shm,
>>   size_t buf_size, size_t
>> buf_align, int buf_type)
>>
>> -Petri
>>
>>  Yes, in that case it will solve that problem. But after that we will
> lost ability to app allocate memory for pool. Do we need both handler and
> pointer? Or do we need to allocate memory for pool from application for any
> reason?
>
> Maxim.
>
>
>
>
>
> ___
> lng-odp mailing list
> lng-odp@lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/lng-odp
>
___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [ODP/PATCH v1] ODP Buffer Segment Support API

2014-10-22 Thread Bill Fischofer
I had previously detailed some of the problems that arise if we remove
segmentation support from buffers while trying to keep it as part of
packets.  I'd still like to see a response to these questions.  Given that
we support unsegmented buffers I don't see what the objection is here.
Those that don't want to deal with segments need not deal with them at
all.  That may limit the platforms they can run on, but applications will
always choose which implementations are best suited to their needs.

Bill

On Wed, Oct 22, 2014 at 7:27 AM, Savolainen, Petri (NSN - FI/Espoo) <
petri.savolai...@nsn.com> wrote:

> Hi,
>
> In short, I think we must not bring segmentation support to the buffer
> level "just in case" someone would need it there. Real use cases for
> segmentation are on packet level (large packets, packet
> fragmentation/reassembly, etc), so the feature should be implemented there.
>
> -Petri
>
> > -Original Message-
> > From: ext Ciprian Barbu [mailto:ciprian.ba...@linaro.org]
> > Sent: Wednesday, October 22, 2014 3:00 PM
> > To: Bill Fischofer
> > Cc: Ola Liljedahl; Savolainen, Petri (NSN - FI/Espoo); lng-
> > o...@lists.linaro.org
> > Subject: Re: [lng-odp] [ODP/PATCH v1] ODP Buffer Segment Support API
> >
> > On Wed, Oct 22, 2014 at 2:47 PM, Ciprian Barbu  >
> > wrote:
> > > This thread has been cold for 5 days, so the assumption is that we can
> > > go forward with the design right now. This patch series proposed by
> > > Bala updates some part of the API to the final form of the Buffer
> > > Design Document, we should have it merged if there are no more
> > > objections. For that more people with the right expertise should have
> > > a look at it and get the thread back on track.
> > >
> > > I for example have observed the following issue. All the examples
> > > create buffer pools over shared memory, which doesn't make sense for
> > > some platforms, linux-dpdk for example, which ignores the base_addr
> > > argument altogether. I think we need more clarity on this subject, for
> > > sure the creation of buffer pools will differ from platform to
> > > platform, which migrates to the application responsibility.
> > >
> > > I think we should have a helper function to easily create buffer pools
> > > without worrying too much about the difference in buffer management
> > > between platforms, so that one can write a simple portable application
> > > with no sweat. For the hardcore programmers the API still gives fine
> > > control to buffer management that depending on the platform could
> > > involve additional prerequisites, like creating a shared memory
> > > segment to hold the buffer pool.
> >
> > Ok, so I had another look at the Buffer Management final design. I now
> > see that the option of creating buffer pools from regions has been
> > removed, so in this case things will be simpler for the applications.
> > In other words we should really start working on the full
> > implementation of the API because from there the problem I just stated
> > above (having to create shared memory segments) will disappear.
> >
> > >
> > > On Fri, Oct 17, 2014 at 4:33 PM, Bill Fischofer
> > >  wrote:
> > >> Let's consider the implications of removing segmentation support from
> > >> buffers and only having that concept be part of packets.
> > >>
> > >> The first question that arises is what is the relationship between the
> > >> abstract types odp_packet_t and odp_buffer_t? This is important
> because
> > >> currently we say that packets are allocated from ODP buffer pools, not
> > from
> > >> packet pools.  Do we need a separate odp_packet_pool_t that is used
> for
> > >> packets?
> > >>
> > >> Today, when I allocate a packet I'm allocating a single object that
> > happens
> > >> to be a single buffer object of type ODP_BUFFER_TYPE_PACKET.  But that
> > only
> > >> works if the two objects have compatible semantics (including
> > segmentation).
> > >> If the semantics are not compatible, then an odp_packet_t may in fact
> > be
> > >> composed of multiple odp_buffer_t's because the packet may consist of
> > >> multiple segments and buffers no longer recognize the concept of
> > segments so
> > >> a single buffer can only be a single segment.
> > >>
> > >> So now an odp_packet_segment_t may be an odp_buffer_t but an
> > odp_packet_t in
> > >> fact is some meta-object that is constructed (by whom?) from multiple
> > >> odp_packet_segment_ts that are themselves odp_buffer_ts.  So
> > >> odp_packet_to_buffer() no longer makes sense since there is no longer
> a
> > >> one-to-one correspondence between packets and buffers.  We could have
> > an
> > >> odp_packet_segment_to_buffer() routine instead.
> > >>
> > >> Next question: What about meta data?  If an odp_packet_t is a type of
> > an
> > >> odp_buffer_t then this is very straightforward since all buffer meta
> > data is
> > >> reusable as packet meta data and the packet type can just add its own
> > >> specific meta data to this set.  But if an odp_packe

Re: [lng-odp] [PATCH] ipc linux-generic implementation based on pktio

2014-10-22 Thread Maxim Uvarov

On 10/22/2014 05:14 PM, Savolainen, Petri (NSN - FI/Espoo) wrote:

I think it might be better to extend odp_buffer_pool_create:

odp_buffer_pool_t odp_buffer_pool_create(const char *name,
   void *base_addr, uint64_t size,
   size_t buf_size, size_t
buf_align,
   int buf_type)

provide base_addr as remote_pool_mapped_base address and buf_type as:
ODP_BUFFER_TYPE_IPC

In that case I will not allocate memory, will link pktio with remote
base_addr.
Will check how is it implementable.

Will send new version of patch.

Maxim.

After my latest shm changes (reference shm with a handle), I was going to change buffer pool
create to use that handle. It should still be done, and after that pool create can see
e.g. the shm flags. No need for a "remote_pool_mapped_base address", right?

odp_buffer_pool_t odp_buffer_pool_create(const char *name,
  odp_shm_t shm,
  size_t buf_size, size_t buf_align, 
int buf_type)

-Petri

Yes, in that case it will solve that problem. But then we will
lose the ability for the app to allocate memory for the pool. Do we need both
a handle and a pointer? Or do we need to allocate memory for the pool from the
application for any reason?


Maxim.




___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [ODP/PATCH v1] ODP Buffer Segment Support API

2014-10-22 Thread Alexandru Badicioiu
Thanks for the clarification. Regarding the sharing, does 0 specify that the
application does not require sharing, or is it a requirement for that segment
to be private?

Alex


On 22 October 2014 16:18, Savolainen, Petri (NSN - FI/Espoo) <
petri.savolai...@nsn.com> wrote:

>  0 is the default => with current flag definitions: both SW + HW can
> access,  not shared with external processes.
>
>
>
> -Petri
>
>
>
> *From:* ext Alexandru Badicioiu [mailto:alexandru.badici...@linaro.org]
> *Sent:* Wednesday, October 22, 2014 4:01 PM
> *To:* Ciprian Barbu
> *Cc:* Bill Fischofer; Savolainen, Petri (NSN - FI/Espoo);
> lng-odp@lists.linaro.org
>
> *Subject:* Re: [lng-odp] [ODP/PATCH v1] ODP Buffer Segment Support API
>
>
>
> The option of creating buffer pools out of memory regions does not solve
> existing applications portability problem. The odp_pktio application
> expects packets to be allocated in shared memory regions. Is this a
> semantic that should be satisfied by all platforms?
>
>  odp_shm_create takes a flag argument which has two values for
> linux-generic:
>
> /*
>
>  * Shared memory flags
>
>  */
>
>
>
> /* Share level */
>
> #define ODP_SHM_SW_ONLY 0x1 /**< Application SW only, no HW access */
>
> #define ODP_SHM_PROC0x2 /**< Share with external processes */
>
>
>
> but odp_pktio uses 0. Could we assume that this value has a kind of
> ODP_SHM_PACKET meaning?
>
> We could also explicitly extend this flag list to work across platforms.
>
>
>
> Alex
>
>
>
>
>
>
>
>
>
>
>
> On 22 October 2014 14:59, Ciprian Barbu  wrote:
>
> On Wed, Oct 22, 2014 at 2:47 PM, Ciprian Barbu 
> wrote:
> > This thread has been cold for 5 days, so the assumption is that we can
> > go forward with the design right now. This patch series proposed by
> > Bala updates some part of the API to the final form of the Buffer
> > Design Document, we should have it merged if there are no more
> > objections. For that more people with the right expertise should have
> > a look at it and get the thread back on track.
> >
> > I for example have observed the following issue. All the examples
> > create buffer pools over shared memory, which doesn't make sense for
> > some platforms, linux-dpdk for example, which ignores the base_addr
> > argument altogether. I think we need more clarity on this subject, for
> > sure the creation of buffer pools will differ from platform to
> > platform, which migrates to the application responsibility.
> >
> > I think we should have a helper function to easily create buffer pools
> > without worrying too much about the difference in buffer management
> > between platforms, so that one can write a simple portable application
> > with no sweat. For the hardcore programmers the API still gives fine
> > control to buffer management that depending on the platform could
> > involve additional prerequisites, like creating a shared memory
> > segment to hold the buffer pool.
>
> Ok, so I had another look at the Buffer Management final design. I now
> see that the option of creating buffer pools from regions has been
> removed, so in this case things will be simpler for the applications.
> In other words we should really start working on the full
> implementation of the API because from there the problem I just stated
> above (having to create shared memory segments) will disappear.
>
>
> >
> > On Fri, Oct 17, 2014 at 4:33 PM, Bill Fischofer
> >  wrote:
> >> Let's consider the implications of removing segmentation support from
> >> buffers and only having that concept be part of packets.
> >>
> >> The first question that arises is what is the relationship between the
> >> abstract types odp_packet_t and odp_buffer_t? This is important because
> >> currently we say that packets are allocated from ODP buffer pools, not
> from
> >> packet pools.  Do we need a separate odp_packet_pool_t that is used for
> >> packets?
> >>
> >> Today, when I allocate a packet I'm allocating a single object that
> happens
> >> to be a single buffer object of type ODP_BUFFER_TYPE_PACKET.  But that
> only
> >> works if the two objects have compatible semantics (including
> segmentation).
> >> If the semantics are not compatible, then an odp_packet_t may in fact be
> >> composed of multiple odp_buffer_t's because the packet may consist of
> >> multiple segments and buffers no longer recognize the concept of
> segments so
> >> a single buffer can only be a single segment.
> >>
> >> So now an odp_packet_segment_t may be an odp_buffer_t but an
> odp_packet_t in
> >> fact is some meta-object that is constructed (by whom?) from multiple
> >> odp_packet_segment_ts that are themselves odp_buffer_ts.  So
> >> odp_packet_to_buffer() no longer makes sense since there is no longer a
> >> one-to-one correspondence between packets and buffers.  We could have an
> >> odp_packet_segment_to_buffer() routine instead.
> >>
> >> Next question: What about meta data?  If an odp_packet_t is a type of an
> >> odp_buffer_t then this is very straightforward since all b

Re: [lng-odp] [PATCH] ipc linux-generic implementation based on pktio

2014-10-22 Thread Savolainen, Petri (NSN - FI/Espoo)

> I think it might be better to add extend odp_buffer_pool_create:
> 
> odp_buffer_pool_t odp_buffer_pool_create(const char *name,
>   void *base_addr, uint64_t size,
>   size_t buf_size, size_t
> buf_align,
>   int buf_type)
> 
> provide base_addr as remote_pool_mapped_base address and buf_type as:
> ODP_BUFFER_TYPE_IPC
> 
> In that case I will not allocate memory, will link pktio with remote
> base_addr.
> Will check how is it implementable.
> 
> Will send new version of patch.
> 
> Maxim.

After my latest shm changes (reference shm with a handle), I was going to change
buffer pool create to use that handle. It should still be done, and after that
pool create can see e.g. the shm flags. No need for a "remote_pool_mapped_base
address", right?

odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 odp_shm_t shm,
 size_t buf_size, size_t buf_align, int 
buf_type)

-Petri


___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCHv4] Timer API and and priority queue-based implementation

2014-10-22 Thread Ola Liljedahl
On 13 October 2014 16:24, Savolainen, Petri (NSN - FI/Espoo) <
petri.savolai...@nsn.com> wrote:

>
>
>
>
> *From:* ext Ola Liljedahl [mailto:ola.liljed...@linaro.org]
> *Sent:* Friday, October 10, 2014 4:34 PM
> *To:* Savolainen, Petri (NSN - FI/Espoo)
> *Cc:* lng-odp@lists.linaro.org
> *Subject:* Re: [lng-odp] [PATCHv4] Timer API and and priority queue-based
> implementation
>
>
>
> On 10 October 2014 10:05, Savolainen, Petri (NSN - FI/Espoo) <
> petri.savolai...@nsn.com> wrote:
>
>
>
>
>
> *From:* ext Ola Liljedahl [mailto:ola.liljed...@linaro.org]
> *Sent:* Thursday, October 09, 2014 6:10 PM
> *To:* Savolainen, Petri (NSN - FI/Espoo)
> *Cc:* lng-odp@lists.linaro.org
> *Subject:* Re: [lng-odp] [PATCHv4] Timer API and and priority queue-based
> implementation
>
>
>
>
> Need a success/fail return value? User would need to know if the timeout
> is still coming, or not...
>
>  Why? When (if) the timeout is received, odp_timer_tmo_status() will tell
> the application if the timeout is fresh or stale (or orphaned). Which use
> case requires the application to immediately know if the timeout was
> successfully cancelled?
>
>
>
> For example, if I have a number of re-transmission timers. Outgoing
> packets and incoming ack packets are handled in the same atomic queue. Also
> timeouts would be sent to the same queue. Mostly (99.99% of the packets)
> I’ll receive the ack packet before the tmo expires. So I’ll cancel timer
> during ack packet processing. Now if cancel() does not tell me whether the
> operation was successful (usually it is), how I’d know when I can reuse the
> timer for some other packet?
>
>
>
> If cancel failed, it’s OK - I’ll receive a stale tmo later and there I can
> mark the timer reusable again. If cancel succeeded, I don’t get a
> confirmation of that, ever. I don’t want the timer send me a stale tmo on
> every cancel, since that would increase per packet event rate 50% (from 2
> to 3).
>
>
>
> So, the cancel status is needed. Right?
>
>  Probably we mean different things by cancel failing. In my API and
> implementation, cancel will always succeed in the sense that any
> outstanding timeout will never be seen as a fresh timeout. The only
> question is whether we will be able to prevent the timeout from being
> received and we can't do that if the timer has already expired. But a sent
> timeout will be detected as stale.
>
>
>
> The timer can be reused and reset even if the cancel operation "failed".
> It is only the last set or cancel operating that defines the state of the
> timer and the freshness of any received timeouts. When the timeout is
> returned it will be re-associated with the timer and if the timer has
> already expired, the timeout will be enqueued for immediate delivery.
>
>
>
> In your specific example, the application should just (re-) set the
> re-transmission timer whenever it receives another packet. No need to first
> cancel the timer and check any return code. I did specify something like
> that in one of my first proposals, this put responsibility on the
> application to remember status of previous ODP operations. With the help of
> Bill, I realized we didn't have to expose this to the application, keep it
> hidden under the API and enable more implementation choices (e.g.
> asynchronous HW or SW implementations that do not have to perform the
> operation immediately and return a success code which the application has
> to stall waiting for).
>
>
>
> Sorry if this was not clear. Probably something in the documentation needs
> to be enhanced to convey this message better.
>
>
>
> There are many packets/timers in-flight (e.g. 1000). One timer per
> outgoing packet. E.g. a packet+ack roundtrip could be 10us and retransmit
> timeout 10ms.
>
>
>
> I’d pick a timer and set it on packet output, and cancel it when ack is
> received. I have to cancel it here (not reset) because I cannot predict
> when the timer is needed for another outgoing packet. Now, in the rare case
> the cancel would fail (ack was received too close to tmo expiration). I
> would not notice that, but mark the timer ready for reuse. The tmo is now
> marked stale but not yet in my queue. On next outgoing packet (<1us) later
> I’ll reuse the same timer (reset it for 10ms). 10us later, I receive ack
> for the packet and cancel the same timer, and so on. Maybe I’ll
> cancel/reset the same timer multiple times before the stale tmo would
> travel through the timer HW/queues/scheduling back to me (and the status
> check function magic).
>
> It is always the latest set or cancel operation that is active. You can
> cancel that timer (even if it is "too late"); should the timeout be
> received, it will be considered stale. At some later time, you could (re-)
> set the timer. And cancel it. And reset it. The original timeout (if
> enqueued) is still considered stale (e.g. it contains the wrong expiration
> time). As soon as the stale timeout is received and returned, the latest
> set operation will be re-evaluated and 

Re: [lng-odp] [ODP/PATCH v1] ODP Buffer Segment Support API

2014-10-22 Thread Alexandru Badicioiu
The option of creating buffer pools out of memory regions does not solve
existing applications' portability problem. The odp_pktio application
expects packets to be allocated in shared memory regions. Is this a
semantic that should be satisfied by all platforms?
 odp_shm_create takes a flag argument which has two values for
linux-generic:
/*
 * Shared memory flags
 */

/* Share level */
#define ODP_SHM_SW_ONLY 0x1 /**< Application SW only, no HW access */
#define ODP_SHM_PROC0x2 /**< Share with external processes */

but odp_pktio uses 0. Could we assume that this value has a kind of
ODP_SHM_PACKET meaning?
We could also explicitly extend this flag list to work across platforms.
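E.g. something like this, purely as an illustration (ODP_SHM_PACKET is hypothetical
and does not exist today):

/* Share level (hypothetical extension, for illustration only) */
#define ODP_SHM_SW_ONLY 0x1 /**< Application SW only, no HW access */
#define ODP_SHM_PROC    0x2 /**< Share with external processes */
#define ODP_SHM_PACKET  0x4 /**< Hypothetical: memory usable for packet pools */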

Alex





On 22 October 2014 14:59, Ciprian Barbu  wrote:

> On Wed, Oct 22, 2014 at 2:47 PM, Ciprian Barbu 
> wrote:
> > This thread has been cold for 5 days, so the assumption is that we can
> > go forward with the design right now. This patch series proposed by
> > Bala updates some part of the API to the final form of the Buffer
> > Design Document, we should have it merged if there are no more
> > objections. For that more people with the right expertise should have
> > a look at it and get the thread back on track.
> >
> > I for example have observed the following issue. All the examples
> > create buffer pools over shared memory, which doesn't make sense for
> > some platforms, linux-dpdk for example, which ignores the base_addr
> > argument altogether. I think we need more clarity on this subject, for
> > sure the creation of buffer pools will differ from platform to
> > platform, which migrates to the application responsibility.
> >
> > I think we should have a helper function to easily create buffer pools
> > without worrying too much about the difference in buffer management
> > between platforms, so that one can write a simple portable application
> > with no sweat. For the hardcore programmers the API still gives fine
> > control to buffer management that depending on the platform could
> > involve additional prerequisites, like creating a shared memory
> > segment to hold the buffer pool.
>
> Ok, so I had another look at the Buffer Management final design. I now
> see that the option of creating buffer pools from regions has been
> removed, so in this case things will be simpler for the applications.
> In other words we should really start working on the full
> implementation of the API because from there the problem I just stated
> above (having to create shared memory segments) will disappear.
>
> >
> > On Fri, Oct 17, 2014 at 4:33 PM, Bill Fischofer
> >  wrote:
> >> Let's consider the implications of removing segmentation support from
> >> buffers and only having that concept be part of packets.
> >>
> >> The first question that arises is what is the relationship between the
> >> abstract types odp_packet_t and odp_buffer_t? This is important because
> >> currently we say that packets are allocated from ODP buffer pools, not
> from
> >> packet pools.  Do we need a separate odp_packet_pool_t that is used for
> >> packets?
> >>
> >> Today, when I allocate a packet I'm allocating a single object that
> happens
> >> to be a single buffer object of type ODP_BUFFER_TYPE_PACKET.  But that
> only
> >> works if the two objects have compatible semantics (including
> segmentation).
> >> If the semantics are not compatible, then an odp_packet_t may in fact be
> >> composed of multiple odp_buffer_t's because the packet may consist of
> >> multiple segments and buffers no longer recognize the concept of
> segments so
> >> a single buffer can only be a single segment.
> >>
> >> So now an odp_packet_segment_t may be an odp_buffer_t but an
> odp_packet_t in
> >> fact is some meta-object that is constructed (by whom?) from multiple
> >> odp_packet_segment_ts that are themselves odp_buffer_ts.  So
> >> odp_packet_to_buffer() no longer makes sense since there is no longer a
> >> one-to-one correspondence between packets and buffers.  We could have an
> >> odp_packet_segment_to_buffer() routine instead.
> >>
> >> Next question: What about meta data?  If an odp_packet_t is a type of an
> >> odp_buffer_t then this is very straightforward since all buffer meta
> data is
> >> reusable as packet meta data and the packet type can just add its own
> >> specific meta data to this set.  But if an odp_packet_t is now a
> separate
> >> object then where does the storage for its meta data come from? If we
> try to
> >> map it into an odp_buffer_t that doesn't work since an odp_packet_t may
> >> consist of multiple underlying odp_buffer_ts, one for each
> >> odp_packet_segment_t.  Is the packet meta data duplicated in each
> segment?
> >> Is the first segment of a packet special (odp_packet_first_segment_t)?
> And
> >> what about user meta data, since this is of potentially variable size?
> >>
> >> I submit that there are a lot of implications to this that need to be
> fully
> >> thought through, which is why I believe it's simpler to keep
> se

[lng-odp] [PATCH 0/2] linux-generic remove netmap

2014-10-22 Thread Anders Roxell
Hi,

This patch set removes netmap from linux-generic.

Cheers,
Anders

Anders Roxell (2):
  linux-generic: remove odp_packet_netmap.h
  gitignore: remove odp_packet_netmap

 .gitignore |  1 -
 platform/linux-generic/include/odp_packet_netmap.h | 67 --
 2 files changed, 68 deletions(-)
 delete mode 100644 platform/linux-generic/include/odp_packet_netmap.h

-- 
2.1.0


___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


[lng-odp] [PATCH 2/2] gitignore: remove odp_packet_netmap

2014-10-22 Thread Anders Roxell
Signed-off-by: Anders Roxell 
---
 .gitignore | 1 -
 1 file changed, 1 deletion(-)

diff --git a/.gitignore b/.gitignore
index 6342e34..a721904 100644
--- a/.gitignore
+++ b/.gitignore
@@ -34,7 +34,6 @@ obj/
 build/
 odp_example
 odp_packet
-odp_packet_netmap
 odp_atomic
 odp_shm
 odp_ring
-- 
2.1.0


___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


[lng-odp] [PATCH 1/2] linux-generic: remove odp_packet_netmap.h

2014-10-22 Thread Anders Roxell
Signed-off-by: Anders Roxell 
---
 platform/linux-generic/include/odp_packet_netmap.h | 67 --
 1 file changed, 67 deletions(-)
 delete mode 100644 platform/linux-generic/include/odp_packet_netmap.h

diff --git a/platform/linux-generic/include/odp_packet_netmap.h 
b/platform/linux-generic/include/odp_packet_netmap.h
deleted file mode 100644
index 1ab50d0..000
--- a/platform/linux-generic/include/odp_packet_netmap.h
+++ /dev/null
@@ -1,67 +0,0 @@
-/* Copyright (c) 2013, Linaro Limited
- * All rights reserved.
- *
- * SPDX-License-Identifier: BSD-3-Clause
- */
-
-#ifndef ODP_PACKET_NETMAP_H
-#define ODP_PACKET_NETMAP_H
-
-#include 
-
-#include 
-#include 
-#include 
-
-#include 
-#include 
-#include 
-#include 
-
-#include 
-
-#define ODP_NETMAP_MODE_HW 0
-#define ODP_NETMAP_MODE_SW 1
-
-#define NETMAP_BLOCKING_IO
-
-/** Packet socket using netmap mmaped rings for both Rx and Tx */
-typedef struct {
-   odp_buffer_pool_t pool;
-   size_t max_frame_len; /**< max frame len = buf_size - sizeof(pkt_hdr) */
-   size_t frame_offset; /**< frame start offset from start of pkt buf */
-   size_t buf_size; /**< size of buffer payload in 'pool' */
-   int netmap_mode;
-   struct nm_desc_t *nm_desc;
-   uint32_t begin;
-   uint32_t end;
-   struct netmap_ring *rxring;
-   struct netmap_ring *txring;
-   odp_queue_t tx_access; /* Used for exclusive access to send packets */
-   uint32_t if_flags;
-   char ifname[32];
-} pkt_netmap_t;
-
-/**
- * Configure an interface to work in netmap mode
- */
-int setup_pkt_netmap(pkt_netmap_t * const pkt_nm, const char *netdev,
-odp_buffer_pool_t pool, netmap_params_t *nm_params);
-
-/**
- * Switch interface from netmap mode to normal mode
- */
-int close_pkt_netmap(pkt_netmap_t * const pkt_nm);
-
-/**
- * Receive packets using netmap
- */
-int recv_pkt_netmap(pkt_netmap_t * const pkt_nm, odp_packet_t pkt_table[],
-   unsigned len);
-
-/**
- * Send packets using netmap
- */
-int send_pkt_netmap(pkt_netmap_t * const pkt_nm, odp_packet_t pkt_table[],
-   unsigned len);
-#endif
-- 
2.1.0


___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] ipc linux-generic implementation based on pktio

2014-10-22 Thread Maxim Uvarov

On 10/22/2014 03:53 PM, Jerin Jacob wrote:

Some review comments,

1) Any specific reason to choose pktio instead of queue for IPC abstraction ?


Yes. The first versions I did were for queues; refer to the ipc v7 patch in the
mailing list. After that, most everyone complained that it has to be on the
packet i/o level.
The main reason is that in hardware-accelerated packet i/o, communication
between different processes has to be easy, i.e. one process just sends a
packet to hw and the other receives it, and it is the hw's business how to
deliver that packet. In the hw case, even shared memory is not a requirement.



2) How classification fits into the equation if we choose the IPC through pktio 
?
API's like odp_pktio_pmr_cos..
for non linux-generic platform, May be we can map loopback port as IPC pktio
to implement complete ODP pktio capabilities.
But does application needs such capability ?
or just a "queue" is fine to meet the use cases ?


A queue has to be fine. The classifier is in the same process where the queue is
shared. If we move the classifier to a separate odp app then IPC is needed.


  
3) Currently odp_pktio_open creates the queue and application can get

the created queues through odp_pktio_outq_getdef.
Do we really need to introduce new "odp_pktio_outq_setdef" API now ?


I need to somehow link the queue and the pktio, i.e. tell the queue to use a
specific output function.

For that reason I implemented odp_pktio_outq_setdef.


4) Do we really need to introduce a new API "odp_shm_lookup_ipc"?
Is it possible to abstract it through the existing odp_shm_lookup API?


That is a good question. Probably we can add a flag to odp_shm_lookup telling it
where to search: in the local table or in shared memory.



5) Assuming pool value == 0 for odp_pktio_open as IPC port is not portable.
Need to introduce macro or standard name for IPC

I think it might be better to extend odp_buffer_pool_create:

odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 void *base_addr, uint64_t size,
 size_t buf_size, size_t buf_align,
 int buf_type)

provide base_addr as remote_pool_mapped_base address and buf_type as:
ODP_BUFFER_TYPE_IPC
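
The call would then look roughly like this (a sketch only; ODP_BUFFER_TYPE_IPC is the
proposed, not yet existing, buffer type, and remote_base/pool_size/buf_size are assumed
to be known to the application):

/* sketch: link the new pool to the memory where the remote pool is mapped */
odp_buffer_pool_t ipc_pool =
	odp_buffer_pool_create("ipc_pool", remote_base, pool_size,
			       buf_size, ODP_CACHE_LINE_SIZE,
			       ODP_BUFFER_TYPE_IPC);
odp_pktio_t ipc_pktio = odp_pktio_open("ipc_pktio", ipc_pool);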

In that case I will not allocate memory; I will link the pktio with the remote
base_addr.

Will check how is it implementable.

Will send new version of patch.

Maxim.





___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [ODP/PATCH v1] ODP Buffer Segment Support API

2014-10-22 Thread Savolainen, Petri (NSN - FI/Espoo)
Hi,

In short, I think we must not bring segmentation support to the buffer level 
"just in case" someone would need it there. Real use cases for segmentation are 
on packet level (large packets, packet fragmentation/reassembly, etc), so the 
feature should be implemented there. 

-Petri

> -Original Message-
> From: ext Ciprian Barbu [mailto:ciprian.ba...@linaro.org]
> Sent: Wednesday, October 22, 2014 3:00 PM
> To: Bill Fischofer
> Cc: Ola Liljedahl; Savolainen, Petri (NSN - FI/Espoo); lng-
> o...@lists.linaro.org
> Subject: Re: [lng-odp] [ODP/PATCH v1] ODP Buffer Segment Support API
> 
> On Wed, Oct 22, 2014 at 2:47 PM, Ciprian Barbu 
> wrote:
> > This thread has been cold for 5 days, so the assumption is that we can
> > go forward with the design right now. This patch series proposed by
> > Bala updates some part of the API to the final form of the Buffer
> > Design Document, we should have it merged if there are no more
> > objections. For that more people with the right expertise should have
> > a look at it and get the thread back on track.
> >
> > I for example have observed the following issue. All the examples
> > create buffer pools over shared memory, which doesn't make sense for
> > some platforms, linux-dpdk for example, which ignores the base_addr
> > argument altogether. I think we need more clarity on this subject, for
> > sure the creation of buffer pools will differ from platform to
> > platform, which migrates to the application responsibility.
> >
> > I think we should have a helper function to easily create buffer pools
> > without worrying too much about the difference in buffer management
> > between platforms, so that one can write a simple portable application
> > with no sweat. For the hardcore programmers the API still gives fine
> > control to buffer management that depending on the platform could
> > involve additional prerequisites, like creating a shared memory
> > segment to hold the buffer pool.
> 
> Ok, so I had another look at the Buffer Management final design. I now
> see that the option of creating buffer pools from regions has been
> removed, so in this case things will be simpler for the applications.
> In other words we should really start working on the full
> implementation of the API because from there the problem I just stated
> above (having to create shared memory segments) will disappear.
> 
> >
> > On Fri, Oct 17, 2014 at 4:33 PM, Bill Fischofer
> >  wrote:
> >> Let's consider the implications of removing segmentation support from
> >> buffers and only having that concept be part of packets.
> >>
> >> The first question that arises is what is the relationship between the
> >> abstract types odp_packet_t and odp_buffer_t? This is important because
> >> currently we say that packets are allocated from ODP buffer pools, not
> from
> >> packet pools.  Do we need a separate odp_packet_pool_t that is used for
> >> packets?
> >>
> >> Today, when I allocate a packet I'm allocating a single object that
> happens
> >> to be a single buffer object of type ODP_BUFFER_TYPE_PACKET.  But that
> only
> >> works if the two objects have compatible semantics (including
> segmentation).
> >> If the semantics are not compatible, then an odp_packet_t may in fact
> be
> >> composed of multiple odp_buffer_t's because the packet may consist of
> >> multiple segments and buffers no longer recognize the concept of
> segments so
> >> a single buffer can only be a single segment.
> >>
> >> So now an odp_packet_segment_t may be an odp_buffer_t but an
> odp_packet_t in
> >> fact is some meta-object that is constructed (by whom?) from multiple
> >> odp_packet_segment_ts that are themselves odp_buffer_ts.  So
> >> odp_packet_to_buffer() no longer makes sense since there is no longer a
> >> one-to-one correspondence between packets and buffers.  We could have
> an
> >> odp_packet_segment_to_buffer() routine instead.
> >>
> >> Next question: What about meta data?  If an odp_packet_t is a type of
> an
> >> odp_buffer_t then this is very straightforward since all buffer meta
> data is
> >> reusable as packet meta data and the packet type can just add its own
> >> specific meta data to this set.  But if an odp_packet_t is now a
> separate
> >> object then where does the storage for its meta data come from? If we
> try to
> >> map it into an odp_buffer_t that doesn't work since an odp_packet_t may
> >> consist of multiple underlying odp_buffer_ts, one for each
> >> odp_packet_segment_t.  Is the packet meta data duplicated in each
> segment?
> >> Is the first segment of a packet special (odp_packet_first_segment_t)?
> And
> >> what about user meta data, since this is of potentially variable size?
> >>
> >> I submit that there are a lot of implications to this that need to be
> fully
> >> thought through, which is why I believe it's simpler to keep
> segmentation as
> >> part of buffers that (for now) only happens to be used by a particular
> type
> >> of buffer, namely packets.
> >>
> >> 

Re: [lng-odp] [ODP/PATCH v1] ODP Buffer Segment Support API

2014-10-22 Thread Ciprian Barbu
On Wed, Oct 22, 2014 at 2:47 PM, Ciprian Barbu  wrote:
> This thread has been cold for 5 days, so the assumption is that we can
> go forward with the design right now. This patch series proposed by
> Bala updates some part of the API to the final form of the Buffer
> Design Document, we should have it merged if there are no more
> objections. For that more people with the right expertise should have
> a look at it and get the thread back on track.
>
> I for example have observed the following issue. All the examples
> create buffer pools over shared memory, which doesn't make sense for
> some platforms, linux-dpdk for example, which ignores the base_addr
> argument altogether. I think we need more clarity on this subject, for
> sure the creation of buffer pools will differ from platform to
> platform, which migrates to the application responsibility.
>
> I think we should have a helper function to easily create buffer pools
> without worrying too much about the difference in buffer management
> between platforms, so that one can write a simple portable application
> with no sweat. For the hardcore programmers the API still gives fine
> control to buffer management that depending on the platform could
> involve additional prerequisites, like creating a shared memory
> segment to hold the buffer pool.

Ok, so I had another look at the Buffer Management final design. I now
see that the option of creating buffer pools from regions has been
removed, so in this case things will be simpler for the applications.
In other words we should really start working on the full
implementation of the API because from there the problem I just stated
above (having to create shared memory segments) will disappear.

>
> On Fri, Oct 17, 2014 at 4:33 PM, Bill Fischofer
>  wrote:
>> Let's consider the implications of removing segmentation support from
>> buffers and only having that concept be part of packets.
>>
>> The first question that arises is what is the relationship between the
>> abstract types odp_packet_t and odp_buffer_t? This is important because
>> currently we say that packets are allocated from ODP buffer pools, not from
>> packet pools.  Do we need a separate odp_packet_pool_t that is used for
>> packets?
>>
>> Today, when I allocate a packet I'm allocating a single object that happens
>> to be a single buffer object of type ODP_BUFFER_TYPE_PACKET.  But that only
>> works if the two objects have compatible semantics (including segmentation).
>> If the semantics are not compatible, then an odp_packet_t may in fact be
>> composed of multiple odp_buffer_t's because the packet may consist of
>> multiple segments and buffers no longer recognize the concept of segments so
>> a single buffer can only be a single segment.
>>
>> So now an odp_packet_segment_t may be an odp_buffer_t but an odp_packet_t in
>> fact is some meta-object that is constructed (by whom?) from multiple
>> odp_packet_segment_ts that are themselves odp_buffer_ts.  So
>> odp_packet_to_buffer() no longer makes sense since there is no longer a
>> one-to-one correspondence between packets and buffers.  We could have an
>> odp_packet_segment_to_buffer() routine instead.
>>
>> Next question: What about meta data?  If an odp_packet_t is a type of an
>> odp_buffer_t then this is very straightforward since all buffer meta data is
>> reusable as packet meta data and the packet type can just add its own
>> specific meta data to this set.  But if an odp_packet_t is now a separate
>> object then where does the storage for its meta data come from? If we try to
>> map it into an odp_buffer_t that doesn't work since an odp_packet_t may
>> consist of multiple underlying odp_buffer_ts, one for each
>> odp_packet_segment_t.  Is the packet meta data duplicated in each segment?
>> Is the first segment of a packet special (odp_packet_first_segment_t)?  And
>> what about user meta data, since this is of potentially variable size?
>>
>> I submit that there are a lot of implications to this that need to be fully
>> thought through, which is why I believe it's simpler to keep segmentation as
>> part of buffers that (for now) only happens to be used by a particular type
>> of buffer, namely packets.
>>
>> Bill
>>
>> On Fri, Oct 17, 2014 at 8:05 AM, Ola Liljedahl 
>> wrote:
>>>
>>> Personally I don't see any need for segmentation support in buffers. I am
>>> just trying to shoot down what I think is flawed reasoning.
>>>
>>> -- Ola#1
>>>
>>> On 17 October 2014 15:03, Ola Liljedahl  wrote:

 But segmentation is already needed in a current and known subclass (i.e.
 packets). We are not talking about some other feature which we don't know 
 if
 it will be needed. So this is not a case of "just in case".

 -- Ola#1


 On 17 October 2014 14:45, Ola Dahl  wrote:
>
> Hi,
>
> I do not think it is wise to put features in the base class "just in
> case" they would be needed in some future (not yet known) subclass.
>
> So 

Re: [lng-odp] [PATCH ARCH 1/2] Add release management

2014-10-22 Thread Mike Holmes
ping

On 17 October 2014 11:01, Bill Fischofer  wrote:

>
>
> On Fri, Oct 17, 2014 at 9:54 AM, Mike Holmes 
> wrote:
>
>> Add text defining the release procedure and release numbering.
>>
>> Signed-off-by: Mike Holmes 
>>
> Reviewed-by: Bill Fischofer 
>
>> ---
>>  release.dox | 70
>> +
>>  1 file changed, 70 insertions(+)
>>  create mode 100644 release.dox
>>
>> diff --git a/release.dox b/release.dox
>> new file mode 100644
>> index 000..7cde777
>> --- /dev/null
>> +++ b/release.dox
>> @@ -0,0 +1,70 @@
>> +/* Copyright (c) 2014, Linaro Limited
>> + * All rights reserved
>> + *
>> + * SPDX-License-Identifier: BSD-3-Clause
>> + */
>> +
>> +/**
>> +@page release Release Management
>> +The odp.git repo contains the API, which is of primary concern when
>> addressing the release numbering; the linux-generic implementation is not
>> the focus of the release.
>> +
>> +@section release_numbering Release Numbering
>> (ODP-<generation>.<major>.<minor>)
>> +
>> +The API uses a three digit release number, for ODP this number refers to
>> +- The API header definitions
>> +- The reference implementation (linux-generic)
>> +- The documentation
>> +- The API test & validation suite that certifies each of the above.
>> +
>> +@image html  ODP_versioning.png "Version history diagram"
>> width=\textwidth
>> +@image latex ODP_versioning.eps "Version history diagram"
>> width=\textwidth
>> +
>> +
>> +The ODP API generation.major version will only change at well-defined
>> release points.
>> +An API release will be tagged @code ODP-<generation>.<major>.<minor>
>> @endcode and bug fix releases on the platform will be tagged @code
>> ODP-<generation>.<major>.<minor>-<sub> @endcode
>> +No release will ever be made without incrementing the release number,
>> the change will be according to the following guidelines.
>> +Every change in API version will require a recompilation, relinking will
>> not be sufficient.
>> +The header file odp_version.h contains helper macros for dealing with
>> ODP versions in application code.
>> +@note The version refers to API source compatibility and not binary
>> compatibility.
>> +
>> +@subsection generation (ODP-<generation>)
>> +The ODP generation is intended to be a very slow moving digit that will
>> only increment on very significant changes to how the ODP API is structured.
>> +A change to this digit indicates a break in backwards compatibility.
>> +
>> +@subsection major (ODP-<generation>.<major>)
>> +A change to this digit indicates a break in backwards compatibility.
>> +@note The incompatibility covers the whole ODP API, however the change
>> may be a tiny change to an esoteric function that is not used by a given
>> application.
>> +
>> +- Altering API signature
>> +- Altering a structure other than adding optional items at the end.
>> +- Changing the required calling sequence for APIs
>> +- Changes to the installed structure i.e. the output from "make install"
>> moves a file in a way that breaks compilation.
>> +- New elements added to an enum that is an output from ODP.
>> +
>> +@subsection minor (ODP-<generation>.<major>.<minor>)
>> +The minor digit is for changes that are backwards compatible.
>> +For example changes such as the addition of a new API.
>> +Existing application code shall not have to change if the new API is not
>> used.
>> +- Adding a new struct
>> +- Adding a new function
>> +- Adding an additional alternate API to an existing one such that the old
>> API remains.
>> +- New element to an enum that is an input to ODP
>> +
>> +@subsection sub (ODP-<generation>.<major>.<minor>-<sub>)
>> +The sub digit is used for backward compatible changes.
>> +The sub number is implementation driven.
>> +Any existing app should work as before, with the caveat that a bug fix
>> may change the executable behavior (hopefully improving it).
>> +- Optimize the implementation
>> +- Documentation updates that do not affect the API
>> +- bug fixes in implementation
>> +
>> +@section point_release Point Release Schedule
>> +Point releases will be made throughout the year; a loose target is once
>> per quarter depending on the feature load.
>> +For example, a release 1.1.0 will have support
>> until the next point release is made, at which time support for 1.1.0 will
>> be dropped.
>> +
>> +@section lts Long Term Stable (LTS)
>> +Long term stable releases will be retroactively selected from the
>> point releases already made.
>> +The determination will be made by the ODP steering committee (SC).
>> +The duration of support will be decided by the SC.
>> +
>> +*/
>> --
>> 1.9.1
>>
>>
>> ___
>> lng-odp mailing list
>> lng-odp@lists.linaro.org
>> http://lists.linaro.org/mailman/listinfo/lng-odp
>>
>
>


-- 
*Mike Holmes*
Linaro Sr Technical Manager
LNG - ODP
___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] ipc linux-generic implementation based on pktio

2014-10-22 Thread Jerin Jacob
On Tue, Oct 21, 2014 at 03:38:11PM +0400, Maxim Uvarov wrote:
> Signed-off-by: Maxim Uvarov 
> ---
> 
>  Hello,
> 
>  Please find here IPC implementation for linux generic.
> 
>  As we discussed prior IPC patches we got following decisions:
>  1. IPC should be done on PKTIO, not on QUEUE level.
>  2. For initial implementation we can go with packet copy, but it's
>better to avoid copying.
>  3. IPC should be done for 2 separate processes. Difference from
>first implementation is fork() before odp_init_global, just after
>main().
>  4. Doing IPC in linux generic we should rely on some portable 
> implementation,
>   like shared memory.


Some review comments,

1) Any specific reason to choose pktio instead of queue for the IPC abstraction?
2) How does classification fit into the equation if we choose IPC through
pktio? APIs like odp_pktio_pmr_cos...
For a non linux-generic platform, maybe we can map a loopback port as the IPC
pktio to implement the complete ODP pktio capabilities.
But does the application need such a capability,
or is just a "queue" fine to meet the use cases?
3) Currently odp_pktio_open creates the queue, and the application can get
the created queues through odp_pktio_outq_getdef.
Do we really need to introduce a new "odp_pktio_outq_setdef" API now?
4) Do we really need to introduce a new API "odp_shm_lookup_ipc"?
Is it possible to abstract it through the existing odp_shm_lookup API?
5) Assuming pool value == 0 in odp_pktio_open for an IPC port is not portable.
We need to introduce a macro or standard name for IPC.


> 
> 
>  In current patch I implemented IPC with shared memory in following way:
> 
>  1. First shared memory block between processes exist for shared packet pool,
> I.e. place where packet data is actually stored.;
>  2. Second and Third shared memory blocks between processes is for messages 
> for
> consumed and produced packets. In case if it will be HW implementation, 
> then
> second and third chunks of shm are not needed.
> 
>  Please review current implementation how it corresponds to what we discussed.
> 
>  For further implementation I think we can add timers for remote IPC buffers 
> to do
>  some action if remote app does not handle them. Do full zero copy with saving
>  remote odp_packet_t value to packet meta data or additional field in odp 
> packet
>  header.
> 
>  Best regards,
>  Maxim.
> 
>  example/Makefile.am|   2 +-
>  example/ipc/Makefile.am|   6 +
>  example/ipc/odp_pktio.c| 765 
> +
>  helper/include/odph_ring.h |   3 +
>  platform/linux-generic/include/api/odp_packet_io.h |  22 +
>  .../linux-generic/include/api/odp_shared_memory.h  |  11 +
>  .../include/odp_buffer_pool_internal.h |   7 +-
>  .../linux-generic/include/odp_packet_io_internal.h |   7 +
>  platform/linux-generic/odp_buffer_pool.c   |   9 -
>  platform/linux-generic/odp_init.c  |   6 +
>  platform/linux-generic/odp_packet_io.c | 229 ++
>  platform/linux-generic/odp_ring.c  |   9 +-
>  platform/linux-generic/odp_shared_memory.c |  35 +-
>  13 files changed, 1097 insertions(+), 14 deletions(-)
>  create mode 100644 example/ipc/Makefile.am
>  create mode 100644 example/ipc/odp_pktio.c
> 
> diff --git a/example/Makefile.am b/example/Makefile.am
> index b2a22a3..7911069 100644
> --- a/example/Makefile.am
> +++ b/example/Makefile.am
> @@ -1 +1 @@
> -SUBDIRS = generator ipsec l2fwd odp_example packet timer
> +SUBDIRS = generator ipsec l2fwd odp_example packet timer ipc
> diff --git a/example/ipc/Makefile.am b/example/ipc/Makefile.am
> new file mode 100644
> index 000..603a1ab
> --- /dev/null
> +++ b/example/ipc/Makefile.am
> @@ -0,0 +1,6 @@
> +include $(top_srcdir)/example/Makefile.inc
> +
> +bin_PROGRAMS = odp_pktio
> +odp_pktio_LDFLAGS = $(AM_LDFLAGS) -static
> +
> +dist_odp_pktio_SOURCES = odp_pktio.c
> diff --git a/example/ipc/odp_pktio.c b/example/ipc/odp_pktio.c
> new file mode 100644
> index 000..1eb4a95
> --- /dev/null
> +++ b/example/ipc/odp_pktio.c
> @@ -0,0 +1,765 @@
> +/* Copyright (c) 2013, Linaro Limited
> + * All rights reserved.
> + *
> + * SPDX-License-Identifier: BSD-3-Clause
> + */
> +
> +/**
> + * @file
> + *
> + * @example odp_pktio.c  ODP basic packet IO loopback test application
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +/** @def MAX_WORKERS
> + * @brief Maximum number of worker threads
> + */
> +#define MAX_WORKERS32
> +
> +/** @def SHM_PKT_POOL_SIZE
> + * @brief Size of the shared memory block
> + */
> +#define SHM_PKT_POOL_SIZE  (512*2048)
> +
> +/** @def SHM_PKT_POOL_BUF_SIZE
> + * @brief Buffer size of the packet pool buffer
> + */
> +#define SHM_PKT_POOL_BUF_SIZE  1856
> +
> +/** @def MAX_PKT_BURST
> + * @brief Maximum number of packet bursts
> + */
> +#

Re: [lng-odp] [ODP/PATCH v1] ODP Buffer Segment Support API

2014-10-22 Thread Ciprian Barbu
This thread has been cold for 5 days, so the assumption is that we can
go forward with the design right now. This patch series proposed by
Bala updates some part of the API to the final form of the Buffer
Design Document, we should have it merged if there are no more
objections. For that more people with the right expertise should have
a look at it and get the thread back on track.

I for example have observed the following issue. All the examples
create buffer pools over shared memory, which doesn't make sense for
some platforms (linux-dpdk for example, which ignores the base_addr
argument altogether). I think we need more clarity on this subject; for
sure the creation of buffer pools will differ from platform to
platform, which pushes that responsibility onto the application.

I think we should have a helper function to easily create buffer pools
without worrying too much about the differences in buffer management
between platforms, so that one can write a simple portable application
with no sweat. For the hardcore programmers the API still gives fine
control over buffer management, which depending on the platform could
involve additional prerequisites, like creating a shared memory
segment to hold the buffer pool.
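
Something along these lines, purely as a sketch of what such a helper could
look like on top of linux-generic (the helper name is invented and error
handling is omitted):

/* Hypothetical helper hiding the platform-specific shared memory step.
 * On linux-generic it reserves shm and builds the pool on top of it; a
 * platform such as linux-dpdk could simply ignore base_addr internally. */
static odp_buffer_pool_t create_pkt_pool(const char *name, uint64_t size,
					 size_t buf_size)
{
	void *base;

	/* Platform prerequisite, hidden from the application. */
	base = odp_shm_reserve(name, size, ODP_CACHE_LINE_SIZE, 0);

	return odp_buffer_pool_create(name, base, size, buf_size,
				      ODP_CACHE_LINE_SIZE,
				      ODP_BUFFER_TYPE_PACKET);
}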

On Fri, Oct 17, 2014 at 4:33 PM, Bill Fischofer
 wrote:
> Let's consider the implications of removing segmentation support from
> buffers and only having that concept be part of packets.
>
> The first question that arises is what is the relationship between the
> abstract types odp_packet_t and odp_buffer_t? This is important because
> currently we say that packets are allocated from ODP buffer pools, not from
> packet pools.  Do we need a separate odp_packet_pool_t that is used for
> packets?
>
> Today, when I allocate a packet I'm allocating a single object that happens
> to be a single buffer object of type ODP_BUFFER_TYPE_PACKET.  But that only
> works if the two objects have compatible semantics (including segmentation).
> If the semantics are not compatible, then an odp_packet_t may in fact be
> composed of multiple odp_buffer_t's because the packet may consist of
> multiple segments and buffers no longer recognize the concept of segments so
> a single buffer can only be a single segment.
>
> So now an odp_packet_segment_t may be an odp_buffer_t but an odp_packet_t in
> fact is some meta-object that is constructed (by whom?) from multiple
> odp_packet_segment_ts that are themselves odp_buffer_ts.  So
> odp_packet_to_buffer() no longer makes sense since there is no longer a
> one-to-one correspondence between packets and buffers.  We could have an
> odp_packet_segment_to_buffer() routine instead.
>
> Next question: What about meta data?  If an odp_packet_t is a type of an
> odp_buffer_t then this is very straightforward since all buffer meta data is
> reusable as packet meta data and the packet type can just add its own
> specific meta data to this set.  But if an odp_packet_t is now a separate
> object then where does the storage for its meta data come from? If we try to
> map it into an odp_buffer_t that doesn't work since an odp_packet_t may
> consist of multiple underlying odp_buffer_ts, one for each
> odp_packet_segment_t.  Is the packet meta data duplicated in each segment?
> Is the first segment of a packet special (odp_packet_first_segment_t)?  And
> what about user meta data, since this is of potentially variable size?
>
> I submit that there are a lot of implications to this that need to be fully
> thought through, which is why I believe it's simpler to keep segmentation as
> part of buffers that (for now) only happens to be used by a particular type
> of buffer, namely packets.
>
> Bill
>
> On Fri, Oct 17, 2014 at 8:05 AM, Ola Liljedahl 
> wrote:
>>
>> Personally I don't see any need for segmentation support in buffers. I am
>> just trying to shoot down what I think is flawed reasoning.
>>
>> -- Ola#1
>>
>> On 17 October 2014 15:03, Ola Liljedahl  wrote:
>>>
>>> But segmentation is already needed in a current and known subclass (i.e.
>>> packets). We are not talking about some other feature which we don't know if
>>> it will be needed. So this is not a case of "just in case".
>>>
>>> -- Ola#1
>>>
>>>
>>> On 17 October 2014 14:45, Ola Dahl  wrote:

 Hi,

 I do not think it is wise to put features in the base class "just in
 case" they would be needed in some future (not yet known) subclass.

 So if the concept of segmentation is relevant for packets but not for
 timers then I think it should be implemented as a feature of packets.

 Best regards,

 Ola D

 On Fri, Oct 17, 2014 at 2:33 PM, Bill Fischofer
  wrote:
>
> I agree that packets are the buffer type that most likely uses
> segments, however there are many advantages to putting this support in the
> base class rather than the subclass independent of the number of buffer
> subclasses that will use this support today.
>
> It's simpler
> It's mo

Re: [lng-odp] [PATCHv2 NETMAP] linux-netmap: Include README in User's Guide

2014-10-22 Thread Ciprian Barbu
ping

On Mon, Oct 20, 2014 at 4:12 PM, Ciprian Barbu  wrote:
> Signed-off-by: Ciprian Barbu 
> ---
> v2:
> - Added linux-generic back to users guide
> - Added information about default platform in linux-netmap README
>
>  doc/doxygen.cfg  |  2 +-
>  doc/users-guide/guide.dox|  6 ++
>  platform/linux-netmap/README | 14 +-
>  3 files changed, 16 insertions(+), 6 deletions(-)
>
> diff --git a/doc/doxygen.cfg b/doc/doxygen.cfg
> index a77ae1e..fc56317 100644
> --- a/doc/doxygen.cfg
> +++ b/doc/doxygen.cfg
> @@ -1,4 +1,4 @@
> -PROJECT_NAME = "API Reference Manual"
> +PROJECT_NAME = "ODP Netmap API Reference Manual"
>  PROJECT_LOGO = $(SRCDIR)/doc/images/ODP-Logo-HQ.png
>  QUIET = YES
>  OUTPUT_DIRECTORY = $(DOCDIR)
> diff --git a/doc/users-guide/guide.dox b/doc/users-guide/guide.dox
> index 314d295..11ec395 100644
> --- a/doc/users-guide/guide.dox
> +++ b/doc/users-guide/guide.dox
> @@ -10,9 +10,7 @@
>   *
>   * @section sec_gene Linux Generic
>   * @verbinclude linux-generic/README
> - * @section sec_dpdk Linux DPDK
> - * @verbinclude linux-dpdk/README
> - * @section sec_keys Linux Keystone2
> - * @verbinclude linux-keystone2/README
> + * @section sec_netm Linux Netmap
> + * @verbinclude linux-netmap/README
>   *
>   */
> diff --git a/platform/linux-netmap/README b/platform/linux-netmap/README
> index 6021445..795a6fc 100644
> --- a/platform/linux-netmap/README
> +++ b/platform/linux-netmap/README
> @@ -66,8 +66,20 @@ Now compile netmap:
>  2.2 Building ODP
>  
>
> +The default platform for this repository is linux-netmap, if not otherwise
> +specified using the --with-platform configure variable.
> +The optional --with-sdk-install-path variable is used to point to the netmap
> +sources; if it is not specified, the build system will try to find netmap header
> +files in the standard include directories.
> +
>  ./bootstrap
> -./configure --with-platform=linux-netmap --with-sdk-install-path=
> +
> +To configure ODP for linux-netmap:
> +./configure --with-sdk-install-path=
> +
> +To configure ODP for linux-generic:
> +./configure --with-platform=linux-generic
> +
>  make
>
>  3. Running the example application
> --
> 1.8.3.2
>

___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] odp_packet_io.c: fix unreachable code

2014-10-22 Thread Maxim Uvarov

That looks good. Merged.

Maxim.

On 10/21/2014 08:02 PM, Mike Holmes wrote:

The code after the break cannot be executed.
If the entry is not free, free it.

Signed-off-by: Mike Holmes 
---
  platform/linux-generic/odp_packet_io.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/platform/linux-generic/odp_packet_io.c 
b/platform/linux-generic/odp_packet_io.c
index 0c30f0f..fb9fe2d 100644
--- a/platform/linux-generic/odp_packet_io.c
+++ b/platform/linux-generic/odp_packet_io.c
@@ -228,8 +228,8 @@ int odp_pktio_close(odp_pktio_t id)
break;
default:
break;
-   res |= free_pktio_entry(id);
}
+   res |= free_pktio_entry(id);
}
unlock_entry(entry);
  



___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] Rename ODPH_PACKED macro

2014-10-22 Thread Maxim Uvarov

Merged, thank you.

Maxim.

On 10/22/2014 01:16 AM, Anders Roxell wrote:

On 2014-10-20 17:19, Jerin Jacob wrote:

- Definition of ODPH_PACKED is in include/api/odp_align.h so changing to 
ODP_PACKED

Signed-off-by: Jerin Jacob 

Reviewed-and-Tested-by: Anders Roxell 


---
  example/ipsec/odp_ipsec_stream.c   | 2 +-
  helper/include/odph_eth.h  | 6 +++---
  helper/include/odph_icmp.h | 2 +-
  helper/include/odph_ip.h   | 4 ++--
  helper/include/odph_ipsec.h| 6 +++---
  helper/include/odph_udp.h  | 2 +-
  platform/linux-generic/include/api/odp_align.h | 2 +-
  7 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/example/ipsec/odp_ipsec_stream.c b/example/ipsec/odp_ipsec_stream.c
index fba425c..a9a52ec 100644
--- a/example/ipsec/odp_ipsec_stream.c
+++ b/example/ipsec/odp_ipsec_stream.c
@@ -39,7 +39,7 @@
  /**
   * Stream packet header
   */
-typedef struct ODPH_PACKED stream_pkt_hdr_s {
+typedef struct ODP_PACKED stream_pkt_hdr_s {
uint64be_t magic;/**< Stream magic value for verification */
uint8_tdata[0];  /**< Incrementing data stream */
  } stream_pkt_hdr_t;
diff --git a/helper/include/odph_eth.h b/helper/include/odph_eth.h
index 55a2b1e..065a94b 100644
--- a/helper/include/odph_eth.h
+++ b/helper/include/odph_eth.h
@@ -34,7 +34,7 @@ extern "C" {
  /**
   * Ethernet MAC address
   */
-typedef struct ODPH_PACKED {
+typedef struct ODP_PACKED {
uint8_t addr[ODPH_ETHADDR_LEN]; /**< @private Address */
  } odph_ethaddr_t;
  
@@ -44,7 +44,7 @@ ODP_STATIC_ASSERT(sizeof(odph_ethaddr_t) == ODPH_ETHADDR_LEN, "ODPH_ETHADDR_T__S

  /**
   * Ethernet header
   */
-typedef struct ODPH_PACKED {
+typedef struct ODP_PACKED {
odph_ethaddr_t dst; /**< Destination address */
odph_ethaddr_t src; /**< Source address */
uint16be_t type;   /**< Type */
@@ -58,7 +58,7 @@ ODP_STATIC_ASSERT(sizeof(odph_ethhdr_t) == ODPH_ETHHDR_LEN, 
"ODPH_ETHHDR_T__SIZE
   *
   * @todo Check usage of tpid vs ethertype. Check outer VLAN TPID.
   */
-typedef struct ODPH_PACKED {
+typedef struct ODP_PACKED {
uint16be_t tpid;   /**< Tag protocol ID (located after ethhdr.src) */
uint16be_t tci;/**< Priority / CFI / VLAN ID */
  } odph_vlanhdr_t;
diff --git a/helper/include/odph_icmp.h b/helper/include/odph_icmp.h
index 8414d7e..8533fb5 100644
--- a/helper/include/odph_icmp.h
+++ b/helper/include/odph_icmp.h
@@ -26,7 +26,7 @@ extern "C" {
  #define ODPH_ICMPHDR_LEN 8
  
  /** ICMP header */

-typedef struct ODPH_PACKED {
+typedef struct ODP_PACKED {
uint8_t type;   /**< message type */
uint8_t code;   /**< type sub-code */
uint16sum_t chksum; /**< checksum of icmp header */
diff --git a/helper/include/odph_ip.h b/helper/include/odph_ip.h
index ca71c44..2c83c0f 100644
--- a/helper/include/odph_ip.h
+++ b/helper/include/odph_ip.h
@@ -48,7 +48,7 @@ extern "C" {
  #define ODPH_IPV4HDR_IS_FRAGMENT(frag_offset) ((frag_offset) & 0x3fff)
  
  /** IPv4 header */

-typedef struct ODPH_PACKED {
+typedef struct ODP_PACKED {
uint8_tver_ihl; /**< Version / Header length */
uint8_ttos; /**< Type of service */
uint16be_t tot_len; /**< Total length */
@@ -125,7 +125,7 @@ static inline uint16sum_t 
odph_ipv4_csum_update(odp_packet_t pkt)
  /**
   * IPv6 header
   */
-typedef struct ODPH_PACKED {
+typedef struct ODP_PACKED {
uint32be_t ver_tc_flow;  /**< Version / Traffic class / Flow label */
uint16be_t payload_len;  /**< Payload length */
uint8_tnext_hdr; /**< Next header */
diff --git a/helper/include/odph_ipsec.h b/helper/include/odph_ipsec.h
index f547b90..c58a1c8 100644
--- a/helper/include/odph_ipsec.h
+++ b/helper/include/odph_ipsec.h
@@ -30,7 +30,7 @@ extern "C" {
  /**
   * IPSec ESP header
   */
-typedef struct ODPH_PACKED {
+typedef struct ODP_PACKED {
uint32be_t spi;  /**< Security Parameter Index */
uint32be_t seq_no;   /**< Sequence Number */
uint8_tiv[0];/**< Initialization vector */
@@ -42,7 +42,7 @@ ODP_STATIC_ASSERT(sizeof(odph_esphdr_t) == ODPH_ESPHDR_LEN, 
"ODPH_ESPHDR_T__SIZE
  /**
   * IPSec ESP trailer
   */
-typedef struct ODPH_PACKED {
+typedef struct ODP_PACKED {
uint8_t pad_len;  /**< Padding length (0-255) */
uint8_t next_header;  /**< Next header protocol */
uint8_t icv[0];   /**< Integrity Check Value (optional) */
@@ -54,7 +54,7 @@ ODP_STATIC_ASSERT(sizeof(odph_esptrl_t) == ODPH_ESPTRL_LEN, 
"ODPH_ESPTRL_T__SIZE
  /**
   * IPSec AH header
   */
-typedef struct ODPH_PACKED {
+typedef struct ODP_PACKED {
uint8_tnext_header;  /**< Next header protocol */
uint8_tah_len;   /**< AH header length */
uint16be_t pad;  /**< Padding (must be 0) */
diff --git a/helper/include/odph_udp.h b/helper/include/odph_udp.h
index 39

Re: [lng-odp] linux-d01's rebase progress

2014-10-22 Thread Weilong Chen
Yes

On 22 October 2014 18:13, Maxim Uvarov  wrote:

> On 10/22/2014 12:58 PM, Weilong Chen wrote:
>
>> Hi Anders
>>
>> I pushed linux-d01 to http://git.linaro.org/people/weilong.chen/linux-d01.git
>> There are still some bugs to work out.
>> If you want to have a quick look at it, you can get it.
>>
>> Thanks,
>> Weilong
>>
>
> Looks good. But you need to remove the non-d01 code, like the #ifdef NETMAP
> blocks etc...
>
>
>
>>
>>
>> ___
>> lng-odp mailing list
>> lng-odp@lists.linaro.org
>> http://lists.linaro.org/mailman/listinfo/lng-odp
>>
>
>
> ___
> lng-odp mailing list
> lng-odp@lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/lng-odp
>
___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] linux-d01's rebase progress

2014-10-22 Thread Maxim Uvarov

On 10/22/2014 12:58 PM, Weilong Chen wrote:

Hi Anders

I pushed linux-d01 to git.linaro.org/people/weilong.chen/linux-d01.git


There are still some bugs to work out.
If you want to have a quick look at it, you can get it.

Thanks,
Weilong


Looks good. But you need to remove the non-d01 code, like the #ifdef NETMAP
blocks etc...







___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp



___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp


[lng-odp] linux-d01's rebase progress

2014-10-22 Thread Weilong Chen
Hi Anders

I pushed linux-d01 to git.linaro.org/people/weilong.chen/linux-d01.git
There are still some bugs to work out.
If you want to have a quick look at it, you can get it.

Thanks,
Weilong
___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp