[lng-odp] [PATCHv4 2/2] configure.ac update version numbers

2016-11-21 Thread Maxim Uvarov
Signed-off-by: Maxim Uvarov 
---
 configure.ac | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/configure.ac b/configure.ac
index be5a292..287e478 100644
--- a/configure.ac
+++ b/configure.ac
@@ -3,7 +3,7 @@ AC_PREREQ([2.5])
 # Set correct API version
 ##
 m4_define([odpapi_generation_version], [1])
-m4_define([odpapi_major_version], [11])
+m4_define([odpapi_major_version], [12])
 m4_define([odpapi_minor_version], [0])
 m4_define([odpapi_point_version], [0])
 m4_define([odpapi_version],
@@ -30,10 +30,10 @@ AM_SILENT_RULES([yes])
 ##
 # Set correct platform library version
 ##
-ODP_LIBSO_VERSION=111:0:0
+ODP_LIBSO_VERSION=111:1:0
 AC_SUBST(ODP_LIBSO_VERSION)
 
-ODPHELPER_LIBSO_VERSION=110:0:1
+ODPHELPER_LIBSO_VERSION=110:1:1
 AC_SUBST(ODPHELPER_LIBSO_VERSION)
 
 # Checks for programs.
-- 
2.7.1.250.gff4ea60



[lng-odp] [PATCHv4 1/2] changelog: summary of changes for odp v1.12.0.0

2016-11-21 Thread Maxim Uvarov
From: Bill Fischofer 

Signed-off-by: Bill Fischofer 
Signed-off-by: Maxim Uvarov 
---
 CHANGELOG | 186 ++
 1 file changed, 186 insertions(+)

diff --git a/CHANGELOG b/CHANGELOG
index 1d652a8..f4cce08 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,3 +1,189 @@
+== OpenDataPlane (1.12.0.0)
+
+=== New Features
+
+==== APIs
+ODP v1.12.0.0 has no API changes from the previous v1.11.0 Monarch LTS. The
+version is increased in the current development release to make room for
+Monarch update numbers.
+
+==== Application Binary Interface (ABI) Support
+Support is added to enable ODP applications to be binary compatible across
+different implementations of ODP sharing the same Instruction Set Architecture
+(ISA). This support introduces a new `configure` option:
+
+`--enable-abi-compat=yes`::
+This is the default and specifies that the ODP library is to be built to
+support ABI compatibility mode. In this mode ODP APIs are never inlined. ABI
+compatibility ensures maximum application portability in cloud environments.
+
+`--enable-abi-compat=no`::
+Specify this option to enable the inlining of ODP APIs. This may result in
+improved performance at the cost of ABI compatibility and is suitable for
+applications running in embedded environments.
+
+Note that ODP applications retain source code portability between ODP
+implementations regardless of the ABI mode chosen. To move an ODP application
+to a different ISA, the code need simply be recompiled against that target's
+ODP implementation.
+
+==== SCTP Parsing Support
+The ODP classifier adds support for recognizing Stream Control Transmission
+Protocol (SCTP) packets. The APIs for this were previously not implemented.
+
+=== Packaging and Implementation Refinements
+
+==== Remove dependency on Linux headers
+ODP no longer has a dependency on Linux headers. This will help make the
+odp-linux reference implementation more easily portable to non-Linux
+environments.
+
+==== Remove dependency on helpers
+The odp-linux implementation has been made independent of the helper library
+to avoid circular dependency issues with packaging. Helper functions may use
+ODP APIs; however, ODP implementations should not use helper functions.
+
+==== Reorganization of `test` directory
+The `test` directory has been reorganized to better support a unified approach
+to ODP component testing. API tests now live in
+`test/common_plat/validation/api` instead of the former
+`test/validation`. With this change, performance and validation tests, as
+well as common and platform-specific tests, can all be part of a unified
+test hierarchy.
+
+The resulting test tree now looks like:
+
+.New `test` directory hierarchy
+----
+test
+├── common_plat
+│   ├── common
+│   ├── m4
+│   ├── miscellaneous
+│   ├── performance
+│   └── validation
+│       └── api
+│           ├── atomic
+│           ├── barrier
+│           ├── buffer
+│           ├── classification
+│           ├── cpumask
+│           ├── crypto
+│           ├── errno
+│           ├── hash
+│           ├── init
+│           ├── lock
+│           ├── packet
+│           ├── pktio
+│           ├── pool
+│           ├── queue
+│           ├── random
+│           ├── scheduler
+│           ├── shmem
+│           ├── std_clib
+│           ├── system
+│           ├── thread
+│           ├── time
+│           ├── timer
+│           └── traffic_mngr
+├── linux-generic
+│   ├── m4
+│   ├── mmap_vlan_ins
+│   ├── performance
+│   ├── pktio_ipc
+│   ├── ring
+│   └── validation
+│       └── api
+│           ├── pktio
+│           └── shmem
+└── m4
+----
+
+==== Pools
+The maximum number of pools that may be created in the odp-linux reference
+implementation has been raised from 16 to 64.
+
+==== Upgrade to DPDK 16.07
+The DPDK pktio support in odp-linux has been upgraded to work with DPDK 16.07.
+A number of miscellaneous fixes and performance improvements in this support
+are also present.
+
+==== PktIO TAP Interface Classifier Support
+Packet I/O interfaces operating in TAP mode can now feed packets to the ODP
+classifier, just as the other pktio modes do.
+
+=== Performance Improvements
+
+==== Thread-local cache optimizations
+The thread-local buffer cache has been reorganized and optimized for burst-mode
+operation, yielding a measurable performance gain in almost all cases.
+
+==== Burst-mode buffer allocation
+The scheduler and pktio components have been reworked to use burst-mode
+buffer 
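
The ABI-compatibility option described in the changelog above boils down to
whether each API is an out-of-line function in the library or an inline
definition in the headers. A minimal sketch of the idea, using hypothetical
names rather than the actual ODP headers:

/* Hypothetical illustration of an --enable-abi-compat style build switch;
 * the type, function and macro names are invented for this sketch. */
#include <stdint.h>

typedef struct my_packet_t { uint32_t len; } my_packet_t;

#ifdef MY_ABI_COMPAT
/* ABI mode: only a declaration is visible to the application, so the
 * struct layout stays private to the library and binaries built against
 * one implementation keep working on another with the same ISA. */
uint32_t my_packet_len(const my_packet_t *pkt);
#else
/* Inline mode: resolved at compile time for speed, at the cost of baking
 * implementation details into the application binary. */
static inline uint32_t my_packet_len(const my_packet_t *pkt)
{
	return pkt->len;
}
#endif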

Re: [lng-odp] [PATCHv3 2/2] configure.ac update version numbers

2016-11-21 Thread Mike Holmes
On 21 November 2016 at 10:08, Maxim Uvarov  wrote:
> Signed-off-by: Maxim Uvarov 
> ---
>  configure.ac | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/configure.ac b/configure.ac
> index be5a292..3880b19 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -3,7 +3,7 @@ AC_PREREQ([2.5])
>  # Set correct API version
>  ##
>  m4_define([odpapi_generation_version], [1])
> -m4_define([odpapi_major_version], [11])
> +m4_define([odpapi_major_version], [12])
>  m4_define([odpapi_minor_version], [0])
>  m4_define([odpapi_point_version], [0])
>  m4_define([odpapi_version],
> @@ -30,10 +30,10 @@ AM_SILENT_RULES([yes])
>  ##
>  # Set correct platform library version
>  ##
> -ODP_LIBSO_VERSION=111:0:0
> +ODP_LIBSO_VERSION=111:0:1

http://www.ibm.com/developerworks/linux/library/l-shlibs/index.html

A major number indicates a potential incompatibility between library
versions, and I think from the release notes we have one, so I think
it is 112, is it not?

>  AC_SUBST(ODP_LIBSO_VERSION)
>
> -ODPHELPER_LIBSO_VERSION=110:0:1
> +ODPHELPER_LIBSO_VERSION=110:0:2
>  AC_SUBST(ODPHELPER_LIBSO_VERSION)
>
>  # Checks for programs.
> --
> 2.7.1.250.gff4ea60
>



-- 
Mike Holmes
Program Manager - Linaro Networking Group
Linaro.org │ Open source software for ARM SoCs
"Work should be fun and collaborative, the rest follows"


[lng-odp] [PATCHv3 1/2] changelog: summary of changes for odp v1.12.0.0

2016-11-21 Thread Maxim Uvarov
From: Bill Fischofer 

Signed-off-by: Bill Fischofer 
Signed-off-by: Maxim Uvarov 
---
 CHANGELOG | 211 ++
 1 file changed, 211 insertions(+)

diff --git a/CHANGELOG b/CHANGELOG
index 1d652a8..25d1375 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,3 +1,214 @@
+== OpenDataPlane (1.12.0.0)
+
+=== New Features
+
+==== APIs
+ODP v1.12.0.0 is a minor API revision beyond the Monarch Long Term Support
+(LTS) release and introduces a small change to the Traffic Manager API as
+well as some documentation clarifications surrounding other APIs.
+
+==== Traffic Manager Egress Function
+The `odp_tm_egress_t` struct has been changed to add an explicit Boolean
+(`egress_fcn_supported`) that indicates whether the TM system supports
+a user-provided egress function. When specified this function is called to
+"output" a packet rather than having TM transmit it directly to a PktIO
+interface.
+
+==== Traffic Manager Coupling Change
+The `odp_tm_egress_t` struct has been changed to associate a TM system with an
+output `odp_pktio_t` rather than the previous `odp_pktout_queue_t`. This makes
+an explicit 1-to-1 map between a TM system and a PktIO interface.
+
+==== Default huge page size clarification
+The documentation for the `odp_sys_huge_page_size()` API has been reworded to
+clarify that this API refers to the default huge page size.
+
+==== Strict Priority (SP) Scheduler
+Building on the modular scheduler framework added in v1.10.1.0, an alternate
+Strict Priority (SP) Scheduler is now available for latency-sensitive
+workloads. Applications wishing to use the SP scheduler should specify
+the `./configure` option `--enable-schedule-sp`. This scheduler emphasizes low
+latency processing of high priority events at the expense of throughput. This
+alternate scheduler is considered experimental and should not be used for
+production at this time.
+
+==== Application Binary Interface (ABI) Support
+Support is added to enable ODP applications to be binary compatible across
+different implementations of ODP sharing the same Instruction Set Architecture
+(ISA). This support introduces a new `configure` option:
+
+`--enable-abi-compat=yes`::
+This is the default and specifies that the ODP library is to be built to
+support ABI compatibility mode. In this mode ODP APIs are never inlined. ABI
+compatibility ensures maximum application portability in cloud environments.
+
+`--enable-abi-compat=no`::
+Specify this option to enable the inlining of ODP APIs. This may result in
+improved performance at the cost of ABI compatibility and is suitable for
+applications running in embedded environments.
+
+Note that ODP applications retain source code portability between ODP
+implementations regardless of the ABI mode chosen. To move an ODP application
+to a different ISA, the code need simply be recompiled against that target's
+ODP implementation.
+
+==== SCTP Parsing Support
+The ODP classifier adds support for recognizing Stream Control Transmission
+Protocol (SCTP) packets. The APIs for this were previously not implemented.
+
+=== Packaging and Implementation Refinements
+
+==== Remove dependency on Linux headers
+ODP no longer has a dependency on Linux headers. This will help make the
+odp-linux reference implementation more easily portable to non-Linux
+environments.
+
+==== Remove dependency on helpers
+The odp-linux implementation has been made independent of the helper library
+to avoid circular dependency issues with packaging. Helper functions may use
+ODP APIs; however, ODP implementations should not use helper functions.
+
+==== Reorganization of `test` directory
+The `test` directory has been reorganized to better support a unified approach
+to ODP component testing. API tests now live in
+`test/common_plat/validation/api` instead of the former
+`test/validation`. With this change, performance and validation tests, as
+well as common and platform-specific tests, can all be part of a unified
+test hierarchy.
+
+The resulting test tree now looks like:
+
+.New `test` directory hierarchy
+----
+test
+├── common_plat
+│   ├── common
+│   ├── m4
+│   ├── miscellaneous
+│   ├── performance
+│   └── validation
+│       └── api
+│           ├── atomic
+│           ├── barrier
+│           ├── buffer
+│           ├── classification
+│           ├── cpumask
+│           ├── crypto
+│           ├── errno
+│           ├── hash
+│           ├── init
+│           ├── lock
+│           ├── packet
+│           ├── pktio
+│           ├── pool
+│           ├── queue
+│           ├── random
+│           ├── scheduler
+│           ├── shmem
+│           ├── std_clib
+│           ├── system
+│           ├── thread
+│           ├── time
+│           ├── timer
+│           └── traffic_mngr
+├── linux-generic
+│   ├── m4
+│   ├── mmap_vlan_ins
+│   ├── performance
+│   ├── pktio_ipc
+│   ├── ring
+│   
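
The two Traffic Manager changes in the changelog above touch the same
struct; a minimal sketch of filling it in follows. Only
`egress_fcn_supported` and the coupling to an `odp_pktio_t` are taken from
the changelog text; the remaining field names, enum values and callback
signature are assumptions made purely for illustration.

/* Hedged sketch, not the authoritative odp_tm_egress_t layout. */
#include <odp_api.h>

static void my_egress_fcn(odp_packet_t pkt)
{
	/* User-provided "output": consume the packet instead of letting
	 * TM transmit it to a PktIO interface. */
	odp_packet_free(pkt);
}

static void setup_tm_egress(odp_tm_egress_t *egress, odp_pktio_t pktio)
{
	if (egress->egress_fcn_supported) {
		egress->egress_kind = ODP_TM_EGRESS_FN;     /* assumed name */
		egress->egress_fcn  = my_egress_fcn;        /* assumed name */
	} else {
		/* TM is now coupled to a whole PktIO interface
		 * (odp_pktio_t), not a single odp_pktout_queue_t. */
		egress->egress_kind = ODP_TM_EGRESS_PKT_IO; /* assumed name */
		egress->pktio       = pktio;                /* assumed name */
	}
}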

Re: [lng-odp] Content filtered message notification

2016-11-21 Thread Mike Holmes
All

The plain-text mode appears to be working; I was notified when I sent
HTML. Please monitor carefully for a few days.

Your post should end up here [1] if it made it to the list.

[1] https://lists.linaro.org/pipermail/lng-odp/2016-November/thread.html

On 21 November 2016 at 09:59,  wrote:
>
> The attached message matched the lng-odp mailing list's content
> filtering rules and was prevented from being forwarded on to the list
> membership.  You are receiving the only remaining copy of the
> discarded message.
>
>
>
> -- Forwarded message --
> From: Mike Holmes 
> To: lng-odp 
> Cc:
> Date: Mon, 21 Nov 2016 09:55:57 -0500
> Subject: html test post
>
> All
>
> This is an experiment to check that the list can notify HTML posters
> that the list requires text mail.
>
> Mike
>
> --
> Mike Holmes
> Program Manager - Linaro Networking Group
> Linaro.org │ Open source software for ARM SoCs
> "Work should be fun and collaborative, the rest follows"
>
>
>



-- 
Mike Holmes
Program Manager - Linaro Networking Group
Linaro.org │ Open source software for ARM SoCs
"Work should be fun and collaborative, the rest follows"


Re: [lng-odp] [API-NEXT PATCHv7 05/13] api: shm: add flags to shm_reserve and function to find external mem

2016-11-21 Thread Savolainen, Petri (Nokia - FI/Espoo)
Reviewed-by: Petri Savolainen 


> -Original Message-
> From: Christophe Milard [mailto:christophe.mil...@linaro.org]
> Sent: Thursday, November 17, 2016 5:46 PM
> To: Savolainen, Petri (Nokia - FI/Espoo)  labs.com>; mike.hol...@linaro.org; bill.fischo...@linaro.org; lng-
> o...@lists.linaro.org
> Cc: Christophe Milard 
> Subject: [API-NEXT PATCHv7 05/13] api: shm: add flags to shm_reserve and
> function to find external mem
> 
> The ODP_SHM_SINGLE_VA flag is created: when set (at odp_shm_reserve()),
> this flag guarantees that all ODP threads sharing this memory
> block will see the block at the same address (regardless of ODP
> thread type - pthread vs. process - or fork time).
> 
> The flag ODP_SHM_EXPORT is added: when passed at odp_shm_reserve() time,
> the memory block becomes visible to other ODP instances.
> The function odp_shm_import() is added: this function enables reserving
> memory blocks exported by other ODP instances (using the
> ODP_SHM_EXPORT flag).
> 
> Signed-off-by: Christophe Milard 
> ---
>  include/odp/api/spec/shared_memory.h | 46
> 
>  1 file changed, 41 insertions(+), 5 deletions(-)
> 
> diff --git a/include/odp/api/spec/shared_memory.h
> b/include/odp/api/spec/shared_memory.h
> index 8c76807..885751d 100644
> --- a/include/odp/api/spec/shared_memory.h
> +++ b/include/odp/api/spec/shared_memory.h
> @@ -14,6 +14,7 @@
>  #ifndef ODP_API_SHARED_MEMORY_H_
>  #define ODP_API_SHARED_MEMORY_H_
>  #include 
> +#include 
> 
>  #ifdef __cplusplus
>  extern "C" {
> @@ -43,12 +44,25 @@ extern "C" {
>  #define ODP_SHM_NAME_LEN 32
> 
>  /*
> - * Shared memory flags
> + * Shared memory flags:
>   */
> -
> -/* Share level */
> -#define ODP_SHM_SW_ONLY 0x1 /**< Application SW only, no HW access */
> -#define ODP_SHM_PROC0x2 /**< Share with external processes */
> +#define ODP_SHM_SW_ONLY  0x1 /**< Application SW only, no HW
> access   */
> +#define ODP_SHM_PROC 0x2 /**< Share with external processes
> */
> +/**
> + * Single virtual address
> + *
> + * When set, this flag guarantees that all ODP threads sharing this
> + * memory block will see the block at the same address - regardless
> + * of ODP thread type (e.g. pthread vs. process (or fork process time)).
> + */
> +#define ODP_SHM_SINGLE_VA0x4
> +/**
> + * Export memory
> + *
> + * When set, the memory block becomes visible to other ODP instances
> + * through odp_shm_import().
> + */
> +#define ODP_SHM_EXPORT   0x08
> 
>  /**
>   * Shared memory block info
> @@ -135,6 +149,28 @@ int odp_shm_free(odp_shm_t shm);
>   */
>  odp_shm_t odp_shm_lookup(const char *name);
> 
> +/**
> + * Import a block of shared memory, exported by another ODP instance
> + *
> + * This call creates a new handle for accessing a shared memory block
> created
> + * (with ODP_SHM_EXPORT flag) by another ODP instance. An instance may
> have
> + * only a single handle to the same block. Application must not access
> the
> + * block after freeing the handle. When an imported handle is freed, only
> + * the calling instance is affected. The exported block may be freed only
> + * after all other instances have stopped accessing the block.
> + *
> + * @param remote_name  Name of the block, in the remote ODP instance
> + * @param odp_inst Remote ODP instance, as returned by
> odp_init_global()
> + * @param local_name   Name given to the block, in the local ODP instance
> + *  May be NULL, if the application doesn't need a name
> + *  (for a lookup).
> + *
> + * @return A handle to access a block exported by another ODP instance.
> + * @retval ODP_SHM_INVALID on failure
> + */
> +odp_shm_t odp_shm_import(const char *remote_name,
> +  odp_instance_t odp_inst,
> +  const char *local_name);
> 
>  /**
>   * Shared memory block address
> --
> 2.7.4
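
Taken together, the new flag and function support sharing a block between
ODP instances. A minimal usage sketch based on the API documentation
quoted above (block names and sizes are arbitrary):

#include <odp_api.h>

#define BLOCK_SIZE (64 * 1024)

/* Instance A: reserve a block that other ODP instances may map. */
odp_shm_t export_block(void)
{
	return odp_shm_reserve("shared_tbl", BLOCK_SIZE,
			       ODP_CACHE_LINE_SIZE, ODP_SHM_EXPORT);
}

/* Instance B: create a local handle for A's block. odp_inst identifies
 * the remote instance, as returned by its odp_init_global(). */
odp_shm_t import_block(odp_instance_t odp_inst)
{
	return odp_shm_import("shared_tbl", odp_inst, "shared_tbl_local");
}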



[lng-odp] [PATCHv3 2/2] configure.ac update version numbers

2016-11-21 Thread Maxim Uvarov
Signed-off-by: Maxim Uvarov 
---
 configure.ac | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/configure.ac b/configure.ac
index be5a292..3880b19 100644
--- a/configure.ac
+++ b/configure.ac
@@ -3,7 +3,7 @@ AC_PREREQ([2.5])
 # Set correct API version
 ##
 m4_define([odpapi_generation_version], [1])
-m4_define([odpapi_major_version], [11])
+m4_define([odpapi_major_version], [12])
 m4_define([odpapi_minor_version], [0])
 m4_define([odpapi_point_version], [0])
 m4_define([odpapi_version],
@@ -30,10 +30,10 @@ AM_SILENT_RULES([yes])
 ##
 # Set correct platform library version
 ##
-ODP_LIBSO_VERSION=111:0:0
+ODP_LIBSO_VERSION=111:0:1
 AC_SUBST(ODP_LIBSO_VERSION)
 
-ODPHELPER_LIBSO_VERSION=110:0:1
+ODPHELPER_LIBSO_VERSION=110:0:2
 AC_SUBST(ODPHELPER_LIBSO_VERSION)
 
 # Checks for programs.
-- 
2.7.1.250.gff4ea60



[lng-odp] [PATCHv3 0/2] changelog: summary of changes for odp v1.12.0.0

2016-11-21 Thread Maxim Uvarov
Updated Bill's patch to the 1.12 version and also updated the .so version numbers.

Bill Fischofer (1):
  changelog: summary of changes for odp v1.12.0.0

Maxim Uvarov (1):
  configure.ac update version numbers

 CHANGELOG| 211 +++
 configure.ac |   6 +-
 2 files changed, 214 insertions(+), 3 deletions(-)

-- 
2.7.1.250.gff4ea60



[lng-odp] [API-NEXT PATCH v4 05/23] linux-gen: pool: reimplement pool with ring

2016-11-21 Thread Petri Savolainen
Used the ring data structure to implement the pool. The buffer
structure was also simplified to enable a future driver
interface. Every buffer includes a packet header, so each
buffer can be used as a packet head or segment. Segmentation
was disabled and the segment size was fixed to a large number
(64kB) to limit the number of modifications in this commit.

Signed-off-by: Petri Savolainen 
---
 .../include/odp/api/plat/pool_types.h  |6 -
 .../linux-generic/include/odp_buffer_inlines.h |  160 +--
 .../linux-generic/include/odp_buffer_internal.h|  104 +-
 .../include/odp_classification_datamodel.h |2 +-
 .../linux-generic/include/odp_config_internal.h|   34 +-
 .../linux-generic/include/odp_packet_internal.h|   13 +-
 platform/linux-generic/include/odp_pool_internal.h |  270 +---
 .../linux-generic/include/odp_timer_internal.h |4 -
 platform/linux-generic/odp_buffer.c|8 -
 platform/linux-generic/odp_classification.c|   25 +-
 platform/linux-generic/odp_crypto.c|4 +-
 platform/linux-generic/odp_packet.c|   99 +-
 platform/linux-generic/odp_pool.c  | 1441 
 platform/linux-generic/odp_timer.c |1 +
 platform/linux-generic/pktio/socket.c  |   16 +-
 platform/linux-generic/pktio/socket_mmap.c |   10 +-
 test/common_plat/performance/odp_pktio_perf.c  |2 +-
 test/common_plat/performance/odp_scheduling.c  |8 +-
 test/common_plat/validation/api/packet/packet.c|8 +-
 19 files changed, 746 insertions(+), 1469 deletions(-)

diff --git a/platform/linux-generic/include/odp/api/plat/pool_types.h 
b/platform/linux-generic/include/odp/api/plat/pool_types.h
index 1ca8f02..4e39de5 100644
--- a/platform/linux-generic/include/odp/api/plat/pool_types.h
+++ b/platform/linux-generic/include/odp/api/plat/pool_types.h
@@ -39,12 +39,6 @@ typedef enum odp_pool_type_t {
ODP_POOL_TIMEOUT = ODP_EVENT_TIMEOUT,
 } odp_pool_type_t;
 
-/** Get printable format of odp_pool_t */
-static inline uint64_t odp_pool_to_u64(odp_pool_t hdl)
-{
-   return _odp_pri(hdl);
-}
-
 /**
  * @}
  */
diff --git a/platform/linux-generic/include/odp_buffer_inlines.h 
b/platform/linux-generic/include/odp_buffer_inlines.h
index 2b1ab42..2f5eb88 100644
--- a/platform/linux-generic/include/odp_buffer_inlines.h
+++ b/platform/linux-generic/include/odp_buffer_inlines.h
@@ -18,43 +18,20 @@ extern "C" {
 #endif
 
 #include 
-#include 
 
-static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t *hdr)
-{
-   odp_buffer_bits_t handle;
-   uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl);
-   struct pool_entry_s *pool = get_pool_entry(pool_id);
+odp_event_type_t _odp_buffer_event_type(odp_buffer_t buf);
+void _odp_buffer_event_type_set(odp_buffer_t buf, int ev);
+int odp_buffer_snprint(char *str, uint32_t n, odp_buffer_t buf);
 
-   handle.handle = 0;
-   handle.pool_id = pool_id;
-   handle.index = ((uint8_t *)hdr - pool->pool_mdata_addr) /
-   ODP_CACHE_LINE_SIZE;
-   handle.seg = 0;
-
-   return handle.handle;
-}
+void *buffer_map(odp_buffer_hdr_t *buf, uint32_t offset, uint32_t *seglen,
+uint32_t limit);
 
 static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr)
 {
return hdr->handle.handle;
 }
 
-static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf)
-{
-   odp_buffer_bits_t handle;
-   uint32_t pool_id;
-   uint32_t index;
-   struct pool_entry_s *pool;
-
-   handle.handle = buf;
-   pool_id   = handle.pool_id;
-   index = handle.index;
-   pool  = get_pool_entry(pool_id);
-
-   return (odp_buffer_hdr_t *)(void *)
-   (pool->pool_mdata_addr + (index * ODP_CACHE_LINE_SIZE));
-}
+odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf);
 
 static inline uint32_t pool_id_from_buf(odp_buffer_t buf)
 {
@@ -64,131 +41,6 @@ static inline uint32_t pool_id_from_buf(odp_buffer_t buf)
return handle.pool_id;
 }
 
-static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf)
-{
-   odp_buffer_bits_t handle;
-   odp_buffer_hdr_t *buf_hdr;
-   handle.handle = buf;
-
-   /* For buffer handles, segment index must be 0 and pool id in range */
-   if (handle.seg != 0 || handle.pool_id >= ODP_CONFIG_POOLS)
-   return NULL;
-
-   pool_entry_t *pool =
-   odp_pool_to_entry(_odp_cast_scalar(odp_pool_t,
-  handle.pool_id));
-
-   /* If pool not created, handle is invalid */
-   if (pool->s.pool_shm == ODP_SHM_INVALID)
-   return NULL;
-
-   uint32_t buf_stride = pool->s.buf_stride / ODP_CACHE_LINE_SIZE;
-
-   /* A valid buffer index must be on stride, and must be in range */
-   if ((handle.index % buf_stride != 0) ||
-   ((uint32_t)(handle.index / 

[lng-odp] [API-NEXT PATCH v4 15/23] linux-gen: packet: added support for segmented packets

2016-11-21 Thread Petri Savolainen
Added support for multi-segmented packets. The first segment
is the packet descriptor, which contains all metadata and
pointers to the other segments.

Signed-off-by: Petri Savolainen 
---
 .../include/odp/api/plat/packet_types.h|   6 +-
 .../linux-generic/include/odp_buffer_inlines.h |  11 -
 .../linux-generic/include/odp_buffer_internal.h|  23 +-
 .../linux-generic/include/odp_config_internal.h|  39 +-
 .../linux-generic/include/odp_packet_internal.h|  80 +--
 platform/linux-generic/include/odp_pool_internal.h |   3 -
 platform/linux-generic/odp_buffer.c|   8 +-
 platform/linux-generic/odp_crypto.c|   8 +-
 platform/linux-generic/odp_packet.c| 712 +
 platform/linux-generic/odp_pool.c  | 123 ++--
 platform/linux-generic/pktio/netmap.c  |   4 +-
 platform/linux-generic/pktio/socket.c  |   3 +-
 12 files changed, 692 insertions(+), 328 deletions(-)

diff --git a/platform/linux-generic/include/odp/api/plat/packet_types.h 
b/platform/linux-generic/include/odp/api/plat/packet_types.h
index b5345ed..864494d 100644
--- a/platform/linux-generic/include/odp/api/plat/packet_types.h
+++ b/platform/linux-generic/include/odp/api/plat/packet_types.h
@@ -32,9 +32,11 @@ typedef ODP_HANDLE_T(odp_packet_t);
 
 #define ODP_PACKET_OFFSET_INVALID (0x0fff)
 
-typedef ODP_HANDLE_T(odp_packet_seg_t);
+/* A packet segment handle stores a small index. Strong type handles are
+ * pointers, which would be wasteful in this case. */
+typedef uint8_t odp_packet_seg_t;
 
-#define ODP_PACKET_SEG_INVALID _odp_cast_scalar(odp_packet_seg_t, 0x)
+#define ODP_PACKET_SEG_INVALID ((odp_packet_seg_t)-1)
 
 /** odp_packet_color_t assigns names to the various pkt "colors" */
 typedef enum {
diff --git a/platform/linux-generic/include/odp_buffer_inlines.h 
b/platform/linux-generic/include/odp_buffer_inlines.h
index f8688f6..cf817d9 100644
--- a/platform/linux-generic/include/odp_buffer_inlines.h
+++ b/platform/linux-generic/include/odp_buffer_inlines.h
@@ -23,22 +23,11 @@ odp_event_type_t _odp_buffer_event_type(odp_buffer_t buf);
 void _odp_buffer_event_type_set(odp_buffer_t buf, int ev);
 int odp_buffer_snprint(char *str, uint32_t n, odp_buffer_t buf);
 
-void *buffer_map(odp_buffer_hdr_t *buf, uint32_t offset, uint32_t *seglen,
-uint32_t limit);
-
 static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr)
 {
return hdr->handle.handle;
 }
 
-static inline uint32_t pool_id_from_buf(odp_buffer_t buf)
-{
-   odp_buffer_bits_t handle;
-
-   handle.handle = buf;
-   return handle.pool_id;
-}
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/platform/linux-generic/include/odp_buffer_internal.h 
b/platform/linux-generic/include/odp_buffer_internal.h
index 0ca13f8..4e75908 100644
--- a/platform/linux-generic/include/odp_buffer_internal.h
+++ b/platform/linux-generic/include/odp_buffer_internal.h
@@ -33,10 +33,6 @@ extern "C" {
 #include 
 #include 
 
-ODP_STATIC_ASSERT(ODP_CONFIG_PACKET_SEG_LEN_MIN >= 256,
- "ODP Segment size must be a minimum of 256 bytes");
-
-
 typedef union odp_buffer_bits_t {
odp_buffer_t handle;
 
@@ -65,6 +61,20 @@ struct odp_buffer_hdr_t {
int burst_first;
struct odp_buffer_hdr_t *burst[BUFFER_BURST_SIZE];
 
+   struct {
+   void *hdr;
+   uint8_t  *data;
+   uint32_t  len;
+   } seg[CONFIG_PACKET_MAX_SEGS];
+
+   /* max data size */
+   uint32_t size;
+
+   /* Initial buffer data pointer and length */
+   void *base_data;
+   uint32_t  base_len;
+   uint8_t  *buf_end;
+
union {
uint32_t all;
struct {
@@ -75,7 +85,6 @@ struct odp_buffer_hdr_t {
 
int8_t   type;   /* buffer type */
odp_event_type_t event_type; /* for reuse as event */
-   uint32_t size;   /* max data size */
odp_pool_t   pool_hdl;   /* buffer pool handle */
union {
uint64_t buf_u64;/* user u64 */
@@ -86,8 +95,6 @@ struct odp_buffer_hdr_t {
uint32_t uarea_size; /* size of user area */
uint32_t segcount;   /* segment count */
uint32_t segsize;/* segment size */
-   /* block addrs */
-   void*addr[ODP_CONFIG_PACKET_MAX_SEGS];
uint64_t order;  /* sequence for ordered queues */
queue_entry_t   *origin_qe;  /* ordered queue origin */
union {
@@ -105,8 +112,6 @@ struct odp_buffer_hdr_t {
 };
 
 /* Forward declarations */
-int seg_alloc_head(odp_buffer_hdr_t *buf_hdr, int segcount);
-void seg_free_head(odp_buffer_hdr_t *buf_hdr, int segcount);
 int seg_alloc_tail(odp_buffer_hdr_t *buf_hdr, int segcount);
 void 

[lng-odp] [API-NEXT PATCH v4 08/23] linux-gen: pool: optimize buffer alloc

2016-11-21 Thread Petri Savolainen
Round up global pool allocations to a burst size. Cache any
extra buffers for future use. Prefetch buffer headers that were
newly allocated from the global pool and will be returned to
the caller.

Signed-off-by: Petri Savolainen 
---
 .../linux-generic/include/odp_buffer_internal.h|  3 +-
 platform/linux-generic/odp_packet.c| 16 +++--
 platform/linux-generic/odp_pool.c  | 74 --
 3 files changed, 68 insertions(+), 25 deletions(-)

diff --git a/platform/linux-generic/include/odp_buffer_internal.h 
b/platform/linux-generic/include/odp_buffer_internal.h
index abe8591..64ba221 100644
--- a/platform/linux-generic/include/odp_buffer_internal.h
+++ b/platform/linux-generic/include/odp_buffer_internal.h
@@ -105,7 +105,8 @@ struct odp_buffer_hdr_t {
 };
 
 /* Forward declarations */
-int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], int num);
+int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[],
+  odp_buffer_hdr_t *buf_hdr[], int num);
 void buffer_free_multi(const odp_buffer_t buf[], int num_free);
 
 int seg_alloc_head(odp_buffer_hdr_t *buf_hdr, int segcount);
diff --git a/platform/linux-generic/odp_packet.c 
b/platform/linux-generic/odp_packet.c
index 6df1c5b..6565a5d 100644
--- a/platform/linux-generic/odp_packet.c
+++ b/platform/linux-generic/odp_packet.c
@@ -80,14 +80,16 @@ static void packet_init(pool_t *pool, odp_packet_hdr_t 
*pkt_hdr,
 int packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len,
   odp_packet_t pkt[], int max_num)
 {
-   odp_packet_hdr_t *pkt_hdr;
pool_t *pool = odp_pool_to_entry(pool_hdl);
int num, i;
+   odp_packet_hdr_t *pkt_hdrs[max_num];
 
-   num = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt, max_num);
+   num = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt,
+(odp_buffer_hdr_t **)pkt_hdrs, max_num);
 
for (i = 0; i < num; i++) {
-   pkt_hdr = odp_packet_hdr(pkt[i]);
+   odp_packet_hdr_t *pkt_hdr = pkt_hdrs[i];
+
packet_init(pool, pkt_hdr, len, 1 /* do parse */);
 
if (pkt_hdr->tailroom >= pkt_hdr->buf_hdr.segsize)
@@ -113,7 +115,7 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t 
len)
if (odp_unlikely(len > pool->max_len))
return ODP_PACKET_INVALID;
 
-   ret = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)&pkt, 1);
+   ret = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)&pkt, NULL, 1);
if (ret != 1)
return ODP_PACKET_INVALID;
 
@@ -134,6 +136,7 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t 
len,
pool_t *pool = odp_pool_to_entry(pool_hdl);
size_t pkt_size = len ? len : pool->data_size;
int count, i;
+   odp_packet_hdr_t *pkt_hdrs[num];
 
if (odp_unlikely(pool->params.type != ODP_POOL_PACKET)) {
__odp_errno = EINVAL;
@@ -143,10 +146,11 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t 
len,
if (odp_unlikely(len > pool->max_len))
return -1;
 
-   count = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt, num);
+   count = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt,
+  (odp_buffer_hdr_t **)pkt_hdrs, num);
 
for (i = 0; i < count; ++i) {
-   odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt[i]);
+   odp_packet_hdr_t *pkt_hdr = pkt_hdrs[i];
 
packet_init(pool, pkt_hdr, pkt_size, 0 /* do not parse */);
if (len == 0)
diff --git a/platform/linux-generic/odp_pool.c 
b/platform/linux-generic/odp_pool.c
index a2e5d54..7dc0938 100644
--- a/platform/linux-generic/odp_pool.c
+++ b/platform/linux-generic/odp_pool.c
@@ -562,14 +562,14 @@ int odp_pool_info(odp_pool_t pool_hdl, odp_pool_info_t 
*info)
return 0;
 }
 
-int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], int max_num)
+int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[],
+  odp_buffer_hdr_t *buf_hdr[], int max_num)
 {
pool_t *pool;
ring_t *ring;
-   uint32_t mask;
-   int i;
+   uint32_t mask, i;
pool_cache_t *cache;
-   uint32_t cache_num;
+   uint32_t cache_num, num_ch, num_deq, burst;
 
pool  = pool_entry_from_hdl(pool_hdl);
	ring  = &pool->ring.hdr;
@@ -577,28 +577,66 @@ int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t 
buf[], int max_num)
cache = local.cache[_odp_typeval(pool_hdl)];
 
cache_num = cache->num;
+   num_ch= max_num;
+   num_deq   = 0;
+   burst = CACHE_BURST;
 
-   if (odp_likely((int)cache_num >= max_num)) {
-   for (i = 0; i < max_num; i++)
-   buf[i] = cache->buf[cache_num - max_num + i];
+   if (odp_unlikely(cache_num < (uint32_t)max_num)) {
+   /* Cache does not have enough buffers 
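
The allocation path in this patch serves requests from the thread-local
cache first and refills from the global pool a burst at a time, stashing
the surplus. A simplified standalone sketch of that logic (the types and
the ring stub are invented here, not the odp-linux internals):

#include <stdint.h>

#define CACHE_BURST 32

typedef struct {
	uint32_t num;        /* number of cached buffer handles */
	uint32_t buf[256];   /* thread-local stash */
} cache_t;

/* Trivial stand-in for the global pool ring: hands out dummy handles. */
static uint32_t ring_deq_multi(uint32_t out[], uint32_t num)
{
	static uint32_t next;
	uint32_t i;

	for (i = 0; i < num; i++)
		out[i] = next++;
	return num;
}

/* Allocate 'num' handles, touching the global ring in bursts only.
 * Assumes requests stay well under the stash size. */
static uint32_t alloc_multi(cache_t *cache, uint32_t out[], uint32_t num)
{
	uint32_t i;

	if (cache->num < num) {
		/* Refill: round the global dequeue up to a burst and
		 * keep whatever this request does not consume. */
		uint32_t got = ring_deq_multi(&cache->buf[cache->num],
					      CACHE_BURST);

		cache->num += got;
		if (cache->num < num)
			num = cache->num;    /* partial allocation */
	}

	for (i = 0; i < num; i++)
		out[i] = cache->buf[cache->num - num + i];

	cache->num -= num;
	return num;
}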

[lng-odp] [API-NEXT PATCH v4 03/23] linux-gen: ring: created common ring implementation

2016-11-21 Thread Petri Savolainen
Moved the scheduler ring code into a new header file, so that
it can also be used in other parts of the implementation.
Signed-off-by: Petri Savolainen 
---
 platform/linux-generic/Makefile.am |   1 +
 platform/linux-generic/include/odp_ring_internal.h | 111 +
 platform/linux-generic/odp_schedule.c  | 102 ++-
 3 files changed, 120 insertions(+), 94 deletions(-)
 create mode 100644 platform/linux-generic/include/odp_ring_internal.h

diff --git a/platform/linux-generic/Makefile.am 
b/platform/linux-generic/Makefile.am
index 19dc0ba..b60eacb 100644
--- a/platform/linux-generic/Makefile.am
+++ b/platform/linux-generic/Makefile.am
@@ -151,6 +151,7 @@ noinst_HEADERS = \
  ${srcdir}/include/odp_pool_internal.h \
  ${srcdir}/include/odp_posix_extensions.h \
  ${srcdir}/include/odp_queue_internal.h \
+ ${srcdir}/include/odp_ring_internal.h \
  ${srcdir}/include/odp_schedule_if.h \
  ${srcdir}/include/odp_schedule_internal.h \
  ${srcdir}/include/odp_schedule_ordered_internal.h \
diff --git a/platform/linux-generic/include/odp_ring_internal.h 
b/platform/linux-generic/include/odp_ring_internal.h
new file mode 100644
index 000..6a6291a
--- /dev/null
+++ b/platform/linux-generic/include/odp_ring_internal.h
@@ -0,0 +1,111 @@
+/* Copyright (c) 2016, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#ifndef ODP_RING_INTERNAL_H_
+#define ODP_RING_INTERNAL_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include 
+#include 
+#include 
+
+/* Ring empty, not a valid data value. */
+#define RING_EMPTY ((uint32_t)-1)
+
+/* Ring of uint32_t data
+ *
+ * Ring stores head and tail counters. Ring indexes are formed from these
+ * counters with a mask (mask = ring_size - 1), which requires that ring size
+ * must be a power of two. Also ring size must be larger than the maximum
+ * number of data items that will be stored on it (there's no check against
+ * overwriting). */
+typedef struct {
+   /* Writer head and tail */
+   odp_atomic_u32_t w_head;
+   odp_atomic_u32_t w_tail;
+   uint8_t pad[ODP_CACHE_LINE_SIZE - (2 * sizeof(odp_atomic_u32_t))];
+
+   /* Reader head and tail */
+   odp_atomic_u32_t r_head;
+   odp_atomic_u32_t r_tail;
+
+   uint32_t data[0];
+} ring_t ODP_ALIGNED_CACHE;
+
+/* Initialize ring */
+static inline void ring_init(ring_t *ring)
+{
+   odp_atomic_init_u32(&ring->w_head, 0);
+   odp_atomic_init_u32(&ring->w_tail, 0);
+   odp_atomic_init_u32(&ring->r_head, 0);
+   odp_atomic_init_u32(&ring->r_tail, 0);
+}
+
+/* Dequeue data from the ring head */
+static inline uint32_t ring_deq(ring_t *ring, uint32_t mask)
+{
+   uint32_t head, tail, new_head;
+   uint32_t data;
+
+   head = odp_atomic_load_u32(&ring->r_head);
+
+   /* Move reader head. This thread owns data at the new head. */
+   do {
+   tail = odp_atomic_load_u32(&ring->w_tail);
+
+   if (head == tail)
+   return RING_EMPTY;
+
+   new_head = head + 1;
+
+   } while (odp_unlikely(odp_atomic_cas_acq_u32(&ring->r_head, &head,
+ new_head) == 0));
+
+   /* Read queue index */
+   data = ring->data[new_head & mask];
+
+   /* Wait until other readers have updated the tail */
+   while (odp_unlikely(odp_atomic_load_acq_u32(&ring->r_tail) != head))
+   odp_cpu_pause();
+
+   /* Now update the reader tail */
+   odp_atomic_store_rel_u32(&ring->r_tail, new_head);
+
+   return data;
+}
+
+/* Enqueue data into the ring tail */
+static inline void ring_enq(ring_t *ring, uint32_t mask, uint32_t data)
+{
+   uint32_t old_head, new_head;
+
+   /* Reserve a slot in the ring for writing */
+   old_head = odp_atomic_fetch_inc_u32(&ring->w_head);
+   new_head = old_head + 1;
+
+   /* Ring is full. Wait for the last reader to finish. */
+   while (odp_unlikely(odp_atomic_load_acq_u32(&ring->r_tail) == new_head))
+   odp_cpu_pause();
+
+   /* Write data */
+   ring->data[new_head & mask] = data;
+
+   /* Wait until other writers have updated the tail */
+   while (odp_unlikely(odp_atomic_load_acq_u32(&ring->w_tail) != old_head))
+   odp_cpu_pause();
+
+   /* Now update the writer tail */
+   odp_atomic_store_rel_u32(&ring->w_tail, new_head);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/platform/linux-generic/odp_schedule.c 
b/platform/linux-generic/odp_schedule.c
index 86b1cec..dfc9555 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -17,12 +17,12 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
 #include 
 #include 
+#include 
 
 /* Number of priority levels  */
 #define NUM_PRIO 8
@@ -82,9 +82,6 @@ ODP_STATIC_ASSERT((ODP_SCHED_PRIO_NORMAL > 0) &&
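
Usage of the new ring header reduces to placing the counters and a
power-of-two data array in one allocation. A minimal sketch (the shm name
and size are arbitrary; per the header comment above, the ring size must
exceed the number of items ever stored in it):

#include <odp_api.h>
#include <odp_ring_internal.h>

#define RING_SIZE 1024   /* must be a power of two */

void ring_example(void)
{
	/* ring_t ends in data[0], so counters and data share one block. */
	odp_shm_t shm = odp_shm_reserve("ring_example",
					sizeof(ring_t) +
					RING_SIZE * sizeof(uint32_t),
					ODP_CACHE_LINE_SIZE, 0);
	ring_t *ring = odp_shm_addr(shm);
	uint32_t data;

	ring_init(ring);

	ring_enq(ring, RING_SIZE - 1, 7);      /* store one value */
	data = ring_deq(ring, RING_SIZE - 1);  /* returns 7 */

	if (data == RING_EMPTY) {
		/* not reached here; RING_EMPTY signals an empty ring */
	}

	odp_shm_free(shm);
}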
 

[lng-odp] [API-NEXT PATCH v4 12/23] test: performance: crypto: use capability to select max packet

2016-11-21 Thread Petri Savolainen
Applications must use the pool capability to check maximum values
for parameters. Used the maximum segment length, since the application
appears to support only single-segment packets.

Signed-off-by: Petri Savolainen 
---
 test/common_plat/performance/odp_crypto.c | 47 +++
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/test/common_plat/performance/odp_crypto.c 
b/test/common_plat/performance/odp_crypto.c
index 49a9f4b..39df78b 100644
--- a/test/common_plat/performance/odp_crypto.c
+++ b/test/common_plat/performance/odp_crypto.c
@@ -23,15 +23,10 @@
fprintf(stderr, "%s:%d:%s(): Error: " fmt, __FILE__, \
__LINE__, __func__, ##__VA_ARGS__)
 
-/** @def SHM_PKT_POOL_SIZE
- * @brief Size of the shared memory block
+/** @def POOL_NUM_PKT
+ * Number of packets in the pool
  */
-#define SHM_PKT_POOL_SIZE  (512 * 2048 * 2)
-
-/** @def SHM_PKT_POOL_BUF_SIZE
- * @brief Buffer size of the packet pool buffer
- */
-#define SHM_PKT_POOL_BUF_SIZE  (1024 * 32)
+#define POOL_NUM_PKT  64
 
 static uint8_t test_iv[8] = "01234567";
 
@@ -165,9 +160,7 @@ static void parse_args(int argc, char *argv[], 
crypto_args_t *cargs);
 static void usage(char *progname);
 
 /**
- * Set of predefined payloads. Make sure that maximum payload
- * size is not bigger than SHM_PKT_POOL_BUF_SIZE. May relax when
- * implementation start support segmented buffers/packets.
+ * Set of predefined payloads.
  */
 static unsigned int payloads[] = {
16,
@@ -178,6 +171,9 @@ static unsigned int payloads[] = {
16384
 };
 
+/** Number of payloads used in the test */
+static unsigned num_payloads;
+
 /**
  * Set of known algorithms to test
  */
@@ -680,12 +676,10 @@ run_measure_one_config(crypto_args_t *cargs,
 config, );
}
} else {
-   unsigned int i;
+   unsigned i;
 
print_result_header();
-   for (i = 0;
-i < (sizeof(payloads) / sizeof(unsigned int));
-i++) {
+   for (i = 0; i < num_payloads; i++) {
rc = run_measure_one(cargs, config, ,
 payloads[i], );
if (rc)
@@ -728,6 +722,9 @@ int main(int argc, char *argv[])
int num_workers = 1;
odph_odpthread_t thr[num_workers];
odp_instance_t instance;
+   odp_pool_capability_t capa;
+   uint32_t max_seg_len;
+   unsigned i;
 
	memset(&cargs, 0, sizeof(cargs));
 
@@ -743,11 +740,25 @@ int main(int argc, char *argv[])
/* Init this thread */
odp_init_local(instance, ODP_THREAD_WORKER);
 
+   if (odp_pool_capability(&capa)) {
+   app_err("Pool capability request failed.\n");
+   exit(EXIT_FAILURE);
+   }
+
+   max_seg_len = capa.pkt.max_seg_len;
+
+   for (i = 0; i < sizeof(payloads) / sizeof(unsigned int); i++) {
+   if (payloads[i] > max_seg_len)
+   break;
+   }
+
+   num_payloads = i;
+
/* Create packet pool */
	odp_pool_param_init(&params);
-   params.pkt.seg_len = SHM_PKT_POOL_BUF_SIZE;
-   params.pkt.len = SHM_PKT_POOL_BUF_SIZE;
-   params.pkt.num = SHM_PKT_POOL_SIZE / SHM_PKT_POOL_BUF_SIZE;
+   params.pkt.seg_len = max_seg_len;
+   params.pkt.len = max_seg_len;
+   params.pkt.num = POOL_NUM_PKT;
params.type= ODP_POOL_PACKET;
pool = odp_pool_create("packet_pool", );
 
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 09/23] linux-gen: pool: clean up pool inlines functions

2016-11-21 Thread Petri Savolainen
Removed odp_pool_to_entry(), which was a duplicate of
pool_entry_from_hdl(). Renamed odp_buf_to_hdr() to
buf_hdl_to_hdr(), which more accurately describes this internal
function. Inlined pool_entry(), pool_entry_from_hdl() and
buf_hdl_to_hdr(), which are used often, also outside of
pool.c. Renamed odp_buffer_pool_headroom() and _tailroom() to
simply pool_headroom() and _tailroom(), since those are internal
functions (not API, as the previous names hinted). Also moved them
into pool.c, since inlining is not needed for functions that are
called only in the (netmap) init phase.

Signed-off-by: Petri Savolainen 
---
 .../linux-generic/include/odp_buffer_inlines.h |  2 -
 .../linux-generic/include/odp_packet_internal.h|  2 +-
 platform/linux-generic/include/odp_pool_internal.h | 36 ---
 platform/linux-generic/odp_buffer.c|  6 +--
 platform/linux-generic/odp_packet.c|  8 ++--
 platform/linux-generic/odp_packet_io.c |  2 +-
 platform/linux-generic/odp_pool.c  | 54 ++
 platform/linux-generic/odp_queue.c |  4 +-
 platform/linux-generic/odp_schedule_ordered.c  |  4 +-
 platform/linux-generic/odp_timer.c |  2 +-
 platform/linux-generic/pktio/loop.c|  2 +-
 platform/linux-generic/pktio/netmap.c  |  4 +-
 platform/linux-generic/pktio/socket_mmap.c |  2 +-
 13 files changed, 62 insertions(+), 66 deletions(-)

diff --git a/platform/linux-generic/include/odp_buffer_inlines.h 
b/platform/linux-generic/include/odp_buffer_inlines.h
index 2f5eb88..f8688f6 100644
--- a/platform/linux-generic/include/odp_buffer_inlines.h
+++ b/platform/linux-generic/include/odp_buffer_inlines.h
@@ -31,8 +31,6 @@ static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t 
*hdr)
return hdr->handle.handle;
 }
 
-odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf);
-
 static inline uint32_t pool_id_from_buf(odp_buffer_t buf)
 {
odp_buffer_bits_t handle;
diff --git a/platform/linux-generic/include/odp_packet_internal.h 
b/platform/linux-generic/include/odp_packet_internal.h
index 2cad71f..0cdd5ca 100644
--- a/platform/linux-generic/include/odp_packet_internal.h
+++ b/platform/linux-generic/include/odp_packet_internal.h
@@ -199,7 +199,7 @@ typedef struct {
  */
 static inline odp_packet_hdr_t *odp_packet_hdr(odp_packet_t pkt)
 {
-   return (odp_packet_hdr_t *)odp_buf_to_hdr((odp_buffer_t)pkt);
+   return (odp_packet_hdr_t *)buf_hdl_to_hdr((odp_buffer_t)pkt);
 }
 
 static inline void copy_packet_parser_metadata(odp_packet_hdr_t *src_hdr,
diff --git a/platform/linux-generic/include/odp_pool_internal.h 
b/platform/linux-generic/include/odp_pool_internal.h
index 278c553..f7c315c 100644
--- a/platform/linux-generic/include/odp_pool_internal.h
+++ b/platform/linux-generic/include/odp_pool_internal.h
@@ -73,23 +73,45 @@ typedef struct pool_t {
 
 } pool_t;
 
-pool_t *pool_entry(uint32_t pool_idx);
+typedef struct pool_table_t {
+   pool_tpool[ODP_CONFIG_POOLS];
+   odp_shm_t shm;
+} pool_table_t;
 
-static inline pool_t *odp_pool_to_entry(odp_pool_t pool_hdl)
+extern pool_table_t *pool_tbl;
+
+static inline pool_t *pool_entry(uint32_t pool_idx)
 {
-   return pool_entry(_odp_typeval(pool_hdl));
+   return &pool_tbl->pool[pool_idx];
 }
 
-static inline uint32_t odp_buffer_pool_headroom(odp_pool_t pool)
+static inline pool_t *pool_entry_from_hdl(odp_pool_t pool_hdl)
 {
-   return odp_pool_to_entry(pool)->headroom;
+   return &pool_tbl->pool[_odp_typeval(pool_hdl)];
 }
 
-static inline uint32_t odp_buffer_pool_tailroom(odp_pool_t pool)
+static inline odp_buffer_hdr_t *buf_hdl_to_hdr(odp_buffer_t buf)
 {
-   return odp_pool_to_entry(pool)->tailroom;
+   odp_buffer_bits_t handle;
+   uint32_t pool_id, index, block_offset;
+   pool_t *pool;
+   odp_buffer_hdr_t *buf_hdr;
+
+   handle.handle = buf;
+   pool_id   = handle.pool_id;
+   index = handle.index;
+   pool  = pool_entry(pool_id);
+   block_offset  = index * pool->block_size;
+
+   /* clang requires cast to uintptr_t */
+   buf_hdr = (odp_buffer_hdr_t *)(uintptr_t)&pool->base_addr[block_offset];
+
+   return buf_hdr;
 }
 
+uint32_t pool_headroom(odp_pool_t pool);
+uint32_t pool_tailroom(odp_pool_t pool);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/platform/linux-generic/odp_buffer.c 
b/platform/linux-generic/odp_buffer.c
index 0ddaf95..eed15c0 100644
--- a/platform/linux-generic/odp_buffer.c
+++ b/platform/linux-generic/odp_buffer.c
@@ -26,14 +26,14 @@ odp_event_t odp_buffer_to_event(odp_buffer_t buf)
 
 void *odp_buffer_addr(odp_buffer_t buf)
 {
-   odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf);
+   odp_buffer_hdr_t *hdr = buf_hdl_to_hdr(buf);
 
return hdr->addr[0];
 }
 
 uint32_t odp_buffer_size(odp_buffer_t buf)
 {
-   odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf);
+   odp_buffer_hdr_t 

[lng-odp] [API-NEXT PATCH v4 17/23] api: packet: added limits for packet len on alloc

2016-11-21 Thread Petri Savolainen
There's no use case for an application to allocate zero-length
packets. An application should always have some knowledge about
the new packet's data length before allocation. Implementations
are also more efficient when a check for zero length is avoided.

Also added a pool parameter to specify the maximum packet length
to be allocated from the pool. Implementations may use this
information to optimize e.g. memory usage. An application must
not exceed the max_len parameter value in alloc calls. Pool
capabilities already define max_len.

Signed-off-by: Petri Savolainen 
---
 include/odp/api/spec/packet.h | 9 +
 include/odp/api/spec/pool.h   | 6 ++
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/include/odp/api/spec/packet.h b/include/odp/api/spec/packet.h
index 4a14f2d..faf62e2 100644
--- a/include/odp/api/spec/packet.h
+++ b/include/odp/api/spec/packet.h
@@ -82,13 +82,14 @@ extern "C" {
  * Allocate a packet from a packet pool
  *
  * Allocates a packet of the requested length from the specified packet pool.
- * Pool must have been created with ODP_POOL_PACKET type. The
+ * The pool must have been created with ODP_POOL_PACKET type. The
  * packet is initialized with data pointers and lengths set according to the
  * specified len, and the default headroom and tailroom length settings. All
- * other packet metadata are set to their default values.
+ * other packet metadata are set to their default values. Packet length must
+ * be greater than zero and not exceed packet pool parameter 'max_len' value.
  *
  * @param pool  Pool handle
- * @param len   Packet data length
+ * @param len   Packet data length (1 ... pool max_len)
  *
  * @return Handle of allocated packet
  * @retval ODP_PACKET_INVALID  Packet could not be allocated
@@ -105,7 +106,7 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool, uint32_t 
len);
  * packets from a pool.
  *
  * @param pool  Pool handle
- * @param len   Packet data length
+ * @param len   Packet data length (1 ... pool max_len)
  * @param[out] pkt  Array of packet handles for output
  * @param num   Maximum number of packets to allocate
  *
diff --git a/include/odp/api/spec/pool.h b/include/odp/api/spec/pool.h
index a1331e3..041f4af 100644
--- a/include/odp/api/spec/pool.h
+++ b/include/odp/api/spec/pool.h
@@ -192,6 +192,12 @@ typedef struct odp_pool_param_t {
pkt.max_len. Use 0 for default. */
uint32_t len;
 
+   /** Maximum packet length that will be allocated from
+   the pool. The maximum value is defined by pool
+   capability pkt.max_len. Use 0 for default (the
+   pool maximum). */
+   uint32_t max_len;
+
/** Minimum number of packet data bytes that are stored
in the first segment of a packet. The maximum value
is defined by pool capability pkt.max_seg_len.
-- 
2.8.1
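
The new parameter pairs naturally with the existing capability query. A
brief sketch of creating a pool and allocating within the documented
limits (pool name and sizes are arbitrary):

#include <odp_api.h>

#define POOL_MAX_LEN 9000   /* largest packet this pool must serve */

odp_pool_t create_pkt_pool(void)
{
	odp_pool_capability_t capa;
	odp_pool_param_t params;

	if (odp_pool_capability(&capa))
		return ODP_POOL_INVALID;

	if (POOL_MAX_LEN > capa.pkt.max_len)
		return ODP_POOL_INVALID;   /* stay within capability */

	odp_pool_param_init(&params);
	params.type        = ODP_POOL_PACKET;
	params.pkt.num     = 64;
	params.pkt.len     = 1518;
	params.pkt.max_len = POOL_MAX_LEN;   /* new parameter */

	return odp_pool_create("pkt_pool", &params);
}

/* Alloc length must now be 1 ... max_len; zero is no longer valid. */
odp_packet_t alloc_frame(odp_pool_t pool, uint32_t len)
{
	if (len == 0 || len > POOL_MAX_LEN)
		return ODP_PACKET_INVALID;

	return odp_packet_alloc(pool, len);
}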



[lng-odp] [API-NEXT PATCH v4 21/23] validation: pktio: honour pool capability limits

2016-11-21 Thread Petri Savolainen
Check pool capability limits for packet length and segment
length, and do not exceed those.

Signed-off-by: Petri Savolainen 
---
 test/common_plat/validation/api/pktio/pktio.c | 26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/test/common_plat/validation/api/pktio/pktio.c 
b/test/common_plat/validation/api/pktio/pktio.c
index befaa7e..97db626 100644
--- a/test/common_plat/validation/api/pktio/pktio.c
+++ b/test/common_plat/validation/api/pktio/pktio.c
@@ -120,8 +120,12 @@ static inline void _pktio_wait_linkup(odp_pktio_t pktio)
}
 }
 
-static void set_pool_len(odp_pool_param_t *params)
+static void set_pool_len(odp_pool_param_t *params, odp_pool_capability_t *capa)
 {
+   uint32_t seg_len;
+
+   seg_len = capa->pkt.max_seg_len;
+
switch (pool_segmentation) {
case PKT_POOL_SEGMENTED:
/* Force segment to minimum size */
@@ -130,7 +134,7 @@ static void set_pool_len(odp_pool_param_t *params)
break;
case PKT_POOL_UNSEGMENTED:
default:
-   params->pkt.seg_len = PKT_BUF_SIZE;
+   params->pkt.seg_len = seg_len;
params->pkt.len = PKT_BUF_SIZE;
break;
}
@@ -305,13 +309,17 @@ static int pktio_fixup_checksums(odp_packet_t pkt)
 static int default_pool_create(void)
 {
odp_pool_param_t params;
+   odp_pool_capability_t pool_capa;
char pool_name[ODP_POOL_NAME_LEN];
 
+   if (odp_pool_capability(&pool_capa) != 0)
+   return -1;
+
if (default_pkt_pool != ODP_POOL_INVALID)
return -1;
 
	odp_pool_param_init(&params);
-   set_pool_len(&params);
+   set_pool_len(&params, &pool_capa);
params.pkt.num = PKT_BUF_NUM;
params.type= ODP_POOL_PACKET;
 
@@ -594,6 +602,7 @@ static void pktio_txrx_multi(pktio_info_t *pktio_a, 
pktio_info_t *pktio_b,
int i, ret, num_rx;
 
if (packet_len == USE_MTU) {
+   odp_pool_capability_t pool_capa;
uint32_t mtu;
 
mtu = odp_pktio_mtu(pktio_a->id);
@@ -603,6 +612,11 @@ static void pktio_txrx_multi(pktio_info_t *pktio_a, 
pktio_info_t *pktio_b,
packet_len = mtu;
if (packet_len > PKT_LEN_MAX)
packet_len = PKT_LEN_MAX;
+
+   CU_ASSERT_FATAL(odp_pool_capability(&pool_capa) == 0);
+
+   if (packet_len > pool_capa.pkt.max_len)
+   packet_len = pool_capa.pkt.max_len;
}
 
/* generate test packets to send */
@@ -2004,9 +2018,13 @@ static int create_pool(const char *iface, int num)
 {
char pool_name[ODP_POOL_NAME_LEN];
odp_pool_param_t params;
+   odp_pool_capability_t pool_capa;
+
+   if (odp_pool_capability(&pool_capa) != 0)
+   return -1;
 
	odp_pool_param_init(&params);
-   set_pool_len(&params);
+   set_pool_len(&params, &pool_capa);
params.pkt.num = PKT_BUF_NUM;
params.type= ODP_POOL_PACKET;
 
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 00/23] pool optimization

2016-11-21 Thread Petri Savolainen
Pool performance is optimized by using a ring as the global buffer storage.
The IPC build is disabled, since it needs large modifications due to its
dependency on pool internals. The old pool implementation was based on locks
and a linked list of buffer headers. The new implementation maintains a ring
of buffer handles, which enables fast, burst-based allocs and frees. A ring
also scales better with the number of CPUs than a list (enq and deq
operations update opposite ends of the pool).

L2fwd link rate (%), 2 x 40GE, 64 byte packets

                direct-                 parallel-               atomic-
cpus    orig    direct  diff    orig    parall  diff    orig    atomic  diff
1       7 %     8 %     1 %     6 %     6 %     2 %     5.4 %   5.6 %   4 %
2       14 %    15 %    7 %     9 %     9 %     5 %     8 %     9 %     8 %
4       28 %    30 %    6 %     13 %    14 %    13 %    12 %    15 %    19 %
6       42 %    44 %    6 %     16 %    19 %    19 %    8 %     20 %    150 %
8       46 %    59 %    28 %    19 %    23 %    26 %    18 %    24 %    34 %
10      55 %    57 %    3 %     20 %    27 %    37 %    8 %     28 %    264 %
12      56 %    56 %    -1 %    22 %    31 %    43 %    7 %     32 %    357 %

The maximum packet rate of the NICs is reached with 10-12 CPUs in direct
mode. Otherwise, all cases improved. Scheduler-driven cases especially
suffered from poor pool scalability.

v4:
* rebased
* fixed bugs in pktio socket and crypto/pktio validation tests (patches 19 - 21)
* added pool parameter check to catch applications which do not honour pool
  capabilities (patch 22)

v3:
* rebased
* ipc disabled with #ifdef
* added support for multi-segment packets
* API: added explicit limits for packet length in alloc calls
* Corrected validation test and example application bugs found during
  segmentation implementation
* Reviewed-and-tested-by: Bill Fischofer 

v2:
* rebased to api-next branch
* added a comment that ring size must be larger than number of items in it
* fixed clang build issue
* added parens in align macro

v1:
* Reviewed-by: Brian Brooks 

Petri Savolainen (23):
  linux-gen: ipc: disable build of ipc pktio
  linux-gen: pktio: do not free zero packets
  linux-gen: ring: created common ring implementation
  linux-gen: align: added round up power of two
  linux-gen: pool: reimplement pool with ring
  linux-gen: ring: added multi enq and deq
  linux-gen: pool: use ring multi enq and deq operations
  linux-gen: pool: optimize buffer alloc
  linux-gen: pool: clean up pool inlines functions
  linux-gen: pool: ptr instead of hdl in buffer_alloc_multi
  test: validation: buf: test alignment
  test: performance: crypto: use capability to select max packet
  test: correctly initialize pool parameters
  test: validation: packet: fix bugs in tailroom and concat tests
  linux-gen: packet: added support for segmented packets
  test: validation: packet: improved multi-segment alloc test
  api: packet: added limits for packet len on alloc
  linux-gen: packet: remove zero len support from alloc
  linux-gen: socket: use trunc instead of pull tail
  validation: crypto: honour pool capability limits
  validation: pktio: honour pool capability limits
  linux-gen: pool: check pool parameters
  linux-gen: packet: enable multi-segment packets

 example/generator/odp_generator.c  |2 +-
 include/odp/api/spec/packet.h  |9 +-
 include/odp/api/spec/pool.h|6 +
 platform/linux-generic/Makefile.am |1 +
 .../include/odp/api/plat/packet_types.h|6 +-
 .../include/odp/api/plat/pool_types.h  |6 -
 .../linux-generic/include/odp_align_internal.h |   34 +-
 .../linux-generic/include/odp_buffer_inlines.h |  167 +--
 .../linux-generic/include/odp_buffer_internal.h|  120 +-
 .../include/odp_classification_datamodel.h |2 +-
 .../linux-generic/include/odp_config_internal.h|   55 +-
 .../linux-generic/include/odp_packet_internal.h|   87 +-
 platform/linux-generic/include/odp_pool_internal.h |  289 +---
 platform/linux-generic/include/odp_ring_internal.h |  176 +++
 .../linux-generic/include/odp_timer_internal.h |4 -
 platform/linux-generic/odp_buffer.c|   22 +-
 platform/linux-generic/odp_classification.c|   25 +-
 platform/linux-generic/odp_crypto.c|   12 +-
 platform/linux-generic/odp_packet.c|  717 --
 platform/linux-generic/odp_packet_io.c |2 +-
 platform/linux-generic/odp_pool.c  | 1487 
 platform/linux-generic/odp_queue.c |4 +-
 platform/linux-generic/odp_schedule.c  |  102 +-
 platform/linux-generic/odp_schedule_ordered.c  |4 +-
 platform/linux-generic/odp_timer.c |3 +-
 platform/linux-generic/pktio/dpdk.c|   10 +-
 platform/linux-generic/pktio/ipc.c   

[lng-odp] [API-NEXT PATCH v4 11/23] test: validation: buf: test alignment

2016-11-21 Thread Petri Savolainen
Added checks for correct alignment. Also updated tests to call
odp_pool_param_init() for parameter initialization.

Signed-off-by: Petri Savolainen 
---
 test/common_plat/validation/api/buffer/buffer.c | 113 +---
 1 file changed, 63 insertions(+), 50 deletions(-)

diff --git a/test/common_plat/validation/api/buffer/buffer.c 
b/test/common_plat/validation/api/buffer/buffer.c
index d26d5e8..7c723d4 100644
--- a/test/common_plat/validation/api/buffer/buffer.c
+++ b/test/common_plat/validation/api/buffer/buffer.c
@@ -8,20 +8,21 @@
 #include "odp_cunit_common.h"
 #include "buffer.h"
 
+#define BUF_ALIGN  ODP_CACHE_LINE_SIZE
+#define BUF_SIZE   1500
+
 static odp_pool_t raw_pool;
 static odp_buffer_t raw_buffer = ODP_BUFFER_INVALID;
-static const size_t raw_buffer_size = 1500;
 
 int buffer_suite_init(void)
 {
-   odp_pool_param_t params = {
-   .buf = {
-   .size  = raw_buffer_size,
-   .align = ODP_CACHE_LINE_SIZE,
-   .num   = 100,
-   },
-   .type  = ODP_POOL_BUFFER,
-   };
+   odp_pool_param_t params;
+
+   odp_pool_param_init(&params);
+   params.type  = ODP_POOL_BUFFER;
+   params.buf.size  = BUF_SIZE;
+   params.buf.align = BUF_ALIGN;
+   params.buf.num   = 100;
 
	raw_pool = odp_pool_create("raw_pool", &params);
if (raw_pool == ODP_POOL_INVALID)
@@ -44,25 +45,25 @@ void buffer_test_pool_alloc(void)
 {
odp_pool_t pool;
const int num = 3;
-   const size_t size = 1500;
odp_buffer_t buffer[num];
odp_event_t ev;
int index;
-   char wrong_type = 0, wrong_size = 0;
-   odp_pool_param_t params = {
-   .buf = {
-   .size  = size,
-   .align = ODP_CACHE_LINE_SIZE,
-   .num   = num,
-   },
-   .type  = ODP_POOL_BUFFER,
-   };
+   char wrong_type = 0, wrong_size = 0, wrong_align = 0;
+   odp_pool_param_t params;
+
+   odp_pool_param_init(&params);
+   params.type  = ODP_POOL_BUFFER;
+   params.buf.size  = BUF_SIZE;
+   params.buf.align = BUF_ALIGN;
+   params.buf.num   = num;
 
pool = odp_pool_create("buffer_pool_alloc", );
odp_pool_print(pool);
 
/* Try to allocate num items from the pool */
for (index = 0; index < num; index++) {
+   uintptr_t addr;
+
buffer[index] = odp_buffer_alloc(pool);
 
if (buffer[index] == ODP_BUFFER_INVALID)
@@ -71,9 +72,15 @@ void buffer_test_pool_alloc(void)
ev = odp_buffer_to_event(buffer[index]);
if (odp_event_type(ev) != ODP_EVENT_BUFFER)
wrong_type = 1;
-   if (odp_buffer_size(buffer[index]) < size)
+   if (odp_buffer_size(buffer[index]) < BUF_SIZE)
wrong_size = 1;
-   if (wrong_type || wrong_size)
+
+   addr = (uintptr_t)odp_buffer_addr(buffer[index]);
+
+   if ((addr % BUF_ALIGN) != 0)
+   wrong_align = 1;
+
+   if (wrong_type || wrong_size || wrong_align)
odp_buffer_print(buffer[index]);
}
 
@@ -85,6 +92,7 @@ void buffer_test_pool_alloc(void)
/* Check that the pool had correct buffers */
CU_ASSERT(wrong_type == 0);
CU_ASSERT(wrong_size == 0);
+   CU_ASSERT(wrong_align == 0);
 
for (; index >= 0; index--)
odp_buffer_free(buffer[index]);
@@ -112,19 +120,17 @@ void buffer_test_pool_alloc_multi(void)
 {
odp_pool_t pool;
const int num = 3;
-   const size_t size = 1500;
odp_buffer_t buffer[num + 1];
odp_event_t ev;
int index;
-   char wrong_type = 0, wrong_size = 0;
-   odp_pool_param_t params = {
-   .buf = {
-   .size  = size,
-   .align = ODP_CACHE_LINE_SIZE,
-   .num   = num,
-   },
-   .type  = ODP_POOL_BUFFER,
-   };
+   char wrong_type = 0, wrong_size = 0, wrong_align = 0;
+   odp_pool_param_t params;
+
+   odp_pool_param_init(&params);
+   params.type  = ODP_POOL_BUFFER;
+   params.buf.size  = BUF_SIZE;
+   params.buf.align = BUF_ALIGN;
+   params.buf.num   = num;
 
pool = odp_pool_create("buffer_pool_alloc_multi", );
odp_pool_print(pool);
@@ -133,15 +139,23 @@ void buffer_test_pool_alloc_multi(void)
CU_ASSERT_FATAL(buffer_alloc_multi(pool, buffer, num + 1) == num);
 
for (index = 0; index < num; index++) {
+   uintptr_t addr;
+
if (buffer[index] == ODP_BUFFER_INVALID)
break;
 
ev = 

[lng-odp] [API-NEXT PATCH v4 22/23] linux-gen: pool: check pool parameters

2016-11-21 Thread Petri Savolainen
Check pool parameters against maximum capabilities. Also define
a limit for the maximum buffer and user area sizes. 10 MB was chosen
as the limit since it is small enough to be available on all Linux
systems and more than enough for normal pool usage.
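
As a sketch of the mirror-image check on the application side (the helper
below is invented for illustration, not part of the patch), a caller can
validate its request against odp_pool_capability() before attempting
creation:

#include <odp_api.h>

static odp_pool_t create_buf_pool_checked(const char *name, uint32_t num,
					  uint32_t size)
{
	odp_pool_capability_t capa;
	odp_pool_param_t params;

	if (odp_pool_capability(&capa))
		return ODP_POOL_INVALID;

	/* Reject requests beyond the implementation limits up front.
	 * With this patch, capa.buf.max_size reports 10 MB. */
	if (num > capa.buf.max_num || size > capa.buf.max_size)
		return ODP_POOL_INVALID;

	odp_pool_param_init(&params);
	params.type     = ODP_POOL_BUFFER;
	params.buf.num  = num;
	params.buf.size = size;

	return odp_pool_create(name, &params);
}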

Signed-off-by: Petri Savolainen 
---
 platform/linux-generic/odp_pool.c | 75 +--
 1 file changed, 73 insertions(+), 2 deletions(-)

diff --git a/platform/linux-generic/odp_pool.c 
b/platform/linux-generic/odp_pool.c
index 7c462e5..4be3827 100644
--- a/platform/linux-generic/odp_pool.c
+++ b/platform/linux-generic/odp_pool.c
@@ -29,6 +29,9 @@
 #define CACHE_BURST32
 #define RING_SIZE_MIN  (2 * CACHE_BURST)
 
+/* Define a practical limit for contiguous memory allocations */
+#define MAX_SIZE   (10 * 1024 * 1024)
+
 ODP_STATIC_ASSERT(CONFIG_POOL_CACHE_SIZE > (2 * CACHE_BURST),
  "cache_burst_size_too_large_compared_to_cache_size");
 
@@ -426,6 +429,71 @@ error:
return ODP_POOL_INVALID;
 }
 
+static int check_params(odp_pool_param_t *params)
+{
+   odp_pool_capability_t capa;
+
+   odp_pool_capability(&capa);
+
+   switch (params->type) {
+   case ODP_POOL_BUFFER:
+   if (params->buf.num > capa.buf.max_num) {
+   printf("buf.num too large %u\n", params->buf.num);
+   return -1;
+   }
+
+   if (params->buf.size > capa.buf.max_size) {
+   printf("buf.size too large %u\n", params->buf.size);
+   return -1;
+   }
+
+   if (params->buf.align > capa.buf.max_align) {
+   printf("buf.align too large %u\n", params->buf.align);
+   return -1;
+   }
+
+   break;
+
+   case ODP_POOL_PACKET:
+   if (params->pkt.len > capa.pkt.max_len) {
+   printf("pkt.len too large %u\n", params->pkt.len);
+   return -1;
+   }
+
+   if (params->pkt.max_len > capa.pkt.max_len) {
+   printf("pkt.max_len too large %u\n",
+  params->pkt.max_len);
+   return -1;
+   }
+
+   if (params->pkt.seg_len > capa.pkt.max_seg_len) {
+   printf("pkt.seg_len too large %u\n",
+  params->pkt.seg_len);
+   return -1;
+   }
+
+   if (params->pkt.uarea_size > capa.pkt.max_uarea_size) {
+   printf("pkt.uarea_size too large %u\n",
+  params->pkt.uarea_size);
+   return -1;
+   }
+
+   break;
+
+   case ODP_POOL_TIMEOUT:
+   if (params->tmo.num > capa.tmo.max_num) {
+   printf("tmo.num too large %u\n", params->tmo.num);
+   return -1;
+   }
+   break;
+
+   default:
+   printf("bad pool type %i\n", params->type);
+   return -1;
+   }
+
+   return 0;
+}
 
 odp_pool_t odp_pool_create(const char *name, odp_pool_param_t *params)
 {
@@ -433,6 +501,9 @@ odp_pool_t odp_pool_create(const char *name, 
odp_pool_param_t *params)
if (params && (params->type == ODP_POOL_PACKET))
return pool_create(name, params, ODP_SHM_PROC);
 #endif
+   if (check_params(params))
+   return ODP_POOL_INVALID;
+
return pool_create(name, params, 0);
 }
 
@@ -718,7 +789,7 @@ int odp_pool_capability(odp_pool_capability_t *capa)
/* Buffer pools */
capa->buf.max_pools = ODP_CONFIG_POOLS;
capa->buf.max_align = ODP_CONFIG_BUFFER_ALIGN_MAX;
-   capa->buf.max_size  = 0;
+   capa->buf.max_size  = MAX_SIZE;
capa->buf.max_num   = CONFIG_POOL_MAX_NUM;
 
/* Packet pools */
@@ -730,7 +801,7 @@ int odp_pool_capability(odp_pool_capability_t *capa)
capa->pkt.max_segs_per_pkt = CONFIG_PACKET_MAX_SEGS;
capa->pkt.min_seg_len  = max_seg_len;
capa->pkt.max_seg_len  = max_seg_len;
-   capa->pkt.max_uarea_size   = 0;
+   capa->pkt.max_uarea_size   = MAX_SIZE;
 
/* Timeout pools */
capa->tmo.max_pools = ODP_CONFIG_POOLS;
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 06/23] linux-gen: ring: added multi enq and deq

2016-11-21 Thread Petri Savolainen
Added multi-data versions of ring enqueue and dequeue operations.
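
A sketch of the intended use (ring_t and these calls come from the
internal odp_ring_internal.h header; the ring size is an assumption and
must be a power of two):

#include <odp_ring_internal.h>

#define RING_SIZE 1024	/* assumed; must be a power of two */
#define BURST	  32

/* Move up to BURST entries from src to dst with two ring operations
 * instead of up to 2 * BURST single-element ones. */
static void ring_burst_move(ring_t *src, ring_t *dst)
{
	uint32_t data[BURST];
	uint32_t mask = RING_SIZE - 1;
	uint32_t num;

	num = ring_deq_multi(src, mask, data, BURST);

	if (num)
		ring_enq_multi(dst, mask, data, num);
}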

Signed-off-by: Petri Savolainen 
---
 platform/linux-generic/include/odp_ring_internal.h | 65 ++
 1 file changed, 65 insertions(+)

diff --git a/platform/linux-generic/include/odp_ring_internal.h 
b/platform/linux-generic/include/odp_ring_internal.h
index 6a6291a..55fedeb 100644
--- a/platform/linux-generic/include/odp_ring_internal.h
+++ b/platform/linux-generic/include/odp_ring_internal.h
@@ -80,6 +80,45 @@ static inline uint32_t ring_deq(ring_t *ring, uint32_t mask)
return data;
 }
 
+/* Dequeue multiple data from the ring head. Num is smaller than ring size. */
+static inline uint32_t ring_deq_multi(ring_t *ring, uint32_t mask,
+ uint32_t data[], uint32_t num)
+{
+   uint32_t head, tail, new_head, i;
+
+   head = odp_atomic_load_u32(&ring->r_head);
+
+   /* Move reader head. This thread owns data at the new head. */
+   do {
+   tail = odp_atomic_load_u32(&ring->w_tail);
+
+   /* Ring is empty */
+   if (head == tail)
+   return 0;
+
+   /* Try to take all available */
+   if ((tail - head) < num)
+   num = tail - head;
+
+   new_head = head + num;
+
+   } while (odp_unlikely(odp_atomic_cas_acq_u32(&ring->r_head, &head,
+ new_head) == 0));
+
+   /* Read queue index */
+   for (i = 0; i < num; i++)
+   data[i] = ring->data[(head + 1 + i) & mask];
+
+   /* Wait until other readers have updated the tail */
+   while (odp_unlikely(odp_atomic_load_acq_u32(&ring->r_tail) != head))
+   odp_cpu_pause();
+
+   /* Now update the reader tail */
+   odp_atomic_store_rel_u32(&ring->r_tail, new_head);
+
+   return num;
+}
+
 /* Enqueue data into the ring tail */
 static inline void ring_enq(ring_t *ring, uint32_t mask, uint32_t data)
 {
@@ -104,6 +143,32 @@ static inline void ring_enq(ring_t *ring, uint32_t mask, 
uint32_t data)
odp_atomic_store_rel_u32(&ring->w_tail, new_head);
 }
 
+/* Enqueue multiple data into the ring tail. Num is smaller than ring size. */
+static inline void ring_enq_multi(ring_t *ring, uint32_t mask, uint32_t data[],
+ uint32_t num)
+{
+   uint32_t old_head, new_head, i;
+
+   /* Reserve a slot in the ring for writing */
+   old_head = odp_atomic_fetch_add_u32(&ring->w_head, num);
+   new_head = old_head + 1;
+
+   /* Ring is full. Wait for the last reader to finish. */
+   while (odp_unlikely(odp_atomic_load_acq_u32(&ring->r_tail) == new_head))
+   odp_cpu_pause();
+
+   /* Write data */
+   for (i = 0; i < num; i++)
+   ring->data[(new_head + i) & mask] = data[i];
+
+   /* Wait until other writers have updated the tail */
+   while (odp_unlikely(odp_atomic_load_acq_u32(&ring->w_tail) != old_head))
+   odp_cpu_pause();
+
+   /* Now update the writer tail */
+   odp_atomic_store_rel_u32(&ring->w_tail, old_head + num);
+}
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 07/23] linux-gen: pool: use ring multi enq and deq operations

2016-11-21 Thread Petri Savolainen
Use multi enq and deq operations to optimize global pool
access performance. Temporary uint32_t arrays are needed
since handles are pointer-sized variables.

Signed-off-by: Petri Savolainen 
---
 platform/linux-generic/odp_pool.c | 32 
 1 file changed, 20 insertions(+), 12 deletions(-)

diff --git a/platform/linux-generic/odp_pool.c 
b/platform/linux-generic/odp_pool.c
index 1286753..a2e5d54 100644
--- a/platform/linux-generic/odp_pool.c
+++ b/platform/linux-generic/odp_pool.c
@@ -586,15 +586,16 @@ int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t 
buf[], int max_num)
return max_num;
}
 
-   for (i = 0; i < max_num; i++) {
-   uint32_t data;
+   {
+   /* Temporary copy needed since odp_buffer_t is uintptr_t
+* and not uint32_t. */
+   int num;
+   uint32_t data[max_num];
 
-   data = ring_deq(ring, mask);
+   num = ring_deq_multi(ring, mask, data, max_num);
 
-   if (data == RING_EMPTY)
-   break;
-
-   buf[i] = (odp_buffer_t)(uintptr_t)data;
+   for (i = 0; i < num; i++)
+   buf[i] = (odp_buffer_t)(uintptr_t)data[i];
}
 
return i;
@@ -629,17 +630,24 @@ static inline void buffer_free_to_pool(uint32_t pool_id,
cache_num = cache->num;
 
if (odp_unlikely((int)(CONFIG_POOL_CACHE_SIZE - cache_num) < num)) {
+   uint32_t index;
int burst = CACHE_BURST;
 
if (odp_unlikely(num > CACHE_BURST))
burst = num;
 
-   for (i = 0; i < burst; i++) {
-   uint32_t data, index;
+   {
+   /* Temporary copy needed since odp_buffer_t is
+* uintptr_t and not uint32_t. */
+   uint32_t data[burst];
+
+   index = cache_num - burst;
+
+   for (i = 0; i < burst; i++)
+   data[i] = (uint32_t)
+ (uintptr_t)cache->buf[index + i];
 
-   index = cache_num - burst + i;
-   data  = (uint32_t)(uintptr_t)cache->buf[index];
-   ring_enq(ring, mask, data);
+   ring_enq_multi(ring, mask, data, burst);
}
 
cache_num -= burst;
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 18/23] linux-gen: packet: remove zero len support from alloc

2016-11-21 Thread Petri Savolainen
Remove support for zero-length allocations, which were never
required by the API specification or tested by the validation
suite.

Signed-off-by: Petri Savolainen 
---
 platform/linux-generic/odp_packet.c | 28 
 1 file changed, 28 deletions(-)

diff --git a/platform/linux-generic/odp_packet.c 
b/platform/linux-generic/odp_packet.c
index a5c6ff4..0d3fd05 100644
--- a/platform/linux-generic/odp_packet.c
+++ b/platform/linux-generic/odp_packet.c
@@ -478,7 +478,6 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t 
len)
pool_t *pool = pool_entry_from_hdl(pool_hdl);
odp_packet_t pkt;
int num, num_seg;
-   int zero_len = 0;
 
if (odp_unlikely(pool->params.type != ODP_POOL_PACKET)) {
__odp_errno = EINVAL;
@@ -488,23 +487,12 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, 
uint32_t len)
if (odp_unlikely(len > pool->max_len))
return ODP_PACKET_INVALID;
 
-   if (odp_unlikely(len == 0)) {
-   len = pool->data_size;
-   zero_len = 1;
-   }
-
num_seg = num_segments(len);
num = packet_alloc(pool, len, 1, num_seg, &pkt, 0);
 
if (odp_unlikely(num == 0))
return ODP_PACKET_INVALID;
 
-   if (odp_unlikely(zero_len)) {
-   odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
-
-   pull_tail(pkt_hdr, len);
-   }
-
return pkt;
 }
 
@@ -513,7 +501,6 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t 
len,
 {
pool_t *pool = pool_entry_from_hdl(pool_hdl);
int num, num_seg;
-   int zero_len = 0;
 
if (odp_unlikely(pool->params.type != ODP_POOL_PACKET)) {
__odp_errno = EINVAL;
@@ -523,24 +510,9 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t 
len,
if (odp_unlikely(len > pool->max_len))
return -1;
 
-   if (odp_unlikely(len == 0)) {
-   len = pool->data_size;
-   zero_len = 1;
-   }
-
num_seg = num_segments(len);
num = packet_alloc(pool, len, max_num, num_seg, pkt, 0);
 
-   if (odp_unlikely(zero_len)) {
-   int i;
-
-   for (i = 0; i < num; i++) {
-   odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt[i]);
-
-   pull_tail(pkt_hdr, len);
-   }
-   }
-
return num;
 }
 
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 20/23] validation: crypto: honour pool capability limits

2016-11-21 Thread Petri Savolainen
Reduce the oversized packet length and segment length requirements
from 32 kB to 1 kB (only tens of bytes are actually used). Also check
that the requested lengths do not exceed the pool capabilities.

Signed-off-by: Petri Savolainen 
---
 test/common_plat/validation/api/crypto/crypto.c | 24 ++--
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/test/common_plat/validation/api/crypto/crypto.c 
b/test/common_plat/validation/api/crypto/crypto.c
index 9c9a00d..2089016 100644
--- a/test/common_plat/validation/api/crypto/crypto.c
+++ b/test/common_plat/validation/api/crypto/crypto.c
@@ -9,11 +9,8 @@
 #include "odp_crypto_test_inp.h"
 #include "crypto.h"
 
-#define SHM_PKT_POOL_SIZE  (512 * 2048 * 2)
-#define SHM_PKT_POOL_BUF_SIZE  (1024 * 32)
-
-#define SHM_COMPL_POOL_SIZE(128 * 1024)
-#define SHM_COMPL_POOL_BUF_SIZE128
+#define PKT_POOL_NUM  64
+#define PKT_POOL_LEN  (1 * 1024)
 
 odp_suiteinfo_t crypto_suites[] = {
{ODP_CRYPTO_SYNC_INP, crypto_suite_sync_init, NULL, crypto_suite},
@@ -44,13 +41,20 @@ int crypto_init(odp_instance_t *inst)
}
 
odp_pool_param_init(&params);
-   params.pkt.seg_len = SHM_PKT_POOL_BUF_SIZE;
-   params.pkt.len = SHM_PKT_POOL_BUF_SIZE;
-   params.pkt.num = SHM_PKT_POOL_SIZE / SHM_PKT_POOL_BUF_SIZE;
+   params.pkt.seg_len = PKT_POOL_LEN;
+   params.pkt.len = PKT_POOL_LEN;
+   params.pkt.num = PKT_POOL_NUM;
params.type= ODP_POOL_PACKET;
 
-   if (SHM_PKT_POOL_BUF_SIZE > pool_capa.pkt.max_len)
-   params.pkt.len = pool_capa.pkt.max_len;
+   if (PKT_POOL_LEN > pool_capa.pkt.max_seg_len) {
+   fprintf(stderr, "Warning: small packet segment length\n");
+   params.pkt.seg_len = pool_capa.pkt.max_seg_len;
+   }
+
+   if (PKT_POOL_LEN > pool_capa.pkt.max_len) {
+   fprintf(stderr, "Pool max packet length too small\n");
+   return -1;
+   }
 
pool = odp_pool_create("packet_pool", );
 
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 23/23] linux-gen: packet: enable multi-segment packets

2016-11-21 Thread Petri Savolainen
Enable segmentation support via the CONFIG_PACKET_MAX_SEGS
configuration option.
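
For reference, a small sketch of what the new limits imply (constants
copied from the diff below; segs_needed() is an illustrative helper that
mirrors the intent of the internal num_segments()):

#include <stdint.h>

#define CONFIG_PACKET_MAX_SEGS 2
#define CONFIG_PACKET_SEG_SIZE (8 * 1024)

/* Ceiling division: segments needed for 'len' bytes of packet data */
static inline uint32_t segs_needed(uint32_t len, uint32_t seg_data_len)
{
	return (len + seg_data_len - 1) / seg_data_len;
}

/* With 8 kB segments a 10 kB packet now spans two segments, so the
 * multi-segment code paths actually get exercised. */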

Signed-off-by: Petri Savolainen 
---
 platform/linux-generic/include/odp_config_internal.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/platform/linux-generic/include/odp_config_internal.h 
b/platform/linux-generic/include/odp_config_internal.h
index 9a4e6eb..8818cda 100644
--- a/platform/linux-generic/include/odp_config_internal.h
+++ b/platform/linux-generic/include/odp_config_internal.h
@@ -70,12 +70,12 @@ extern "C" {
 /*
  * Maximum number of segments per packet
  */
-#define CONFIG_PACKET_MAX_SEGS 1
+#define CONFIG_PACKET_MAX_SEGS 2
 
 /*
  * Maximum packet segment size including head- and tailrooms
  */
-#define CONFIG_PACKET_SEG_SIZE (64 * 1024)
+#define CONFIG_PACKET_SEG_SIZE (8 * 1024)
 
 /* Maximum data length in a segment
  *
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 04/23] linux-gen: align: added round up power of two

2016-11-21 Thread Petri Savolainen
Added a macro to round up a value to the next power of two,
if it's not already a power of two. Also removed duplicated
code from the same file.
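
A quick check of the new macro with concrete values (standalone sketch;
the arithmetic assumes a 32-bit unsigned int):

#include <assert.h>

#define ODP_ROUNDUP_POWER_2(x) \
	(1 << (((int)(8 * sizeof(x))) - __builtin_clz((x) - 1)))

int main(void)
{
	unsigned int a = 5, b = 16;

	/* __builtin_clz(5 - 1) == 29, so the result is 1 << (32 - 29) */
	assert(ODP_ROUNDUP_POWER_2(a) == 8);
	/* An exact power of two maps to itself: clz(15) == 28, 1 << 4 */
	assert(ODP_ROUNDUP_POWER_2(b) == 16);
	/* x == 0 is undefined, as the header comment warns */
	return 0;
}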

Signed-off-by: Petri Savolainen 
---
 .../linux-generic/include/odp_align_internal.h | 34 +-
 1 file changed, 7 insertions(+), 27 deletions(-)

diff --git a/platform/linux-generic/include/odp_align_internal.h 
b/platform/linux-generic/include/odp_align_internal.h
index 9ccde53..d9cd30b 100644
--- a/platform/linux-generic/include/odp_align_internal.h
+++ b/platform/linux-generic/include/odp_align_internal.h
@@ -29,24 +29,18 @@ extern "C" {
 
 /**
  * @internal
- * Round up pointer 'x' to alignment 'align'
- */
-#define ODP_ALIGN_ROUNDUP_PTR(x, align)\
-   ((void *)ODP_ALIGN_ROUNDUP((uintptr_t)(x), (uintptr_t)(align)))
-
-/**
- * @internal
- * Round up pointer 'x' to cache line size alignment
+ * Round up 'x' to alignment 'align'
  */
-#define ODP_CACHE_LINE_SIZE_ROUNDUP_PTR(x)\
-   ((void *)ODP_CACHE_LINE_SIZE_ROUNDUP((uintptr_t)(x)))
+#define ODP_ALIGN_ROUNDUP(x, align)\
+   ((align) * (((x) + (align) - 1) / (align)))
 
 /**
  * @internal
- * Round up 'x' to alignment 'align'
+ * When 'x' is not already a power of two, round it up to the next
+ * power of two value. Zero is not supported as an input value.
  */
-#define ODP_ALIGN_ROUNDUP(x, align)\
-   ((align) * (((x) + align - 1) / (align)))
+#define ODP_ROUNDUP_POWER_2(x)\
+   (1 << (((int)(8 * sizeof(x))) - __builtin_clz((x) - 1)))
 
 /**
  * @internal
@@ -82,20 +76,6 @@ extern "C" {
 
 /**
  * @internal
- * Round down pointer 'x' to 'align' alignment, which is a power of two
- */
-#define ODP_ALIGN_ROUNDDOWN_PTR_POWER_2(x, align)\
-((void *)ODP_ALIGN_ROUNDDOWN_POWER_2((uintptr_t)(x), (uintptr_t)(align)))
-
-/**
- * @internal
- * Round down pointer 'x' to cache line size alignment
- */
-#define ODP_CACHE_LINE_SIZE_ROUNDDOWN_PTR(x)\
-   ((void *)ODP_CACHE_LINE_SIZE_ROUNDDOWN((uintptr_t)(x)))
-
-/**
- * @internal
  * Round down 'x' to 'align' alignment, which is a power of two
  */
 #define ODP_ALIGN_ROUNDDOWN_POWER_2(x, align)\
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 19/23] linux-gen: socket: use trunc instead of pull tail

2016-11-21 Thread Petri Savolainen
This is a bug correction for multi-segment packet handling. Packet
pull tail cannot decrement the packet length by more than the data
available in the last segment. Trunc tail must be used instead.
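
A sketch of the distinction with the public APIs (the helper is invented
for illustration): odp_packet_pull_tail() adjusts only the last segment
and returns NULL when asked for more than that segment holds, while
odp_packet_trunc_tail() may free whole segments and can change the
packet handle.

#include <odp_api.h>

/* Trim 'drop' bytes from the tail of a possibly segmented packet.
 * Frees the packet on failure, mirroring the receive-path usage. */
static int trim_tail(odp_packet_t *pkt, uint32_t drop)
{
	if (odp_packet_trunc_tail(pkt, drop, NULL, NULL) < 0) {
		odp_packet_free(*pkt);
		return -1;
	}

	return 0;
}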

Signed-off-by: Petri Savolainen 
---
 platform/linux-generic/pktio/socket.c | 25 +++--
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/platform/linux-generic/pktio/socket.c 
b/platform/linux-generic/pktio/socket.c
index 9fe4a7e..7d23968 100644
--- a/platform/linux-generic/pktio/socket.c
+++ b/platform/linux-generic/pktio/socket.c
@@ -674,6 +674,7 @@ static int sock_mmsg_recv(pktio_entry_t *pktio_entry, int 
index ODP_UNUSED,
if (cls_classify_packet(pktio_entry, base, pkt_len,
pkt_len, &pool, &parsed_hdr))
continue;
+
num = packet_alloc_multi(pool, pkt_len, &pkt, 1);
if (num != 1)
continue;
@@ -700,6 +701,7 @@ static int sock_mmsg_recv(pktio_entry_t *pktio_entry, int 
index ODP_UNUSED,
 
num = packet_alloc_multi(pkt_sock->pool, pkt_sock->mtu,
&pkt_table[i], 1);
+
if (odp_unlikely(num != 1)) {
pkt_table[i] = ODP_PACKET_INVALID;
break;
@@ -724,23 +726,34 @@ static int sock_mmsg_recv(pktio_entry_t *pktio_entry, int 
index ODP_UNUSED,
void *base = msgvec[i].msg_hdr.msg_iov->iov_base;
struct ethhdr *eth_hdr = base;
odp_packet_hdr_t *pkt_hdr;
+   odp_packet_t pkt;
+   int ret;
+
+   pkt = pkt_table[i];
 
/* Don't receive packets sent by ourselves */
if (odp_unlikely(ethaddrs_equal(pkt_sock->if_mac,
eth_hdr->h_source))) {
-   odp_packet_free(pkt_table[i]);
+   odp_packet_free(pkt);
continue;
}
-   pkt_hdr = odp_packet_hdr(pkt_table[i]);
+
/* Parse and set packet header data */
-   odp_packet_pull_tail(pkt_table[i],
-odp_packet_len(pkt_table[i]) -
-msgvec[i].msg_len);
+   ret = odp_packet_trunc_tail(&pkt, odp_packet_len(pkt) -
+   msgvec[i].msg_len,
+   NULL, NULL);
+   if (ret < 0) {
+   ODP_ERR("trunk_tail failed");
+   odp_packet_free(pkt);
+   continue;
+   }
+
+   pkt_hdr = odp_packet_hdr(pkt);
packet_parse_l2(&pkt_hdr->p, pkt_hdr->frame_len);
packet_set_ts(pkt_hdr, ts);
pkt_hdr->input = pktio_entry->s.handle;
 
-   pkt_table[nb_rx] = pkt_table[i];
+   pkt_table[nb_rx] = pkt;
nb_rx++;
}
 
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 14/23] test: validation: packet: fix bugs in tailroom and concat tests

2016-11-21 Thread Petri Savolainen
The tailroom test did not exercise odp_packet_extend_tail() since it
pushed the tail by too few bytes. Corrected the test to extend the
tail 100 bytes past the available tailroom.

The concat test passed the same packet as both the src and dst
packet. There is no valid use case for concatenating a packet onto
itself (it forms a loop). Corrected the test to concatenate two
copies of the same packet.

Signed-off-by: Petri Savolainen 
---
 test/common_plat/validation/api/packet/packet.c | 15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/test/common_plat/validation/api/packet/packet.c 
b/test/common_plat/validation/api/packet/packet.c
index 454c73f..8b31872 100644
--- a/test/common_plat/validation/api/packet/packet.c
+++ b/test/common_plat/validation/api/packet/packet.c
@@ -642,9 +642,10 @@ void packet_test_tailroom(void)
_verify_tailroom_shift(&pkt, 0);
 
if (segmentation_supported) {
-   _verify_tailroom_shift(&pkt, pull_val);
+   push_val = room + 100;
+   _verify_tailroom_shift(&pkt, push_val);
_verify_tailroom_shift(&pkt, 0);
-   _verify_tailroom_shift(&pkt, -pull_val);
+   _verify_tailroom_shift(&pkt, -push_val);
}
 
odp_packet_free(pkt);
@@ -1157,12 +1158,18 @@ void packet_test_concatsplit(void)
odp_packet_t pkt, pkt2;
uint32_t pkt_len;
odp_packet_t splits[4];
+   odp_pool_t pool;
 
-   pkt = odp_packet_copy(test_packet, odp_packet_pool(test_packet));
+   pool = odp_packet_pool(test_packet);
+   pkt  = odp_packet_copy(test_packet, pool);
+   pkt2 = odp_packet_copy(test_packet, pool);
pkt_len = odp_packet_len(test_packet);
CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID);
+   CU_ASSERT_FATAL(pkt2 != ODP_PACKET_INVALID);
+   CU_ASSERT(pkt_len == odp_packet_len(pkt));
+   CU_ASSERT(pkt_len == odp_packet_len(pkt2));
 
-   CU_ASSERT(odp_packet_concat(&pkt, pkt) == 0);
+   CU_ASSERT(odp_packet_concat(&pkt, pkt2) == 0);
CU_ASSERT(odp_packet_len(pkt) == pkt_len * 2);
_packet_compare_offset(pkt, 0, pkt, pkt_len, pkt_len);
 
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 02/23] linux-gen: pktio: do not free zero packets

2016-11-21 Thread Petri Savolainen
In some error cases, the netmap and dpdk pktios were calling
odp_packet_free_multi() with zero packets. Moved the existing error
check so that a free call is never made with zero packets.

Signed-off-by: Petri Savolainen 
---
 platform/linux-generic/pktio/dpdk.c   | 10 ++
 platform/linux-generic/pktio/netmap.c | 10 ++
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/platform/linux-generic/pktio/dpdk.c 
b/platform/linux-generic/pktio/dpdk.c
index 11f3509..0eb025a 100644
--- a/platform/linux-generic/pktio/dpdk.c
+++ b/platform/linux-generic/pktio/dpdk.c
@@ -956,10 +956,12 @@ static int dpdk_send(pktio_entry_t *pktio_entry, int 
index,
rte_pktmbuf_free(tx_mbufs[i]);
}
 
-   odp_packet_free_multi(pkt_table, tx_pkts);
-
-   if (odp_unlikely(tx_pkts == 0 && __odp_errno != 0))
-   return -1;
+   if (odp_unlikely(tx_pkts == 0)) {
+   if (__odp_errno != 0)
+   return -1;
+   } else {
+   odp_packet_free_multi(pkt_table, tx_pkts);
+   }
 
return tx_pkts;
 }
diff --git a/platform/linux-generic/pktio/netmap.c 
b/platform/linux-generic/pktio/netmap.c
index 412beec..c1cdf72 100644
--- a/platform/linux-generic/pktio/netmap.c
+++ b/platform/linux-generic/pktio/netmap.c
@@ -830,10 +830,12 @@ static int netmap_send(pktio_entry_t *pktio_entry, int 
index,
if (!pkt_nm->lockless_tx)
odp_ticketlock_unlock(&pkt_nm->tx_desc_ring[index].s.lock);
 
-   odp_packet_free_multi(pkt_table, nb_tx);
-
-   if (odp_unlikely(nb_tx == 0 && __odp_errno != 0))
-   return -1;
+   if (odp_unlikely(nb_tx == 0)) {
+   if (__odp_errno != 0)
+   return -1;
+   } else {
+   odp_packet_free_multi(pkt_table, nb_tx);
+   }
 
return nb_tx;
 }
-- 
2.8.1



[lng-odp] [API-NEXT PATCH v4 01/23] linux-gen: ipc: disable build of ipc pktio

2016-11-21 Thread Petri Savolainen
The IPC pktio implementation depends heavily on pool internals. Its
build is disabled due to the pool re-implementation. IPC should be
re-implemented with a cleaner internal interface towards the pool and
shm code.

Signed-off-by: Petri Savolainen 
---
 platform/linux-generic/pktio/ipc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/platform/linux-generic/pktio/ipc.c 
b/platform/linux-generic/pktio/ipc.c
index c1f28db..0e99c6e 100644
--- a/platform/linux-generic/pktio/ipc.c
+++ b/platform/linux-generic/pktio/ipc.c
@@ -3,7 +3,7 @@
  *
  * SPDX-License-Identifier: BSD-3-Clause
  */
-
+#ifdef _ODP_PKTIO_IPC
 #include 
 #include 
 #include 
@@ -795,3 +795,4 @@ const pktio_if_ops_t ipc_pktio_ops = {
.pktin_ts_from_ns = NULL,
.config = NULL
 };
+#endif
-- 
2.8.1



[lng-odp] Test of plain text mail

2016-11-21 Thread Mike Holmes
All

This is an experiment to check that the list can notify HTML posters
that the list requires text mail.

Mike

-- 
Mike Holmes
Program Manager - Linaro Networking Group
Linaro.org │ Open source software for ARM SoCs
"Work should be fun and collaborative, the rest follows"


Re: [lng-odp] [API-NEXT PATCHv7 00/13] using _ishm as north API mem allocator

2016-11-21 Thread Christophe Milard
ping

On 17 November 2016 at 16:46, Christophe Milard
 wrote:
> since v6:
> - All points according to Petri's request i.e.:
> odp_shm_find_external() changed again: now odp_shm_import().
> Function description updated.
>
> since v5:
> -fixed doxygen issue (Bill)
>
> since v4:
> - All points of v3 according to Petri's request i.e.:
> -All API patches merges as one single patch.
> -lock flag removed: memory is now always locked.
> -alignment and flags inherited from remote ODP instance when sharing
>  memory.
> -function name changed to: odp_shm_find_external().
>
> since v3:
> -Comments from Petri addressed, with exceptions for:
> -Patches related to the API not merged as they are part of different
>  functions.
> -Lock flag not removed: Why wouldn't any ODP application be allowed
>  to reserve memory for slower (control or other things) business?
>  Should really ALL shared memory be locked?
> -Alignment and flag parameter of odp_shm_reserve_external() kept:
>  The alignment (and some flags) affect virtual memory, which is different
>  between for each ODP instance.
> -function name odp_shm_reserve_external() kept, as this function does
>  return a new handle (which requires free() as any handle would)
> These 4 items are scheduled for the arch call today
>
> since v2:
> -some minor changes on the doc (Bill)
> -doc for the ODP_SHM_EXPORT flag (Christophe)
> -reduction of memory requirement for shm tests (Mattias)
>
> since v1:
> -flags _ODP_SHM_PROC_NOCREAT and _ODP_SHM_O_EXCL get new values
> (but remain useless: Should be removed when IPC is updated)  (Maxim)
>
> -In get_ishm_flags(), odp_shm_capability() local variable flgs renamed
> (for better distinction from the other "flags" variable). (Maxim)
>
> -Added doc updates with shm api extensions.
>
> This Patch series aims at using _ishm as north API memory allocator.
> odp_shared_memory.c just becomes a wrapper around _ishm.
> _ishm supports "process mode", i.e. memory allocated with _ishm
> is sharable by all ODP threads of a given ODP instance, regardless of
> thread type (e.g. process) or thread creation time (fork time).
>
> NOTE: This patch series will break IPC: This is due to the fact that
> IPC relied on a "special case" in the former memory allocator that broke
> ODP instance scoping. I don't think this should be kept.
> I have included in this patch series a function to share memory between
> designated ODP instances. If we want to have IPC, it should use that.
>
> Christophe Milard (13):
>   linux-gen: _ishm: create description file for external memory sharing
>   linux-gen: _ishm: allow memory alloc/free at global init/term
>   linux-gen: use ishm as north API mem allocator
>   linux-gen: Push internal flag definition
>   api: shm: add flags to shm_reserve and function to find external mem
>   linux-gen: shm: new ODP_SHM_SINGLE_VA flag implementation
>   test: api: shmem: new proper tests for shm API
>   linux-gen: _ishm: adding function to map memory from other ODP
>   linux-gen: shm: add flag and function to share memory between ODP
> instances
>   test: linux-gen: api: shmem: test sharing memory between ODP instances
>   linux-gen: _ishm: cleaning remaining block at odp_term_global
>   linux_gen: _ishm: decreasing the number of error messages when no huge
> pages
>   doc: updating docs for the shm interface extension
>
>  doc/users-guide/users-guide.adoc   |  68 +-
>  include/odp/api/spec/shared_memory.h   |  46 +-
>  platform/linux-generic/_ishm.c | 456 ++
>  platform/linux-generic/include/_ishm_internal.h|   6 +
>  platform/linux-generic/include/odp_internal.h  |   5 -
>  platform/linux-generic/include/odp_shm_internal.h  |   4 +-
>  platform/linux-generic/odp_init.c  |  19 -
>  platform/linux-generic/odp_shared_memory.c | 412 ++--
>  test/common_plat/validation/api/shmem/shmem.c  | 687 
> -
>  test/common_plat/validation/api/shmem/shmem.h  |   5 +-
>  test/linux-generic/validation/api/shmem/.gitignore |   3 +-
>  .../linux-generic/validation/api/shmem/Makefile.am |  22 +-
>  .../validation/api/shmem/shmem_linux.c | 220 +--
>  .../api/shmem/{shmem_odp.c => shmem_odp1.c}|  10 +-
>  .../api/shmem/{shmem_odp.h => shmem_odp1.h}|   0
>  .../validation/api/shmem/shmem_odp2.c  |  95 +++
>  .../validation/api/shmem/shmem_odp2.h  |   7 +
>  17 files changed, 1464 insertions(+), 601 deletions(-)
>  rename test/linux-generic/validation/api/shmem/{shmem_odp.c => shmem_odp1.c} 
> (81%)
>  rename test/linux-generic/validation/api/shmem/{shmem_odp.h => shmem_odp1.h} 
> (100%)
>  create mode 100644 test/linux-generic/validation/api/shmem/shmem_odp2.c
>  create mode 100644 test/linux-generic/validation/api/shmem/shmem_odp2.h

Re: [lng-odp] enumerator and driver registration

2016-11-21 Thread Francois Ozog
Hello,

I made a Google Doc version of this to simplify commenting and resolution.
https://docs.google.com/a/linaro.org/document/d/10gS9wPNza-EXfxu9iVdk6o7-
cfhUz2XJ2dp50t0ERas/edit?usp=sharing

(I gave edit rights to Christophe, Forest and Yi: you should use the PrettyCode
addon for syntax highlighting).

FF

On 18 November 2016 at 14:57, Christophe Milard <
christophe.mil...@linaro.org> wrote:

> Here follows a first draft of what the API around this topic could be.
> A few comments:
>
> -The part concerning the enumerator registration and the driver
> registration could well be separated in different files. Nevertheless
> I think they should both be part of the same south interface
> (odpdrv_*): first because they both regard drivers, but also because a
> driver may well be acting as an enumerator. (Yes, I have changed my
> mind a bit here: we can try to have a nice naming scheme to distinguish the
> areas these functions tackle, but I am afraid that talking about
> different interfaces with this granularity will just imply too many
> interfaces).
>
> -In my mind, an enumerator will be OS dependent, but hopefully not the
> drivers.
>
> -An enumerator becomes something external to ODP (as the drivers are). ODP
> will nevertheless provide some default enumerator (still as a module)
>
> -The enumerator defines the interface(s) to the devices it enumerates.
> As an example, pci.h, pci_vfio.h and pci_uio.h will be delivered by the
> PCI enumerator, not ODP as such (of course, if the PCI enumerator is
> delivered with ODP these files will be there anyway, but as a
> principle the interface is defined by the enumerator: new enumerators
> will have to provide the interface(s) to their devices.)
>
> -This is why I have introduced the version number for the
> enumerator: Because the driver is compiled both against its enumerator
> *.h files (and also some odp *.h files to a smaller extent), it is
> important for the driver to be sure that it is running against the
> correct version of its enumerator: loading an old driver.so against a
> newer enumerator should fail.
> Any better approach welcome.
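>
> A purely hypothetical sketch of that idea (none of these names exist
> in ODP; the struct and the version constant are invented here):
>
> /* From the enumerator's header (e.g. pci.h); bumped on any change */
> #define PCI_ENUM_API_VERSION 2
>
> typedef struct {
> 	const char *name;
> 	int enum_api_version;	/* baked in at driver compile time */
> 	int (*probe)(void *device);
> } drv_registration_t;
>
> /* Registration refuses a driver.so built against another version */
> static int drv_register(const drv_registration_t *drv, int running)
> {
> 	if (drv->enum_api_version != running)
> 		return -1;	/* old driver.so + newer enumerator: fail */
>
> 	return 0;
> }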
>
> ODP should then provide functions to create ODP objects:
> create_pktio() to start with, but possibly other things like
> create_crypto_engine() or create_switch() in the future.
>
> Hope that makes sense. Comments welcome.
>
> Forrest: does this make sense for you? Would your card be happy with that?
>
> Also: Should this notion of interface be extended for other objects?
> For instance, allocating receive memory space for RX may be done either
> in SW or in HW: are we talking about 2 different drv<->pktio
> interfaces here?
>
> Christophe
>



-- 
François-Frédéric Ozog | *Director Linaro Networking Group*
T: +33.67221.6485
francois.o...@linaro.org | Skype: ffozog


Re: [lng-odp] virtio and vhost support in ODP

2016-11-21 Thread Francois Ozog
OVS is a standard component of OpenStack. I am not sure I would say OVS is
fundamental: Juniper Contrail is probably more DEPLOYED in real NFV
architectures. Very large deployments need OpenStack Distributed Virtual
Routing (https://wiki.openstack.org/wiki/Neutron/DVR) which is not using
OVS because of its Layer 2 nature. The same applies to IPSec termination:
OpenStack VPNaaS (https://wiki.openstack.org/wiki/Neutron/VPNaaS).

OVS is fundamental for marketing benchmarking today.

Furthermore, when you accelerate OVS with userland frameworks (ODP or
DPDK):
- you lose the multi-tenant capabilities of OpenStack.
- OVS features have to be almost entirely rewritten in the userland framework.

LNG is working on integrating VPP and ODP, and plans to leverage
P4-capable hardware (P4 supersedes OpenFlow).

FF

On 20 November 2016 at 08:37, Yehuda Yitschak  wrote:

> Hi Francois
>
> > From: Francois Ozog [mailto:francois.o...@linaro.org]
> > Sent: Wednesday, November 16, 2016 13:42
> > To: Christophe Milard
> > Cc: Yehuda Yitschak; Shadi Ammouri; lng-odp@lists.linaro.org
> > Subject: Re: [lng-odp] virtio and vhost support in ODP
> >
> > If I may complement Christophe answer:
> >
> > the target is to have ODP be able to leverage virtio-net in the VM
> directly. Today, ODP can make use of virtio-net in a VM through DPDK packet
> io using ODP-DPDK.
> >
> > Accessing virtio-net device directly from ODP require both a device
> framework and a driver framework which we are working on.
> >
> On the host side, we do not plan to do anything yet because it indirectly
> means building OVS-ODP which may not happen at all.
>
> Just curious... Why may that not happen?
> Today, OVS seems like a fundamental building block of network
> virtualization systems.
> I know there was an initial OVS-ODP initiative but it seems to have been
> inactive for a while.
>
> Thanks
>
> Yehuda
>
> >
> > That said, we started to work on VPP on top of ODP. At some point in
> time, VPP may be a first class citizen of OpenStack and then we will be
> implementing vhost-user.
> >
> > Cordially,
> >
> >  FF
>
> On 15 November 2016 at 08:23, Christophe Milard <
> christophe.mil...@linaro.org> wrote:
> On 15 November 2016 at 06:46, Yehuda Yitschak  wrote:
> > Hi Christophe
> >
> Sorry for the extremely late response. Things got lost in my e-mail.
> >
> > I'll try to clarify my question. Hope I'm not repeating the obvious :-)
> >
> > Virtio network implementation has 2 sides:
> > - The driver side: used by the VM, a.k.a frontend or simply
> virtio-net
> > - The device side: (a.k.a backend) implemented in 3 different
> ways today:
> > - inside QEMU.
> > - inside the kernel  (a.k.a vhost-net)
> > - inside user-space and attached to DPDK or OVS (a.k.a
> vhost-user)
> >
> > For ODP, both virtio-net and vhost-user can be useful.
>  - virtio-net, for running ODP inside a VM and accelerating the
> traffic to the virtio NIC
> >  - vhost-user for running ODP in the hypervisor and intercepting the
> VM's traffic directly into the hypervisor's user-space.
> >
> > I was wondering which of the 2 you are implementing
>
> The target is:
> The driver side:  virtio-net
>
> Christophe
>
>
> > Thanks
> >
> > Yehuda
> >
> >
> >> -Original Message-
> >> From: Christophe Milard [mailto:christophe.mil...@linaro.org]
> >> Sent: Wednesday, October 26, 2016 17:35
> >> To: Yehuda Yitschak
> >> Cc: Shadi Ammouri; Mike Holmes
> >> Subject: Re: [lng-odp] virtio and vhost support in ODP
> >>
> >> Not sure I understand your question:
> >> I am just aiming at a simple virtio driver to start with. To run over
> Qemu.
> >> I thought Qemu's backend is Qemu's problem :-).
> >>
> >> Hope that does answer your question. If not, please tell me. I could
> >> definitely be the one missing something!
> >>
> >> Christophe
> >>
> >> On 26 October 2016 at 15:56, Yehuda Yitschak 
> >> wrote:
> >> > Hi Christophe
> >> >
> >> > Thanks for sharing the information. It's good to know ODP is taking
> >> > this direction
> >> >
> >> > Just wondering what is planned approach for virtio.
> >> > Are you going to port vhost-user for ODP or do you plan to use a
> >> > different method
> >> >
> >> > Thanks a lot
> >> >
> >> > Yehuda
> >> >
> >> >> -Original Message-
> >> >> From: Christophe Milard [mailto:christophe.mil...@linaro.org]
> >> >> Sent: Wednesday, October 26, 2016 14:23
> >> >> To: Yehuda Yitschak
> >> >> Cc: lng-odp@lists.linaro.org; santosh.shu...@linaro.org; Shadi
> >> >> Ammouri; Mike Holmes
> >> >> Subject: Re: [lng-odp] virtio and vhost support in ODP
> >> >>
> >> >> Hi,
> >> >>
> >> >> ODP is currently weak when it comes to drivers, sadly.
> >> >> The way to go right now is to use the ODP-DPDK implementation, so one
> >> >> could use the functionality available there.
> >> >>
> >> >> But I am trying to change that: I am right now trying to