Re: [lng-odp] [RFC] Add ipc.h

2015-05-22 Thread Alexandru Badicioiu
On 22 May 2015 at 00:09, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 21 May 2015 at 17:45, Maxim Uvarov maxim.uva...@linaro.org wrote:

 From RFC 3549, Netlink looks like a good protocol to communicate between
 data plane and control plane. And messages are defined by that protocol
 too. At least we should do something similar.

 Netlink seems limited to the specific functionality already present in the
 Linux kernel. An ODP IPC/message passing mechanism must be extensible and
 support user-defined messages. There's no reason for ODP MBUS to impose any
 message format.

Netlink is extensively implemented in the Linux kernel but the RFC explicitly
doesn't limit it to this scope.
Netlink messages have a header, defined by the Netlink protocol, and a payload
which contains user-defined messages in TLV format (e.g. RTM_XXX messages
for routing control). Doesn't the TLV format suffice for the needs of ODP
applications?
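
For reference, a Netlink message is a fixed header followed by TLV-encoded
attributes, roughly like this (a sketch based on RFC 3549 and linux/netlink.h,
not something the ODP proposal defines):

    struct nlmsghdr {
            uint32_t nlmsg_len;   /* length of message including this header */
            uint16_t nlmsg_type;  /* message content, e.g. RTM_NEWROUTE */
            uint16_t nlmsg_flags; /* additional flags, e.g. NLM_F_ACK */
            uint32_t nlmsg_seq;   /* sequence number */
            uint32_t nlmsg_pid;   /* sending process port ID */
    };

    struct nlattr {               /* one TLV attribute in the payload */
            uint16_t nla_len;     /* length of attribute including header */
            uint16_t nla_type;    /* protocol- or user-defined type */
            /* value bytes follow, padded to 4-byte alignment */
    };

Applications put their own semantics in the type/value pairs; the header
stays the same for every protocol family.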


 Any (set of) applications can model their message formats on Netlink.

 I don't understand how Netlink can be used to communicate between (any)
 two applications. Please enlighten me.

Netlink is not limited to user-kernel communication; only some of the
current services are, like RTM_XXX for routing configuration. For example,
Generic Netlink allows users in both kernel and userspace -
https://lwn.net/Articles/208755/:

When looking at figure #1 it is important to note that any Generic Netlink
user can communicate with any other user over the bus using the same API
regardless of where the user resides in relation to the kernel/userspace
boundary.


 -- Ola




 Maxim.

 On 21 May 2015 at 17:46, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 21 May 2015 at 15:56, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:

 I got the impression that ODP MBUS API would define a transport
 protocol/API between an ODP

 No, the MBUS API is just an API for message passing (think of the OSE IPC
 API) and doesn't specify use cases or content. Just like the ODP packet API
 doesn't specify what the content in a packet means or the format of the
 content.


 application and a control plane application, like TCP is the transport
 protocol for HTTP applications (e.g. the Web). Netlink defines exactly that -
 transport protocol for configuration messages.
 Maxim asked about the messages - should applications define the message
 format and/or the message content? Wouldn't it be an easier task for the
 application to define only the content and let ODP define a format?

 How can you define a format when you don't know what the messages are
 used for and what data needs to be transferred? Why should the MBUS API or
 implementations care about the message format? It's just payload and none
 of their business.

 If you want to, you can specify formats for specific purposes, e.g.
 reuse Netlink formats for the functions that Netlink supports. Some ODP
 applications may use this, others not (because they use some other protocol
 or they implement some other functionality).



 Reliability could be an issue but Netlink spec says how applications
 can create reliable protocols:


 One could create a reliable protocol between an FEC and a CPC by
using the combination of sequence numbers, ACKs, and retransmit
timers.  Both sequence numbers and ACKs are provided by Netlink;
timers are provided by Linux.

 And you could do the same in ODP but I prefer not to; this adds a level
 of complexity to the application code that I do not want. Perhaps the actual
 MBUS implementation has to do this, but then hidden from the applications.
 Just like TCP reliability and ordering etc. are hidden from the applications
 that just do read and write.

One could create a heartbeat protocol between the FEC and CPC by
using the ECHO flags and the NLMSG_NOOP message.
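
As a concrete illustration, the sender side of such a scheme could look
roughly like this (a minimal sketch; nl_send(), start_timer(), stop_timer()
and find_pending() are hypothetical application hooks, not Netlink APIs):

    #include <stdint.h>
    #include <stddef.h>

    struct pending_msg {
            uint32_t seq;   /* nlmsg_seq used for this message */
            void *msg;      /* buffered copy for retransmission */
            size_t len;
            int retries;
    };

    void nl_send(const void *msg, size_t len);         /* hypothetical */
    void start_timer(struct pending_msg *pm, int ms);  /* hypothetical */
    void stop_timer(struct pending_msg *pm);           /* hypothetical */
    struct pending_msg *find_pending(uint32_t seq);    /* hypothetical */

    static void send_reliable(struct pending_msg *pm)
    {
            nl_send(pm->msg, pm->len);  /* sent with NLM_F_ACK requested */
            start_timer(pm, 100);       /* retransmit if no ACK arrives */
    }

    static void on_ack(uint32_t seq)    /* called on the Netlink ACK */
    {
            struct pending_msg *pm = find_pending(seq);

            if (pm != NULL)
                    stop_timer(pm);     /* delivered, stop retransmitting */
    }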







 On 21 May 2015 at 16:23, Ola Liljedahl ola.liljed...@linaro.org
 wrote:

 On 21 May 2015 at 15:05, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:

 I was referring to the  Netlink protocol in itself, as a model for
 ODP MBUS (or IPC).

 Isn't the Netlink protocol what the endpoints send between them? This
 is not specified by the ODP IPC/MBUS API, applications can define or 
 re-use
 whatever protocol they like. The protocol definition is heavily dependent
 on what you actually use the IPC for and we shouldn't force ODP users to
 use some specific predefined protocol.

 Also the wire protocol is left undefined, this is up to the
 implementation to define and each platform can have its own definition.

  And Netlink isn't even reliable. I know that creates problems,
  e.g. it is impossible to get a clean and complete snapshot of e.g. the
  routing table.


 The interaction between the FEC and the CPC, in the Netlink context,
defines a protocol.  Netlink provides mechanisms for the CPC
(residing in user space) and the FEC (residing in kernel space) to
have their own protocol definition -- *kernel space and user space
just mean different 

Re: [lng-odp] [RFC] Add ipc.h

2015-05-22 Thread Ola Liljedahl
On 22 May 2015 at 08:14, Alexandru Badicioiu alexandru.badici...@linaro.org
 wrote:



 On 22 May 2015 at 00:09, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 21 May 2015 at 17:45, Maxim Uvarov maxim.uva...@linaro.org wrote:

  From RFC 3549, Netlink looks like a good protocol to communicate
  between data plane and control plane. And messages are defined by that
  protocol too. At least we should do something similar.

 Netlink seems limited to the specific functionality already present in
 the Linux kernel. An ODP IPC/message passing mechanism must be extensible
 and support user-defined messages. There's no reason for ODP MBUS to impose
 any message format.

 Netlink is extensively implemented in the Linux kernel but the RFC explicitly
 doesn't limit it to this scope.
 Netlink messages have a header, defined by the Netlink protocol, and a
 payload which contains user-defined messages in TLV format (e.g. RTM_XXX
 messages for routing control). Doesn't the TLV format suffice for the needs
 of ODP applications?

Why should we impose any message format on ODP applications?
An ODP MBUS implementation could perhaps use Netlink as the mechanism to
connect to other endpoints and transfer messages in both directions. By not
specifying irrelevant details in the MBUS API, we give more freedom to
implementations. I doubt Netlink will always be available or will be the
best choice on all platforms where people are trying to implement ODP.

Since the ODP implementation will control the definition of the message
event type, it can reserve memory for necessary (implementation specific)
headers preceding the user-defined payload.
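
As a sketch (hypothetical linux-generic style internals, not part of the
proposed API), the implementation-owned message event could look like:

    /* Internal to the implementation; applications only ever see the
     * payload through the MBUS API. */
    typedef struct {
            odp_buffer_hdr_t buf_hdr; /* standard event/buffer metadata */
            uint8_t impl_hdr[64];     /* room for transport headers, e.g. a
                                       * struct nlmsghdr if Netlink is used */
            uint32_t payload_len;     /* length of user-defined payload */
            uint8_t payload[];        /* user-defined message content */
    } msg_event_hdr_t;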



 Any (set of) applications can model their message formats on Netlink.

  I don't understand how Netlink can be used to communicate between (any)
  two applications. Please enlighten me.

 Netlink is not limited to user-kernel communication; only some of the
 current services are, like RTM_XXX for routing configuration. For example,
 Generic Netlink allows users in both kernel and userspace -
 https://lwn.net/Articles/208755/:

 When looking at figure #1 it is important to note that any Generic Netlink
 user can communicate with any other user over the bus using the same API
 regardless of where the user resides in relation to the kernel/userspace
 boundary.

 Another claim but no description of, or examples of, how this is actually
accomplished.
All the examples in this article are from the kernel perspective. Not very
useful for a user-to-user messaging mechanism.



 -- Ola




 Maxim.

 On 21 May 2015 at 17:46, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 21 May 2015 at 15:56, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:

 I got the impression that ODP MBUS API would define a transport
 protocol/API between an ODP

  No, the MBUS API is just an API for message passing (think of the OSE
 IPC API) and doesn't specify use cases or content. Just like the ODP packet
 API doesn't specify what the content in a packet means or the format of the
 content.


 application and a control plane application, like TCP is the transport
  protocol for HTTP applications (e.g. the Web). Netlink defines exactly that -
 transport protocol for configuration messages.
 Maxim asked about the messages - should applications define the
  message format and/or the message content? Wouldn't it be an easier task for
  the application to define only the content and let ODP define a format?

 How can you define a format when you don't know what the messages are
 used for and what data needs to be transferred? Why should the MBUS API or
 implementations care about the message format? It's just payload and none
 of their business.

 If you want to, you can specify formats for specific purposes, e.g.
 reuse Netlink formats for the functions that Netlink supports. Some ODP
  applications may use this, others not (because they use some other protocol
 or they implement some other functionality).



 Reliability could be an issue but Netlink spec says how applications
 can create reliable protocols:


 One could create a reliable protocol between an FEC and a CPC by
using the combination of sequence numbers, ACKs, and retransmit
timers.  Both sequence numbers and ACKs are provided by Netlink;
timers are provided by Linux.

 And you could do the same in ODP but I prefer not to, this adds a
 level of complexity to the application code I do not want. Perhaps the
 actual MBUS implementation has to do this but then hidden from the
 applications. Just like TCP reliability and ordering etc is hidden from the
 applications that just do read and write.

One could create a heartbeat protocol between the FEC and CPC by
using the ECHO flags and the NLMSG_NOOP message.







 On 21 May 2015 at 16:23, Ola Liljedahl ola.liljed...@linaro.org
 wrote:

 On 21 May 2015 at 15:05, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:

 I was referring to the  Netlink protocol in itself, as a model for
 ODP MBUS (or 

Re: [lng-odp] buffer_alloc length parameter

2015-05-22 Thread Bill Fischofer
In linux-generic that value represents the default allocation length.

On Friday, May 22, 2015, Zoltan Kiss zoltan.k...@linaro.org wrote:

 Hi,

 While fixing up things in the DPDK implementation I've found that
 linux-generic might have some troubles too. odp_buffer_alloc() and
 odp_packet_alloc() use odp_pool_to_entry(pool_hdl)->s.params.buf.size, but
 that field is not meaningful if it's a packet pool (which is always the case
 with odp_packet_alloc(), and might be the case with odp_buffer_alloc()).
 My first idea would be to use s.params.pkt.seg_len in that case, but it
 might be 0. Maybe s.seg_size would be the right value?
 If anyone has time to come up with a patch to fix this, feel free; I
 probably won't have time to work on this in the near future.
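
 A minimal sketch of what I have in mind (untested, against linux-generic;
 whether s.seg_size is really the right value is the open question):

     pool_entry_t *pool = odp_pool_to_entry(pool_hdl);
     size_t size;

     /* params.buf.size is only meaningful for buffer pools */
     if (pool->s.params.type == ODP_POOL_PACKET)
             size = pool->s.seg_size;  /* params.pkt.seg_len may be 0 */
     else
             size = pool->s.params.buf.size;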

 Zoli
 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [check-odp PATCH] enable optional perf testing

2015-05-22 Thread Anders Roxell
On 2015-05-20 17:18, Zoltan Kiss wrote:
 I've tested it as well:
 
 Reviewed-by: Zoltan Kiss zoltan.k...@linaro.org
 
 On 14/05/15 20:22, Mike Holmes wrote:
  Signed-off-by: Mike Holmes mike.hol...@linaro.org
 ---
   apply-and-build.sh | 1 +
   build.sh   | 1 +
   helper/generic | 9 -
   3 files changed, 10 insertions(+), 1 deletion(-)
 
 diff --git a/apply-and-build.sh b/apply-and-build.sh
 index 9d73a2f..e41c297 100755
 --- a/apply-and-build.sh
 +++ b/apply-and-build.sh
 @@ -40,6 +40,7 @@ usage() {
    echo -e "\tFILE_EXT:\tsupported extensions in patch file names, default: ${FILE_EXT}"
    echo -e "\tM32_ON_64:\tenable 32 bit builds on a 64 bit host, default: 0"
    echo -e "\tCPP_TEST:\tenable cpp test, default: 0"
  +common_usage
    echo -e "\tDISTCHECK:\tset to 0 to disable DISTCHECK build, default: 1"
    openssl_usage
    tc_usage
 diff --git a/build.sh b/build.sh
 index 2f69bc0..2bf2ac3 100755
 --- a/build.sh
 +++ b/build.sh
 @@ -93,6 +93,7 @@ usage() {
    echo -e "\tCLANG:\t\t build with clang, default: 0"
    echo -e "\tEXIT_ON_ERROR:\t bail out on error, default: 1"
    echo -e "\tCPP_TEST:\t enable cpp test, default: 0"
  +common_usage
    echo -e "\tDISTCHECK:\tset to 1 to enable DISTCHECK build, default: 0"
    echo -e "\tE_VALGRIND:\t run Valgrind, default: 0"
   ${PLATFORM_SHORT}_usage
 diff --git a/helper/generic b/helper/generic
 index aa1b10a..d4be657 100644
 --- a/helper/generic
 +++ b/helper/generic
 @@ -3,7 +3,10 @@
   export SRCDIR=${ROOT_DIR}/src
   export BUILDDIR=${ROOT_DIR}/build
   export LOGDIR=${ROOT_DIR}/log
  -export EXTRA_FLAGS="${EXTRA_FLAGS} --enable-test-perf"

Did a minor fixup,

export EXTRA_FLAGS=${EXTRA_FLAGS:-}

Applied,

Cheers,
Anders
___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [RFC] Add ipc.h

2015-05-22 Thread Alexandru Badicioiu
On 22 May 2015 at 12:10, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 22 May 2015 at 08:14, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:



 On 22 May 2015 at 00:09, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 21 May 2015 at 17:45, Maxim Uvarov maxim.uva...@linaro.org wrote:

  From RFC 3549, Netlink looks like a good protocol to communicate
  between data plane and control plane. And messages are defined by that
  protocol too. At least we should do something similar.

 Netlink seems limited to the specific functionality already present in
 the Linux kernel. An ODP IPC/message passing mechanism must be extensible
 and support user-defined messages. There's no reason for ODP MBUS to impose
 any message format.

 Netlink is extensively implemented in the Linux kernel but the RFC explicitly
 doesn't limit it to this scope.
 Netlink messages have a header, defined by the Netlink protocol, and a
 payload which contains user-defined messages in TLV format (e.g. RTM_XXX
 messages for routing control). Doesn't the TLV format suffice for the needs
 of ODP applications?

 Why should we impose any message format on ODP applications?

A message format, in this case TLV, seems to be adequate for the purpose
of dataplane - control plane communication. I see it more as a useful
thing than a constraint. Isn't dataplane - control plane communication
the purpose of ODP MBUS? Or is it more general?

 An ODP MBUS implementation could perhaps use Netlink as the mechanism to
 connect to other endpoints and transfer messages in both directions. By not
 specifying irrelevant details in the MBUS API, we give more freedom to
 implementations. I doubt Netlink will always be available or will be the
 best choice on all platforms where people are trying to implement ODP.

You see Linux Netlink as a possible implementation for ODP MBUS; I see
Netlink as the protocol for ODP MBUS. The ODP implementation must provide the
Netlink protocol; applications will use the MBUS API to build and send
messages (libnl is an example). Particular implementations can use Linux
kernel Netlink, others can do a complete userspace implementation, even
with HW acceleration (a DMA copy, for example).


 Since the ODP implementation will control the definition of the message
 event type, it can reserve memory for necessary (implementation specific)
 headers preceding the user-defined payload.



 Any (set of) applications can model their message formats on Netlink.

  I don't understand how Netlink can be used to communicate between (any)
  two applications. Please enlighten me.

  Netlink is not limited to user-kernel communication; only some of the
  current services are, like RTM_XXX for routing configuration. For example,
 Generic Netlink allows users in both kernel and userspace -
 https://lwn.net/Articles/208755/:

 When looking at figure #1 it is important to note that any Generic Netlink
 user can communicate with any other user over the bus using the same API
 regardless of where the user resides in relation to the kernel/userspace
 boundary.

  Another claim but no description of, or examples of, how this is actually
  accomplished.
  All the examples in this article are from the kernel perspective. Not
  very useful for a user-to-user messaging mechanism.

This is accomplished by means of socket communication. The Netlink protocol
works over sockets like any other socket-based protocol (UDP/TCP). The
AF_NETLINK address has a pid member which identifies the destination
process (http://man7.org/linux/man-pages/man7/netlink.7.html - Address
formats paragraph).
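
In plain socket terms the addressing looks roughly like this (a sketch,
error handling omitted):

    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    static int nl_open(uint32_t my_port_id)
    {
            struct sockaddr_nl addr;
            int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);

            memset(&addr, 0, sizeof(addr));
            addr.nl_family = AF_NETLINK;
            addr.nl_pid = my_port_id;  /* 0 = let the kernel assign one */
            bind(fd, (struct sockaddr *)&addr, sizeof(addr));
            return fd;
    }

To send to another userspace process, fill in a destination sockaddr_nl
with nl_pid set to that process's port id and use sendto()/sendmsg().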




 -- Ola




 Maxim.

 On 21 May 2015 at 17:46, Ola Liljedahl ola.liljed...@linaro.org
 wrote:

 On 21 May 2015 at 15:56, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:

 I got the impression that ODP MBUS API would define a transport
 protocol/API between an ODP

  No, the MBUS API is just an API for message passing (think of the OSE
 IPC API) and doesn't specify use cases or content. Just like the ODP 
 packet
 API doesn't specify what the content in a packet means or the format of 
 the
 content.


 application and a control plane application, like TCP is the
  transport protocol for HTTP applications (e.g. the Web). Netlink defines 
 exactly
 that - transport protocol for configuration messages.
 Maxim asked about the messages - should applications define the
  message format and/or the message content? Wouldn't it be an easier task for
  the application to define only the content and let ODP define a 
 format?

 How can you define a format when you don't know what the messages are
 used for and what data needs to be transferred? Why should the MBUS API or
 implementations care about the message format? It's just payload and none
 of their business.

 If you want to, you can specify formats for specific purposes, e.g.
 reuse Netlink formats for the functions that Netlink supports. Some ODP
  applications may use this, others not (because they use some other protocol
 or they 

[lng-odp] [PATCH 1/2 v2] examples: ipsec: tunnel mode support

2015-05-22 Thread alexandru.badicioiu
From: Alexandru Badicioiu alexandru.badici...@linaro.org

v1 - added comment for tunnel DB entry use
v2 - fixed tun pointer initialization, new checkpatch compliance

Tunnel mode is enabled from the command line using -t argument with
the following format: SrcIP:DstIP:TunnelSrcIP:TunnelDstIP.
SrcIP - cleartext packet source IP
DstIP - cleartext packet destination IP
TunnelSrcIP - tunnel source IP
TunnelDstIP - tunnel destination IP

The outbound packets matching SrcIP:DstIP will be encapsulated
in a TunnelSrcIP:TunnelDstIP IPSec tunnel (AH/ESP/AH+ESP)
if a matching outbound SA is determined (as for transport mode).
For inbound packets, each entry in the IPSec cache is first matched
against the cleartext addresses, as in transport mode (SrcIP:DstIP),
and then against the tunnel addresses (TunnelSrcIP:TunnelDstIP)
in case the cleartext addresses didn't match. After authentication and
decryption, tunneled packets are verified against the tunnel entry
(i.e. that the packets came in from the expected tunnel).
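
For example, a hypothetical invocation (addresses made up) could be:

    ./odp_ipsec -t 192.168.111.2:192.168.222.2:10.0.111.2:10.0.222.2 ...

where 192.168.111.2:192.168.222.2 are the cleartext source/destination
addresses and 10.0.111.2:10.0.222.2 the tunnel endpoints.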

Signed-off-by: Alexandru Badicioiu alexandru.badici...@linaro.org
Reviewed-by: Steve Kordus skor...@cisco.com
---
 example/ipsec/odp_ipsec.c|  107 +++---
 example/ipsec/odp_ipsec_cache.c  |   32 +-
 example/ipsec/odp_ipsec_cache.h  |6 ++
 example/ipsec/odp_ipsec_sa_db.c  |  134 +-
 example/ipsec/odp_ipsec_sa_db.h  |   55 
 example/ipsec/odp_ipsec_stream.c |  102 
 6 files changed, 406 insertions(+), 30 deletions(-)

diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c
index 99ccd6b..ed56e14 100644
--- a/example/ipsec/odp_ipsec.c
+++ b/example/ipsec/odp_ipsec.c
@@ -135,13 +135,20 @@ typedef struct {
uint8_t  ip_ttl; /** Saved IP TTL value */
int  hdr_len;/** Length of IPsec headers */
int  trl_len;/** Length of IPsec trailers */
+   uint16_t tun_hdr_offset; /** Offset of tunnel header from
+ buffer start */
uint16_t ah_offset;  /** Offset of AH header from buffer start */
uint16_t esp_offset; /** Offset of ESP header from buffer start */
 
+   /* Input only */
+   uint32_t src_ip; /** SA source IP address */
+   uint32_t dst_ip; /** SA dest IP address */
+
/* Output only */
odp_crypto_op_params_t params;  /** Parameters for crypto call */
uint32_t *ah_seq;   /** AH sequence number location */
uint32_t *esp_seq;  /** ESP sequence number location */
+   uint16_t *tun_hdr_id;   /** Tunnel header ID  */
 } ipsec_ctx_t;
 
 /**
@@ -357,6 +364,7 @@ void ipsec_init_pre(void)
/* Initialize our data bases */
init_sp_db();
init_sa_db();
+   init_tun_db();
init_ipsec_cache();
 }
 
@@ -376,19 +384,27 @@ void ipsec_init_post(crypto_api_mode_e api_mode)
	for (entry = sp_db->list; NULL != entry; entry = entry->next) {
sa_db_entry_t *cipher_sa = NULL;
sa_db_entry_t *auth_sa = NULL;
+   tun_db_entry_t *tun = NULL;
 
-	if (entry->esp)
+	if (entry->esp) {
		cipher_sa = find_sa_db_entry(entry->src_subnet,
					     entry->dst_subnet,
					     1);
-	if (entry->ah)
+		tun = find_tun_db_entry(cipher_sa->src_ip,
+					cipher_sa->dst_ip);
+	}
+	if (entry->ah) {
		auth_sa = find_sa_db_entry(entry->src_subnet,
					   entry->dst_subnet,
					   0);
+		tun = find_tun_db_entry(auth_sa->src_ip,
+					auth_sa->dst_ip);
+	}
 
if (cipher_sa || auth_sa) {
if (create_ipsec_cache_entry(cipher_sa,
 auth_sa,
+tun,
 api_mode,
 entry-input,
 completionq,
@@ -670,6 +686,8 @@ pkt_disposition_e do_ipsec_in_classify(odp_packet_t pkt,
	ctx->ipsec.esp_offset = esp ? ((uint8_t *)esp) - buf : 0;
	ctx->ipsec.hdr_len = hdr_len;
	ctx->ipsec.trl_len = 0;
+	ctx->ipsec.src_ip = entry->src_ip;
+	ctx->ipsec.dst_ip = entry->dst_ip;
 
/*If authenticating, zero the mutable fields build the request */
if (ah) {
@@ -750,6 +768,24 @@ pkt_disposition_e do_ipsec_in_finish(odp_packet_t pkt,
		trl_len += esp_t->pad_len + sizeof(*esp_t);
}
 
+   /* We have a tunneled IPv4 packet */
+	if (ip->proto == 

[lng-odp] Makefile problem on v.1.1.0.0 tag

2015-05-22 Thread Radu-Andrei Bulie
Hi,

I tried to make a build based on the aforementioned tag (after doing the normal 
bootstrap sequence)
but an error was thrown:

make[2]: Entering directory '/path/odp/linux-generic'
Makefile:1073: *** missing separator.  Stop.

After some investigations I found that if I do the following (inserting a tab) 
in platform/Makefile.inc:

diff --git a/platform/Makefile.inc b/platform/Makefile.inc
index f232daa..a153b93 100644
--- a/platform/Makefile.inc
+++ b/platform/Makefile.inc
@@ -12,6 +12,6 @@ lib_LTLIBRARIES = $(LIB)/libodp.la

AM_LDFLAGS += -version-number '$(ODP_LIBSO_VERSION)'

-GIT_DESC !=$(top_srcdir)/scripts/git_hash.sh
+	GIT_DESC !=$(top_srcdir)/scripts/git_hash.sh
AM_CFLAGS += -DGIT_HASH=$(GIT_DESC)
AM_CFLAGS += -DPLATFORM=${with_platform}

the compilation succeeds.

Can you confirm this error?

Regards,

Radu
___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCHv6 3/5] ipc: pool_create implement _ODP_SHM_NULL_LOCAL for linux-generic

2015-05-22 Thread Ciprian Barbu
On Thu, May 21, 2015 at 6:32 PM, Maxim Uvarov maxim.uva...@linaro.org wrote:
 On init ODP creates odp_sched_pool. Because we cannot
 modify the API to add a new parameter to odp_pool_param_t, this
 pool should not be shared between different processes. To
 make that work, add a special shm value provided to the pool saying that
 this pool should be in local memory only.

This description is not very clear.


 Signed-off-by: Maxim Uvarov maxim.uva...@linaro.org
 ---
  platform/linux-generic/include/odp/plat/shared_memory_types.h | 4 
  platform/linux-generic/odp_pool.c | 9 +++--
  platform/linux-generic/odp_schedule.c | 2 +-
  3 files changed, 12 insertions(+), 3 deletions(-)

 diff --git a/platform/linux-generic/include/odp/plat/shared_memory_types.h 
 b/platform/linux-generic/include/odp/plat/shared_memory_types.h
 index 4be7356..908bb2e 100644
 --- a/platform/linux-generic/include/odp/plat/shared_memory_types.h
 +++ b/platform/linux-generic/include/odp/plat/shared_memory_types.h
 @@ -30,6 +30,10 @@ typedef ODP_HANDLE_T(odp_shm_t);

  #define ODP_SHM_INVALID _odp_cast_scalar(odp_shm_t, 0)
  #define ODP_SHM_NULL ODP_SHM_INVALID
 +/** NULL shared memory but do not create IPC object for it.
 + *  Platform specific flag.
 + */
 +#define _ODP_SHM_NULL_LOCAL _odp_cast_scalar(odp_shm_t, 0xULL - 1)

I don't think we should have this visible to ODP applications, because
it's a workaround.


  /** Get printable format of odp_shm_t */
  static inline uint64_t odp_shm_to_u64(odp_shm_t hdl)
 diff --git a/platform/linux-generic/odp_pool.c 
 b/platform/linux-generic/odp_pool.c
 index cd2c449..b2f30dc 100644
 --- a/platform/linux-generic/odp_pool.c
 +++ b/platform/linux-generic/odp_pool.c
 @@ -151,10 +151,14 @@ odp_pool_t odp_pool_create(const char *name,
 odp_pool_t pool_hdl = ODP_POOL_INVALID;
 pool_entry_t *pool;
 uint32_t i, headroom = 0, tailroom = 0;
 +   uint32_t shm_flags = 0;

 if (params == NULL)
 return ODP_POOL_INVALID;

 +   if (shm == ODP_SHM_NULL)
 +   shm_flags = ODP_SHM_PROC;
 +
 /* Default size and align for timeouts */
 	if (params->type == ODP_POOL_TIMEOUT) {
 		params->buf.size  = 0; /* tmo.__res1 */
 @@ -289,10 +293,11 @@ odp_pool_t odp_pool_create(const char *name,
   mdata_size +
   udata_size);

 -   if (shm == ODP_SHM_NULL) {
 +   if (shm == ODP_SHM_NULL || shm == _ODP_SHM_NULL_LOCAL) {
 	shm = odp_shm_reserve(pool->s.name,
 			      pool->s.pool_size,
 - ODP_PAGE_SIZE, 0);
 + ODP_PAGE_SIZE,
 + shm_flags);
 if (shm == ODP_SHM_INVALID) {
 		POOL_UNLOCK(pool->s.lock);
 return ODP_POOL_INVALID;
 diff --git a/platform/linux-generic/odp_schedule.c 
 b/platform/linux-generic/odp_schedule.c
 index a63f97a..07422bd 100644
 --- a/platform/linux-generic/odp_schedule.c
 +++ b/platform/linux-generic/odp_schedule.c
 @@ -129,7 +129,7 @@ int odp_schedule_init_global(void)
 params.buf.num   = NUM_SCHED_CMD;
 params.type  = ODP_POOL_BUFFER;

 -	pool = odp_pool_create("odp_sched_pool", ODP_SHM_NULL, params);
 +	pool = odp_pool_create("odp_sched_pool", _ODP_SHM_NULL_LOCAL, 
 params);

I think you're going about this the wrong way. ODP_SHM_NULL is a
convenient way to let the implementation choose the memory where it
will create buffers / packets / tmo. It was introduced because most
hardware platforms don't use shared memory, since they have their own
buffer management. But linux-generic can only use shared memory, and
it's ok to allocate an shm here because it's inside the
implementation, there is no visibility towards the application. So I
think you should allocate the shm here with the right flags and
remove the _ODP_SHM_NULL_LOCAL hack.
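
Something along these lines (an untested sketch; SCHED_POOL_SIZE is a
made-up constant):

    /* in odp_schedule_init_global(), instead of the magic handle: */
    odp_shm_t shm = odp_shm_reserve("odp_sched_pool", SCHED_POOL_SIZE,
                                    ODP_PAGE_SIZE, 0 /* process-local */);
    if (shm == ODP_SHM_INVALID)
            return -1;

    pool = odp_pool_create("odp_sched_pool", shm, params);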


 if (pool == ODP_POOL_INVALID) {
 		ODP_ERR("Schedule init: Pool create failed.\n");
 --
 1.9.1

 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp
___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH 1/2 v1] examples: ipsec: tunnel mode support

2015-05-22 Thread Steve Kordus (skordus)
Reviewed-by: Steve Kordus skor...@cisco.com

-Original Message-
From: Maxim Uvarov [mailto:maxim.uva...@linaro.org] 
Sent: Thursday, May 21, 2015 12:47 PM
To: lng-odp@lists.linaro.org; Steve Kordus (skordus)
Subject: Re: [lng-odp] [PATCH 1/2 v1] examples: ipsec: tunnel mode support

CC Steve, it he is not in mailing list.

Maxim.

On 05/19/2015 11:38, alexandru.badici...@linaro.org wrote:
 From: Alexandru Badicioiu alexandru.badici...@linaro.org

 v1 - added comment for tunnel DB entry use

 Tunnel mode is enabled from the command line using -t argument with 
 the following format: SrcIP:DstIP:TunnelSrcIP:TunnelDstIP.
 SrcIP - cleartext packet source IP
 DstIP - cleartext packet destination IP
 TunnelSrcIP - tunnel source IP
 TunnelDstIP - tunnel destination IP

 The outbound packets matching SrcIP:DstIP will be encapsulated in a 
 TunnelSrcIP:TunnelDstIP IPSec tunnel (AH/ESP/AH+ESP) if a matching 
 outbound SA is determined (as for transport mode).
 For inbound packets each entry in the IPSec cache is matched for the 
 cleartext addresses, as in the transport mode (SrcIP:DstIP) and then 
 for the tunnel addresses (TunnelSrcIP:TunnelDstIP) in case cleartext 
 addresses didn't match. After authentication and decryption tunneled 
 packets are verified against the tunnel entry (packets came in from 
 the expected tunnel).

 Signed-off-by: Alexandru Badicioiu alexandru.badici...@linaro.org
 ---
   example/ipsec/odp_ipsec.c|  105 +++---
   example/ipsec/odp_ipsec_cache.c  |   31 +-
   example/ipsec/odp_ipsec_cache.h  |6 ++
   example/ipsec/odp_ipsec_sa_db.c  |  133 
 +-
   example/ipsec/odp_ipsec_sa_db.h  |   57 
   example/ipsec/odp_ipsec_stream.c |  101 
   6 files changed, 403 insertions(+), 30 deletions(-)

 diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c 
 index 82ed0cb..3931fef 100644
 --- a/example/ipsec/odp_ipsec.c
 +++ b/example/ipsec/odp_ipsec.c
 @@ -135,13 +135,20 @@ typedef struct {
   uint8_t  ip_ttl; /** Saved IP TTL value */
   int  hdr_len;/** Length of IPsec headers */
   int  trl_len;/** Length of IPsec trailers */
 + uint16_t tun_hdr_offset; /** Offset of tunnel header from
 +   buffer start */
   uint16_t ah_offset;  /** Offset of AH header from buffer start */
   uint16_t esp_offset; /** Offset of ESP header from buffer start */
   
 + /* Input only */
 + uint32_t src_ip; /** SA source IP address */
 + uint32_t dst_ip; /** SA dest IP address */
 +
   /* Output only */
   odp_crypto_op_params_t params;  /** Parameters for crypto call */
   uint32_t *ah_seq;   /** AH sequence number location */
   uint32_t *esp_seq;  /** ESP sequence number location */
 + uint16_t *tun_hdr_id;   /** Tunnel header ID  */
   } ipsec_ctx_t;
   
   /**
 @@ -368,6 +375,7 @@ void ipsec_init_pre(void)
   /* Initialize our data bases */
   init_sp_db();
   init_sa_db();
 + init_tun_db();
   init_ipsec_cache();
   }
   
 @@ -387,19 +395,27 @@ void ipsec_init_post(crypto_api_mode_e api_mode)
  	for (entry = sp_db->list; NULL != entry; entry = entry->next) {
   sa_db_entry_t *cipher_sa = NULL;
   sa_db_entry_t *auth_sa = NULL;
 + tun_db_entry_t *tun;
   
 -	if (entry->esp)
 +	if (entry->esp) {
   		cipher_sa = find_sa_db_entry(entry->src_subnet,
   					     entry->dst_subnet,
   					     1);
 -	if (entry->ah)
 +		tun = find_tun_db_entry(cipher_sa->src_ip,
 +					cipher_sa->dst_ip);
 +	}
 +	if (entry->ah) {
   		auth_sa = find_sa_db_entry(entry->src_subnet,
   					   entry->dst_subnet,
   					   0);
 +		tun = find_tun_db_entry(auth_sa->src_ip,
 +					auth_sa->dst_ip);
 +	}
   
   if (cipher_sa || auth_sa) {
   if (create_ipsec_cache_entry(cipher_sa,
auth_sa,
 +  tun,
api_mode,
entry-input,
completionq,
 @@ -672,6 +688,8 @@ pkt_disposition_e do_ipsec_in_classify(odp_packet_t pkt,
  	ctx->ipsec.esp_offset = esp ? ((uint8_t *)esp) - buf : 0;
  	ctx->ipsec.hdr_len = hdr_len;
  	ctx->ipsec.trl_len = 0;
 +	ctx->ipsec.src_ip = entry->src_ip;
 +	ctx->ipsec.dst_ip = 

Re: [lng-odp] [PATCHv6 3/5] ipc: pool_create implement _ODP_SHM_NULL_LOCAL for linux-generic

2015-05-22 Thread Maxim Uvarov

On 05/22/15 13:25, Ciprian Barbu wrote:

On Thu, May 21, 2015 at 6:32 PM, Maxim Uvarov maxim.uva...@linaro.org wrote:

On init ODP creates odp_sched_pool. Because we cannot
modify the API to add a new parameter to odp_pool_param_t, this
pool should not be shared between different processes. To
make that work, add a special shm value provided to the pool saying that
this pool should be in local memory only.

This description is not very clear.


Signed-off-by: Maxim Uvarov maxim.uva...@linaro.org
---
  platform/linux-generic/include/odp/plat/shared_memory_types.h | 4 
  platform/linux-generic/odp_pool.c | 9 +++--
  platform/linux-generic/odp_schedule.c | 2 +-
  3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/platform/linux-generic/include/odp/plat/shared_memory_types.h 
b/platform/linux-generic/include/odp/plat/shared_memory_types.h
index 4be7356..908bb2e 100644
--- a/platform/linux-generic/include/odp/plat/shared_memory_types.h
+++ b/platform/linux-generic/include/odp/plat/shared_memory_types.h
@@ -30,6 +30,10 @@ typedef ODP_HANDLE_T(odp_shm_t);

  #define ODP_SHM_INVALID _odp_cast_scalar(odp_shm_t, 0)
  #define ODP_SHM_NULL ODP_SHM_INVALID
+/** NULL shared memory but do not create IPC object for it.
+ *  Platform specific flag.
+ */
+#define _ODP_SHM_NULL_LOCAL _odp_cast_scalar(odp_shm_t, 0xULL - 1)

I don't think we should have this visible to ODP applications, because
it's a workaround.


  /** Get printable format of odp_shm_t */
  static inline uint64_t odp_shm_to_u64(odp_shm_t hdl)
diff --git a/platform/linux-generic/odp_pool.c 
b/platform/linux-generic/odp_pool.c
index cd2c449..b2f30dc 100644
--- a/platform/linux-generic/odp_pool.c
+++ b/platform/linux-generic/odp_pool.c
@@ -151,10 +151,14 @@ odp_pool_t odp_pool_create(const char *name,
 odp_pool_t pool_hdl = ODP_POOL_INVALID;
 pool_entry_t *pool;
 uint32_t i, headroom = 0, tailroom = 0;
+   uint32_t shm_flags = 0;

 if (params == NULL)
 return ODP_POOL_INVALID;

+   if (shm == ODP_SHM_NULL)
+   shm_flags = ODP_SHM_PROC;
+
 /* Default size and align for timeouts */
 	if (params->type == ODP_POOL_TIMEOUT) {
 		params->buf.size  = 0; /* tmo.__res1 */
@@ -289,10 +293,11 @@ odp_pool_t odp_pool_create(const char *name,
   mdata_size +
   udata_size);

-   if (shm == ODP_SHM_NULL) {
+   if (shm == ODP_SHM_NULL || shm == _ODP_SHM_NULL_LOCAL) {
 	shm = odp_shm_reserve(pool->s.name,
 			      pool->s.pool_size,
- ODP_PAGE_SIZE, 0);
+ ODP_PAGE_SIZE,
+ shm_flags);
 if (shm == ODP_SHM_INVALID) {
 		POOL_UNLOCK(pool->s.lock);
 return ODP_POOL_INVALID;
diff --git a/platform/linux-generic/odp_schedule.c 
b/platform/linux-generic/odp_schedule.c
index a63f97a..07422bd 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -129,7 +129,7 @@ int odp_schedule_init_global(void)
 params.buf.num   = NUM_SCHED_CMD;
 params.type  = ODP_POOL_BUFFER;

-	pool = odp_pool_create("odp_sched_pool", ODP_SHM_NULL, params);
+	pool = odp_pool_create("odp_sched_pool", _ODP_SHM_NULL_LOCAL, params);

I think you're going about this the wrong way. ODP_SHM_NULL is a
convenient way to let the implementation choose the memory where it
will create buffers / packets / tmo. It was introduced because most
hardware platforms don't use shared memory, since they have their own
buffer management. But linux-generic can only use shared memory, and
it's ok to allocate an shm here because it's inside the
implementation, there is no visibility towards the application. So I
think you should allocate the shm here with the right flags and
remove the _ODP_SHM_NULL_LOCAL hack.


We were planning to remove the shm argument from pool_create() in the 
future. For now it will work, but later

I'm not sure what the good solution for that is.

Maxim.


 if (pool == ODP_POOL_INVALID) {
 		ODP_ERR("Schedule init: Pool create failed.\n");
--
1.9.1

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH 1/2 v1] examples: ipsec: tunnel mode support

2015-05-22 Thread Maxim Uvarov

Alex, you need to update the patch to match the new checkpatch.

Feel free to include Steve's Reviewed-by in the patch.

Also one comment about tun below.

Thank you,
Maxim.

On 05/19/15 11:38, alexandru.badici...@linaro.org wrote:

From: Alexandru Badicioiu alexandru.badici...@linaro.org

v1 - added comment for tunnel DB entry use

Tunnel mode is enabled from the command line using -t argument with
the following format: SrcIP:DstIP:TunnelSrcIP:TunnelDstIP.
SrcIP - cleartext packet source IP
DstIP - cleartext packet destination IP
TunnelSrcIP - tunnel source IP
TunnelDstIP - tunnel destination IP

The outbound packets matching SrcIP:DstIP will be encapsulated
in a TunnelSrcIP:TunnelDstIP IPSec tunnel (AH/ESP/AH+ESP)
if a matching outbound SA is determined (as for transport mode).
For inbound packets each entry in the IPSec cache is matched
for the cleartext addresses, as in the transport mode (SrcIP:DstIP)
and then for the tunnel addresses (TunnelSrcIP:TunnelDstIP)
in case cleartext addresses didn't match. After authentication and
decryption tunneled packets are verified against the tunnel entry
(packets came in from the expected tunnel).

Signed-off-by: Alexandru Badicioiu alexandru.badici...@linaro.org
---
  example/ipsec/odp_ipsec.c|  105 +++---
  example/ipsec/odp_ipsec_cache.c  |   31 +-
  example/ipsec/odp_ipsec_cache.h  |6 ++
  example/ipsec/odp_ipsec_sa_db.c  |  133 +-
  example/ipsec/odp_ipsec_sa_db.h  |   57 
  example/ipsec/odp_ipsec_stream.c |  101 
  6 files changed, 403 insertions(+), 30 deletions(-)

diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c
index 82ed0cb..3931fef 100644
--- a/example/ipsec/odp_ipsec.c
+++ b/example/ipsec/odp_ipsec.c
@@ -135,13 +135,20 @@ typedef struct {
uint8_t  ip_ttl; /** Saved IP TTL value */
int  hdr_len;/** Length of IPsec headers */
int  trl_len;/** Length of IPsec trailers */
+   uint16_t tun_hdr_offset; /** Offset of tunnel header from
+ buffer start */
uint16_t ah_offset;  /** Offset of AH header from buffer start */
uint16_t esp_offset; /** Offset of ESP header from buffer start */
  
+	/* Input only */

+   uint32_t src_ip; /** SA source IP address */
+   uint32_t dst_ip; /** SA dest IP address */
+
/* Output only */
odp_crypto_op_params_t params;  /** Parameters for crypto call */
uint32_t *ah_seq;   /** AH sequence number location */
uint32_t *esp_seq;  /** ESP sequence number location */
+   uint16_t *tun_hdr_id;   /** Tunnel header ID  */
  } ipsec_ctx_t;
  
  /**

@@ -368,6 +375,7 @@ void ipsec_init_pre(void)
/* Initialize our data bases */
init_sp_db();
init_sa_db();
+   init_tun_db();
init_ipsec_cache();
  }
  
@@ -387,19 +395,27 @@ void ipsec_init_post(crypto_api_mode_e api_mode)

	for (entry = sp_db->list; NULL != entry; entry = entry->next) {
sa_db_entry_t *cipher_sa = NULL;
sa_db_entry_t *auth_sa = NULL;
+   tun_db_entry_t *tun;
  


You also need to set tun to NULL. ARM gcc complains about it:

odp_ipsec.c: In function ‘main’:
odp_ipsec.c:405:32: error: ‘tun’ may be used uninitialized in this 
function [-Werror=maybe-uninitialized]

if (create_ipsec_cache_entry(cipher_sa,
^
odp_ipsec.c:387:19: note: ‘tun’ was declared here
   tun_db_entry_t *tun;
   ^


-	if (entry->esp)
+	if (entry->esp) {
		cipher_sa = find_sa_db_entry(entry->src_subnet,
					     entry->dst_subnet,
					     1);
-	if (entry->ah)
+		tun = find_tun_db_entry(cipher_sa->src_ip,
+					cipher_sa->dst_ip);
+	}
+	if (entry->ah) {
		auth_sa = find_sa_db_entry(entry->src_subnet,
					   entry->dst_subnet,
					   0);
+		tun = find_tun_db_entry(auth_sa->src_ip,
+					auth_sa->dst_ip);
+	}
  
  		if (cipher_sa || auth_sa) {

if (create_ipsec_cache_entry(cipher_sa,
 auth_sa,
+tun,
 api_mode,
 entry-input,
 completionq,
@@ -672,6 +688,8 @@ pkt_disposition_e do_ipsec_in_classify(odp_packet_t pkt,

Re: [lng-odp] odp timer unit test case question

2015-05-22 Thread Ola Liljedahl
On 22 May 2015 at 06:57, Bala Manoharan bala.manoha...@linaro.org wrote:

 On 21 May 2015 at 16:44, Ola Liljedahl ola.liljed...@linaro.org wrote:
  On 21 May 2015 at 09:53, Jerin Jacob jerin.ja...@caviumnetworks.com
 wrote:
  On Wed, May 20, 2015 at 05:28:24PM +0200, Ola Liljedahl wrote:
  On 20 May 2015 at 16:16, Jerin Jacob jerin.ja...@caviumnetworks.com
 wrote:
   On Wed, May 20, 2015 at 12:42:29PM +0200, Ola Liljedahl wrote:
   On 20 May 2015 at 06:56, Jerin Jacob 
 jerin.ja...@caviumnetworks.com wrote:
On Wed, May 20, 2015 at 12:25:12AM +0200, Ola Liljedahl wrote:
On 19 May 2015 at 15:34, Jacob,  Jerin 
 jerin.ja...@caviumnetworks.com wrote:
 Ola,

 Is there any specific reason for the following check in the timer
 validation test?

 diff --git a/test/validation/odp_timer.c
 b/test/validation/odp_timer.c
 index 554b353..724026e 100644
 --- a/test/validation/odp_timer.c
 +++ b/test/validation/odp_timer.c
 @@ -260,7 +260,7 @@ static void handle_tmo(odp_event_t ev,
 bool stale, uint64_t prev_tick)

 	if (ttp != NULL) {
 		/* Internal error */
 -		CU_ASSERT_FATAL(ttp->ev == ODP_EVENT_INVALID);
 +		CU_ASSERT_FATAL(ttp->ev != ODP_EVENT_INVALID);
 		ttp->ev = ev;
 	}
  }

 AFAIU, it should be CU_ASSERT_FATAL(ttp->ev != ODP_EVENT_INVALID), as
 tt[i].ev = odp_timeout_to_event(odp_timeout_alloc(tbp)) is
 specified while preparing all timers.
Yes, the timers are still inactive and the timeout event is stored in
the 'ev' member.
   
handle_timeout() is called for received timeouts (timer has
 expired).
In that case, the corresponding 'ev' member should not contain
 any
timeout event.
   

 Am I missing something in the timer specification?
Or the timer specification is missing something?
   
odp_timer_set_abs(tt[i].tim, tck, &tt[i].ev); (line 309) is supposed
to grab the timeout event (on success) and clear the variable
(write ODP_TIMEOUT_INVALID); that's why the timeout is passed by
reference (&tt[i].ev).
   
Possibly this is not specified clearly enough in timer.h:
 * @param[in,out] tmo_ev  Reference to an event variable that points to
 * timeout event or NULL to reuse the existing timeout event. Any existing
 * timeout event that is replaced by a successful set operation will be
 * returned here.
   
The new timeout event is read from *tmo_ev. The old timeout
 event (if
timer was active) or ODP_TIMEOUT_INVALID (if timer was inactive)
 is
stored in *tmo_ev. I hope this is at least clear in the reference
implementation.
   
We are on the same page, except for the last notes.
IMO, linux-generic timer implementation details leaked into
creating the test case.
   Well I don't agree and I hope I can convince you.
  
   
AFAIU, *tmo_ev should have the event that was used for _arming_ the
timer, so that the application can do some lookup after receiving the
event through a queue or something similar.
What is the point of providing ODP_TIMEOUT_INVALID back to the
application? What is the use of it for the application?
   It is possible to set an already active timer (which then is already
   associated with a timeout). If the user specifies a new timeout, the
   old timeout must be returned to the user (because all alloc and free
   of timeouts is the responsibility of the user). So any old timeout
   (for an already active timer) is returned in *tmo_ev. But it is
   possible that the timer has already expired (and the timeout been
   delivered) or wasn't active to start with. We want the application to
   be able to differentiate between these two scenarios and we achieve
   this by updating *tmo_ev accordingly. When the timer_set call returns,
   if *tmo_ev != ODP_EVENT_INVALID, a timeout has been returned and the
   application needs to do something with it. If *tmo_ev ==
   ODP_EVENT_INVALID, no timeout was returned.
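
   In application code the pattern would be something like this (a sketch):

       odp_event_t ev = odp_timeout_to_event(odp_timeout_alloc(tmo_pool));

       /* On success the new timeout is consumed and *tmo_ev is overwritten
        * with any old timeout, or ODP_EVENT_INVALID if none existed. */
       if (odp_timer_set_abs(tim, tick, &ev) == ODP_TIMER_SUCCESS &&
           ev != ODP_EVENT_INVALID)
               odp_timeout_free(odp_timeout_from_event(ev)); /* old one */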
  
  
   Just to understand the use case: what is the application going to do
 with the returned *tmo_ev
   if the timer is active and it returned the associated timeout?
  Either the application specified a new timeout in the timer_set call
  and it is that timeout which will be delivered upon timer expiration.
  If a timeout is returned (the old timeout for an already active
  timer), the application should free it or re-use it.
 
   It can't free it, as that will cause a double free when it comes back in
   the app main loop (it will have odp_timeout_free() there).
  If a timeout is returned in *tmo_ev then it is not the same timeout.
  Old vs. new.
 
  
   And the application can't use the returned associated timeout for a
 long time -
   what if the event is delivered and freed in the main loop?
   Typical main loop application
   processing will be: check the event type, process it and free the
 resources.
  
   Is this scheme a replacement for an API like odp_timer_active(), to
 find out whether the timer is active 

Re: [lng-odp] [PATCHv6 5/5] ipc: example app

2015-05-22 Thread Maxim Uvarov

On 05/22/15 13:32, Ciprian Barbu wrote:

On Thu, May 21, 2015 at 6:32 PM, Maxim Uvarov maxim.uva...@linaro.org wrote:

Simple example app creates one packet i/o to an external interface
and one ipc pktio to another process. Then it transfers packets from the
external interface to the other process and back through the ipc queue.

Signed-off-by: Maxim Uvarov maxim.uva...@linaro.org
---
  configure.ac|   1 +
  example/Makefile.am |   2 +-
  example/ipc/.gitignore  |   1 +
  example/ipc/Makefile.am |   7 +
  example/ipc/odp_ipc.c   | 445 
  5 files changed, 455 insertions(+), 1 deletion(-)
  create mode 100644 example/ipc/.gitignore
  create mode 100644 example/ipc/Makefile.am
  create mode 100644 example/ipc/odp_ipc.c

diff --git a/configure.ac b/configure.ac
index d20bad2..1ceb922 100644
--- a/configure.ac
+++ b/configure.ac
@@ -274,6 +274,7 @@ AC_CONFIG_FILES([Makefile
  example/Makefile
  example/classifier/Makefile
  example/generator/Makefile
+example/ipc/Makefile
  example/ipsec/Makefile
  example/packet/Makefile
  example/timer/Makefile
diff --git a/example/Makefile.am b/example/Makefile.am
index 353f397..506963f 100644
--- a/example/Makefile.am
+++ b/example/Makefile.am
@@ -1 +1 @@
-SUBDIRS = classifier generator ipsec packet timer
+SUBDIRS = classifier generator ipc ipsec packet timer
diff --git a/example/ipc/.gitignore b/example/ipc/.gitignore
new file mode 100644
index 000..963d99d
--- /dev/null
+++ b/example/ipc/.gitignore
@@ -0,0 +1 @@
+odp_ipc
diff --git a/example/ipc/Makefile.am b/example/ipc/Makefile.am
new file mode 100644
index 000..3da9549
--- /dev/null
+++ b/example/ipc/Makefile.am
@@ -0,0 +1,7 @@
+include $(top_srcdir)/example/Makefile.inc
+
+bin_PROGRAMS = odp_ipc
+odp_ipc_LDFLAGS = $(AM_LDFLAGS) -static
+odp_ipc_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
+
+dist_odp_ipc_SOURCES = odp_ipc.c
diff --git a/example/ipc/odp_ipc.c b/example/ipc/odp_ipc.c
new file mode 100644
index 000..1120467
--- /dev/null
+++ b/example/ipc/odp_ipc.c
@@ -0,0 +1,445 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+ * @file
+ *
+ * @example odp_ipc.c  ODP IPC test application.
+ */
+
+#include stdlib.h
+#include string.h
+#include getopt.h
+#include unistd.h
+
+#include example_debug.h
+
+#include odp.h
+#include odp/helper/linux.h
+
+/** @def SHM_PKT_POOL_SIZE
+ * @brief Size of the shared memory block
+ */
+#define SHM_PKT_POOL_SIZE  (512 * 2048)
+
+/** @def SHM_PKT_POOL_BUF_SIZE
+ * @brief Buffer size of the packet pool buffer
+ */
+#define SHM_PKT_POOL_BUF_SIZE  1856
+
+/** @def MAX_PKT_BURST
+ * @brief Maximum number of packet bursts
+ */
+#define MAX_PKT_BURST  16
+
+/** Get rid of path in filename - only for unix-type paths using '/' */
+#define NO_PATH(file_name) (strrchr((file_name), '/') ? \
+   strrchr((file_name), '/') + 1 : (file_name))
+
+/** Application argument */
+static char *pktio_name;
+
+/* helper funcs */
+static void parse_args(int argc, char *argv[]);
+static void print_info(char *progname);
+static void usage(char *progname);
+
+static void busy_sleep(int sec)
+{
+   uint64_t start_cycle;
+   uint64_t cycle;
+   uint64_t diff;
+   uint64_t wait;
+
+   wait = odp_time_ns_to_cycles(sec * ODP_TIME_SEC);
+
+   start_cycle = odp_time_cycles();
+   while (1) {
+   cycle = odp_time_cycles();
+   diff  = odp_time_diff_cycles(start_cycle, cycle);
+		if (wait < diff)
+   break;
+   }
+}
+
+/**
+ * Create a pktio handle.
+ *
+ * @param dev Name of device to open
+ * @param pool Pool to associate with device for packet RX/TX
+ *
+ * @return The handle of the created pktio object.
+ * @retval ODP_PKTIO_INVALID if the create fails.
+ */
+static odp_pktio_t create_pktio(const char *dev, odp_pool_t pool)
+{
+   odp_pktio_t pktio;
+   odp_pktio_t ipc_pktio;
+
+   /* Open a packet IO instance */
+   pktio = odp_pktio_open(dev, pool);
+   if (pktio == ODP_PKTIO_INVALID)
+		EXAMPLE_ABORT("Error: pktio create failed for %s\n", dev);
+
+	printf("pid: %d, create IPC pktio\n", getpid());
+	ipc_pktio = odp_pktio_open("ipc_pktio", pool);
+	if (ipc_pktio == ODP_PKTIO_INVALID)
+		EXAMPLE_ABORT("Error: ipc pktio create failed.\n");
+
+   return pktio;
+}
+
+/**
+ * Packet IO loopback worker thread using bursts from/to IO resources
+ *
+ * @param arg  thread arguments of type 'thread_args_t *'
+ */
+static void *pktio_run_loop(odp_pool_t pool)
+{
+   int thr;
+   odp_pktio_t pktio;
+   int pkts;
+   odp_packet_t pkt_tbl[MAX_PKT_BURST];
+   odp_pktio_t ipc_pktio;
+
+   thr = odp_thread_id();
+
+   pktio = odp_pktio_lookup(pktio_name);
+   if (pktio == 

Re: [lng-odp] [PATCHv6 3/5] ipc: pool_create implement _ODP_SHM_NULL_LOCAL for linux-generic

2015-05-22 Thread Ciprian Barbu
On Fri, May 22, 2015 at 2:04 PM, Maxim Uvarov maxim.uva...@linaro.org wrote:
 On 05/22/15 13:25, Ciprian Barbu wrote:

 On Thu, May 21, 2015 at 6:32 PM, Maxim Uvarov maxim.uva...@linaro.org
 wrote:

  On init ODP creates odp_sched_pool. Because we cannot
  modify the API to add a new parameter to odp_pool_param_t, this
  pool should not be shared between different processes. To
  make that work, add a special shm value provided to the pool saying that
  this pool should be in local memory only.

 This description is not very clear.

 Signed-off-by: Maxim Uvarov maxim.uva...@linaro.org
 ---
   platform/linux-generic/include/odp/plat/shared_memory_types.h | 4 
   platform/linux-generic/odp_pool.c | 9
 +++--
   platform/linux-generic/odp_schedule.c | 2 +-
   3 files changed, 12 insertions(+), 3 deletions(-)

 diff --git
 a/platform/linux-generic/include/odp/plat/shared_memory_types.h
 b/platform/linux-generic/include/odp/plat/shared_memory_types.h
 index 4be7356..908bb2e 100644
 --- a/platform/linux-generic/include/odp/plat/shared_memory_types.h
 +++ b/platform/linux-generic/include/odp/plat/shared_memory_types.h
 @@ -30,6 +30,10 @@ typedef ODP_HANDLE_T(odp_shm_t);

   #define ODP_SHM_INVALID _odp_cast_scalar(odp_shm_t, 0)
   #define ODP_SHM_NULL ODP_SHM_INVALID
 +/** NULL shared memory but do not create IPC object for it.
  + *  Platform specific flag.
 + */
 +#define _ODP_SHM_NULL_LOCAL _odp_cast_scalar(odp_shm_t, 0xULL -
 1)

 I don't think we should have this visible to ODP applications, because
 it's a workaround.

   /** Get printable format of odp_shm_t */
   static inline uint64_t odp_shm_to_u64(odp_shm_t hdl)
 diff --git a/platform/linux-generic/odp_pool.c
 b/platform/linux-generic/odp_pool.c
 index cd2c449..b2f30dc 100644
 --- a/platform/linux-generic/odp_pool.c
 +++ b/platform/linux-generic/odp_pool.c
 @@ -151,10 +151,14 @@ odp_pool_t odp_pool_create(const char *name,
  odp_pool_t pool_hdl = ODP_POOL_INVALID;
  pool_entry_t *pool;
  uint32_t i, headroom = 0, tailroom = 0;
 +   uint32_t shm_flags = 0;

  if (params == NULL)
  return ODP_POOL_INVALID;

 +   if (shm == ODP_SHM_NULL)
 +   shm_flags = ODP_SHM_PROC;
 +
  /* Default size and align for timeouts */
  	if (params->type == ODP_POOL_TIMEOUT) {
  		params->buf.size  = 0; /* tmo.__res1 */
 @@ -289,10 +293,11 @@ odp_pool_t odp_pool_create(const char *name,
mdata_size +
udata_size);

 -   if (shm == ODP_SHM_NULL) {
 +   if (shm == ODP_SHM_NULL || shm == _ODP_SHM_NULL_LOCAL) {
  	shm = odp_shm_reserve(pool->s.name,
  			      pool->s.pool_size,
 - ODP_PAGE_SIZE, 0);
 + ODP_PAGE_SIZE,
 + shm_flags);
  if (shm == ODP_SHM_INVALID) {
  		POOL_UNLOCK(pool->s.lock);
  return ODP_POOL_INVALID;
 diff --git a/platform/linux-generic/odp_schedule.c
 b/platform/linux-generic/odp_schedule.c
 index a63f97a..07422bd 100644
 --- a/platform/linux-generic/odp_schedule.c
 +++ b/platform/linux-generic/odp_schedule.c
 @@ -129,7 +129,7 @@ int odp_schedule_init_global(void)
  params.buf.num   = NUM_SCHED_CMD;
  params.type  = ODP_POOL_BUFFER;

 -	pool = odp_pool_create("odp_sched_pool", ODP_SHM_NULL, params);
 +	pool = odp_pool_create("odp_sched_pool", _ODP_SHM_NULL_LOCAL,
 params);

 I think you're going about this the wrong way. ODP_SHM_NULL is a
  convenient way to let the implementation choose the memory where it
 will create buffers / packets / tmo. It was introduced because most
 hardware platforms don't use shared memory, since they have their own
 buffer management. But linux-generic can only use shared memory, and
 it's ok to allocate an shm here because it's inside the
 implementation, there is no visibility towards the application. So I
  think you should allocate the shm here with the right flags and
 remove the _ODP_SHM_NULL_LOCAL hack.


  We were planning to remove the shm argument from pool_create() in the
  future. For now it will work, but later
  I'm not sure what the good solution for that is.

So why try to work around it when we haven't even decided when to
remove the shm param from odp_pktio_open?


 Maxim.


  if (pool == ODP_POOL_INVALID) {
  		ODP_ERR("Schedule init: Pool create failed.\n");
 --
 1.9.1

 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp


___
lng-odp mailing list
lng-odp@lists.linaro.org

Re: [lng-odp] [PATCHv6 3/5] ipc: pool_create implement _ODP_SHM_NULL_LOCAL for linux-generic

2015-05-22 Thread Maxim Uvarov

On 05/22/15 14:09, Ciprian Barbu wrote:

On Fri, May 22, 2015 at 2:04 PM, Maxim Uvarov maxim.uva...@linaro.org wrote:

On 05/22/15 13:25, Ciprian Barbu wrote:

On Thu, May 21, 2015 at 6:32 PM, Maxim Uvarov maxim.uva...@linaro.org
wrote:

On init ODP creates odp_sched_pool. Because we cannot
modify the API to add a new parameter to odp_pool_param_t, this
pool should not be shared between different processes. To
make that work, add a special shm value provided to the pool saying that
this pool should be in local memory only.

This description is not very clear.


Signed-off-by: Maxim Uvarov maxim.uva...@linaro.org
---
   platform/linux-generic/include/odp/plat/shared_memory_types.h | 4 
   platform/linux-generic/odp_pool.c | 9
+++--
   platform/linux-generic/odp_schedule.c | 2 +-
   3 files changed, 12 insertions(+), 3 deletions(-)

diff --git
a/platform/linux-generic/include/odp/plat/shared_memory_types.h
b/platform/linux-generic/include/odp/plat/shared_memory_types.h
index 4be7356..908bb2e 100644
--- a/platform/linux-generic/include/odp/plat/shared_memory_types.h
+++ b/platform/linux-generic/include/odp/plat/shared_memory_types.h
@@ -30,6 +30,10 @@ typedef ODP_HANDLE_T(odp_shm_t);

   #define ODP_SHM_INVALID _odp_cast_scalar(odp_shm_t, 0)
   #define ODP_SHM_NULL ODP_SHM_INVALID
+/** NULL shared memory but do not create IPC object for it.
+ *  Platform specific flag.
+ */
+#define _ODP_SHM_NULL_LOCAL _odp_cast_scalar(odp_shm_t, 0xULL -
1)

I don't think we should have this visible to ODP applications, because
it's a workaround.


   /** Get printable format of odp_shm_t */
   static inline uint64_t odp_shm_to_u64(odp_shm_t hdl)
diff --git a/platform/linux-generic/odp_pool.c
b/platform/linux-generic/odp_pool.c
index cd2c449..b2f30dc 100644
--- a/platform/linux-generic/odp_pool.c
+++ b/platform/linux-generic/odp_pool.c
@@ -151,10 +151,14 @@ odp_pool_t odp_pool_create(const char *name,
  odp_pool_t pool_hdl = ODP_POOL_INVALID;
  pool_entry_t *pool;
  uint32_t i, headroom = 0, tailroom = 0;
+   uint32_t shm_flags = 0;

  if (params == NULL)
  return ODP_POOL_INVALID;

+   if (shm == ODP_SHM_NULL)
+   shm_flags = ODP_SHM_PROC;
+
  /* Default size and align for timeouts */
 	if (params->type == ODP_POOL_TIMEOUT) {
 		params->buf.size  = 0; /* tmo.__res1 */
@@ -289,10 +293,11 @@ odp_pool_t odp_pool_create(const char *name,
mdata_size +
udata_size);

-   if (shm == ODP_SHM_NULL) {
+   if (shm == ODP_SHM_NULL || shm == _ODP_SHM_NULL_LOCAL) {
 	shm = odp_shm_reserve(pool->s.name,
 			      pool->s.pool_size,
- ODP_PAGE_SIZE, 0);
+ ODP_PAGE_SIZE,
+ shm_flags);
  if (shm == ODP_SHM_INVALID) {
 		POOL_UNLOCK(pool->s.lock);
  return ODP_POOL_INVALID;
diff --git a/platform/linux-generic/odp_schedule.c
b/platform/linux-generic/odp_schedule.c
index a63f97a..07422bd 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -129,7 +129,7 @@ int odp_schedule_init_global(void)
  params.buf.num   = NUM_SCHED_CMD;
  params.type  = ODP_POOL_BUFFER;

-	pool = odp_pool_create("odp_sched_pool", ODP_SHM_NULL, params);
+	pool = odp_pool_create("odp_sched_pool", _ODP_SHM_NULL_LOCAL,
params);

I think you're going about this the wrong way. ODP_SHM_NULL is a
convenient way to let the implementation choose the memory where it
will create buffers / packets / tmo. It was introduced because most
hardware platforms don't use shared memory, since they have their own
buffer management. But linux-generic can only use shared memory, and
it's ok to allocate an shm here because it's inside the
implementation, there is no visibility towards the application. So I
think you should allocate the shm here with the right flags and
remove the _ODP_SHM_NULL_LOCAL hack.


We were planning to remove the shm argument from pool_create() in the future.
For now it will work, but later
I'm not sure what the good solution for that is.

So why try to work around it when we haven't even decided when to
remove the shm param from odp_pktio_open?

From odp_pool_create(). But it's a good suggestion, I will do it.

Maxim.


Maxim.



  if (pool == ODP_POOL_INVALID) {
  ODP_ERR("Schedule init: Pool create failed.\n");
--
1.9.1


Re: [lng-odp] Makefile problem on v.1.1.0.0 tag

2015-05-22 Thread Maxim Uvarov

On 05/22/15 14:13, Radu-Andrei Bulie wrote:


Hi,

I tried to make a build based on the aforementioned tag (after doing 
the normal bootstrap sequence)


but an error was thrown:

make[2]: Entering directory /path/odp/linux-generic

Makefile:1073: *** missing separator.  Stop.

After some investigation I found that if I do the following (inserting 
a tab) in platform/Makefile.inc:


diff --git a/platform/Makefile.inc b/platform/Makefile.inc

index f232daa..a153b93 100644

--- a/platform/Makefile.inc

+++ b/platform/Makefile.inc

@@ -12,6 +12,6 @@ lib_LTLIBRARIES = $(LIB)/libodp.la

AM_LDFLAGS += -version-number '$(ODP_LIBSO_VERSION)'

-GIT_DESC !=$(top_srcdir)/scripts/git_hash.sh

+GIT_DESC !=$(top_srcdir)/scripts/git_hash.sh

AM_CFLAGS += -DGIT_HASH=$(GIT_DESC)

AM_CFLAGS += -DPLATFORM=${with_platform}

the compilation succeeds.

Can you confirm this error?


Old Makefile. Already fixed with:
a9cc0fc platform: Makefile.inc: use `` instead of != for compatibility 
with older versions of Make


Maxim.


Regards,

Radu





Re: [lng-odp] [RFC] Add ipc.h

2015-05-22 Thread Ola Liljedahl
On 22 May 2015 at 12:13, Alexandru Badicioiu alexandru.badici...@linaro.org
 wrote:



 On 22 May 2015 at 12:10, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 22 May 2015 at 08:14, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:



 On 22 May 2015 at 00:09, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 21 May 2015 at 17:45, Maxim Uvarov maxim.uva...@linaro.org wrote:

 From the rfc 3549 netlink looks like good protocol to communicate
 between data plane and control plane. And messages are defined by that
 protocol also. At least we should do something the same.

 Netlink seems limited to the specific functionality already present in
 the Linux kernel. An ODP IPC/message passing mechanism must be extensible
 and support user-defined messages. There's no reason for ODP MBUS to impose
 any message format.

 Netlink is extensively implemented in Linux kernel but the RFC
 explicitly doesn't limit it to this scope.
 Netlink messages have a  header , defined by Netlink protocol and a
 payload which contains user-defined messages in TLV format (e.g - RTM_XXX
 messages for routing control). Doesn't TLV format suffice for the need of
 ODP applications?

 Why should we impose any message format on ODP applications?

 A message format, in this case TLV, seems to be adequate for the purpose
 of dataplane - control plane communication.

Possibly it is adequate for *some* use cases. But for *all* use cases?


 I see it more like a useful thing rather than a constraint.

Applications can, if they so choose, use the TLV message format. But why
should this be imposed on applications by ODP MBUS? Why should the MBUS API
or implementation care about the message format of the payload?

Does Linux care about the format of data in your files?
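For concreteness, this is the kind of TLV framing an application *could*
put in its payload if it wants to; a sketch only, loosely mirroring
struct nlattr from linux/netlink.h, and nothing the MBUS API would
mandate:

    #include <stdint.h>

    /* Application-defined TLV framing inside an otherwise opaque payload. */
    struct app_tlv {
            uint16_t len;     /* length of value[] in bytes */
            uint16_t type;    /* application-defined type code */
            uint8_t  value[]; /* the attribute data itself */
    };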


Isn't dataplane-control plane communication the purpose of ODP MBUS?

Yes. But this is an open-ended definition. We are not limiting what ODP can
be used for so we have no idea what control and dataplane would mean in
every possible case where ODP is used.


   Or is it more general?

 An ODP MBUS implementation could perhaps use Netlink as the mechanism to
 connect to other endpoints and transfer messages in both directions. By not
 specifying irrelevant details in the MBUS API, we give more freedom to
 implementations. I doubt Netlink will always be available or will be the
 best choice on all platforms where people are trying to implement ODP.

 You see Linux Netlink as a possible implementation for ODP MBUS, I see
 Netlink as the protocol for ODP MBUS. ODP implementation must provide the
 Netlink protocol, applications will use the MBUS API to build and send
 messages (libnl is an example). Particular implementations can use Linux
 kernel Netlink , others can do a complete userspace implementation even
 with HW acceleration (DMA copy for example).

So how do users benefit from forcing all of them to use Netlink message
formats? And how do the implementations benefit?

If you are introducing limitations, there has to be good reasons for them.
I have seen none so far.



 Since the ODP implementation will control the definition of the message
 event type, it can reserve memory for necessary (implementation specific)
 headers preceding the user-defined payload.
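A sketch of what such reserved, implementation-specific space could look
like; the layout below is invented purely for illustration:

    #include <stdint.h>

    /* Hypothetical internal layout of a message event. */
    typedef struct {
            uint64_t src_addr;  /* internal source endpoint address */
            uint32_t length;    /* payload length in bytes */
            uint8_t  payload[]; /* user-defined content starts here */
    } impl_msg_hdr_t;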



 Any (set of) applications can model their message formats on Netlink.

 I don't understand how Netlink can be used to communicate between (any
 two) two applications. Please enlighten me.

 Netlink is not limited to user-kernel communication, only some of the
 current services like RTM_XXX for routing configuration. For example ,
 Generic Netlink allows users in both kernel and userspace -
 https://lwn.net/Articles/208755/:

 When looking at figure #1 it is important to note that any Generic Netlink
 user can communicate with any other user over the bus using the same API
 regardless of where the user resides in relation to the kernel/userspace
 boundary.

  Another claim but no description of or examples of how this is actually
  accomplished.
  All the examples in this article are from the kernel perspective. Not
  very useful for a user-to-user messaging mechanism.

 This is accomplished by the means of socket communication. Netlink
 protocol works over sockets like any other protocol using sockets
 (UDP/TCP). AF_NETLINK address has a pid member which identifies the
 destination process (http://man7.org/linux/man-pages/man7/netlink.7.html
 - Address formats paragraph).
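A minimal sketch of that addressing from userspace (per netlink(7);
error handling kept to a minimum):

    #include <sys/socket.h>
    #include <linux/netlink.h>
    #include <string.h>
    #include <unistd.h>

    static int open_netlink_endpoint(void)
    {
            struct sockaddr_nl addr;
            int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);

            if (fd < 0)
                    return -1;

            memset(&addr, 0, sizeof(addr));
            addr.nl_family = AF_NETLINK;
            addr.nl_pid = getpid(); /* this endpoint's unicast address */

            if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }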

Sockets, my favourite API. Not.






 -- Ola




 Maxim.

 On 21 May 2015 at 17:46, Ola Liljedahl ola.liljed...@linaro.org
 wrote:

 On 21 May 2015 at 15:56, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:

 I got the impression that ODP MBUS API would define a transport
 protocol/API between an ODP

 No the MBUS API is just an API for message passing (think of the OSE
 IPC API) and doesn't specify use cases or content. Just like the ODP 
 packet
 API doesn't specify what the content in a packet means or the format of 
 the
 

Re: [lng-odp] [PATCHv6 5/5] ipc: example app

2015-05-22 Thread Ciprian Barbu
On Fri, May 22, 2015 at 1:58 PM, Maxim Uvarov maxim.uva...@linaro.org wrote:
 On 05/22/15 13:32, Ciprian Barbu wrote:

 On Thu, May 21, 2015 at 6:32 PM, Maxim Uvarov maxim.uva...@linaro.org
 wrote:

 Simple example app creates one packet i/o to external interface
 and one ipc pktio to other process. Then transfers packets from the
 external interface to the other process and back through the ipc queue.

 Signed-off-by: Maxim Uvarov maxim.uva...@linaro.org
 ---
   configure.ac|   1 +
   example/Makefile.am |   2 +-
   example/ipc/.gitignore  |   1 +
   example/ipc/Makefile.am |   7 +
   example/ipc/odp_ipc.c   | 445
 
   5 files changed, 455 insertions(+), 1 deletion(-)
   create mode 100644 example/ipc/.gitignore
   create mode 100644 example/ipc/Makefile.am
   create mode 100644 example/ipc/odp_ipc.c

 diff --git a/configure.ac b/configure.ac
 index d20bad2..1ceb922 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -274,6 +274,7 @@ AC_CONFIG_FILES([Makefile
   example/Makefile
   example/classifier/Makefile
   example/generator/Makefile
 +example/ipc/Makefile
   example/ipsec/Makefile
   example/packet/Makefile
   example/timer/Makefile
 diff --git a/example/Makefile.am b/example/Makefile.am
 index 353f397..506963f 100644
 --- a/example/Makefile.am
 +++ b/example/Makefile.am
 @@ -1 +1 @@
 -SUBDIRS = classifier generator ipsec packet timer
 +SUBDIRS = classifier generator ipc ipsec packet timer
 diff --git a/example/ipc/.gitignore b/example/ipc/.gitignore
 new file mode 100644
 index 000..963d99d
 --- /dev/null
 +++ b/example/ipc/.gitignore
 @@ -0,0 +1 @@
 +odp_ipc
 diff --git a/example/ipc/Makefile.am b/example/ipc/Makefile.am
 new file mode 100644
 index 000..3da9549
 --- /dev/null
 +++ b/example/ipc/Makefile.am
 @@ -0,0 +1,7 @@
 +include $(top_srcdir)/example/Makefile.inc
 +
 +bin_PROGRAMS = odp_ipc
 +odp_ipc_LDFLAGS = $(AM_LDFLAGS) -static
 +odp_ipc_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
 +
 +dist_odp_ipc_SOURCES = odp_ipc.c
 diff --git a/example/ipc/odp_ipc.c b/example/ipc/odp_ipc.c
 new file mode 100644
 index 000..1120467
 --- /dev/null
 +++ b/example/ipc/odp_ipc.c
 @@ -0,0 +1,445 @@
 +/* Copyright (c) 2015, Linaro Limited
 + * All rights reserved.
 + *
 + * SPDX-License-Identifier: BSD-3-Clause
 + */
 +
 +/**
 + * @file
 + *
 + * @example odp_ipc.c  ODP IPC test application.
 + */
 +
 +#include <stdlib.h>
 +#include <string.h>
 +#include <getopt.h>
 +#include <unistd.h>
 +
 +#include <example_debug.h>
 +
 +#include <odp.h>
 +#include <odp/helper/linux.h>
 +
 +/** @def SHM_PKT_POOL_SIZE
 + * @brief Size of the shared memory block
 + */
 +#define SHM_PKT_POOL_SIZE  (512 * 2048)
 +
 +/** @def SHM_PKT_POOL_BUF_SIZE
 + * @brief Buffer size of the packet pool buffer
 + */
 +#define SHM_PKT_POOL_BUF_SIZE  1856
 +
 +/** @def MAX_PKT_BURST
 + * @brief Maximum number of packet bursts
 + */
 +#define MAX_PKT_BURST  16
 +
 +/** Get rid of path in filename - only for unix-type paths using '/' */
 +#define NO_PATH(file_name) (strrchr((file_name), '/') ? \
 +   strrchr((file_name), '/') + 1 : (file_name))
 +
 +/** Application argument */
 +static char *pktio_name;
 +
 +/* helper funcs */
 +static void parse_args(int argc, char *argv[]);
 +static void print_info(char *progname);
 +static void usage(char *progname);
 +
 +static void busy_sleep(int sec)
 +{
 +   uint64_t start_cycle;
 +   uint64_t cycle;
 +   uint64_t diff;
 +   uint64_t wait;
 +
 +   wait = odp_time_ns_to_cycles(sec * ODP_TIME_SEC);
 +
 +   start_cycle = odp_time_cycles();
 +   while (1) {
 +   cycle = odp_time_cycles();
 +   diff  = odp_time_diff_cycles(start_cycle, cycle);
 +   if (wait < diff)
 +   break;
 +   }
 +}
 +
 +/**
 + * Create a pktio handle.
 + *
 + * @param dev Name of device to open
 + * @param pool Pool to associate with device for packet RX/TX
 + *
 + * @return The handle of the created pktio object.
 + * @retval ODP_PKTIO_INVALID if the create fails.
 + */
 +static odp_pktio_t create_pktio(const char *dev, odp_pool_t pool)
 +{
 +   odp_pktio_t pktio;
 +   odp_pktio_t ipc_pktio;
 +
 +   /* Open a packet IO instance */
 +   pktio = odp_pktio_open(dev, pool);
 +   if (pktio == ODP_PKTIO_INVALID)
 +   EXAMPLE_ABORT("Error: pktio create failed for %s\n", dev);
 +
 +   printf("pid: %d, create IPC pktio\n", getpid());
 +   ipc_pktio = odp_pktio_open("ipc_pktio", pool);
 +   if (ipc_pktio == ODP_PKTIO_INVALID)
 +   EXAMPLE_ABORT("Error: ipc pktio create failed.\n");
 +
 +   return pktio;
 +}
 +
 +/**
 + * Packet IO loopback worker thread using bursts from/to IO resources
 + *
 + * @param arg  thread arguments of type 'thread_args_t *'
 + */
 +static void *pktio_run_loop(odp_pool_t pool)
 +{
 +   

[lng-odp] [PATCH] validation: queue: schedule parameters are not valid for poll type queue

2015-05-22 Thread Jerin Jacob
Signed-off-by: Jerin Jacob jerin.ja...@caviumnetworks.com
---
 test/validation/odp_queue.c | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/test/validation/odp_queue.c b/test/validation/odp_queue.c
index 5123939..01a704c 100644
--- a/test/validation/odp_queue.c
+++ b/test/validation/odp_queue.c
@@ -45,24 +45,18 @@ static void test_odp_queue_sunnyday(void)
odp_buffer_t buf;
odp_event_t ev;
odp_pool_t msg_pool;
-   odp_queue_param_t param;
odp_event_t *pev_tmp;
int i, deq_ret, ret;
int nr_deq_entries = 0;
int max_iteration = CONFIG_MAX_ITERATION;
void *prtn = NULL;
 
-   memset(&param, 0, sizeof(param));
-   param.sched.sync  = ODP_SCHED_SYNC_NONE;
-
queue_creat_id = odp_queue_create("test_queue",
- ODP_QUEUE_TYPE_POLL, &param);
+ ODP_QUEUE_TYPE_POLL, NULL);
CU_ASSERT(ODP_QUEUE_INVALID != queue_creat_id);
 
CU_ASSERT_EQUAL(ODP_QUEUE_TYPE_POLL,
odp_queue_type(queue_creat_id));
-   CU_ASSERT_EQUAL(ODP_SCHED_SYNC_NONE,
-   odp_queue_sched_type(queue_creat_id));
 
queue_id = odp_queue_lookup("test_queue");
CU_ASSERT_EQUAL(queue_creat_id, queue_id);
-- 
2.1.0
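For context, a sketch of the distinction the patch relies on (assuming
the usual test includes): scheduling parameters belong to
ODP_QUEUE_TYPE_SCHED queues, while poll queues can simply pass NULL:

    odp_queue_param_t param;

    memset(&param, 0, sizeof(param));
    param.sched.sync = ODP_SCHED_SYNC_NONE;

    /* sched params are meaningful here... */
    odp_queue_t sq = odp_queue_create("sched_q", ODP_QUEUE_TYPE_SCHED, &param);
    /* ...but not for a poll queue */
    odp_queue_t pq = odp_queue_create("poll_q", ODP_QUEUE_TYPE_POLL, NULL);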



Re: [lng-odp] [PATCHv6 5/5] ipc: example app

2015-05-22 Thread Ciprian Barbu
On Thu, May 21, 2015 at 6:32 PM, Maxim Uvarov maxim.uva...@linaro.org wrote:
 Simple example app creates one packet i/o to external interface
 and one ipc pktio to other process. Then transfers packets from the
 external interface to the other process and back through the ipc queue.

 Signed-off-by: Maxim Uvarov maxim.uva...@linaro.org
 ---
  configure.ac|   1 +
  example/Makefile.am |   2 +-
  example/ipc/.gitignore  |   1 +
  example/ipc/Makefile.am |   7 +
  example/ipc/odp_ipc.c   | 445 
 
  5 files changed, 455 insertions(+), 1 deletion(-)
  create mode 100644 example/ipc/.gitignore
  create mode 100644 example/ipc/Makefile.am
  create mode 100644 example/ipc/odp_ipc.c

 diff --git a/configure.ac b/configure.ac
 index d20bad2..1ceb922 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -274,6 +274,7 @@ AC_CONFIG_FILES([Makefile
  example/Makefile
  example/classifier/Makefile
  example/generator/Makefile
 +example/ipc/Makefile
  example/ipsec/Makefile
  example/packet/Makefile
  example/timer/Makefile
 diff --git a/example/Makefile.am b/example/Makefile.am
 index 353f397..506963f 100644
 --- a/example/Makefile.am
 +++ b/example/Makefile.am
 @@ -1 +1 @@
 -SUBDIRS = classifier generator ipsec packet timer
 +SUBDIRS = classifier generator ipc ipsec packet timer
 diff --git a/example/ipc/.gitignore b/example/ipc/.gitignore
 new file mode 100644
 index 000..963d99d
 --- /dev/null
 +++ b/example/ipc/.gitignore
 @@ -0,0 +1 @@
 +odp_ipc
 diff --git a/example/ipc/Makefile.am b/example/ipc/Makefile.am
 new file mode 100644
 index 000..3da9549
 --- /dev/null
 +++ b/example/ipc/Makefile.am
 @@ -0,0 +1,7 @@
 +include $(top_srcdir)/example/Makefile.inc
 +
 +bin_PROGRAMS = odp_ipc
 +odp_ipc_LDFLAGS = $(AM_LDFLAGS) -static
 +odp_ipc_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
 +
 +dist_odp_ipc_SOURCES = odp_ipc.c
 diff --git a/example/ipc/odp_ipc.c b/example/ipc/odp_ipc.c
 new file mode 100644
 index 000..1120467
 --- /dev/null
 +++ b/example/ipc/odp_ipc.c
 @@ -0,0 +1,445 @@
 +/* Copyright (c) 2015, Linaro Limited
 + * All rights reserved.
 + *
 + * SPDX-License-Identifier: BSD-3-Clause
 + */
 +
 +/**
 + * @file
 + *
 + * @example odp_ipc.c  ODP IPC test application.
 + */
 +
 +#include <stdlib.h>
 +#include <string.h>
 +#include <getopt.h>
 +#include <unistd.h>
 +
 +#include <example_debug.h>
 +
 +#include <odp.h>
 +#include <odp/helper/linux.h>
 +
 +/** @def SHM_PKT_POOL_SIZE
 + * @brief Size of the shared memory block
 + */
 +#define SHM_PKT_POOL_SIZE  (512 * 2048)
 +
 +/** @def SHM_PKT_POOL_BUF_SIZE
 + * @brief Buffer size of the packet pool buffer
 + */
 +#define SHM_PKT_POOL_BUF_SIZE  1856
 +
 +/** @def MAX_PKT_BURST
 + * @brief Maximum number of packet bursts
 + */
 +#define MAX_PKT_BURST  16
 +
 +/** Get rid of path in filename - only for unix-type paths using '/' */
 +#define NO_PATH(file_name) (strrchr((file_name), '/') ? \
 +   strrchr((file_name), '/') + 1 : (file_name))
 +
 +/** Application argument */
 +static char *pktio_name;
 +
 +/* helper funcs */
 +static void parse_args(int argc, char *argv[]);
 +static void print_info(char *progname);
 +static void usage(char *progname);
 +
 +static void busy_sleep(int sec)
 +{
 +   uint64_t start_cycle;
 +   uint64_t cycle;
 +   uint64_t diff;
 +   uint64_t wait;
 +
 +   wait = odp_time_ns_to_cycles(sec * ODP_TIME_SEC);
 +
 +   start_cycle = odp_time_cycles();
 +   while (1) {
 +   cycle = odp_time_cycles();
 +   diff  = odp_time_diff_cycles(start_cycle, cycle);
 +   if (wait < diff)
 +   break;
 +   }
 +}
 +
 +/**
 + * Create a pktio handle.
 + *
 + * @param dev Name of device to open
 + * @param pool Pool to associate with device for packet RX/TX
 + *
 + * @return The handle of the created pktio object.
 + * @retval ODP_PKTIO_INVALID if the create fails.
 + */
 +static odp_pktio_t create_pktio(const char *dev, odp_pool_t pool)
 +{
 +   odp_pktio_t pktio;
 +   odp_pktio_t ipc_pktio;
 +
 +   /* Open a packet IO instance */
 +   pktio = odp_pktio_open(dev, pool);
 +   if (pktio == ODP_PKTIO_INVALID)
 +   EXAMPLE_ABORT("Error: pktio create failed for %s\n", dev);
 +
 +   printf("pid: %d, create IPC pktio\n", getpid());
 +   ipc_pktio = odp_pktio_open("ipc_pktio", pool);
 +   if (ipc_pktio == ODP_PKTIO_INVALID)
 +   EXAMPLE_ABORT("Error: ipc pktio create failed.\n");
 +
 +   return pktio;
 +}
 +
 +/**
 + * Packet IO loopback worker thread using bursts from/to IO resources
 + *
 + * @param arg  thread arguments of type 'thread_args_t *'
 + */
 +static void *pktio_run_loop(odp_pool_t pool)
 +{
 +   int thr;
 +   odp_pktio_t pktio;
 +   int pkts;
 +   odp_packet_t pkt_tbl[MAX_PKT_BURST];
 +   odp_pktio_t 

Re: [lng-odp] [RFC] Add ipc.h

2015-05-22 Thread Ola Liljedahl
On 21 May 2015 at 11:50, Savolainen, Petri (Nokia - FI/Espoo) 
petri.savolai...@nokia.com wrote:



  -Original Message-
  From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of ext
  Ola Liljedahl
  Sent: Tuesday, May 19, 2015 1:04 AM
  To: lng-odp@lists.linaro.org
  Subject: [lng-odp] [RFC] Add ipc.h
 
  As promised, here is my first attempt at a standalone API for IPC - inter
  process communication in a shared nothing architecture (message passing
  between processes which do not share memory).
 
  Currently all definitions are in the file ipc.h but it is possible to
  break out some message/event related definitions (everything from
  odp_ipc_sender) in a separate file message.h. This would mimic the
  packet_io.h/packet.h separation.
 
  The semantics of message passing is that sending a message to an endpoint
  will always look like it succeeds. The appearance of endpoints is
  explicitly
  notified through user-defined messages specified in the odp_ipc_resolve()
  call. Similarly, the disappearance (e.g. death or otherwise lost
  connection)
  is also explicitly notified through user-defined messages specified in
 the
  odp_ipc_monitor() call. The send call does not fail because the addressed
  endpoints has disappeared.
 
  Messages (from endpoint A to endpoint B) are delivered in order. If
  message
  N sent to an endpoint is delivered, then all messages <N have also been
  delivered. Message delivery does not guarantee actual processing by the

 Ordered is an OK requirement, but "all messages <N have also been delivered"
 means in practice lossless delivery (== re-tries and retransmission
 windows, etc). Lossy vs lossless link should be a configuration option.

 Also what delivered means?

 Message:
  - transmitted successfully over the link ?
  - is now under control of the remote node (post office) ?
  - delivered into application input queue ?
  - has been dequeued from application queue ?


  recipient. End-to-end acknowledgements (using messages) should be used if
  this guarantee is important to the user.
 
  IPC endpoints can be seen as interfaces (taps) to an internal reliable
  multidrop network where each endpoint has a unique address which is only
  valid for the lifetime of the endpoint. I.e. if an endpoint is destroyed
  and then recreated (with the same name), the new endpoint will have a
  new address (eventually endpoints addresses will have to be recycled but
  not for a very long time). Endpoints names do not necessarily have to be
  unique.

 How widely these addresses are unique: inside one VM, multiple VMs under
 the same host, multiple devices on a LAN (VLAN), ...

I have added that the scope is expected to be an OS instance (e.g. VM).



 
  Signed-off-by: Ola Liljedahl ola.liljed...@linaro.org
  ---
  (This document/code contribution attached is provided under the terms of
  agreement LES-LTM-21309)
 


  +/**
  + * Create IPC endpoint
  + *
  + * @param name Name of local IPC endpoint
  + * @param pool Pool for incoming messages
  + *
  + * @return IPC handle on success
  + * @retval ODP_IPC_INVALID on failure and errno set
  + */
  +odp_ipc_t odp_ipc_create(const char *name, odp_pool_t pool);

 This creates (implicitly) the local end point address.

Yes. Does that have to be described?




  +
  +/**
  + * Set the default input queue for an IPC endpoint
  + *
  + * @param ipc   IPC handle
  + * @param queue Queue handle
  + *
  + * @retval  0 on success
  + * @retval <0 on failure
  + */
  +int odp_ipc_inq_setdef(odp_ipc_t ipc, odp_queue_t queue);

 Multiple input queues are likely needed for different priority messages.

I have added priorities (copied from queue.h SCHED priorities) and a
priority parameter to the send() call.

packet_io.h doesn't have any API for associating a list of queues with the
different (packet) priorities so there is no template to follow. I could
invent a new call for doing this on MBUS endpoints.
E.g.
int odp_mbus_inq_set(odp_mbus_t mbus, odp_mbus_prio_t prio, odp_queue_t
queue);
Call once for each priority, I think this is better than having a call
which specifies all queues at once (the number of priorities is
implementation specific).
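A usage sketch of that shape, given an existing odp_mbus_t handle
(queue names are invented; the priority constants are those of the
RFCv2 elsewhere in this digest):

    odp_queue_t hi_q = odp_queue_create("mbus_hi", ODP_QUEUE_TYPE_SCHED, NULL);
    odp_queue_t lo_q = odp_queue_create("mbus_lo", ODP_QUEUE_TYPE_SCHED, NULL);

    odp_mbus_inq_set(mbus, ODP_MBUS_PRIO_HIGHEST, hi_q);
    odp_mbus_inq_set(mbus, ODP_MBUS_PRIO_LOWEST, lo_q);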

I now think that the default queue should be specified when the endpoint is
created. Messages could start pouring in immediately and might have to be
enqueued somewhere (in certain implementations, I did not experience this
problem in my prototype so did not think about it).


  +
  +/**
  + * Resolve endpoint by name
  + *
  + * Look up an existing or future endpoint by name.
  + * When the endpoint exists, return the specified message with the
  endpoint
  + * as the sender.
  + *
  + * @param ipc IPC handle
  + * @param name Name to resolve
  + * @param msg Message to return
  + */
  +void odp_ipc_resolve(odp_ipc_t ipc,
  +  const char *name,
  +  odp_ipc_msg_t msg);

 How widely these names are visible? Inside one VM, multiple VMs under the
 same host, 

[lng-odp] [RFCv2] Add message bus (MBUS) API's

2015-05-22 Thread Ola Liljedahl
Here is my second attempt at a standalone API for MBUS - message passing
based IPC for a shared nothing architecture.

The semantics of message passing is that sending a message to an endpoint
will always look like it succeeds. The appearance of endpoints is explicitly
notified through user-defined messages specified in the odp_mbus_lookup()
call. Similarly, the disappearance (e.g. death or otherwise lost connection)
is also explicitly notified through user-defined messages specified in the
odp_mbus_monitor() call. The send call does not fail because the addressed
endpoints has disappeared.

Message delivery into the recipient address space is ordered (per priority)
and reliable. Delivery of message N implies delivery of all messages <N
(of the same priority). All messages (accepted by MBUS) will be delivered
up to the point of endpoint termination or lost connection where no more
messages will be delivered.
Actual reception (dequeueing) and processing by the recipient is not
guaranteed (use end-to-end acknowledgements for that).

MBUS endpoints can be seen as interfaces (taps) to an internal reliable
multidrop network where each endpoint has a unique address which is only
valid for the lifetime of the endpoint. I.e. if an endpoint is destroyed
and then recreated (with the same name), the new endpoint will have a
new address (eventually endpoint addresses will have to be recycled but
not for a very long time). Endpoint names do not necessarily have to be
unique.
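A hedged usage sketch of the notification flow described above;
odp_mbus_monitor()'s signature is assumed by analogy with
odp_mbus_lookup(), odp_message_alloc() is assumed to take (pool,
length), and msg_pool/peer_addr are placeholders:

    /* Be told (via our own preallocated message) when "peer" exists. */
    odp_message_t on_found = odp_message_alloc(msg_pool, 0);
    odp_mbus_lookup(mbus, "peer", on_found);

    /* Be told if a known endpoint address later disappears (assumed call). */
    odp_message_t on_lost = odp_message_alloc(msg_pool, 0);
    odp_mbus_monitor(mbus, peer_addr, on_lost);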

v2:
Split off all message definitions in a separate file message.h
Renamed Inter Process Communication (IPC) to Message Bus (MBUS).
Changed all definitions to use mbus instead of ipc prefix/infix.
Renamed odp_ipc_msg_t to odp_message_t.
odp_mbus_create(): Added parameter for default input queue. Explicitly state
that the pool must use type ODP_EVENT_MESSAGE.
Renamed odp_ipc_resolve() to odp_mbus_lookup().
odp_mbus_send(): Added priority parameter.
Renamed odp_ipc_sender() to odp_message_sender().
Renamed odp_ipc_data() to odp_message_data().
Renamed odp_ipc_length() to odp_message_length().
Renamed odp_ipc_reset() to odp_message_length_set().
Renamed odp_ipc_alloc() to odp_message_alloc().
Renamed odp_ipc_free() to odp_message_free().
odp_message_alloc(): Corrected name of invalid message handle.
Added message priorities and calls to set and remove input queues for
specific priorities: odp_mbus_inq_set(), odp_mbus_inq_rem().

Signed-off-by: Ola Liljedahl ola.liljed...@linaro.org
---
(This document/code contribution attached is provided under the terms of
agreement LES-LTM-21309)

 include/odp/api/mbus.h | 229 +
 include/odp/api/message.h  | 141 +
 platform/linux-generic/include/odp/mbus.h  |  40 
 platform/linux-generic/include/odp/message.h   |  39 
 .../linux-generic/include/odp/plat/mbus_types.h|  59 ++
 .../linux-generic/include/odp/plat/message_types.h |  47 +
 6 files changed, 555 insertions(+)
 create mode 100644 include/odp/api/mbus.h
 create mode 100644 include/odp/api/message.h
 create mode 100644 platform/linux-generic/include/odp/mbus.h
 create mode 100644 platform/linux-generic/include/odp/message.h
 create mode 100644 platform/linux-generic/include/odp/plat/mbus_types.h
 create mode 100644 platform/linux-generic/include/odp/plat/message_types.h

diff --git a/include/odp/api/mbus.h b/include/odp/api/mbus.h
new file mode 100644
index 000..60fdc62
--- /dev/null
+++ b/include/odp/api/mbus.h
@@ -0,0 +1,229 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+
+/**
+ * @file
+ *
+ * ODP Message bus API
+ */
+
+#ifndef ODP_API_MBUS_H_
+#define ODP_API_MBUS_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** @defgroup odp_mbus ODP MBUS
+ *  @{
+ */
+
+/**
+ * @typedef odp_mbus_t
+ * ODP message bus handle
+ */
+
+/**
+ * @def ODP_MBUS_ADDR_SIZE
+ * Size of the address of a message bus endpoint
+ */
+
+/**
+ * @typedef odp_mbus_prio_t
+ * ODP MBUS message priority
+ */
+
+/**
+ * @def ODP_MBUS_PRIO_HIGHEST
+ * Highest MBUS message priority
+ */
+
+/**
+ * @def ODP_MBUS_PRIO_NORMAL
+ * Normal MBUS message priority
+ */
+
+/**
+ * @def ODP_MBUS_PRIO_LOWEST
+ * Lowest MBUS message priority
+ */
+
+/**
+ * @def ODP_MBUS_PRIO_DEFAULT
+ * Default MBUS message priority
+ */
+
+
+/**
+ * Create message bus endpoint
+ *
+ * Create an endpoint on the message bus. The scope of the message bus is
+ * not defined but it is expected that it encompasses the OS instance but
+ * no more.
+ * 
+ * A unique address for the endpoint is created.
+ *
+ * @param name Name of our endpoint
+ * @param pool Pool (of type ODP_EVENT_MESSAGE) for incoming messages
+ * @param queue Handle for default input queue
+ *
+ * @return Message bus handle on success
+ * @retval ODP_MBUS_INVALID on failure and errno set
+ */
+odp_mbus_t odp_mbus_create(const char *name,
+   

Re: [lng-odp] [RFC] Add ipc.h

2015-05-22 Thread Savolainen, Petri (Nokia - FI/Espoo)
Hi,

Instead of message bus (mbus), I’d use terms message and message IO (similar to 
packets and packet IO).

odp_msg_t == message event
odp_msgio_t == message io port/interface/tap/socket/mailbox/…

// create msg io port
odp_msgio_t odp_msgio_create(…);

// msg io port local address
odp_msgio_addr_t odp_msgio_addr(odp_msgio_t msgio);
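A quick usage sketch of that naming (parameters elided as in the
proposal above):

    odp_msgio_t msgio = odp_msgio_create(/* name, pool, default queue, … */);
    odp_msgio_addr_t self = odp_msgio_addr(msgio);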


more comments inlined …


From: ext Ola Liljedahl [mailto:ola.liljed...@linaro.org]
Sent: Friday, May 22, 2015 2:20 PM
To: Savolainen, Petri (Nokia - FI/Espoo)
Cc: lng-odp@lists.linaro.org
Subject: Re: [lng-odp] [RFC] Add ipc.h

On 21 May 2015 at 11:50, Savolainen, Petri (Nokia - FI/Espoo) 
petri.savolai...@nokia.com wrote:


 -Original Message-
 From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of ext
 Ola Liljedahl
 Sent: Tuesday, May 19, 2015 1:04 AM
 To: lng-odp@lists.linaro.org
 Subject: [lng-odp] [RFC] Add ipc.h

 As promised, here is my first attempt at a standalone API for IPC - inter
 process communication in a shared nothing architecture (message passing
 between processes which do not share memory).

 Currently all definitions are in the file ipc.h but it is possible to
 break out some message/event related definitions (everything from
 odp_ipc_sender) in a separate file message.h. This would mimic the
 packet_io.h/packet.h separation.

 The semantics of message passing is that sending a message to an endpoint
 will always look like it succeeds. The appearance of endpoints is
 explicitly
 notified through user-defined messages specified in the odp_ipc_resolve()
 call. Similarly, the disappearance (e.g. death or otherwise lost
 connection)
 is also explicitly notified through user-defined messages specified in the
 odp_ipc_monitor() call. The send call does not fail because the addressed
 endpoints has disappeared.

 Messages (from endpoint A to endpoint B) are delivered in order. If
 message
 N sent to an endpoint is delivered, then all messages <N have also been
 delivered. Message delivery does not guarantee actual processing by the
Ordered is an OK requirement, but "all messages <N have also been delivered" means 
in practice lossless delivery (== re-tries and retransmission windows, etc). 
Lossy vs lossless link should be a configuration option.

Also what delivered means?

Message:
 - transmitted successfully over the link ?
 - is now under control of the remote node (post office) ?
 - delivered into application input queue ?
 - has been dequeued from application queue ?


 recipient. End-to-end acknowledgements (using messages) should be used if
 this guarantee is important to the user.

 IPC endpoints can be seen as interfaces (taps) to an internal reliable
 multidrop network where each endpoint has a unique address which is only
 valid for the lifetime of the endpoint. I.e. if an endpoint is destroyed
 and then recreated (with the same name), the new endpoint will have a
 new address (eventually endpoints addresses will have to be recycled but
 not for a very long time). Endpoints names do not necessarily have to be
 unique.

How widely these addresses are unique: inside one VM, multiple VMs under the 
same host, multiple devices on a LAN (VLAN), ...
I have added that the scope is expected to be an OS instance (e.g. VM).

OK, it’s likely the scope that’s mostly needed anyway.

Still need to define if addressing (and protocol) is implementation specific or 
standardized. I think you are suggesting implementation specific, which is fine 
but we need to note that it’s suitable only between two ODP instances of the same 
_implementation_ version. E.g. messaging between linux-generic and odp-dpdk, 
or between odp-dpdk-1.1.0.0 and odp-dpdk-1.1.0.1 would not necessarily work. A 
protocol spec (with version numbering) would be needed to guarantee 
intra-implementation-version communication.

Implementation specific messaging would be sufficient for SW (ODP instance) 
coming from single SW vendor, but integration of SW from multiple vendors would 
need packets or proper “IPC” protocol.



 Signed-off-by: Ola Liljedahl ola.liljed...@linaro.org
 ---
 (This document/code contribution attached is provided under the terms of
 agreement LES-LTM-21309)



 +/**
 + * Create IPC endpoint
 + *
 + * @param name Name of local IPC endpoint
 + * @param pool Pool for incoming messages
 + *
 + * @return IPC handle on success
 + * @retval ODP_IPC_INVALID on failure and errno set
 + */
 +odp_ipc_t odp_ipc_create(const char *name, odp_pool_t pool);

This creates (implicitly) the local end point address.
Yes. Does that have to be described?

Maybe to highlight that “name” is not the address.



 +
 +/**
 + * Set the default input queue for an IPC endpoint
 + *
 + * @param ipc   IPC handle
 + * @param queue Queue handle
 + *
 + * @retval  0 on success
 + * @retval <0 on failure
 + */
 +int odp_ipc_inq_setdef(odp_ipc_t ipc, odp_queue_t 

Re: [lng-odp] [RFC] Add ipc.h

2015-05-22 Thread Ola Liljedahl
On 22 May 2015 at 15:16, Savolainen, Petri (Nokia - FI/Espoo) 
petri.savolai...@nokia.com wrote:

  Hi,



 Instead of message bus (mbus), I’d use terms message and message IO
 (similar to packets and packet IO).

That could work as well. But the concepts named and described here actually
matches quite nicely (a subset of) those of kdbus
https://code.google.com/p/d-bus/source/browse/kdbus.txt?name=policy.

Anyway suggestion accepted.





 odp_msg_t == message event

But packet events are called odp_packet_t, not odp_pkt_t, and timeout
events are called odp_timeout_t, not odp_tmo_t. I prefer the long name
odp_message_t. Do you still insist?

 odp_msgio_t == message io port/interface/tap/socket/mailbox/…



 // create msg io port

 odp_msgio_t odp_msgio_create(…);



 // msg io port local address

 odp_msgio_addr_t odp_msgio_addr(odp_msgio_t msgio);





 more comments inlined …

Your inlined comments are difficult to find. They seem to be more indented
than the text they comment on.






 *From:* ext Ola Liljedahl [mailto:ola.liljed...@linaro.org]
 *Sent:* Friday, May 22, 2015 2:20 PM
 *To:* Savolainen, Petri (Nokia - FI/Espoo)
 *Cc:* lng-odp@lists.linaro.org
 *Subject:* Re: [lng-odp] [RFC] Add ipc.h



 On 21 May 2015 at 11:50, Savolainen, Petri (Nokia - FI/Espoo) 
 petri.savolai...@nokia.com wrote:



  -Original Message-
  From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of ext
  Ola Liljedahl
  Sent: Tuesday, May 19, 2015 1:04 AM
  To: lng-odp@lists.linaro.org
  Subject: [lng-odp] [RFC] Add ipc.h
 
  As promised, here is my first attempt at a standalone API for IPC - inter
  process communication in a shared nothing architecture (message passing
  between processes which do not share memory).
 
  Currently all definitions are in the file ipc.h but it is possible to
  break out some message/event related definitions (everything from
  odp_ipc_sender) in a separate file message.h. This would mimic the
  packet_io.h/packet.h separation.
 
  The semantics of message passing is that sending a message to an endpoint
  will always look like it succeeds. The appearance of endpoints is
  explicitly
  notified through user-defined messages specified in the odp_ipc_resolve()
  call. Similarly, the disappearance (e.g. death or otherwise lost
  connection)
  is also explicitly notified through user-defined messages specified in
 the
  odp_ipc_monitor() call. The send call does not fail because the addressed
  endpoints has disappeared.
 
  Messages (from endpoint A to endpoint B) are delivered in order. If
  message
  N sent to an endpoint is delivered, then all messages <N have also been
  delivered. Message delivery does not guarantee actual processing by the

  Ordered is an OK requirement, but "all messages <N have also been delivered"
  means in practice lossless delivery (== re-tries and retransmission
  windows, etc). Lossy vs lossless link should be a configuration option.

 Also what delivered means?

 Message:
  - transmitted successfully over the link ?
  - is now under control of the remote node (post office) ?
  - delivered into application input queue ?
  - has been dequeued from application queue ?


  recipient. End-to-end acknowledgements (using messages) should be used if
  this guarantee is important to the user.
 
  IPC endpoints can be seen as interfaces (taps) to an internal reliable
  multidrop network where each endpoint has a unique address which is only
  valid for the lifetime of the endpoint. I.e. if an endpoint is destroyed
  and then recreated (with the same name), the new endpoint will have a
  new address (eventually endpoints addresses will have to be recycled but
  not for a very long time). Endpoints names do not necessarily have to be
  unique.

 How widely these addresses are unique: inside one VM, multiple VMs under
 the same host, multiple devices on a LAN (VLAN), ...

 I have added that the scope is expected to be an OS instance (e.g. VM).



 OK, it’s likely the scope mostly needed anyway.



 Still need to define if addressing (and protocol) is implementation
 specific or standardized.

 Implementation specific for now.

  I think you are suggesting implementation specific, which is fine but we need
  to note that it's suitable only between two ODP instances of the same _
  *implementation*_ version. E.g. messaging between linux-generic and
  odp-dpdk, or between odp-dpdk-1.1.0.0 and odp-dpdk-1.1.0.1 would not
  necessarily work. A protocol spec (with version numbering) would be needed to
  guarantee intra-implementation-version communication.

  I don't want to walk into the tar pit of defining a binary
(on-the-wire) protocol. But this can be done later.

Message formats, API and transport are separate things. I am only trying to
define the API here.

The incompatibility issues you describe should be noted.




 Implementation specific messaging would be sufficient for SW (ODP
 instance) coming from single SW vendor, but integration of SW from multiple
 vendors 

Re: [lng-odp] [PATCH 1/2 v1] examples: ipsec: tunnel mode support

2015-05-22 Thread Steve Kordus (skordus)
Reviewed-by: Steve Kordus skor...@cisco.com

-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of 
alexandru.badici...@linaro.org
Sent: Tuesday, May 19, 2015 4:39 AM
To: lng-odp@lists.linaro.org
Subject: [lng-odp] [PATCH 1/2 v1] examples: ipsec: tunnel mode support

From: Alexandru Badicioiu alexandru.badici...@linaro.org

v1 - added comment for tunnel DB entry use

Tunnel mode is enabled from the command line using the -t argument with the 
following format: SrcIP:DstIP:TunnelSrcIP:TunnelDstIP.
SrcIP - cleartext packet source IP
DstIP - cleartext packet destination IP
TunnelSrcIP - tunnel source IP
TunnelDstIP - tunnel destination IP

The outbound packets matching SrcIP:DstIP will be encapsulated in a 
TunnelSrcIP:TunnelDstIP IPSec tunnel (AH/ESP/AH+ESP) if a matching outbound SA 
is determined (as for transport mode).
For inbound packets each entry in the IPSec cache is matched for the cleartext 
addresses, as in transport mode (SrcIP:DstIP), and then for the tunnel 
addresses (TunnelSrcIP:TunnelDstIP) in case the cleartext addresses didn't match. 
After authentication and decryption, tunneled packets are verified against the 
tunnel entry (i.e. that packets came in from the expected tunnel).
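As an illustration (addresses invented for this note), such a tunnel could be
requested with: -t 192.168.111.2:192.168.222.2:10.0.111.1:10.0.222.1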

Signed-off-by: Alexandru Badicioiu alexandru.badici...@linaro.org
---
 example/ipsec/odp_ipsec.c|  105 +++---
 example/ipsec/odp_ipsec_cache.c  |   31 +-
 example/ipsec/odp_ipsec_cache.h  |6 ++
 example/ipsec/odp_ipsec_sa_db.c  |  133 +-
 example/ipsec/odp_ipsec_sa_db.h  |   57 
 example/ipsec/odp_ipsec_stream.c |  101 
 6 files changed, 403 insertions(+), 30 deletions(-)

diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c index 
82ed0cb..3931fef 100644
--- a/example/ipsec/odp_ipsec.c
+++ b/example/ipsec/odp_ipsec.c
@@ -135,13 +135,20 @@ typedef struct {
uint8_t  ip_ttl; /**< Saved IP TTL value */
int  hdr_len;/**< Length of IPsec headers */
int  trl_len;/**< Length of IPsec trailers */
+   uint16_t tun_hdr_offset; /**< Offset of tunnel header from
+ buffer start */
uint16_t ah_offset;  /**< Offset of AH header from buffer start */
uint16_t esp_offset; /**< Offset of ESP header from buffer start */
 
+   /* Input only */
+   uint32_t src_ip; /**< SA source IP address */
+   uint32_t dst_ip; /**< SA dest IP address */
+
/* Output only */
odp_crypto_op_params_t params;  /**< Parameters for crypto call */
uint32_t *ah_seq;   /**< AH sequence number location */
uint32_t *esp_seq;  /**< ESP sequence number location */
+   uint16_t *tun_hdr_id;   /**< Tunnel header ID  */
 } ipsec_ctx_t;
 
 /**
@@ -368,6 +375,7 @@ void ipsec_init_pre(void)
/* Initialize our data bases */
init_sp_db();
init_sa_db();
+   init_tun_db();
init_ipsec_cache();
 }
 
@@ -387,19 +395,27 @@ void ipsec_init_post(crypto_api_mode_e api_mode)
for (entry = sp_db->list; NULL != entry; entry = entry->next) {
sa_db_entry_t *cipher_sa = NULL;
sa_db_entry_t *auth_sa = NULL;
+   tun_db_entry_t *tun;
 
-   if (entry->esp)
+   if (entry->esp) {
cipher_sa = find_sa_db_entry(entry->src_subnet,
 entry->dst_subnet,
 1);
-   if (entry->ah)
+   tun = find_tun_db_entry(cipher_sa->src_ip,
+ cipher_sa->dst_ip);
+   }
+   if (entry->ah) {
auth_sa = find_sa_db_entry(entry->src_subnet,
   entry->dst_subnet,
   0);
+   tun = find_tun_db_entry(auth_sa->src_ip,
+ auth_sa->dst_ip);
+   }
 
if (cipher_sa || auth_sa) {
if (create_ipsec_cache_entry(cipher_sa,
 auth_sa,
+tun,
 api_mode,
 entry-input,
 completionq,
@@ -672,6 +688,8 @@ pkt_disposition_e do_ipsec_in_classify(odp_packet_t pkt,
ctx->ipsec.esp_offset = esp ? ((uint8_t *)esp) - buf : 0;
ctx->ipsec.hdr_len = hdr_len;
ctx->ipsec.trl_len = 0;
+   ctx->ipsec.src_ip = entry->src_ip;
+   ctx->ipsec.dst_ip = entry->dst_ip;
 
/* If authenticating, zero the mutable fields & build the request */
if (ah) {
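A sketch of the inbound verification the commit message describes, using
the patch's new ipsec_ctx_t fields; the helper itself is hypothetical:

    /* Verify that decapsulated traffic arrived through the expected tunnel. */
    static int tunnel_match(const ipsec_ctx_t *ctx,
                            uint32_t outer_src_ip, uint32_t outer_dst_ip)
    {
            return outer_src_ip == ctx->ipsec.src_ip &&
                   outer_dst_ip == ctx->ipsec.dst_ip;
    }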

[lng-odp] buffer_alloc length parameter

2015-05-22 Thread Zoltan Kiss

Hi,

While fixing up things in the DPDK implementation I've found that 
linux-generic might have some troubles too. odp_buffer_alloc() and 
odp_packet_alloc() uses odp_pool_to_entry(pool_hdl)-s.params.buf.size, 
but if it's a packet pool (which is always true in case of 
odp_packet_alloc(), and might be true with odp_buffer_alloc()).
My first idea would be to use s.params.pkt.seg_len in that case, but it 
might be 0. Maybe s.seg_size would be the right value?
If anyone has time to come up with a patch to fix this, feel free, I 
probably won't have time to work on this in the near future.
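A sketch of the kind of fix being asked for (untested; assumes the pool
type is reachable as p->s.params.type, matching the linux-generic pool
code quoted earlier in this digest):

    pool_entry_t *p = odp_pool_to_entry(pool_hdl);
    size_t size;

    if (p->s.params.type == ODP_POOL_PACKET)
            size = p->s.seg_size;        /* per-segment size for packet pools */
    else
            size = p->s.params.buf.size; /* only valid for buffer/timeout pools */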


Zoli