[lng-odp] Regarding odp_packet_t and buffer chaining

2015-01-19 Thread Prashant Upadhyaya
Hi,

Suppose I have pkt1 and pkt2, which are instances of odp_packet_t.
Now I have two questions -


1.   What is the way to chain the buffers of pkt1 and pkt2 into pkt1, so
that from then on I can just use pkt1 for transmission via, say, the
odp_pktio_send API?

2.   If I create a new odp_packet_t pkt3, then how can I take the buffers out
of pkt1 and pkt2 and chain them up in pkt3, so that I can send pkt3 via the
odp_pktio_send API?

We can assume that all pkt's are obtained from the same buffer pool.
If there is a way to chain up pkt's themselves instead of the buffers 
associated with them, that is also ok by me.

Unfortunately I am not clear on which API calls to use at the application
level for this; that is the guidance I am looking for: what is the official
way in an ODP-compliant application to achieve the above use cases?

The ultimate idea, of course, is to chain up the packets/buffers so that I can
avoid memory copies when creating a single buffer to be sent out (something
similar to the 'mbuf chaining' supported by, e.g., DPDK).
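
For illustration, a minimal sketch of what I would like use case 1 to look like;
odp_packet_concat() is hypothetical here (no such call existed in the API at the
time), as is the error handling:

#include <odp.h>

/* Illustration only: append pkt2's data to pkt1 and transmit pkt1.
 * odp_packet_concat() is a hypothetical concatenation call, not an
 * existing ODP API at the time of writing. */
static int send_chained(odp_pktio_t pktio, odp_packet_t pkt1, odp_packet_t pkt2)
{
	/* On success pkt2 would be consumed and its data would follow pkt1's
	 * data; the destination handle may change, hence the pointer. */
	if (odp_packet_concat(&pkt1, pkt2) < 0)
		return -1;

	/* odp_pktio_send() returns the number of packets actually sent. */
	return odp_pktio_send(pktio, &pkt1, 1) == 1 ? 0 : -1;
}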

Regards
-Prashant



[lng-odp] [PATCH] helper: ip: add IP protocol value for sctp

2015-01-19 Thread Jerin Jacob
Signed-off-by: Jerin Jacob jerin.ja...@caviumnetworks.com
---
 helper/include/odph_ip.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/helper/include/odph_ip.h b/helper/include/odph_ip.h
index 272fd96..f2638ba 100644
--- a/helper/include/odph_ip.h
+++ b/helper/include/odph_ip.h
@@ -167,6 +167,7 @@ typedef struct ODP_PACKED {
 #define ODPH_IPPROTO_FRAG    0x2C /** IPv6 Fragment (44) */
 #define ODPH_IPPROTO_AH  0x33 /** Authentication Header (51) */
 #define ODPH_IPPROTO_ESP 0x32 /** Encapsulating Security Payload (50) */
+#define ODPH_IPPROTO_SCTP    0x84 /** Stream Control Transmission (132) */
 #define ODPH_IPPROTO_INVALID 0xFF /** Reserved invalid by IANA */
 
 /**@}*/
-- 
1.9.3
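
For illustration, a hedged sketch of how the new define could be consumed together
with the helper IPv4 header type; the packet accessors used below are assumptions
of this sketch, not part of the patch:

#include <odp.h>
#include <odph_ip.h>

/* Sketch: classify a received packet as SCTP by reading the IPv4 protocol
 * field through the helper header patched above. Assumes a contiguous,
 * valid IPv4 header at the L3 offset. */
static int is_sctp(odp_packet_t pkt)
{
	odph_ipv4hdr_t *ip;

	if (!odp_packet_has_ipv4(pkt))
		return 0;

	ip = (odph_ipv4hdr_t *)odp_packet_l3_ptr(pkt, NULL);
	if (ip == NULL)
		return 0;

	return ip->proto == ODPH_IPPROTO_SCTP;
}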




Re: [lng-odp] [PATCHv8] linux-generic: fix odp_pktio_inq_remdef

2015-01-19 Thread Maxim Uvarov

On 01/16/2015 03:58 PM, Maxim Uvarov wrote:

Correctly remove queue from packet i/o and remove it from scheduler.

Signed-off-by: Maxim Uvarov maxim.uva...@linaro.org
---
   v8: addressed Stuart's comments and added a test. The implementation of
odp_pktio_inq_remdef also fixes a segfault in the existing pktio test.

Looks like I was thinking about something else. I should say:

Because odp_pktio_inq_remdef was not implemented before, ODP could hit a
corner case where odp_schedule() for a closed pktio still pulls from the old
incoming queue. In the linux-generic implementation, packet recv() gets called
and the implementation then tries to place the packet on a destroyed queue,
hence the segfault. Because the current ODP examples do not destroy/create
packet I/O and then call the scheduler, we never saw that segfault. So in this
patch I implemented odp_pktio_inq_remdef and added a test case for
odp_schedule on a closed packet I/O.

Maxim.

  .../linux-generic/include/odp_queue_internal.h | 10 
  platform/linux-generic/odp_packet_io.c | 29 +-
  platform/linux-generic/odp_schedule.c  |  5 
  test/validation/odp_pktio.c| 17 +
  4 files changed, 60 insertions(+), 1 deletion(-)

diff --git a/platform/linux-generic/include/odp_queue_internal.h 
b/platform/linux-generic/include/odp_queue_internal.h
index d5c8e4e..dbc42c0 100644
--- a/platform/linux-generic/include/odp_queue_internal.h
+++ b/platform/linux-generic/include/odp_queue_internal.h
@@ -129,6 +129,16 @@ static inline int queue_is_destroyed(odp_queue_t handle)
  
  	return queue->s.status == QUEUE_STATUS_DESTROYED;

  }
+
+static inline int queue_is_sched(odp_queue_t handle)
+{
+   queue_entry_t *queue;
+
+   queue = queue_to_qentry(handle);
+
+   return ((queue->s.status == QUEUE_STATUS_SCHED) &&
+   (queue->s.pktin != ODP_PKTIO_INVALID));
+}
  #ifdef __cplusplus
  }
  #endif
diff --git a/platform/linux-generic/odp_packet_io.c 
b/platform/linux-generic/odp_packet_io.c
index cd109d2..04de756 100644
--- a/platform/linux-generic/odp_packet_io.c
+++ b/platform/linux-generic/odp_packet_io.c
@@ -429,7 +429,34 @@ int odp_pktio_inq_setdef(odp_pktio_t id, odp_queue_t queue)
  
  int odp_pktio_inq_remdef(odp_pktio_t id)

  {
-   return odp_pktio_inq_setdef(id, ODP_QUEUE_INVALID);
+   pktio_entry_t *pktio_entry = get_pktio_entry(id);
+   odp_queue_t queue;
+   queue_entry_t *qentry;
+
+   if (pktio_entry == NULL)
+   return -1;
+
+   lock_entry(pktio_entry);
+   queue = pktio_entry->s.inq_default;
+   qentry = queue_to_qentry(queue);
+
+   queue_lock(qentry);
+   if (qentry->s.status == QUEUE_STATUS_FREE) {
+   queue_unlock(qentry);
+   unlock_entry(pktio_entry);
+   return -1;
+   }
+
+   qentry->s.enqueue = queue_enq_dummy;
+   qentry->s.enqueue_multi = queue_enq_multi_dummy;
+   qentry->s.status = QUEUE_STATUS_NOTSCHED;
+   qentry->s.pktin = ODP_PKTIO_INVALID;
+   queue_unlock(qentry);
+
+   pktio_entry->s.inq_default = ODP_QUEUE_INVALID;
+   unlock_entry(pktio_entry);
+
+   return 0;
  }
  
  odp_queue_t odp_pktio_inq_getdef(odp_pktio_t id)

diff --git a/platform/linux-generic/odp_schedule.c 
b/platform/linux-generic/odp_schedule.c
index a14de4f..775b788 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -286,6 +286,11 @@ static int schedule(odp_queue_t *out_queue, odp_buffer_t 
out_buf[],
desc  = odp_buffer_addr(desc_buf);
queue = desc-queue;
  
+   if (odp_queue_type(queue) ==
+   ODP_QUEUE_TYPE_PKTIN &&
+   !queue_is_sched(queue))
+   continue;
+
num = odp_queue_deq_multi(queue,
  sched_local.buf,
  max_deq);
diff --git a/test/validation/odp_pktio.c b/test/validation/odp_pktio.c
index d1eb0d5..03e954a 100644
--- a/test/validation/odp_pktio.c
+++ b/test/validation/odp_pktio.c
@@ -379,6 +379,7 @@ static void pktio_test_txrx(odp_queue_type_t q_type, int 
num_pkts)
pktio_txrx_multi(pktios[0], pktios[if_b], num_pkts);
  
  	for (i = 0; i < num_ifaces; ++i) {

+   odp_pktio_inq_remdef(pktios[i].id);
ret = odp_pktio_close(pktios[i].id);
CU_ASSERT(ret == 0);
}
@@ -472,6 +473,21 @@ static void test_odp_pktio_mac(void)
return;
  }
  
+static void test_odp_pktio_inq_remdef(void)

+{
+   odp_pktio_t pktio = create_pktio(iface_name[0]);
+   int i;
+
+   CU_ASSERT(pktio != ODP_PKTIO_INVALID);
+   CU_ASSERT(create_inq(pktio) == 0);
+   CU_ASSERT(odp_pktio_inq_remdef(pktio) == 0);
+
+   for (i = 0; i < 100; i++)
+   

Re: [lng-odp] [PATCH] helper: ip: add IP protocol value for sctp

2015-01-19 Thread Jerin Jacob
On Mon, Jan 19, 2015 at 06:13:42AM -0600, Bill Fischofer wrote:
 Two questions:
 
 
1. We previously said we didn't need SCTP support for ODP v1.0, so why
is this needed?

Then there is a disconnect; the existing API already has references to SCTP:

/**
 * Check for SCTP
 *
 * @param pkt Packet handle
 * @return 1 if packet contains an SCTP header, 0 otherwise
 */
int odp_packet_has_sctp(odp_packet_t pkt);

/**
 * Check for L4 header, e.g. UDP, TCP, SCTP (also ICMP)
 *
 * @param pkt Packet handle
 * @return 1 if packet contains a valid & known L4 header, 0 otherwise
 */
int odp_packet_has_l4(odp_packet_t pkt);


2. This is a helper, so it's not necessarily constrained by what may be
covered by ODP v1.0, but in that case why limit this to SCTP?  There are
lots of other IP protocols that this helper file doesn't define besides
SCTP. Should they be included as well?
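
As a side note on the parse flags quoted above, a minimal sketch of how an
application would consume them (only the odp_packet_has_* calls are taken from
the API; the wrapper itself is illustrative):

#include <odp.h>

/* Sketch: return 1 if the implementation's parser flagged the packet as SCTP. */
static int packet_is_sctp(odp_packet_t pkt)
{
	return odp_packet_has_l4(pkt) && odp_packet_has_sctp(pkt);
}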
 
 
 
 On Mon, Jan 19, 2015 at 5:44 AM, Jerin Jacob jerin.ja...@caviumnetworks.com
  wrote:
 
  Signed-off-by: Jerin Jacob jerin.ja...@caviumnetworks.com
  ---
   helper/include/odph_ip.h | 1 +
   1 file changed, 1 insertion(+)
 
  diff --git a/helper/include/odph_ip.h b/helper/include/odph_ip.h
  index 272fd96..f2638ba 100644
  --- a/helper/include/odph_ip.h
  +++ b/helper/include/odph_ip.h
  @@ -167,6 +167,7 @@ typedef struct ODP_PACKED {
   #define ODPH_IPPROTO_FRAG0x2C /** IPv6 Fragment (44) */
   #define ODPH_IPPROTO_AH  0x33 /** Authentication Header (51) */
   #define ODPH_IPPROTO_ESP 0x32 /** Encapsulating Security Payload
  (50) */
  +#define ODPH_IPPROTO_SCTP0x84 /** Stream Control Transmission (132)
  */
   #define ODPH_IPPROTO_INVALID 0xFF /** Reserved invalid by IANA */
 
   /**@}*/
  --
  1.9.3
 
 
 



Re: [lng-odp] [PATCHv2 7/8] debian: add debian packaging framework

2015-01-19 Thread Steve McIntyre
Bah, almost...

On Thu, Jan 15, 2015 at 02:14:50PM +0100, Anders Roxell wrote:
Signed-off-by: Anders Roxell anders.rox...@linaro.org

 debian/libodp0-dev.dirs|  2 ++
 debian/libodp0-dev.install |  4 
 debian/libodp0.dirs|  1 +
 debian/libodp0.install |  1 +
 debian/odp0-bin.dirs   |  1 +
 debian/odp0-bin.install|  1 +

These files need renaming s/odp0/odp7/

--- /dev/null
+++ b/debian/control
@@ -0,0 +1,42 @@
+Source: opendataplane
+Priority: optional
+Maintainer: Anders Roxell anders.rox...@linaro.org
+Build-Depends: debhelper (>= 9), autotools-dev
+Standards-Version: 3.9.6
+Section: libs
+Homepage: http://www.opendataplane.org/
+Vcs-Git: git://git.linaro.org/lng/odp.git
+Vcs-Browser: https://git.linaro.org/lng/odp.git
+
+Package: odp7-bin
+Section: libdevel
+Architecture: any
+Multi-Arch: allowed
+Depends: libodp0 (= ${binary:Version}), ${misc:Depends}, ${shlibs:Depends}

libodp7

+Description: Example binaries for OpenDataPlane
+ These are the executable examples from the reference implementation.
+
+Package: libodp7-dbg
+Priority: extra
+Section: debug
+Architecture: any
+Multi-Arch: same
+Depends: libodp0 (= ${binary:Version}), ${misc:Depends}

libodp7

+Description: Debug symbols for OpenDataPlane
+ This is the OpenDataPlane library from the reference implementation
+ with debug turned on.
+
+Package: libodp7-dev
+Section: libdevel
+Architecture: any
+Multi-Arch: same
+Depends: libodp0 (= ${binary:Version}), ${misc:Depends}, libssl-dev

libodp7

Cheers,
-- 
Steve McIntyre  steve.mcint...@linaro.org
http://www.linaro.org/ Linaro.org | Open source software for ARM SoCs




Re: [lng-odp] odp_packet API queries

2015-01-19 Thread Jerin Jacob
On Mon, Jan 19, 2015 at 06:09:34AM -0600, Bill Fischofer wrote:
 I think Petri should weigh in on these questions.  For the first one, what
 problems do you anticipate some platforms having with that equation?

I have two issues around the unit test case:
1) packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN - ODP_CONFIG_PACKET_HEADROOM -
ODP_CONFIG_PACKET_TAILROOM creates two segments on my platform, while the
tailroom/headroom tests expect to work within a single segment.

2) The pool is created with a buffer count of one, yet the allocation becomes
a segmented packet because packet_len spans more than one segment.

 
 I think the cleanest solution would be to have the platform segment size
 for a given pool accessible as pool metadata, e.g.,
 odp_pool_seg_size(pool), but the real issue is why does the application
 want this information?  If an application wants to ensure that packets are
 unsegmented then the simplest solution is to re-introduce the notion of
 unsegmented pools.  If an application creates an unsegmented pool then by
 definition any object allocated from that pool will only consist of a
 single segment.  By contrast, if the application is designed to support
 segments then it shouldn't care.

IMO, it's simple to add an ODP_CONFIG, or to let odp_packet_alloc() with
len == 0 allocate the default packet size.
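
For reference, a minimal sketch of the single-segment assumption under discussion,
using the pre-rename pool type; odp_packet_is_segmented() and the exact config
names are assumed to be available as used elsewhere in this thread:

#include <odp.h>

/* Sketch: allocate what the test assumes is a single-segment, full-length
 * packet and verify that assumption explicitly. 'pool' is an already
 * created packet pool. */
static int alloc_one_segment_packet(odp_buffer_pool_t pool)
{
	uint32_t packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN -
			      ODP_CONFIG_PACKET_HEADROOM -
			      ODP_CONFIG_PACKET_TAILROOM;
	odp_packet_t pkt = odp_packet_alloc(pool, packet_len);

	if (pkt == ODP_PACKET_INVALID)
		return -1;

	/* On platforms with per-segment metadata this can still come out
	 * segmented, which is exactly the problem raised above. */
	if (odp_packet_is_segmented(pkt)) {
		odp_packet_free(pkt);
		return -1;
	}

	odp_packet_free(pkt);
	return 0;
}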

 
 On Mon, Jan 19, 2015 at 3:27 AM, Jerin Jacob jerin.ja...@caviumnetworks.com
  wrote:
 
  On Sat, Jan 17, 2015 at 09:45:12AM -0600, Bill Fischofer wrote:
   Application-visible sizes refer to application-visible data.  Metadata is
   always implementation-specific and not included in such counts.  Metadata
   is off books data that is associated with the packet but is not part of
   any addressable packet storage. The advantage of having a packet object
  is
   that the packet APIs can refer to the packet independent of any
   implementation and not to how the packet may be represented in storage
  on a
   particular platform.
 
  But coming back to my question, How an application can create a one segment
  full length packet ?
  Following equation may not be correct in all platforms
  packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN - ODP_CONFIG_PACKET_HEADROOM -
  ODP_CONFIG_PACKET_TAILROOM;
 
 
  
   Trying to reason about buffers that are used to store packet data is
   inherently non-portable and should be discouraged. Hopefully the switch
  to
   events will help move us in that direction since packets are no longer a
   type of buffer using the new nomenclature.
 
  Should we remove  odp_buffer_size(buf) == odp_packet_buf_len(pkt)) test
  case
  or wait for event rework to happen ?
 
  
   On Sat, Jan 17, 2015 at 5:52 AM, Jacob, Jerin 
   jerin.ja...@caviumnetworks.com wrote:
  
Some odp_packet API queries based on exiting odp packet unit test case,
   
1) In exiting odp packet unit test case, In order to create one full
length packet in one segment,
We have used following formula,
packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN -
  ODP_CONFIG_PACKET_HEADROOM -
ODP_CONFIG_PACKET_TAILROOM;
   
This may not be valid in all platform if the packet segment has segment
specific meta data.
I think, we need to create either new ODP_CONFIG to define the default
packet size
or odp_packet_alloc of len == 0 can be used to create default packet
  size.
   
2) If buffer is NOT aware of segmentation then odp_buffer_size(buf) of
packet should be ODP_CONFIG_PACKET_BUF_LEN_MIN
instead of odp_buffer_size(buf) == odp_packet_buf_len(pkt)) .
   
Any thoughts ?
   
- Jerin
   
 



Re: [lng-odp] odp_packet API queries

2015-01-19 Thread Bill Fischofer
On Mon, Jan 19, 2015 at 10:00 AM, Jerin Jacob 
jerin.ja...@caviumnetworks.com wrote:

 On Mon, Jan 19, 2015 at 09:26:08AM -0600, Bill Fischofer wrote:
  On Mon, Jan 19, 2015 at 7:22 AM, Jerin Jacob 
 jerin.ja...@caviumnetworks.com
   wrote:
 
   On Mon, Jan 19, 2015 at 06:09:34AM -0600, Bill Fischofer wrote:
I think Petri should weigh in on these questions.  For the first one,
   what
problems do you anticipate some platforms having with that equation?
  
   I have two issues around the unit test case,
   1) packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN -
 ODP_CONFIG_PACKET_HEADROOM
   -
   ODP_CONFIG_PACKET_TAILROOM creates two segments in my platform and
   tailroom/headroom expects
   to work within a segment ?
  
 
  Can you elaborate on why this is the case?  The intent here was to define
  what constituted a single segment so if it's not accomplishing that goal
 it
  would be useful to understand why not.

 OK. We have segment specific meta data(as I mentioned in beginning of the
 mail thread)
 in each segment that can't be counted in
 ODP_CONFIG_PACKET_HEADROOM and/or ODP_CONFIG_PACKET_TAILROOM.


If it's metadata then it doesn't come out of either of these. Regardless of
how a platform stores metadata it is not addressable by the application as
part of the packet and hence not included in
ODP_CONFIG_PACKET_BUF_LEN_MIN.  So the equation should still hold.

You're articulating why applications neither know nor care about physical
segment sizes used by implementations. The only thing applications can see
are the logical segments exposed by the ODP APIs.




 
  
   2) pool creation with number of buffers as one and creating a segmented
   buffers as
   packet_len is more than one segment.
  
 
  A packet (I use that term here since in our current definition only
 packets
  can support segmentation or headroom) is an object that consists of
 packet
  metadata plus packet data.  Packet data is stored in one or more
 segments,
  depending on how the pool it is allocated from is created, but
 independent
  of the number of segments used to store this data it is still a single
  packet.  So num_bufs (which will presumably be num_packets in the new
 pool
  definitions) always has a precise meaning.

 but it has to be num_bufs == num_packet segments


And why is that?  They are logically different concepts.  Conflating them
only leads to confusion.


 
 
  
   
I think the cleanest solution would be to have the platform segment
 size
for a given pool accessible as pool metadata, e.g.,
odp_pool_seg_size(pool), but the real issue is why does the
 application
want this information?  If an application wants to ensure that
 packets
   are
unsegmented then the simplest solution is to re-introduce the notion
 of
unsegmented pools.  If an application creates an unsegmented pool
 then by
definition any object allocated from that pool will only consist of a
single segment.  By contrast, if the application is designed to
 support
segments then it shouldn't care.
  
   IMO, its simple to add a ODP_CONFIG or odp_packet_alloc of len == 0 for
   default packet size
  
 
  ODP_CONFIG is how we're doing things now.  More specific configurations
  should be doable on a per-pool basis (subject to implementation
  restrictions) given an expanded odp_pool_param_t definition.
 
 
  
   
On Mon, Jan 19, 2015 at 3:27 AM, Jerin Jacob 
   jerin.ja...@caviumnetworks.com
 wrote:
   
 On Sat, Jan 17, 2015 at 09:45:12AM -0600, Bill Fischofer wrote:
  Application-visible sizes refer to application-visible data.
   Metadata is
  always implementation-specific and not included in such counts.
   Metadata
  is off books data that is associated with the packet but is not
   part of
  any addressable packet storage. The advantage of having a packet
   object
 is
  that the packet APIs can refer to the packet independent of any
  implementation and not to how the packet may be represented in
   storage
 on a
  particular platform.

 But coming back to my question, How an application can create a one
   segment
 full length packet ?
 Following equation may not be correct in all platforms
 packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN -
   ODP_CONFIG_PACKET_HEADROOM -
 ODP_CONFIG_PACKET_TAILROOM;


 
  Trying to reason about buffers that are used to store packet
 data is
  inherently non-portable and should be discouraged. Hopefully the
   switch
 to
  events will help move us in that direction since packets are no
   longer a
  type of buffer using the new nomenclature.

 Should we remove  odp_buffer_size(buf) == odp_packet_buf_len(pkt))
 test
 case
 or wait for event rework to happen ?

 
  On Sat, Jan 17, 2015 at 5:52 AM, Jacob, Jerin 
  jerin.ja...@caviumnetworks.com wrote:
 
   Some odp_packet API queries based on exiting odp packet unit
 test
   case,
  
   

[lng-odp] 0.9.0 staging patches

2015-01-19 Thread Maxim Uvarov
For the next 0.9.0 release I created a temporary branch with the current patches.
The main reason is that the event patches should go into the repo first, then
all the other things.

Branch is here:

https://git.linaro.org/people/maxim.uvarov/odp.git/shortlog/refs/heads/odp_0.9.0


Maxim.



Re: [lng-odp] odp_packet API queries

2015-01-19 Thread Jerin Jacob
On Mon, Jan 19, 2015 at 09:26:08AM -0600, Bill Fischofer wrote:
 On Mon, Jan 19, 2015 at 7:22 AM, Jerin Jacob jerin.ja...@caviumnetworks.com
  wrote:
 
  On Mon, Jan 19, 2015 at 06:09:34AM -0600, Bill Fischofer wrote:
   I think Petri should weigh in on these questions.  For the first one,
  what
   problems do you anticipate some platforms having with that equation?
 
  I have two issues around the unit test case,
  1) packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN - ODP_CONFIG_PACKET_HEADROOM
  -
  ODP_CONFIG_PACKET_TAILROOM creates two segments in my platform and
  tailroom/headroom expects
  to work within a segment ?
 
 
 Can you elaborate on why this is the case?  The intent here was to define
 what constituted a single segment so if it's not accomplishing that goal it
 would be useful to understand why not.

OK. We have segment-specific metadata (as I mentioned at the beginning of the
mail thread) in each segment, which can't be counted in
ODP_CONFIG_PACKET_HEADROOM and/or ODP_CONFIG_PACKET_TAILROOM.


 
 
  2) pool creation with number of buffers as one and creating a segmented
  buffers as
  packet_len is more than one segment.
 
 
 A packet (I use that term here since in our current definition only packets
 can support segmentation or headroom) is an object that consists of packet
 metadata plus packet data.  Packet data is stored in one or more segments,
 depending on how the pool it is allocated from is created, but independent
 of the number of segments used to store this data it is still a single
 packet.  So num_bufs (which will presumably be num_packets in the new pool
 definitions) always has a precise meaning.

but it has to be num_bufs == num_packet segments

 
 
 
  
   I think the cleanest solution would be to have the platform segment size
   for a given pool accessible as pool metadata, e.g.,
   odp_pool_seg_size(pool), but the real issue is why does the application
   want this information?  If an application wants to ensure that packets
  are
   unsegmented then the simplest solution is to re-introduce the notion of
   unsegmented pools.  If an application creates an unsegmented pool then by
   definition any object allocated from that pool will only consist of a
   single segment.  By contrast, if the application is designed to support
   segments then it shouldn't care.
 
  IMO, its simple to add a ODP_CONFIG or odp_packet_alloc of len == 0 for
  default packet size
 
 
 ODP_CONFIG is how we're doing things now.  More specific configurations
 should be doable on a per-pool basis (subject to implementation
 restrictions) given an expanded odp_pool_param_t definition.
 
 
 
  
   On Mon, Jan 19, 2015 at 3:27 AM, Jerin Jacob 
  jerin.ja...@caviumnetworks.com
wrote:
  
On Sat, Jan 17, 2015 at 09:45:12AM -0600, Bill Fischofer wrote:
 Application-visible sizes refer to application-visible data.
  Metadata is
 always implementation-specific and not included in such counts.
  Metadata
 is off books data that is associated with the packet but is not
  part of
 any addressable packet storage. The advantage of having a packet
  object
is
 that the packet APIs can refer to the packet independent of any
 implementation and not to how the packet may be represented in
  storage
on a
 particular platform.
   
But coming back to my question, How an application can create a one
  segment
full length packet ?
Following equation may not be correct in all platforms
packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN -
  ODP_CONFIG_PACKET_HEADROOM -
ODP_CONFIG_PACKET_TAILROOM;
   
   

 Trying to reason about buffers that are used to store packet data is
 inherently non-portable and should be discouraged. Hopefully the
  switch
to
 events will help move us in that direction since packets are no
  longer a
 type of buffer using the new nomenclature.
   
Should we remove  odp_buffer_size(buf) == odp_packet_buf_len(pkt)) test
case
or wait for event rework to happen ?
   

 On Sat, Jan 17, 2015 at 5:52 AM, Jacob, Jerin 
 jerin.ja...@caviumnetworks.com wrote:

  Some odp_packet API queries based on exiting odp packet unit test
  case,
 
  1) In exiting odp packet unit test case, In order to create one
  full
  length packet in one segment,
  We have used following formula,
  packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN -
ODP_CONFIG_PACKET_HEADROOM -
  ODP_CONFIG_PACKET_TAILROOM;
 
  This may not be valid in all platform if the packet segment has
  segment
  specific meta data.
  I think, we need to create either new ODP_CONFIG to define the
  default
  packet size
  or odp_packet_alloc of len == 0 can be used to create default
  packet
size.
 
  2) If buffer is NOT aware of segmentation then
  odp_buffer_size(buf) of
  packet should be ODP_CONFIG_PACKET_BUF_LEN_MIN
  instead of odp_buffer_size(buf) == odp_packet_buf_len(pkt)) .
 
  

Re: [lng-odp] [PATCH] helper: ip: add IP protocol value for sctp

2015-01-19 Thread Maxim Uvarov

On 01/19/2015 02:44 PM, Jerin Jacob wrote:

Signed-off-by: Jerin Jacob jerin.ja...@caviumnetworks.com
---
  helper/include/odph_ip.h | 1 +
  1 file changed, 1 insertion(+)

diff --git a/helper/include/odph_ip.h b/helper/include/odph_ip.h
index 272fd96..f2638ba 100644
--- a/helper/include/odph_ip.h
+++ b/helper/include/odph_ip.h
@@ -167,6 +167,7 @@ typedef struct ODP_PACKED {
  #define ODPH_IPPROTO_FRAG0x2C /** IPv6 Fragment (44) */
  #define ODPH_IPPROTO_AH  0x33 /** Authentication Header (51) */
  #define ODPH_IPPROTO_ESP 0x32 /** Encapsulating Security Payload (50) */
+#define ODPH_IPPROTO_SCTP0x84 /** Stream Control Transmission (132) */
  #define ODPH_IPPROTO_INVALID 0xFF /** Reserved invalid by IANA */
  
  /**@}*/
We planned to remove the IP structures and defines from the ODP headers and use
the system ones.

If nobody uses that define for now, can you, in your case, use it from
/usr/include/netinet/in.h:
IPPROTO_SCTP = 132,	/* Stream Control Transmission Protocol.  */

Regards,
Maxim.





Re: [lng-odp] odp_packet API queries

2015-01-19 Thread Bill Fischofer
On Mon, Jan 19, 2015 at 7:22 AM, Jerin Jacob jerin.ja...@caviumnetworks.com
 wrote:

 On Mon, Jan 19, 2015 at 06:09:34AM -0600, Bill Fischofer wrote:
  I think Petri should weigh in on these questions.  For the first one,
 what
  problems do you anticipate some platforms having with that equation?

 I have two issues around the unit test case,
 1) packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN - ODP_CONFIG_PACKET_HEADROOM
 -
 ODP_CONFIG_PACKET_TAILROOM creates two segments in my platform and
 tailroom/headroom expects
 to work within a segment ?


Can you elaborate on why this is the case?  The intent here was to define
what constituted a single segment so if it's not accomplishing that goal it
would be useful to understand why not.



 2) pool creation with number of buffers as one and creating a segmented
 buffers as
 packet_len is more than one segment.


A packet (I use that term here since in our current definition only packets
can support segmentation or headroom) is an object that consists of packet
metadata plus packet data.  Packet data is stored in one or more segments,
depending on how the pool it is allocated from is created, but independent
of the number of segments used to store this data it is still a single
packet.  So num_bufs (which will presumably be num_packets in the new pool
definitions) always has a precise meaning.



 
  I think the cleanest solution would be to have the platform segment size
  for a given pool accessible as pool metadata, e.g.,
  odp_pool_seg_size(pool), but the real issue is why does the application
  want this information?  If an application wants to ensure that packets
 are
  unsegmented then the simplest solution is to re-introduce the notion of
  unsegmented pools.  If an application creates an unsegmented pool then by
  definition any object allocated from that pool will only consist of a
  single segment.  By contrast, if the application is designed to support
  segments then it shouldn't care.

 IMO, its simple to add a ODP_CONFIG or odp_packet_alloc of len == 0 for
 default packet size


ODP_CONFIG is how we're doing things now.  More specific configurations
should be doable on a per-pool basis (subject to implementation
restrictions) given an expanded odp_pool_param_t definition.



 
  On Mon, Jan 19, 2015 at 3:27 AM, Jerin Jacob 
 jerin.ja...@caviumnetworks.com
   wrote:
 
   On Sat, Jan 17, 2015 at 09:45:12AM -0600, Bill Fischofer wrote:
Application-visible sizes refer to application-visible data.
 Metadata is
always implementation-specific and not included in such counts.
 Metadata
is off books data that is associated with the packet but is not
 part of
any addressable packet storage. The advantage of having a packet
 object
   is
that the packet APIs can refer to the packet independent of any
implementation and not to how the packet may be represented in
 storage
   on a
particular platform.
  
   But coming back to my question, How an application can create a one
 segment
   full length packet ?
   Following equation may not be correct in all platforms
   packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN -
 ODP_CONFIG_PACKET_HEADROOM -
   ODP_CONFIG_PACKET_TAILROOM;
  
  
   
Trying to reason about buffers that are used to store packet data is
inherently non-portable and should be discouraged. Hopefully the
 switch
   to
events will help move us in that direction since packets are no
 longer a
type of buffer using the new nomenclature.
  
   Should we remove  odp_buffer_size(buf) == odp_packet_buf_len(pkt)) test
   case
   or wait for event rework to happen ?
  
   
On Sat, Jan 17, 2015 at 5:52 AM, Jacob, Jerin 
jerin.ja...@caviumnetworks.com wrote:
   
 Some odp_packet API queries based on exiting odp packet unit test
 case,

 1) In exiting odp packet unit test case, In order to create one
 full
 length packet in one segment,
 We have used following formula,
 packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN -
   ODP_CONFIG_PACKET_HEADROOM -
 ODP_CONFIG_PACKET_TAILROOM;

 This may not be valid in all platform if the packet segment has
 segment
 specific meta data.
 I think, we need to create either new ODP_CONFIG to define the
 default
 packet size
 or odp_packet_alloc of len == 0 can be used to create default
 packet
   size.

 2) If buffer is NOT aware of segmentation then
 odp_buffer_size(buf) of
 packet should be ODP_CONFIG_PACKET_BUF_LEN_MIN
 instead of odp_buffer_size(buf) == odp_packet_buf_len(pkt)) .

 Any thoughts ?

 - Jerin

  



Re: [lng-odp] [PATCH 3/3] validation: buffer: enable packet validation test to run on SW emulated odp packet pool on HW

2015-01-19 Thread Bill Fischofer
Are you suggesting that the issue here is that some platforms may not be able
to allow applications precise control over the number of buffers in a
pool?  For example, they might have a minimum number of buffers or a
minimum buffer count granularity?  Do we need additional ODP_CONFIG values
to capture this?

On Mon, Jan 19, 2015 at 2:58 AM, Jacob, Jerin 
jerin.ja...@caviumnetworks.com wrote:

 I agree. Even then some hardware caches packet buffers for packet input
 hardware subsystem
 so coming up with negative test case for v1.0 may not be good idea.

 From: Bill Fischofer bill.fischo...@linaro.org
 Sent: Saturday, January 17, 2015 9:03 PM
 To: Jacob, Jerin
 Cc: LNG ODP Mailman List
 Subject: Re: [lng-odp] [PATCH 3/3] validation: buffer: enable packet
 validation test to run on SW emulated odp packet pool on HW


 Wouldn't platforms that implement virtual packet pools also implement
 virtual allocation limits?  Otherwise how would you prevent one logical
 pool from consuming the entire physical pool?  In this case it would seem
 the check would still be valid  since the pool_ids are different
 independent of how the pools are implemented.


 On Sat, Jan 17, 2015 at 5:29 AM, Jerin Jacob  
 jerin.ja...@caviumnetworks.com wrote:
  If a platform is limited to one HW packet pool then odp implementation
 can implement the virtual odp packet pools using same the HW packet
 pool(if the block size is same)
 In this specific test case has created a packet buffer pool on init with
 100 buffers
 and later a packet buffer pool of one buffer. So in this specific case
 assumption of later pool
 have only one buffer is not valid.

 Signed-off-by: Jerin Jacob jerin.ja...@caviumnetworks.com
 ---
  test/validation/buffer/odp_packet_test.c | 2 --
  1 file changed, 2 deletions(-)

 diff --git a/test/validation/buffer/odp_packet_test.c
 b/test/validation/buffer/odp_packet_test.c
 index 7c2b169..86b6a04 100644
 --- a/test/validation/buffer/odp_packet_test.c
 +++ b/test/validation/buffer/odp_packet_test.c
 @@ -58,8 +58,6 @@ static void packet_alloc_free(void)
 packet = odp_packet_alloc(pool, packet_len);
 CU_ASSERT_FATAL(packet != ODP_PACKET_INVALID);
 CU_ASSERT(odp_packet_len(packet) == packet_len);
 -   /** @todo: is it correct to assume the pool had only one buffer? */
 -   CU_ASSERT_FATAL(odp_packet_alloc(pool, packet_len) ==
 ODP_PACKET_INVALID)

 odp_packet_free(packet);

 --
 1.9.3







Re: [lng-odp] odp_packet API queries

2015-01-19 Thread Bill Fischofer
I think Petri should weigh in on these questions.  For the first one, what
problems do you anticipate some platforms having with that equation?

I think the cleanest solution would be to have the platform segment size
for a given pool accessible as pool metadata, e.g.,
odp_pool_seg_size(pool), but the real issue is why does the application
want this information?  If an application wants to ensure that packets are
unsegmented then the simplest solution is to re-introduce the notion of
unsegmented pools.  If an application creates an unsegmented pool then by
definition any object allocated from that pool will only consist of a
single segment.  By contrast, if the application is designed to support
segments then it shouldn't care.
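
To make the first suggestion concrete, a sketch of the kind of query being
proposed; odp_pool_seg_size() is purely hypothetical and not part of the ODP API:

/* Hypothetical API sketch, not in ODP: expose the physical segment size
 * of a pool as read-only pool metadata. */
uint32_t odp_pool_seg_size(odp_buffer_pool_t pool);

/* An application that insists on unsegmented packets could then do: */
static odp_packet_t alloc_unsegmented(odp_buffer_pool_t pool, uint32_t len)
{
	if (len > odp_pool_seg_size(pool))
		return ODP_PACKET_INVALID; /* would not fit in one segment */

	return odp_packet_alloc(pool, len);
}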

On Mon, Jan 19, 2015 at 3:27 AM, Jerin Jacob jerin.ja...@caviumnetworks.com
 wrote:

 On Sat, Jan 17, 2015 at 09:45:12AM -0600, Bill Fischofer wrote:
  Application-visible sizes refer to application-visible data.  Metadata is
  always implementation-specific and not included in such counts.  Metadata
  is off books data that is associated with the packet but is not part of
  any addressable packet storage. The advantage of having a packet object
 is
  that the packet APIs can refer to the packet independent of any
  implementation and not to how the packet may be represented in storage
 on a
  particular platform.

 But coming back to my question, How an application can create a one segment
 full length packet ?
 Following equation may not be correct in all platforms
 packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN - ODP_CONFIG_PACKET_HEADROOM -
 ODP_CONFIG_PACKET_TAILROOM;


 
  Trying to reason about buffers that are used to store packet data is
  inherently non-portable and should be discouraged. Hopefully the switch
 to
  events will help move us in that direction since packets are no longer a
  type of buffer using the new nomenclature.

 Should we remove  odp_buffer_size(buf) == odp_packet_buf_len(pkt)) test
 case
 or wait for event rework to happen ?

 
  On Sat, Jan 17, 2015 at 5:52 AM, Jacob, Jerin 
  jerin.ja...@caviumnetworks.com wrote:
 
   Some odp_packet API queries based on exiting odp packet unit test case,
  
   1) In exiting odp packet unit test case, In order to create one full
   length packet in one segment,
   We have used following formula,
   packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN -
 ODP_CONFIG_PACKET_HEADROOM -
   ODP_CONFIG_PACKET_TAILROOM;
  
   This may not be valid in all platform if the packet segment has segment
   specific meta data.
   I think, we need to create either new ODP_CONFIG to define the default
   packet size
   or odp_packet_alloc of len == 0 can be used to create default packet
 size.
  
   2) If buffer is NOT aware of segmentation then odp_buffer_size(buf) of
   packet should be ODP_CONFIG_PACKET_BUF_LEN_MIN
   instead of odp_buffer_size(buf) == odp_packet_buf_len(pkt)) .
  
   Any thoughts ?
  
   - Jerin
  



Re: [lng-odp] odp_packet API queries

2015-01-19 Thread Jerin Jacob
On Sat, Jan 17, 2015 at 09:45:12AM -0600, Bill Fischofer wrote:
 Application-visible sizes refer to application-visible data.  Metadata is
 always implementation-specific and not included in such counts.  Metadata
 is off books data that is associated with the packet but is not part of
 any addressable packet storage. The advantage of having a packet object is
 that the packet APIs can refer to the packet independent of any
 implementation and not to how the packet may be represented in storage on a
 particular platform.

But coming back to my question: how can an application create a one-segment,
full-length packet?
The following equation may not be correct on all platforms:
packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN - ODP_CONFIG_PACKET_HEADROOM -
ODP_CONFIG_PACKET_TAILROOM;


 
 Trying to reason about buffers that are used to store packet data is
 inherently non-portable and should be discouraged. Hopefully the switch to
 events will help move us in that direction since packets are no longer a
 type of buffer using the new nomenclature.

Should we remove the odp_buffer_size(buf) == odp_packet_buf_len(pkt) test case,
or wait for the event rework to happen?

 
 On Sat, Jan 17, 2015 at 5:52 AM, Jacob, Jerin 
 jerin.ja...@caviumnetworks.com wrote:
 
  Some odp_packet API queries based on exiting odp packet unit test case,
 
  1) In exiting odp packet unit test case, In order to create one full
  length packet in one segment,
  We have used following formula,
  packet_len = ODP_CONFIG_PACKET_BUF_LEN_MIN - ODP_CONFIG_PACKET_HEADROOM -
  ODP_CONFIG_PACKET_TAILROOM;
 
  This may not be valid in all platform if the packet segment has segment
  specific meta data.
  I think, we need to create either new ODP_CONFIG to define the default
  packet size
  or odp_packet_alloc of len == 0 can be used to create default packet size.
 
  2) If buffer is NOT aware of segmentation then odp_buffer_size(buf) of
  packet should be ODP_CONFIG_PACKET_BUF_LEN_MIN
  instead of odp_buffer_size(buf) == odp_packet_buf_len(pkt)) .
 
  Any thoughts ?
 
  - Jerin
 



Re: [lng-odp] [PATCH 00/15] Event introduction

2015-01-19 Thread Ola Liljedahl
On 17 January 2015 at 16:28, Bill Fischofer bill.fischo...@linaro.org wrote:
 In the new model no buffers are queueable.  Only events are queueable.
But you can convert a buffer handle to the corresponding event handle
and enqueue the event. Thus buffers must have the metadata that allows
them to be enqueued and scheduled.

 Events can contain buffers, packets, timeouts, etc., but are logically
 distinct from them as they are the queueable/schedulable entity.

 On Sat, Jan 17, 2015 at 6:16 AM, Jerin Jacob
 jerin.ja...@caviumnetworks.com wrote:

 On Thu, Jan 15, 2015 at 05:40:08PM +0200, Petri Savolainen wrote:
  This patches introduces odp_event_t and replaces with that the usage of
  odp_buffer_t as the super class for other buffer types. What used to
  be a
  buffer type is now an event type.

 Should we also introduce a new buffer type that NOT queueable to odp_queue
 ?
 If some application is using the buffers only for the storage then I think
 this
 new type make sense as queueable buffers will take more resources and
 additional meta data(for hardware buffer schedule manager)
Wouldn't such buffers be more related to some type of malloc-like
memory manager?
ODP could use a multicore friendly memory manager for variable size
objects. Could such a memory manager be implemented by HW and thus be
considered as a part of the SoC abstraction layer? Or would it be a
pure SW construct and basically just a utility library?



 
  There are some lines over 80 char, since those are caused by temporary
  event - buffer, packet - event - buffer conversions and should be
  cleaned up
  from the implementation anyway.
 
  Petri Savolainen (15):
api: event: Add odp_event_t
api: event: odp_schedule and odp_queue_enq
api: event: schedule_multi and queue_enq_multi
api: event: odp_queue_deq
api: event: odp_queue_deq_multi
api: buffer: Removed odp_buffer_type
api: packet: Removed odp_packet_to_buffer
api: packet: Removed odp_packet_from_buffer
api: timer: Use odp_event_t instead of odp_buffer_t
api: crypto: Use odp_event_t instead of odp_buffer_t
linux-generic: crypto: Use packet alloc for packet
api: buffer_pool: Rename odp_buffer_pool.h to odp_pool.h
api: pool: Rename pool params and remove buffer types
api: pool: Rename odp_buffer_pool_ to odp_pool_
api: config: Renamed ODP_CONFIG_BUFFER_POOLS
 
   example/generator/odp_generator.c  |  38 ++---
   example/ipsec/odp_ipsec.c  |  70 
   example/ipsec/odp_ipsec_cache.c|   4 +-
   example/ipsec/odp_ipsec_cache.h|   2 +-
   example/ipsec/odp_ipsec_loop_db.c  |   2 +-
   example/ipsec/odp_ipsec_loop_db.h  |  12 +-
   example/ipsec/odp_ipsec_stream.c   |  20 +--
   example/ipsec/odp_ipsec_stream.h   |   2 +-
   example/l2fwd/odp_l2fwd.c  |  28 +--
   example/packet/odp_pktio.c |  28 +--
   example/timer/odp_timer_test.c |  64 +++
   platform/linux-generic/Makefile.am |   4 +-
   platform/linux-generic/include/api/odp.h   |   3 +-
   platform/linux-generic/include/api/odp_buffer.h|  40 +++--
   .../linux-generic/include/api/odp_buffer_pool.h| 177
  ---
   .../linux-generic/include/api/odp_classification.h |   2 +-
   platform/linux-generic/include/api/odp_config.h|   4 +-
   platform/linux-generic/include/api/odp_crypto.h|  16 +-
   platform/linux-generic/include/api/odp_event.h |  59 +++
   platform/linux-generic/include/api/odp_packet.h|  29 ++--
   platform/linux-generic/include/api/odp_packet_io.h |   4 +-
   .../linux-generic/include/api/odp_platform_types.h |  10 +-
   platform/linux-generic/include/api/odp_pool.h  | 189
  +
   platform/linux-generic/include/api/odp_queue.h |  32 ++--
   platform/linux-generic/include/api/odp_schedule.h  |  32 ++--
   platform/linux-generic/include/api/odp_timer.h |  56 +++---
   .../linux-generic/include/odp_buffer_inlines.h |   6 +-
   .../linux-generic/include/odp_buffer_internal.h|  20 ++-
   .../include/odp_buffer_pool_internal.h |  22 +--
   .../linux-generic/include/odp_crypto_internal.h|   2 +-
   .../linux-generic/include/odp_packet_internal.h|   8 +-
   platform/linux-generic/include/odp_packet_socket.h |  10 +-
   platform/linux-generic/odp_buffer.c|  12 +-
   platform/linux-generic/odp_buffer_pool.c   | 133
  +++
   platform/linux-generic/odp_crypto.c|  29 ++--
   platform/linux-generic/odp_event.c |  19 +++
   platform/linux-generic/odp_packet.c|  34 ++--
   platform/linux-generic/odp_packet_io.c |  14 +-
   platform/linux-generic/odp_packet_socket.c |  10 +-
   platform/linux-generic/odp_queue.c |  18 

Re: [lng-odp] [PATCH 00/15] Event introduction

2015-01-19 Thread Jerin Jacob
On Mon, Jan 19, 2015 at 11:26:04AM +0100, Ola Liljedahl wrote:
 On 17 January 2015 at 16:28, Bill Fischofer bill.fischo...@linaro.org wrote:
  In the new model no buffers are queueable.  Only events are queueable.
 But you can convert a buffer handle to the corresponding event handle
 and enqueue the event. Thus buffers must have the metadata that allows
 them to be enqueued and scheduled.

Then it's just like a queueable buffer.


 
  Events can contain buffers, packets, timeouts, etc., but are logically
  distinct from them as they are the queueable/schedulable entity.
 
  On Sat, Jan 17, 2015 at 6:16 AM, Jerin Jacob
  jerin.ja...@caviumnetworks.com wrote:
 
  On Thu, Jan 15, 2015 at 05:40:08PM +0200, Petri Savolainen wrote:
   This patches introduces odp_event_t and replaces with that the usage of
   odp_buffer_t as the super class for other buffer types. What used to
   be a
   buffer type is now an event type.
 
  Should we also introduce a new buffer type that NOT queueable to odp_queue
  ?
  If some application is using the buffers only for the storage then I think
  this
  new type make sense as queueable buffers will take more resources and
  additional meta data(for hardware buffer schedule manager)
 Wouldn't such buffers be more related to some type of malloc-like
 memory manager?
 ODP could use a multicore friendly memory manager for variable size
 objects. Could such a memory manager be implemented by HW and thus be
 considered as a part of the SoC abstraction layer? Or would it be a
 pure SW construct and basically just a utility library?

No, I was considering the abstraction for fixed-size buffer pools only.
The new type could be used to allocate a buffer pool from a hardware fixed-size
buffer manager without any of the metadata needed for queueing. Something like:

pool = odp_buffer_pool_create();
odp_buffer_t x = odp_buffer_alloc(pool); // for queueable buffers

odp_buffer_xxx_t x = odp_buffer_xxx_alloc(pool); // for non-queueable buffers, only for storage




 
 
 
  
   There are some lines over 80 char, since those are caused by temporary
   event - buffer, packet - event - buffer conversions and should be
   cleaned up
   from the implementation anyway.
  
   Petri Savolainen (15):
 api: event: Add odp_event_t
 api: event: odp_schedule and odp_queue_enq
 api: event: schedule_multi and queue_enq_multi
 api: event: odp_queue_deq
 api: event: odp_queue_deq_multi
 api: buffer: Removed odp_buffer_type
 api: packet: Removed odp_packet_to_buffer
 api: packet: Removed odp_packet_from_buffer
 api: timer: Use odp_event_t instead of odp_buffer_t
 api: crypto: Use odp_event_t instead of odp_buffer_t
 linux-generic: crypto: Use packet alloc for packet
 api: buffer_pool: Rename odp_buffer_pool.h to odp_pool.h
 api: pool: Rename pool params and remove buffer types
 api: pool: Rename odp_buffer_pool_ to odp_pool_
 api: config: Renamed ODP_CONFIG_BUFFER_POOLS
  
example/generator/odp_generator.c  |  38 ++---
example/ipsec/odp_ipsec.c  |  70 
example/ipsec/odp_ipsec_cache.c|   4 +-
example/ipsec/odp_ipsec_cache.h|   2 +-
example/ipsec/odp_ipsec_loop_db.c  |   2 +-
example/ipsec/odp_ipsec_loop_db.h  |  12 +-
example/ipsec/odp_ipsec_stream.c   |  20 +--
example/ipsec/odp_ipsec_stream.h   |   2 +-
example/l2fwd/odp_l2fwd.c  |  28 +--
example/packet/odp_pktio.c |  28 +--
example/timer/odp_timer_test.c |  64 +++
platform/linux-generic/Makefile.am |   4 +-
platform/linux-generic/include/api/odp.h   |   3 +-
platform/linux-generic/include/api/odp_buffer.h|  40 +++--
.../linux-generic/include/api/odp_buffer_pool.h| 177
   ---
.../linux-generic/include/api/odp_classification.h |   2 +-
platform/linux-generic/include/api/odp_config.h|   4 +-
platform/linux-generic/include/api/odp_crypto.h|  16 +-
platform/linux-generic/include/api/odp_event.h |  59 +++
platform/linux-generic/include/api/odp_packet.h|  29 ++--
platform/linux-generic/include/api/odp_packet_io.h |   4 +-
.../linux-generic/include/api/odp_platform_types.h |  10 +-
platform/linux-generic/include/api/odp_pool.h  | 189
   +
platform/linux-generic/include/api/odp_queue.h |  32 ++--
platform/linux-generic/include/api/odp_schedule.h  |  32 ++--
platform/linux-generic/include/api/odp_timer.h |  56 +++---
.../linux-generic/include/odp_buffer_inlines.h |   6 +-
.../linux-generic/include/odp_buffer_internal.h|  20 ++-
.../include/odp_buffer_pool_internal.h |  22 +--
.../linux-generic/include/odp_crypto_internal.h|   2 +-

[lng-odp] [PATCH] linux-generic: implement of odp_term_global.

2015-01-19 Thread Yan Songming
From: Yan Sonming yan.songm...@linaro.org

Free all resources of ODP, including shared memory, queues and buffer pools.
Fix a bug in odp_shm_free.

Signed-off-by: Yan Songming yan.songm...@linaro.org
---
 platform/linux-generic/include/odp_internal.h | 10 +++
 platform/linux-generic/odp_buffer_pool.c  | 88 +++
 platform/linux-generic/odp_classification.c   | 42 ++---
 platform/linux-generic/odp_crypto.c   | 12 
 platform/linux-generic/odp_init.c | 43 -
 platform/linux-generic/odp_packet_io.c| 35 ---
 platform/linux-generic/odp_queue.c| 24 
 platform/linux-generic/odp_schedule.c | 34 +--
 platform/linux-generic/odp_shared_memory.c| 12 +++-
 platform/linux-generic/odp_thread.c   | 13 
 10 files changed, 263 insertions(+), 50 deletions(-)

diff --git a/platform/linux-generic/include/odp_internal.h 
b/platform/linux-generic/include/odp_internal.h
index 549d406..d46f5ef 100644
--- a/platform/linux-generic/include/odp_internal.h
+++ b/platform/linux-generic/include/odp_internal.h
@@ -24,23 +24,33 @@ int odp_system_info_init(void);
 int odp_thread_init_global(void);
 int odp_thread_init_local(void);
 int odp_thread_term_local(void);
+int odp_thread_term_global(void);
 
 int odp_shm_init_global(void);
+int odp_shm_term_global(void);
 int odp_shm_init_local(void);
 
 int odp_buffer_pool_init_global(void);
+int odp_buffer_pool_term_global(void);
+int odp_buffer_pool_term_local(void);
 
 int odp_pktio_init_global(void);
+int odp_pktio_term_global(void);
 int odp_pktio_init_local(void);
 
 int odp_classification_init_global(void);
+int odp_classification_term_global(void);
 
 int odp_queue_init_global(void);
+int odp_queue_term_global(void);
 
 int odp_crypto_init_global(void);
+int odp_crypto_term_global(void);
 
 int odp_schedule_init_global(void);
+int odp_schedule_term_global(void);
 int odp_schedule_init_local(void);
+int odp_schedule_term_local(void);
 
 int odp_timer_init_global(void);
 int odp_timer_disarm_all(void);
diff --git a/platform/linux-generic/odp_buffer_pool.c 
b/platform/linux-generic/odp_buffer_pool.c
index eedb380..85e99e2 100644
--- a/platform/linux-generic/odp_buffer_pool.c
+++ b/platform/linux-generic/odp_buffer_pool.c
@@ -55,6 +55,7 @@ typedef struct pool_table_t {
 
 /* The pool table */
 static pool_table_t *pool_tbl;
+static const char shm_name[] = "odp_buffer_pools";
 
 /* Pool entry pointers (for inlining) */
 void *pool_entry_ptr[ODP_CONFIG_BUFFER_POOLS];
@@ -67,7 +68,7 @@ int odp_buffer_pool_init_global(void)
uint32_t i;
odp_shm_t shm;
 
-   shm = odp_shm_reserve("odp_buffer_pools",
+   shm = odp_shm_reserve(shm_name,
  sizeof(pool_table_t),
  sizeof(pool_entry_t), 0);
 
@@ -95,13 +96,48 @@ int odp_buffer_pool_init_global(void)
return 0;
 }
 
+int odp_buffer_pool_term_global(void)
+{
+   odp_shm_t shm;
+   int i;
+   pool_entry_t *pool;
+   int ret = 0;
+
+   for (i = 0; i < ODP_CONFIG_BUFFER_POOLS; i++) {
+   pool = get_pool_entry(i);
+
+   POOL_LOCK(&pool->s.lock);
+   if (pool->s.pool_shm != ODP_SHM_INVALID) {
+   ODP_ERR("Not destroyed pool: %s\n", pool->s.name);
+   ret = -1;
+   }
+   POOL_UNLOCK(&pool->s.lock);
+   }
+   if (ret)
+   return ret;
+
+   shm = odp_shm_lookup(shm_name);
+   if (shm == ODP_SHM_INVALID)
+   return -1;
+   ret = odp_shm_free(shm);
+
+   return ret;
+}
+
+int odp_buffer_pool_term_local(void)
+{
+   _odp_flush_caches();
+   return 0;
+}
+
+
 /**
  * Buffer pool creation
  */
 
 odp_buffer_pool_t odp_buffer_pool_create(const char *name,
-odp_shm_t shm,
-odp_buffer_pool_param_t *params)
+   odp_shm_t shm,
+   odp_buffer_pool_param_t *params)
 {
odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID;
pool_entry_t *pool;
@@ -127,8 +163,8 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 
/* Restriction for v1.0: No udata support */
	uint32_t udata_stride = (init_params->udata_size > sizeof(void *)) ?
-   ODP_CACHE_LINE_SIZE_ROUNDUP(init_params->udata_size) :
-   0;
+   ODP_CACHE_LINE_SIZE_ROUNDUP(init_params->udata_size) :
+   0;
 
uint32_t blk_size, buf_stride;
uint32_t buf_align = params-buf_align;
@@ -155,8 +191,8 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
blk_size = ODP_ALIGN_ROUNDUP(blk_size, buf_align);
 
	buf_stride = params->buf_type == ODP_BUFFER_TYPE_RAW ?
-   sizeof(odp_buffer_hdr_stride) :
-   sizeof(odp_timeout_hdr_stride);
+

[lng-odp] [PATCHv2] linux-generic: Add odp_errno and adapt packet_io and timer implementations to use it

2015-01-19 Thread Mario Torrecillas Rodriguez
Added odp_errno.c and odp_errno.h
Changed odp_packet_io and odp_timer to use it.

Signed-off-by: Mario Torrecillas Rodriguez mario.torrecillasrodrig...@arm.com
---
(This code contribution is provided under the terms of agreement LES-LTM-21309)

Changes from previous version:
* Moved __odp_errno declaration to odp_internal.h
* Addressed other minor issues mentioned in the review

 platform/linux-generic/Makefile.am |  2 +
 platform/linux-generic/include/api/odp_errno.h | 61 ++
 platform/linux-generic/include/odp_internal.h  |  1 +
 platform/linux-generic/odp_errno.c | 35 +++
 platform/linux-generic/odp_packet_io.c |  4 +-
 platform/linux-generic/odp_packet_socket.c | 17 +++
 platform/linux-generic/odp_timer.c |  5 ++-
 7 files changed, 121 insertions(+), 4 deletions(-)
 create mode 100644 platform/linux-generic/include/api/odp_errno.h
 create mode 100644 platform/linux-generic/odp_errno.c

diff --git a/platform/linux-generic/Makefile.am 
b/platform/linux-generic/Makefile.am
index a699ea6..1b71b71 100644
--- a/platform/linux-generic/Makefile.am
+++ b/platform/linux-generic/Makefile.am
@@ -19,6 +19,7 @@ include_HEADERS = \
  
$(top_srcdir)/platform/linux-generic/include/api/odp_cpumask.h \
  $(top_srcdir)/platform/linux-generic/include/api/odp_crypto.h 
\
  $(top_srcdir)/platform/linux-generic/include/api/odp_debug.h \
+ $(top_srcdir)/platform/linux-generic/include/api/odp_errno.h \
  $(top_srcdir)/platform/linux-generic/include/api/odp_hints.h \
  $(top_srcdir)/platform/linux-generic/include/api/odp_init.h \
  
$(top_srcdir)/platform/linux-generic/include/api/odp_packet_flags.h \
@@ -80,6 +81,7 @@ __LIB__libodp_la_SOURCES = \
   odp_classification.c \
   odp_cpumask.c \
   odp_crypto.c \
+  odp_errno.c \
   odp_init.c \
   odp_impl.c \
   odp_linux.c \
diff --git a/platform/linux-generic/include/api/odp_errno.h 
b/platform/linux-generic/include/api/odp_errno.h
new file mode 100644
index 000..a01319d
--- /dev/null
+++ b/platform/linux-generic/include/api/odp_errno.h
@@ -0,0 +1,61 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier:BSD-3-Clause
+ */
+
+/**
+ * @file
+ *
+ * ODP errno API
+ */
+
+#ifndef ODP_ERRNO_H_
+#define ODP_ERRNO_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include errno.h
+
+/**
+* Return latest ODP errno
+*
+* @retval 0 == no error
+*/
+int odp_errno(void);
+
+/**
+* Set ODP errno to zero
+*/
+void odp_errno_zero(void);
+
+/**
+* Print ODP errno
+*
+* Interprets the value of ODP errno as an error message, and prints it,
+* optionally preceding it with the custom message specified in str.
+*
+* @param str   NULL, or pointer to the string to be appended
+*/
+void odp_errno_print(const char *str);
+
+/**
+* Error message string
+*
+* Interprets the value of ODP errno, generating a string with a
+* message that describes the error.
+* It uses the system definition of errno.
+*
+* @param errnum	Error code
+*
+* @retval  Pointer to the string
+*/
+const char *odp_errno_str(int errnum);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/platform/linux-generic/include/odp_internal.h 
b/platform/linux-generic/include/odp_internal.h
index 549d406..b953163 100644
--- a/platform/linux-generic/include/odp_internal.h
+++ b/platform/linux-generic/include/odp_internal.h
@@ -18,6 +18,7 @@
 extern "C" {
 #endif
 
+extern __thread int __odp_errno;
 
 int odp_system_info_init(void);
 
diff --git a/platform/linux-generic/odp_errno.c 
b/platform/linux-generic/odp_errno.c
new file mode 100644
index 000..ba080e7
--- /dev/null
+++ b/platform/linux-generic/odp_errno.c
@@ -0,0 +1,35 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier:BSD-3-Clause
+ */
+
+#include <odp_errno.h>
+#include <odp_internal.h>
+#include <string.h>
+#include <stdio.h>
+
+__thread int __odp_errno;
+
+int odp_errno(void)
+{
+   return __odp_errno;
+}
+
+void odp_errno_zero(void)
+{
+   __odp_errno = 0;
+}
+
+void odp_errno_print(const char *str)
+{
+   if (str != NULL)
+   printf("%s ", str);
+
+   printf("%s\n", strerror(__odp_errno));
+}
+
+const char *odp_errno_str(int errnum)
+{
+   return strerror(errnum);
+}
diff --git a/platform/linux-generic/odp_packet_io.c b/platform/linux-generic/odp_packet_io.c
index cd109d2..c1c79d4 100644
--- a/platform/linux-generic/odp_packet_io.c
+++ b/platform/linux-generic/odp_packet_io.c
@@ -18,12 +18,12 @@
 #include <odp_schedule_internal.h>
 #include <odp_classification_internal.h>
 #include <odp_debug_internal.h>
+#include <odp_errno.h>
 
 #include <string.h>
 #include 
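
For reference, a minimal usage sketch of the errno API declared above; this is
illustrative only and not part of the patch, and the failing ODP call is left
as a comment:

#include <odp.h>
#include <odp_errno.h>

/* Sketch: clear, check and report the ODP errno around a failing operation. */
static void report_failure(const char *dev)
{
	odp_errno_zero();                 /* start from a clean state */

	/* ... e.g. a packet I/O open on 'dev' that fails goes here ... */

	if (odp_errno() != 0)
		odp_errno_print(dev);     /* prints "<dev> <error message>" */
}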

Re: [lng-odp] [PATCH 00/15] Event introduction

2015-01-19 Thread Savolainen, Petri (NSN - FI/Espoo)
 
 No, I was considering the abstraction for the fixed size buffer pool only.
 The new type can be used to allocate a buffer pool from a hardware fixed
 size buffer manager without any metadata for queueing. Something like:
 
 pool = odp_buffer_pool_create();
 odp_buffer_t x = odp_buffer_alloc(pool); // for queueable buffers
 
 odp_buffer_xxx_t x = odp_buffer_xxx_alloc(pool); // for non-queueable buffers, only for storage


This can be defined after v1.0. I already separated event (ODP_EVENT_XXX) and 
pool type (ODP_POOL_XXX) defines for rev2 of the event patch. With that, there 
can be event types that do not have a matching pool type (no pool, no alloc 
call), and pool types that do not have a matching event type (no xxx_to_event 
call = no queues, no scheduling).
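
For illustration only, a rough sketch of what the separated defines could look
like (names and values here are made up, not the actual rev2 defines):

/* Event types: what can arrive through a queue or the scheduler. */
#define ODP_EVENT_BUFFER      1
#define ODP_EVENT_PACKET      2
#define ODP_EVENT_STATUS      3  /* hypothetical: an event type with no matching
                                    pool type, i.e. no pool and no alloc call */

/* Pool types: what a pool can be created for and allocated from. */
#define ODP_POOL_BUFFER       1
#define ODP_POOL_PACKET       2
#define ODP_POOL_RAW_STORAGE  3  /* hypothetical: a pool type with no matching
                                    event type, i.e. alloc-only storage that is
                                    never enqueued or scheduled */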

-Petri 



Re: [lng-odp] [PATCH 2/3] validation: buffer: fix for the use of cached return value of odp_packet_last_seg

2015-01-19 Thread Ola Liljedahl
On 17 January 2015 at 23:22, Taras Kondratiuk
taras.kondrat...@linaro.org wrote:
 On 01/17/2015 01:29 PM, Jerin Jacob wrote:
 odp_packet_seg_t is an opaque type; based on the implementation, the return
 value of odp_packet_last_seg can change after a headroom/tailroom push/pull
 operation.

 No. By definition headroom/tailroom push/pull operations don't change
 segmentation. So the last segment must remain the same.
Don't we make segmentation visible to allow ODP implementations to use
non-consecutive buffers to implement the buffer or packet the
application is working with? An implementation might need to add
another segment for a push operation, e.g. if there is not enough space
in the current head or tail segment. And it might be useful for ODP
implementations to remove unused segments (for pull operations); some
HW might not like to operate on empty segments (or indeed any segment
with a small number of bytes of data).

So I can't understand how ODP can define that push and pull operations
shall *not* affect the underlying segmentation of the packet.
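
To make the concern concrete, here is a small sketch; it is illustrative only
and assumes an implementation that may re-segment on a push, which is exactly
the behaviour being debated:

#include <odp.h>

/* Sketch: if a tailroom push may add a segment, the last-segment handle must
 * be re-queried after the operation instead of being cached before it. */
static void tailroom_example(odp_packet_t pkt, uint32_t len)
{
	odp_packet_seg_t last;

	if (odp_packet_push_tail(pkt, len) == NULL)
		return;                      /* not enough tailroom */

	last = odp_packet_last_seg(pkt);     /* re-query: the push may have
						added a new tail segment */
	(void)last;
}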



 Signed-off-by: Jerin Jacob jerin.ja...@caviumnetworks.com
 ---
  test/validation/buffer/odp_packet_test.c | 3 +++
  1 file changed, 3 insertions(+)

 diff --git a/test/validation/buffer/odp_packet_test.c b/test/validation/buffer/odp_packet_test.c
 index b6fa028..7c2b169 100644
 --- a/test/validation/buffer/odp_packet_test.c
 +++ b/test/validation/buffer/odp_packet_test.c
 @@ -289,6 +289,9 @@ static void _verify_tailroom_shift(odp_packet_t pkt,
   tail = odp_packet_pull_tail(pkt, -shift);
   }

 + seg = odp_packet_last_seg(pkt);
 + CU_ASSERT(seg != ODP_SEGMENT_INVALID);
 +
   CU_ASSERT(tail != NULL);
   CU_ASSERT(odp_packet_seg_data_len(pkt, seg) == seg_data_len + shift);
   CU_ASSERT(odp_packet_len(pkt) == pkt_data_len + shift);



 --
 Taras Kondratiuk



Re: [lng-odp] [PATCH] helper: ip: add IP protocol value for sctp

2015-01-19 Thread Bill Fischofer
Two questions:


   1. We previously said we didn't need SCTP support for ODP v1.0, so why
   is this needed?
   2. This is a helper, so it's not necessarily constrained by what may be
   covered by ODP v1.0, but in that case why limit this to SCTP?  There are
   lots of other IP protocols that this helper file doesn't define besides
   SCTP. Should they be included as well?



On Mon, Jan 19, 2015 at 5:44 AM, Jerin Jacob jerin.ja...@caviumnetworks.com
 wrote:

 Signed-off-by: Jerin Jacob jerin.ja...@caviumnetworks.com
 ---
  helper/include/odph_ip.h | 1 +
  1 file changed, 1 insertion(+)

 diff --git a/helper/include/odph_ip.h b/helper/include/odph_ip.h
 index 272fd96..f2638ba 100644
 --- a/helper/include/odph_ip.h
 +++ b/helper/include/odph_ip.h
 @@ -167,6 +167,7 @@ typedef struct ODP_PACKED {
  #define ODPH_IPPROTO_FRAG    0x2C /**< IPv6 Fragment (44) */
  #define ODPH_IPPROTO_AH      0x33 /**< Authentication Header (51) */
  #define ODPH_IPPROTO_ESP     0x32 /**< Encapsulating Security Payload (50) */
 +#define ODPH_IPPROTO_SCTP    0x84 /**< Stream Control Transmission (132) */
  #define ODPH_IPPROTO_INVALID 0xFF /**< Reserved invalid by IANA */

  /**@}*/
 --
 1.9.3
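
For what it is worth, a small sketch of how the new define would typically be
consumed; illustrative only, and it assumes the odph_ipv4hdr_t type with its
proto field from this same header:

#include <odph_ip.h>

/* Sketch: classify an IPv4 header as SCTP using the new helper define. */
static int is_sctp(const odph_ipv4hdr_t *ip)
{
	return ip->proto == ODPH_IPPROTO_SCTP;
}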




Re: [lng-odp] dpdk-pktgen CPU load balancing (Is thread affinity a lie?)

2015-01-19 Thread Wiles, Keith


On 1/19/15, 10:44 AM, Zoltan Kiss zoltan.k...@linaro.org wrote:

Hi,

I've found this in the README:

A new feature for pktgen and DPDK is to run multiple instances of
pktgen. This allows the developer to share ports on the same machine.

But nothing more about running it in multiple instances.

Sorry, must have dreamed I wrote more about it, but I am sure I have
written emails in the past.

When DPDK runs it consumes memory and resources, which you have to divide
between the multiple instances.

One area is huge pages: you need to make sure you allocate enough huge pages
to work with both Pktgen instances at the same time. If you need 256 huge
pages per instance, then make sure you allocate at least 512 pages via
sysctl.conf. If you have multiple sockets then you have to take that into
account as well.

You can not share a port between two instances of Pktgen, which means you
need to have multiple ports and decide which ports belong to which Pktgen
instance.

I did not find the command lines you used, so I will create some.

DPDK needs to allocate huge pages, which are mmapped in the /mnt/huge
directory. The issue is you need to make sure the two instances use
different sets of huge page files, using the --file-prefix option. Plus you
have to divide up the other resources as well: CPUs and ports.

I do not have a machine yet to test this configuration, so you may have to
play with the options some.

# app/build/pktgen -c 0x0e -n X --proc-type auto --socket-mem 256,256 --file-prefix pg1 -- -P -m "[1:2].0"

# app/build/pktgen -c 0x70 -n X --proc-type auto --socket-mem 256,256 --file-prefix pg2 -- -P -m "[5:6].1"

You will need to use the port blacklist options to exclude the other
instance's ports from each instance.

Also read the DPDK doc dpdk/doc/prog_guide/multi_proc_support.rst file.



I've tried to run the latest version with 1.8 DPDK, but it didn't even
start:

EAL: Master core 1 is ready (tid=f7fdf880)
EAL: Core 2 is ready (tid=17b15700)
2.0  = lcores(rx 0004, tx 0004)
ports(rx 0001, tx 0001)
Lua 5.2.3  Copyright (C) 1994-2013 Lua.org, PUC-Rio
  Packet Burst 32, RX Desc 512, TX Desc 512, mbufs/port 4096, mbuf
cache 512
!PANIC!: *** Did not find any ports to use ***
PANIC in pktgen_config_ports():

The PCI probing doesn't work. I use -w 04:00.0, with 1.7.1 it detects
the card properly:

EAL: PCI device :04:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x77f29000
EAL:   PCI memory mapped at 0x77fef000

Regards,

Zoltan


On 16/01/15 21:14, Wiles, Keith wrote:


 On 1/16/15, 1:36 PM, Zoltan Kiss zoltan.k...@linaro.org wrote:

 Hi,

 I've tried to figure this out why the TX speed depends on RX, so far no
 luck unfortunately. The oprofile problem still exists, it seems like in
 system-wide capture thread's appear on other cpu's, not just where they
 supposed to.
 I've tried to run two separate instances of pktgen on the same machine,
 but the second one failed with the following:

 I put some text in the README did you follow that text?


 EAL: Mapped segment 29 of size 0x20
 EAL: memzone_reserve_aligned_thread_unsafe(): memzone
 RG_MP_log_history already exists
 RING: Cannot reserve memory
 EAL: TSC frequency is ~3192607 KHz
 EAL: Master core 2 is ready (tid=f7fdf880)
 EAL: Core 3 is ready (tid=19bff700)
 EAL: Set returned by pthread_getaffinity_np() contained:
 EAL: CPU 3 (tid=19bff700)
 EAL: PCI device :04:00.1 on NUMA socket -1
 EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
 EAL: Cannot find resource for device
 3.0  = lcores(rx 0008, tx 0008)
 ports(rx 0001, tx 0001)
 !PANIC!: *** Did not find any ports to use ***
 PANIC in pktgen_config_ports():
 *** Did not find any ports to use ***6: [app/build/pktgen() [0x419445]]
 5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)
 [0x76ec7ec5]]
 4: [app/build/pktgen(main+0x20f) [0x42e05d]]
 3: [app/build/pktgen(pktgen_config_ports+0xbd) [0x435f0a]]
 2: [app/build/pktgen(__rte_panic+0xcb) [0x41927e]]
 1: [app/build/pktgen(rte_dump_stack+0x18) [0x4b9d98]]

 Does it makes sense to do this?

 Regards,

 Zoltan

 On 14/01/15 06:29, Wiles, Keith wrote:


 On 1/13/15, 6:41 PM, Zoltan Kiss zoltan.k...@linaro.org wrote:

 On 13/01/15 20:24, Wiles, Keith wrote:
 Comments below inline.


 On 1/13/15, 1:38 PM, Zoltan Kiss zoltan.k...@linaro.org wrote:


 Hi Keith,

 [I'm adding lng-odp list, maybe someone has a better knowledge
about
 scheduling. The main question is at the end of this mail]

 I'm still strugling with this issue. Here are some additional
 findings:
 -  I've changed the CPU mask, to allow all CPUs, that seemed to
 helped
 in the balancing of the tasks, but the throughput haven't changed
 - I've also figured out that if I change the testcase to non-random
 packets, the throughput goes up to 6.7 Gbps, I guess that 0.3 Gbps
is
 the penalty for port randomization. The receive throughput also
 climbed
 up, so my 

Re: [lng-odp] dpdk-pktgen CPU load balancing (Is thread affinity a lie?)

2015-01-19 Thread Zoltan Kiss

Hi,

I've found this in the README:

A new feature for pktgen and DPDK is to run multiple instances of
pktgen. This allows the developer to share ports on the same machine.

But nothing more about running it in multiple instances.

I've tried to run the latest version with 1.8 DPDK, but it didn't even 
start:


EAL: Master core 1 is ready (tid=f7fdf880)
EAL: Core 2 is ready (tid=17b15700)
2.0  = lcores(rx 0004, tx 0004) 
ports(rx 0001, tx 0001)

Lua 5.2.3  Copyright (C) 1994-2013 Lua.org, PUC-Rio
 Packet Burst 32, RX Desc 512, TX Desc 512, mbufs/port 4096, mbuf 
cache 512

!PANIC!: *** Did not find any ports to use ***
PANIC in pktgen_config_ports():

The PCI probing doesn't work. I use -w 04:00.0, with 1.7.1 it detects 
the card properly:


EAL: PCI device :04:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x77f29000
EAL:   PCI memory mapped at 0x77fef000

Regards,

Zoltan


On 16/01/15 21:14, Wiles, Keith wrote:



On 1/16/15, 1:36 PM, Zoltan Kiss zoltan.k...@linaro.org wrote:


Hi,

I've tried to figure this out why the TX speed depends on RX, so far no
luck unfortunately. The oprofile problem still exists, it seems like in
system-wide capture thread's appear on other cpu's, not just where they
supposed to.
I've tried to run two separate instances of pktgen on the same machine,
but the second one failed with the following:


I put some text in the README did you follow that text?




EAL: Mapped segment 29 of size 0x20
EAL: memzone_reserve_aligned_thread_unsafe(): memzone
RG_MP_log_history already exists
RING: Cannot reserve memory
EAL: TSC frequency is ~3192607 KHz
EAL: Master core 2 is ready (tid=f7fdf880)
EAL: Core 3 is ready (tid=19bff700)
EAL: Set returned by pthread_getaffinity_np() contained:
EAL: CPU 3 (tid=19bff700)
EAL: PCI device :04:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL: Cannot find resource for device
3.0  = lcores(rx 0008, tx 0008)
ports(rx 0001, tx 0001)
!PANIC!: *** Did not find any ports to use ***
PANIC in pktgen_config_ports():
*** Did not find any ports to use ***6: [app/build/pktgen() [0x419445]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)
[0x76ec7ec5]]
4: [app/build/pktgen(main+0x20f) [0x42e05d]]
3: [app/build/pktgen(pktgen_config_ports+0xbd) [0x435f0a]]
2: [app/build/pktgen(__rte_panic+0xcb) [0x41927e]]
1: [app/build/pktgen(rte_dump_stack+0x18) [0x4b9d98]]

Does it makes sense to do this?

Regards,

Zoltan

On 14/01/15 06:29, Wiles, Keith wrote:



On 1/13/15, 6:41 PM, Zoltan Kiss zoltan.k...@linaro.org wrote:


On 13/01/15 20:24, Wiles, Keith wrote:

Comments below inline.


On 1/13/15, 1:38 PM, Zoltan Kiss zoltan.k...@linaro.org wrote:



Hi Keith,

[I'm adding lng-odp list, maybe someone has a better knowledge about
scheduling. The main question is at the end of this mail]

I'm still strugling with this issue. Here are some additional
findings:
-  I've changed the CPU mask, to allow all CPUs, that seemed to
helped
in the balancing of the tasks, but the throughput haven't changed
- I've also figured out that if I change the testcase to non-random
packets, the throughput goes up to 6.7 Gbps, I guess that 0.3 Gbps is
the penalty for port randomization. The receive throughput also
climbed
up, so my DUT is slower when handling random ports. But I would still
expect the same 8.3 Gbps TX throughput.

If the system can transmit 8.3Gbps with the random mode seems
reasonable
as the code needs to handle each TX packet.

Yes, it can do 8.3Gbps with random port packets, if the DUT doesn't
forward the packets back on the another core.

The problem I see is when you move to a non-random packet case your
performance drops, unless the description is wrong.

On the contrary: if the DUT forwards back the packets on the other port
(so we need to deal with receive as well), the TX performance drops to
6.3 with random ports and 6.7 with non-random ports.


OK, that makes more sense then you described or I understood.


The max throughput here could be the CPU core or the PCIe bus, but I
think
it is the CPU.
The reason is the performance does not drop too much in the case
below.


- another interesting thing is if I used port randomization, it sent
TCP
packets despite I've specified UDP explicitly

That sounds like a bug, but I am not sure it can happen as the code is
pretty specific.


- I've printed out the lcore id in pktgen_main_transmit (and saved it
to
port_info_t in pktgen_main_rxtx_loop), it appears that the traffic is
handled by the right lcore.
- I've printed out pthread_getaffinity_np just after setting it in
eal_thread_loop. The threads affinty looks as it should be
- I've separated the TX and RX handling into separate cores, like
this:
-m [1:2].0,[3:1].1, so core 1 handles port 0 RX traffic and port 1
TX

The above port mapping can cause a 

Re: [lng-odp] [PATCH] linux-generic: implement of odp_term_global.

2015-01-19 Thread Mike Holmes
There appear to be a lot of whitespace changes; I assume they are
checkpatch cleanups in a number of cases.
Can we move all the non-functional whitespace changes to their own patch, so
that the logic of the change is more apparent in a single patch?



On 19 January 2015 at 06:34, Yan Songming yan.songm...@linaro.org wrote:

 From: Yan Sonming yan.songm...@linaro.org

 Free all resources of ODP, which include shared memory, queues and buffer pools.
 Fix a bug in odp_shm_free.

 Signed-off-by: Yan Songming yan.songm...@linaro.org
 ---
  platform/linux-generic/include/odp_internal.h | 10 +++
  platform/linux-generic/odp_buffer_pool.c  | 88 +++
  platform/linux-generic/odp_classification.c   | 42 ++---
  platform/linux-generic/odp_crypto.c   | 12 
  platform/linux-generic/odp_init.c | 43 -
  platform/linux-generic/odp_packet_io.c| 35 ---
  platform/linux-generic/odp_queue.c| 24 
  platform/linux-generic/odp_schedule.c | 34 +--
  platform/linux-generic/odp_shared_memory.c| 12 +++-
  platform/linux-generic/odp_thread.c   | 13 
  10 files changed, 263 insertions(+), 50 deletions(-)

 diff --git a/platform/linux-generic/include/odp_internal.h b/platform/linux-generic/include/odp_internal.h
 index 549d406..d46f5ef 100644
 --- a/platform/linux-generic/include/odp_internal.h
 +++ b/platform/linux-generic/include/odp_internal.h
 @@ -24,23 +24,33 @@ int odp_system_info_init(void);
  int odp_thread_init_global(void);
  int odp_thread_init_local(void);
  int odp_thread_term_local(void);
 +int odp_thread_term_global(void);

  int odp_shm_init_global(void);
 +int odp_shm_term_global(void);
  int odp_shm_init_local(void);

  int odp_buffer_pool_init_global(void);
 +int odp_buffer_pool_term_global(void);
 +int odp_buffer_pool_term_local(void);

  int odp_pktio_init_global(void);
 +int odp_pktio_term_global(void);
  int odp_pktio_init_local(void);

  int odp_classification_init_global(void);
 +int odp_classification_term_global(void);

  int odp_queue_init_global(void);
 +int odp_queue_term_global(void);

  int odp_crypto_init_global(void);
 +int odp_crypto_term_global(void);

  int odp_schedule_init_global(void);
 +int odp_schedule_term_global(void);
  int odp_schedule_init_local(void);
 +int odp_schedule_term_local(void);

  int odp_timer_init_global(void);
  int odp_timer_disarm_all(void);
 diff --git a/platform/linux-generic/odp_buffer_pool.c b/platform/linux-generic/odp_buffer_pool.c
 index eedb380..85e99e2 100644
 --- a/platform/linux-generic/odp_buffer_pool.c
 +++ b/platform/linux-generic/odp_buffer_pool.c
 @@ -55,6 +55,7 @@ typedef struct pool_table_t {

  /* The pool table */
  static pool_table_t *pool_tbl;
 +static const char shm_name[] = "odp_buffer_pools";

  /* Pool entry pointers (for inlining) */
  void *pool_entry_ptr[ODP_CONFIG_BUFFER_POOLS];
 @@ -67,7 +68,7 @@ int odp_buffer_pool_init_global(void)
 uint32_t i;
 odp_shm_t shm;

 -   shm = odp_shm_reserve("odp_buffer_pools",
 +   shm = odp_shm_reserve(shm_name,
   sizeof(pool_table_t),
   sizeof(pool_entry_t), 0);

 @@ -95,13 +96,48 @@ int odp_buffer_pool_init_global(void)
 return 0;
  }

 +int odp_buffer_pool_term_global(void)
 +{
 +   odp_shm_t shm;
 +   int i;
 +   pool_entry_t *pool;
 +   int ret = 0;
 +
 +   for (i = 0; i < ODP_CONFIG_BUFFER_POOLS; i++) {
 +   pool = get_pool_entry(i);
 +
 +   POOL_LOCK(&pool->s.lock);
 +   if (pool->s.pool_shm != ODP_SHM_INVALID) {
 +   ODP_ERR("Not destroyed pool: %s\n", pool->s.name);
 +   ret = -1;
 +   }
 +   POOL_UNLOCK(&pool->s.lock);
 +   }
 +   if (ret)
 +   return ret;
 +
 +   shm = odp_shm_lookup(shm_name);
 +   if (shm == ODP_SHM_INVALID)
 +   return -1;
 +   ret = odp_shm_free(shm);
 +
 +   return ret;
 +}
 +
 +int odp_buffer_pool_term_local(void)
 +{
 +   _odp_flush_caches();
 +   return 0;
 +}
 +
 +
  /**
   * Buffer pool creation
   */

  odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 -odp_shm_t shm,
 -odp_buffer_pool_param_t *params)
 +   odp_shm_t shm,
 +   odp_buffer_pool_param_t *params)
  {
 odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID;
 pool_entry_t *pool;
 @@ -127,8 +163,8 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,

 /* Restriction for v1.0: No udata support */
 	uint32_t udata_stride = (init_params->udata_size > sizeof(void *)) ?
 -		ODP_CACHE_LINE_SIZE_ROUNDUP(init_params->udata_size) :
 -		0;
 +			ODP_CACHE_LINE_SIZE_ROUNDUP(init_params->udata_size) :
 +			0;

 uint32_t blk_size, 
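
For context, a minimal sketch of how the new termination calls pair with the
existing init calls at application level; the signatures are assumed from
linux-generic at this point and this is illustrative only, not part of the
patch:

#include <odp.h>

int main(void)
{
	if (odp_init_global(NULL, NULL))
		return -1;
	if (odp_init_local())
		return -1;

	/* ... application work: pools, queues, pktio ... */

	odp_term_local();    /* e.g. flushes the per-thread buffer cache */
	odp_term_global();   /* frees shared memory, queues and buffer pools */
	return 0;
}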

Re: [lng-odp] [PATCHv4 0/2] configure.ac check for atomic operations support

2015-01-19 Thread Maxim Uvarov

Ping!
Maxim.

On 12/29/2014 05:58 PM, Maxim Uvarov wrote:

v4: addressed issues from the v3 discussion (spelling, remove uint32_t, remove Octeon code).

Maxim Uvarov (2):
   linux-generic: remove octeon specific code from odp_atomic_fetch_inc_u32
   configure.ac check for atomic operations support

  configure.ac|  4 +++-
  platform/linux-generic/include/api/odp_atomic.h |  8 
  platform/linux-generic/m4/configure.m4  | 17 +
  3 files changed, 20 insertions(+), 9 deletions(-)
  create mode 100644 platform/linux-generic/m4/configure.m4



