Re: [lng-odp] [API-NEXT 2/2] api: packet: add function to free multiple packets at once

2015-09-29 Thread Bala Manoharan
On 29 September 2015 at 13:48, Nicolas Morey-Chaisemartin
 wrote:
> Signed-off-by: Nicolas Morey-Chaisemartin 
> ---
>  include/odp/api/packet.h | 11 +++
>  1 file changed, 11 insertions(+)
>
> diff --git a/include/odp/api/packet.h b/include/odp/api/packet.h
> index 5d46b7b..a73be01 100644
> --- a/include/odp/api/packet.h
> +++ b/include/odp/api/packet.h
> @@ -86,6 +86,17 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool, uint32_t len);
>  void odp_packet_free(odp_packet_t pkt);
>
>  /**
> + * Free packets
> + *
> + * Frees the packets into the buffer pools they were allocated from.
> + * Packets may have been allocated from different pools.
> + *
> + * @param pkts   Packet handles
> + * @param lenNumber of packet handles to free
> + */
> +void odp_packet_free_multi(odp_packet_t pkt);

The function signature is wrong; it should take an array of packet
handles and a len field.
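A minimal sketch of the corrected prototype Bala is asking for, with a hypothetical loop-based fallback body (the handle type and the single-packet free are stubbed here purely for illustration; this is not the real linux-generic implementation):

```c
/* Stand-in for the real opaque ODP packet handle (illustration only). */
typedef struct odp_packet_s *odp_packet_t;

/* Hypothetical single-packet free stub; the real odp_packet_free()
 * returns the packet to the pool it was allocated from. */
static int freed;
static void odp_packet_free(odp_packet_t pkt)
{
	(void)pkt;
	freed++;
}

/* The corrected signature: an array of handles plus a count, so that
 * packets allocated from different pools can be freed in one call. */
void odp_packet_free_multi(odp_packet_t pkts[], int len)
{
	int i;

	for (i = 0; i < len; i++)
		odp_packet_free(pkts[i]);
}
```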

Regards,
Bala
> +
> +/**
>   * Reset packet
>   *
>   * Resets all packet metadata to their default values. Packet length is used
> --
> 2.5.0.3.gba4f141
>
> _______________________________________________
> lng-odp mailing list
> lng-odp@lists.linaro.org
> https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [API-NEXTv2 2/7] api: packet: add functions to alloc/free multiple packets at once

2015-09-29 Thread Bala Manoharan
Hi,

I am not sure whether we need this call to alloc multiple packets at once.
In a high-speed data plane system, packets that are allocated but not
yet processed hold up pool space, and incoming packets will be dropped
once the pool is depleted.

So it is always advisable to allocate packets only when the core is
ready to process them, rather than allocating an array of packets in a
single call and then processing them serially.
Maybe it would be better if you defined the use-case and advantages of
allocating multiple packets in a single API; then we can decide whether
this API is needed.
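Bala's depletion argument can be made concrete with a toy pool model (the names, numbers, and pool mechanics below are invented for illustration and are not ODP code): batch-allocating up front holds all the buffers for the whole processing window, while alloc-per-packet holds at most one at a time, leaving headroom for RX.

```c
#define POOL_NUM 4
static int pool_avail = POOL_NUM;

/* Toy pool: each alloc takes one buffer, each free returns it. */
static int toy_alloc(void)
{
	if (pool_avail == 0)
		return -1;	/* depleted: incoming packets would drop */
	pool_avail--;
	return 0;
}
static void toy_free(void) { pool_avail++; }

/* Batch-allocating 'num' packets up front holds 'num' buffers for the
 * whole processing window; returns the RX headroom left meanwhile. */
static int batch_then_process(int num)
{
	int got = 0, headroom;

	while (got < num && toy_alloc() == 0)
		got++;
	headroom = pool_avail;	/* buffers left for RX while processing */
	while (got--)
		toy_free();
	return headroom;
}

/* Alloc-per-packet holds at most one buffer at a time; returns the
 * minimum RX headroom observed across the same amount of work. */
static int alloc_per_packet(int num)
{
	int i, min_headroom = POOL_NUM;

	for (i = 0; i < num; i++) {
		if (toy_alloc() != 0)
			return -1;
		if (pool_avail < min_headroom)
			min_headroom = pool_avail;
		toy_free();
	}
	return min_headroom;
}
```

With a 4-buffer pool and 3 packets of work, the batch variant leaves only one buffer of headroom while processing, whereas the per-packet variant never drops below three.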

Regards,
Bala

On 29 September 2015 at 22:34, Nicolas Morey-Chaisemartin
 wrote:
>
>
> On 09/29/2015 04:34 PM, Bill Fischofer wrote:
>
>
>
> On Tue, Sep 29, 2015 at 9:15 AM, Nicolas Morey-Chaisemartin
>  wrote:
>>
>> Signed-off-by: Nicolas Morey-Chaisemartin 
>> ---
>>  include/odp/api/packet.h | 28 
>>  1 file changed, 28 insertions(+)
>>
>> diff --git a/include/odp/api/packet.h b/include/odp/api/packet.h
>> index 5d46b7b..c220329 100644
>> --- a/include/odp/api/packet.h
>> +++ b/include/odp/api/packet.h
>> @@ -77,6 +77,23 @@ extern "C" {
>>  odp_packet_t odp_packet_alloc(odp_pool_t pool, uint32_t len);
>>
>>  /**
>> + * Allocate packets from a buffer pool
>> + *
>> + * @see odp_packet_alloc
>> + *
>> + * @param pool  Pool handle
>> + * @param len   Packet data length
>> + * @param pkt   Array of packet handles for output
>> + * @param num   Maximum number of packets to allocate
>> + *
>> + * @return Number of packets actually allocated (0 ... num)
>> + * @retval <0 on failure
>> + *
>> + */
>> +int odp_packet_alloc_multi(odp_pool_t pool, uint32_t len,
>> +  odp_packet_t pkt[], int num);
>
>
>
> 3rd parameter is an output array, so should be odp_packet_t pkt[]
> Should 2nd parameter also be an array or is it sufficient to restrict this
> to allocating all pkts of the same length?
>
>
> I am not sure there is a way to efficiently alloc multiple packets with
> different sizes at once. And making sure all len are the same will reduce
> the benefits of having a simple multi alloc.
> If someone has a use case where a multiple len multi alloc is useful, we can
> either tweak this call or add another one.
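For reference, the naive fallback a platform without hardware batch allocation might use is just a loop over the single-packet call, which is one reason a single shared len keeps the API simple. The stubs below (handle types, a budget-limited allocator) are invented for illustration and are not the real implementation:

```c
#include <stdint.h>

/* Stand-ins for the opaque ODP handle types (illustration only). */
typedef struct odp_packet_s *odp_packet_t;
typedef struct odp_pool_s *odp_pool_t;
#define ODP_PACKET_INVALID ((odp_packet_t)0)

/* Hypothetical single-packet alloc stub: hands out up to 'budget'
 * dummy handles, then fails, mimicking pool depletion. */
static int budget = 2;
static struct odp_packet_s { int dummy; } pkt_storage;
static odp_packet_t odp_packet_alloc(odp_pool_t pool, uint32_t len)
{
	(void)pool;
	(void)len;
	return budget-- > 0 ? &pkt_storage : ODP_PACKET_INVALID;
}

/* Naive multi-alloc in terms of the single-packet call: returns how
 * many packets were actually allocated (0 ... num). */
int odp_packet_alloc_multi(odp_pool_t pool, uint32_t len,
			   odp_packet_t pkt[], int num)
{
	int i;

	for (i = 0; i < num; i++) {
		pkt[i] = odp_packet_alloc(pool, len);
		if (pkt[i] == ODP_PACKET_INVALID)
			break;
	}
	return i;
}
```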
>
> Nicolas
>


Re: [lng-odp] [API-NEXT PATCH 1/3] validation: classification: Add init calls for pool parameters

2015-09-23 Thread Bala Manoharan
For the series Reviewed-and-tested-by: Balasubramanian Manoharan


On 23 September 2015 at 06:11, Bill Fischofer  wrote:
> Signed-off-by: Bill Fischofer 
> ---
>  test/validation/classification/odp_classification_tests.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/test/validation/classification/odp_classification_tests.c b/test/validation/classification/odp_classification_tests.c
> index 551c83d..98da732 100644
> --- a/test/validation/classification/odp_classification_tests.c
> +++ b/test/validation/classification/odp_classification_tests.c
> @@ -280,7 +280,7 @@ int classification_suite_init(void)
> int ret;
> odp_pktio_param_t pktio_param;
>
> -   memset(&param, 0, sizeof(param));
> +   odp_pool_param_init(&param);
> param.pkt.seg_len = SHM_PKT_BUF_SIZE;
> param.pkt.len = SHM_PKT_BUF_SIZE;
> param.pkt.num = SHM_PKT_NUM_BUFS;
> --
> 2.1.4
>


Re: [lng-odp] [PATCH] linux-generic: pktio: enable classifier only when needed

2015-09-18 Thread Bala Manoharan
In this case the packet will not be dispatched to the default CoS in
the scenario where the application configures only the default CoS and
no PMRs.
Is this the expected behaviour? If not, then pktio_cls_enabled_set()
should also be called in the odp_pktio_default_cos_set() function.


Regards,
Bala

On 18 September 2015 at 17:17, Petri Savolainen
 wrote:
> Skip the packet_classifier function as long as there is no PMR
> set for a pktio interface.
>
> Signed-off-by: Petri Savolainen 
> ---
>  platform/linux-generic/include/odp_packet_io_internal.h | 10 ++
>  platform/linux-generic/odp_classification.c |  2 ++
>  platform/linux-generic/odp_packet_io.c  |  6 ++
>  3 files changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/platform/linux-generic/include/odp_packet_io_internal.h b/platform/linux-generic/include/odp_packet_io_internal.h
> index a21c683..6b03051 100644
> --- a/platform/linux-generic/include/odp_packet_io_internal.h
> +++ b/platform/linux-generic/include/odp_packet_io_internal.h
> @@ -109,6 +109,16 @@ static inline pktio_entry_t *get_pktio_entry(odp_pktio_t pktio)
> return pktio_entry_ptr[pktio_to_id(pktio)];
>  }
>
> +static inline int pktio_cls_enabled(pktio_entry_t *entry)
> +{
> +   return entry->s.cls_enabled;
> +}
> +
> +static inline void pktio_cls_enabled_set(pktio_entry_t *entry, int ena)
> +{
> +   entry->s.cls_enabled = ena;
> +}
> +
>  int pktin_poll(pktio_entry_t *entry);
>
>  extern const pktio_if_ops_t sock_mmsg_pktio_ops;
> diff --git a/platform/linux-generic/odp_classification.c b/platform/linux-generic/odp_classification.c
> index 6c1aff4..7809a42 100644
> --- a/platform/linux-generic/odp_classification.c
> +++ b/platform/linux-generic/odp_classification.c
> @@ -488,6 +488,7 @@ int odp_pktio_pmr_cos(odp_pmr_t pmr_id,
> pktio_entry->s.cls.pmr[num_pmr] = pmr;
> pktio_entry->s.cls.cos[num_pmr] = cos;
> pktio_entry->s.cls.num_pmr++;
> +   pktio_cls_enabled_set(pktio_entry, 1);
> UNLOCK(&pktio_entry->s.cls.lock);
>
> return 0;
> @@ -625,6 +626,7 @@ int odp_pktio_pmr_match_set_cos(odp_pmr_set_t pmr_set_id, odp_pktio_t src_pktio,
> pktio_entry->s.cls.pmr[num_pmr] = pmr;
> pktio_entry->s.cls.cos[num_pmr] = cos;
> pktio_entry->s.cls.num_pmr++;
> +   pktio_cls_enabled_set(pktio_entry, 1);
> UNLOCK(&pktio_entry->s.cls.lock);
>
> return 0;
> diff --git a/platform/linux-generic/odp_packet_io.c b/platform/linux-generic/odp_packet_io.c
> index d724933..aa2b566 100644
> --- a/platform/linux-generic/odp_packet_io.c
> +++ b/platform/linux-generic/odp_packet_io.c
> @@ -154,9 +154,7 @@ static void unlock_entry_classifier(pktio_entry_t *entry)
>  static void init_pktio_entry(pktio_entry_t *entry)
>  {
> set_taken(entry);
> -   /* Currently classifier is enabled by default. It should be enabled
> -  only when used. */
> -   entry->s.cls_enabled = 1;
> +   pktio_cls_enabled_set(entry, 0);
> entry->s.inq_default = ODP_QUEUE_INVALID;
>
> pktio_classifier_init(entry);
> @@ -642,7 +640,7 @@ int pktin_poll(pktio_entry_t *entry)
> buf = _odp_packet_to_buffer(pkt_tbl[i]);
> hdr = odp_buf_to_hdr(buf);
>
> -   if (entry->s.cls_enabled) {
> +   if (pktio_cls_enabled(entry)) {
> if (packet_classifier(entry->s.handle, pkt_tbl[i]) < 0)
> hdr_tbl[num_enq++] = hdr;
> } else {
> --
> 2.5.3
>


Re: [lng-odp] [PATCHv2 0/4] linux-generic: add pktio pcap type

2015-09-04 Thread Bala Manoharan
This method of using a pcap file to generate packets is fine.
But why should we use a dedicated interface with "pcap" as the name?

I was imagining something like an ODP application which reads from a
given pcap file, constructs the packets, and sends them through an
interface.

My concern is that creating a dummy interface with the "pcap" name
will work fine in the linux-generic code, but to do the same on other
platforms, each platform would have to implement this virtual pcap
interface. If this were an ODP application instead, the platforms
could simply run it without any change.

Regards,
Bala

On 4 September 2015 at 18:50, Stuart Haslam  wrote:
> This is pretty handy for testing, for example to test classifier rules
> using packets from a pcap;
>
> odp_classifier -ipcap:in=test.pcap -p -m 0 "ODP_PMR_SIP_ADDR:192.168.111.2::queue1" -t 5
>
> Use the l2fwd app send packets from a pcap out over a real interface;
>
> odp_l2fwd -ipcap:in=test.pcap:loops=10,eth0 -t 5
>
> Check that l2fwd doesn't reorder packets;
>
> odp_l2fwd -m 0 -i pcap:in=test.pcap,pcap:out=test_out.pcap
> editcap -v -D 0 test.pcap /dev/null | awk '{print $7}' > test.txt
> editcap -v -D 0 test_out.pcap /dev/null | awk '{print $7}' > test_out.txt
> diff -q test.txt test_out.txt
>
> (oops, it does when using > 2 workers)
>
> Changes since v1;
>  - Increased the pktio name length
>  - Rebased
>
> Stuart Haslam (4):
>   linux-generic: pktio: extend maximum devname length
>   example: classifier: fix potential buffer overflow
>   linux-generic: pktio: add pcap pktio type
>   linux-generic: pktio: add test for pcap pktio
>
>  example/classifier/odp_classifier.c|  18 +-
>  platform/linux-generic/Makefile.am |   4 +
>  .../linux-generic/include/odp_packet_io_internal.h |  25 +-
>  platform/linux-generic/m4/configure.m4 |  16 +
>  platform/linux-generic/odp_packet_io.c |   9 +-
>  platform/linux-generic/pktio/io_ops.c  |   3 +
>  platform/linux-generic/pktio/pcap.c| 334 +
>  platform/linux-generic/test/Makefile.am|   5 +
>  platform/linux-generic/test/pktio/Makefile.am  |   4 +
>  platform/linux-generic/test/pktio/pktio_run_pcap   |  33 ++
>  10 files changed, 439 insertions(+), 12 deletions(-)
>  create mode 100644 platform/linux-generic/pktio/pcap.c
>  create mode 100755 platform/linux-generic/test/pktio/pktio_run_pcap
>
> --
> 2.1.1
>


Re: [lng-odp] [API-NEXT PATCHv2 0/4] classification API name change

2015-09-03 Thread Bala Manoharan
Ping.

On 25 August 2015 at 19:45, Maxim Uvarov  wrote:
> No more comments, reviews?
>
> Maxim.
>
> On 08/11/15 15:10, Balasubramanian Manoharan wrote:
>>
>> Changes in v2: Adds bug link in the patch description
>>
>> 1. This patch series renames the classification APIs for ODP consistency
>> odp_cos_set_queue() is changed to odp_cos_queue_set()
>> odp_cos_set_drop() is changed to odp_cos_drop_set()
>>
>> 2. Adds the following getter functions for classification
>> odp_cos_queue()
>> odp_cos_drop()
>>
>> Fixes https://bugs.linaro.org/show_bug.cgi?id=1711
>>
>> Balasubramanian Manoharan (4):
>>api: classification: queue and drop policy API name change
>>example: classifier:  queue and drop policy API name change
>>linux-generic: classification: queue and drop policy API name change
>>validation: classification:  queue and drop policy API name change
>>
>>   example/classifier/odp_classifier.c|  4 +--
>>   include/odp/api/classification.h   | 26 +++--
>>   .../include/odp_classification_datamodel.h |  1 +
>>   platform/linux-generic/odp_classification.c| 33 --
>>   .../classification/odp_classification_basic.c  |  6 ++--
>>   .../classification/odp_classification_tests.c  | 14 -
>>   6 files changed, 67 insertions(+), 17 deletions(-)
>>
>


[lng-odp] Fwd: [API-NEXT PATCHv3 2/4] api: classification: add ODP_PMR_CUSTOM_FRAME

2015-08-28 Thread Bala Manoharan
Hi,

I have added the following comment on this patch 2/4 regarding the
naming for this patch. Other than this I am fine with this patch.

Regards,
Bala
---------- Forwarded message ----------
From: Balasubramanian Manoharan bala.manoha...@linaro.org
Date: 20 August 2015 at 17:48
Subject: Re: [lng-odp] [API-NEXT PATCHv3 2/4] api: classification: add
ODP_PMR_CUSTOM_FRAME
To: lng-odp@lists.linaro.org


My suggestion was to defer this movement patch until after your
existing patch series adding PMR_CUSTOM_FRAME is applied.
Also, the subject of this patch should say that it renames or moves
the header, for better readability.
These are required for tracking the changes in the header file.

Regards,
Bala

On Thursday 20 August 2015 04:41 PM, Benoît Ganne wrote:

This patch only moves definitions. It does
not modify any structures, functions or
comments.

Move odp_pmr_match_t and odp_pmr_create()
definitions before odp_pmr_destroy() for
better readability.

Signed-off-by: Benoît Ganne bga...@kalray.eu
---
 include/odp/api/classification.h | 46 
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/include/odp/api/classification.h b/include/odp/api/classification.h
index b63a6c8..d4e1330 100644
--- a/include/odp/api/classification.h
+++ b/include/odp/api/classification.h
@@ -226,6 +226,29 @@ typedef enum odp_pmr_term {
 } odp_pmr_term_e;
  /**
+ * Following structure is used to define a packet matching rule
+ */
+typedef struct odp_pmr_match_t {
+ odp_pmr_term_e  term; /**< PMR term value to be matched */
+ const void *val; /**< Value to be matched */
+ const void *mask; /**< Masked set of bits to be matched */
+ uint32_t val_sz; /**< Size of the term value */
+ uint32_t offset;  /**< User-defined offset in packet
+ Used if term == ODP_PMR_CUSTOM_FRAME only,
+ otherwise must be 0 */
+} odp_pmr_match_t;
+
+/**
+ * Create a packet match rule with mask and value
+ *
+ * @param[in] match   packet matching rule definition
+ *
+ * @return Handle of the matching rule
+ * @retval ODP_PMR_INVAL on failure
+ */
+odp_pmr_t odp_pmr_create(const odp_pmr_match_t *match);
+
+/**
  * Invalidate a packet match rule and vacate its resources
  *
  * @param[in] pmr_id Identifier of the PMR to be destroyed
@@ -276,29 +299,6 @@ unsigned long long odp_pmr_terms_cap(void);
 unsigned odp_pmr_terms_avail(void);
  /**
- * Following structure is used to define a packet matching rule
- */
-typedef struct odp_pmr_match_t {
- odp_pmr_term_e  term; /**< PMR term value to be matched */
- const void *val; /**< Value to be matched */
- const void *mask; /**< Masked set of bits to be matched */
- uint32_t val_sz; /**< Size of the term value */
- uint32_t offset;  /**< User-defined offset in packet
- Used if term == ODP_PMR_CUSTOM_FRAME only,
- otherwise must be 0 */
-} odp_pmr_match_t;
-
-/**
- * Create a packet match rule with mask and value
- *
- * @param[in] match   packet matching rule definition
- *
- * @return Handle of the matching rule
- * @retval ODP_PMR_INVAL on failure
- */
-odp_pmr_t odp_pmr_create(const odp_pmr_match_t *match);
-
-/**
  * @typedef odp_pmr_set_t
  * An opaque handle to a composite packet match rule-set
  */


Re: [lng-odp] [ARCH] Order Resolution APIs

2015-08-27 Thread Bala Manoharan
On 26 August 2015 at 16:27, Savolainen, Petri (Nokia - FI/Espoo)
petri.savolai...@nokia.com wrote:




 From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of ext
 Bill Fischofer
 Sent: Wednesday, August 26, 2015 12:26 AM
 To: LNG ODP Mailman List
 Subject: [lng-odp] [ARCH] Order Resolution APIs



 We've been discussion the question of when and how ordered events get
 resolved and I'd like to summarize the pros and cons as well as offer an
 additional suggestion to consider:



 When odp_schedule() dispatches an event in an ordered context the system
 will guarantee that downstream queues will see events in the same relative
 order as they appeared on the originating ordered queue.  While most ordered
 events are expected to be processed in a straightforward manner (one event
 in, one event out) by a worker thread, there are two special cases of
 interest that require special consideration.



 The first special case is removing an event from an ordered flow (one event
 in, none out).  The most common use-case for this scenario is IPfrag
 reassembly where multiple fragments are received and stored into a
 reassembly buffer but none is emitted until the last fragment completing the
 packet arrives.



 The second special case is inserting one or more events into an ordered flow
 (one event in, multiple events out).  In this case what is desired is that
 the multiple output events should appear in the input event's order on any
 output queue(s) to which they are sent.  The simplest use-case for this
 scenario is packet segmentation where a large packet needs to be segmented
 for MTU or other reasons, or perhaps it is being replicated for multicast
 purposes.



 As currently defined, order is implicitly resolved upon the next call to
 odp_schedule().  Doing this, however,  may be very inefficient on some
 platforms and as a result it is RECOMMENDED that threads running in ordered
 contexts resolve order explicitly whenever possible.



 It is not defined how/where event order is resolved (inside enqueue,
 release_order, next schedule call, dequeue from destination queue,  …).



 The rules are:

 -  Order *must* be resolved before events are dequeued from the
 destination queue

 -  Ordering is based on the ordering context and order is maintained
 as long as thread holds the context

 -  The context *can* be released after
 odp_schedule_release_ordered() (user hints that ordering is not needed any
 more)

 -  The context *must* be released in next schedule call, if still
 holding it





 Order can be explicitly resolved via the odp_schedule_release_ordered() API
 that tells the scheduler that the thread no longer requires order to be
 maintained.  Following this call, the thread behaves as if it were running
 in a (normal) parallel context and the thread MUST assume that further
 enqueues it performs until the next odp_schedule() call will be unordered.



 For the first special case (removing events from an ordered flow), releasing
 order explicitly MAY improve performance, depending on what the caller does
 between the odp_schedule_release_ordered() call and its next call to
 odp_schedule().  The more interesting (and apparently controversial) case
 involves processing that involves one or more enqueues in an ordered
 context.



 odp_schedule_release_ordered() may improve performance in all cases. It
 tells to the implementation that

 -  All enqueues (and potential other operations like tm_enqueue)
 that need ordering are done

 -  The last enq operation was “the last”

 -  All remaining order locks will not be called

 -  In general, all serialization / synchronization for this context
 can be now freed





 In some implementations, ordering is maintained as part of the scheduler
 while in others it is maintained as part of the queueing system.  Especially
 in systems that use HW assists for ordering, this can have meaningful
 performance implications.  In such systems, it is highly desirable that it
 be known at enqueue time whether or not this is final enqueue that the
 caller will make in the current ordered context.



 Example 1:

 -



 enq()

 enq()

 enq()   // = final enqueue

 release_ordered()



 Example 2:

 -

 enq()



 if(1)

   enq()  // = final enqueue

 else

   enq()



 if(0)

   enq()



 release_ordered()





 Example 3:

 -

 enq()



 order_lock()

 order_unlock()



 if(1)

   tm_enq()  // = final operation which needs ordering

 else

   tm_enq()



 if(0) {

   order_lock()

   order_unlock()

   enq()

 }



 release_ordered()





 If implementation needs to identify the last enqueue, it needs to store the
 latest enqueue until the next enq or release_ordered() call. As soon as it
 sees release_ordered(), it knows which enqueue (or other operation requiring
 ordering) was the final one.
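 The deferral scheme Petri describes can be sketched with a toy model (invented names and an in-memory "queue"; nothing here is ODP code): each enq() is buffered and only committed when the next enq() arrives, so when release_ordered() flushes the buffered one, the implementation knows it was the final enqueue of the ordered context.

```c
/* Toy model of deferring enqueues to identify the final one. */
static int pending = -1;	/* the not-yet-committed enqueue */
static int committed[8];	/* events committed in order */
static int ncommitted;
static int final_enq = -1;	/* which enqueue turned out to be last */

static void enq(int ev)
{
	if (pending >= 0)
		committed[ncommitted++] = pending; /* not final after all */
	pending = ev;
}

static void release_ordered(void)
{
	if (pending >= 0) {
		committed[ncommitted++] = pending;
		final_enq = pending;	/* now known to be the final enq */
		pending = -1;
	}
}
```

The cost of this scheme is exactly the one described above: every enqueue is held back one step until the next operation reveals whether it was the last.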





 There are several approaches that can be taken to provide this 

Re: [lng-odp] [API-NEXT PATCH v8] api: packet: allow access to packet flow hash values

2015-08-27 Thread Bala Manoharan
Reviewed-by: Balasubramanian Manoharan bala.manoha...@linaro.org

On 27 August 2015 at 13:27, Savolainen, Petri (Nokia - FI/Espoo)
petri.savolai...@nokia.com wrote:
 Reviewed-by: Petri Savolainen petri.savolai...@nokia.com

 -Original Message-
 From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
 ext Zoltan Kiss
 Sent: Wednesday, August 26, 2015 9:53 PM
 To: lng-odp@lists.linaro.org
 Subject: [lng-odp] [API-NEXT PATCH v8] api: packet: allow access to
 packet flow hash values

 Applications can read the computed hash (if any) and set it if they
 want
 to store any extra information in it.

 Signed-off-by: Zoltan Kiss zoltan.k...@linaro.org
 ---


 v2:
 - focus on RSS hash only
 - use setter/getter's

 v3:
 - do not mention pointers
 - add a note
 - add new patches for implementation and test

 v4: I've accidentally skipped this version

 v5:
 - use separate flag get and clear, as hash can have any value (that
 maps to
 checking ol_flags in DPDK)
 - change terminology to flow hash, it reflects better what is
 actually hashed
 - add function to generate hash by the platform

 v6:
 - remove stale function definition from the end of packet.h
 - spell out in hash_set that if platform cares about the validity of
 this value,
   it has to maintain it internally.
 - with the above change OVS doesn't need the hash generator function
 anymore,
   so remove that too. We can introduce it later on.

 v7: add more comments on Bala's request

 v8:
 - previous version doesn't contain the mentioned changes, resend it
 - fix grammar on Petri's note

  include/odp/api/packet.h   | 37 +
  include/odp/api/packet_flags.h | 18 ++
  2 files changed, 55 insertions(+)

 diff --git a/include/odp/api/packet.h b/include/odp/api/packet.h
 index 3a454b5..5d46b7b 100644
 --- a/include/odp/api/packet.h
 +++ b/include/odp/api/packet.h
 @@ -605,6 +605,43 @@ uint32_t odp_packet_l4_offset(odp_packet_t pkt);
  int odp_packet_l4_offset_set(odp_packet_t pkt, uint32_t offset);

  /**
 + * Packet flow hash value
 + *
 + * Returns the hash generated from the packet header. Use
 + * odp_packet_has_flow_hash() to check if packet contains a hash.
 + *
 + * @param  pkt  Packet handle
 + *
 + * @return  Hash value
 + *
 + * @note Zero can be a valid hash value.
 + * @note The hash algorithm and the header fields defining the flow
 (therefore
 + * used for hashing) is platform dependent. It is possible a platform
 doesn't
 + * generate any hash at all.
 + * @note The returned hash is either the platform generated (if any),
 or if
 + * odp_packet_flow_hash_set() were called then the value set there.
 + */
 +uint32_t odp_packet_flow_hash(odp_packet_t pkt);
 +
 +/**
 + * Set packet flow hash value
 + *
 + * Store the packet flow hash for the packet and sets the flow hash
 flag. This
 + * enables (but does not require!) application to reflect packet
 header
 + * changes in the hash.
 + *
 + * @param  pkt  Packet handle
 + * @param  flow_hashHash value to set
 + *
 + * @note If the platform needs to keep the original hash value, it has
 to
 + * maintain it internally. Overwriting the platform provided value
 doesn't
 + * change how the platform handles this packet after it.
 + * @note The application is not required to keep this hash valid for
 new or
 + * modified packets.
 + */
 +void odp_packet_flow_hash_set(odp_packet_t pkt, uint32_t flow_hash);
 +
 +/**
   * Tests if packet is segmented
   *
   * @param pkt  Packet handle
 diff --git a/include/odp/api/packet_flags.h
 b/include/odp/api/packet_flags.h
 index bfbcc94..7c3b247 100644
 --- a/include/odp/api/packet_flags.h
 +++ b/include/odp/api/packet_flags.h
 @@ -191,6 +191,15 @@ int odp_packet_has_sctp(odp_packet_t pkt);
  int odp_packet_has_icmp(odp_packet_t pkt);

  /**
 + * Check for packet flow hash
 + *
 + * @param pkt Packet handle
 + * @retval non-zero if packet contains a hash value
 + * @retval 0 if packet does not contain a hash value
 + */
 +int odp_packet_has_flow_hash(odp_packet_t pkt);
 +
 +/**
   * Set flag for L2 header, e.g. ethernet
   *
   * @param pkt Packet handle
 @@ -327,6 +336,15 @@ void odp_packet_has_sctp_set(odp_packet_t pkt, int
 val);
  void odp_packet_has_icmp_set(odp_packet_t pkt, int val);

  /**
 + * Clear flag for packet flow hash
 + *
 + * @param pkt Packet handle
 + *
 + * @note Set this flag is only possible through
 odp_packet_flow_hash_set()
 + */
 +void odp_packet_has_flow_hash_clr(odp_packet_t pkt);
 +
 +/**
   * @}
   */

 --
 1.9.1
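The usage pattern the v8 API implies can be sketched as follows. The struct-backed handle and function bodies below are stand-ins for the real opaque implementation (illustration only); the point is that because zero is a valid hash value, the flag, not the value, tells whether a hash is present:

```c
#include <stdint.h>

/* Stand-in for the opaque ODP packet handle (illustration only). */
typedef struct odp_packet_s {
	uint32_t flow_hash;
	int has_flow_hash;
} odp_packet_s;
typedef odp_packet_s *odp_packet_t;

static uint32_t odp_packet_flow_hash(odp_packet_t pkt)
{
	return pkt->flow_hash;
}

static int odp_packet_has_flow_hash(odp_packet_t pkt)
{
	return pkt->has_flow_hash;
}

static void odp_packet_flow_hash_set(odp_packet_t pkt, uint32_t hash)
{
	/* Per the API text: storing a hash also sets the flow hash flag. */
	pkt->flow_hash = hash;
	pkt->has_flow_hash = 1;
}

/* Example consumer: check the flag before trusting the hash; if the
 * platform generated none, fall back to an application-computed one. */
static uint32_t pick_worker(odp_packet_t pkt, uint32_t nworkers)
{
	if (!odp_packet_has_flow_hash(pkt))
		odp_packet_flow_hash_set(pkt, 0x9e3779b9u); /* app hash */
	return odp_packet_flow_hash(pkt) % nworkers;
}
```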


Re: [lng-odp] [API-NEXT PATCH v6] api: packet: allow access to packet flow hash values

2015-08-25 Thread Bala Manoharan
Hi Zoltan,

Few comments inline...

On 24 August 2015 at 22:18, Zoltan Kiss zoltan.k...@linaro.org wrote:

 Applications can read the computed hash (if any) and set it if they want
 to store any extra information in it.

 Signed-off-by: Zoltan Kiss zoltan.k...@linaro.org
 ---

 v2:
 - focus on RSS hash only
 - use setter/getter's

 v3:
 - do not mention pointers
 - add a note
 - add new patches for implementation and test

 v4: I've accidentally skipped this version

 v5:
 - use separate flag get and clear, as hash can have any value (that maps to
 checking ol_flags in DPDK)
 - change terminology to flow hash, it reflects better what is actually 
 hashed
 - add function to generate hash by the platform

 v6:
 - remove stale function definition from the end of packet.h
 - spell out in hash_set that if platform cares about the validity of this 
 value,
   it has to maintain it internally.
 - with the above change OVS doesn't need the hash generator function anymore,
   so remove that too. We can introduce it later on.

  include/odp/api/packet.h   | 33 +
  include/odp/api/packet_flags.h | 18 ++
  2 files changed, 51 insertions(+)

 diff --git a/include/odp/api/packet.h b/include/odp/api/packet.h
 index 3a454b5..c983332 100644
 --- a/include/odp/api/packet.h
 +++ b/include/odp/api/packet.h
 @@ -605,6 +605,39 @@ uint32_t odp_packet_l4_offset(odp_packet_t pkt);
  int odp_packet_l4_offset_set(odp_packet_t pkt, uint32_t offset);

  /**
 + * Packet flow hash value
 + *
 + * Returns the hash generated from the packet header. Use
 + * odp_packet_has_flow_hash() to check if packet contains a hash.
 + *
 + * @param  pkt  Packet handle
 + *
 + * @return  Hash value
 + *
 + * @note Zero can be a valid hash value.
 + * @note The hash algorithm and the header fields defining the flow 
 (therefore
 + * used for hashing) is platform dependent.
 + */
 +uint32_t odp_packet_flow_hash(odp_packet_t pkt);
 +
 +/**
 + * Set packet flow hash value
 + *
 + * Store the packet flow hash for the packet and sets the flow hash flag. 
 This
 + * enables (but does not requires!) application to reflect packet header
 + * changes in the hash.
 + *
 + * @param  pkt  Packet handle
 + * @param  flow_hashHash value to set
 + *
 + * @note If the platform needs to keep the original hash value, it has to
 + * maintain it internally.
 + * @note The application is not required to keep this hash valid for new or
 + * modified packets.
 + */

I would like to add a note that this hash is stored only as packet
metadata: setting it does not affect the way the implementation
handles the flow, i.e. the hash value does not dictate any
implementation logic.

Also add a note that by default the implementation returns the hash
it generated itself, e.g. in the case where odp_packet_flow_hash_set()
was not called by the application.

 +void odp_packet_flow_hash_set(odp_packet_t pkt, uint32_t flow_hash);
 +
 +/**
   * Tests if packet is segmented
   *
   * @param pkt  Packet handle
 diff --git a/include/odp/api/packet_flags.h b/include/odp/api/packet_flags.h
 index bfbcc94..7c3b247 100644
 --- a/include/odp/api/packet_flags.h
 +++ b/include/odp/api/packet_flags.h
 @@ -191,6 +191,15 @@ int odp_packet_has_sctp(odp_packet_t pkt);
  int odp_packet_has_icmp(odp_packet_t pkt);

  /**
 + * Check for packet flow hash
 + *
 + * @param pkt Packet handle
 + * @retval non-zero if packet contains a hash value
 + * @retval 0 if packet does not contain a hash value
 + */
 +int odp_packet_has_flow_hash(odp_packet_t pkt);

Not sure why we need this function odp_packet_has_flow_hash(): since
you have defined that zero is also a valid hash, the application can
ignore the value if it is zero. Moreover, on most platforms the hash
will always be generated for incoming packets, except in special
cases like error packets, application-generated packets, etc.
This flag requires the implementation to track the validity of the
flow hash, even though every packet by default has a hash generated
by the implementation, which I believe should be returned even if
odp_packet_flow_hash_set() was not called by the application.

 +
 +/**
   * Set flag for L2 header, e.g. ethernet
   *
   * @param pkt Packet handle
 @@ -327,6 +336,15 @@ void odp_packet_has_sctp_set(odp_packet_t pkt, int val);
  void odp_packet_has_icmp_set(odp_packet_t pkt, int val);

  /**
 + * Clear flag for packet flow hash
 + *
 + * @param pkt Packet handle
 + *
 + * @note Set this flag is only possible through odp_packet_flow_hash_set()
 + */
 +void odp_packet_has_flow_hash_clr(odp_packet_t pkt);

Same logic here. Not sure if we need to maintain the flag for hash.

 +
 +/**
   * @}
   */

 --
 1.9.1

 

Re: [lng-odp] [API-NEXT PATCH v6] api: packet: allow access to packet flow hash values

2015-08-25 Thread Bala Manoharan
Hi,

On 25 August 2015 at 16:09, Savolainen, Petri (Nokia - FI/Espoo)
petri.savolai...@nokia.com wrote:


 -Original Message-
 From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
 ext Bala Manoharan
 Sent: Tuesday, August 25, 2015 1:14 PM
 To: Zoltan Kiss
 Cc: LNG ODP Mailman List
 Subject: Re: [lng-odp] [API-NEXT PATCH v6] api: packet: allow access to
 packet flow hash values

 Hi Zoltan,

 Few comments inline...

 On 24 August 2015 at 22:18, Zoltan Kiss zoltan.k...@linaro.org wrote:
 
  Applications can read the computed hash (if any) and set it if they
 want
  to store any extra information in it.
 
  Signed-off-by: Zoltan Kiss zoltan.k...@linaro.org
  ---
 
  v2:
  - focus on RSS hash only
  - use setter/getter's
 
  v3:
  - do not mention pointers
  - add a note
  - add new patches for implementation and test
 
  v4: I've accidentally skipped this version
 
  v5:
  - use separate flag get and clear, as hash can have any value (that
 maps to
  checking ol_flags in DPDK)
  - change terminology to flow hash, it reflects better what is
 actually hashed
  - add function to generate hash by the platform
 
  v6:
  - remove stale function definition from the end of packet.h
  - spell out in hash_set that if platform cares about the validity of
 this value,
it has to maintain it internally.
  - with the above change OVS doesn't need the hash generator function
 anymore,
so remove that too. We can introduce it later on.
 
   include/odp/api/packet.h   | 33
 +
   include/odp/api/packet_flags.h | 18 ++
   2 files changed, 51 insertions(+)
 
  diff --git a/include/odp/api/packet.h b/include/odp/api/packet.h
  index 3a454b5..c983332 100644
  --- a/include/odp/api/packet.h
  +++ b/include/odp/api/packet.h
  @@ -605,6 +605,39 @@ uint32_t odp_packet_l4_offset(odp_packet_t pkt);
   int odp_packet_l4_offset_set(odp_packet_t pkt, uint32_t offset);
 
   /**
  + * Packet flow hash value
  + *
  + * Returns the hash generated from the packet header. Use
  + * odp_packet_has_flow_hash() to check if packet contains a hash.
  + *
  + * @param  pkt  Packet handle
  + *
  + * @return  Hash value
  + *
  + * @note Zero can be a valid hash value.
  + * @note The hash algorithm and the header fields defining the flow
 (therefore
  + * used for hashing) is platform dependent.
  + */
  +uint32_t odp_packet_flow_hash(odp_packet_t pkt);
  +
  +/**
  + * Set packet flow hash value
  + *
  + * Store the packet flow hash for the packet and sets the flow hash
 flag. This
  + * enables (but does not requires!) application to reflect packet
 header
  + * changes in the hash.
  + *
  + * @param  pkt  Packet handle
  + * @param  flow_hashHash value to set
  + *
  + * @note If the platform needs to keep the original hash value, it
 has to
  + * maintain it internally.
  + * @note The application is not required to keep this hash valid for
 new or
  + * modified packets.
  + */

 I would like to add the information that the hash being set is stored
 only as metadata in the packet: setting it will not affect the way the
 implementation handles the flow, i.e. this hash value does not dictate
 any implementation logic.

 Also add the information that, by default, the implementation returns
 the original hash it generated, e.g. in the case where
 odp_packet_flow_hash_set() was not called by the application.

  +void odp_packet_flow_hash_set(odp_packet_t pkt, uint32_t flow_hash);
  +
  +/**
* Tests if packet is segmented
*
* @param pkt  Packet handle
  diff --git a/include/odp/api/packet_flags.h
 b/include/odp/api/packet_flags.h
  index bfbcc94..7c3b247 100644
  --- a/include/odp/api/packet_flags.h
  +++ b/include/odp/api/packet_flags.h
  @@ -191,6 +191,15 @@ int odp_packet_has_sctp(odp_packet_t pkt);
   int odp_packet_has_icmp(odp_packet_t pkt);
 
   /**
  + * Check for packet flow hash
  + *
  + * @param pkt Packet handle
  + * @retval non-zero if packet contains a hash value
  + * @retval 0 if packet does not contain a hash value
  + */
  +int odp_packet_has_flow_hash(odp_packet_t pkt);

 Not sure why we need the odp_packet_has_flow_hash() function: since you
 have defined zero as a valid hash, the application can simply ignore a
 zero value. Moreover, on most platforms the hash will always be
 generated for incoming packets, except in special cases such as error
 packets, application-generated packets, etc. This function also forces
 the implementation to maintain a flag indicating the validity of the
 flow hash, even though every packet will by default carry a hash
 generated by the implementation, which I believe should be returned
 even if odp_packet_flow_hash_set() was not called by the application.

 A valid hash can be anything, including zero. The application cannot tell
 from the hash value alone whether the hash was actually generated and is
 valid. As you mention, sometimes a hash cannot

Re: [lng-odp] [API-NEXT PATCH v4 1/3] api: pool: add packet user area initializer for pool creation parameters

2015-08-20 Thread Bala Manoharan
Reviewed-by: Balasubramanian Manoharan bala.manoha...@linaro.org

On 15 August 2015 at 00:25, Zoltan Kiss zoltan.k...@linaro.org wrote:

 Applications can preset certain parts of the packet user area, so when that
 memory will be allocated it starts from a known state. If the platform
 allocates the memory during pool creation, it's enough to run the
 constructor after that. If it's allocating memory on demand, it should
 call the constructor each time.
 Porting applications to ODP can benefit from this. If the application can't
 afford to change its whole packet handling to ODP, it's likely it needs to
 maintain its own metadata in the user area. And probably it needs to set
 constant fields in that metadata e.g. to mark that this is an ODP packet,
 and/or store the handle of the packet itself.

 Signed-off-by: Zoltan Kiss zoltan.k...@linaro.org
 ---
 v2:
 - restrict this feature to packet user area
 - expand comments

 v3:
 - include packet.h in pool.h

 v4:
 - fix grammar based on Bill's comments

  include/odp/api/packet.h |  3 +++
  include/odp/api/pool.h   | 26 ++
  2 files changed, 29 insertions(+)

 diff --git a/include/odp/api/packet.h b/include/odp/api/packet.h
 index 3a454b5..f5d2142 100644
 --- a/include/odp/api/packet.h
 +++ b/include/odp/api/packet.h
 @@ -73,6 +73,9 @@ extern C {
   * @note The default headroom and tailroom used for packets is specified
 by
   * the ODP_CONFIG_PACKET_HEADROOM and ODP_CONFIG_PACKET_TAILROOM defines
 in
   * odp_config.h.
 + * @note Data changed in user area might be preserved by the platform from
 + * previous usage of the buffer, so values preset in uarea_init() are not
 + * guaranteed.
   */
  odp_packet_t odp_packet_alloc(odp_pool_t pool, uint32_t len);

 diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
 index 2e79a55..01f770f 100644
 --- a/include/odp/api/pool.h
 +++ b/include/odp/api/pool.h
 @@ -21,6 +21,7 @@ extern C {


  #include odp/std_types.h
 +#include odp/packet.h

  /** @defgroup odp_pool ODP POOL
   *  Operations on a pool.
 @@ -41,6 +42,23 @@ extern C {
  #define ODP_POOL_NAME_LEN  32

  /**
 + * Packet user area initializer callback function for pools.
 + *
 + * @param pkt   Handle of the packet
 + * @param uarea_init_argOpaque pointer defined in odp_pool_param_t
 + *
 + * @note If the application specifies this pointer, it expects that every
 buffer
 + * is initialized exactly once with it when the underlying memory is
 allocated.
 + * It is not called from odp_packet_alloc(), unless the platform chooses
 to
 + * allocate the memory at that point. Applications can only assume that
 this
 + * callback is called once before the packet is first used. Any subsequent
 + * change to the user area might be preserved after odp_packet_free() is
 called,
 + * so applications should take care of (re)initialization if they change
 data
 + * preset by this function.
 + */
 +typedef void (odp_packet_uarea_init_t)(odp_packet_t pkt, void
 *uarea_init_arg);
 +
 +/**
   * Pool parameters
   * Used to communicate pool creation options.
   */
 @@ -82,6 +100,14 @@ typedef struct odp_pool_param_t {
 /** User area size in bytes. Specify as 0 if no
 user
 area is needed. */
 uint32_t uarea_size;
 +
 +   /** Initialize every packet's user area at
 allocation
 +   time. Use NULL if no initialization needed. */
 +   odp_packet_uarea_init_t *uarea_init;
 +
 +   /** Opaque pointer passed to packet user area
 +   constructor. */
 +   void *uarea_init_arg;
 } pkt;
 struct {
 /** Number of timeouts in the pool */
 --
 1.9.1



Re: [lng-odp] [API-NEXT PATCH v3 1/3] api: packet: allow access to packet RSS hash values

2015-08-20 Thread Bala Manoharan
Hi,

On 20 August 2015 at 17:32, Bill Fischofer bill.fischo...@linaro.org
wrote:

 The RSS is relevant to packets originating from a NIC and is independent
 of the CoS or other flow designators.  It's there mainly because some
 applications (e.g., OVS) use it internally, so it's for legacy support.


This API might be used by legacy applications, but if the HW generates an
RSS hash by default then that value can be used by any application. Somehow
this API is seen as a requirement only for OVS, yet reading a packet's hash
value is useful to other applications as well, so we should solve the
generic use-case rather than focusing on OVS alone.

The issue here is to avoid the application configuring the hash value,
since the application may configure a hash value different from what the
implementation would have generated for that particular packet flow, and
this would cause an issue on some platforms. This was the basis on which
the removal of the set_rss_hash() function was suggested.

Regards,
Bala


 On Thu, Aug 20, 2015 at 6:43 AM, Jerin Jacob 
 jerin.ja...@caviumnetworks.com wrote:

 On Thu, Aug 20, 2015 at 04:52:29PM +0530, Santosh Shukla wrote:
  On 20 August 2015 at 16:23, Jerin Jacob jerin.ja...@caviumnetworks.com
 wrote:
   On Thu, Aug 20, 2015 at 03:30:23PM +0530, Santosh Shukla wrote:
   On 20 August 2015 at 14:48, Balasubramanian Manoharan
   bala.manoha...@linaro.org wrote:
   
   
On Sunday 16 August 2015 06:14 PM, Santosh Shukla wrote:
   
On 15 August 2015 at 00:12, Zoltan Kiss zoltan.k...@linaro.org
 wrote:
   
Applications can read the computed hash (if any) and set it if
 they
changed the packet headers or if the platform haven't calculated
 the
hash.
   
Signed-off-by: Zoltan Kiss zoltan.k...@linaro.org
---
v2:
- focus on RSS hash only
- use setter/getter's
   
v3:
- do not mention pointers
- add a note
- add new patches for implementation and test
   
  include/odp/api/packet.h | 30 ++
  1 file changed, 30 insertions(+)
   
diff --git a/include/odp/api/packet.h b/include/odp/api/packet.h
index 3a454b5..1ae24bc 100644
--- a/include/odp/api/packet.h
+++ b/include/odp/api/packet.h
@@ -48,6 +48,11 @@ extern C {
   * Invalid packet segment
   */
   
+/**
+ * @def ODP_PACKET_RSS_INVALID
+ * RSS hash is not set
+ */
+
  /*
   *
   * Alloc and free
@@ -605,6 +610,31 @@ uint32_t odp_packet_l4_offset(odp_packet_t
 pkt);
  int odp_packet_l4_offset_set(odp_packet_t pkt, uint32_t
 offset);
   
  /**
+ * RSS hash value
+ *
+ * Returns the RSS hash stored for the packet.
+ *
+ * @param  pkt  Packet handle
+ *
+ * @return  Hash value
+ * @retval ODP_PACKET_RSS_INVALID if RSS hash is not set.
+ */
+uint32_t odp_packet_rss_hash(odp_packet_t pkt);
+
+/**
+ * Set RSS hash value
+ *
+ * Store the RSS hash for the packet.
+ *
+ * @param  pkt  Packet handle
+ * @param  rss_hash Hash value to set
+ *
+ * @note If the application changes the packet header, it might
 want to
+ * recalculate this value and set it.
+ */
+void odp_packet_rss_hash_set(odp_packet_t pkt, uint32_t
 rss_hash);
   
Can this use-case be handled by calling
odp_packet_generate_rss_hash(odp_packet_t pkt) function where the
 rss gets
generated by the implementation rather than being set from
 application using
odp_packet_set_rss_hash() function?
   
  
   Bala, As we discussed and in summary for rest; Considering ovs
 example
   : ovs uses sw based rss_hash only when HW not supporting rss Or HW
   generated rss is zero.
  
   hash = packet_get_hash(packet);
   if (OVS_UNLIKELY(!hash)) {
   hash = miniflow_hash_5tuple(mf, 0);
   packet_set_hash(packet, hash);
   }
   return hash;
  
   and rss_hash generation is inside ovs dp_packet (middle  layer), It
 is
  
   An odp_packet_rss_hash_set() kind of interface won't fit correctly on
   the Octeon platform, as the hash identifier is used by the hardware in
   subsequent HW-based queue operations. How about the following change in
   the OVS platform-specific packet_get_hash() code:
  
    static inline uint32_t
    packet_get_hash(struct dp_packet *p)
    {
    #ifdef DPDK_NETDEV
        return p->mbuf.hash.rss;
    #else
    #ifdef ODP_NETDEV
        uint32_t hash;

        hash = odp_packet_rss_hash(p->odp_pkt);
        if (OVS_UNLIKELY(!hash))
            hash = odp_packet_generate_rss_hash(p->odp_pkt);
        return hash;
    #endif
        return p->rss_hash;
    #endif
    }
  
 
  Trying to understand your api definition;
  odp_packet_rss_hash() - to get rss
  and
  odp_packet_generate_rss_hash() - generate rss from where :  sw or hw?

 NOP if HW is capable, for software generated packet/ non-capable HW it
 will be in SW

  In either case, you're trying to generate a non-zero RSS (considering a
  valid RSS (?) for

Re: [lng-odp] [API-NEXT PATCHv2] validation: classification: added additional suite to test individual PMRs

2015-08-19 Thread Bala Manoharan
Hi Ivan,

On 19 August 2015 at 15:02, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 Hi Bala,

 just several comments I forgot to mention.

 On 19.08.15 08:45, Bala Manoharan wrote:

 Ivan,

On 18 August 2015 at 22:39, Ivan Khoronzhuk ivan.khoronz...@linaro.org wrote:

 post test review.

 I've tested. It works for me (except UDP/TCP src, as it's not supported).
 Why didn't you add the other PMRs here? Are you planning to add them later?
 Also there are no inter-PMR tests, like:
 1 - create PMR_dudp-CoS1 (port X)
 2 - create PMR_dtcp-CoS2 (same port X)
 3 - send UDP packet with port X
 4 - check if it was received with CoS1
 5 - send TCP packet with same port X
 6 - check if it was received with CoS2

 Maybe this isn't the place for it, but it definitely should be added.
 It can help to figure out issues when the L4 layer cannot tell packet
 types apart and the implementor has to add some inter-PMR rule to
 dispatch packets by type at the L3 level.

 Yes, I am planning to add more complex test cases in a separate suite, so
 that the user can first test whether the basic functionality is working.
 The current patch contains the basic test suite, which tests the basic
 functionality of all the APIs.

 Some additional comments below.

 On 12.08.15 11:53, Balasubramanian Manoharan wrote:

 Additional test suite is added to classification validation suite
 to test
 individual PMRs. This suite will test the defined PMRs by
 configuring
 pktio separately for every test case.

 Fixes:
 https://bugs.linaro.org/show_bug.cgi?id=1542
 https://bugs.linaro.org/show_bug.cgi?id=1544
 https://bugs.linaro.org/show_bug.cgi?id=1545
 https://bugs.linaro.org/show_bug.cgi?id=1546

 Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
 ---
 v2: Incorporates review comments from Ivan and Christophe

helper/include/odp/helper/tcp.h|   4 +
test/validation/classification/Makefile.am |   2 +
test/validation/classification/classification.c|   5 +
.../classification/odp_classification_common.c | 225
 
.../classification/odp_classification_test_pmr.c   | 640
 +
.../classification/odp_classification_tests.c  | 152 +
.../classification/odp_classification_testsuites.h |  10 +-
7 files changed, 906 insertions(+), 132 deletions(-)
create mode 100644
 test/validation/classification/odp_classification_common.c
create mode 100644
 test/validation/classification/odp_classification_test_pmr.c

 diff --git a/helper/include/odp/helper/tcp.h
 b/helper/include/odp/helper/tcp.h
 index defe422..b52784d 100644
 --- a/helper/include/odp/helper/tcp.h
 +++ b/helper/include/odp/helper/tcp.h
 @@ -26,6 +26,10 @@ extern C {
 *  @{
 */

 +/** TCP header length (Minimum Header length without options)*/
 +/** If options field is added to TCP header then the correct
 header value
 +should be updated by the application */
 +#define ODPH_TCPHDR_LEN 20

/** TCP header */
typedef struct ODP_PACKED {
 diff --git a/test/validation/classification/Makefile.am
 b/test/validation/classification/Makefile.am
 index ba468fa..050d5e6 100644
 --- a/test/validation/classification/Makefile.am
 +++ b/test/validation/classification/Makefile.am
 @@ -3,6 +3,8 @@ include ../Makefile.inc
noinst_LTLIBRARIES = libclassification.la 
libclassification_la_SOURCES = odp_classification_basic.c \
 odp_classification_tests.c \
 +  odp_classification_test_pmr.c \
 +  odp_classification_common.c \
 classification.c

bin_PROGRAMS = classification_main$(EXEEXT)
 diff --git a/test/validation/classification/classification.c
 b/test/validation/classification/classification.c
 index 2582aaa..a88a301 100644
 --- a/test/validation/classification/classification.c
 +++ b/test/validation/classification/classification.c
 @@ -18,6 +18,11 @@ static CU_SuiteInfo classification_suites[] = {
  .pInitFunc = classification_suite_init,
  .pCleanupFunc =
 classification_suite_term,
  },
 +   { .pName = classification pmr tests,
 +   .pTests = classification_suite_pmr,
 +   .pInitFunc =
 classification_suite_pmr_init,
 +   .pCleanupFunc

Re: [lng-odp] [API-NEXT PATCHv2] validation: classification: added additional suite to test individual PMRs

2015-08-18 Thread Bala Manoharan
Ivan,

On 18 August 2015 at 22:39, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 post test review.

 I've tested. It works for me (except UDP/TCP src, as it's not supported).
 Why didn't you add the other PMRs here? Are you planning to add them later?
 Also there are no inter-PMR tests, like:
 1 - create PMR_dudp-CoS1 (port X)
 2 - create PMR_dtcp-CoS2 (same port X)
 3 - send UDP packet with port X
 4 - check if it was received with CoS1
 5 - send TCP packet with same port X
 6 - check if it was received with CoS2

 Maybe this isn't the place for it, but it definitely should be added.
 It can help to figure out issues when the L4 layer cannot tell packet
 types apart and the implementor has to add some inter-PMR rule to
 dispatch packets by type at the L3 level.

 Yes, I am planning to add more complex test cases in a separate suite, so
that the user can first test whether the basic functionality is working.
The current patch contains the basic test suite, which tests the basic
functionality of all the APIs.

Some additional comments below.

 On 12.08.15 11:53, Balasubramanian Manoharan wrote:

 Additional test suite is added to classification validation suite to test
 individual PMRs. This suite will test the defined PMRs by configuring
 pktio separately for every test case.

 Fixes:
 https://bugs.linaro.org/show_bug.cgi?id=1542
 https://bugs.linaro.org/show_bug.cgi?id=1544
 https://bugs.linaro.org/show_bug.cgi?id=1545
 https://bugs.linaro.org/show_bug.cgi?id=1546

 Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
 ---
 v2: Incorporates review comments from Ivan and Christophe

   helper/include/odp/helper/tcp.h|   4 +
   test/validation/classification/Makefile.am |   2 +
   test/validation/classification/classification.c|   5 +
   .../classification/odp_classification_common.c | 225 
   .../classification/odp_classification_test_pmr.c   | 640
 +
   .../classification/odp_classification_tests.c  | 152 +
   .../classification/odp_classification_testsuites.h |  10 +-
   7 files changed, 906 insertions(+), 132 deletions(-)
   create mode 100644
 test/validation/classification/odp_classification_common.c
   create mode 100644
 test/validation/classification/odp_classification_test_pmr.c

 diff --git a/helper/include/odp/helper/tcp.h
 b/helper/include/odp/helper/tcp.h
 index defe422..b52784d 100644
 --- a/helper/include/odp/helper/tcp.h
 +++ b/helper/include/odp/helper/tcp.h
 @@ -26,6 +26,10 @@ extern C {
*  @{
*/

 +/** TCP header length (Minimum Header length without options)*/
 +/** If options field is added to TCP header then the correct header value
 +should be updated by the application */
 +#define ODPH_TCPHDR_LEN 20

   /** TCP header */
   typedef struct ODP_PACKED {
 diff --git a/test/validation/classification/Makefile.am
 b/test/validation/classification/Makefile.am
 index ba468fa..050d5e6 100644
 --- a/test/validation/classification/Makefile.am
 +++ b/test/validation/classification/Makefile.am
 @@ -3,6 +3,8 @@ include ../Makefile.inc
   noinst_LTLIBRARIES = libclassification.la
   libclassification_la_SOURCES = odp_classification_basic.c \
odp_classification_tests.c \
 +  odp_classification_test_pmr.c \
 +  odp_classification_common.c \
classification.c

   bin_PROGRAMS = classification_main$(EXEEXT)
 diff --git a/test/validation/classification/classification.c
 b/test/validation/classification/classification.c
 index 2582aaa..a88a301 100644
 --- a/test/validation/classification/classification.c
 +++ b/test/validation/classification/classification.c
 @@ -18,6 +18,11 @@ static CU_SuiteInfo classification_suites[] = {
 .pInitFunc = classification_suite_init,
 .pCleanupFunc = classification_suite_term,
 },
 +   { .pName = classification pmr tests,
 +   .pTests = classification_suite_pmr,
 +   .pInitFunc = classification_suite_pmr_init,
 +   .pCleanupFunc = classification_suite_pmr_term,
 +   },
 CU_SUITE_INFO_NULL,
   };

 diff --git a/test/validation/classification/odp_classification_common.c
 b/test/validation/classification/odp_classification_common.c
 new file mode 100644
 index 000..fe392c0
 --- /dev/null
 +++ b/test/validation/classification/odp_classification_common.c
 @@ -0,0 +1,225 @@
 +/* Copyright (c) 2015, Linaro Limited
 + * All rights reserved.
 + *
 + * SPDX-License-Identifier:BSD-3-Clause
 + */
 +
 +#include odp_classification_testsuites.h
 +#include odp_cunit_common.h
 +#include odp/helper/eth.h
 +#include odp/helper/ip.h
 +#include odp/helper/udp.h
 +#include odp/helper/tcp.h
 +
 +#define SHM_PKT_NUM_BUFS32
 +#define SHM_PKT_BUF_SIZE1024
 +
 +#define CLS_DEFAULT_SADDR  10.0.0.1/32
 +#define CLS_DEFAULT_DADDR  10.0.0.100/32
 

Re: [lng-odp] [API-NEXT PATCHv2] validation: classification: added additional suite to test individual PMRs

2015-08-18 Thread Bala Manoharan
Ivan,

On 18 August 2015 at 21:15, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 Bala,

 On 18.08.15 18:16, Bala Manoharan wrote:

 Hi Ivan,

 On 18 August 2015 at 19:22, Ivan Khoronzhuk ivan.khoronz...@linaro.org wrote:

 Hi, Bala

 Note: Your patch is based on API-NEXT and it obliged me to do some
 modifications
 with odp_pktio_param_t before testing. Also I'm still not sure about
 using
 odp_pmr_terms_cap(), but maybe it's OK to simply fail.


 I understand that you want to simply fail the test, but in my opinion
 that's a slightly different result. It would be good to print some message
 like "PMR is not supported". I'm printing a message like "Unsupported
 term", but it's not printed in the summary. It's just a slightly different
 result, and it worries me a little.

 Okay. We can discuss this further in arch call.



 Short pretesting review farther below



 On 12.08.15 11:53, Balasubramanian Manoharan wrote:

 Additional test suite is added to classification validation suite
 to test
 individual PMRs. This suite will test the defined PMRs by
 configuring
 pktio separately for every test case.

 Fixes:
 https://bugs.linaro.org/show_bug.cgi?id=1542
 https://bugs.linaro.org/show_bug.cgi?id=1544
 https://bugs.linaro.org/show_bug.cgi?id=1545
 https://bugs.linaro.org/show_bug.cgi?id=1546

  Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org

 ---
 v2: Incorporates review comments from Ivan and Christophe

helper/include/odp/helper/tcp.h|   4 +
test/validation/classification/Makefile.am |   2 +
test/validation/classification/classification.c|   5 +
.../classification/odp_classification_common.c | 225
 
.../classification/odp_classification_test_pmr.c   | 640
 +
.../classification/odp_classification_tests.c  | 152 +
.../classification/odp_classification_testsuites.h |  10 +-
7 files changed, 906 insertions(+), 132 deletions(-)
create mode 100644
 test/validation/classification/odp_classification_common.c
create mode 100644
 test/validation/classification/odp_classification_test_pmr.c

 diff --git a/helper/include/odp/helper/tcp.h
 b/helper/include/odp/helper/tcp.h
 index defe422..b52784d 100644
 --- a/helper/include/odp/helper/tcp.h
 +++ b/helper/include/odp/helper/tcp.h
 @@ -26,6 +26,10 @@ extern C {
 *  @{
 */

 +/** TCP header length (Minimum Header length without options)*/
 +/** If options field is added to TCP header then the correct
 header value
 +should be updated by the application */
 +#define ODPH_TCPHDR_LEN 20


 What about naming it ODPH_MIN_TCPHDR_LEN?


 As you have mentioned in your next mail this, I have followed existing
 ODP standard for naming convention.


 Don't forget to shorten the comment.


Okay.






/** TCP header */
typedef struct ODP_PACKED {
 diff --git a/test/validation/classification/Makefile.am
 b/test/validation/classification/Makefile.am
 index ba468fa..050d5e6 100644
 --- a/test/validation/classification/Makefile.am
 +++ b/test/validation/classification/Makefile.am
 @@ -3,6 +3,8 @@ include ../Makefile.inc
noinst_LTLIBRARIES = libclassification.la 

libclassification_la_SOURCES = odp_classification_basic.c \
 odp_classification_tests.c \
 +  odp_classification_test_pmr.c \
 +  odp_classification_common.c \
 classification.c

bin_PROGRAMS = classification_main$(EXEEXT)
 diff --git a/test/validation/classification/classification.c
 b/test/validation/classification/classification.c
 index 2582aaa..a88a301 100644
 --- a/test/validation/classification/classification.c
 +++ b/test/validation/classification/classification.c
 @@ -18,6 +18,11 @@ static CU_SuiteInfo classification_suites[] = {
  .pInitFunc = classification_suite_init,
  .pCleanupFunc =
 classification_suite_term,
  },
 +   { .pName = classification pmr tests,
 +   .pTests = classification_suite_pmr,
 +   .pInitFunc =
 classification_suite_pmr_init,
 +   .pCleanupFunc =
 classification_suite_pmr_term,
 +   },


 I'm not sure if classification pmr tests should be after classification
 tests. Maybe better to insert it before, just in order of complexity.

Re: [lng-odp] [API-NEXT PATCHv2] validation: classification: added additional suite to test individual PMRs

2015-08-18 Thread Bala Manoharan
Ivan,

On 18 August 2015 at 21:43, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 Bala,

 On 18.08.15 18:54, Bala Manoharan wrote:

 Ivan,

 On 18 August 2015 at 21:15, Ivan Khoronzhuk ivan.khoronz...@linaro.org wrote:

 Bala,

 On 18.08.15 18:16, Bala Manoharan wrote:

 Hi Ivan,

  On 18 August 2015 at 19:22, Ivan Khoronzhuk ivan.khoronz...@linaro.org wrote:

  Hi, Bala

  Note: Your patch is based on API-NEXT and it obliged me to
 do some modifications
  with odp_pktio_param_t before testing. Also I'm still not
 sure about using
  odp_pmr_terms_cap(), but maybe it's OK to simply fail.


 I understand that you want to simply fail the test, but in my opinion
 that's a slightly different result. It would be good to print some message
 like "PMR is not supported". I'm printing a message like "Unsupported
 term", but it's not printed in the summary. It's just a slightly different
 result, and it worries me a little.

 Okay. We can discuss this further in arch call.


 I don't have a correct decision on this yet. I hoped you did.


Yes, I have an idea about this; I thought discussing it on the arch call
would be better than in the mail chain.




  Short pretesting review farther below



  On 12.08.15 11:53, Balasubramanian Manoharan wrote:

  Additional test suite is added to classification
 validation suite to test
  individual PMRs. This suite will test the defined PMRs
 by configuring
  pktio separately for every test case.

  Fixes:
 https://bugs.linaro.org/show_bug.cgi?id=1542
 https://bugs.linaro.org/show_bug.cgi?id=1544
 https://bugs.linaro.org/show_bug.cgi?id=1545
 https://bugs.linaro.org/show_bug.cgi?id=1546

  Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org


  ---
  v2: Incorporates review comments from Ivan and Christophe

 helper/include/odp/helper/tcp.h|
  4 +
 test/validation/classification/Makefile.am |
  2 +
 test/validation/classification/classification.c|
  5 +
 .../classification/odp_classification_common.c |
 225 
 .../classification/odp_classification_test_pmr.c   |
 640 +
 .../classification/odp_classification_tests.c  |
 152 +
 .../classification/odp_classification_testsuites.h |
 10 +-
 7 files changed, 906 insertions(+), 132 deletions(-)
 create mode 100644
 test/validation/classification/odp_classification_common.c
 create mode 100644
 test/validation/classification/odp_classification_test_pmr.c

  diff --git a/helper/include/odp/helper/tcp.h
 b/helper/include/odp/helper/tcp.h
  index defe422..b52784d 100644
  --- a/helper/include/odp/helper/tcp.h
  +++ b/helper/include/odp/helper/tcp.h
  @@ -26,6 +26,10 @@ extern C {
  *  @{
  */

  +/** TCP header length (Minimum Header length without
 options)*/
  +/** If options field is added to TCP header then the
 correct header value
  +should be updated by the application */
  +#define ODPH_TCPHDR_LEN 20


  What about naming it ODPH_MIN_TCPHDR_LEN?


 As you have mentioned in your next mail this, I have followed
 existing ODP standard for naming convention.


 Don't forget to shorten the comment.

 Okay.






 /** TCP header */
 typedef struct ODP_PACKED {
  diff --git a/test/validation/classification/Makefile.am
 b/test/validation/classification/Makefile.am
  index ba468fa..050d5e6 100644
  --- a/test/validation/classification/Makefile.am
  +++ b/test/validation/classification/Makefile.am
  @@ -3,6 +3,8 @@ include ../Makefile.inc
 noinst_LTLIBRARIES = libclassification.la 


 libclassification_la_SOURCES =
 odp_classification_basic.c \

  odp_classification_tests.c \
  +
 odp_classification_test_pmr.c \
  +
 odp_classification_common.c \
  classification.c

 bin_PROGRAMS = classification_main$(EXEEXT)
  diff --git
 a/test/validation/classification

Re: [lng-odp] [API-NEXT PATCHv2] validation: classification: added additional suite to test individual PMRs

2015-08-18 Thread Bala Manoharan
Hi Ivan,

On 18 August 2015 at 19:22, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 Hi, Bala

 Note: Your patch is based on API-NEXT and it obliged me to do some
 modifications
 with odp_pktio_param_t before testing. Also I'm still not sure about using
 odp_pmr_terms_cap(), but maybe it's OK to simply fail.

 Short pretesting review farther below



 On 12.08.15 11:53, Balasubramanian Manoharan wrote:

 Additional test suite is added to classification validation suite to test
 individual PMRs. This suite will test the defined PMRs by configuring
 pktio separately for every test case.

 Fixes:
 https://bugs.linaro.org/show_bug.cgi?id=1542
 https://bugs.linaro.org/show_bug.cgi?id=1544
 https://bugs.linaro.org/show_bug.cgi?id=1545
 https://bugs.linaro.org/show_bug.cgi?id=1546

 Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
 ---
 v2: Incorporates review comments from Ivan and Christophe

   helper/include/odp/helper/tcp.h|   4 +
   test/validation/classification/Makefile.am |   2 +
   test/validation/classification/classification.c|   5 +
   .../classification/odp_classification_common.c | 225 
   .../classification/odp_classification_test_pmr.c   | 640
 +
   .../classification/odp_classification_tests.c  | 152 +
   .../classification/odp_classification_testsuites.h |  10 +-
   7 files changed, 906 insertions(+), 132 deletions(-)
   create mode 100644
 test/validation/classification/odp_classification_common.c
   create mode 100644
 test/validation/classification/odp_classification_test_pmr.c

 diff --git a/helper/include/odp/helper/tcp.h
 b/helper/include/odp/helper/tcp.h
 index defe422..b52784d 100644
 --- a/helper/include/odp/helper/tcp.h
 +++ b/helper/include/odp/helper/tcp.h
 @@ -26,6 +26,10 @@ extern "C" {
*  @{
*/

 +/** TCP header length (Minimum Header length without options)*/
 +/** If options field is added to TCP header then the correct header value
 +should be updated by the application */
 +#define ODPH_TCPHDR_LEN 20


 What about to name it ODPH_MIN_TCPHDR_LEN.


As you have mentioned in your next mail this, I have followed existing ODP
standard for naming convention.




   /** TCP header */
   typedef struct ODP_PACKED {
 diff --git a/test/validation/classification/Makefile.am
 b/test/validation/classification/Makefile.am
 index ba468fa..050d5e6 100644
 --- a/test/validation/classification/Makefile.am
 +++ b/test/validation/classification/Makefile.am
 @@ -3,6 +3,8 @@ include ../Makefile.inc
   noinst_LTLIBRARIES = libclassification.la
   libclassification_la_SOURCES = odp_classification_basic.c \
odp_classification_tests.c \
 +  odp_classification_test_pmr.c \
 +  odp_classification_common.c \
classification.c

   bin_PROGRAMS = classification_main$(EXEEXT)
 diff --git a/test/validation/classification/classification.c
 b/test/validation/classification/classification.c
 index 2582aaa..a88a301 100644
 --- a/test/validation/classification/classification.c
 +++ b/test/validation/classification/classification.c
 @@ -18,6 +18,11 @@ static CU_SuiteInfo classification_suites[] = {
 .pInitFunc = classification_suite_init,
 .pCleanupFunc = classification_suite_term,
 },
 +   { .pName = "classification pmr tests",
 +   .pTests = classification_suite_pmr,
 +   .pInitFunc = classification_suite_pmr_init,
 +   .pCleanupFunc = classification_suite_pmr_term,
 +   },


 I'm not sure if classificatio pmr tests should be after classification
 tests
 Maybe better to insert it before. Just in order of complexity.


Yes. I will move this in the next patch.



 CU_SUITE_INFO_NULL,
   };

 diff --git a/test/validation/classification/odp_classification_common.c
 b/test/validation/classification/odp_classification_common.c
 new file mode 100644
 index 000..fe392c0
 --- /dev/null
 +++ b/test/validation/classification/odp_classification_common.c
 @@ -0,0 +1,225 @@
 +/* Copyright (c) 2015, Linaro Limited
 + * All rights reserved.
 + *
 + * SPDX-License-Identifier:BSD-3-Clause
 + */
 +
 +#include "odp_classification_testsuites.h"
 +#include "odp_cunit_common.h"
 +#include <odp/helper/eth.h>
 +#include <odp/helper/ip.h>
 +#include <odp/helper/udp.h>
 +#include <odp/helper/tcp.h>
 +
 +#define SHM_PKT_NUM_BUFS        32
 +#define SHM_PKT_BUF_SIZE        1024
 +
 +#define CLS_DEFAULT_SADDR  "10.0.0.1/32"
 +#define CLS_DEFAULT_DADDR  "10.0.0.100/32"
 +#define CLS_DEFAULT_SPORT  1024
 +#define CLS_DEFAULT_DPORT  2048
 +
 +#define CLS_TEST_SPORT 4096
 +#define CLS_TEST_DPORT 8192
 +#define CLS_TEST_DADDR "10.0.0.5/32"
 +
 +/* Test Packet values */
 +#define DATA_MAGIC 0x01020304
 +#define TEST_SEQ_INVALID   ((uint32_t)~0)
 +
 +/** 

Re: [lng-odp] [API-NEXT PATCHv2] validation: classification: added additional suite to test individual PMRs

2015-08-17 Thread Bala Manoharan
Ping.

On 14 August 2015 at 22:00, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:


 On 14.08.15 19:29, Bala Manoharan wrote:

 Hi Ivan,

 I am planning to add MAC support in a separate patch.
 I believe MAC should be easier to get-in since it has been agreed.


 Ok.

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH 2/3] test: validation: classification: unused variable

2015-08-14 Thread Bala Manoharan
I agree with Bill. retval should be tested for success in this case.

Regards,
Bala

On 14 August 2015 at 01:24, Bill Fischofer bill.fischo...@linaro.org
wrote:



 On Thu, Aug 13, 2015 at 2:04 PM, Mike Holmes mike.hol...@linaro.org
 wrote:

 retval is not used, remove it

 Signed-off-by: Mike Holmes mike.hol...@linaro.org
 ---
  test/validation/classification/odp_classification_tests.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/test/validation/classification/odp_classification_tests.c
 b/test/validation/classification/odp_classification_tests.c
 index 0e0c4eb..827afa4 100644
 --- a/test/validation/classification/odp_classification_tests.c
 +++ b/test/validation/classification/odp_classification_tests.c
 @@ -405,7 +405,7 @@ void configure_cls_pmr_chain(void)
  qparam);
 CU_ASSERT_FATAL(queue_list[CLS_PMR_CHAIN_DST] !=
 ODP_QUEUE_INVALID);

 -   retval = odp_cos_set_queue(cos_list[CLS_PMR_CHAIN_DST],
 +   odp_cos_set_queue(cos_list[CLS_PMR_CHAIN_DST],
queue_list[CLS_PMR_CHAIN_DST]);


 Again the issue here is not that retval is unused but that what's missing
 is the following:

 CU_ASSERT(retval == 0);


 parse_ipv4_string(CLS_PMR_CHAIN_SADDR, addr, mask);
 --
 2.1.4






Re: [lng-odp] [API-NEXT PATCHv2] validation: classification: added additional suite to test individual PMRs

2015-08-14 Thread Bala Manoharan
Hi Ivan,

I am planning to add MAC support in a separate patch.
I believe MAC should be easier to get-in since it has been agreed.

Regards,
Bala

On 14 August 2015 at 21:57, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 Hi, Bala

 Just checked if you added real MAC and seems you forgot to.
 Or maybe you are planing to do it in separate patch.

 On 12.08.15 11:53, Balasubramanian Manoharan wrote:

 Additional test suite is added to classification validation suite to test
 individual PMRs. This suite will test the defined PMRs by configuring
 pktio separately for every test case.

 Fixes:
 https://bugs.linaro.org/show_bug.cgi?id=1542
 https://bugs.linaro.org/show_bug.cgi?id=1544
 https://bugs.linaro.org/show_bug.cgi?id=1545
 https://bugs.linaro.org/show_bug.cgi?id=1546

 Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
 ---
 v2: Incorporates review comments from Ivan and Christophe

   helper/include/odp/helper/tcp.h|   4 +
   test/validation/classification/Makefile.am |   2 +
   test/validation/classification/classification.c|   5 +
   .../classification/odp_classification_common.c | 225 
   .../classification/odp_classification_test_pmr.c   | 640
 +
   .../classification/odp_classification_tests.c  | 152 +
   .../classification/odp_classification_testsuites.h |  10 +-
   7 files changed, 906 insertions(+), 132 deletions(-)
   create mode 100644
 test/validation/classification/odp_classification_common.c
   create mode 100644
 test/validation/classification/odp_classification_test_pmr.c

 ..

 +
 +odp_packet_t create_packet(odp_pool_t pool, bool vlan, bool flag_udp)
 +{
 +   uint32_t seqno;
 +   odph_ethhdr_t *ethhdr;
 +   odph_udphdr_t *udp;
 +   odph_tcphdr_t *tcp;
 +   odph_ipv4hdr_t *ip;
 +   uint8_t payload_len;
 +   char src_mac[ODPH_ETHADDR_LEN]  = {0};
 +   char dst_mac[ODPH_ETHADDR_LEN] = {0};
 +   uint32_t addr = 0;
 +   uint32_t mask;
 +   int offset;
 +   odp_packet_t pkt;
 +   int packet_len = 0;
 +
 +   payload_len = sizeof(cls_test_packet_t);
 +   packet_len += ODPH_ETHHDR_LEN;
 +   packet_len += ODPH_IPV4HDR_LEN;
 +   if (flag_udp)
 +   packet_len += ODPH_UDPHDR_LEN;
 +   else
 +   packet_len += ODPH_TCPHDR_LEN;
 +   packet_len += payload_len;
 +
 +   if (vlan)
 +   packet_len += ODPH_VLANHDR_LEN;
 +
 +   pkt = odp_packet_alloc(pool, packet_len);
 +   CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID);
 +
 +   /* Ethernet Header */
 +   offset = 0;
 +   odp_packet_l2_offset_set(pkt, offset);
 +   ethhdr = (odph_ethhdr_t *)odp_packet_l2_ptr(pkt, NULL);
 +   memcpy(ethhdr->src.addr, src_mac, ODPH_ETHADDR_LEN);
 +   memcpy(ethhdr->dst.addr, dst_mac, ODPH_ETHADDR_LEN);


 I expect here will be real MAC address.
 Like it was done for performance/odp_pktio_perf
 01f5c738c6e73bd3ad75984d293c506f952a1eff

 ...

 --
 Regards,
 Ivan Khoronzhuk



Re: [lng-odp] [API-NEXT PATCH] api: pool: add buffer constructor for pool creation parameters

2015-08-12 Thread Bala Manoharan
Hi,

Comments inline...

On 12 August 2015 at 00:01, Zoltan Kiss zoltan.k...@linaro.org wrote:

 Applications can preset certain parts of the buffer or user area, so when
 that
 memory will be allocated it starts from a known state. If the platform
 allocates the memory during pool creation, it's enough to run the
 constructor
 after that. If it's allocating memory on demand, it should call the
 constructor each time.


[Bala] I am not sure I understand the above description correctly. Does it
mean that if the memory is allocated for an existing pool, this constructor
should be called only during pool creation and not during each and every
buffer allocation? Then it will be the application's responsibility to reset
the user area before freeing the buffer? Is this the use-case this API is
trying to address?



 Signed-off-by: Zoltan Kiss zoltan.k...@linaro.org
 ---
  include/odp/api/pool.h | 20 
  1 file changed, 20 insertions(+)

 diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
 index 2e79a55..1bd19bf 100644
 --- a/include/odp/api/pool.h
 +++ b/include/odp/api/pool.h
 @@ -41,6 +41,20 @@ extern C {
  #define ODP_POOL_NAME_LEN  32

  /**
 + * Buffer constructor callback function for pools.
 + *
 + * @param pool  Handle of the pool which owns the buffer
 + * @param buf_ctor_arg  Opaque pointer defined in odp_pool_param_t
 + * @param buf   Pointer to the buffer
 + *
 + * @note If the application specifies this pointer, it expects that every
 buffer
 + * is initialized with it when the underlying memory is allocated.
 + */
 +typedef void (odp_pool_buf_ctor_t)(odp_pool_t pool,
 +  void *buf_ctor_arg,
 +  void *buf);
 +


[Bala] We have avoided callback functions in the ODP architecture, so if the
requirement can be addressed without a callback, maybe we can follow that
approach. Again, I am not clear whether this callback function should be called
only once during pool creation or every time during buffer alloc.


 +/**
   * Pool parameters
   * Used to communicate pool creation options.
   */
 @@ -88,6 +102,12 @@ typedef struct odp_pool_param_t {
 uint32_t num;
 } tmo;
 };
 +
 +   /** Buffer constructor to initialize every buffer. Use NULL if no
 +   initialization needed. */
 +   odp_pool_buf_ctor_t *buf_ctor;
 +   /** Opaque pointer passed to buffer constructor */
 +   void *buf_ctor_arg;
  } odp_pool_param_t;

  /** Packet pool*/
 --
 1.9.1



Regards,
Bala


Re: [lng-odp] [API-NEXT PATCH] api: pool: add buffer constructor for pool creation parameters

2015-08-12 Thread Bala Manoharan
Hi,

On 12 August 2015 at 16:29, Bill Fischofer bill.fischo...@linaro.org
wrote:

 User metadata is entirely under the control of the application.  The only
 thing ODP implementations do with regard to it are:


1. Reserve space for it at pool create time.
2. Copy it if as part of an ODP API a new packet is implicitly
allocated.  For example, the definition of odp_packet_add_data() returns an
odp_packet_t that may be the same or different than the input
odp_packet_t.  In the latter case, the user area of the input packet is
copied to the output packet as part of the operation of the API

 Under discussion here is the option to permit the application to
 initialize the contents of the user area for each element of a packet pool
 at pool create time.


Adding a minor point here. If the requirement is to call this
initialization only during pool create, then the application will have to
reset the user area before calling the odp_packet_free() API.

Regards,
Bala


 I've added an agenda item for this to today's ARCH call.

 On Wed, Aug 12, 2015 at 5:55 AM, Bala Manoharan bala.manoha...@linaro.org
  wrote:



 On 12 August 2015 at 16:17, Bala Manoharan bala.manoha...@linaro.org
 wrote:



 On 12 August 2015 at 15:37, Zoltan Kiss zoltan.k...@linaro.org wrote:



 On 12/08/15 07:34, Bala Manoharan wrote:

 Hi,

 Comments inline...

 On 12 August 2015 at 00:01, Zoltan Kiss zoltan.k...@linaro.org
 mailto:zoltan.k...@linaro.org wrote:

 Applications can preset certain parts of the buffer or user area,
 so
 when that
 memory will be allocated it starts from a known state. If the
 platform
 allocates the memory during pool creation, it's enough to run the
 constructor
 after that. If it's allocating memory on demand, it should call the
 constructor each time.


 [Bala] Not sure if I understand the above description correctly does it
 mean that if the memory is allocated
 for an existing pool this call should be called only during the pool
 creation and not during each and every buffer allocation?

 I'm also not sure I understand the question :) How do you mean if the
 memory is allocated for an _existing_ pool? I mean, you can't allocate
 memory for a pool which doesn't exist.
 The key point is when that memory will be allocated. This callback
 should be called whenever the memory is allocated for the buffer. If it is
 done during pool creation (which I guess most sensible implementations do),
 then it happens then. If for some reason the platform allocates memory when
 someone allocates the buffer, it happens then.


 [Bala] Let me try to decrypt my query in a better way :-)

 Usually when application calls odp_pool_create() most implementations
 will allocate a memory region and map them as pointers based on a fixed
 segment size so that when the application calls odp_buffer_allocate() the
 segment of this memory pool (segments may be combined depending on the
 required size of the buffer ) will be returned to the user.

 So my understanding of your requirement is that whenever the application
 calls  odp_buffer_alloc() then the user area should be reset to a
 predefined value.

 So in-case that an application does the following
 1. odp_buffer_alloc()
 2. odp_buffer_free()
 3. odp_buffer_alloc()

 For simplicity lets assume the implementation returned the same buffer
 during the second odp_buffer_alloc() call ( Basically the buffer gets
 reused ) then the implementation should have reset the user-area of the
 buffer before returning it to the application. Is this the requirement?


 [Bala] Missed a sentence here
 In the above scenario implementation should have reset the user-area
 before both odp_buffer_alloc() both in step 1 and step 3.


 I have additional query regarding the use-case you have defined below
 but I would like to get clarity on the above first :)

 Regards,
 Bala


 Then it will be applications responsibility to reset the user area
 before freeing the buffer? Is this the use-case this API is trying to
 address?

 No, the application is not required to reset it at any time. It can do
 that, if it's what it wants. The only requirement is that the buffer starts
 from a known state after its memory was allocated.
 The use case is the following: OVS has it's layer to handle buffers
 from different sources, e.g. it has to be able to differentiate between
 packets coming from DPDK and ODP (the latter can run DPDK underneath as
 well, but that's a different story ...). It stores a struct dp_packet in
 the user area to store data about the packet:

 https://git.linaro.org/lng/odp-ovs.git/blob/HEAD:/lib/dp-packet.h#l41

 Most of these fields will be set during processing to their right
 value, however there are two things we need to set right after receive:
 source to DPBUF_ODP and odp_pkt to point to the odp_packet_t itself.
 The first value will be the same for all ODP packets, every time. And the
 second is known when the underlying memory was allocated.
 Both

Re: [lng-odp] [API-NEXT PATCH] api: pool: add buffer constructor for pool creation parameters

2015-08-12 Thread Bala Manoharan
On 12 August 2015 at 18:34, Zoltan Kiss zoltan.k...@linaro.org wrote:

 On 12/08/15 11:55, Bala Manoharan wrote:



 On 12 August 2015 at 16:17, Bala Manoharan bala.manoha...@linaro.org
 wrote:



 On 12 August 2015 at 15:37, Zoltan Kiss zoltan.k...@linaro.org wrote:



 On 12/08/15 07:34, Bala Manoharan wrote:

 Hi,

 Comments inline...

 On 12 August 2015 at 00:01, Zoltan Kiss zoltan.k...@linaro.org
 mailto:zoltan.k...@linaro.org wrote:

 Applications can preset certain parts of the buffer or user area, so
 when that
 memory will be allocated it starts from a known state. If the
 platform
 allocates the memory during pool creation, it's enough to run the
 constructor
 after that. If it's allocating memory on demand, it should call the
 constructor each time.


 [Bala] Not sure if I understand the above description correctly does it
 mean that if the memory is allocated
 for an existing pool this call should be called only during the pool
 creation and not during each and every buffer allocation?

 I'm also not sure I understand the question :) How do you mean if the
 memory is allocated for an _existing_ pool? I mean, you can't allocate
 memory for a pool which doesn't exist.
 The key point is when that memory will be allocated. This callback
 should be called whenever the memory is allocated for the buffer. If it is
 done during pool creation (which I guess most sensible implementations do),
 then it happens then. If for some reason the platform allocates memory when
 someone allocates the buffer, it happens then.


 [Bala] Let me try to decrypt my query in a better way :-)

 Usually when application calls odp_pool_create() most implementations
 will allocate a memory region and map them as pointers based on a fixed
 segment size so that when the application calls odp_buffer_allocate() the
 segment of this memory pool (segments may be combined depending on the
 required size of the buffer )

 No, odp_buffer_alloc() doesn't let you specify you sizes, the buffer size
 is set when you create the pool. I think you mean packet type pools. In
 that case there could be segmentation, so a packet could have been the Nth
 segment of an another packet before.


Yes, but this init callback will be called for each segment in a packet
pool, since each segment could become a potential packet. So, in general, any
packet allocated from this pool should have the default value set before
the very first alloc; resetting this value during subsequent allocations
is the application's responsibility, and the implementation should not worry
about it.



 will be returned to the user.

 So my understanding of your requirement is that whenever the application
 calls  odp_buffer_alloc() then the user area should be reset to a
 predefined value.

 No. so when that memory will be allocated. The callback is only called
 when the memory underlying the buffer gets allocated. Beware, allocating a
 buffer and allocating the memory can be two different things in this
 context. At least linux-generic and ODP-DPDK allocates the memory when the
 pool is created, not when the buffer is allocated.


 So in-case that an application does the following
 1. odp_buffer_alloc()
 2. odp_buffer_free()
 3. odp_buffer_alloc()

 For simplicity lets assume the implementation returned the same buffer
 during the second odp_buffer_alloc() call ( Basically the buffer gets
 reused ) then the implementation should have reset the user-area of the
 buffer before returning it to the application. Is this the requirement?

 No. The only requirement is that when that memory will be allocated it
 starts from a known state. The implementation should call the callback
 only when that memory is allocated. Therefore, if the memory underlying
 this buffer were not released and reallocated between an alloc and free,
 this callback shouldn't be called


I understand your use-case now. Please add a description to the API
definition stating that this initialization is not required when the buffer
gets reused after a free; it is only needed when the memory is allocated the
very first time.


 [Bala] Missed a sentence here
 In the above scenario implementation should have reset the user-area
 before both odp_buffer_alloc() both in step 1 and step 3.


 I have additional query regarding the use-case you have defined below but
 I would like to get clarity on the above first :)

 Regards,
 Bala


 Then it will be applications responsibility to reset the user area
 before freeing the buffer? Is this the use-case this API is trying to
 address?

 No, the application is not required to reset it at any time. It can do
 that, if it's what it wants. The only requirement is that the buffer starts
 from a known state after its memory was allocated.
 The use case is the following: OVS has it's layer to handle buffers from
 different sources, e.g. it has to be able to differentiate between packets
 coming from DPDK and ODP (the latter can run DPDK underneath as well

Re: [lng-odp] [API-NEXT PATCH] api: pool: add buffer constructor for pool creation parameters

2015-08-12 Thread Bala Manoharan
On 12 August 2015 at 19:33, Bill Fischofer bill.fischo...@linaro.org
wrote:

 No, packets are a first-class object.  Segments are internal to packets.
 That's why we have both odp_packet_t and odp_packet_seg_t handles. The user
 area is associated with an odp_packet_t, not an odp_packet_seg_t.


Yes, the user_area is associated only with packets and not with segments.
I was using the term segment to say that each segment could become a
potential packet, in the sense that you can never guess which segment will
be the start of a packet; also, for performance reasons, the user_area might
get stored in the first segment. This is just an implementation detail, and
we can leave this topic since I believe we are both on the same page.

Regards,
Bala

 On Wed, Aug 12, 2015 at 8:51 AM, Bala Manoharan bala.manoha...@linaro.org
  wrote:



 On 12 August 2015 at 18:34, Zoltan Kiss zoltan.k...@linaro.org wrote:

 On 12/08/15 11:55, Bala Manoharan wrote:



 On 12 August 2015 at 16:17, Bala Manoharan bala.manoha...@linaro.org
 wrote:



 On 12 August 2015 at 15:37, Zoltan Kiss zoltan.k...@linaro.org wrote:



 On 12/08/15 07:34, Bala Manoharan wrote:

 Hi,

 Comments inline...

 On 12 August 2015 at 00:01, Zoltan Kiss zoltan.k...@linaro.org
 mailto:zoltan.k...@linaro.org wrote:

 Applications can preset certain parts of the buffer or user area,
 so
 when that
 memory will be allocated it starts from a known state. If the
 platform
 allocates the memory during pool creation, it's enough to run the
 constructor
 after that. If it's allocating memory on demand, it should call
 the
 constructor each time.


 [Bala] Not sure if I understand the above description correctly does
 it
 mean that if the memory is allocated
 for an existing pool this call should be called only during the pool
 creation and not during each and every buffer allocation?

 I'm also not sure I understand the question :) How do you mean if the
 memory is allocated for an _existing_ pool? I mean, you can't allocate
 memory for a pool which doesn't exist.
 The key point is when that memory will be allocated. This callback
 should be called whenever the memory is allocated for the buffer. If it is
 done during pool creation (which I guess most sensible implementations 
 do),
 then it happens then. If for some reason the platform allocates memory 
 when
 someone allocates the buffer, it happens then.


 [Bala] Let me try to decrypt my query in a better way :-)

 Usually when application calls odp_pool_create() most implementations
 will allocate a memory region and map them as pointers based on a fixed
 segment size so that when the application calls odp_buffer_allocate() the
 segment of this memory pool (segments may be combined depending on the
 required size of the buffer )

 No, odp_buffer_alloc() doesn't let you specify you sizes, the buffer
 size is set when you create the pool. I think you mean packet type pools.
 In that case there could be segmentation, so a packet could have been the
 Nth segment of an another packet before.


 Yes, but this init call back will be called for each segment in a packet
 pool since each segment could become a potential packet. So in general any
 packet allocated from this pool should have the default value set before
 the very first alloc and resetting this value during subsequent allocation
 is an application responsibility and implementation should not worry about
 the same.



 will be returned to the user.

 So my understanding of your requirement is that whenever the
 application calls  odp_buffer_alloc() then the user area should be reset to
 a predefined value.

 No. so when that memory will be allocated. The callback is only called
 when the memory underlying the buffer gets allocated. Beware, allocating a
 buffer and allocating the memory can be two different things in this
 context. At least linux-generic and ODP-DPDK allocates the memory when the
 pool is created, not when the buffer is allocated.


 So in-case that an application does the following
 1. odp_buffer_alloc()
 2. odp_buffer_free()
 3. odp_buffer_alloc()

 For simplicity lets assume the implementation returned the same buffer
 during the second odp_buffer_alloc() call ( Basically the buffer gets
 reused ) then the implementation should have reset the user-area of the
 buffer before returning it to the application. Is this the requirement?

 No. The only requirement is that when that memory will be allocated it
 starts from a known state. The implementation should call the callback
 only when that memory is allocated. Therefore, if the memory underlying
 this buffer were not released and reallocated between an alloc and free,
 this callback shouldn't be called


 I understand your use-case now please add the description in the API
 definition that this initialisation is not required when the buffer gets
 reused after a free and it is only needed when memory is created the very
 first time.


 [Bala] Missed a sentence here

Re: [lng-odp] [API-NEXT PATCHv7 05/13] api: schedule: revised definition of odp_schedule_release_ordered

2015-08-05 Thread Bala Manoharan
Hi,

On 5 August 2015 at 04:11, Bill Fischofer bill.fischo...@linaro.org wrote:

 Signed-off-by: Bill Fischofer bill.fischo...@linaro.org
 ---
  include/odp/api/schedule.h | 38 --
  1 file changed, 24 insertions(+), 14 deletions(-)

 diff --git a/include/odp/api/schedule.h b/include/odp/api/schedule.h
 index 95fc8df..0ab91e4 100644
 --- a/include/odp/api/schedule.h
 +++ b/include/odp/api/schedule.h
 @@ -147,21 +147,31 @@ void odp_schedule_resume(void);
  void odp_schedule_release_atomic(void);

  /**
 - * Release the current ordered context
 - *
 - * This call is valid only for source queues with ordered
 synchronization. It
 - * hints the scheduler that the user has done all enqueues that need to
 maintain
 - * event order in the current ordered context. The scheduler is allowed to
 - * release the ordered context of this thread and avoid reordering any
 following
 - * enqueues. However, the context may be still held until the next
 - * odp_schedule() or odp_schedule_multi() call - this call allows but
 does not
 - * force the scheduler to release the context early.
 - *
 - * Early ordered context release may increase parallelism and thus system
 - * performance, since scheduler may start reordering events sooner than
 the next
 - * schedule call.
 + * Release the order associated with an event
 + *
 + * This call tells the scheduler that order no longer needs to be
 maintained
 + * for the specified event. This call is needed if, for example, the
 caller


I believe we had agreed to modify the above sentence to include "specified
event in this ordered context".

Regards,
Bala

 + * will free or otherwise dispose of an event that came from an ordered
 queue
 + * without enqueuing it to another queue. This call does not affect the
 + * ordering associated with any other event held by the caller.
 + *
 + * Order release may increase parallelism and thus system performance,
 since
 + * the scheduler may start resolving reordered events sooner than the next
 + * odp_queue_enq() call.
 + *
 + * @param ev  The event to be released from order preservation.
 + *
 + * @retval 0  Success. Upon return ev behaves as if it originated
 + *from a parallel rather than an ordered queue.
 + *
 + * @retval <0 Failure. This can occur if the event did not originate
 + *from an ordered queue (caller error) or the
 implementation
 + *is unable to release order at this time. In this case,
 + *the caller must not dispose of ev without enqueuing it
 + *first to avoid deadlocking other events originating from
 + *ev's ordered queue.
   */
 -void odp_schedule_release_ordered(void);
 +int odp_schedule_release_ordered(odp_event_t ev);

  /**
   * Prefetch events for next schedule call
 --
 2.1.4




Re: [lng-odp] [API-NEXT] validation: classification: added additional suite to test individual PMRs

2015-08-03 Thread Bala Manoharan
On 3 August 2015 at 21:23, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 As a proposition, we had talk some time ago about,
 You moved packet creation to common classification file.
 Maybe it's time to assign correct src/dst MAC address, taken from pktio?
 In separate patch.


Yes. I agree we need to correct them.
A separate patch is better as it does not require much discussion.

Regards,
Bala



 --
 Regards,
 Ivan Khoronzhuk



Re: [lng-odp] [API-NEXT] validation: classification: added additional suite to test individual PMRs

2015-08-03 Thread Bala Manoharan
On 3 August 2015 at 21:44, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 One more issue

 On 30.07.15 18:20, Balasubramanian Manoharan wrote:

 Additional test suite is added to classification validation suite to test
 individual PMRs. This suite will test the defined PMRs by configuring
 pktio separately for every test case.

 Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org

 ---
   helper/include/odp/helper/tcp.h|   4 +
   test/validation/classification/Makefile.am |   2 +
   test/validation/classification/classification.c|   5 +
   .../classification/odp_classification_common.c | 200 +++
   .../classification/odp_classification_test_pmr.c   | 629
 +
   .../classification/odp_classification_tests.c  | 116 +---
   .../classification/odp_classification_testsuites.h |   7 +
   7 files changed, 854 insertions(+), 109 deletions(-)
   create mode 100644
 test/validation/classification/odp_classification_common.c
   create mode 100644
 test/validation/classification/odp_classification_test_pmr.c

 diff --git a/helper/include/odp/helper/tcp.h
 b/helper/include/odp/helper/tcp.h
 index defe422..b52784d 100644
 --- a/helper/include/odp/helper/tcp.h
 +++ b/helper/include/odp/helper/tcp.h
 @@ -26,6 +26,10 @@ extern "C" {
*  @{
*/

 +/** TCP header length (Minimum Header length without options)*/
 +/** If options field is added to TCP header then the correct header value
 +should be updated by the application */
 +#define ODPH_TCPHDR_LEN 20

   /** TCP header */
   typedef struct ODP_PACKED {
 diff --git a/test/validation/classification/Makefile.am
 b/test/validation/classification/Makefile.am
 index ba468fa..050d5e6 100644
 --- a/test/validation/classification/Makefile.am
 +++ b/test/validation/classification/Makefile.am
 @@ -3,6 +3,8 @@ include ../Makefile.inc
   noinst_LTLIBRARIES = libclassification.la

   libclassification_la_SOURCES = odp_classification_basic.c \
odp_classification_tests.c \
 +  odp_classification_test_pmr.c \
 +  odp_classification_common.c \
classification.c

   bin_PROGRAMS = classification_main$(EXEEXT)
 diff --git a/test/validation/classification/classification.c
 b/test/validation/classification/classification.c
 index 2582aaa..aec0655 100644
 --- a/test/validation/classification/classification.c
 +++ b/test/validation/classification/classification.c
 @@ -18,6 +18,11 @@ static CU_SuiteInfo classification_suites[] = {
 .pInitFunc = classification_suite_init,
 .pCleanupFunc = classification_suite_term,
 },
 +   { .pName = "classification pmr tests",
 +   .pTests = classification_test_pmr,
 +   .pInitFunc = classification_test_pmr_init,
 +   .pCleanupFunc = classification_test_pmr_term,
 +   },
 CU_SUITE_INFO_NULL,
   };

 diff --git a/test/validation/classification/odp_classification_common.c
 b/test/validation/classification/odp_classification_common.c
 new file mode 100644
 index 000..db994c6
 --- /dev/null
 +++ b/test/validation/classification/odp_classification_common.c


 

  +
 +   /* set pkt sequence number */
 +   cls_pkt_set_seq(pkt, flag_udp);
 +
 +   return pkt;
 +}
 diff --git a/test/validation/classification/odp_classification_test_pmr.c
 b/test/validation/classification/odp_classification_test_pmr.c
 new file mode 100644
 index 000..c18beaf
 --- /dev/null
 +++ b/test/validation/classification/odp_classification_test_pmr.c
 @@ -0,0 +1,629 @@


 ...


  +
 +static inline
 +odp_queue_t queue_create(char *queuename, bool sched)
 +{
 +   odp_queue_t queue;
 +
 +   if (sched) {
 +   odp_queue_param_t qparam;
 +
 +   qparam.sched.prio = ODP_SCHED_PRIO_HIGHEST;
 +   qparam.sched.sync = ODP_SCHED_SYNC_NONE;
 +   qparam.sched.group = ODP_SCHED_GROUP_ALL;
 +
 +   queue = odp_queue_create(queuename,
 +ODP_QUEUE_TYPE_SCHED,
 +&qparam);
 +   } else {
 +   queue = odp_queue_create(queuename,
 +ODP_QUEUE_TYPE_POLL,
 +NULL);
 +   }
 +
 +   return queue;
 +}
 +
 +static uint32_t cls_pkt_get_seq(odp_packet_t pkt)


 I think it should be moved to common file also.

  +{
 +   uint32_t offset;
 +   cls_test_packet_t data;
 +
 +   offset = odp_packet_l4_offset(pkt);
 +   if (offset) {
 +   odp_packet_copydata_out(pkt, offset + ODPH_UDPHDR_LEN,
 +   sizeof(data), &data);


 what in case of TCP? Incorrect seq num?

Looks like a coding error. TCP should be added using a boolean flag.

Incorrect seq num is tested in the ASSERT function after receiving the
packet. Seq num is used to check whether the received packet is the same
which was sent.

Re: [lng-odp] [API-NEXT] validation: classification: added additional suite to test individual PMRs

2015-08-03 Thread Bala Manoharan
On 3 August 2015 at 22:01, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:



 On 03.08.15 19:24, Bala Manoharan wrote:



 On 3 August 2015 at 21:44, Ivan Khoronzhuk ivan.khoronz...@linaro.org
 mailto:ivan.khoronz...@linaro.org wrote:

 One more issue

 On 30.07.15 18:20, Balasubramanian Manoharan wrote:

 Additional test suite is added to classification validation
 suite to test
 individual PMRs. This suite will test the defined PMRs by
 configuring
 pktio separately for every test case.

 Signed-off-by: Balasubramanian Manoharan
 bala.manoha...@linaro.org mailto:bala.manoha...@linaro.org


 ---
helper/include/odp/helper/tcp.h|   4 +
test/validation/classification/Makefile.am |   2 +
test/validation/classification/classification.c|   5 +
.../classification/odp_classification_common.c | 200
 +++
.../classification/odp_classification_test_pmr.c   | 629
 +
.../classification/odp_classification_tests.c  | 116 +---
.../classification/odp_classification_testsuites.h |   7 +
7 files changed, 854 insertions(+), 109 deletions(-)
create mode 100644
 test/validation/classification/odp_classification_common.c
create mode 100644
 test/validation/classification/odp_classification_test_pmr.c

 diff --git a/helper/include/odp/helper/tcp.h
 b/helper/include/odp/helper/tcp.h
 index defe422..b52784d 100644
 --- a/helper/include/odp/helper/tcp.h
 +++ b/helper/include/odp/helper/tcp.h
 @@ -26,6 +26,10 @@ extern "C" {
 *  @{
 */

 +/** TCP header length (Minimum Header length without options)*/
 +/** If options field is added to TCP header then the correct
 header value
 +should be updated by the application */
 +#define ODPH_TCPHDR_LEN 20

/** TCP header */
typedef struct ODP_PACKED {
 diff --git a/test/validation/classification/Makefile.am
 b/test/validation/classification/Makefile.am
 index ba468fa..050d5e6 100644
 --- a/test/validation/classification/Makefile.am
 +++ b/test/validation/classification/Makefile.am
 @@ -3,6 +3,8 @@ include ../Makefile.inc
noinst_LTLIBRARIES = libclassification.la
 http://libclassification.la


libclassification_la_SOURCES = odp_classification_basic.c \
 odp_classification_tests.c \
 +  odp_classification_test_pmr.c \
 +  odp_classification_common.c \
 classification.c

bin_PROGRAMS = classification_main$(EXEEXT)
 diff --git a/test/validation/classification/classification.c
 b/test/validation/classification/classification.c
 index 2582aaa..aec0655 100644
 --- a/test/validation/classification/classification.c
 +++ b/test/validation/classification/classification.c
 @@ -18,6 +18,11 @@ static CU_SuiteInfo classification_suites[] = {
  .pInitFunc = classification_suite_init,
  .pCleanupFunc =
 classification_suite_term,
  },
 +   { .pName = "classification pmr tests",
 +   .pTests = classification_test_pmr,
 +   .pInitFunc = classification_test_pmr_init,
 +   .pCleanupFunc =
 classification_test_pmr_term,
 +   },
  CU_SUITE_INFO_NULL,
};

 diff --git
 a/test/validation/classification/odp_classification_common.c
 b/test/validation/classification/odp_classification_common.c
 new file mode 100644
 index 000..db994c6
 --- /dev/null
 +++ b/test/validation/classification/odp_classification_common.c


 

 +
 +   /* set pkt sequence number */
 +   cls_pkt_set_seq(pkt, flag_udp);
 +
 +   return pkt;
 +}
 diff --git
 a/test/validation/classification/odp_classification_test_pmr.c
 b/test/validation/classification/odp_classification_test_pmr.c
 new file mode 100644
 index 000..c18beaf
 --- /dev/null
 +++ b/test/validation/classification/odp_classification_test_pmr.c
 @@ -0,0 +1,629 @@


 ...


 +
 +static inline
 +odp_queue_t queue_create(char *queuename, bool sched)
 +{
 +   odp_queue_t queue;
 +
 +   if (sched) {
 +   odp_queue_param_t qparam

Re: [lng-odp] [API-NEXT] validation: classification: added additional suite to test individual PMRs

2015-08-03 Thread Bala Manoharan
On 3 August 2015 at 22:16, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 Bala,

 ...



  what in case of TCP? Incorrect seq num?

 Looks like a coding error. TCP should be added using a boolean
 flag.

 Incorrect seq num is tested in the ASSERT function after
 receiving the
 packet.
 Seq num is used to check whether the received packet is the same
 which
 was sent.


 That's not enough. You use the same fail function before and after.
 It's not correct when you read it before sending and after sending.

 It can be assigned 1 while packet creation.
 Read before sending: 
 Read after sending: 

 No errors - it's the same.
 But assigned 1, in fact that's error. And will not be caught.


 I understand your point. seq number should be tested for !=
 TEST_SEQ_INVALID using a separate assert after receiving the packet.


 Smth similar. But I'm not sure if it's needed at all.

 I'd better add a check like seq_num < 20 to cover all seq numbers that are
 not possible. (Maybe 20 or another num, but it shouldn't be
 very big)


But seq_num is something you generate and add right? So why do you need to
check for multiple numbers?
we just need to know if the packet is the same which we sent through
scheduler/poll queue.
Do you return different numbers for different error case or something?

Regards,
Bala





 PS:
   I propose also rename create_packet_tcp() to create_packet().
   It can create TCP and UDP packets.


 I wanted to make this change before sending the patch. Looks like I had
 missed and sent the patch before.

 Regards,
 Bala





  +
  +   if (data.magic == DATA_MAGIC)
  +   return data.seq;
  +   }
  +
  +   return TEST_SEQ_INVALID;
  +}
  +


  ...

  --
  Regards,
  Ivan Khoronzhuk

 Regards,
 Bala


 --
 Regards,
 Ivan Khoronzhuk



 --
 Regards,
 Ivan Khoronzhuk

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [API-NEXT] validation: classification: added additional suite to test individual PMRs

2015-08-03 Thread Bala Manoharan
Hi,

On 3 August 2015 at 17:52, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:


 Hi, Bala

 On 30.07.15 18:20, Balasubramanian Manoharan wrote:

 Additional test suite is added to classification validation suite to test
 individual PMRs. This suite will test the defined PMRs by configuring
 pktio separately for every test case.

 Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org

 ---
   helper/include/odp/helper/tcp.h|   4 +
   test/validation/classification/Makefile.am |   2 +
   test/validation/classification/classification.c|   5 +
   .../classification/odp_classification_common.c | 200 +++
   .../classification/odp_classification_test_pmr.c   | 629
 +
   .../classification/odp_classification_tests.c  | 116 +---
   .../classification/odp_classification_testsuites.h |   7 +
   7 files changed, 854 insertions(+), 109 deletions(-)
   create mode 100644
 test/validation/classification/odp_classification_common.c
   create mode 100644
 test/validation/classification/odp_classification_test_pmr.c

 diff --git a/helper/include/odp/helper/tcp.h
 b/helper/include/odp/helper/tcp.h
 index defe422..b52784d 100644
 --- a/helper/include/odp/helper/tcp.h
 +++ b/helper/include/odp/helper/tcp.h
 @@ -26,6 +26,10 @@ extern "C" {
*  @{
*/

 +/** TCP header length (Minimum Header length without options)*/
 +/** If options field is added to TCP header then the correct header value
 +should be updated by the application */
 +#define ODPH_TCPHDR_LEN 20

   /** TCP header */
   typedef struct ODP_PACKED {
 diff --git a/test/validation/classification/Makefile.am
 b/test/validation/classification/Makefile.am
 index ba468fa..050d5e6 100644
 --- a/test/validation/classification/Makefile.am
 +++ b/test/validation/classification/Makefile.am
 @@ -3,6 +3,8 @@ include ../Makefile.inc
   noinst_LTLIBRARIES = libclassification.la

   libclassification_la_SOURCES = odp_classification_basic.c \
odp_classification_tests.c \
 +  odp_classification_test_pmr.c \
 +  odp_classification_common.c \
classification.c

   bin_PROGRAMS = classification_main$(EXEEXT)
 diff --git a/test/validation/classification/classification.c
 b/test/validation/classification/classification.c
 index 2582aaa..aec0655 100644
 --- a/test/validation/classification/classification.c
 +++ b/test/validation/classification/classification.c
 @@ -18,6 +18,11 @@ static CU_SuiteInfo classification_suites[] = {
 .pInitFunc = classification_suite_init,
 .pCleanupFunc = classification_suite_term,
 },
 +   { .pName = "classification pmr tests",
 +   .pTests = classification_test_pmr,
 +   .pInitFunc = classification_test_pmr_init,
 +   .pCleanupFunc = classification_test_pmr_term,
 +   },
 CU_SUITE_INFO_NULL,
   };

 diff --git a/test/validation/classification/odp_classification_common.c
 b/test/validation/classification/odp_classification_common.c
 new file mode 100644
 index 000..db994c6
 --- /dev/null
 +++ b/test/validation/classification/odp_classification_common.c
 @@ -0,0 +1,200 @@
 +/* Copyright (c) 2015, Linaro Limited
 + * All rights reserved.
 + *
 + * SPDX-License-Identifier: BSD-3-Clause
 + */
 +
 +#include "odp_classification_testsuites.h"
 +#include "odp_cunit_common.h"
 +#include <odp/helper/eth.h>
 +#include <odp/helper/ip.h>
 +#include <odp/helper/udp.h>
 +#include <odp/helper/tcp.h>
 +
 +#define SHM_PKT_NUM_BUFS        32
 +#define SHM_PKT_BUF_SIZE        1024
 +
 +#define CLS_DEFAULT_SADDR  "10.0.0.1/32"
 +#define CLS_DEFAULT_DADDR  "10.0.0.100/32"
 +#define CLS_DEFAULT_SPORT  1024
 +#define CLS_DEFAULT_DPORT  2048
 +
 +#define CLS_TEST_SPORT 4096
 +#define CLS_TEST_DPORT 8192
 +#define CLS_TEST_DADDR "10.0.0.5/32"

 +
 +/* Test Packet values */
 +#define DATA_MAGIC 0x01020304
 +#define TEST_SEQ_INVALID   ((uint32_t)~0)
 +
 +/** sequence number of IP packets */
 +odp_atomic_u32_t seq;
 +
 +typedef struct cls_test_packet {
 +   uint32be_t magic;
 +   uint32be_t seq;
 +} cls_test_packet_t;
 +
 +static int cls_pkt_set_seq(odp_packet_t pkt, bool flag_udp)
 +{
 +   static uint32_t seq;
 +   cls_test_packet_t data;
 +   uint32_t offset;
 +   int status;
 +
 +   data.magic = DATA_MAGIC;
 +   data.seq = ++seq;
 +
 +   offset = odp_packet_l4_offset(pkt);
 +   CU_ASSERT_FATAL(offset != 0);
 +
 +   if (flag_udp)
 +   status = odp_packet_copydata_in(pkt, offset +
 ODPH_UDPHDR_LEN,
 +   sizeof(data), &data);
 +   else
 +   status = odp_packet_copydata_in(pkt, offset +
 ODPH_TCPHDR_LEN,
 +   sizeof(data), &data);
 +
 +   return status;
 +}

Re: [lng-odp] [API-NEXT PATCHv3 5/6] api: schedule: revised definition of odp_schedule_release_ordered

2015-07-31 Thread Bala Manoharan
On 31 July 2015 at 17:48, Bill Fischofer bill.fischo...@linaro.org wrote:



 On Fri, Jul 31, 2015 at 1:38 AM, Bala Manoharan bala.manoha...@linaro.org
  wrote:

 Hi,

 Comments inline...

 On 31 July 2015 at 08:11, Bill Fischofer bill.fischo...@linaro.org
 wrote:

 Signed-off-by: Bill Fischofer bill.fischo...@linaro.org
 ---
  include/odp/api/schedule.h | 38 --
  1 file changed, 24 insertions(+), 14 deletions(-)

 diff --git a/include/odp/api/schedule.h b/include/odp/api/schedule.h
 index 95fc8df..0ab91e4 100644
 --- a/include/odp/api/schedule.h
 +++ b/include/odp/api/schedule.h
 @@ -147,21 +147,31 @@ void odp_schedule_resume(void);
  void odp_schedule_release_atomic(void);

  /**
 - * Release the current ordered context
 - *
 - * This call is valid only for source queues with ordered
 synchronization. It
 - * hints the scheduler that the user has done all enqueues that need to
 maintain
 - * event order in the current ordered context. The scheduler is allowed
 to
 - * release the ordered context of this thread and avoid reordering any
 following
 - * enqueues. However, the context may be still held until the next
 - * odp_schedule() or odp_schedule_multi() call - this call allows but
 does not
 - * force the scheduler to release the context early.
 - *
 - * Early ordered context release may increase parallelism and thus
 system
 - * performance, since scheduler may start reordering events sooner than
 the next
 - * schedule call.
 + * Release the order associated with an event
 + *
 + * This call tells the scheduler that order no longer needs to be
 maintained
 + * for the specified event. This call is needed if, for example, the
 caller


 [Bala]  This release ordered is for the specified event in this ordered
 context only. coz it is always possible
 that this event may get enqueued into other ordered/Atomic queue and the
 ordering should be maintained
 in that context.


 Not sure I understand your point here.  Ordering is something that is
 inherited from a source queue and each queue is independent.  If you pass
 events through a series of ordered queues then you get end-to-end ordering,
 however if you release an event and then add it to another ordered queue it
 will maintain order from that point however might be reordered with respect
 to the original queue from which it was released.


I agree with your description above. My point was that the new description
says

* This call tells the scheduler that order no longer needs to be
maintained for the specified event *
but the interpretation means that the order will not be maintained for the
specified event hereafter. But actually what this call does is it removes
ordering for this event in this specific ordered queue (ordered context )
as you explained above the ordering will be maintained by the scheduler for
this specified event when it gets enqueued into another ordered/ atomic
queue. so IMO the description should be*  This call tells the scheduler
that order no longer needs to be maintained for the specified event in the
current ordered context *




  + * will free or otherwise dispose of an event that came from an ordered
 queue
 + * without enqueuing it to another queue. This call does not affect the


 [Bala] The use-case of freeing or disposing the event can be handled
 implicitly by the implementation
 since freeing should be done by calling odp_event_free() API and the
 implementation is free to release
 ordered context during free.


 I thought of that, however that would add a lot of unnecessary checking
 overhead to the free path, which is itself a performance path.  If we're
 going to provide the ability to release order then I don't think it's
 unreasonable to ask the application to use that in a disciplined manner.


Yes. But let's say an application does odp_event_free() without calling
this call; then the implementation will have to release the ordering for
that event, coz this specific case will cause a queue halt if the
application calls odp_event_free() without calling the release ordered() API.

Regards,
Bala
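To make the disciplined-use contract concrete, the disposal pattern implied by the proposed API could look roughly like this (pseudocode against the draft odp_schedule_release_ordered(); `src_queue`, `dst_queue`, and `done_with()` are placeholders, and this is not compilable standalone):

```
odp_event_t ev = odp_schedule(&src_queue, ODP_SCHED_WAIT);

if (done_with(ev)) {
        /* Release order BEFORE freeing; per the draft, a failed
         * release means ev must be enqueued, not freed, or later
         * events from ev's ordered queue may deadlock behind it. */
        if (odp_schedule_release_ordered(ev) == 0)
                odp_event_free(ev);
        else
                odp_queue_enq(dst_queue, ev);
} else {
        odp_queue_enq(dst_queue, ev);   /* order preserved on enqueue */
}
```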





 + * ordering associated with any other event held by the caller.
 + *
 + * Order release may increase parallelism and thus system performance,
 since
 + * the scheduler may start resolving reordered events sooner than the
 next
 + * odp_queue_enq() call.
 + *
 + * @param ev  The event to be released from order preservation.
 + *
 + * @retval 0  Success. Upon return ev behaves as if it originated
 + *from a parallel rather than an ordered queue.
 + *
 + * @retval <0 Failure. This can occur if the event did not originate
 + *from an ordered queue (caller error) or the
 implementation
 + *is unable to release order at this time. In this case,
 + *the caller must not dispose of ev without enqueing it
 + *first to avoid deadlocking other events originating
 from
 + *ev's ordered

Re: [lng-odp] [API-NEXT PATCHv3 5/6] api: schedule: revised definition of odp_schedule_release_ordered

2015-07-31 Thread Bala Manoharan
On 31 July 2015 at 20:03, Bill Fischofer bill.fischo...@linaro.org wrote:



 On Fri, Jul 31, 2015 at 6:52 AM, Bala Manoharan bala.manoha...@linaro.org
  wrote:



 On 31 July 2015 at 17:48, Bill Fischofer bill.fischo...@linaro.org
 wrote:



 On Fri, Jul 31, 2015 at 1:38 AM, Bala Manoharan 
 bala.manoha...@linaro.org wrote:

 Hi,

 Comments inline...

 On 31 July 2015 at 08:11, Bill Fischofer bill.fischo...@linaro.org
 wrote:

 Signed-off-by: Bill Fischofer bill.fischo...@linaro.org
 ---
  include/odp/api/schedule.h | 38 --
  1 file changed, 24 insertions(+), 14 deletions(-)

 diff --git a/include/odp/api/schedule.h b/include/odp/api/schedule.h
 index 95fc8df..0ab91e4 100644
 --- a/include/odp/api/schedule.h
 +++ b/include/odp/api/schedule.h
 @@ -147,21 +147,31 @@ void odp_schedule_resume(void);
  void odp_schedule_release_atomic(void);

  /**
 - * Release the current ordered context
 - *
 - * This call is valid only for source queues with ordered
 synchronization. It
 - * hints the scheduler that the user has done all enqueues that need
 to maintain
 - * event order in the current ordered context. The scheduler is
 allowed to
 - * release the ordered context of this thread and avoid reordering
 any following
 - * enqueues. However, the context may be still held until the next
 - * odp_schedule() or odp_schedule_multi() call - this call allows but
 does not
 - * force the scheduler to release the context early.
 - *
 - * Early ordered context release may increase parallelism and thus
 system
 - * performance, since scheduler may start reordering events sooner
 than the next
 - * schedule call.
 + * Release the order associated with an event
 + *
 + * This call tells the scheduler that order no longer needs to be
 maintained
 + * for the specified event. This call is needed if, for example, the
 caller


 [Bala]  This release ordered is for the specified event in this ordered
 context only. coz it is always possible
 that this event may get enqueued into other ordered/Atomic queue and
 the ordering should be maintained
 in that context.


 Not sure I understand your point here.  Ordering is something that is
 inherited from a source queue and each queue is independent.  If you pass
 events through a series of ordered queues then you get end-to-end ordering,
 however if you release an event and then add it to another ordered queue it
 will maintain order from that point however might be reordered with respect
 to the original queue from which it was released.


 I agree with your description above. My point was that the new
 description says

 * This call tells the scheduler that order no longer needs to be
 maintained for the specified event *
 but the interpretation means that the order will not be maintained for
 the specified event hereafter. But actually what this call does is it
 removes ordering for this event in this specific ordered queue (ordered
 context ) as you explained above the ordering will be maintained by the
 scheduler for this specified event when it gets enqueued into another
 ordered/ atomic queue. so IMO the description should be*  This call
 tells the scheduler that order no longer needs to be maintained for the
 specified event in the current ordered context *


 OK, I agree.  I'll expand the documentation to make that point clear that
 the ordering is released only for the current context.






  + * will free or otherwise dispose of an event that came from an
 ordered queue
 + * without enqueuing it to another queue. This call does not affect
 the


 [Bala] The use-case of freeing or disposing the event can be handled
 implicitly by the implementation
 since freeing should be done by calling odp_event_free() API and the
 implementation is free to release
 ordered context during free.


 I thought of that, however that would add a lot of unnecessary checking
 overhead to the free path, which is itself a performance path.  If we're
 going to provide the ability to release order then I don't think it's
 unreasonable to ask the application to use that in a disciplined manner.


 Yes. But let's say an application does odp_event_free() without calling
 this call; then the implementation will have to release the ordering for
 that event, coz this specific case will cause a queue halt if the
 application calls odp_event_free() without calling the release ordered() API.


 No different than any number of other possible programming errors on the
 part of the application (e.g., leaks memory, tries to double-free, forgets
 to release a lock, etc.).  Why should this one be special?  I think we just
 need to make it clear that if the application wishes to use ordered queues
 that it needs to use them correctly, and explain how to use this API
 properly in that context.


But this release ordered context is different because the ODP expectation is
that the ordered context is released implicitly in case of an odp_schedule()
call, so extending from that logic seems reasonable

Re: [lng-odp] [API-NEXT] validation: classification: added additional suite to test individual PMRs

2015-07-31 Thread Bala Manoharan
Hi Christophe,

Thanks for pointing this out :) I had started this work before the naming
conventions were mandated.
I will follow the naming conventions followed here before my final patch is
out.

I will update these changes along with review comments I get ;)

Regards,
Bala

On 31 July 2015 at 21:17, Christophe Milard christophe.mil...@linaro.org
wrote:

 I am sorry, as this is not well documented yet (I am currently working on
 the documentation), but this patch does not respect the naming convention
 for test function, test-suite...

 Possibly you started your work on this before my modification went in...

 Anyway here follows the naming convention for testing:
 (I regard this naming as important as all tests are now libifized -part
 of a library-, and naming chaos would make this lib unusable)

 Once, again sorry you hit this transition period...

 Here follows the naming convention:

 * Tests, i.e. functions which are used in CUNIT test suites are named:
Module_test_*
 * Test arrays, i.e. arrays of CU_TestInfo, listing the test functions
  belonging to a suite, are called:
Module_suite[_*]
  where the possible suffix can be used if many suites are declared.
 * CUNIT suite init and termination functions are called:
Module_suite[_*]_init() and Module_suite[_*]_term()
  respectively.
 * Suite arrays, i.e. arrays of CU_SuiteInfo used in executables are called:
Module_suites[_*]
  where the possible suffix identifies the executable using it, if many.
 * Main executable function(s), are called:
Module_main[_*]
  where the possible suffix identifies the executable using it
 * Init/term function for the whole executable are called:
Module_init
Module_term

 All test tests have now been transformed to match this new naming, so you
 should see this in classification tests as well on recent checkouts.

 Thanks,

 Christophe.

 On 30 July 2015 at 17:20, Balasubramanian Manoharan 
 bala.manoha...@linaro.org wrote:

 Additional test suite is added to classification validation suite to test
 individual PMRs. This suite will test the defined PMRs by configuring
 pktio separately for every test case.

 Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org

 ---
  helper/include/odp/helper/tcp.h|   4 +
  test/validation/classification/Makefile.am |   2 +
  test/validation/classification/classification.c|   5 +
  .../classification/odp_classification_common.c | 200 +++
  .../classification/odp_classification_test_pmr.c   | 629
 +
  .../classification/odp_classification_tests.c  | 116 +---
  .../classification/odp_classification_testsuites.h |   7 +
  7 files changed, 854 insertions(+), 109 deletions(-)
  create mode 100644
 test/validation/classification/odp_classification_common.c
  create mode 100644
 test/validation/classification/odp_classification_test_pmr.c

 diff --git a/helper/include/odp/helper/tcp.h
 b/helper/include/odp/helper/tcp.h
 index defe422..b52784d 100644
 --- a/helper/include/odp/helper/tcp.h
 +++ b/helper/include/odp/helper/tcp.h
 @@ -26,6 +26,10 @@ extern "C" {
   *  @{
   */

 +/** TCP header length (Minimum Header length without options)*/
 +/** If options field is added to TCP header then the correct header value
 +should be updated by the application */
 +#define ODPH_TCPHDR_LEN 20

  /** TCP header */
  typedef struct ODP_PACKED {
 diff --git a/test/validation/classification/Makefile.am
 b/test/validation/classification/Makefile.am
 index ba468fa..050d5e6 100644
 --- a/test/validation/classification/Makefile.am
 +++ b/test/validation/classification/Makefile.am
 @@ -3,6 +3,8 @@ include ../Makefile.inc
  noinst_LTLIBRARIES = libclassification.la

  libclassification_la_SOURCES = odp_classification_basic.c \
odp_classification_tests.c \
 +  odp_classification_test_pmr.c \
 +  odp_classification_common.c \
classification.c

  bin_PROGRAMS = classification_main$(EXEEXT)
 diff --git a/test/validation/classification/classification.c
 b/test/validation/classification/classification.c
 index 2582aaa..aec0655 100644
 --- a/test/validation/classification/classification.c
 +++ b/test/validation/classification/classification.c
 @@ -18,6 +18,11 @@ static CU_SuiteInfo classification_suites[] = {
 .pInitFunc = classification_suite_init,
 .pCleanupFunc = classification_suite_term,
 },
 +   { .pName = "classification pmr tests",
 +   .pTests = classification_test_pmr,
 +   .pInitFunc = classification_test_pmr_init,
 +   .pCleanupFunc = classification_test_pmr_term,
 +   },
 CU_SUITE_INFO_NULL,
  };

 diff --git a/test/validation/classification/odp_classification_common.c
 b/test/validation/classification/odp_classification_common.c
 new file mode 100644
 index 

Re: [lng-odp] [API-NEXT PATCHv3 5/6] api: schedule: revised definition of odp_schedule_release_ordered

2015-07-31 Thread Bala Manoharan
Hi,

Comments inline...

On 31 July 2015 at 08:11, Bill Fischofer bill.fischo...@linaro.org wrote:

 Signed-off-by: Bill Fischofer bill.fischo...@linaro.org
 ---
  include/odp/api/schedule.h | 38 --
  1 file changed, 24 insertions(+), 14 deletions(-)

 diff --git a/include/odp/api/schedule.h b/include/odp/api/schedule.h
 index 95fc8df..0ab91e4 100644
 --- a/include/odp/api/schedule.h
 +++ b/include/odp/api/schedule.h
 @@ -147,21 +147,31 @@ void odp_schedule_resume(void);
  void odp_schedule_release_atomic(void);

  /**
 - * Release the current ordered context
 - *
 - * This call is valid only for source queues with ordered
 synchronization. It
 - * hints the scheduler that the user has done all enqueues that need to
 maintain
 - * event order in the current ordered context. The scheduler is allowed to
 - * release the ordered context of this thread and avoid reordering any
 following
 - * enqueues. However, the context may be still held until the next
 - * odp_schedule() or odp_schedule_multi() call - this call allows but
 does not
 - * force the scheduler to release the context early.
 - *
 - * Early ordered context release may increase parallelism and thus system
 - * performance, since scheduler may start reordering events sooner than
 the next
 - * schedule call.
 + * Release the order associated with an event
 + *
 + * This call tells the scheduler that order no longer needs to be
 maintained
 + * for the specified event. This call is needed if, for example, the
 caller


[Bala]  This release ordered is for the specified event in this ordered
context only. coz it is always possible
that this event may get enqueued into other ordered/Atomic queue and the
ordering should be maintained
in that context.

 + * will free or otherwise dispose of an event that came from an ordered
 queue
 + * without enqueuing it to another queue. This call does not affect the


[Bala] The use-case of freeing or disposing the event can be handled
implicitly by the implementation
since freeing should be done by calling odp_event_free() API and the
implementation is free to release
ordered context during free.



 + * ordering associated with any other event held by the caller.
 + *
 + * Order release may increase parallelism and thus system performance,
 since
 + * the scheduler may start resolving reordered events sooner than the next
 + * odp_queue_enq() call.
 + *
 + * @param ev  The event to be released from order preservation.
 + *
 + * @retval 0  Success. Upon return ev behaves as if it originated
 + *from a parallel rather than an ordered queue.
 + *
 + * @retval <0 Failure. This can occur if the event did not originate
 + *from an ordered queue (caller error) or the
 implementation
 + *is unable to release order at this time. In this case,
 + *the caller must not dispose of ev without enqueing it
 + *first to avoid deadlocking other events originating from
 + *ev's ordered queue.
   */
 -void odp_schedule_release_ordered(void);
 +int odp_schedule_release_ordered(odp_event_t ev);

  /**
   * Prefetch events for next schedule call


Regards,
Bala

 --
 2.1.4



Re: [lng-odp] RFC - Application counters helpers

2015-07-30 Thread Bala Manoharan
Hi,

Yes. The above use-case makes sense.
Also we need to document explicit limitation that these variables should be
created before starting the worker threads and that dynamic creation of
variables should be avoided.

Regards,
Bala

On 30 July 2015 at 13:29, Alexandru Badicioiu 
alexandru.badici...@linaro.org wrote:

 Bala,
 I agree that functions to subtract from a counter or reset it to 0 or
 other value might be required for the completeness.
 I don't see these functions as APIs (since the implementation is the same
 for all HW) but only as an application helper and only in case the
 application uses threads as workers. Implementation defined counters is
 something that we may address in the future.

 These helpers can be used the following way:
 Each application defines its own counters, for example in case of
 odp_ipsec one counter can be number of packets/bytes per SA. Based on
 command line switches (e.g. -d packets:sa -d bytes:sa) the main application
 thread builds a global list of counter creation requests before creating
 worker threads.
 When each worker thread starts, it walks the list, creates each requested
 counter and attaches the counter handle to the object it refers to, e.g:
 typedef struct sa_db_entry_s {
 struct sa_db_entry_s *next;  /** Next entry on list */
 ..
 odph_counter_t packets;   /** Counter for SA packets */
 }
 sa_db_entry->packets = odph_counter_create_local(ODPH_COUNTER_SIZE_32BIT);

 The assumption here (for simplicity) is that each counter creation call
 returns the same handle in each thread - maybe this should change into
 something like :
 cnt = odph_counter_create_local(ODPH_COUNTER_SIZE_32BIT);
 odph_counter_attach(cnt, sa_db_entry->packets_cnt_list);
 where packets member of sa_db_entry changes into an array of
 odph_counter_t .

 Each time an SA has processed a packet, the processing thread increments
 (or adds too) the counter:
 odph_counter_add(entry->cipher_sa->packets, 1);
 odph_counter_add(entry->auth_sa->packets, 1);

 The main thread has the job to display the global counter value and what
 it does is :
 foreach_sa_db_entry(display_sa_counters, arg);

 static int
 display_sa_counters(sa_db_entry_t *sa_db_entry, void *arg)
 {
 odp_counter_val_t val;
 int len;
 counters_arg_t *_arg = (counters_arg_t *)arg;
 if (sa_db_entry->packets != ODPH_COUNTER_INVALID)
 val = odph_counter_read_global(_arg->thread_tbl, _arg->num,
  sa_db_entry->packets);
..
 }

 Hope this helps,
 Alex

 On 30 July 2015 at 07:20, Bala Manoharan bala.manoha...@linaro.org
 wrote:

 Hi,

 Maybe we need additional API for initialising the counter to reset it to
 zero and also a need for decrementing the counter?

 IMO, we need to properly document the use-case of these counter API
 functions; since these counters are thread-specific, what will be the
 difference between using these APIs and just defining a new thread-specific
 variable in ODP?

 Regards,
 Bala


 On 29 July 2015 at 02:47, Bill Fischofer bill.fischo...@linaro.org
 wrote:

 Since counters are likely to be found on performance paths, I'm
 wondering if it would make more sense to separate 32 and 64 bit counters
 (or just have all counters be 64 bit by default) rather than trying to do
 unions that presumably then require discriminator functions to access.

 On Tue, Jul 28, 2015 at 7:54 AM, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:

 Hi,
 I would like to get your opinions about the opportunity to introduce
 helper functions to enable applications to define and retrieve counters
 for various events or application-defined objects, e.g. IPSec SA
 requests/processed packets, IPSec policy matches, IPSec policy check
 failures, packets received/sent, etc.

 Here is a proposal for applications using threads as workers:

 typedef enum odph_counter_size {
 ODPH_COUNTER_SIZE_32BIT,
 ODPH_COUNTER_SIZE_64BIT,
 } odph_counter_size_t;

 typedef union odp_counter_val {
 uint32_t __u32;
 uint64_t __u64;
 } odp_counter_val_t;

 typedef int32_t odph_counter_t;

 #define ODPH_COUNTER_INVALID (-1)

 /* Create a local (per thread) counter */
 odph_counter_t
 odph_counter_create_local(odph_counter_size_t size);

 /* Get counter size */
 odph_counter_size_t
 odph_counter_size(odph_counter_t counter);

 /* Add a value to a local counter */
 void
 odph_counter_add(odph_counter_t counter, unsigned int val);

 /* Return local counter value */
 odp_counter_val_t
 odph_counter_read_local(odph_counter_t counter);

 /* Return global counter value by summing
 all local values */
 odp_counter_val_t
 odph_counter_read_global(odph_linux_pthread_t *thread_tbl, int num,
 odph_counter_t counter);

 I know there's ongoing work to introduce an abstract type for a
 worker; as such, these functions would not be usable in a process
 environment.
Re: [lng-odp] RFC - Application counters helpers

2015-07-29 Thread Bala Manoharan
Hi,

Maybe we need additional API for initialising the counter to reset it to
zero and also a need for decrementing the counter?

IMO, we need to properly document the use-case of these counter API
functions; since these counters are thread-specific, what will be the
difference between using these APIs and just defining a new thread-specific
variable in ODP?

Regards,
Bala

On 29 July 2015 at 02:47, Bill Fischofer bill.fischo...@linaro.org wrote:

 Since counters are likely to be found on performance paths, I'm wondering
 if it would make more sense to separate 32 and 64 bit counters (or just
 have all counters be 64 bit by default) rather than trying to do unions
 that presumably then require discriminator functions to access.

 On Tue, Jul 28, 2015 at 7:54 AM, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:

 Hi,
 I would like to get your opinions about the opportunity to introduce
  helper functions to enable applications to define and retrieve counters
  for various events or application-defined objects, e.g. IPSec SA
  requests/processed packets, IPSec policy matches, IPSec policy check
  failures, packets received/sent, etc.

 Here is a proposal for applications using threads as workers:

 typedef enum odph_counter_size {
 ODPH_COUNTER_SIZE_32BIT,
 ODPH_COUNTER_SIZE_64BIT,
 } odph_counter_size_t;

 typedef union odp_counter_val {
 uint32_t __u32;
 uint64_t __u64;
 } odp_counter_val_t;

 typedef int32_t odph_counter_t;

 #define ODPH_COUNTER_INVALID (-1)

 /* Create a local (per thread) counter */
 odph_counter_t
 odph_counter_create_local(odph_counter_size_t size);

 /* Get counter size */
 odph_counter_size_t
 odph_counter_size(odph_counter_t counter);

 /* Add a value to a local counter */
 void
 odph_counter_add(odph_counter_t counter, unsigned int val);

 /* Return local counter value */
 odp_counter_val_t
 odph_counter_read_local(odph_counter_t counter);

 /* Return global counter value by summing
 all local values */
 odp_counter_val_t
 odph_counter_read_global(odph_linux_pthread_t *thread_tbl, int num,
 odph_counter_t counter);

  I know there's ongoing work to introduce an abstract type for a
  worker; as such, these functions would not be usable in a process
  environment.

 Thanks,
 Alex


 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp







Re: [lng-odp] [API-NEXT RFC] api: packet: add define for max segmentation

2015-07-24 Thread Bala Manoharan
On 23 July 2015 at 12:09, Nicolas Morey-Chaisemartin nmo...@kalray.eu
wrote:



 On 07/23/2015 07:43 AM, Bala Manoharan wrote:



 On 21 July 2015 at 13:05, Nicolas Morey-Chaisemartin nmo...@kalray.eu wrote:



 On 07/20/2015 07:24 PM, Bala Manoharan wrote:

 Hi,

  Few comments inline

 On 20 July 2015 at 22:38, Nicolas Morey-Chaisemartin nmo...@kalray.eu wrote:

 Replace current segmentation with an explicit define.
 This mainly means two things:
  - All code can now test and check the max segmentation which will prove
useful for tests and open the way for many code optimizations.
  - The minimum segment length and the maximum buffer len can now be
 decorrelated.
This means that pool with very small footprints can be allocated for
 small packets,
 while pool for jumbo frame will still work as long as seg_len *
 ODP_CONFIG_PACKET_MAX_SEG >= packet_len

  Signed-off-by: Nicolas Morey-Chaisemartin nmo...@kalray.eu
 ---
  include/odp/api/config.h | 10 +-
  platform/linux-generic/include/odp_buffer_internal.h |  9 +++--
  platform/linux-generic/odp_pool.c|  4 ++--
  test/validation/packet/packet.c  |  3 ++-
  4 files changed, 16 insertions(+), 10 deletions(-)

 diff --git a/include/odp/api/config.h b/include/odp/api/config.h
 index b5c8fdd..1f44db6 100644
 --- a/include/odp/api/config.h
 +++ b/include/odp/api/config.h
 @@ -108,6 +108,13 @@ extern "C" {
  #define ODP_CONFIG_PACKET_SEG_LEN_MAX (64*1024)

  /**
 + * Maximum number of segments in a packet
 + *
 + * This defines the maximum number of segment buffers in a packet
 + */
 +#define ODP_CONFIG_PACKET_MAX_SEG 6


  What is the use-case of the above define? Does it mean that the packet
 should not be stored in a pool if the max number of segments is reached?
 If this is something used in the linux-generic we can define it in the
 internal header file.

  The reason is that the #defines in config.h file has to be defined by
 all the platforms.

  Regards,
 Bala

This may be a little too linux-generic oriented, I guess. What I'm
 looking for is a clean way to handle segment length vs packet length in
 pools.


  The optimisations specific to linux-generic should be in internal header
 and not in config files as any change in config file will have to be
 handled by all the platforms.


 Agreed. But supporting segmentation is still  a platform/HW related
 feature. And I think it has its place somewhere in config.h. Although
 probably not in this form.

 Maybe something as simple as ODP_CONFIG_SEGMENTATION_SUPPORT ?


 IMO, this configuration is not needed, as the application need not know
 whether the underlying HW supports the requested pool configuration through
 segments or unsegmented pools. As all the APIs for getting a packet
 pointer return the length in the current segment, the application can be
 agnostic about how the packet is stored in the pool.


   I was trying to kill two birds with one stone in this patch:
 - Be able to disable segmentation completely and add fast compile time in
 the code to avoid segment computations
 - Fix packet validation test (and maybe enhance my proposal for
 pktio/segmentation) which rely heavily on the number of supported segment.

 For testing, the main issue I guess is that there is no way to know the
 actual segment length and length used by the pool. We could go to the
 internals but that would make the tests platform specific.
 Something like odp_pool_get_seg_len() and odp_pool_get_len()  could be
 quite useful for building tests but not very interesting for end users...


   IMO the testing for segmentation should be written in such a way that
 the validation suite does not fail if the implementation has handled the
 given requirement without creating segments, since creating segments is an
 implementation optimisation, not a requirement.

   The validation suite should try to allocate a larger packet from a pool
 with a small segment size; it can only expect that the implementation has
 stored it as segments if the packet is actually segmented. If it is, the
 segment tests should be run; otherwise no error should be thrown, since by
 not creating segments the implementation has not violated any ODP
 requirement.

  Regards,
 Bala

   Yes we can probably try to alloc a packet starting from MAX_BUF_LEN and
 try again with a reduced size until we get a success.
 Definitely not very pretty, but it should be portable.


Yes. This approach should be fine.


 Having a define about SEGMENTATION support would also make it possible to
 disable/skip tests specifically relying on segmentation.

 Nicolas


Regards,
Bala


Re: [lng-odp] [API-NEXT RFC] api: packet: add define for max segmentation

2015-07-24 Thread Bala Manoharan
On 24 July 2015 at 14:44, Nicolas Morey-Chaisemartin nmo...@kalray.eu
wrote:



 On 07/24/2015 11:10 AM, Bala Manoharan wrote:



 On 23 July 2015 at 12:09, Nicolas Morey-Chaisemartin nmo...@kalray.eu wrote:



 On 07/23/2015 07:43 AM, Bala Manoharan wrote:



 On 21 July 2015 at 13:05, Nicolas Morey-Chaisemartin nmo...@kalray.eu
 wrote:




 This may be a little too linux-generic oriented, I guess. What I'm looking
 for is a clean way to handle segment length vs packet length in pools.


  The optimisations specific to linux-generic should be in internal
 header and not in config files as any change in config file will have to be
 handled by all the platforms.


  Agreed. But supporting segmentation is still  a platform/HW related
 feature. And I think it has its place somewhere in config.h. Although
 probably not in this form.

 Maybe something as simple as ODP_CONFIG_SEGMENTATION_SUPPORT ?


   IMO, this configuration is not needed, as the application need not know
  whether the underlying HW supports the requested pool configuration through
  segments or unsegmented pools. As all the APIs for getting a packet
  pointer return the length in the current segment, the application can be
  agnostic about how the packet is stored in the pool.


  The one thing it allows is avoiding copy-in/copy-out to read/write
  packets. If you do not use any fragmentation, you only have one segment and
  can always work directly within the segment.


Not sure I understand your point completely. Copying a data plane packet
is usually a costly operation, and most applications should try to avoid
this scenario. Segmentation usually helps in resource optimization, as the
total memory of the system can be handled efficiently by storing the packet
as segments.


 You can do it now by checking that the len of data in the segment is equal
 to the total len of data in the packet.
 But it needs two API calls to check that, instead of one check handled at
 compile time.


This optimization will be useful only for platforms which do not support
segmentation, as then the application can be sure that the packets are in
the first segment. For platforms which do support segmentation, the
application will have to decide at run time depending upon the size of the
segments, which is optimized per implementation.

Maybe you can provide some use-case example for having the above as a
config parameter; it would make it easier to understand the requirements
from your side.



 Nicolas


Regards,
Bala


Re: [lng-odp] [RFC/API-NEXT] api: assigning packet pool per class of service

2015-07-24 Thread Bala Manoharan
Agreed. Please raise a BUG against me on this topic and I will send a patch
to change them.

Regards,
Bala

On 24 July 2015 at 17:38, Bill Fischofer bill.fischo...@linaro.org wrote:

 For consistency with ODP naming conventions there should be a standard
 getter/setter for this information that have the following signatures:

 odp_pool_t  odp_cos_pool(odp_cos_t cos);

 and

 int odp_cos_pool_set(odp_cos_t cos, odp_pool_t pool);

 And while we're at it, we should make the rest of the classifier APIs
 consistent with the rest of ODP:

 odp_cos_set_queue() --> odp_cos_queue()/odp_cos_queue_set()

 odp_cos_set_drop() --> odp_cos_drop()/odp_cos_drop_set()




 On Fri, Jul 24, 2015 at 6:52 AM, Ivan Khoronzhuk 
 ivan.khoronz...@linaro.org wrote:

 I though it's already in API :),
 Only now realized that it was only in local tree.
 I think it's must have feature.

 On 24.07.15 13:51, Balasubramanian Manoharan wrote:


 This API proposal links a packet pool to a CoS and the packets belonging
 to this CoS will be allocated from this packet pool.
 This packet pool belonging to the CoS will supersede the packet pool
 associated with the pktio interface.

 Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org

 ---
   include/odp/api/classification.h | 16 
   1 file changed, 16 insertions(+)

 diff --git a/include/odp/api/classification.h
 b/include/odp/api/classification.h
 index f597b26..7a47021 100644
 --- a/include/odp/api/classification.h
 +++ b/include/odp/api/classification.h
 @@ -358,6 +358,22 @@ int odp_pktio_pmr_match_set_cos(odp_pmr_set_t
 pmr_set_id, odp_pktio_t src_pktio,
 odp_cos_t dst_cos);

   /**
 +* Assigns a packet buffer pool for a specific Class of service.
 +* All the packets belonging to the given class of service will
 +* be allocated from the assigned packet pool.
 +*
 +* @param   cos_id  class-of-service handle
 +* @param   pool_id Packet buffer pool handle
 +*
 +* @return  0 on success
 +* @return  <0 on failure
 +*
 +* @note   The packet pool associated with CoS will supersede
 +*  the packet pool associated with the pktio interface
 +*/
 +int odp_cos_set_pool(odp_cos_t cos_id, odp_pool_t pool_id);
 +
 +/**
* Get printable value for an odp_cos_t
*
* @param hdl  odp_cos_t handle to be printed


 --
 Regards,
 Ivan Khoronzhuk





Re: [lng-odp] [API-NEXT RFC] api: packet: add define for max segmentation

2015-07-22 Thread Bala Manoharan
Hi,

Even with the current pool parameters, if the application provides a
seg_len pool parameter equal to the expected packet length, then the
implementation will have to store the entire packet in a single segment,
but it will not be performant on platforms which support segmentation. I
believe this configuration will be similar to UNSEGMENTED pools.

Regards,
Bala

On 23 July 2015 at 03:07, Bill Fischofer bill.fischo...@linaro.org wrote:

 In the original packet API design it was proposed that one of the pool
 options should be UNSEGMENTED, which says the application does not wish to
 see segments at all. The implementation would then either comply or else
 fail the pool create if it is unable to support unsegmented pools.
 However, that didn't make the cut for v1.0.  If there is a use case perhaps
 that should be revisited?

 On Tue, Jul 21, 2015 at 2:35 AM, Nicolas Morey-Chaisemartin 
 nmo...@kalray.eu wrote:



 On 07/20/2015 07:24 PM, Bala Manoharan wrote:

 Hi,

  Few comments inline

  On 20 July 2015 at 22:38, Nicolas Morey-Chaisemartin nmo...@kalray.eu wrote:

 Replace current segmentation with an explicit define.
 This mainly means two things:
  - All code can now test and check the max segmentation which will prove
useful for tests and open the way for many code optimizations.
  - The minimum segment length and the maximum buffer len can now be
 decorrelated.
This means that pool with very small footprints can be allocated for
 small packets,
 while pool for jumbo frame will still work as long as seg_len *
 ODP_CONFIG_PACKET_MAX_SEG >= packet_len

  Signed-off-by: Nicolas Morey-Chaisemartin nmo...@kalray.eu
 ---
  include/odp/api/config.h | 10 +-
  platform/linux-generic/include/odp_buffer_internal.h |  9 +++--
  platform/linux-generic/odp_pool.c|  4 ++--
  test/validation/packet/packet.c  |  3 ++-
  4 files changed, 16 insertions(+), 10 deletions(-)

 diff --git a/include/odp/api/config.h b/include/odp/api/config.h
 index b5c8fdd..1f44db6 100644
 --- a/include/odp/api/config.h
 +++ b/include/odp/api/config.h
  @@ -108,6 +108,13 @@ extern "C" {
  #define ODP_CONFIG_PACKET_SEG_LEN_MAX (64*1024)

  /**
 + * Maximum number of segments in a packet
 + *
 + * This defines the maximum number of segment buffers in a packet
 + */
 +#define ODP_CONFIG_PACKET_MAX_SEG 6


  What is the use-case of the above define? Does it mean that the packet
  should not be stored in a pool if the max number of segments is reached?
 If this is something used in the linux-generic we can define it in the
 internal header file.

  The reason is that the #defines in config.h file has to be defined by
 all the platforms.

  Regards,
 Bala

This may be a little too linux-generic oriented, I guess. What I'm
 looking for is a clean way to handle segment length vs packet length in
 pools.
 I was trying to kill two birds with one stone in this patch:
 - Be able to disable segmentation completely and add fast compile time in
 the code to avoid segment computations
 - Fix packet validation test (and maybe enhance my proposal for
 pktio/segmentation) which rely heavily on the number of supported segment.

 For testing, the main issue I guess is that there is no way to know the
 actual segment length and length used by the pool. We could go to the
 internals but that would make the tests platform specific.
 Something like odp_pool_get_seg_len() and odp_pool_get_len()  could be
  quite useful for building tests but not very interesting for end users...

 I'd still like to see some easy way to disable segmentation so user code
 can check for that and remove complex mapping, memcopying to/from packet
 and iterating on segments.







Re: [lng-odp] [API-NEXT RFC] api: packet: add define for max segmentation

2015-07-22 Thread Bala Manoharan
On 21 July 2015 at 13:05, Nicolas Morey-Chaisemartin nmo...@kalray.eu
wrote:



 On 07/20/2015 07:24 PM, Bala Manoharan wrote:

 Hi,

  Few comments inline

  On 20 July 2015 at 22:38, Nicolas Morey-Chaisemartin nmo...@kalray.eu wrote:

 Replace current segmentation with an explicit define.
 This mainly means two things:
  - All code can now test and check the max segmentation which will prove
useful for tests and open the way for many code optimizations.
  - The minimum segment length and the maximum buffer len can now be
 decorrelated.
This means that pool with very small footprints can be allocated for
 small packets,
 while pool for jumbo frame will still work as long as seg_len *
 ODP_CONFIG_PACKET_MAX_SEG >= packet_len

  Signed-off-by: Nicolas Morey-Chaisemartin nmo...@kalray.eu
 ---
  include/odp/api/config.h | 10 +-
  platform/linux-generic/include/odp_buffer_internal.h |  9 +++--
  platform/linux-generic/odp_pool.c|  4 ++--
  test/validation/packet/packet.c  |  3 ++-
  4 files changed, 16 insertions(+), 10 deletions(-)

 diff --git a/include/odp/api/config.h b/include/odp/api/config.h
 index b5c8fdd..1f44db6 100644
 --- a/include/odp/api/config.h
 +++ b/include/odp/api/config.h
  @@ -108,6 +108,13 @@ extern "C" {
  #define ODP_CONFIG_PACKET_SEG_LEN_MAX (64*1024)

  /**
 + * Maximum number of segments in a packet
 + *
 + * This defines the maximum number of segment buffers in a packet
 + */
 +#define ODP_CONFIG_PACKET_MAX_SEG 6


  What is the use-case of the above define? Does it mean that the packet
  should not be stored in a pool if the max number of segments is reached?
 If this is something used in the linux-generic we can define it in the
 internal header file.

  The reason is that the #defines in config.h file has to be defined by
 all the platforms.

  Regards,
 Bala

This may be a little too linux-generic oriented, I guess. What I'm
 looking for is a clean way to handle segment length vs packet length in
 pools.


The optimisations specific to linux-generic should be in an internal header
and not in the config file, as any change in the config file will have to be
handled by all the platforms.

I was trying to kill two birds with one stone in this patch:
 - Be able to disable segmentation completely and add fast compile time in
 the code to avoid segment computations
 - Fix packet validation test (and maybe enhance my proposal for
 pktio/segmentation) which rely heavily on the number of supported segment.

 For testing, the main issue I guess is that there is no way to know the
 actual segment length and length used by the pool. We could go to the
 internals but that would make the tests platform specific.
 Something like odp_pool_get_seg_len() and odp_pool_get_len()  could be
 quite useful for building tests but not very interesting for end users...


IMO the testing for segmentation should be written in such a way that the
validation suite does not fail if the implementation has handled the given
requirement without creating segments, since creating segments is an
implementation optimisation, not a requirement.

The validation suite should try to allocate a larger packet from a pool
with a small segment size; it can only expect that the implementation has
stored it as segments if the packet is actually segmented. If it is, the
segment tests should be run; otherwise no error should be thrown, since by
not creating segments the implementation has not violated any ODP
requirement.

Regards,
Bala


 I'd still like to see some easy way to disable segmentation so user code
 can check for that and remove complex mapping, memcopying to/from packet
 and iterating on segments.




Re: [lng-odp] [API-NEXT RFC] api: packet: add define for max segmentation

2015-07-20 Thread Bala Manoharan
Hi,

Few comments inline

On 20 July 2015 at 22:38, Nicolas Morey-Chaisemartin nmo...@kalray.eu
wrote:

 Replace current segmentation with an explicit define.
 This mainly means two things:
  - All code can now test and check the max segmentation which will prove
useful for tests and open the way for many code optimizations.
  - The minimum segment length and the maximum buffer len can now be
 decorrelated.
This means that pool with very small footprints can be allocated for
 small packets,
 while pool for jumbo frame will still work as long as seg_len *
 ODP_CONFIG_PACKET_MAX_SEG >= packet_len

 Signed-off-by: Nicolas Morey-Chaisemartin nmo...@kalray.eu
 ---
  include/odp/api/config.h | 10 +-
  platform/linux-generic/include/odp_buffer_internal.h |  9 +++--
  platform/linux-generic/odp_pool.c|  4 ++--
  test/validation/packet/packet.c  |  3 ++-
  4 files changed, 16 insertions(+), 10 deletions(-)

 diff --git a/include/odp/api/config.h b/include/odp/api/config.h
 index b5c8fdd..1f44db6 100644
 --- a/include/odp/api/config.h
 +++ b/include/odp/api/config.h
  @@ -108,6 +108,13 @@ extern "C" {
  #define ODP_CONFIG_PACKET_SEG_LEN_MAX (64*1024)

  /**
 + * Maximum number of segments in a packet
 + *
 + * This defines the maximum number of segment buffers in a packet
 + */
 +#define ODP_CONFIG_PACKET_MAX_SEG 6


What is the use-case of the above define? Does it mean that the packet
should not be stored in a pool if the max number of segments is reached?
If this is something used in the linux-generic we can define it in the
internal header file.

The reason is that the #defines in config.h file has to be defined by all
the platforms.

Regards,
Bala

+
 +/**
   * Maximum packet buffer length
   *
   * This defines the maximum number of bytes that can be stored into a
 packet
 @@ -119,7 +126,8 @@ extern C {
   * - The value MUST be an integral number of segments
   * - The value SHOULD be large enough to accommodate jumbo packets (9K)
   */
 -#define ODP_CONFIG_PACKET_BUF_LEN_MAX (ODP_CONFIG_PACKET_SEG_LEN_MIN*6)
 +#define ODP_CONFIG_PACKET_BUF_LEN_MAX (ODP_CONFIG_PACKET_SEG_LEN_MIN * \
 +  ODP_CONFIG_PACKET_MAX_SEG)

  /** Maximum number of shared memory blocks.
   *
 diff --git a/platform/linux-generic/include/odp_buffer_internal.h
 b/platform/linux-generic/include/odp_buffer_internal.h
 index ae799dd..5d1199a 100644
 --- a/platform/linux-generic/include/odp_buffer_internal.h
 +++ b/platform/linux-generic/include/odp_buffer_internal.h
 @@ -57,14 +57,11 @@ _ODP_STATIC_ASSERT((ODP_CONFIG_PACKET_BUF_LEN_MAX %
ODP_CONFIG_PACKET_SEG_LEN_MIN) == 0,
   "Packet max size must be a multiple of segment size");

 -#define ODP_BUFFER_MAX_SEG \
 -   (ODP_CONFIG_PACKET_BUF_LEN_MAX / ODP_CONFIG_PACKET_SEG_LEN_MIN)
 -
  /* We can optimize storage of small raw buffers within metadata area */
  -#define ODP_MAX_INLINE_BUF ((sizeof(void *)) * (ODP_BUFFER_MAX_SEG - 1))
  +#define ODP_MAX_INLINE_BUF ((sizeof(void *)) * (ODP_CONFIG_PACKET_MAX_SEG - 1))

  #define ODP_BUFFER_POOL_BITS   ODP_BITSIZE(ODP_CONFIG_POOLS)
 -#define ODP_BUFFER_SEG_BITSODP_BITSIZE(ODP_BUFFER_MAX_SEG)
 +#define ODP_BUFFER_SEG_BITSODP_BITSIZE(ODP_CONFIG_PACKET_MAX_SEG)
  #define ODP_BUFFER_INDEX_BITS  (32 - ODP_BUFFER_POOL_BITS -
 ODP_BUFFER_SEG_BITS)
  #define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS +
 ODP_BUFFER_INDEX_BITS)
   #define ODP_BUFFER_MAX_POOLS   (1 << ODP_BUFFER_POOL_BITS)
 @@ -130,7 +127,7 @@ typedef struct odp_buffer_hdr_t {
 uint32_t uarea_size; /* size of user area */
 uint32_t segcount;   /* segment count */
 uint32_t segsize;/* segment size */
  -   void*addr[ODP_BUFFER_MAX_SEG]; /* block addrs */
  +   void*addr[ODP_CONFIG_PACKET_MAX_SEG]; /* block addrs */
  } odp_buffer_hdr_t;

  /** @internal Compile time assert that the
 diff --git a/platform/linux-generic/odp_pool.c
 b/platform/linux-generic/odp_pool.c
 index 0b8921c..ff2a89c 100644
 --- a/platform/linux-generic/odp_pool.c
 +++ b/platform/linux-generic/odp_pool.c
 @@ -217,7 +217,7 @@ odp_pool_t odp_pool_create(const char *name,
 odp_pool_param_t *params)
  ODP_ALIGN_ROUNDUP(params->pkt.len, seg_len);

 /* Reject create if pkt.len needs too many segments */
  -   if (blk_size / seg_len > ODP_BUFFER_MAX_SEG)
  +   if (blk_size / seg_len > ODP_CONFIG_PACKET_MAX_SEG)
 return ODP_POOL_INVALID;

 p_udata_size = params-pkt.uarea_size;
 @@ -481,7 +481,7 @@ odp_buffer_t buffer_alloc(odp_pool_t pool_hdl, size_t
 size)
 /* Reject oversized allocation requests */
  if ((pool->s.flags.unsegmented && totsize > pool->s.seg_size) ||
  (!pool->s.flags.unsegmented &&
 -totsize >

Re: [lng-odp] [PATCH] api: pool: add headroom init parameter to odp_pool_param_t

2015-07-20 Thread Bala Manoharan
Hi Genis,

The headroom per CoS API had been previously discussed in ODP and I will
post a patch in API-NEXT to include this API for further discussion and
approval.

Regards,
Bala

On 20 July 2015 at 22:53, Genis Riera gri...@starflownetworks.com wrote:

 Hi Bala,

  Thank you again for your explanations. In the near future I will try to
  use these APIs to change my current solution. However, returning to the
  previous point, will the work on defining headroom per CoS go on, or does
  it not matter anymore? I still think it could be a good idea to take into
  account for future new APIs, as you said before.

 Regards,

 Genís Riera Pérez.

 Genís Riera Pérez
 Software Engineer at StarFlow Networks
 Edifici K2M, S103 c/ Jordi Girona 31
 08034 Barcelona

 E-mail: gri...@starflownetworks.com

 On Mon, Jul 20, 2015 at 7:03 PM, Bala Manoharan bala.manoha...@linaro.org
  wrote:


 Hi Genis,

 On 20 July 2015 at 22:18, Genis Riera gri...@starflownetworks.com
 wrote:

 Hi Bala,

  I've been reflecting on this and I agree with you: it seems to make more
  sense to define a headroom size per CoS as a generic concept. However, my
  use case differs because it's simpler. In my use case, so far, I only
  need to define the same headroom size for all packets stored in a packet
  pool. For this reason I proposed to define it at packet pool creation
  level. However, I have some doubts regarding using this feature to solve
  my use case. I wondered whether, using the following ODP APIs:

  - uint32_t   uarea_size // packet pool parameter from odp_pool_param_t
  - void *odp_packet_user_area(odp_packet_t pkt);
  - uint32_t odp_packet_user_area_size(odp_packet_t pkt);

  I would be able to store meta-data for each received packet in the
  packet pool memory I initialize. When I create a packet pool and
  define uarea_size, does it mean that each packet received on this pool has
  a specific user memory area to store per-packet meta-data? If this is
  correct, my use case is already solved, but regardless, I agree with
  you to add a headroom per CoS.


 Yes. uarea_size will be the user area associated with every packet
 allocated from this pool.
 This region could be used to store meta-data associated with each packet.
  e.g. the application might be interested in storing a per-packet context to
  hold some packet-processing information, and this context could be stored in
  the user area (the user area can hold a pointer if a bigger context needs
  to be stored).

  Headroom_size will denote the amount of space available before the start
  of the packet data. If the application wants to add an additional header or
  increase the size of the L2/L3 protocol header, there is no need to copy
  the entire packet: it can simply grow into the headroom and modify the
  L2/L3 protocol header, which is more performance efficient.

 Hope this helps

 Regards,
 Bala


 Best,

 Genís Riera Pérez.

 Genís Riera Pérez
 Software Engineer at StarFlow Networks
 Edifici K2M, S103 c/ Jordi Girona 31
 08034 Barcelona

 E-mail: gri...@starflownetworks.com

 On Mon, Jul 20, 2015 at 3:00 PM, Bala Manoharan 
 bala.manoha...@linaro.org wrote:

 Hi Genis,

 I would like you to validate whether the options of adding headroom per
 CoS work for you.

 IMO, adding headroom per CoS makes more sense since CoS defines a
 specific flow and the application might be interested to modify headroom
 per flow. Since with your current proposal the same is possible but that
 would mean you need to create different pools per different types of flows.

 Regards,
 Bala

 On 16 July 2015 at 00:20, Ivan Khoronzhuk ivan.khoronz...@linaro.org
 wrote:

 Nicolas,

 On 15.07.15 20:49, Nicolas Morey Chaisemartin wrote:

 You don t need to add the lng-odp part. The mailing list doed it on
 its
 own when it s not in the title already


 You cannot be sure and rely upon it. I'm often add someone else in CC
 and in this case the letter doesn't pass through the list server.
 It's good when the letter is seen by everyone with the same subject.




 Envoyé depuis un mobile Samsung.


  Message d'origine 
 De : Ivan Khoronzhuk
 Date :15/07/2015 18:34 (GMT+01:00)
 À : Bill Fischofer , Genis Riera
 Cc : ODP mailing list
 Objet : Re: [lng-odp] [PATCH] api: pool: add headroom init parameter
 to
 odp_pool_param_t



 On 15.07.15 19:20, Bill Fischofer wrote:
   Any proposed API changes need to be tagged API-NEXT.  Proper patch
   procedure is the following:
  
   git clone http://git.linaro.org/lng/odp.git myodp
   cd myodp
   git checkout -b api-next origin/api-next
   ...Make your changes and commits locally
   git format-patch origin/api-next --subject-prefix=API-NEXT PATCH

 --subject-prefix=lng-odp]\ [API-NEXT PATCH
 Correct me if I'm wrong

   // Make sure your patches are checkpatch clean:
   ./scripts/checkpatch *.patch
   git send-email --to=lng-odp@lists.linaro.org
   mailto:lng-odp@lists.linaro.org *.patch
  

 You can simplify by adding hook to:
 .git

Re: [lng-odp] [PATCH] api: pool: add headroom init parameter to odp_pool_param_t

2015-07-20 Thread Bala Manoharan
Hi Genis,

On 20 July 2015 at 22:18, Genis Riera gri...@starflownetworks.com wrote:

 Hi Bala,

 I've been reflecting on this and I agree with you: it seems to make more
 sense to define a headroom size per CoS as a generic concept. However, my
 use case differs because it's simpler. In my use case, so far, I only
 need to define the same headroom size for all packets stored in a packet
 pool. For this reason I proposed defining it at packet pool
 creation level. However, I have some doubts about using this feature to
 solve my use case. I wondered if, using the following ODP APIs:

  - uint32_t   uarea_size // packet pool parameter from odp_pool_param_t
  - void *odp_packet_user_area(odp_packet_t pkt);
  - uint32_t odp_packet_user_area_size(odp_packet_t pkt);

 I would be able to store meta-data for each received packet in the packet
 pool memory I initialize. When I create a packet pool and define
 uarea_size, does it mean that each packet received from this pool has a
 specific user memory area to store per-packet meta-data? If this is
 correct, my use case is already solved, but regardless, I agree with
 you about adding a headroom per CoS.


Yes. uarea_size will be the user area associated with every packet
allocated from this pool.
This region can be used to store meta-data associated with each packet;
e.g. the application might want to store a per-packet context to
hold some packet-processing information, and this context can be stored in
the user area (the user area can also hold a pointer if a bigger context
needs to be stored).
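To make the user-area idea concrete, here is a small, self-contained C model of what the pool-level uarea_size parameter gives you. The names (toy_packet, toy_packet_user_area, pkt_ctx) are illustrative stand-ins for odp_packet_t, odp_packet_user_area() and an application-defined context, not real ODP code:

```c
/* Toy model of the per-packet user area configured via the pool's
 * uarea_size parameter. toy_packet and toy_packet_user_area() are
 * illustrative stand-ins for odp_packet_t / odp_packet_user_area(). */
#include <assert.h>
#include <stdint.h>

#define UAREA_SIZE 16 /* fixed at "pool creation" time */

/* A context the application might keep per packet (hypothetical). */
struct pkt_ctx {
	uint32_t flow_id;
	uint32_t rx_tstamp;
};

struct toy_packet {
	uint8_t uarea[UAREA_SIZE]; /* private scratch area, one per packet */
	uint8_t data[128];         /* packet bytes */
};

/* Mirrors odp_packet_user_area(): return the packet's user area. */
static void *toy_packet_user_area(struct toy_packet *pkt)
{
	return pkt->uarea;
}

/* Mirrors odp_packet_user_area_size(). */
static uint32_t toy_packet_user_area_size(void)
{
	return UAREA_SIZE;
}
```

The key property is that the context travels with the packet handle, so whichever core later dequeues the packet can recover its processing state.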

Headroom denotes the amount of space available before the start of
the packet data. If the application wants to add an additional header or
increase the size of the L2/L3 protocol header, there is no need to
copy the entire packet: it can simply push into the headroom and modify the
L2/L3 protocol header in place, which is more performance efficient.
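The headroom arithmetic can be sketched in a self-contained way. toy_pkt and toy_push_head() below are stand-ins for the real odp_packet_t and odp_packet_push_head(), just to show why prepending a header costs no copy:

```c
/* Toy model of packet headroom: the pool reserves `headroom` bytes in
 * front of the payload so a header can be prepended without copying.
 * toy_push_head() is an illustrative stand-in for odp_packet_push_head(). */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BUF_SIZE 256

struct toy_pkt {
	uint8_t buf[BUF_SIZE];
	uint32_t headroom; /* free bytes before the current packet start */
	uint32_t len;      /* current packet length */
};

/* Extend the packet toward the front; returns the new start of data,
 * or NULL when the headroom is exhausted. */
static uint8_t *toy_push_head(struct toy_pkt *pkt, uint32_t n)
{
	if (n > pkt->headroom)
		return NULL;
	pkt->headroom -= n;
	pkt->len += n;
	return pkt->buf + pkt->headroom;
}
```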

Hope this helps

Regards,
Bala


 Best,

 Genís Riera Pérez.

 Genís Riera Pérez
 Software Engineer at StarFlow Networks
 Edifici K2M, S103 c/ Jordi Girona 31
 08034 Barcelona

 E-mail: gri...@starflownetworks.com

 On Mon, Jul 20, 2015 at 3:00 PM, Bala Manoharan bala.manoha...@linaro.org
  wrote:

 Hi Genis,

 I would like you to validate whether the options of adding headroom per
 CoS work for you.

 IMO, adding headroom per CoS makes more sense since CoS defines a
 specific flow and the application might be interested to modify headroom
 per flow. Since with your current proposal the same is possible but that
 would mean you need to create different pools per different types of flows.

 Regards,
 Bala

 On 16 July 2015 at 00:20, Ivan Khoronzhuk ivan.khoronz...@linaro.org
 wrote:

 Nicolas,

 On 15.07.15 20:49, Nicolas Morey Chaisemartin wrote:

 You don t need to add the lng-odp part. The mailing list doed it on its
 own when it s not in the title already


 You cannot be sure and rely upon it. I'm often add someone else in CC
 and in this case the letter doesn't pass through the list server.
 It's good when the letter is seen by everyone with the same subject.




 Envoyé depuis un mobile Samsung.


  Message d'origine 
 De : Ivan Khoronzhuk
 Date :15/07/2015 18:34 (GMT+01:00)
 À : Bill Fischofer , Genis Riera
 Cc : ODP mailing list
 Objet : Re: [lng-odp] [PATCH] api: pool: add headroom init parameter to
 odp_pool_param_t



 On 15.07.15 19:20, Bill Fischofer wrote:
   Any proposed API changes need to be tagged API-NEXT.  Proper patch
   procedure is the following:
  
   git clone http://git.linaro.org/lng/odp.git myodp
   cd myodp
   git checkout -b api-next origin/api-next
   ...Make your changes and commits locally
   git format-patch origin/api-next --subject-prefix=API-NEXT PATCH

 --subject-prefix=lng-odp]\ [API-NEXT PATCH
 Correct me if I'm wrong

   // Make sure your patches are checkpatch clean:
   ./scripts/checkpatch *.patch
   git send-email --to=lng-odp@lists.linaro.org
   mailto:lng-odp@lists.linaro.org *.patch
  

 You can simplify by adding hook to:
 .git/hooks/post-commit

 git show --format=email | ./scripts/checkpatch.pl --strict --mailback
 --show-types -

 Just to see issues when adding a commit.

  
  
   On Wed, Jul 15, 2015 at 11:11 AM, Genis Riera
   gri...@starflownetworks.com mailto:gri...@starflownetworks.com
 wrote:
  
   Ivan,
  
   If you have this compilation issues I can send again the patch
   without this check, assuming always positive values. Is it right
 for
   you?
  
   Genís Riera Pérez
   Software Engineer at StarFlow Networks
   Edifici K2M, S103 c/ Jordi Girona 31
   08034 Barcelona
  
   E-mail: gri...@starflownetworks.com
 mailto:gri...@starflownetworks.com
  
   On Wed, Jul 15, 2015 at 6:04 PM, Ivan Khoronzhuk
   ivan.khoronz...@linaro.org mailto:ivan.khoronz...@linaro.org
 wrote:
  
   Genis
  
   On 15.07.15 19:00, Genis Riera wrote:
  
   Hi, Ivan

Re: [lng-odp] Receive only packets that match configured PMRs

2015-07-17 Thread Bala Manoharan
Hi Stuart,

Please raise a bug for the POOL_DROP implementation. I will implement the drop policy.

Regards,
Bala

On 17 July 2015 at 19:40, Bill Fischofer bill.fischo...@linaro.org wrote:

 odp_cos_set_drop() should certainly be implemented. If it's not, that
 should be reported as a bug against both the classifier and the test suite.

 On Fri, Jul 17, 2015 at 8:54 AM, Stuart Haslam stuart.has...@linaro.org
 wrote:

 Is it possible to configure the pktio and classifier such that the
 application receives *only* packets matching a defined set of PMRs?

 I tried something like this;

 pktio = odp_pktio_create(..);
 cos = odp_cos_create(MyCoS);
 q = odp_queue_create(MyQ, ODP_QUEUE_TYPE_SCHED, qparam);
 odp_cos_set_queue(cos, q);
 odp_pmr_set_t pmr_set;
 odp_pmr_match_set_create(4, pmrs, pmr_set);
 odp_pktio_pmr_match_set_cos(pmr_set, pktio, cos);

 Expecting that packets matching the PMR set would be delivered to q and
 non-matching packets would be dropped as there's no queue to deliver
 them to. With the linux-generic implementations of pktio and the
 classifier what actually happens is nothing is received at all, because
 in order to receive anything on a pktio you must configure a default inq,
 by calling odp_pktio_inq_setdef(). I think this is a bug in the
 implementation (the setdef call installs the hook into the scheduler)
 and the above sequence should really work.

 Also, in classification.h we have;

 /**
  * Class-of-service packet drop policies
  */
 typedef enum odp_cos_drop {
 	ODP_COS_DROP_POOL,  /**< Follow buffer pool drop policy */
 	ODP_COS_DROP_NEVER, /**< Never drop, ignoring buffer pool policy */
 } odp_drop_e;

 And odp_cos_set_drop(), but it's not clear what this is intended to do
 and it's not implemented in linux-generic.

 --
 Stuart.
 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp



 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp


___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] Receive only packets that match configured PMRs

2015-07-17 Thread Bala Manoharan
On 17 July 2015 at 20:56, Stuart Haslam stuart.has...@linaro.org wrote:

 On Fri, Jul 17, 2015 at 07:43:08PM +0530, Bala Manoharan wrote:
  Hi Stuart,
 
  Pls raise a bug for POOL_DROP implementation. I will implement drop
 policy.
 
  Regards,
  Bala
 

 Will do.

 I assume then that the intention is that you set a default CoS for the
 pktio, then set the drop policy for the CoS so that packets are dropped,
 which seems reasonable.


As per the case you explained above: if the requirement is to drop all
packets that do not match any configured PMR, then the default CoS should
be configured with the POOL_DROP policy, and in that case
there is no need for a default queue.


 Having to set a default queue is still a problem though, should this be
 considered a linux-generic bug?


Yes, this is a linux-generic bug. The linux-generic implementation has
created a dependency between the default queue and the pktio interface. IMO
the configuration you described is logical and should be supported.
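The semantics being agreed here can be stated as a tiny model (this is a sketch of the intended behaviour, not linux-generic code): a packet matching a PMR goes to that CoS's queue; a non-matching packet falls back to the default CoS; and a default CoS with no queue attached and the POOL drop policy drops the packet.

```c
/* Minimal model of the dispatch semantics discussed above. The enum and
 * struct names are illustrative, not the real ODP classification types. */
#include <assert.h>
#include <stdbool.h>

enum drop_policy { DROP_POOL, DROP_NEVER };

struct cos {
	int queue;             /* -1 means no queue attached */
	enum drop_policy drop;
};

/* Returns the destination queue, or -1 for "packet dropped". */
static int classify(bool pmr_matched, const struct cos *match_cos,
		    const struct cos *def_cos)
{
	const struct cos *cos = pmr_matched ? match_cos : def_cos;

	if (cos->queue < 0 && cos->drop == DROP_POOL)
		return -1; /* no queue, pool policy says drop */
	return cos->queue;
}
```

With this model, Stuart's configuration (a PMR set plus a drop-everything default CoS) needs no default input queue at all.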

Regards,
Bala


 --
 Stuart.

  On 17 July 2015 at 19:40, Bill Fischofer bill.fischo...@linaro.org
 wrote:
 
   odp_cos_set_drop() should certainly be implemented. If it's not that
   should be reported as a bug against both the classifier and the test
 suite.
  
   On Fri, Jul 17, 2015 at 8:54 AM, Stuart Haslam 
 stuart.has...@linaro.org
   wrote:
  
   Is it possible to configure the pktio and classifier such that the
   application receives *only* packets matching a defined set of PMRs?
  
   I tried something like this;
  
   pktio = odp_pktio_create(..);
   cos = odp_cos_create(MyCoS);
   q = odp_queue_create(MyQ, ODP_QUEUE_TYPE_SCHED, qparam);
   odp_cos_set_queue(cos, q);
   odp_pmr_set_t pmr_set;
   odp_pmr_match_set_create(4, pmrs, pmr_set);
   odp_pktio_pmr_match_set_cos(pmr_set, pktio, cos);
  
   Expecting that packets matching the PMR set would be delivered to q
 and
   non-matching packets would be dropped as there's no queue to deliver
   them to. With the linux-generic implementations of pktio and the
   classifier what actually happens is nothing is received at all,
 because
   in order to receive anything on a pktio you must configure a default
 inq,
   by calling odp_pktio_inq_setdef(). I think this is a bug in the
   implementation (the setdef call installs the hook into the scheduler)
   and the above sequence should really work.
  
   Also, in classification.h we have;
  
   /**
* Class-of-service packet drop policies
*/
    typedef enum odp_cos_drop {
    	ODP_COS_DROP_POOL,  /**< Follow buffer pool drop policy */
    	ODP_COS_DROP_NEVER, /**< Never drop, ignoring buffer pool policy */
    } odp_drop_e;
  
   And odp_cos_set_drop(), but it's not clear what this is intended to do
   and it's not implemented in linux-generic.
  
   --
   Stuart.

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [Patch] validation: classification: fix ODP_PMR_IPPROTO capability check

2015-07-15 Thread Bala Manoharan
Reviewed-by: Balasubramanian Manoharan bala.manoha...@linaro.org

P.S.: Maybe the patch description needs to change.

On 14 July 2015 at 18:36, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 I suppose, the intention was to check only ODP_PMR_IPPROTO capability.

 Signed-off-by: Ivan Khoronzhuk ivan.khoronz...@linaro.org
 ---
  test/validation/classification/odp_classification_tests.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/test/validation/classification/odp_classification_tests.c
 b/test/validation/classification/odp_classification_tests.c
 index 502105e..6e8d152 100644
 --- a/test/validation/classification/odp_classification_tests.c
 +++ b/test/validation/classification/odp_classification_tests.c
 @@ -807,7 +807,7 @@ static void classification_test_pmr_terms_cap(void)
 	unsigned long long retval;
 	/* Need to check different values for different platforms */
 	retval = odp_pmr_terms_cap();
 -	CU_ASSERT(retval | (1 << ODP_PMR_IPPROTO));
 +	CU_ASSERT(retval & (1 << ODP_PMR_IPPROTO));
  }

  static void classification_test_pktio_configure(void)
 --
 1.9.1
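The one-character fix matters because OR-ing the returned capability mask with a nonzero constant is always nonzero, so the old assertion could never fail; AND actually tests the bit. A sketch (the bit position below is illustrative, not the real enum value):

```c
/* Why `retval | (1 << bit)` is a broken capability check: it is nonzero
 * for every possible retval, while `retval & (1 << bit)` really tests
 * whether the capability bit is set. Bit position is illustrative. */
#include <assert.h>

#define PMR_IPPROTO_BIT 3

static int buggy_check(unsigned long long caps)
{
	return (caps | (1ULL << PMR_IPPROTO_BIT)) != 0; /* always true */
}

static int fixed_check(unsigned long long caps)
{
	return (caps & (1ULL << PMR_IPPROTO_BIT)) != 0; /* tests the bit */
}
```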

 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [RFC] [Patch] validation: classification: improve pmr set check test

2015-07-15 Thread Bala Manoharan
Hi Ivan,

Comments Inline...

On 15 July 2015 at 02:48, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 This simple improvement is intended to expose possible
 hidden issues where a packet can be lost (or sent to the default CoS)
 when it matches one of the rules of the first PMR match set
 but was intended for the second PMR match set. To check this correctly, a
 new destination CoS should be used, but for simplicity I used only one.
 This is not a formatted patch and is only for demonstration.

 Signed-off-by: Ivan Khoronzhuk ivan.khoronz...@linaro.org
 ---
  .../classification/odp_classification_tests.c  | 42
 ++
  1 file changed, 42 insertions(+)

 diff --git a/test/validation/classification/odp_classification_tests.c
 b/test/validation/classification/odp_classification_tests.c
 index 6e8d152..b5daf32 100644
 --- a/test/validation/classification/odp_classification_tests.c
 +++ b/test/validation/classification/odp_classification_tests.c
 @@ -41,7 +41,9 @@
  #define TEST_PMR_SET   1
  #define CLS_PMR_SET    5
  #define CLS_PMR_SET_SADDR  "10.0.0.6/32"
 +#define CLS_PMR_SET_DADDR  "10.0.0.7/32"
  #define CLS_PMR_SET_SPORT  5000
 +#define CLS_PMR_SET_SPORT2 5001

  /* Config values for CoS L2 Priority */
  #define TEST_L2_QOS1
 @@ -723,6 +725,7 @@ void configure_pktio_pmr_match_set_cos(void)
 odp_queue_param_t qparam;
 char cosname[ODP_COS_NAME_LEN];
 char queuename[ODP_QUEUE_NAME_LEN];
 +   static odp_pmr_set_t pmr_set2;
 uint32_t addr = 0;
 uint32_t mask;

 @@ -743,6 +746,22 @@ void configure_pktio_pmr_match_set_cos(void)
 	retval = odp_pmr_match_set_create(num_terms, pmr_terms, &pmr_set);
 	CU_ASSERT(retval > 0);

 +	parse_ipv4_string(CLS_PMR_SET_DADDR, &addr, &mask);
 +	pmr_terms[0].term = ODP_PMR_DIP_ADDR;
 +	pmr_terms[0].val = &addr;
 +	pmr_terms[0].mask = &mask;
 +	pmr_terms[0].val_sz = sizeof(addr);
 +
 +	val = CLS_PMR_SET_SPORT2;
 +	maskport = 0xffff;
 +	pmr_terms[1].term = ODP_PMR_UDP_SPORT;
 +	pmr_terms[1].val = &val;
 +	pmr_terms[1].mask = &maskport;
 +	pmr_terms[1].val_sz = sizeof(val);
 +
 +	retval = odp_pmr_match_set_create(num_terms, pmr_terms, &pmr_set2);
 +	CU_ASSERT(retval > 0);
 +
 	sprintf(cosname, "cos_pmr_set");
 	cos_list[CLS_PMR_SET] = odp_cos_create(cosname);
 	CU_ASSERT_FATAL(cos_list[CLS_PMR_SET] != ODP_COS_INVALID);
 @@ -764,6 +783,9 @@ void configure_pktio_pmr_match_set_cos(void)
 retval = odp_pktio_pmr_match_set_cos(pmr_set, pktio_loop,
  cos_list[CLS_PMR_SET]);
 CU_ASSERT(retval == 0);
 +   retval = odp_pktio_pmr_match_set_cos(pmr_set2, pktio_loop,
 +cos_list[CLS_PMR_SET]);



Since you are using a different pmr_match_set, it is better to create
another CoS and attach it to this rule (pmr_set2) rather than
attaching the same CoS (cos_list[CLS_PMR_SET]).

Because the validation suite constructs packets with different parameters,
sends them through the loopback interface, and then verifies the queue on
which each packet arrives after classification, it is better to have a
different CoS for different PMR rules.

IMO, this can be added as a separate test case rather than modifying the
same test case.

Regards,
Bala


 +   CU_ASSERT(retval == 0);
  }

  void test_pktio_pmr_match_set_cos(void)
 @@ -781,6 +803,8 @@ void test_pktio_pmr_match_set_cos(void)
 	ip = (odph_ipv4hdr_t *)odp_packet_l3_ptr(pkt, NULL);
 	parse_ipv4_string(CLS_PMR_SET_SADDR, &addr, &mask);
 	ip->src_addr = odp_cpu_to_be_32(addr);
 +	parse_ipv4_string(CLS_PMR_SET_DADDR, &addr, &mask);
 +	ip->dst_addr = odp_cpu_to_be_32(addr);
 	ip->chksum = 0;
 	ip->chksum = odp_cpu_to_be_16(odph_ipv4_csum_update(pkt));

 @@ -791,6 +815,24 @@ void test_pktio_pmr_match_set_cos(void)
 CU_ASSERT(queue == queue_list[CLS_PMR_SET]);
 CU_ASSERT(seq == cls_pkt_get_seq(pkt));
 odp_packet_free(pkt);
 +
 +	pkt = create_packet(false);
 +	seq = cls_pkt_get_seq(pkt);
 +	ip = (odph_ipv4hdr_t *)odp_packet_l3_ptr(pkt, NULL);
 +	parse_ipv4_string(CLS_PMR_SET_SADDR, &addr, &mask);
 +	ip->src_addr = odp_cpu_to_be_32(addr);
 +	parse_ipv4_string(CLS_PMR_SET_DADDR, &addr, &mask);
 +	ip->dst_addr = odp_cpu_to_be_32(addr);
 +	ip->chksum = 0;
 +	ip->chksum = odp_cpu_to_be_16(odph_ipv4_csum_update(pkt));
 +
 +	udp = (odph_udphdr_t *)odp_packet_l4_ptr(pkt, NULL);
 +	udp->src_port = odp_cpu_to_be_16(CLS_PMR_SET_SPORT2);
 +	enqueue_loop_interface(pkt);
 +	pkt = receive_packet(&queue, ODP_TIME_SEC);
 +	CU_ASSERT(queue == queue_list[CLS_PMR_SET]);
 +	CU_ASSERT(seq == cls_pkt_get_seq(pkt));
 +	odp_packet_free(pkt);
  }

  static void classification_test_pmr_terms_avail(void)
 --
 1.9.1



Re: [lng-odp] [RFC] [Patch] validation: classification: improve pmr set check test

2015-07-15 Thread Bala Manoharan
On 15 July 2015 at 14:49, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 Bala,

 On 15.07.15 11:31, Bala Manoharan wrote:

 Hi Ivan,

 Comments Inline...

 On 15 July 2015 at 02:48, Ivan Khoronzhuk ivan.khoronz...@linaro.org
 mailto:ivan.khoronz...@linaro.org wrote:

 It's simple improvement is intended to open eyes on possible
 hidden issues when a packet can be lost (or sent to def CoS)
 while matching one of the rules of first PMR match set, but
 intendent to second PMR match set. To correctly check, the
 new dst CoS should be used, but for simplicity I used only one.
 It's not formated patch and is only for demonstration.

 Signed-off-by: Ivan Khoronzhuk ivan.khoronz...@linaro.org
 mailto:ivan.khoronz...@linaro.org
 ---
   .../classification/odp_classification_tests.c  | 42
 ++
   1 file changed, 42 insertions(+)

 diff --git
 a/test/validation/classification/odp_classification_tests.c
 b/test/validation/classification/odp_classification_tests.c
 index 6e8d152..b5daf32 100644
 --- a/test/validation/classification/odp_classification_tests.c
 +++ b/test/validation/classification/odp_classification_tests.c
 @@ -41,7 +41,9 @@
   #define TEST_PMR_SET   1
   #define CLS_PMR_SET5
   #define CLS_PMR_SET_SADDR  10.0.0.6/32 http://10.0.0.6/32
 +#define CLS_PMR_SET_DADDR  10.0.0.7/32 http://10.0.0.7/32

   #define CLS_PMR_SET_SPORT  5000
 +#define CLS_PMR_SET_SPORT2 5001

   /* Config values for CoS L2 Priority */
   #define TEST_L2_QOS1
 @@ -723,6 +725,7 @@ void configure_pktio_pmr_match_set_cos(void)
  odp_queue_param_t qparam;
  char cosname[ODP_COS_NAME_LEN];
  char queuename[ODP_QUEUE_NAME_LEN];
 +   static odp_pmr_set_t pmr_set2;
  uint32_t addr = 0;
  uint32_t mask;

 @@ -743,6 +746,22 @@ void configure_pktio_pmr_match_set_cos(void)
  retval = odp_pmr_match_set_create(num_terms, pmr_terms,
 pmr_set);
  CU_ASSERT(retval  0);

  +   parse_ipv4_string(CLS_PMR_SET_DADDR, &addr, &mask);
  +   pmr_terms[0].term = ODP_PMR_DIP_ADDR;
  +   pmr_terms[0].val = &addr;
  +   pmr_terms[0].mask = &mask;
  +   pmr_terms[0].val_sz = sizeof(addr);
  +
  +   val = CLS_PMR_SET_SPORT2;
  +   maskport = 0xffff;
  +   pmr_terms[1].term = ODP_PMR_UDP_SPORT;
  +   pmr_terms[1].val = &val;
  +   pmr_terms[1].mask = &maskport;
  +   pmr_terms[1].val_sz = sizeof(val);
  +
  +   retval = odp_pmr_match_set_create(num_terms, pmr_terms, &pmr_set2);
  +   CU_ASSERT(retval > 0);
 +
  sprintf(cosname, cos_pmr_set);
  cos_list[CLS_PMR_SET] = odp_cos_create(cosname);
  CU_ASSERT_FATAL(cos_list[CLS_PMR_SET] != ODP_COS_INVALID)
 @@ -764,6 +783,9 @@ void configure_pktio_pmr_match_set_cos(void)
  retval = odp_pktio_pmr_match_set_cos(pmr_set, pktio_loop,
   cos_list[CLS_PMR_SET]);
  CU_ASSERT(retval == 0);
 +   retval = odp_pktio_pmr_match_set_cos(pmr_set2, pktio_loop,
 +cos_list[CLS_PMR_SET]);



 Since you are using a different pmr_match_set it is better to create
 another CoS and then attach it with this rule pmr_set2 rather than
 attaching the same CoS (cos_list[CLS_PMR_SET]).

 Coz the validation suite constructs a packet with different parameters,
 sends them through loop back interface and then verifies the queue in
 which the packet is arriving after classification. it is better to have
 different CoS for different PMR rules.


 I know, that's why I mentioned it in the commit message.


 IMO, this can be added as a separate Test Case rather than modifying on
 the same Test Case.


 It's only a proposal to add.
 You can add this as a separate test. I just want to show that the current
 test is not enough to exercise this primitive.


The current validation suite is just used to test the acceptance of the
different classification APIs.
Strengthening the validation test suite is a pending activity which I am
working on.


 Even more, I propose to add another test that does the same but
 with 3 PMRs in each PMR set, all of them on different layers.
 For instance, DMAC -> IPsrc -> UDP.


Yes. This can be done.


 It can allow us to see situations where the API doesn't work as expected.
 Some platforms, like mine, can have different PDSPs for different
 layers and match packets only in one direction, which can lead to
 interesting hidden things.


Regards,
Bala




 Regards,
 Bala

 +   CU_ASSERT(retval == 0);
   }

   void test_pktio_pmr_match_set_cos(void)
 @@ -781,6 +803,8 @@ void test_pktio_pmr_match_set_cos(void)
  ip = (odph_ipv4hdr_t *)odp_packet_l3_ptr(pkt, NULL

Re: [lng-odp] [PATCH] linux-generic: classification: add support for ODP_PMR_IPSEC_SPI

2015-07-15 Thread Bala Manoharan
Reviewed-by: Balasubramanian Manoharan bala.manoha...@linaro.org

On 14 July 2015 at 17:28, Stuart Haslam stuart.has...@linaro.org wrote:

 Signed-off-by: Stuart Haslam stuart.has...@linaro.org
 ---
  platform/linux-generic/include/odp_classification_inlines.h | 13
 -
  1 file changed, 12 insertions(+), 1 deletion(-)

 diff --git a/platform/linux-generic/include/odp_classification_inlines.h
 b/platform/linux-generic/include/odp_classification_inlines.h
 index 8d1e1c1..560104e 100644
 --- a/platform/linux-generic/include/odp_classification_inlines.h
 +++ b/platform/linux-generic/include/odp_classification_inlines.h
 @@ -189,7 +189,18 @@ static inline int verify_pmr_ipsec_spi(uint8_t *pkt_addr ODP_UNUSED,
 					   odp_packet_hdr_t *pkt_hdr ODP_UNUSED,
 					   pmr_term_value_t *term_value ODP_UNUSED)
  {
 -	ODP_UNIMPLEMENTED();
 +	uint32_t *spi;
 +
 +	if (!pkt_hdr->input_flags.ipsec)
 +		return 0;
 +
 +	spi = (uint32_t *)(pkt_addr + pkt_hdr->l4_offset);
 +	if (pkt_hdr->l4_protocol == ODPH_IPPROTO_AH)
 +		spi++;
 +
 +	if (term_value->val == (odp_be_to_cpu_32(*spi) & term_value->mask))
 +		return 1;
 +
  	return 0;
  }
  static inline int verify_pmr_ld_vni(uint8_t *pkt_addr ODP_UNUSED,
 --
 2.1.1
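The SPI extraction in the patch can be modeled in a self-contained way: for ESP the SPI is the first 32-bit word of the L4 header, while for AH it is the second word (after the next-header, payload-length and reserved fields), which is what the `spi++` skips over. A sketch with hand-rolled big-endian loading (protocol numbers written out so the example depends on no system headers):

```c
/* Model of the SPI lookup above: ESP carries the SPI in the first 32-bit
 * word after the IP header; AH carries it in the second word. Values on
 * the wire are big-endian. */
#include <assert.h>
#include <stdint.h>

#define IPPROTO_ESP_NUM 50
#define IPPROTO_AH_NUM  51

static uint32_t be32_load(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8)  | (uint32_t)p[3];
}

static uint32_t ipsec_spi(const uint8_t *l4, int l4_proto)
{
	if (l4_proto == IPPROTO_AH_NUM)
		l4 += 4; /* skip next-hdr, payload-len, reserved: the `spi++` */
	return be32_load(l4);
}
```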

 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [API-NEXT PATCH 1/3] api: classification: add ODP_PMR_OFFSET_ABS

2015-07-08 Thread Bala Manoharan
On 7 July 2015 at 18:34, Benoît Ganne bga...@kalray.eu wrote:

 In this case of ABS_OFFSET_L2, failing during creation is better,
 as the application can more easily handle the code for different
 implementations supporting different numbers of OFFSET term values,
 rather than failing when applying the context, in which case the
 application has to redo the entire context. In classification the
 responsibility is with the application to configure the rules in
 an unambiguous way, and context management is used to
 help the implementation better optimise and reuse any HW
 resources if possible. The application
 should be intelligent enough to configure for good performance
 across multiple implementations with minimal change, because returning
 an error while applying the context is good but not great: if each
 implementation returns an error for different
 configurations, the portability aspect of ODP is lost.


  Maybe we should add an API to get the max number of such rules an
 implementation support?
 Something like int odp_pmr_offset_max_get(void) or something.
 That being said, we do not have any mean to check the number of PMR
 an implementation support either, which in my point of view is the
 same problem...


  This value of maximum PMRs supported by an implementation is available
 as a configuration parameter in the config file.
 But I think I need to move the same to config.h it is currently in the
 internal header file.


 Ok, in that case I suppose it should be even simpler: we need to add a
 configuration parameter in config.h.


Yes. Adding it to config is what I was also trying to imply in my reply.




  For example, let's say an application wants to define a CoS based on
 ODP_PMR_DMAC and ODP_PMR_OFFSET where offset > ODP_PMR_DMAC. If I
 understand correctly you need the application to tell you that the
 ODP_PMR_OFFSET PMR must be apply after ODP_PMR_DMAC PMR. Am I correct?
 If so, we could requests that PMR must be inserted in the correct
 order by the application.


  Actually it is reverse ODP_PMR_OFFSET should be applied before DMAC
 irrespective of the size of the offset.



 Hmm ok. If I understand correctly, you need to start with those rules
 (because of the absolute offset), but you cannot mix it with subsequent
 rules because it will automatically advance the packet pointer possibly
 beyond the next one. In my example, you will have to implement the
 ODP_PMR_OFFSET PMR before the ODP_PMR_DMAC PMR (because of absolute
 offset), but as offset > ODP_PMR_DMAC, you can no longer match it when its
 time has come.


Let me try to clarify: I can match ODP_PMR_DMAC if it is placed after
ODP_PMR_OFFSET, but not if it is placed before ODP_PMR_OFFSET.


 I think we have several solution:
  a) restrict PMR combination so that it is either a combination of
 ODP_PMR_OFFSET PMR or standard PMR, but not both. In that case, you can
 sort them by offset, calculate relative offsets for each but the 1st and
 apply this. If I understood correctly this should work fine for you
  b) no specific restriction: in that case, if there are only
 ODP_PMR_OFFSET rules (no standard PMR), (a) applies, otherwise you need
 to apply OFFSET rules in software (those should be relatively cheap though,
 thanks to absolute offset). This is not much different of a HW not
 supporting eg. ODP_PMR_LD_VNI.


I would prefer that platforms do not perform a SW match when underlying
HW support is available; otherwise it would defeat the purpose of ODP. If
a platform does not have HW support, then a SW match is an option.

This issue could be easily solved if we describe the position of the
ODP_PMR_OFFSET term in relation to the other PMR terms.
All the other PMR terms have an inherent preference order, i.e. the
application knows that for better performance it has to check an L2 term
before L3 terms.

So the only requirement from my side is to define a preference order for the
ODP_PMR_OFFSET term, and since this term is always relative to the start of
L2, it would be better to add a description that this term should be placed
at the beginning of a PMR chain, i.e. the preference order is
ODP_PMR_OFFSET > ODP_PMR_L2_TERMS > PMR_L3_TERMS > PMR_L4_TERMS.
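Such a preference order could be enforced mechanically, e.g. by sorting a PMR chain by layer rank before installing it. The term names below are illustrative, not the real ODP enum:

```c
/* Sketch of the preference order proposed above: absolute-offset terms
 * first, then L2, L3 and L4 terms. Term names are illustrative. */
#include <assert.h>
#include <stdlib.h>

enum pmr_term { PMR_OFFSET, PMR_DMAC, PMR_SIP_ADDR, PMR_UDP_SPORT };

/* Lower rank = matched earlier in the chain. */
static int rank(enum pmr_term t)
{
	switch (t) {
	case PMR_OFFSET:   return 0; /* from the start of L2 */
	case PMR_DMAC:     return 1; /* L2 */
	case PMR_SIP_ADDR: return 2; /* L3 */
	default:           return 3; /* L4 */
	}
}

static int cmp_terms(const void *a, const void *b)
{
	return rank(*(const enum pmr_term *)a) -
	       rank(*(const enum pmr_term *)b);
}

static void order_pmr_chain(enum pmr_term *terms, size_t n)
{
	qsort(terms, n, sizeof(terms[0]), cmp_terms);
}
```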

Hope this clears up the confusion.

Regards,
Bala


 Is my understanding correct?

 I have an agenda conflict and won't be able to follow today call but
 Nicolas will.

 ben

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [API-NEXT PATCH 1/3] api: classification: add ODP_PMR_OFFSET_ABS

2015-07-05 Thread Bala Manoharan
On 3 July 2015 at 21:22, Benoît Ganne bga...@kalray.eu wrote:

 Hi Bala,

  This signature should work fine. Can we additionally add the pktio
 interface information also to this API so that the
 implementation could
 fail during creation itself if more than the supported numbers get
 attached to the pktio interface.


  I do not see why we should treat it differently from other rules.
 Eg. a platform which does not support eg. ODP_PMR_IPSEC_SPI should
 fail the same way. And it would defeat the whole discussion about
 classification rules contexts (Bill's patches).


 In this case of ABS_OFFSET_L2, failing during creation is better, as the
 application can more easily handle the code for different implementations
 supporting different numbers of OFFSET term values, rather than failing
 when applying the context, in which case the application has to redo the
 entire context.
 In classification the responsibility is with the application to
 configure the rules in an unambiguous way, and context management is
 used to help the implementation better optimise and reuse any
 HW resources if possible.
 The application should be intelligent enough to
 configure for good performance across multiple implementations with
 minimal change, because returning an error while applying the context is
 good but not great: if each implementation returns an error for
 different configurations, the portability aspect of ODP is lost.


 Maybe we should add an API to get the max number of such rules an
 implementation support?
 Something like int odp_pmr_offset_max_get(void) or something.
 That being said, we do not have any mean to check the number of PMR an
 implementation support either, which in my point of view is the same
 problem...


This value of the maximum number of PMRs supported by an implementation is
available as a configuration parameter in the config file.
But I think I need to move it to config.h; it is currently in an
internal header file.



  The issue here is to create this API in a way in which it avoid
 ambiguity for the application. I would like to create the API in such a
 way that it works efficiently in all the given platforms which will
 generally help ODP so that more platforms will be able to migrate
 easily. With the rest of the PMR terms there is an inherent preference
 order which will be the order in which the packet gets received in the
 interface eg L2 should be checked before L3 and the concern here is to
 provide a knowledge for the application to be able to understand the
 preference of this term ABS_OFFSET_L2. I believe we need to get an
 opinion from the different existing implementations as to the extent to
 which they can support this ABS_OFFSET_L2 and then take a call based on
 what works will all of them.


 For example, let's say an application wants to define a CoS based on
 ODP_PMR_DMAC and ODP_PMR_OFFSET where the offset lies beyond the
 ODP_PMR_DMAC field. If I understand correctly you need the application to
 tell you that the ODP_PMR_OFFSET PMR must be applied after the
 ODP_PMR_DMAC PMR. Am I correct?
 If so, we could require that PMRs be inserted in the correct order by
 the application.


Actually it is the reverse: ODP_PMR_OFFSET should be applied before DMAC
irrespective of the size of the offset. Let's take the proposed API
odp_pmr_create_offset_abs():




odp_pmr_t odp_pmr_create_offset_abs(odp_pmr_term_e term,
                                    unsigned int offset, const void *val,
                                    const void *mask, uint32_t val_sz);

The parameter offset in the above API is not part of the term but part of a
value, and this value can be different for each ODP_PMR_OFFSET rule. In a
HW packet parser, the parsing pointer must be at the start location for the
term ODP_PMR_OFFSET.
Basically the HW automatically advances the packet pointer to read
predefined terms like the Ethernet SMAC, but you need to be at the start of
the packet to read PMR_OFFSET, since this offset value is dynamic and can
vary per PMR.

Regards,
Bala


 ben

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [API-NEXT PATCH 1/3] api: classification: add ODP_PMR_OFFSET_ABS

2015-07-02 Thread Bala Manoharan
On 2 July 2015 at 19:03, Savolainen, Petri (Nokia - FI/Espoo) 
petri.savolai...@nokia.com wrote:



  -Original Message-
  From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of ext
  Benoît Ganne
  Sent: Thursday, June 18, 2015 9:53 PM
  To: lng-odp@lists.linaro.org
  Subject: [lng-odp] [API-NEXT PATCH 1/3] api: classification: add
  ODP_PMR_OFFSET_ABS
 
  The application can now specify a packet offset
  in PMR rules. This offset is absolute from the
  frame start. It is used to extract the PMR value.
 
  This is useful to support arbitrary backplane
  protocols and extensions.
 
  Signed-off-by: Benoît Ganne bga...@kalray.eu
  ---
   include/odp/api/classification.h | 25 +
   1 file changed, 25 insertions(+)
 
  diff --git a/include/odp/api/classification.h
  b/include/odp/api/classification.h
  index f597b26..7b84e78 100644
  --- a/include/odp/api/classification.h
  +++ b/include/odp/api/classification.h
  @@ -215,6 +215,8 @@ typedef enum odp_pmr_term {
ODP_PMR_IPSEC_SPI,  /** IPsec session
  identifier(*val=uint32_t)*/
ODP_PMR_LD_VNI, /** NVGRE/VXLAN network identifier
(*val=uint32_t)*/
  + ODP_PMR_OFFSET_ABS, /** User-defined offset/val_sz to match
  + (*val=uint8_t[val_sz]*/

 This is user defined raw offset from first byte of the frame.

 There could be another ones for more structured user defined offsets e.g.
 ODP_PMR_OFFSET_ABS_L3/L4, offset from first byte of L3/L4 header.


/** Inner header may repeat above values with this offset */
ODP_PMR_INNER_HDR_OFF = 32
  @@ -240,6 +242,27 @@ odp_pmr_t odp_pmr_create(odp_pmr_term_e term, const
  void *val,
 const void *mask, uint32_t val_sz);
/**
  + * Create a packet match rule with absolute offset, mask and value
  + *
  + * @param[in]termOne of the enumerated values supported

 Is this needed? I thought this is custom match rule and defined only by:
 offset, val, mask, val_sz


  + * @param[in]offset  Absolute value in the packet
  + * @param[in]val Value to match against the packet header
  + *   in native byte order.
  + * @param[in]maskMask to indicate which bits of the header
  + *   should be matched ('1') and
  + *   which should be ignored ('0')
  + * @param[in]val_sz  Size of the val and mask arguments,
  + *   that must match the value size requirement of the
  + *   specific term.
  + *
  + * @return   Handle of the matching rule
  + * @retval   ODP_PMR_INVAL on failure
  + */
  +odp_pmr_t odp_pmr_create_offset_abs(odp_pmr_term_e term,
  + unsigned int offset, const void *val,

 packet offsets and lengths are in uint32_t.

  + const void *mask, uint32_t val_sz);


 I'd define it like this,

 odp_pmr_t odp_pmr_create_custom(uint32_t offset, const void *val, const
 void *mask, uint32_t val_sz);

 It would fail if the requested custom rule is not supported or too many
 custom rules are used.


This signature should work fine. Could we additionally add the pktio
interface to this API, so that the implementation can fail during creation
itself if more than the supported number of custom rules get attached
to the pktio interface?

Regards,
Bala



 -Petri


  +
  +/**
* Invalidate a packet match rule and vacate its resources
*
* @param[in]pmr_id  Identifier of the PMR to be destroyed
  @@ -302,6 +325,8 @@ typedef struct odp_pmr_match_t {
const void  *val;   /** Value to be matched */
const void  *mask;  /** Masked set of bits to be matched */
unsigned intval_sz;  /** Size of the term value */
  + unsigned intoffset;  /** User-defined offset in packet
  + only valid if term
 ==
  ODP_PMR_OFFSET */
   } odp_pmr_match_t;
/**
  --
  2.1.4
 



Re: [lng-odp] [API-NEXT PATCH 1/3] api: classification: add ODP_PMR_OFFSET_ABS

2015-07-02 Thread Bala Manoharan
Hi,

I have a few concerns with this API proposal,

1. The term ODP_PMR_OFFSET_ABS could be renamed ODP_PMR_OFFSET_L2, as
this offset is from L2.
This would help platforms which can support an additional offset from a
custom layer other than L2: they can add an additional enum in their
private header file to extend the feature.

2. The PMR term ODP_PMR_OFFSET_ABS_L2 could NOT be added in the middle of
a PMR chain, as most platforms parse packets in a linear order and it
might not be feasible to come back and check the L2 starting offset, let's
say after parsing an L3 or L2 parameter. I would like to know if this
behavior is common across all platforms. Any comments are welcome here.

3. In the Cavium implementation only one ODP_PMR_OFFSET_ABS_L2 term can be
attached to a single pktio interface. I would like to know whether other
HWs have the same limitation, or whether any number of these terms can be
attached.

Regards,
Bala

On 2 July 2015 at 17:08, Maxim Uvarov maxim.uva...@linaro.org wrote:

 That is the change which we discussed on Sprint.

 Bala, Petri - no objection to include it?

 Thank you,
 Maxim.



 On 07/02/15 13:40, Benoît Ganne wrote:

 Ping.

 On 06/18/2015 08:53 PM, Benoît Ganne wrote:

 The application can now specify a packet offset
 in PMR rules. This offset is absolute from the
 frame start. It is used to extract the PMR value.

 This is useful to support arbitrary backplane
 protocols and extensions.

 Signed-off-by: Benoît Ganne bga...@kalray.eu
 ---
   include/odp/api/classification.h | 25 +
   1 file changed, 25 insertions(+)

 diff --git a/include/odp/api/classification.h
 b/include/odp/api/classification.h
 index f597b26..7b84e78 100644
 --- a/include/odp/api/classification.h
 +++ b/include/odp/api/classification.h
 @@ -215,6 +215,8 @@ typedef enum odp_pmr_term {
   ODP_PMR_IPSEC_SPI,/** IPsec session
 identifier(*val=uint32_t)*/
   ODP_PMR_LD_VNI,/** NVGRE/VXLAN network identifier
   (*val=uint32_t)*/
 +ODP_PMR_OFFSET_ABS,/** User-defined offset/val_sz to match
 +(*val=uint8_t[val_sz]*/
/** Inner header may repeat above values with this offset */
   ODP_PMR_INNER_HDR_OFF = 32
 @@ -240,6 +242,27 @@ odp_pmr_t odp_pmr_create(odp_pmr_term_e term, const
 void *val,
const void *mask, uint32_t val_sz);
/**
 + * Create a packet match rule with absolute offset, mask and value
 + *
 + * @param[in]termOne of the enumerated values supported
 + * @param[in]offsetAbsolute value in the packet
 + * @param[in]val Value to match against the packet header
 + *in native byte order.
 + * @param[in]maskMask to indicate which bits of the header
 + *should be matched ('1') and
 + *which should be ignored ('0')
 + * @param[in]val_sz  Size of the val and mask arguments,
 + *that must match the value size requirement of the
 + *specific term.
 + *
 + * @returnHandle of the matching rule
 + * @retvalODP_PMR_INVAL on failure
 + */
 +odp_pmr_t odp_pmr_create_offset_abs(odp_pmr_term_e term,
 +unsigned int offset, const void *val,
 +const void *mask, uint32_t val_sz);
 +
 +/**
* Invalidate a packet match rule and vacate its resources
*
* @param[in]pmr_idIdentifier of the PMR to be destroyed
 @@ -302,6 +325,8 @@ typedef struct odp_pmr_match_t {
   const void*val;/** Value to be matched */
   const void*mask;/** Masked set of bits to be matched */
   unsigned intval_sz; /** Size of the term value */
 +unsigned intoffset;  /** User-defined offset in packet
 +only valid if term == ODP_PMR_OFFSET */
   } odp_pmr_match_t;
/**






Re: [lng-odp] [API-NEXT PATCH 1/3] api: classification: add ODP_PMR_OFFSET_ABS

2015-07-02 Thread Bala Manoharan
Hi Ben,

Pls find my comments inline.

On 2 July 2015 at 18:48, Benoît Ganne bga...@kalray.eu wrote:

 Hi Bala,

 Thanks for your feedback. My comments inline.

  1. The term ODP_PMR_OFFSET_ABS could be renamed as ODP_PMR_OFFSET_L2 as
 this offset is from L2.
 This would help some platforms which can support additional offset from
 custom layer other than L2 and they can add additional Enum in their
 private header file to enhance the feature.


 I am not against a renaming, but this is intended to start at the
 beginning of the packet, this is why I called it 'ABS' for 'ABSOLUTE'.

  2. This PMR term ODP_PMR_OFFSET_ABS_L2 could NOT be added in the middle
 of a PMR chain as most platforms parse the packets in a linear order and
 it might not be feasible to come back and check the L2 starting offset
 lets say after parsing any L3 or L2 parameter. I would like to know if
 this behavior is common across all platforms. Any comments are welcome
 here.


 But what if you create a PMR on ODP_PMR_DMAC and then a ODP_PMR_OFFSET_ABS
 where the offset is greater than that? I believe this is an implementation
 restriction. I think it should be possible to reorder the rules in the
 implementation before configuring the HW.


This will not be feasible on my platform, since DMAC is a predefined term
to which the packet parsing pointer is moved in order to read the value,
whereas OFFSET_L2 is something which needs to be calculated while the
packet pointer is still at the start.



  3. In Cavium implementation only one of this term ODP_PMR_OFFSET_ABS_L2
 can be attached to a single pktio interface. Would like to know if other
 HWs have this same limitation or that any number of this terms can be
 attached.


 We can attach up to 10 such rules per CoS.


Does this mean that at each CoS level you can attach 10 different OFFSET
terms with different values? Our platform cannot implement that.


 ben


Regards,
Bala


Re: [lng-odp] [API-NEXT PATCH 1/3] api: classification: add ODP_PMR_OFFSET_ABS

2015-07-02 Thread Bala Manoharan
On 2 July 2015 at 21:17, Benoît Ganne bga...@kalray.eu wrote:

 Hi Bala,

  I'd define it like this,
 odp_pmr_t odp_pmr_create_custom(uint32_t offset, const void *val,
 const void *mask, uint32_t val_sz);

 It would fail if the requested custom rule is not supported or too
 many custom rules are used.


  This signature should work fine. Can we additionally add the pktio
 interface information also to this API so that the implementation could
 fail during creation itself if more than the supported numbers get
 attached to the pktio interface.


 I do not see why we should treat it differently from other rules. Eg. a
 platform which does not support eg. ODP_PMR_IPSEC_SPI should fail the same
 way. And it would defeat the whole discussion about classification rules
 contexts (Bill's patches).


In the case of ABS_OFFSET_L2, failure during creation is better, as the
application can handle different implementations supporting different
numbers of OFFSET term values more easily than failure while applying the
context, in which case the application will have to redo the entire
context.
In classification the responsibility lies with the application to configure
the rules unambiguously; the context management is there to help the
implementation optimise and reuse HW resources where possible.
The application should be intelligent enough to configure for good
performance across multiple implementations with minimal change, because
returning an error while applying the context is good but not great: if
each implementation returns an error for different configurations, the
portability aspect of ODP will be lost.

The issue here is to create this API in a way that avoids ambiguity for
the application. I would like to create the API in such a way that it
works efficiently on all the given platforms, which will generally help
ODP so that more platforms are able to migrate easily. With the rest of
the PMR terms there is an inherent precedence order, which is the order in
which the packet gets parsed on the interface, e.g. L2 should be checked
before L3; the concern here is to give the application a way to understand
the precedence of this term ABS_OFFSET_L2. I believe we need to get an
opinion from the different existing implementations as to the extent to
which they can support ABS_OFFSET_L2, and then take a call based on what
works with all of them.

Regards,
Bala


 ben



Re: [lng-odp] [PATCHv2 3/3] test: pktio: reduce pools seg_len to test segmented packets

2015-07-01 Thread Bala Manoharan
If the idea of this patch is to test segmented packets, it can be
accomplished by allocating packets of a size greater than seg_len in an
additional test case, rather than by modifying the segment length in the
pool create function.

Regards,
Bala

On 30 June 2015 at 22:26, Stuart Haslam stuart.has...@linaro.org wrote:

 On Tue, Jun 30, 2015 at 06:31:41PM +0300, Maxim Uvarov wrote:
  On 06/30/15 17:45, Stuart Haslam wrote:
  On Tue, Jun 30, 2015 at 03:33:53PM +0200, Nicolas Morey-Chaisemartin
 wrote:
  Signed-off-by: Nicolas Morey-Chaisemartin nmo...@kalray.eu
  Reviewed-by: Stuart Haslam stuart.has...@linaro.org
  
  However this will cause test failures due to bug #1661, so the fix for
  that should be merged first.
  
  ---
test/validation/pktio/pktio.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
  
  diff --git a/test/validation/pktio/pktio.c
 b/test/validation/pktio/pktio.c
  index 82ced5c..b87586a 100644
  --- a/test/validation/pktio/pktio.c
  +++ b/test/validation/pktio/pktio.c
  @@ -208,7 +208,7 @@ static int default_pool_create(void)
  return -1;
  memset(params, 0, sizeof(params));
  -   params.pkt.seg_len = PKT_BUF_SIZE;
  +   params.pkt.seg_len = PKT_BUF_SIZE / 3;
  comment has to be here.
  params.pkt.len = PKT_BUF_SIZE;
  params.pkt.num = PKT_BUF_NUM;
  params.type= ODP_POOL_PACKET;
  @@ -607,7 +607,7 @@ static int create_pool(const char *iface, int num)
  odp_pool_param_t params;
  memset(params, 0, sizeof(params));
  -   params.pkt.seg_len = PKT_BUF_SIZE;
  +   params.pkt.seg_len = PKT_BUF_SIZE / 3;
  
  and here. So that is should be clear why seg_len is set to that.
  Also I'm not sure that all platforms support packet segmentation.

 The seg_len parameter is defined as the "minimum number of packet data
 bytes that are stored in the first segment of a packet", so presumably
 an implementation that doesn't support segmentation simply rounds it up
 to seg_len == len and everything works.

  So I would prefer separate test for segmentation instead of forcing
  all test use segmented packets.

 It's actually only the jumbo test that may get segmented.

 Not that I'm particularly against having a separate test for segmented
 packets, but I don't think it's strictly necessary.

 --
 Stuart.


Re: [lng-odp] [PATCHv2 3/3] test: pktio: reduce pools seg_len to test segmented packets

2015-07-01 Thread Bala Manoharan
Also, since seg_len tells the implementation how many bytes are required to
be contiguous in the first segment for faster access, I would like the
value of seg_len to be something along the lines of 256 or 512 bytes
rather than 3K or 9K.

Regards,
Bala

On 2 July 2015 at 02:36, Bill Fischofer bill.fischo...@linaro.org wrote:

 The ODP API does not mandate that an implementation uses segments, only
 that applications can discover and work with any segments that may be
 present.

 So it is not possible to force an implementation to use segments since, as
 Stuart points out, seg_len is simply an application minimum value that
 implementations can increase as needed.

 In fixing Bug 1661 https://bugs.linaro.org/show_bug.cgi?id=1661 I extended
 the odp_packet test http://patches.opendataplane.org/patch/1956/ to
 allocate a packet of 5 times the configured minimum segment length
 (ODP_CONFIG_PACKET_SEG_LEN_MIN) to test the segment APIs against a packet
 that would likely be segmented on most implementations, but again there's
 no guarantee that a given implementation will actually create a segmented
 packet.

 In any event, if you wish to play the odds you need to base these
 calculations on ODP_CONFIG_PACKET_SEG_LEN_MIN rather than some
 application-chosen value since that's the only insight into what the
 implementation will guarantee in terms of minimum segmentation.

 On Wed, Jul 1, 2015 at 5:03 AM, Stuart Haslam stuart.has...@linaro.org
 wrote:

 On Wed, Jul 01, 2015 at 01:00:07PM +0530, Bala Manoharan wrote:
  If the idea of this patch is to test segmented packets it can be
  accomplished by allocating packets of size greater than seg_len in an
  additional test case rather than modifying the segment length in pool
  create function.
 
  Regards,
  Bala
 

 Yes but the pool seg_len is already larger than a max sized jumbo frame,
 so we wouldn't be able to allocate and send a packet larger than that.

 --
 Stuart.


Re: [lng-odp] Classification API clarity

2015-06-25 Thread Bala Manoharan
On 25 June 2015 at 02:48, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:



 On 24.06.15 19:57, Bala Manoharan wrote:

 Hi Ivan,

 Pls see my comments inline.

 On 24 June 2015 at 09:13, Ivan Khoronzhuk ivan.khoronz...@linaro.org
 wrote:

 Guys, sorry I didn't ask the following questions during the ODP
 meeting.
 I had an issue with my microphone and it seems the call was ended
 quickly.
 But I need to ask. Maybe it's better, it requires some illustration.

 For now, I need the answer on the following questions:

 - Are we going to use the API proposed by Taras? I mean when PMR is
 connected to CoSes while its creation.
 (https://lists.linaro.org/pipermail/lng-odp/2015-April/010975.html)

 - About PMR. Can we re-use the same PMR in several places? If no, then
 each PMR must have source and destination CoSes and can be created and
 connected to the CoSes in the same function. If yes, what will happen
 when PMR is deleted, it must be deleted in all places where it was
 used?
 What if I want to delete PMR in one place and leave it for others?
 I know that it can be up to user to allocate several same PMRs and
 delete them separately if he wants. But what the official behaviour
 about that? In case if PMR can be connected in several places, it
 requires
 me to emulate a lot of actions as in this case it has no 1:1 PMR
 mapping.
 And in case of platform I'm aware of I must postpone the physical
 allocation
 in the hardware, as it requires PMR to have CoSes while creating.


 This is something to be discussed since we want to have a similar
 behavior for both ingress and egress APIs
 the idea was go ahead with profiling model where the HWs which can
 effectively reuse the PMRs can do some optimization and defer from
 creating a single HW resource per PMR. But this point is up for
 discussion and I am not sure there was an agreement on the same.


 Yes. But I need to implement it now.




 - About PMR order. Do the classification usage have some order
 requirements?
 For instance. Can the PMR that on level 4 (UDP port) be inserted
 before PMR
 that on level 3 (exact IP address):
 +----+   +------+   +----+   +------+   +----+
 |CoS0| - |PMR l4| - |CoS1| - |PMR l3| - |CoS2|
 +----+   +------+   +----+   +------+   +----+
 This use-case can be useful for example when we want to receive in
 CoS1 all UDP
 packets according to PMR l4 rule and exclude packets with IP that
 match PMR l3
 rule. CoS2 in this case is some drop CoS.

 I know that it can be implemented in the following way:
             +------+   +----+
             |PMR l3| - |CoS2|
  +----+   / +------+   +----+
  |CoS0| -
  +----+   \ +------+   +----+
             |PMR l4| - |CoS1|
             +------+   +----+
 In this case the packets with not needed IP will be stolen before
 matching
 the PMR l4 rule, because PMR l3 rule was connected first. This example
 requires some headache to implement on my board, if it's possible at
 all.
 But after that step I have another question:


 - Can we connect PMRs of different levels to the same src CoS?
 This question is also about ordering. The above example is OK, as we
 connected
 PMR l3 before PMR l4. But what if it's used like following:
             +------+   +----+
             |PMR l4| - |CoS1|
  +----+   / +------+   +----+
  |CoS0| -
  +----+   \ +------+   +----+
             |PMR l3| - |CoS2|
             +------+   +----+
 In this case all packets of PMR l4 will go to CoS1 even they match
 PMR l3. Only
 when they didn't match PMR l4 they will go to match PMR l3. The
 issue is that
 it can't be implemented on platform I'm aware of. The hardware has
 separate
 tables for matching L3 and L4 levels, and L3 level must go before L4.

 - What action has to be done if user connected entries in the above
 examples
 order? It has to be reordered by implementation before allocating in
 hardware
 or implementation has to return an error. In case of reordering, I'm
 afraid it
 won't match the user needs. And this a question of portability.


 I am not sure whether HWs will be able to support such a connection; I
 believe most HWs can only process packets in a layered manner, i.e. L2 is
 processed before L3. Also, in classification the responsibility lies with
 the application to make sure that the rules are not programmed in this
 way, since the implementation will not be able to check for these error
 scenarios.


 The user has to know what he can do and what he cannot. The hardware in
 one case can support such a kind of connection, and in that case the
 application works as expected. But when the application is moved to
 another h/w that couldn't physically do like
Re: [lng-odp] Classification API clarity

2015-06-24 Thread Bala Manoharan
Hi Ivan,

Pls see my comments inline.

On 24 June 2015 at 09:13, Ivan Khoronzhuk ivan.khoronz...@linaro.org
wrote:

 Guys, sorry I didn't ask the following questions during the ODP meeting.
 I had an issue with my microphone and it seems the call was ended quickly.
 But I need to ask. Maybe it's better, it requires some illustration.

 For now, I need the answer on the following questions:

 - Are we going to use the API proposed by Taras? I mean when PMR is
 connected to CoSes while its creation.
 (https://lists.linaro.org/pipermail/lng-odp/2015-April/010975.html)

 - About PMR. Can we re-use the same PMR in several places? If no, then
 each PMR must have source and destination CoSes and can be created and
 connected to the CoSes in the same function. If yes, what will happen
 when PMR is deleted, it must be deleted in all places where it was used?
 What if I want to delete PMR in one place and leave it for others?
 I know that it can be up to user to allocate several same PMRs and
 delete them separately if he wants. But what the official behaviour
 about that? In case if PMR can be connected in several places, it requires
 me to emulate a lot of actions as in this case it has no 1:1 PMR mapping.
 And in case of platform I'm aware of I must postpone the physical
 allocation
 in the hardware, as it requires PMR to have CoSes while creating.


This is something to be discussed, since we want to have a similar behavior
for both the ingress and egress APIs.
The idea was to go ahead with a profiling model, where HWs which can
effectively reuse PMRs can do some optimization and refrain from creating
a single HW resource per PMR. But this point is up for discussion and I am
not sure there was an agreement on it.


 - About PMR order. Do the classification usage have some order
 requirements?
 For instance. Can the PMR that on level 4 (UDP port) be inserted before PMR
 that on level 3 (exact IP address):
+----+   +------+   +----+   +------+   +----+
|CoS0| - |PMR l4| - |CoS1| - |PMR l3| - |CoS2|
+----+   +------+   +----+   +------+   +----+
 This use-case can be useful for example when we want to receive in CoS1
 all UDP
 packets according to PMR l4 rule and exclude packets with IP that match
 PMR l3
 rule. CoS2 in this case is some drop CoS.

 I know that it can be implemented in the following way:
           +------+   +----+
           |PMR l3| - |CoS2|
+----+   / +------+   +----+
|CoS0| -
+----+   \ +------+   +----+
           |PMR l4| - |CoS1|
           +------+   +----+
 In this case the packets with not needed IP will be stolen before matching
 the PMR l4 rule, because PMR l3 rule was connected first. This example
 requires some headache to implement on my board, if it's possible at all.
 But after that step I have another question:


 - Can we connect PMRs of different levels to the same src CoS?
 This question is also about ordering. The above example is OK, as we
 connected
 PMR l3 before PMR l4. But what if it's used like following:
           +------+   +----+
           |PMR l4| - |CoS1|
+----+   / +------+   +----+
|CoS0| -
+----+   \ +------+   +----+
           |PMR l3| - |CoS2|
           +------+   +----+
 In this case all packets of PMR l4 will go to CoS1 even they match PMR l3.
 Only
 when they didn't match PMR l4 they will go to match PMR l3. The issue is
 that
 it can't be implemented on platform I'm aware of. The hardware has separate
 tables for matching L3 and L4 levels, and L3 level must go before L4.

 - What action has to be done if user connected entries in the above
 examples
 order? It has to be reordered by implementation before allocating in
 hardware
 or implementation has to return an error. In case of reordering, I'm
 afraid it
 won't match the user needs. And this a question of portability.


I am not sure whether HWs will be able to support such a connection; I
believe most HWs can only process packets in a layered manner, i.e. L2 is
processed before L3. Also, in classification the responsibility lies with
the application to make sure that the rules are not programmed in this
way, since the implementation will not be able to check for these error
scenarios.

Regards,
Bala


 - Do default CoS of pktio to be replaced in case when
 odp_pktio_default_cos_set() is called once again?



[lng-odp] [ODP/RFC v2 1/1] example: classification example

2015-06-22 Thread bala . manoharan
From: Balasubramanian Manoharan bala.manoha...@linaro.org

This is a UDP source port based loopback application which creates
multiple packet output queues for given UDP port numbers and attaches
them to the given pktio interface. Default packets are enqueued into the
lowest priority queue.

The packets are enqueued into different packet output queues based on
their UDP source port number and the user can configure the CIR, PIR, CBS
and PBS values for each of the packet output queues.

Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
---
 Makefile.am |   2 +-
 configure.ac|   1 +
 example/Makefile.am |   2 +-
 example/egress/Makefile.am  |  10 +
 example/egress/odp_egress.c | 727 
 5 files changed, 740 insertions(+), 2 deletions(-)
 create mode 100644 example/egress/Makefile.am
 create mode 100644 example/egress/odp_egress.c

diff --git a/Makefile.am b/Makefile.am
index 2c8a9d6..942b2df 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -1,4 +1,4 @@
-ACLOCAL_AMFLAGS=-I m4
+ACLOCAL_AMFLAGS=-I m4 -g
 AUTOMAKE_OPTIONS = foreign
 
 #@with_platform@ works alone in subdir but not as part of a path???
diff --git a/configure.ac b/configure.ac
index de7de50..e6d77dc 100644
--- a/configure.ac
+++ b/configure.ac
@@ -289,6 +289,7 @@ AC_CONFIG_FILES([Makefile
 doc/Makefile
 example/Makefile
 example/classifier/Makefile
+example/egress/Makefile
 example/generator/Makefile
 example/ipsec/Makefile
 example/packet/Makefile
diff --git a/example/Makefile.am b/example/Makefile.am
index 353f397..6ea51ab 100644
--- a/example/Makefile.am
+++ b/example/Makefile.am
@@ -1 +1 @@
-SUBDIRS = classifier generator ipsec packet timer
+SUBDIRS = classifier egress generator ipsec packet timer
diff --git a/example/egress/Makefile.am b/example/egress/Makefile.am
new file mode 100644
index 000..1d62daa
--- /dev/null
+++ b/example/egress/Makefile.am
@@ -0,0 +1,10 @@
+include $(top_srcdir)/example/Makefile.inc
+
+bin_PROGRAMS = odp_egress
+odp_egress_LDFLAGS = $(AM_LDFLAGS) -static
+odp_egress_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
+
+noinst_HEADERS = \
+ $(top_srcdir)/example/example_debug.h
+
+dist_odp_egress_SOURCES = odp_egress.c
diff --git a/example/egress/odp_egress.c b/example/egress/odp_egress.c
new file mode 100644
index 000..46ca38a
--- /dev/null
+++ b/example/egress/odp_egress.c
@@ -0,0 +1,727 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#define _POSIX_C_SOURCE 200112L
+#include time.h
+#include stdio.h
+#include stdlib.h
+#include string.h
+#include getopt.h
+#include unistd.h
+#include example_debug.h
+
+#include odp.h
+#include odp/helper/linux.h
+#include odp/helper/eth.h
+#include odp/helper/ip.h
+#include odp/helper/udp.h
+#include strings.h
+#include errno.h
+
+/** @def MAX_WORKERS
+ * @brief Maximum number of worker threads
+ */
+#define MAX_WORKERS32
+
+/** @def SHM_PKT_POOL_SIZE
+ * @brief Size of the shared memory block
+ */
+#define SHM_PKT_POOL_SIZE  (2048 * 2048)
+
+/** @def SHM_PKT_POOL_BUF_SIZE
+ * @brief Buffer size of the packet pool buffer
+ */
+#define SHM_PKT_POOL_BUF_SIZE  1856
+
+/** @def MAX_VLAN_PRIO
+ * @brief Maximum vlan priority value
+ */
+#define MAX_VLAN_PRIO  8
+
+/** Get rid of path in filename - only for unix-type paths using '/' */
+#define NO_PATH(file_name) (strrchr((file_name), '/') ? \
+   strrchr((file_name), '/') + 1 : (file_name))
+
+typedef struct {
+   int appl_mode;  /** application mode */
+   int cpu_count;  /** Number of CPUs to use */
+   uint32_t time;  /** Number of seconds to run */
+   char *if_name;  /** pointer to interface names */
+   odp_atomic_u64_t total_packets; /** total received packets */
+   odp_atomic_u64_t drop_packets;  /** packets dropped */
+   odp_pktout_queue_t outq[MAX_VLAN_PRIO];
+   uint32_t cir_kbps[MAX_VLAN_PRIO];
+   uint32_t pir_kbps[MAX_VLAN_PRIO];
+   uint32_t cbs_bytes[MAX_VLAN_PRIO];
+   uint32_t pbs_bytes[MAX_VLAN_PRIO];
+   uint32_t rate_kbps;
+   uint32_t burst_byte;
+   uint16_t udp_port[MAX_VLAN_PRIO];
+} appl_args_t;
+
+/* helper funcs */
+static int drop_err_pkts(odp_packet_t pkt_tbl[], unsigned len);
+static void swap_pkt_addrs(odp_packet_t pkt_tbl[], unsigned len);
+static void parse_args(int argc, char *argv[], appl_args_t *appl_args);
+static void print_info(char *progname, appl_args_t *appl_args);
+static void usage(char *progname);
+static int parse_vlan_conf(appl_args_t *appl_args, char *argv[], char *optarg);
+
+static inline
+void statistics(appl_args_t *args)
+{
+   int i;
+   uint32_t timeout;
+   int infinite = 0;
+   struct timespec t;
+
+   t.tv_sec = 0;
+   t.tv_nsec = 10 * 

Re: [lng-odp] [PATCH 1/2] example: classifier: check sscanf return code

2015-06-10 Thread Bala Manoharan
Reviewed-by: Balasubramanian Manoharan bala.manoha...@linaro.org

On 10 June 2015 at 17:36, Nicolas Morey-Chaisemartin nmo...@kalray.eu
wrote:

 Signed-off-by: Nicolas Morey-Chaisemartin nmo...@kalray.eu
 ---
  example/classifier/odp_classifier.c | 6 --
  1 file changed, 4 insertions(+), 2 deletions(-)

 diff --git a/example/classifier/odp_classifier.c
 b/example/classifier/odp_classifier.c
 index 48fc1ab..63678b7 100644
 --- a/example/classifier/odp_classifier.c
 +++ b/example/classifier/odp_classifier.c
 @@ -171,9 +171,11 @@ static inline
  int parse_ipv4_mask(const char *str, uint32_t *mask)
  {
 uint32_t b;
 -   sscanf(str, "%x", &b);
 +   int ret;
 +
 +   ret = sscanf(str, "%x", &b);
 *mask = b;
 -   return 0;
 +   return ret != 1;
  }

  /**
 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] Apply multiple filters using ODP Classifier API

2015-06-10 Thread Bala Manoharan
Hi,

There is a possibility in classification configuration to attach multiple
PMR rules at the pktio level.
I believe the above example you have described could be solved using the
following rules

pmr1 = odp_pmr_create(rule1);
pmr2 = odp_pmr_create(rule2);

odp_pktio_pmr_match_set_cos(pmr1, src_pktio, cos1);

odp_pktio_pmr_match_set_cos(pmr2, src_pktio, cos2);

With the above configuration the incoming packets will be matched against
PMR1; if a packet does not match, it will be matched against PMR2, and if
both fail, the packet will be sent to the default CoS attached to the
src_pktio.

Please note that since both the PMRs are attached at the same level the
preference among pmr1 and pmr2 will be implementation dependent.

Hope this helps,

Bala

On 10 June 2015 at 17:23, Genís Riera Pérez genis.riera.pe...@gmail.com
wrote:

 Hi,

 I have used the linux-generic implementation of ODP for a few months now, and
 I would like to implement a classification system based on multiple
 filters. My scenario is as follows (arbitrary example, not real values):

 Imagine that you want classify packets matching the following set of
 filters:

 Filter 1:
 - IP source 1.2.3.4/24
 - IP destination 5.6.7.8/24
 - Port source 2345
 - Port destination 80
 - IP protocol TCP

 If all the above rules from Filter1 matched, the packet will be assigned
 to the queue Queue1. if not all the above rules from Filter1 matched, try
 the next filter:

 Filter2:
 - IP source 10.20.30.40/24
 - IP destination 50.60.70.80/24
 - Port source 5432
 - Port destination 80
 - IP protocol TCP

 If all the above rules from Filter2 matched, the packet will be assigned
 to the queue Queue2. If not all the above from FIlter2 matched, assign
 the packet to a default queue DefaultQueue.

 With only one filter, it's easy to accomplish by using the CoS cascade
 method (odp_cos_pmr_cos() function) and setting a default CoS and queue to
 the pktio I use in my program. But what about when I want to define more
 than one filter? I've been working on it for a long time, and I would like to
 ask if someone could help me. Maybe ODP does not support this type of
 scenarios yet, but it would be fantastic to know from other people that are
 currently working on ODP too.

 Thank you so much, and sorry for my English level, which is not as good as
 might be expected.

 Regards,

 Genís Riera Pérez

 E-mail: genis.riera.pe...@gmail.com

 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp


___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH] validation: pktio: don't use reserved UDP port 0

2015-06-10 Thread Bala Manoharan
We need the same fix for the IP addresses in that function, since the src
and dst IP addresses are also set to zero.

Regards,
Bala

On 10 June 2015 at 20:55, Maxim Uvarov maxim.uva...@linaro.org wrote:

 I always fix that for debugging :)

 Maxim.

 On 06/10/15 18:00, Stuart Haslam wrote:

 The test generates UDP packets with the src and dest port numbers set to
 0,
 which is a reserved port so may lead some to the packets being dropped
 and the test failing. Change to use some other arbitrary port numbers.

 Signed-off-by: Stuart Haslam stuart.has...@linaro.org
 ---
 This is a fix for; https://bugs.linaro.org/show_bug.cgi?id=1632

   test/validation/odp_pktio.c | 4 ++--
   1 file changed, 2 insertions(+), 2 deletions(-)

 diff --git a/test/validation/odp_pktio.c b/test/validation/odp_pktio.c
 index e1025d6..495222f 100644
 --- a/test/validation/odp_pktio.c
 +++ b/test/validation/odp_pktio.c
 @@ -168,8 +168,8 @@ static uint32_t pktio_init_packet(odp_packet_t pkt)
 /* UDP */
 odp_packet_l4_offset_set(pkt, ODPH_ETHHDR_LEN + ODPH_IPV4HDR_LEN);
 udp = (odph_udphdr_t *)(buf + ODPH_ETHHDR_LEN + ODPH_IPV4HDR_LEN);
 -   udp->src_port = odp_cpu_to_be_16(0);
 -   udp->dst_port = odp_cpu_to_be_16(0);
 +   udp->src_port = odp_cpu_to_be_16(12049);
 +   udp->dst_port = odp_cpu_to_be_16(12050);
 udp->length = odp_cpu_to_be_16(pkt_len -
ODPH_ETHHDR_LEN -
 ODPH_IPV4HDR_LEN);
 udp->chksum = 0;


 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] Apply multiple filters using ODP Classifier API

2015-06-10 Thread Bala Manoharan
Hi Genís,

Sorry that I had not explained about the second part of your question in
the previous mail,

The composite PMR (odp_pmr_set_t) succeeds only if it matches ALL the
values in the set: it is an AND operation, not an OR operation.
The entire set either matches or does not match as a single entity. It is
the reverse of what you have explained in your mail.

So in your case you need to create two composite pmrs using
odp_pmr_match_set_create() and attach them to pktio interface using
odp_pktio_pmr_match_set_cos() function.

Regards,
Bala
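For illustration, the configuration described above might look roughly like the sketch below. This is not compilable as-is; it assumes the odp_pmr_match_t / odp_pmr_match_set_create() signatures from the classification API of that period, and the term names and field layout may differ on a given platform:

```c
/* Sketch only: one composite PMR per filter. All terms inside a match
 * set are ANDed; the two sets attached to the same pktio act as
 * alternatives, with the pktio default CoS as the final fallback. */
uint32_t sip   = odp_cpu_to_be_32(0x0a141e28); /* 10.20.30.40 */
uint32_t smask = odp_cpu_to_be_32(0xffffff00); /* /24 */
uint16_t dport = odp_cpu_to_be_16(80);
uint16_t pmask = 0xffff;
odp_pmr_match_t terms[2];
odp_pmr_set_t filter2;

terms[0].term   = ODP_PMR_SIP_ADDR;
terms[0].val    = &sip;
terms[0].mask   = &smask;
terms[0].val_sz = sizeof(sip);

terms[1].term   = ODP_PMR_TCP_DPORT;
terms[1].val    = &dport;
terms[1].mask   = &pmask;
terms[1].val_sz = sizeof(dport);

/* Attach Filter2 as a whole; it matches only if ALL terms match */
if (odp_pmr_match_set_create(2, terms, &filter2) < 0)
	/* handle error */;
odp_pktio_pmr_match_set_cos(filter2, pktio, cos_queue2);
```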

On 10 June 2015 at 19:33, Genís Riera Pérez genis.riera.pe...@gmail.com
wrote:

 Hi Bala,

 First of all, thanks a lot for your response. I think I've explained
 myself wrong in the previous mail, so I'll try to express myself in a
 better way.

 The scenario I propose has two filters, each one composed of a set of PMRs.
 The problem is that I want the packet classified into Queue1 if it matches
 exactly all PMRs in Filter1, or into Queue2 if it matches exactly
 all PMRs in Filter2, or into DefaultQueue if it does not match any of
 the specified filters. As an example, imagine that you receive a packet
 with the following information:

 - IP source 10.20.30.23
 - IP destination 50.60.70.80
 - Port source 5432
 - Port destination 80
 - IP protocol TCP

 This packet will be enqueued on Queue2 since its IP source does not
 match the IP source specified in Filter1 (neither the source port), but the
 whole packet matches all PMRs in Filter2, so its classified queue will be
 Queue2. However, if you receive the following packet:

 - IP source 1.2.3.23
 - IP destination 50.60.70.80
 - Port source 5432
 - Port destination 80
 - IP protocol TCP

 This will be enqueued on DefaultQueue because for the Filter1, the IP
 destination does not match, and for the Filter2 the IP source does not
 match, so at the end this packet does not match completely any of the
 filters and will be enqueued on DefaultQueue.

 As I understand it, a composite PMR (odp_pmr_set_t) applies if one of the PMRs
 belonging to the set matches (an OR operation between PMRs of an
 odp_pmr_set_t), but I need to apply an AND operation between PMRs inside a
 filter, plus an OR operation between the different filters I specify
 for the application. Am I right?

 Hope I clarify the problem I try to solve with ODP.

 Regards,

 Genís Riera Pérez

 E-mail: genis.riera.pe...@gmail.com

 2015-06-10 15:25 GMT+02:00 Bala Manoharan bala.manoha...@linaro.org:

 Hi,

 There is a possibility in classification configuration to attach multiple
 PMR rules at the pktio level.
 I believe the above example you have described could be solved using the
 following rules

 pmr1 = odp_pmr_create(rule1);
 pmr2 = odp_pmr_create(rule2);

 odp_pktio_pmr_match_set_cos(pmr1, src_pktio, cos1);

 odp_pktio_pmr_match_set_cos(pmr2, src_pktio, cos2);

 With this above configuration the incoming packets will be matched with
 PMR1 and if it does not match it will be matched with PMR2 and if both
 fails the packet will be sent to the default Cos attached with the
 src_pktio.

 Please note that since both the PMRs are attached at the same level the
 preference among pmr1 and pmr2 will be implementation dependent.

 Hope this helps,

 Bala

 On 10 June 2015 at 17:23, Genís Riera Pérez genis.riera.pe...@gmail.com
 wrote:

 Hi,

 I've used the linux-generic implementation of ODP since few month ago,
 and I would like to implement a classification system based on multiple
 filters. My scenario is as follows (arbitrary example, not real values):

 Imagine that you want classify packets matching the following set of
 filters:

 Filter 1:
 - IP source 1.2.3.4/24
 - IP destination 5.6.7.8/24
 - Port source 2345
 - Port destination 80
 - IP protocol TCP

 If all the above rules from Filter1 matched, the packet will be assigned
 to the queue Queue1. if not all the above rules from Filter1 matched, try
 the next filter:

 Filter2:
 - IP source 10.20.30.40/24
 - IP destination 50.60.70.80/24
 - Port source 5432
 - Port destination 80
 - IP protocol TCP

 If all the above rules from Filter2 matched, the packet will be assigned
 to the queue Queue2. If not all the above from FIlter2 matched, assign
 the packet to a default queue DefaultQueue.

 With only one filter, it's easy to accomplish by using the CoS cascade
 method (odp_cos_pmr_cos() function) and setting a default CoS and queue to
 the pktio I use in my program. But what about when I want to define more
 than one filter? I've been working on it for a long time, and I would to
 ask if someone could help me. Maybe ODP does not support this type of
 scenarios yet, but it would be fantastic to know from other people that are
 currently working on ODP too.

 Thank you so much, and sorry for my english level, which is not as good
 as might be expected.

 Regards,

 Genís Riera Pérez

 E-mail: genis.riera.pe...@gmail.com

 ___
 lng-odp mailing list
 lng-odp

Re: [lng-odp] [PATCH] validation: pktio: do not dequeue from scheduled queue

2015-06-08 Thread Bala Manoharan
Reviewed-by: Balasubramanian Manoharan bala.manoha...@linaro.org

On 2 June 2015 at 20:16, Maxim Uvarov maxim.uva...@linaro.org wrote:

 packet i/o test can create 2 types of queues: scheduled and
 polled. Do not do dequeue from scheduled queue.
 https://bugs.linaro.org/show_bug.cgi?id=1383

 Signed-off-by: Maxim Uvarov maxim.uva...@linaro.org
 ---
  test/validation/odp_pktio.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

 diff --git a/test/validation/odp_pktio.c b/test/validation/odp_pktio.c
 index 7c1a666..e1025d6 100644
 --- a/test/validation/odp_pktio.c
 +++ b/test/validation/odp_pktio.c
 @@ -319,7 +319,8 @@ static odp_packet_t wait_for_packet(odp_queue_t queue,
 start = odp_time_cycles();

 do {
 -   if (queue != ODP_QUEUE_INVALID)
 +   if (queue != ODP_QUEUE_INVALID &&
 +   odp_queue_type(queue) == ODP_QUEUE_TYPE_POLL)
 ev = queue_deq_wait_time(queue, ns);
 else
 ev  = odp_schedule(NULL, ns);
 --
 1.9.1

 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


[lng-odp] [ODP/RFC 2/2] example: egress classifier example

2015-06-08 Thread bala . manoharan
From: Balasubramanian Manoharan bala.manoha...@linaro.org

This is a VLAN-priority-based L2 loopback application which creates
multiple packet output queues, one per VLAN priority value, and attaches
them to the given pktio interface.

The packets are enqueued into different packet output queues based on
their VLAN priority values, and the user can configure the CIR, PIR, CBS
and PBS values for each of the packet output queues.

The user can also set a port-level rate limiting value.

Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
---
 Makefile.am |   2 +-
 configure.ac|   1 +
 example/Makefile.am |   2 +-
 example/egress/Makefile.am  |  10 +
 example/egress/odp_egress.c | 702 
 5 files changed, 715 insertions(+), 2 deletions(-)
 create mode 100644 example/egress/Makefile.am
 create mode 100644 example/egress/odp_egress.c

diff --git a/Makefile.am b/Makefile.am
index b9b2517..72be426 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -1,4 +1,4 @@
-ACLOCAL_AMFLAGS=-I m4
+ACLOCAL_AMFLAGS=-I m4 -g
 AUTOMAKE_OPTIONS = foreign
 
 SUBDIRS = doc platform example test helper scripts
diff --git a/configure.ac b/configure.ac
index 89b0846..ce70ca0 100644
--- a/configure.ac
+++ b/configure.ac
@@ -287,6 +287,7 @@ AC_CONFIG_FILES([Makefile
 doc/Makefile
 example/Makefile
 example/classifier/Makefile
+example/egress/Makefile
 example/generator/Makefile
 example/ipsec/Makefile
 example/packet/Makefile
diff --git a/example/Makefile.am b/example/Makefile.am
index 353f397..6ea51ab 100644
--- a/example/Makefile.am
+++ b/example/Makefile.am
@@ -1 +1 @@
-SUBDIRS = classifier generator ipsec packet timer
+SUBDIRS = classifier egress generator ipsec packet timer
diff --git a/example/egress/Makefile.am b/example/egress/Makefile.am
new file mode 100644
index 000..1d62daa
--- /dev/null
+++ b/example/egress/Makefile.am
@@ -0,0 +1,10 @@
+include $(top_srcdir)/example/Makefile.inc
+
+bin_PROGRAMS = odp_egress
+odp_egress_LDFLAGS = $(AM_LDFLAGS) -static
+odp_egress_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
+
+noinst_HEADERS = \
+ $(top_srcdir)/example/example_debug.h
+
+dist_odp_egress_SOURCES = odp_egress.c
diff --git a/example/egress/odp_egress.c b/example/egress/odp_egress.c
new file mode 100644
index 000..969b1e3
--- /dev/null
+++ b/example/egress/odp_egress.c
@@ -0,0 +1,702 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#define _POSIX_C_SOURCE 200112L
+#include <time.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <getopt.h>
+#include <unistd.h>
+#include <example_debug.h>
+
+#include <odp.h>
+#include <odp/helper/linux.h>
+#include <odp/helper/eth.h>
+#include <odp/helper/ip.h>
+#include <strings.h>
+#include <errno.h>
+
+/** @def MAX_WORKERS
+ * @brief Maximum number of worker threads
+ */
+#define MAX_WORKERS            32
+
+/** @def SHM_PKT_POOL_SIZE
+ * @brief Size of the shared memory block
+ */
+#define SHM_PKT_POOL_SIZE  (1024 * 2048)
+
+/** @def SHM_PKT_POOL_BUF_SIZE
+ * @brief Buffer size of the packet pool buffer
+ */
+#define SHM_PKT_POOL_BUF_SIZE  1856
+
+/** @def MAX_VLAN_PRIO
+ * @brief Maximum vlan priority value
+ */
+#define MAX_VLAN_PRIO  8
+
+/** Get rid of path in filename - only for unix-type paths using '/' */
+#define NO_PATH(file_name) (strrchr((file_name), '/') ? \
+   strrchr((file_name), '/') + 1 : (file_name))
+
+typedef struct {
+   int appl_mode;  /**< application mode */
+   int cpu_count;  /**< Number of CPUs to use */
+   uint32_t time;  /**< Number of seconds to run */
+   char *if_name;  /**< pointer to interface names */
+   odp_atomic_u64_t total_packets; /**< total received packets */
+   odp_atomic_u64_t drop_packets;  /**< packets dropped */
+   odp_pktout_queue_t outq[MAX_VLAN_PRIO];
+   uint32_t cir_kbps[MAX_VLAN_PRIO];
+   uint32_t pir_kbps[MAX_VLAN_PRIO];
+   uint32_t cbs_bytes[MAX_VLAN_PRIO];
+   uint32_t pbs_bytes[MAX_VLAN_PRIO];
+   uint32_t rate_kbps;
+   uint32_t burst_byte;
+} appl_args_t;
+
+/* helper funcs */
+static int drop_err_pkts(odp_packet_t pkt_tbl[], unsigned len);
+static void swap_pkt_addrs(odp_packet_t pkt_tbl[], unsigned len);
+static void parse_args(int argc, char *argv[], appl_args_t *appl_args);
+static void print_info(char *progname, appl_args_t *appl_args);
+static void usage(char *progname);
+static int parse_vlan_conf(appl_args_t *appl_args, char *argv[], char *optarg);
+
+static inline
+void statistics(appl_args_t *args)
+{
+   int i;
+   uint32_t timeout;
+   int infinite = 0;
+   struct timespec t;
+
+   t.tv_sec = 0;
+   t.tv_nsec = 10 * 1000 * 1000; /* 10 milli second */
+
+   printf("\n");
+   for (i = 0; i < 80; i++)
+  

[lng-odp] [ODP/RFC 1/2] linux-generic: egress classification implementation

2015-06-08 Thread bala . manoharan
From: Balasubramanian Manoharan bala.manoha...@linaro.org

This is the linux-generic implementation of egress classification.
This is a lock-less implementation of output packet scheduling, shaping
and rate limiting.

Multiple packet output queues with different priority values can be created
and attached with pktio interface and the packet enqueued into the output
queues will be scheduled based on the priority value.

Committed Information Rate, Committed Burst Size, Peak Information Rate
and Peak Burst Size can be set individually on each packet output queues.

Rate limiting can also be set on a pktio interface.

The implementation supports multiple hierarchy levels of packet scheduling,
but since the current APIs are defined only for a single level, this
version of the implementation supports only one hierarchy level.
Once the APIs are finalized for multiple hierarchy levels, the same will be
incorporated.

Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
---
 include/odp.h  |   1 +
 include/odp/api/pktout.h   | 276 +
 platform/linux-generic/Makefile.am |   4 +
 platform/linux-generic/include/odp/pktout.h|  30 ++
 .../linux-generic/include/odp/plat/pktout_types.h  |  44 ++
 platform/linux-generic/include/odp_internal.h  |   3 +
 .../linux-generic/include/odp_packet_io_internal.h |  12 +-
 .../include/odp_packet_out_internal.h  | 164 
 platform/linux-generic/odp_init.c  |   5 +
 platform/linux-generic/odp_packet_io.c |  49 ++-
 platform/linux-generic/odp_packet_out.c| 454 +
 11 files changed, 1027 insertions(+), 15 deletions(-)
 create mode 100644 include/odp/api/pktout.h
 create mode 100644 platform/linux-generic/include/odp/pktout.h
 create mode 100644 platform/linux-generic/include/odp/plat/pktout_types.h
 create mode 100644 platform/linux-generic/include/odp_packet_out_internal.h
 create mode 100644 platform/linux-generic/odp_packet_out.c

diff --git a/include/odp.h b/include/odp.h
index 2bac510..b8df7d3 100644
--- a/include/odp.h
+++ b/include/odp.h
@@ -38,6 +38,7 @@ extern "C" {
 #include <odp/shared_memory.h>
 #include <odp/buffer.h>
 #include <odp/pool.h>
+#include <odp/pktout.h>
 #include <odp/queue.h>
 #include <odp/ticketlock.h>
 #include <odp/time.h>
diff --git a/include/odp/api/pktout.h b/include/odp/api/pktout.h
new file mode 100644
index 000..0d64a61
--- /dev/null
+++ b/include/odp/api/pktout.h
@@ -0,0 +1,276 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+ * @file
+ *
+ * ODP packet descriptor
+ */
+
+#ifndef ODP_API_PACKET_OUT_H_
+#define ODP_API_PACKET_OUT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <odp/std_types.h>
+#include <odp/buffer.h>
+#include <odp/packet_io.h>
+#include <odp/schedule.h>
+
+/**
+ * Number of packet output scheduling priorities per pktio interface
+ * This function returns the number of pktout scheduling priorities
+ * supported by the platform per pktio interface.
+ *
+ * @param[in]  pktio   pktio handle
+ *
+ * @retval Number of packet output scheduling priorities
+ * @retval 1 if the pktio does not support pktout priority
+ */
+int odp_pktout_num_prio(odp_pktio_t pktio);
+
+/**
+* Gets the total number of hierarchy levels supported by the platform
+* for output packet shaping and scheduling
+*
+* @return  Total number of hierarchy levels supported
+*/
+uint8_t odp_pktout_num_heirarchy(void);
+
+/* Get the maximum number of packet output queues that can be attached to
+* the given pktio interface. This will be the total number of packet output
+* queues supported by the pktio interface. This number is different from the
+* pktout nodes, which are points in the hierarchical tree.
+*
+* @param[in]   pktio   pktio handle
+*
+* @retval  Number of output queues that can be attached to the
+*  given pktio handle
+* @retval  1 if the pktout scheduling is not supported
+*/
+uint32_t odp_pktout_max_queues(odp_pktio_t pktio);
+
+/* Assign outq to pktio interface
+* Sets the output queues for the pktio interface. Any queues that may be
+* previously associated with this pktio will be superseded by this queue array.
+* The number of queues associated with the pktio interface must not exceed the
+* maximum number of output queues given by the odp_pktout_max_queues() function
+* for the same pktio interface.
+* Calling this function with num_queues set to zero will disassociate all
+* the queues that are linked to this pktio interface
+*
+* @param[in]   pktio   pktio handle
+* @param[in]   queue_hdl[] Array of queue handles
+* @param[in]   num_queues  Number of queue handles in the array
+*
+* @return  Number of queues associated
+*  0 in case of failure
+*/
+int odp_pktout_set_queues(odp_pktio_t pktio,
+  

Re: [lng-odp] loopback interface

2015-05-28 Thread Bala Manoharan
Hi,

Yes. The loopback interface should behave like the other interfaces in the
system, and classification rules, if set, should get applied.

Regards,
Bala

On 28 May 2015 at 14:51, Ola Liljedahl ola.liljed...@linaro.org wrote:
 Is the loopback interface supposed to be supported in all ODP
 implementations? And it will have the same functionality as real pktio
 interfaces?

 I was thinking of feeding the loopback interface packets that are read from
 a pcap file. Classification rules could then be associated with the loopback
 interface and packets will eventually be enqueued on the queues specified in
 the matching class of service.

 -- Ola


 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


[lng-odp] Fwd: [PATCH 1/3] example: classifier: remove extra local init

2015-05-28 Thread Bala Manoharan
Hi Maxim,

This patch from Petri fixes an issue in the classifier example and I had
sent my Reviewed-by for the same.
Can you please merge this patch?

Regards,
Bala

-- Forwarded message --
From: Savolainen, Petri (Nokia - FI/Espoo) petri.savolai...@nokia.com
Date: 7 May 2015 at 18:54
Subject: RE: [lng-odp] [PATCH 1/3] example: classifier: remove extra local init
To: ext Bala Manoharan bala.manoha...@linaro.org
Cc: LNG ODP Mailman List lng-odp@lists.linaro.org


I noticed the same and will add that documentation.



-Petri



From: ext Bala Manoharan [mailto:bala.manoha...@linaro.org]
Sent: Thursday, May 07, 2015 3:49 PM
To: Savolainen, Petri (Nokia - FI/Espoo)
Cc: LNG ODP Mailman List
Subject: Re: [lng-odp] [PATCH 1/3] example: classifier: remove extra local init



Reviewed-by: Balasubramanian Manoharan bala.manoha...@linaro.org

IMO, we can add additional information in odph_linux_pthread_create()
header file documentation that this function is expected to call
odp_init_local() for the thread it creates. Current documentation only
says the following

/**
 * Creates and launches pthreads
 *
 * Creates, pins and launches threads to separate CPU's based on the cpumask.
 *
 * @param thread_tblThread table
 * @param mask  CPU mask
 * @param start_routine Thread start function
 * @param arg   Thread argument
 */
void odph_linux_pthread_create(odph_linux_pthread_t *thread_tbl,
   const odp_cpumask_t *mask,
   void *(*start_routine) (void *), void *arg);

Regards,

Bala



On 7 May 2015 at 17:04, Petri Savolainen petri.savolai...@nokia.com wrote:

Worker threads are created with odph_linux_pthread_create()
which calls odp_init_local() before entering the function.

Signed-off-by: Petri Savolainen petri.savolai...@nokia.com
---
 example/classifier/odp_classifier.c | 7 ---
 1 file changed, 7 deletions(-)

diff --git a/example/classifier/odp_classifier.c
b/example/classifier/odp_classifier.c
index d78eb7b..35d9684 100644
--- a/example/classifier/odp_classifier.c
+++ b/example/classifier/odp_classifier.c
@@ -249,13 +249,6 @@ static void *pktio_receive_thread(void *arg)
appl_args_t *appl = (appl_args_t *)arg;
global_statistics *stats;

-
-   /* Init this thread */
-   if (odp_init_local()) {
-   EXAMPLE_ERR(ODP thread local init failed.\n);
-   exit(EXIT_FAILURE);
-   }
-
/* Loop packets */
for (;;) {
odp_pktio_t pktio_tmp;
--
2.4.0

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp
___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] odp_classifier issue

2015-05-28 Thread Bala Manoharan
Yes, I am also searching for this patch in the repo.
It looks like the patch from Petri has been missed.

Regards,
Bala

On 28 May 2015 at 18:31, Savolainen, Petri (Nokia - FI/Espoo)
petri.savolai...@nokia.com wrote:
 I send a patch that corrected this, but not sure what happened to it.



 -Petri



 From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of ext
 Radu-Andrei Bulie
 Sent: Thursday, May 28, 2015 3:54 PM
 To: lng-odp@lists.linaro.org
 Subject: [lng-odp] odp_classifier issue



 Hi,



 There is a problem in the odp_classifier demo application. Each time an
 odp_thread is created, an odp_init_local is done inside the thread function,
 which is not ok. odp_init_local is already done in the odp thread helper
 when a thread is created, so there is no need to call odp_init_local again
 on the existing thread.



 Regards,



 Radu


 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] buffer_alloc length parameter

2015-05-27 Thread Bala Manoharan
On 27 May 2015 at 14:04, Ola Liljedahl ola.liljed...@linaro.org wrote:
 On 27 May 2015 at 08:00, Bala Manoharan bala.manoha...@linaro.org wrote:

 Hi,

 On 26 May 2015 at 20:14, Zoltan Kiss zoltan.k...@linaro.org wrote:
 
 
  On 26/05/15 12:19, Bala Manoharan wrote:
 
  In the current API the odp_packet_alloc() function always takes len
  as input parameter and hence it is not required to support any default
  size.
 
 
  Yes, that's why I think it's a problem that linux-generic takes 0 as
  use
  the segment size of the pool, whilst another implementation (e.g.
  ODP-DPDK)
  can interpret it as a buffer with only (default) headroom and tailroom.
  I
  think my interpretation for ODP-DPDK is more logical, but it's not
  necessarily what we need. We should probably define that in the API.

Yes, we should define this properly in the API, but I am not sure about
the use case where a 0-size buffer
would be required by the application.

 I agree it is difficult to see the actual use case for allocating a packet
 with length 0 (no data present) but you can also argue that there is nothing
 special about a 0 length packet, it should behave the same as when the
 length is > 0.

There is nothing wrong with a 0-size packet, but my concern is that HW
limitations will prevent the HW from returning a packet of size 0. The
packet will report a length of zero but will actually consume a size
equal to the minimal segment size.


 Since this specific value for the size parameter causes such confusion on
 what to actually do, I think we should minimum define it better. The
 implementation in linux-generic (per above) is not good as the segment size
 may change and we don't want that to be directly visible to the application.

 There also needs to be a verification test that when allocating a packet
 (using odp_packet_alloc) with data length N, the packet length
 (odp_packet_len) is actually N. N includes the value 0 as 0 is a valid
 packet data length.

I am okay with this test case, but I just want to make sure that if a
packet of size 0 is allocated from a packet pool, the pool will get
depleted at a rate equal to the segment size of the pool.
Just to be clear, if odp_packet_alloc(0) is run 100 times in a loop,
the pool will be depleted by an amount equal to (100 * segment size of
the pool).
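A short sketch of that depletion behaviour (ODP API, not standalone-compilable; assumes a packet pool handle 'pool' created earlier):

```c
/* Sketch: each zero-length alloc still consumes one segment from the
 * pool, so n successful allocations deplete it by n * segment size. */
odp_packet_t pkts[100];
int i, n = 0;

for (i = 0; i < 100; i++) {
	pkts[n] = odp_packet_alloc(pool, 0);
	if (pkts[n] == ODP_PACKET_INVALID)
		break; /* pool already depleted by n * segment size */
	/* odp_packet_len(pkts[n]) is 0, but the segment is held */
	n++;
}

for (i = 0; i < n; i++)
	odp_packet_free(pkts[i]);
```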

Regards,
Bala

 Allocating a packet with length 1 and then calling odp_packet_pull_tail(1)
 should create a similar (identical?) 0 length packet.
 odp_packet_pull_head(1) has slightly different semantics (the headroom
 becomes larger, not the tailroom but this is difficult to observe directly).


 
 
  In case of odp_buffer_alloc() the default value of allocation will be
  equal to the size parameter which will be the value given during
  odp_pool_create() function as part of odp_pool_params_t variable.
 
  Yes, but when you create a packet pool, you'll set up the pkt part of
  that
  union, so buf.size could be anything. It's just a coincidence that it's
  pkt.len, odp_buffer_alloc() uses the wrong value there. The same problem
  applies to odp_packet_alloc(), it also uses buf.size

 If the requested pool is of type ODP_POOL_PACKET then implementation
 is supposed to check only the packet union part of the
 odp_pool_param_t as packet pool contains seg_len which is the
 minimum continuous data which the applications wants to access in the
 first segment of the packet as this length might indicate the required
 amount of header information to be available in the first segment for
 better performance from application.
 Also a packet pool might do some additional allocation for
 headroom/tailroom and will be different than a simple buffer pool.
 The same holds true for timer pool type also.
 If the above issue is in the linux-generic implementation then we can fix
 that.

 
  If this value is given as 0 then the HW will chose the default
  segment size supported by the HW.
  This value will be the value of the segment with which the pools are
  getting created by default in the HW.
 
 
  odp_pool_create just copies the parameters, it doesn't modify them to
  reflect what values were used in the end. So odp_buffer_alloc() will
  call
  buffer_alloc() with s.params.buf.size, and if that's 0, totsize =
  pool-s.headroom + size + pool-s.tailroom.

 IMO, if the input parameter value is 0 then the implementation should
 store the s.params.buf.size as the
 default segment size supported by the implementation. There might be
 implementations which cannot support
 segments lesser than a default segment size in the system and it might
 always return the default size if the requested value by the
 application is smaller.
 Again if this is an issue in the linux-generic implementation then we
 can fix the same.

 Regards,
 Bala
 
  Zoli
 
 
 
  Regards,
  Bala
 
  On 26 May 2015 at 16:25, Zoltan Kiss zoltan.k...@linaro.org wrote:
 
  Which value? I've mentioned seg_len and seg_size, but the latter
  actually
  doesn't exist. I guess I meant 'len'. The trouble

Re: [lng-odp] buffer_alloc length parameter

2015-05-26 Thread Bala Manoharan
In the current API the odp_packet_alloc() function always takes len
as input parameter and hence it is not required to support any default
size.

In case of odp_buffer_alloc() the default value of allocation will be
equal to the size parameter which will be the value given during
odp_pool_create() function as part of odp_pool_params_t variable.
If this value is given as 0 then the HW will chose the default
segment size supported by the HW.
This value will be the value of the segment with which the pools are
getting created by default in the HW.

Regards,
Bala

On 26 May 2015 at 16:25, Zoltan Kiss zoltan.k...@linaro.org wrote:
 Which value? I've mentioned seg_len and seg_size, but the latter actually
 doesn't exist. I guess I meant 'len'. The trouble with them, that you can
 set both of them to 0 to ask for the default, so even if you look up the
 params with odp_pool_info(), you can end up with both values 0.
 So how do you figure out what length should be used by odp_buffer_alloc?
 With the current API definition it will probably end up using pkt.len, just
 by accident.
 And the same applies to odp_packet_alloc, although I think it's also a bug
 that when the user requested a 0 length buffer (plus headroom and tailroom),
 it gets one with the default values.
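The length-selection fallback discussed above can be sketched as follows. This is a minimal illustration, not the actual linux-generic code; the struct, field names and DEFAULT_SEG_LEN value are hypothetical stand-ins for the pool internals:

```c
#include <stdint.h>

/* Hypothetical default used when the application asked for "0 = default";
 * the real implementation would pick a HW-supported segment length. */
#define DEFAULT_SEG_LEN 2048u

/* Hypothetical mirror of the pool parameters under discussion. */
typedef struct {
	uint32_t buf_size; /* 0 = let the implementation choose */
	uint32_t headroom;
	uint32_t tailroom;
} pool_params_t;

/* Resolve the total allocation size the way the thread suggests:
 * fall back to a defined default segment length instead of ending up
 * with a 0-byte buffer "by accident". */
static uint32_t total_alloc_size(const pool_params_t *p)
{
	uint32_t len = p->buf_size ? p->buf_size : DEFAULT_SEG_LEN;

	return p->headroom + len + p->tailroom;
}
```

With such a rule, odp_pool_info() could also report the resolved value instead of echoing back the 0 the application passed in.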

 Zoli

 On 23/05/15 03:00, Bill Fischofer wrote:

 In linux-generic that value represents the default allocation length.

 On Friday, May 22, 2015, Zoltan Kiss zoltan.k...@linaro.org
 mailto:zoltan.k...@linaro.org wrote:

 Hi,

 While fixing up things in the DPDK implementation I've found that
 linux-generic might have some troubles too. odp_buffer_alloc() and
 odp_packet_alloc() uses
  odp_pool_to_entry(pool_hdl)->s.params.buf.size, but if it's a packet
 pool (which is always true in case of odp_packet_alloc(), and might
 be true with odp_buffer_alloc()).
 My first idea would be to use s.params.pkt.seg_len in that case, but
 it might be 0. Maybe s.seg_size would be the right value?
 If anyone has time to come up with a patch to fix this, feel free, I
 probably won't have time to work on this in the near future.

 Zoli
 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp



Re: [lng-odp] [PATCH] example: classifier: check for null token

2015-05-15 Thread Bala Manoharan
Reviewed-by: Balasubramanian Manoharan bala.manoha...@linaro.org

On 15 May 2015 at 02:53, Mike Holmes mike.hol...@linaro.org wrote:
 fixes https://bugs.linaro.org/show_bug.cgi?id=1534

 Signed-off-by: Mike Holmes mike.hol...@linaro.org
 ---

 The case  sudo ./example/classifier/odp_classifier -i eth0 -m 0 -p 
 now returns
 odp_classifier.c:599:parse_pmr_policy():Invalid ODP_PMR_TERM string

  example/classifier/odp_classifier.c | 3 +++
  1 file changed, 3 insertions(+)

 diff --git a/example/classifier/odp_classifier.c 
 b/example/classifier/odp_classifier.c
 index d78eb7b..3cc6738 100644
 --- a/example/classifier/odp_classifier.c
 +++ b/example/classifier/odp_classifier.c
 @@ -559,6 +559,9 @@ static void swap_pkt_addrs(odp_packet_t pkt_tbl[], 
 unsigned len)

  static int convert_str_to_pmr_enum(char *token, odp_pmr_term_e *term)
  {
 +   if (NULL == token)
 +   return -1;
 +
 if (0 == strcasecmp(token, "ODP_PMR_SIP_ADDR")) {
 *term = ODP_PMR_SIP_ADDR;
 return 0;
 --
 2.1.4



Re: [lng-odp] Query regarding sequence number update in IPSEC application

2015-05-13 Thread Bala Manoharan
My understanding is that atomic queues are expected to maintain
the ingress order, not the order in which the packets were
enqueued into the queue. So in the above example atomic queues will
preserve the ingress order even if the packets are submitted by
different cores in a different order.

Regards,
Bala

On 13 May 2015 at 15:21, Agarwal Nikhil Agarwal
nikhil.agar...@freescale.com wrote:


  My current concern is w.r.t the ipsec example use case only. Usage of
  atomic queues may be demonstrated by other specific examples.

  Also, even if we use atomic queues for sequence numbers, they cannot
  guarantee ingress-order preservation if incoming packets are picked up by
  different cores and submitted to the sequence-number queue in a different order.



 Regards

 Nikhil



 From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Bala
 Manoharan
 Sent: Tuesday, May 12, 2015 5:58 PM
 To: Ola Liljedahl


 Cc: lng-odp@lists.linaro.org
 Subject: Re: [lng-odp] Query regarding sequence number update in IPSEC
 application



 IMO, using atomic variable instead of atomic queues will work for this Ipsec
 example use-case as in this case the critical section is required only for
 updating the sequence number but in a generic use-case the atomicity should
 be protected over a region of code which the application wants to be
 executed in the ingress-order.



 If HWs are capable we should add additional APIs for scheduler based locking
 which can be used by application incase the critical section is small enough
 that going through scheduler will cause a performance impact.



 Regards,

 Bala



 On 12 May 2015 at 17:31, Ola Liljedahl ola.liljed...@linaro.org wrote:

 Yes the seqno counter is a shared resource and the atomic_fetch_and_add will
 eventually become a bottleneck for per-SA throughput. But one could hope
 that it should scale better than using atomic queues although this depends
 on the actual microarchitecture.



 Freescale PPC has decorated storage, can't it be used for
 atomic_fetch_and_add()?

 ARMv8.1 has support for far atomics which are supposed to scale better
 (again depends on the actual implementation).



 On 12 May 2015 at 13:47, Alexandru Badicioiu
 alexandru.badici...@linaro.org wrote:

 Atomic increment performance gets worse by increasing the number of cores -

 see
 https://www.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.2015.01.31a.pdf
 - chapter 5) for some measurements on a conventional Intel machine.

 It may be possible for this overhead to become bigger than the one
 associated with the atomic queue.



 Alex



 On 12 May 2015 at 14:20, Ola Liljedahl ola.liljed...@linaro.org wrote:

 I think it should be OK to use ordered queues instead of atomic queues, e.g.
 for sequence number allocation (which will need an atomic variable to hold
 the seqno counter). Packet ingress order will be maintained but might not
always correspond to sequence number order. This is not a problem, as the
sliding window in the replay protection will take care of that; the sliding
 window could be hundreds of entries (sequence numbers) large (each will only
 take one bit). Packet ingress order is the important characteristic that
 must be maintained. The IPsec sequence number is not used for packet
 ordering and order restoration, it is only used for replay protection.
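The sliding-window replay check described above can be sketched as follows. This is a minimal RFC 4303-style illustration with a 64-entry window (real implementations may use hundreds of bits, as the thread notes); it is not ODP or IPsec stack code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal anti-replay sliding window: a bitmap of recently seen
 * sequence numbers tolerates packets arriving out of seqno order. */
typedef struct {
	uint64_t top;    /* highest sequence number seen so far */
	uint64_t bitmap; /* bit i set => (top - i) already seen */
} replay_win_t;

/* Returns true if 'seq' is acceptable (and records it), false if it is
 * a replay or falls outside (older than) the window. */
static bool replay_check_update(replay_win_t *w, uint64_t seq)
{
	if (seq == 0)
		return false;               /* seqno 0 is never valid */

	if (seq > w->top) {                 /* window slides forward */
		uint64_t shift = seq - w->top;

		w->bitmap = (shift >= 64) ? 0 : w->bitmap << shift;
		w->bitmap |= 1;             /* mark 'seq' itself */
		w->top = seq;
		return true;
	}

	uint64_t off = w->top - seq;

	if (off >= 64)
		return false;               /* too old: outside the window */
	if (w->bitmap & (1ULL << off))
		return false;               /* replayed */
	w->bitmap |= 1ULL << off;
	return true;
}
```

Because misordered-but-fresh sequence numbers land inside the window, packets whose seqno order does not match egress order are still accepted, which is why ingress order is the property that matters.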



 Someone with a platform which supports both ordered and atomic scheduling
 could benchmark both designs and see how performance scales when using
 ordered queues (and that atomic fetch_and_add) for some relevant traffic
 patterns.
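The per-SA sequence-number allocation with an atomic fetch-and-add could look like the sketch below, written with C11 atomics for illustration (ODP has its own odp_atomic API; this is not it). The counter update is safe without a lock, but as discussed, the numbers handed out need not match ingress order:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical per-SA state: an atomic counter holds the next IPsec
 * sequence number. Packets scheduled from an ordered queue may reach
 * this point on any core. */
typedef struct {
	atomic_uint_fast64_t seqno;
} sa_ctx_t;

/* Allocate the next sequence number for this SA. Relaxed ordering
 * suffices: only the counter itself is shared, no other data is
 * published through it. */
static uint64_t alloc_seqno(sa_ctx_t *sa)
{
	return atomic_fetch_add_explicit(&sa->seqno, 1,
					 memory_order_relaxed) + 1;
}
```

Whether this scales better than an atomic queue is exactly the benchmark question raised above: the fetch-and-add serializes on the counter's cache line, while the atomic queue serializes in the scheduler.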



 On 8 May 2015 at 13:53, Bill Fischofer bill.fischo...@linaro.org wrote:

 Jerrin,



 Can you propose such a set of APIs for further discussion?  This would be
 good to discuss at the Tuesday call.



 Thanks.



 Bill



 On Fri, May 8, 2015 at 12:07 AM, Jacob, Jerin
 jerin.ja...@caviumnetworks.com wrote:


 I agree with Ola here on preserving the ingress order.
 However, I have experienced same performance issue as Nikhil pointed out
 (atomic queues have too much overhead for short critical section)

 I am not sure about any other HW but Cavium has support for
 introducing the critical section while maintain the ingress order as a HW
 scheduler feature.

 IMO, if such support is available in other HW then
 odp_schedule_ordered_lock()/unlock()
 kind of API will solve the performance issue for the need for short critical
 section in ordered flow.

 /Jerin.


 From: lng-odp lng-odp-boun...@lists.linaro.org on behalf of Ola Liljedahl
 ola.liljed...@linaro.org
 Sent: Thursday, May 7, 2015 9:06 PM
 To: nikhil.agar...@freescale.com
 Cc: lng-odp@lists.linaro.org
 Subject: Re: [lng-odp] Query regarding sequence number update in IPSEC
 application



 Using atomic queues will preserve the ingress order when allocating and
 assigning the sequence number. Also you don't need to use an expensive
 atomic operation for updating the sequence number as the atomic queue and
 scheduling

Re: [lng-odp] Classification: APIs deferred from ODP v1.0

2015-05-13 Thread Bala Manoharan
Hi,

These APIs are in the list of changes required for classification and
will be introduced  in the next version of ODP.

Regards,
Bala

On 13 May 2015 at 15:12, Agrawal Hemant hem...@freescale.com wrote:
 HI,
 What is the plan to re-introduce the flow signature based 
 distribution APIs in next version of ODP?

 Regards,
 Hemant

 -Original Message-
 From: lng-odp-boun...@lists.linaro.org 
 [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Balasubramanian 
 Manoharan
 Sent: Wednesday, November 26, 2014 3:29 PM
 To: lng-odp@lists.linaro.org
 Subject: [lng-odp] [PATCH v2] Classification: APIs deferred from ODP v1.0

  This patch removes Classification APIs which have been deferred from ODP v1.0.
  The following is the list of the deferred APIs:
 * odp_cos_set_queue_group
 * odp_cos_set_pool
 * odp_cos_set_headroom
 * odp_cos_flow_set
 * odp_cos_flow_is_set
 * odp_cos_class_flow_signature
 * odp_cos_port_flow_signature

 Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
 ---
  V2: This patch is modified as an independently compilable unit
 .../linux-generic/include/api/odp_classification.h | 106 +
  platform/linux-generic/odp_classification.c|  48 +-
  2 files changed, 8 insertions(+), 146 deletions(-)

 diff --git a/platform/linux-generic/include/api/odp_classification.h 
 b/platform/linux-generic/include/api/odp_classification.h
 index cc5d84a..64ad73f 100644
 --- a/platform/linux-generic/include/api/odp_classification.h
 +++ b/platform/linux-generic/include/api/odp_classification.h
 @@ -48,6 +48,9 @@ typedef uint32_t odp_flowsig_t;  */
  #define ODP_COS_INVALID((odp_cos_t)~0)

  +/** Maximum ClassOfService name length in chars */
  +#define ODP_COS_NAME_LEN 32
  +
  /**
   * Class-of-service packet drop policies
   */
 @@ -110,33 +113,6 @@ int odp_cos_destroy(odp_cos_t cos_id);  int 
 odp_cos_set_queue(odp_cos_t cos_id, odp_queue_t queue_id);

  /**
 - * Assign a homogenous queue-group to a class-of-service.
 - *
 - * @param[in]  cos_id  class-of-service instance
 - * @param[in]  queue_group_id  Identifier of the queue group to receive 
 packets
 - * associated with this class of service.
 - *
 - * @return 0 on success, -1 on error.
 - */
 -int odp_cos_set_queue_group(odp_cos_t cos_id,
 -   odp_queue_group_t queue_group_id);
 -
 -/**
 - * Assign packet buffer pool for specific class-of-service
 - *
 - * @param[in]  cos_id  class-of-service instance.
 - * @param[in]  pool_id Buffer pool identifier where all packet 
 buffers
 - * will be sourced to store packet that
 - * belong to this class of service.
 - *
 - * @return 0 on success, -1 on error.
 - *
 - * @note Optional.
 - */
 -int odp_cos_set_pool(odp_cos_t cos_id, odp_buffer_pool_t pool_id);
 -
 -
 -/**
   * Assign packet drop policy for specific class-of-service
   *
   * @param[in]  cos_id  class-of-service instance.
 @@ -203,21 +179,6 @@ int odp_pktio_set_skip(odp_pktio_t pktio_in, size_t 
 offset);  int odp_pktio_set_headroom(odp_pktio_t pktio_in, size_t headroom);

  /**
 - * Specify per-cos buffer headroom
 - *
 - * @param[in]  cos_id  Class-of-service instance
 - * @param[in]  headroomNumber of bytes of space preceding packet
 - * data to reserve for use as headroom.
 - * Must not exceed the implementation
 - * defined ODP_PACKET_MAX_HEADROOM.
 - *
 - * @return 0 on success, -1 on error.
 - *
 - * @note Optional.
 - */
 -int odp_cos_set_headroom(odp_cos_t cos_id, size_t headroom);
 -
 -/**
   * Request to override per-port class of service
   * based on Layer-2 priority field if present.
   *
 @@ -263,60 +224,6 @@ int odp_cos_with_l3_qos(odp_pktio_t pktio_in,  typedef 
 uint16_t odp_cos_flow_set_t;

  /**
 - * Set a member of the flow signature fields data set
 - */
 -static inline
 -odp_cos_flow_set_t odp_cos_flow_set(odp_cos_flow_set_t set,
 -   odp_cos_hdr_flow_fields_e field)
 -{
  -   return set | (1U << field);
 -}
 -
 -/**
 - * Test a member of the flow signature fields data set
 - */
 -static inline bool
 -odp_cos_flow_is_set(odp_cos_flow_set_t set, odp_cos_hdr_flow_fields_e field) 
 -{
  -   return (set & (1U << field)) != 0;
 -}
 -
 -/**
 - * Set up set of headers used to calculate a flow signature
 - * based on class-of-service.
 - *
 - * @param[in]  cos_id  Class of service instance identifier
 - * @param[in]  req_data_setRequested data-set for
 - * flow signature calculation
 - *
 - * @return Data-set that was successfully applied.
 - * All-zeros data set indicates a failure to
 - * assign any of the requested fields,
 - 

Re: [lng-odp] Query regarding sequence number update in IPSEC application

2015-05-12 Thread Bala Manoharan
On 12 May 2015 at 18:06, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 12 May 2015 at 14:28, Bala Manoharan bala.manoha...@linaro.org wrote:

 IMO, using atomic variable instead of atomic queues will work for this Ipsec 
 example use-case as in this case the critical section is required only for 
 updating the sequence number but in a generic use-case the atomicity should 
 be protected over a region of code which the application wants to be 
 executed in the ingress-order.

 Isn't this the atomic queues/scheduling of ODP (and several SoC's)?

Yes, but atomic queue switching currently happens only through the
scheduler, and it involves scheduler overhead.




 If HWs are capable we should add additional APIs for scheduler based locking 
 which can be used by application incase the critical section is small enough 
 that going through scheduler will cause a performance impact.

 Isn't there a release atomic scheduling function? A CPU processing stage 
 can start out atomic (HW provided mutual exclusion) but convert into 
 ordered scheduling when the mutual exclusion is no longer necessary. Or 
 this missing from the ODP API today?

Release atomic scheduling only covers the Atomic -> Ordered
scenario; I believe the scenario here dictates an
Ordered -> Atomic -> Ordered sequence. Currently this sequence
goes through the scheduler, and the intent is to avoid scheduler
overhead.

Regards,
Bala




 Regards,
 Bala

 On 12 May 2015 at 17:31, Ola Liljedahl ola.liljed...@linaro.org wrote:

 Yes the seqno counter is a shared resource and the atomic_fetch_and_add 
 will eventually become a bottleneck for per-SA throughput. But one could 
 hope that it should scale better than using atomic queues although this 
 depends on the actual microarchitecture.

 Freescale PPC has decorated storage, can't it be used for 
 atomic_fetch_and_add()?
 ARMv8.1 has support for far atomics which are supposed to scale better 
 (again depends on the actual implementation).

 On 12 May 2015 at 13:47, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:

 Atomic increment performance gets worse by increasing the number of cores -
 see 
 https://www.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.2015.01.31a.pdf
  - chapter 5) for some measurements on a conventional Intel machine.
 It may be possible for this overhead to become bigger than the one 
 associated with the atomic queue.

 Alex

 On 12 May 2015 at 14:20, Ola Liljedahl ola.liljed...@linaro.org wrote:

 I think it should be OK to use ordered queues instead of atomic queues, 
 e.g. for sequence number allocation (which will need an atomic variable 
 to hold the seqno counter). Packet ingress order will be maintained but 
  might not always correspond to sequence number order. This is not a
  problem, as the sliding window in the replay protection will take care of
  that; the sliding window could be hundreds of entries (sequence numbers)
 large (each will only take one bit). Packet ingress order is the 
 important characteristic that must be maintained. The IPsec sequence 
 number is not used for packet ordering and order restoration, it is only 
 used for replay protection.

 Someone with a platform which supports both ordered and atomic scheduling 
 could benchmark both designs and see how performance scales when using 
 ordered queues (and that atomic fetch_and_add) for some relevant traffic 
 patterns.

 On 8 May 2015 at 13:53, Bill Fischofer bill.fischo...@linaro.org wrote:

 Jerrin,

 Can you propose such a set of APIs for further discussion?  This would 
 be good to discuss at the Tuesday call.

 Thanks.

 Bill

 On Fri, May 8, 2015 at 12:07 AM, Jacob, Jerin 
 jerin.ja...@caviumnetworks.com wrote:


 I agree with Ola here on preserving the ingress order.
 However, I have experienced same performance issue as Nikhil pointed out
 (atomic queues have too much overhead for short critical section)

 I am not sure about any other HW but Cavium has support for
 introducing the critical section while maintain the ingress order as a 
 HW scheduler feature.

 IMO, if such support is available in other HW then 
 odp_schedule_ordered_lock()/unlock()
 kind of API will solve the performance issue for the need for short 
 critical section in ordered flow.

 /Jerin.


 From: lng-odp lng-odp-boun...@lists.linaro.org on behalf of Ola 
 Liljedahl ola.liljed...@linaro.org
 Sent: Thursday, May 7, 2015 9:06 PM
 To: nikhil.agar...@freescale.com
 Cc: lng-odp@lists.linaro.org
 Subject: Re: [lng-odp] Query regarding sequence number update in IPSEC 
 application


 Using atomic queues will preserve the ingress order when allocating and 
 assigning the sequence number. Also you don't need to use an expensive 
 atomic operation for updating the sequence number as the atomic queue 
 and scheduling will provide mutual  exclusion.


 If the packets that require a sequence number came from parallel or 
 ordered queues, there would be no guarantee that the sequence numbers 
 would be allocated

Re: [lng-odp] Query regarding sequence number update in IPSEC application

2015-05-12 Thread Bala Manoharan
IMO, using an atomic variable instead of atomic queues will work for this
IPsec example use-case, as here the critical section is required only for
updating the sequence number; in a generic use-case, however, the
atomicity should be protected over a region of code which the application
wants to be executed in ingress order.
If HWs are capable, we should add additional APIs for scheduler-based
locking which can be used by the application in case the critical section is
small enough that going through the scheduler would cause a performance impact.

Regards,
Bala

On 12 May 2015 at 17:31, Ola Liljedahl ola.liljed...@linaro.org wrote:

 Yes the seqno counter is a shared resource and the atomic_fetch_and_add
 will eventually become a bottleneck for per-SA throughput. But one could
 hope that it should scale better than using atomic queues although this
 depends on the actual microarchitecture.

 Freescale PPC has decorated storage, can't it be used for
 atomic_fetch_and_add()?
 ARMv8.1 has support for far atomics which are supposed to scale better
 (again depends on the actual implementation).

 On 12 May 2015 at 13:47, Alexandru Badicioiu 
 alexandru.badici...@linaro.org wrote:

 Atomic increment performance gets worse by increasing the number of cores
 -
 see
 https://www.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.2015.01.31a.pdf
 - chapter 5) for some measurements on a conventional Intel machine.
 It may be possible for this overhead to become bigger than the one
 associated with the atomic queue.

 Alex

 On 12 May 2015 at 14:20, Ola Liljedahl ola.liljed...@linaro.org wrote:

 I think it should be OK to use ordered queues instead of atomic queues,
 e.g. for sequence number allocation (which will need an atomic variable to
 hold the seqno counter). Packet ingress order will be maintained but might
  not always correspond to sequence number order. This is not a problem, as the
  sliding window in the replay protection will take care of that; the sliding
 window could be hundreds of entries (sequence numbers) large (each will
 only take one bit). Packet ingress order is the important characteristic
 that must be maintained. The IPsec sequence number is not used for packet
 ordering and order restoration, it is only used for replay protection.

 Someone with a platform which supports both ordered and atomic
 scheduling could benchmark both designs and see how performance scales when
 using ordered queues (and that atomic fetch_and_add) for some relevant
 traffic patterns.

 On 8 May 2015 at 13:53, Bill Fischofer bill.fischo...@linaro.org
 wrote:

 Jerrin,

 Can you propose such a set of APIs for further discussion?  This would
 be good to discuss at the Tuesday call.

 Thanks.

 Bill

 On Fri, May 8, 2015 at 12:07 AM, Jacob, Jerin 
 jerin.ja...@caviumnetworks.com wrote:


 I agree with Ola here on preserving the ingress order.
 However, I have experienced same performance issue as Nikhil pointed
 out
 (atomic queues have too much overhead for short critical section)

 I am not sure about any other HW but Cavium has support for
 introducing the critical section while maintain the ingress order as a
 HW scheduler feature.

 IMO, if such support is available in other HW then
 odp_schedule_ordered_lock()/unlock()
 kind of API will solve the performance issue for the need for short
 critical section in ordered flow.

 /Jerin.


 From: lng-odp lng-odp-boun...@lists.linaro.org on behalf of Ola
 Liljedahl ola.liljed...@linaro.org
 Sent: Thursday, May 7, 2015 9:06 PM
 To: nikhil.agar...@freescale.com
 Cc: lng-odp@lists.linaro.org
 Subject: Re: [lng-odp] Query regarding sequence number update in IPSEC
 application


 Using atomic queues will preserve the ingress order when allocating
 and assigning the sequence number. Also you don't need to use an expensive
 atomic operation for updating the sequence number as the atomic queue and
 scheduling will provide mutual  exclusion.


 If the packets that require a sequence number came from parallel or
 ordered queues, there would be no guarantee that the sequence numbers 
 would
 be allocated in packet (ingress) order. Just using an atomic operation
 (e.g. fetch_and_add or similar) only  guarantees proper update of the
 sequence number variable, not any specific ordering.


 If you are ready to trade absolute correctness for performance, you
 could use ordered or may even parallel (questionable for other reasons)
 queues and then allocate the sequence number using an atomic 
 fetch_and_add.
 Sometimes packets  egress order will then not match the sequence number
 order (for a flow/SA). For IPSec, this might affect the replay window 
 check
  update at the receiving end but as the replay protection uses a sliding
 window of sequence numbers (to handle misordered packets),  there might 
 not
 be any adverse effects in practice. The most important aspect is probably
 to preserve original packet order.


 -- Ola


 On 6 May 2015 at 11:29,  nikhil.agar...@freescale.com 
 

Re: [lng-odp] [PATCH 1/3] example: classifier: remove extra local init

2015-05-07 Thread Bala Manoharan
Reviewed-by: Balasubramanian Manoharan bala.manoha...@linaro.org

IMO, we can add a note to the odph_linux_pthread_create()
header file documentation stating that this function is expected to call
odp_init_local() for each thread it creates. The current documentation only
says the following:

/**
 * Creates and launches pthreads
 *
 * Creates, pins and launches threads to separate CPU's based on the
cpumask.
 *
 * @param thread_tblThread table
 * @param mask  CPU mask
 * @param start_routine Thread start function
 * @param arg   Thread argument
 */
void odph_linux_pthread_create(odph_linux_pthread_t *thread_tbl,
   const odp_cpumask_t *mask,
   void *(*start_routine) (void *), void *arg);

Regards,
Bala

On 7 May 2015 at 17:04, Petri Savolainen petri.savolai...@nokia.com wrote:

  Worker threads are created with odph_linux_pthread_create()
  which calls odp_init_local() before entering the function.

 Signed-off-by: Petri Savolainen petri.savolai...@nokia.com
 ---
  example/classifier/odp_classifier.c | 7 ---
  1 file changed, 7 deletions(-)

 diff --git a/example/classifier/odp_classifier.c
 b/example/classifier/odp_classifier.c
 index d78eb7b..35d9684 100644
 --- a/example/classifier/odp_classifier.c
 +++ b/example/classifier/odp_classifier.c
 @@ -249,13 +249,6 @@ static void *pktio_receive_thread(void *arg)
 appl_args_t *appl = (appl_args_t *)arg;
 global_statistics *stats;

 -
 -   /* Init this thread */
 -   if (odp_init_local()) {
  -   EXAMPLE_ERR("ODP thread local init failed.\n");
 -   exit(EXIT_FAILURE);
 -   }
 -
 /* Loop packets */
 for (;;) {
 odp_pktio_t pktio_tmp;
 --
 2.4.0



Re: [lng-odp] Query on classification's rules ordering

2015-05-06 Thread Bala Manoharan
Hi,

A CoS configured using a PMR takes precedence over a CoS configured using L2
and L3 priority/QoS values.

Between the L2 and L3 values, the odp_cos_with_l3_qos() function takes a
boolean L3_precedence flag which indicates whether the L2 or L3 priority
value takes precedence.

Hope this helps,
Bala
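As an illustration only (this helper is hypothetical and not part of the ODP API), the precedence described above could be expressed as:

```c
#include <stdbool.h>

/* Which rule supplied the packet's class of service. */
typedef enum { COS_NONE, COS_PMR, COS_L2, COS_L3 } cos_src_t;

/* Hypothetical CoS resolution for a packet matching several rules:
 * a PMR match always wins; between L2 and L3 QoS the l3_precedence
 * flag (from odp_cos_with_l3_qos()) decides. */
static cos_src_t resolve_cos(bool pmr_match, bool l2_match, bool l3_match,
			     bool l3_precedence)
{
	if (pmr_match)
		return COS_PMR;                /* PMR always wins */
	if (l3_match && (l3_precedence || !l2_match))
		return COS_L3;
	if (l2_match)
		return COS_L2;
	return COS_NONE;                       /* fall back to default CoS */
}
```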

On 6 May 2015 at 15:59, Sunil Kumar Kori sunil.k...@freescale.com wrote:

  Hello,



 I was looking into the ODP v1.0 classification APIs.  I have doubt
 regarding the order of rules to be applied on a single pktio. Following is
 the scenario:

 1.   Pktio is configured with l2 priority using
 *odp_cos_with_l2_priority*

 2.   Pktio is configured with l3 qos using *odp_cos_with_l3_qos*

 3.   Pktio is configured with pmr using *odp_pktio_pmr_cos*

 Assume a packet arrives which satisfy all the 3 rules (as given above)
 then what should be the expected order/precedence of rules?



 Regards

 Sunil Kumar





Re: [lng-odp] [PATCH API-NEXT 4/5] api: packet_io: added parse mode

2015-04-29 Thread Bala Manoharan
Hi,

Sorry I clicked send way too early in the previous mail :)

We can optimize the odp_packet_parse_flags_t struct to support
both layered parsing and individual parsing in the following way.

I believe this might be useful for both application and implementation.

typedef struct odp_packet_parse_flags_t {
union {
uint8_t l2_all;
struct {
   uint8_t eth:1;   /** See odp_packet_has_eth() */
   uint8_t jumbo:1; /** See odp_packet_has_jumbo() */
   uint8_t vlan:1;  /** See odp_packet_has_vlan() */
   uint8_t vlan_qinq:1; /** See odp_packet_has_vlan_qinq() */
};
};
union {
uint8_t l3_all;
struct {
uint8_t arp:1;   /** See odp_packet_has_arp() */
uint8_t ipv4:1;  /** See odp_packet_has_ipv4() */
uint8_t ipv6:1;  /** See odp_packet_has_ipv6() */
uint8_t ipfrag:1;/** See odp_packet_has_ipfrag() */
uint8_t ipopt:1; /** See odp_packet_has_ipopt() */
uint8_t ipsec:1; /** See odp_packet_has_ipsec() */
};
};
union {
uint8_t l4_all;
struct {
uint8_t udp:1;   /** See odp_packet_has_udp() */
uint8_t tcp:1;   /** See odp_packet_has_tcp() */
uint8_t sctp:1;  /** See odp_packet_has_sctp() */
uint8_t icmp:1;  /** See odp_packet_has_icmp() */
};
};
} odp_packet_parse_flags_t;
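The union layout lets an application test or clear a whole layer's flags at once rather than touching each bit. A reduced, self-contained sketch of just the L3 group (the second union's member is taken to be l3_all, since it groups the L3 flags):

```c
#include <stdint.h>

/* Reduced version of the proposed parse-flags struct: the anonymous
 * union overlays a per-layer byte (l3_all) on the individual protocol
 * bits, so a whole layer can be enabled, tested or cleared at once. */
typedef struct {
	union {
		uint8_t l3_all;
		struct {
			uint8_t arp:1;
			uint8_t ipv4:1;
			uint8_t ipv6:1;
			uint8_t ipfrag:1;
			uint8_t ipopt:1;
			uint8_t ipsec:1;
		};
	};
} parse_l3_flags_t;
```

An implementation can then check `flags.l3_all` to decide whether it may skip L3 parsing entirely, which is the optimization hinted at above.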

Regards,
Bala

On 29 April 2015 at 12:46, Bala Manoharan bala.manoha...@linaro.org wrote:
 Hi,

 We can optimize the odp_packet_parse_flags_t in the following way to
 handle the layered approach for parsing

 +typedef struct odp_packet_parse_flags_t {

 +   uint32_t eth:1;   /** See odp_packet_has_eth() */
 +   uint32_t jumbo:1; /** See odp_packet_has_jumbo() */
 +   uint32_t vlan:1;  /** See odp_packet_has_vlan() */
 +   uint32_t vlan_qinq:1; /** See odp_packet_has_vlan_qinq() */
 +   uint32_t arp:1;   /** See odp_packet_has_arp() */
 +   uint32_t ipv4:1;  /** See odp_packet_has_ipv4() */
 +   uint32_t ipv6:1;  /** See odp_packet_has_ipv6() */
 +   uint32_t ipfrag:1;/** See odp_packet_has_ipfrag() */
 +   uint32_t ipopt:1; /** See odp_packet_has_ipopt() */
 +   uint32_t ipsec:1; /** See odp_packet_has_ipsec() */
 +   uint32_t udp:1;   /** See odp_packet_has_udp() */
 +   uint32_t tcp:1;   /** See odp_packet_has_tcp() */
 +   uint32_t sctp:1;  /** See odp_packet_has_sctp() */
 +   uint32_t icmp:1;  /** See odp_packet_has_icmp() */
 +
 +   uint32_t _reserved1:18; /** Reserved. Do not use. */
 +} odp_packet_parse_flags_t;

 On 29 April 2015 at 12:15, Savolainen, Petri (Nokia - FI/Espoo)
 petri.savolai...@nokia.com wrote:
 It's (v2) on the list (since last Thu):

 [lng-odp] [API-NEXT PATCH v2 4/5] api: packet_io: added parse mode


 -Petri


 -Original Message-
 From: ext Zoltan Kiss [mailto:zoltan.k...@linaro.org]
 Sent: Tuesday, April 28, 2015 9:17 PM
 To: Savolainen, Petri (Nokia - FI/Espoo); lng-odp@lists.linaro.org
 Subject: Re: [lng-odp] [PATCH API-NEXT 4/5] api: packet_io: added parse
 mode



 On 28/04/15 08:09, Savolainen, Petri (Nokia - FI/Espoo) wrote:
  Hi Zoltan,
 
  You should implement the latest version of the patch, which has only
 ALL/NONE defined. We can leave SELECTED for later.
 Ok, but where is that version? I could only find this one.

 
  Briefly about SELECTED. The idea is that the application lists all
 odp_packet_has_xxx() calls that it will call during packet processing.
 Implementation can use that information to optimize parser functionality,
 if it can. So, application is not telling to implementation what to do or
 how to do it, but what information application is expecting from packets.
 If application lies (indicates that it will not call xxx, but still calls
 it), results are undefined.
 
  -Petri
 
 
  -Original Message-
  From: ext Zoltan Kiss [mailto:zoltan.k...@linaro.org]
  Sent: Monday, April 27, 2015 8:29 PM
  To: Savolainen, Petri (Nokia - FI/Espoo); lng-odp@lists.linaro.org
  Subject: Re: [lng-odp] [PATCH API-NEXT 4/5] api: packet_io: added parse
  mode
 
 
 
  On 20/04/15 13:10, Petri Savolainen wrote:
  Application can indicate which packet parsing results it is
  interested in (all, none or selected).
 
  Signed-off-by: Petri Savolainen petri.savolai...@nokia.com
  ---
 include/odp/api/packet_flags.h | 26 ++
 include/odp/api/packet_io.h| 19 +++
 2 files changed, 45 insertions(+)
 
  diff --git a/include/odp/api/packet_flags.h
  b/include/odp/api/packet_flags.h
  index bfbcc94..9444fdc 100644
  --- a/include/odp/api/packet_flags.h
  +++ b/include/odp/api/packet_flags.h
  @@ -26,6 +26,32 @@ extern C {
  *  @{
  */
 
  +
  +/**
  + * Packet input parsing flags
  + *
  + * Each flag represents

[lng-odp] [PATCHv1] example: classifier: remove odp_pmr_create_range() support

2015-04-29 Thread bala . manoharan
From: Balasubramanian Manoharan bala.manoha...@linaro.org

This patch removes support for odp_pmr_create_range() function 
in the classifier example application.

Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
---
 example/classifier/odp_classifier.c | 106 ++--
 1 file changed, 29 insertions(+), 77 deletions(-)

diff --git a/example/classifier/odp_classifier.c 
b/example/classifier/odp_classifier.c
index cf53565..d78eb7b 100644
--- a/example/classifier/odp_classifier.c
+++ b/example/classifier/odp_classifier.c
@@ -52,22 +52,15 @@ typedef struct {
odp_cos_t cos;  /** Associated cos handle */
odp_pmr_t pmr;  /** Associated pmr handle */
odp_atomic_u64_t packet_count;  /** count of received packets */
-   odp_pmr_term_e term;/** odp pmr term value */
char queue_name[ODP_QUEUE_NAME_LEN];/** queue name */
-   odp_pmr_match_type_e match_type;/** pmr match type */
int val_sz; /** size of the pmr term */
-   union {
-   struct {
-   uint32_t val;   /** pmr term value */
-   uint32_t mask;  /** pmr term mask */
-   } match;
-   struct  {
-   uint32_t val1;  /** pmr term start range */
-   uint32_t val2;  /** pmr term end range */
-   } range;
-   };
-   char value1[DISPLAY_STRING_LEN];/** Display string1 */
-   char value2[DISPLAY_STRING_LEN];/** Display string2 */
+   struct {
+   odp_pmr_term_e term;/** odp pmr term value */
+   uint32_t val;   /** pmr term value */
+   uint32_t mask;  /** pmr term mask */
+   } rule;
+   char value[DISPLAY_STRING_LEN]; /** Display string for value */
+   char mask[DISPLAY_STRING_LEN];  /** Display string for mask */
 } global_statistics;
 
 typedef struct {
@@ -114,18 +107,14 @@ void print_cls_statistics(appl_args_t *args)
	printf("\n");
	printf("CONFIGURATION\n");
	printf("\n");
-	printf("QUEUE\tMATCH\tVALUE1\t\tVALUE2\n");
+	printf("QUEUE\tVALUE\t\tMASK\n");
	for (i = 0; i < 40; i++)
		printf("-");
	printf("\n");
	for (i = 0; i < args->policy_count - 1; i++) {
		printf("%s\t", args->stats[i].queue_name);
-		if (args->stats[i].match_type == ODP_PMR_MASK)
-			printf("MATCH\t");
-		else
-			printf("RANGE\t");
-		printf("%s\t", args->stats[i].value1);
-		printf("%s\n", args->stats[i].value2);
+		printf("%s\t", args->stats[i].value);
+		printf("%s\n", args->stats[i].mask);
	}
	printf("\n");
	printf("RECEIVED PACKETS\n");
@@ -357,17 +346,10 @@ static void configure_cos_queue(odp_pktio_t pktio, 
appl_args_t *args)
	sprintf(cos_name, "CoS%s", stats->queue_name);
	stats->cos = odp_cos_create(cos_name);

-	if (stats->match_type == ODP_PMR_MASK) {
-		stats->pmr = odp_pmr_create_match(stats->term,
-				&stats->match.val,
-				&stats->match.mask,
-				stats->val_sz);
-	} else {
-		stats->pmr = odp_pmr_create_range(stats->term,
-				&stats->range.val1,
-				&stats->range.val2,
-				stats->val_sz);
-	}
+	stats->pmr = odp_pmr_create(stats->rule.term,
+			&stats->rule.val,
+			&stats->rule.mask,
+			stats->val_sz);
qparam.sched.prio = i % odp_schedule_num_prio();
qparam.sched.sync = ODP_SCHED_SYNC_NONE;
qparam.sched.group = ODP_SCHED_GROUP_ALL;
@@ -614,39 +596,18 @@ static int parse_pmr_policy(appl_args_t *appl_args, char 
*argv[], char *optarg)
		EXAMPLE_ERR("Invalid ODP_PMR_TERM string\n");
		exit(EXIT_FAILURE);
	}
-	stats[policy_count].term = term;
-	/* PMR RANGE vs MATCH */
-	token = strtok(NULL, ":");
-	if (0 == strcasecmp(token, "range")) {
-		stats[policy_count].match_type = ODP_PMR_RANGE;
-	} else if (0 == strcasecmp(token, "match")) {
-		stats[policy_count].match_type = ODP_PMR_MASK;
-	} else {
-		usage(argv[0]);
-		exit(EXIT_FAILURE);
-	}
+	stats[policy_count].rule.term = term;

	/* PMR value */
	switch (term) {
	case ODP_PMR_SIP_ADDR:
-		if (stats[policy_count].match_type == ODP_PMR_MASK) {
-			token = strtok(NULL, ":");
-			strcpy(stats[policy_count].value1, token);
-   

Re: [lng-odp] [PATCH API-NEXT 4/5] api: packet_io: added parse mode

2015-04-29 Thread Bala Manoharan
Hi,

We can optimize the odp_packet_parse_flags_t in the following way to
handle the layered approach for parsing

+typedef struct odp_packet_parse_flags_t {

+   uint32_t eth:1;   /** See odp_packet_has_eth() */
+   uint32_t jumbo:1; /** See odp_packet_has_jumbo() */
+   uint32_t vlan:1;  /** See odp_packet_has_vlan() */
+   uint32_t vlan_qinq:1; /** See odp_packet_has_vlan_qinq() */
+   uint32_t arp:1;   /** See odp_packet_has_arp() */
+   uint32_t ipv4:1;  /** See odp_packet_has_ipv4() */
+   uint32_t ipv6:1;  /** See odp_packet_has_ipv6() */
+   uint32_t ipfrag:1;/** See odp_packet_has_ipfrag() */
+   uint32_t ipopt:1; /** See odp_packet_has_ipopt() */
+   uint32_t ipsec:1; /** See odp_packet_has_ipsec() */
+   uint32_t udp:1;   /** See odp_packet_has_udp() */
+   uint32_t tcp:1;   /** See odp_packet_has_tcp() */
+   uint32_t sctp:1;  /** See odp_packet_has_sctp() */
+   uint32_t icmp:1;  /** See odp_packet_has_icmp() */
+
+   uint32_t _reserved1:18; /** Reserved. Do not use. */
+} odp_packet_parse_flags_t;
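For illustration, here is a minimal, self-contained sketch of how an application might fill in such a flags struct to request only the parse results it needs. The struct mirrors the bit-field layout quoted above; the helper name request_udp_ipv4() is hypothetical and not part of the ODP API.

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the proposed bit-field layout (field names from the patch). */
typedef struct odp_packet_parse_flags_t {
	uint32_t eth:1;
	uint32_t jumbo:1;
	uint32_t vlan:1;
	uint32_t vlan_qinq:1;
	uint32_t arp:1;
	uint32_t ipv4:1;
	uint32_t ipv6:1;
	uint32_t ipfrag:1;
	uint32_t ipopt:1;
	uint32_t ipsec:1;
	uint32_t udp:1;
	uint32_t tcp:1;
	uint32_t sctp:1;
	uint32_t icmp:1;
	uint32_t _reserved1:18; /* 14 flag bits + 18 reserved = 32 bits */
} odp_packet_parse_flags_t;

/* Hypothetical application setup: ask only for ETH/IPv4/UDP results. */
static odp_packet_parse_flags_t request_udp_ipv4(void)
{
	odp_packet_parse_flags_t flags = {0};

	flags.eth  = 1;
	flags.ipv4 = 1;
	flags.udp  = 1;
	return flags;
}
```

With this layout the whole selection packs into a single 32-bit word, so passing it by value is cheap.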

On 29 April 2015 at 12:15, Savolainen, Petri (Nokia - FI/Espoo)
petri.savolai...@nokia.com wrote:
 It's (v2) on the list (since last Thu):

 [lng-odp] [API-NEXT PATCH v2 4/5] api: packet_io: added parse mode


 -Petri


 -Original Message-
 From: ext Zoltan Kiss [mailto:zoltan.k...@linaro.org]
 Sent: Tuesday, April 28, 2015 9:17 PM
 To: Savolainen, Petri (Nokia - FI/Espoo); lng-odp@lists.linaro.org
 Subject: Re: [lng-odp] [PATCH API-NEXT 4/5] api: packet_io: added parse
 mode



 On 28/04/15 08:09, Savolainen, Petri (Nokia - FI/Espoo) wrote:
  Hi Zoltan,
 
  You should implement the latest version of the patch, which has only
 ALL/NONE defined. We can leave SELECTED for later.
 Ok, but where is that version? I could only find this one.

 
  Briefly about SELECTED. The idea is that the application lists all
 odp_packet_has_xxx() calls that it will call during packet processing.
 Implementation can use that information to optimize parser functionality,
 if it can. So, application is not telling to implementation what to do or
 how to do it, but what information application is expecting from packets.
 If application lies (indicates that it will not call xxx, but still calls
 it), results are undefined.
 
  -Petri
 
 
  -Original Message-
  From: ext Zoltan Kiss [mailto:zoltan.k...@linaro.org]
  Sent: Monday, April 27, 2015 8:29 PM
  To: Savolainen, Petri (Nokia - FI/Espoo); lng-odp@lists.linaro.org
  Subject: Re: [lng-odp] [PATCH API-NEXT 4/5] api: packet_io: added parse
  mode
 
 
 
  On 20/04/15 13:10, Petri Savolainen wrote:
  Application can indicate which packet parsing results it is
  interested in (all, none or selected).
 
  Signed-off-by: Petri Savolainen petri.savolai...@nokia.com
  ---
 include/odp/api/packet_flags.h | 26 ++
 include/odp/api/packet_io.h| 19 +++
 2 files changed, 45 insertions(+)
 
  diff --git a/include/odp/api/packet_flags.h
  b/include/odp/api/packet_flags.h
  index bfbcc94..9444fdc 100644
  --- a/include/odp/api/packet_flags.h
  +++ b/include/odp/api/packet_flags.h
   @@ -26,6 +26,32 @@ extern "C" {
  *  @{
  */
 
  +
  +/**
  + * Packet input parsing flags
  + *
  + * Each flag represents a parser output. See parser output functions
  for
  + * details.
 
   Now that I implement this for linux-generic, I realized this is
   ambiguous: does disabling a lower layer's parsing mean that the parser
   stops looking into upper layers, even if their parsing is enabled?
   E.g. if (jumbo == 0 && ipv4 == 1), we won't parse the ethernet header if
   it's a jumbo frame, that's fine. But do we continue to look into the
   rest of the packet for the IPv4 header?
   I would say no, but it should be mentioned here explicitly.
 
  + */
  +typedef struct odp_packet_parse_flags_t {
  + uint32_t eth:1;   /** See odp_packet_has_eth() */
  + uint32_t jumbo:1; /** See odp_packet_has_jumbo() */
  + uint32_t vlan:1;  /** See odp_packet_has_vlan() */
  + uint32_t vlan_qinq:1; /** See odp_packet_has_vlan_qinq() */
  + uint32_t arp:1;   /** See odp_packet_has_arp() */
  + uint32_t ipv4:1;  /** See odp_packet_has_ipv4() */
  + uint32_t ipv6:1;  /** See odp_packet_has_ipv6() */
  + uint32_t ipfrag:1;/** See odp_packet_has_ipfrag() */
  + uint32_t ipopt:1; /** See odp_packet_has_ipopt() */
  + uint32_t ipsec:1; /** See odp_packet_has_ipsec() */
  + uint32_t udp:1;   /** See odp_packet_has_udp() */
  + uint32_t tcp:1;   /** See odp_packet_has_tcp() */
  + uint32_t sctp:1;  /** See odp_packet_has_sctp() */
  + uint32_t icmp:1;  /** See odp_packet_has_icmp() */
  +
  + uint32_t _reserved1:18; /** Reserved. Do not use. */
  +} odp_packet_parse_flags_t;
  +
 /**
  * Check for packet errors
  *
  diff --git a/include/odp/api/packet_io.h 

Re: [lng-odp] [PATCH API-NEXT 4/5] api: packet_io: added parse mode

2015-04-29 Thread Bala Manoharan
On 29 April 2015 at 13:24, Savolainen, Petri (Nokia - FI/Espoo)
petri.savolai...@nokia.com wrote:
 Hi,

 Possibly. Although grouping adds struct complexity, so there should be some 
 benefit in doing that. I was going to add an "all" union there, but decided to 
 leave it out since unnamed structs/unions are not defined in C99, so both 
 the union and the bit-field struct inside would have to be named (for strict 
 C99 compatibility). I think the API needs to be strictly C99; other code 
 (linux-generic and tests) can relax a bit from that requirement.
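For strict C99 as discussed above, both the union and the inner bit-field struct need names. A minimal sketch of one named group (type and member names here are illustrative assumptions, not the proposed API):

```c
#include <assert.h>
#include <stdint.h>

/* Strict-C99 variant of a grouped flag layout: the union and the inner
 * bit-field struct are both named. Reading .all after writing .bit.* is
 * the common union type-punning idiom; C99 makes the value
 * implementation-defined, but it works as expected on usual ABIs. */
typedef union {
	uint8_t all; /* all L2 flags as one byte */
	struct {
		uint8_t eth:1;
		uint8_t jumbo:1;
		uint8_t vlan:1;
		uint8_t vlan_qinq:1;
	} bit;
} parse_l2_flags_t;

/* Non-zero when any L2 flag is set; the named union allows a
 * whole-byte test instead of checking each bit. */
static int l2_any_set(parse_l2_flags_t f)
{
	return f.all != 0;
}
```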

 Instead of unions, I'd defined a set of (inline) functions:

 // Check if all L2 flags are set
 // @return non-zero when all L2 level flags are set
 int odp_packet_parse_all_l2(const odp_packet_parse_flags_t *flags);

One minor query on the above function:
when the application calls it, what will the value of flags be?
Will it have only L2 flags set, or will other layer flags also be set?

Regards,
Bala


 // Set all L2 flags
 void odp_packet_parse_all_l2_set(odp_packet_parse_flags_t *flags);

 ...


 Also I think the implementation would not only check levels, but specific 
 flags - especially those it does not have HW support for. E.g. an 
 implementation may have everything else filled in by HW, but not the sctp or 
 tcp options flags. If the application does not ask for those, the 
 implementation is good to go as-is. If the application asks for those, the 
 implementation has to choose a strategy to fill them in one way or another.


 -Petri
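A sketch of the inline helpers Petri describes, over a flat bit-field struct (remaining flags elided for brevity; the function names come from the email, the bodies are my assumptions). Note that the set helper touches only the L2 flags and leaves the other layers untouched:

```c
#include <assert.h>
#include <stdint.h>

/* Truncated flag layout: only the L2 fields and one L3 field are shown. */
typedef struct odp_packet_parse_flags_t {
	uint32_t eth:1;
	uint32_t jumbo:1;
	uint32_t vlan:1;
	uint32_t vlan_qinq:1;
	uint32_t arp:1;
	uint32_t ipv4:1;
	uint32_t _reserved1:26; /* remaining flags elided */
} odp_packet_parse_flags_t;

/* Check whether all L2-level flags are set (non-zero if so). */
static inline int odp_packet_parse_all_l2(const odp_packet_parse_flags_t *flags)
{
	return flags->eth && flags->jumbo && flags->vlan && flags->vlan_qinq;
}

/* Set all L2-level flags, leaving other layers unchanged. */
static inline void odp_packet_parse_all_l2_set(odp_packet_parse_flags_t *flags)
{
	flags->eth = 1;
	flags->jumbo = 1;
	flags->vlan = 1;
	flags->vlan_qinq = 1;
}
```

This keeps the struct flat (no unions) while still giving the application layer-level convenience accessors.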

 -Original Message-
 From: ext Bala Manoharan [mailto:bala.manoha...@linaro.org]
 Sent: Wednesday, April 29, 2015 10:25 AM
 To: Savolainen, Petri (Nokia - FI/Espoo)
 Cc: ext Zoltan Kiss; lng-odp@lists.linaro.org
 Subject: Re: [lng-odp] [PATCH API-NEXT 4/5] api: packet_io: added parse
 mode

 Hi,

 Sorry I clicked send way too early in the previous mail :)

 We can optimize the odp_packet_parse_flags_t struct to support
 both  layered parsing and individual parsing by the following way.

 I believe this might be useful for both application and implementation.

 typedef struct odp_packet_parse_flags_t {
 union {
 uint8_t l2_all;
 struct {
uint8_t eth:1;   /** See odp_packet_has_eth() */
uint8_t jumbo:1; /** See odp_packet_has_jumbo() */
uint8_t vlan:1;  /** See odp_packet_has_vlan() */
uint8_t vlan_qinq:1; /** See
 odp_packet_has_vlan_qinq() */
 };
 };
 union {
             uint8_t l3_all;
 struct {
 uint8_t arp:1;   /** See odp_packet_has_arp() */
 uint8_t ipv4:1;  /** See odp_packet_has_ipv4() */
 uint8_t ipv6:1;  /** See odp_packet_has_ipv6() */
 uint8_t ipfrag:1;/** See odp_packet_has_ipfrag() */
 uint8_t ipopt:1; /** See odp_packet_has_ipopt() */
 uint8_t ipsec:1; /** See odp_packet_has_ipsec() */
 };
 };
 union {
 uint8_t l4_all;
 struct {
 uint8_t udp:1;   /** See odp_packet_has_udp() */
 uint8_t tcp:1;   /** See odp_packet_has_tcp() */
 uint8_t sctp:1;  /** See odp_packet_has_sctp() */
 uint8_t icmp:1;  /** See odp_packet_has_icmp() */
 };
 };
 } odp_packet_parse_flags_t;

 Regards,
 Bala

 On 29 April 2015 at 12:46, Bala Manoharan bala.manoha...@linaro.org
 wrote:
  Hi,
 
  We can optimize the odp_packet_parse_flags_t in the following way to
  handle the layered approach for parsing
 
  +typedef struct odp_packet_parse_flags_t {
 
  +   uint32_t eth:1;   /** See odp_packet_has_eth() */
  +   uint32_t jumbo:1; /** See odp_packet_has_jumbo() */
  +   uint32_t vlan:1;  /** See odp_packet_has_vlan() */
  +   uint32_t vlan_qinq:1; /** See odp_packet_has_vlan_qinq() */
  +   uint32_t arp:1;   /** See odp_packet_has_arp() */
  +   uint32_t ipv4:1;  /** See odp_packet_has_ipv4() */
  +   uint32_t ipv6:1;  /** See odp_packet_has_ipv6() */
  +   uint32_t ipfrag:1;/** See odp_packet_has_ipfrag() */
  +   uint32_t ipopt:1; /** See odp_packet_has_ipopt() */
  +   uint32_t ipsec:1; /** See odp_packet_has_ipsec() */
  +   uint32_t udp:1;   /** See odp_packet_has_udp() */
  +   uint32_t tcp:1;   /** See odp_packet_has_tcp() */
  +   uint32_t sctp:1;  /** See odp_packet_has_sctp() */
  +   uint32_t icmp:1;  /** See odp_packet_has_icmp() */
  +
  +   uint32_t _reserved1:18; /** Reserved. Do not use. */
  +} odp_packet_parse_flags_t;
 
  On 29 April 2015 at 12:15, Savolainen, Petri (Nokia - FI/Espoo)
  petri.savolai...@nokia.com wrote:
  It's (v2) on the list (since last Thu):
 
  [lng-odp] [API-NEXT PATCH v2 4/5] api: packet_io: added parse mode
 
 
  -Petri
 
 
  -Original Message-
  From: ext Zoltan Kiss [mailto:zoltan.k...@linaro.org]
  Sent: Tuesday, April

Re: [lng-odp] [PATCH 1/3] test: classification: add missing init of atomic variable

2015-04-28 Thread Bala Manoharan
Reviewed-by: Balasubramanian Manoharan bala.manoha...@linaro.org

On 24 April 2015 at 19:23, Nicolas Morey-Chaisemartin nmo...@kalray.eu wrote:
 Signed-off-by: Nicolas Morey-Chaisemartin nmo...@kalray.eu
 ---
  test/validation/classification/odp_classification_tests.c | 1 +
  1 file changed, 1 insertion(+)

 diff --git a/test/validation/classification/odp_classification_tests.c 
 b/test/validation/classification/odp_classification_tests.c
 index 1bf080f..b44bc62 100644
 --- a/test/validation/classification/odp_classification_tests.c
 +++ b/test/validation/classification/odp_classification_tests.c
 @@ -319,6 +319,7 @@ int classification_tests_init(void)
	for (i = 0; i < CLS_ENTRIES; i++)
		queue_list[i] = ODP_QUEUE_INVALID;

 +	odp_atomic_init_u32(&seq, 0);
 return 0;
  }

 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] [PATCH 2/3] linux-generic: packet_io: init l2 and l3 cos table spinlocks

2015-04-28 Thread Bala Manoharan
Reviewed-by: Balasubramanian Manoharan bala.manoha...@linaro.org

On 24 April 2015 at 19:23, Nicolas Morey-Chaisemartin nmo...@kalray.eu wrote:
 Signed-off-by: Nicolas Morey-Chaisemartin nmo...@kalray.eu
 ---
  platform/linux-generic/odp_packet_io.c | 2 ++
  1 file changed, 2 insertions(+)

 diff --git a/platform/linux-generic/odp_packet_io.c 
 b/platform/linux-generic/odp_packet_io.c
 index cfe5b71..f16685d 100644
 --- a/platform/linux-generic/odp_packet_io.c
 +++ b/platform/linux-generic/odp_packet_io.c
 @@ -61,6 +61,8 @@ int odp_pktio_init_global(void)

	odp_spinlock_init(&pktio_entry->s.lock);
	odp_spinlock_init(&pktio_entry->s.cls.lock);
 +	odp_spinlock_init(&pktio_entry->s.cls.l2_cos_table.lock);
 +	odp_spinlock_init(&pktio_entry->s.cls.l3_cos_table.lock);

 pktio_entry_ptr[id - 1] = pktio_entry;
 /* Create a default output queue for each pktio resource */


Re: [lng-odp] [PATCHv3] example: ODP classifier example

2015-04-23 Thread Bala Manoharan
On 21 April 2015 at 16:57, Maxim Uvarov maxim.uva...@linaro.org wrote:
 Bala, one more comment. Please do parsing arguments before odp init.

Sorry, I missed this comment. Argument parsing uses ODP shared memory,
so we cannot call it before the odp init() function.

Regards,
Bala


 About this code: Mike found that it will abort if you do not run it as
 root, because it cannot perform raw socket operations.


 pktio = odp_pktio_open(dev, pool);
 if (pktio == ODP_PKTIO_INVALID)
 	EXAMPLE_ABORT("pktio create failed for %s\n", dev);

 I tried loop application starts well. But needed some traffic so that loop
 is no good fit.
 I think it's better to exit from this app with some return code and add
 small note to
 Usage that for linux-generic user has to be root to open real devices in raw
 mode.

 Thanks,
 Maxim.


 On 04/16/15 14:41, bala.manoha...@linaro.org wrote:

 From: Balasubramanian Manoharan bala.manoha...@linaro.org

 ODP Classifier example

 This programs gets pmr rules as command-line parameter and configures the
 classification engine
 in the system.

 This initial version supports the following
 * ODP_PMR_SIP_ADDR pmr term
 * PMR term MATCH and RANGE type
 * Multiple PMR rule can be set on a single pktio interface with different
 queues associated to each PMR rule
 * Automatically configures a default queue and provides statistics for the
 same

 Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
 ---
 V3: Incorporates review comments from Mike and Maxim
 Adds a timeout variable to configure the time in seconds for classifier
 example to run.

   configure.ac|   1 +
   example/Makefile.am |   2 +-
   example/classifier/Makefile.am  |  10 +
   example/classifier/odp_classifier.c | 820
 
   4 files changed, 832 insertions(+), 1 deletion(-)
   create mode 100644 example/classifier/Makefile.am
   create mode 100644 example/classifier/odp_classifier.c

 diff --git a/configure.ac b/configure.ac
 index 78ff245..d20bad2 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -272,6 +272,7 @@ AM_CXXFLAGS=-std=c++11
   AC_CONFIG_FILES([Makefile
  doc/Makefile
  example/Makefile
 +example/classifier/Makefile
  example/generator/Makefile
  example/ipsec/Makefile
  example/packet/Makefile
 diff --git a/example/Makefile.am b/example/Makefile.am
 index 6bb4f5c..353f397 100644
 --- a/example/Makefile.am
 +++ b/example/Makefile.am
 @@ -1 +1 @@
 -SUBDIRS = generator ipsec packet timer
 +SUBDIRS = classifier generator ipsec packet timer
 diff --git a/example/classifier/Makefile.am
 b/example/classifier/Makefile.am
 new file mode 100644
 index 000..938f094
 --- /dev/null
 +++ b/example/classifier/Makefile.am
 @@ -0,0 +1,10 @@
 +include $(top_srcdir)/example/Makefile.inc
 +
 +bin_PROGRAMS = odp_classifier
 +odp_classifier_LDFLAGS = $(AM_LDFLAGS) -static
 +odp_classifier_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
 +
 +noinst_HEADERS = \
 + $(top_srcdir)/example/example_debug.h
 +
 +dist_odp_classifier_SOURCES = odp_classifier.c
 diff --git a/example/classifier/odp_classifier.c
 b/example/classifier/odp_classifier.c
 new file mode 100644
 index 000..85b6e00
 --- /dev/null
 +++ b/example/classifier/odp_classifier.c
 @@ -0,0 +1,820 @@
 +/* Copyright (c) 2015, Linaro Limited
 + * All rights reserved.
 + *
 + * SPDX-License-Identifier: BSD-3-Clause
 + */
 +
  +#include <stdlib.h>
  +#include <string.h>
  +#include <getopt.h>
  +#include <unistd.h>
  +#include <example_debug.h>
  +
  +#include <odp.h>
  +#include <odp/helper/linux.h>
  +#include <odp/helper/eth.h>
  +#include <odp/helper/ip.h>
  +#include <strings.h>
  +#include <stdio.h>
 +
 +/** @def MAX_WORKERS
 + * @brief Maximum number of worker threads
 + */
  +#define MAX_WORKERS            32
 +
 +/** @def SHM_PKT_POOL_SIZE
 + * @brief Size of the shared memory block
 + */
 +#define SHM_PKT_POOL_SIZE  (512*2048)
 +
 +/** @def SHM_PKT_POOL_BUF_SIZE
 + * @brief Buffer size of the packet pool buffer
 + */
 +#define SHM_PKT_POOL_BUF_SIZE  1856
 +
 +/** @def MAX_PMR_COUNT
 + * @brief Maximum number of Classification Policy
 + */
 +#define MAX_PMR_COUNT  8
 +
 +/** @def DISPLAY_STRING_LEN
 + * @brief Length of string used to display term value
 + */
 +#define DISPLAY_STRING_LEN 32
 +
 +/** Get rid of path in filename - only for unix-type paths using '/' */
 +#define NO_PATH(file_name) (strrchr((file_name), '/') ? \
 +   strrchr((file_name), '/') + 1 : (file_name))
 +
 +typedef struct {
 +   odp_queue_t queue;  /** Associated queue handle */
 +   odp_cos_t cos;  /** Associated cos handle */
 +   odp_pmr_t pmr;  /** Associated pmr handle */
 +   odp_atomic_u64_t packet_count;  /** count of received packets */
 +   odp_pmr_term_e term;/** odp pmr term value */
 +   char queue_name[ODP_QUEUE_NAME_LEN];/** queue 

[lng-odp] [PATCHv4] example: ODP classifier example

2015-04-23 Thread bala . manoharan
From: Balasubramanian Manoharan bala.manoha...@linaro.org

ODP Classifier example

This programs gets pmr rules as command-line parameter and configures the 
classification engine
in the system.

This initial version supports the following
* ODP_PMR_SIP_ADDR pmr term
* PMR term MATCH and RANGE type
* Multiple PMR rule can be set on a single pktio interface with different 
queues associated to each PMR rule
* Automatically configures a default queue and provides statistics for the same
* Prints statistics interms of the number of packets dispatched to each queue

Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
---
v4: Incorporates review comments from Mike and Ola
 configure.ac|   1 +
 example/Makefile.am |   2 +-
 example/classifier/.gitignore   |   1 +
 example/classifier/Makefile.am  |  10 +
 example/classifier/odp_classifier.c | 835 
 5 files changed, 848 insertions(+), 1 deletion(-)
 create mode 100644 example/classifier/.gitignore
 create mode 100644 example/classifier/Makefile.am
 create mode 100644 example/classifier/odp_classifier.c

diff --git a/configure.ac b/configure.ac
index 78ff245..d20bad2 100644
--- a/configure.ac
+++ b/configure.ac
@@ -272,6 +272,7 @@ AM_CXXFLAGS=-std=c++11
 AC_CONFIG_FILES([Makefile
 doc/Makefile
 example/Makefile
+example/classifier/Makefile
 example/generator/Makefile
 example/ipsec/Makefile
 example/packet/Makefile
diff --git a/example/Makefile.am b/example/Makefile.am
index 6bb4f5c..353f397 100644
--- a/example/Makefile.am
+++ b/example/Makefile.am
@@ -1 +1 @@
-SUBDIRS = generator ipsec packet timer
+SUBDIRS = classifier generator ipsec packet timer
diff --git a/example/classifier/.gitignore b/example/classifier/.gitignore
new file mode 100644
index 000..a356d48
--- /dev/null
+++ b/example/classifier/.gitignore
@@ -0,0 +1 @@
+odp_classifier
diff --git a/example/classifier/Makefile.am b/example/classifier/Makefile.am
new file mode 100644
index 000..938f094
--- /dev/null
+++ b/example/classifier/Makefile.am
@@ -0,0 +1,10 @@
+include $(top_srcdir)/example/Makefile.inc
+
+bin_PROGRAMS = odp_classifier
+odp_classifier_LDFLAGS = $(AM_LDFLAGS) -static
+odp_classifier_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
+
+noinst_HEADERS = \
+ $(top_srcdir)/example/example_debug.h
+
+dist_odp_classifier_SOURCES = odp_classifier.c
diff --git a/example/classifier/odp_classifier.c 
b/example/classifier/odp_classifier.c
new file mode 100644
index 000..cf53565
--- /dev/null
+++ b/example/classifier/odp_classifier.c
@@ -0,0 +1,835 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#include <stdlib.h>
+#include <string.h>
+#include <getopt.h>
+#include <unistd.h>
+#include <example_debug.h>
+
+#include <odp.h>
+#include <odp/helper/linux.h>
+#include <odp/helper/eth.h>
+#include <odp/helper/ip.h>
+#include <strings.h>
+#include <errno.h>
+#include <stdio.h>
+
+/** @def MAX_WORKERS
+ * @brief Maximum number of worker threads
+ */
+#define MAX_WORKERS            32
+
+/** @def SHM_PKT_POOL_SIZE
+ * @brief Size of the shared memory block
+ */
+#define SHM_PKT_POOL_SIZE  (512*2048)
+
+/** @def SHM_PKT_POOL_BUF_SIZE
+ * @brief Buffer size of the packet pool buffer
+ */
+#define SHM_PKT_POOL_BUF_SIZE  1856
+
+/** @def MAX_PMR_COUNT
+ * @brief Maximum number of Classification Policy
+ */
+#define MAX_PMR_COUNT  8
+
+/** @def DISPLAY_STRING_LEN
+ * @brief Length of string used to display term value
+ */
+#define DISPLAY_STRING_LEN 32
+
+/** Get rid of path in filename - only for unix-type paths using '/' */
+#define NO_PATH(file_name) (strrchr((file_name), '/') ? \
+   strrchr((file_name), '/') + 1 : (file_name))
+
+typedef struct {
+   odp_queue_t queue;  /** Associated queue handle */
+   odp_cos_t cos;  /** Associated cos handle */
+   odp_pmr_t pmr;  /** Associated pmr handle */
+   odp_atomic_u64_t packet_count;  /** count of received packets */
+   odp_pmr_term_e term;/** odp pmr term value */
+   char queue_name[ODP_QUEUE_NAME_LEN];/** queue name */
+   odp_pmr_match_type_e match_type;/** pmr match type */
+   int val_sz; /** size of the pmr term */
+   union {
+   struct {
+   uint32_t val;   /** pmr term value */
+   uint32_t mask;  /** pmr term mask */
+   } match;
+   struct  {
+   uint32_t val1;  /** pmr term start range */
+   uint32_t val2;  /** pmr term end range */
+   } range;
+   };
+   char value1[DISPLAY_STRING_LEN];/** Display string1 */
+   char value2[DISPLAY_STRING_LEN];/** Display string2 */
+} global_statistics;
+
+typedef struct {
+   

Re: [lng-odp] [PATCH] validation: odp_pool: add double destroy

2015-04-23 Thread Bala Manoharan
On 23 April 2015 at 16:56, Maxim Uvarov maxim.uva...@linaro.org wrote:
 On 04/23/15 13:09, Taras Kondratiuk wrote:

 On 04/22/2015 08:54 PM, Maxim Uvarov wrote:

 On 04/22/15 19:06, Ciprian Barbu wrote:

 On Wed, Apr 22, 2015 at 5:13 PM, Maxim Uvarov
 maxim.uva...@linaro.org wrote:

 On 04/22/15 15:53, Bill Fischofer wrote:

 It does that, but as Taras points out there is a race condition.  If
 one
 thread does an odp_pool_destroy() and it succeeds, another thread
 could do
 an odp_pool_create() before the second odp_pool_destroy() and get
 the same
 pool handle which would then be deleted by the second
 odp_pool_destroy()
 call.

 odp_pool_destroy() should hold lock inside it. (i.e. it already does
 that).

 The scenario is not about a race condition on create/delete, it's
 actually even simpler:

  th1: lock -> create_pool -> unlock
  th1: lock -> destroy_pool -> unlock
  th2: lock -> create_pool -> unlock
  th1: lock -> destroy_pool -> unlock

 Ok, but it's not undefined behavior. If you work with threads you should
 know what you do.

 So behavior is: destroy function fails if pool already destroyed.

 I still think that Mikes test is valid. It's single threaded application
 with very exact case.
 Analogy for example:
 char *a = malloc(10);
 a[5] = 1;

 On your logic it has to be undefined behavior due to some other thread
 can free or malloc with different size.

 This in not a valid analogy. Should be something like this.

 char *a = malloc(10);
 free(a);
 free(a); /* Here you may free a block which is allocated again by another
 thread */

 Yes, and glibc will warn if you do double free. I still think that it's
 reasonable to accept that patch.

I agree with Taras. pool handle will map to a specific buffer
management hardware in the system and the value of odp_pool_t will be
the same even if the same pool gets allocated to a different process.

Regards,
Bala
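
The hazard discussed above (a second destroy freeing a handle that another thread has since re-acquired) can be sidestepped by application-side discipline: invalidate the handle as part of destroying it, and have destroy reject an invalid handle. A minimal sketch with mock stand-ins; this is not the ODP implementation, the mock_* names are invented for illustration:

```c
#include <stddef.h>
#include <stdlib.h>
#include <assert.h>

/* Mock stand-ins for odp_pool_t / ODP_POOL_INVALID, for illustration only. */
typedef void *mock_pool_t;
#define MOCK_POOL_INVALID NULL

/* Destroy through a pointer to the handle so it can be invalidated:
 * a second call sees the invalid handle and fails instead of freeing
 * a pool that may now belong to another thread. */
static int mock_pool_destroy(mock_pool_t *pool)
{
	if (*pool == MOCK_POOL_INVALID)
		return -1; /* already destroyed: report, don't free twice */
	free(*pool);
	*pool = MOCK_POOL_INVALID;
	return 0;
}
```

This moves the double-destroy check to the caller's copy of the handle, which is the only place it can be made race-free.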

 Maxim.



Re: [lng-odp] [PATCHv3] example: ODP classifier example

2015-04-21 Thread Bala Manoharan
On 21 April 2015 at 16:57, Maxim Uvarov maxim.uva...@linaro.org wrote:
 Bala, one more comment. Please do parsing arguments before odp init.


  About this code: Mike found that it will abort if you do not run it as
  root, because it cannot perform raw socket operations.


  pktio = odp_pktio_open(dev, pool);
  if (pktio == ODP_PKTIO_INVALID)
  	EXAMPLE_ABORT("pktio create failed for %s\n", dev);


The abort in this case is because the EXAMPLE_ABORT macro calls
abort() instead of exit(EXIT_FAILURE).
IMO calling exit() is better as it causes a graceful shutdown of the
system. In any case I believe it is better for the application to call
this macro, and the macro definition can implement exit() instead of
abort().


 I tried loop application starts well. But needed some traffic so that loop
 is no good fit.
 I think it's better to exit from this app with some return code and add
 small note to
 Usage that for linux-generic user has to be root to open real devices in raw
 mode.

I believe the reason we added this macro EXAMPLE_ABORT was to be used
by the application in the scenario when it wants to terminate the
function so that the macro can be modified by different platforms.
We can call usage() to display the usage here but I am against calling
exit() directly in the application

 Thanks,
 Maxim.


 On 04/16/15 14:41, bala.manoha...@linaro.org wrote:

 From: Balasubramanian Manoharan bala.manoha...@linaro.org

 ODP Classifier example

 This programs gets pmr rules as command-line parameter and configures the
 classification engine
 in the system.

 This initial version supports the following
 * ODP_PMR_SIP_ADDR pmr term
 * PMR term MATCH and RANGE type
 * Multiple PMR rule can be set on a single pktio interface with different
 queues associated to each PMR rule
 * Automatically configures a default queue and provides statistics for the
 same

 Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
 ---
 V3: Incorporates review comments from Mike and Maxim
 Adds a timeout variable to configure the time in seconds for classifier
 example to run.

   configure.ac|   1 +
   example/Makefile.am |   2 +-
   example/classifier/Makefile.am  |  10 +
   example/classifier/odp_classifier.c | 820
 
   4 files changed, 832 insertions(+), 1 deletion(-)
   create mode 100644 example/classifier/Makefile.am
   create mode 100644 example/classifier/odp_classifier.c

 diff --git a/configure.ac b/configure.ac
 index 78ff245..d20bad2 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -272,6 +272,7 @@ AM_CXXFLAGS=-std=c++11
   AC_CONFIG_FILES([Makefile
  doc/Makefile
  example/Makefile
 +example/classifier/Makefile
  example/generator/Makefile
  example/ipsec/Makefile
  example/packet/Makefile
 diff --git a/example/Makefile.am b/example/Makefile.am
 index 6bb4f5c..353f397 100644
 --- a/example/Makefile.am
 +++ b/example/Makefile.am
 @@ -1 +1 @@
 -SUBDIRS = generator ipsec packet timer
 +SUBDIRS = classifier generator ipsec packet timer
 diff --git a/example/classifier/Makefile.am
 b/example/classifier/Makefile.am
 new file mode 100644
 index 000..938f094
 --- /dev/null
 +++ b/example/classifier/Makefile.am
 @@ -0,0 +1,10 @@
 +include $(top_srcdir)/example/Makefile.inc
 +
 +bin_PROGRAMS = odp_classifier
 +odp_classifier_LDFLAGS = $(AM_LDFLAGS) -static
 +odp_classifier_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
 +
 +noinst_HEADERS = \
 + $(top_srcdir)/example/example_debug.h
 +
 +dist_odp_classifier_SOURCES = odp_classifier.c
 diff --git a/example/classifier/odp_classifier.c
 b/example/classifier/odp_classifier.c
 new file mode 100644
 index 000..85b6e00
 --- /dev/null
 +++ b/example/classifier/odp_classifier.c
 @@ -0,0 +1,820 @@
 +/* Copyright (c) 2015, Linaro Limited
 + * All rights reserved.
 + *
 + * SPDX-License-Identifier: BSD-3-Clause
 + */
 +
  +#include <stdlib.h>
  +#include <string.h>
  +#include <getopt.h>
  +#include <unistd.h>
  +#include <example_debug.h>
  +
  +#include <odp.h>
  +#include <odp/helper/linux.h>
  +#include <odp/helper/eth.h>
  +#include <odp/helper/ip.h>
  +#include <strings.h>
  +#include <stdio.h>
 +
 +/** @def MAX_WORKERS
 + * @brief Maximum number of worker threads
 + */
  +#define MAX_WORKERS            32
 +
 +/** @def SHM_PKT_POOL_SIZE
 + * @brief Size of the shared memory block
 + */
 +#define SHM_PKT_POOL_SIZE  (512*2048)
 +
 +/** @def SHM_PKT_POOL_BUF_SIZE
 + * @brief Buffer size of the packet pool buffer
 + */
 +#define SHM_PKT_POOL_BUF_SIZE  1856
 +
 +/** @def MAX_PMR_COUNT
 + * @brief Maximum number of Classification Policy
 + */
 +#define MAX_PMR_COUNT  8
 +
 +/** @def DISPLAY_STRING_LEN
 + * @brief Length of string used to display term value
 + */
 +#define DISPLAY_STRING_LEN 32
 +
 +/** Get rid of path in filename - only for unix-type paths using '/' */
 +#define 

Re: [lng-odp] [API-NEXT PATCH] api: classification: connect PMR on creation

2015-04-21 Thread Bala Manoharan
On 21 April 2015 at 18:15, Ola Liljedahl ola.liljed...@linaro.org wrote:
 On 21 April 2015 at 14:26, Maxim Uvarov maxim.uva...@linaro.org wrote:

 On 04/21/15 15:01, Bill Fischofer wrote:

 Behavior is undefined if rules are ambiguous. Consider the following
 rules:

  protocol == UDP --> CoS A
  port == 1234 --> CoS B

 What happens if a UDP packet for port 1234 arrives? Answer: undefined.



 Isn't pmr created one by one to the table? And first match will return Cos
 A in that case.

 Since we are having this discussion, it is obvious that we need a definition
 of the ODP classification that cannot be misunderstood or misinterpreted.
 Probably a formal definition. Or do everyone here agree that the (current)
 linux-generic implementation can serve as *the* definition?

Shouldn't we use the classification design document as the definition
of classification and the linux-generic implementation can be used as
a reference and not the definition.

Regards,
Bala





 Maxim.

 The result may be either CoS A or CoS B.

 A better rule set would be:

  protocol == UDP --> CoS A

  CoS A && port == 1234 --> CoS B

 Now this is unambiguous.  UDP packets go to CoS A unless they also
 specify port 1234, in which case they go to CoS B.

 On Tue, Apr 21, 2015 at 4:52 AM, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 20 April 2015 at 21:49, Bill Fischofer bill.fischo...@linaro.org wrote:

 Classification is a collaboration between the implementation
 and the application. It is the application's responsibility to
 write unambiguous classification rules and it is the
 implementation's job to perform the match as efficiently and
 as specifically as possible.

 What should an implementation do if the classification rules are
 ambiguous? E.g. if partial matches (of different partially
 overlapping rules) use different CoS? Is this an error already
 when creating the PMR rules?

 -- Ola


 On Mon, Apr 20, 2015 at 7:33 AM, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 17 April 2015 at 22:55, Rosenboim, Leonid leonid.rosenb...@caviumnetworks.com wrote:


 Guys,

 There are several versions of the Classifier API
 document floating in Google docs, here is one such copy:


 https://docs.google.com/document/d/14KMqNPIgd7InwGzdP2EaI9g_V3o0_wxpgp3N-nd-RBE/edit?usp=sharing

 Here is a different perspective on what PMR and COS
 mean,  perhaps in terms of an abstract hardware
 implementation:

 CoS is a meta-data field assigned to each packet as it
 traverses the classifier pipe-line.

 A packet is assigned an initial CoS by the pktio port
 which received it.

 Then, the packet is compared multiple times against a
 set of rules, and as it is common with TCAMs, each
 comparisons happens against all rules in parallel.

 Each rule has two values to match: 1. the current CoS
 meta-data field; and 2. a certain packet header field
 (value with a mask).
 If both these values match, the packet met-data CoS
 field is changed (Action taken) with the destination
 CoS of the matching rule.

 It is assumed that up to one such rule has matched.

 If a rule has matched, CoS has changed, the process
 continues one more time.

 If NO MATCH occured, the classification process is
 finished, and the packet is delivered in accordance to
 the current CoS (i.e. the last matching rule or the
 pktio default CoS if the first rule match failed).

 So partial matches are valid. Is this what we want, e.g.
 from application point of view and from HW point of view?

 Is partial matches what is commonly supported by HW
 classifiers?

 A classifier which supports these longest prefix matches
 can easily implement perfect matching (all partial matches
 just specify the default CoS). But a classifier which
 only supports perfect matching cannot directly support
 partial matches. I assume you would have to internally
 create rules/patterns for all (relevant) partial matches
 as well. The implementation can find all relevant partial
 matches (prefix rules which specify a CoS different from
 the default CoS) and add those to the list of rules.
 Longer prefix matches should be prioritized (however that
 is done) over 

[lng-odp] [PATCHv3] example: ODP classifier example

2015-04-16 Thread bala . manoharan
From: Balasubramanian Manoharan bala.manoha...@linaro.org

ODP Classifier example

This program gets PMR rules as command-line parameters and configures the
classification engine in the system.

This initial version supports the following
* ODP_PMR_SIP_ADDR pmr term
* PMR term MATCH and RANGE type
* Multiple PMR rules can be set on a single pktio interface with different
queues associated to each PMR rule
* Automatically configures a default queue and provides statistics for the same

Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
---
V3: Incorporates review comments from Mike and Maxim
Adds a timeout variable to configure the time in seconds for the classifier
example to run.

 configure.ac|   1 +
 example/Makefile.am |   2 +-
 example/classifier/Makefile.am  |  10 +
 example/classifier/odp_classifier.c | 820 
 4 files changed, 832 insertions(+), 1 deletion(-)
 create mode 100644 example/classifier/Makefile.am
 create mode 100644 example/classifier/odp_classifier.c

diff --git a/configure.ac b/configure.ac
index 78ff245..d20bad2 100644
--- a/configure.ac
+++ b/configure.ac
@@ -272,6 +272,7 @@ AM_CXXFLAGS=-std=c++11
 AC_CONFIG_FILES([Makefile
 doc/Makefile
 example/Makefile
+example/classifier/Makefile
 example/generator/Makefile
 example/ipsec/Makefile
 example/packet/Makefile
diff --git a/example/Makefile.am b/example/Makefile.am
index 6bb4f5c..353f397 100644
--- a/example/Makefile.am
+++ b/example/Makefile.am
@@ -1 +1 @@
-SUBDIRS = generator ipsec packet timer
+SUBDIRS = classifier generator ipsec packet timer
diff --git a/example/classifier/Makefile.am b/example/classifier/Makefile.am
new file mode 100644
index 000..938f094
--- /dev/null
+++ b/example/classifier/Makefile.am
@@ -0,0 +1,10 @@
+include $(top_srcdir)/example/Makefile.inc
+
+bin_PROGRAMS = odp_classifier
+odp_classifier_LDFLAGS = $(AM_LDFLAGS) -static
+odp_classifier_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
+
+noinst_HEADERS = \
+ $(top_srcdir)/example/example_debug.h
+
+dist_odp_classifier_SOURCES = odp_classifier.c
diff --git a/example/classifier/odp_classifier.c b/example/classifier/odp_classifier.c
new file mode 100644
index 000..85b6e00
--- /dev/null
+++ b/example/classifier/odp_classifier.c
@@ -0,0 +1,820 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#include <stdlib.h>
+#include <string.h>
+#include <getopt.h>
+#include <unistd.h>
+#include <example_debug.h>
+
+#include <odp.h>
+#include <odp/helper/linux.h>
+#include <odp/helper/eth.h>
+#include <odp/helper/ip.h>
+#include <strings.h>
+#include <stdio.h>
+
+/** @def MAX_WORKERS
+ * @brief Maximum number of worker threads
+ */
+#define MAX_WORKERS 32
+
+/** @def SHM_PKT_POOL_SIZE
+ * @brief Size of the shared memory block
+ */
+#define SHM_PKT_POOL_SIZE  (512*2048)
+
+/** @def SHM_PKT_POOL_BUF_SIZE
+ * @brief Buffer size of the packet pool buffer
+ */
+#define SHM_PKT_POOL_BUF_SIZE  1856
+
+/** @def MAX_PMR_COUNT
+ * @brief Maximum number of Classification Policy
+ */
+#define MAX_PMR_COUNT  8
+
+/** @def DISPLAY_STRING_LEN
+ * @brief Length of string used to display term value
+ */
+#define DISPLAY_STRING_LEN 32
+
+/** Get rid of path in filename - only for unix-type paths using '/' */
+#define NO_PATH(file_name) (strrchr((file_name), '/') ? \
+   strrchr((file_name), '/') + 1 : (file_name))
+
+typedef struct {
+	odp_queue_t queue;	/**< Associated queue handle */
+	odp_cos_t cos;	/**< Associated cos handle */
+	odp_pmr_t pmr;	/**< Associated pmr handle */
+	odp_atomic_u64_t packet_count;	/**< count of received packets */
+	odp_pmr_term_e term;	/**< odp pmr term value */
+	char queue_name[ODP_QUEUE_NAME_LEN];	/**< queue name */
+	odp_pmr_match_type_e match_type;	/**< pmr match type */
+	int val_sz;	/**< size of the pmr term */
+	union {
+		struct {
+			uint32_t val;	/**< pmr term value */
+			uint32_t mask;	/**< pmr term mask */
+		} match;
+		struct {
+			uint32_t val1;	/**< pmr term start range */
+			uint32_t val2;	/**< pmr term end range */
+		} range;
+	};
+	char value1[DISPLAY_STRING_LEN];	/**< Display string1 */
+	char value2[DISPLAY_STRING_LEN];	/**< Display string2 */
+} global_statistics;
+
+typedef struct {
+	global_statistics stats[MAX_PMR_COUNT];
+	int policy_count;	/**< global policy count */
+	int appl_mode;	/**< application mode */
+	odp_atomic_u64_t total_packets;	/**< total received packets */
+	int cpu_count;	/**< Number of CPUs to use */
+   uint32_t 

Re: [lng-odp] [RFC] api: classification: Create PMR handle on its application

2015-04-09 Thread Bala Manoharan
On 9 April 2015 at 14:38, Taras Kondratiuk taras.kondrat...@linaro.org wrote:
 + mailing list that I've missed initially in the RFC.

 On 04/08/2015 10:25 PM, Rosenboim, Leonid wrote:

 Taras,

 I actually agree with you that a change in the API is justified
 that combines the in and out CoS values are provided at the time
 of PMR creation, instead of using a separate function.
 The main reason is that the input CoS is one of the rules that
 the ternary CAM must match, and the output CoS is the action
 that it must be assigned.

 Here the key changes I propose, granted that there will
 need to be additional changes made in other APIs in a similar manner.


 Yes this is an alternative approach.

 odp_pktio_pmr_cos() should be renamed to something like
 odp_pktio_cos_set()
 Is there still an implicit 'default' CoS assigned to Pktio on
 odp_pktio_open()? Or this function should be used to set a default one?

IMO, it might be better to have assigning of default CoS as a separate
function rather than on odp_pktio_open() since it gives freedom for
platforms which do not support classifier and also it treats
classifier as a separate module

Regards,
Bala

 This approach highlights one more implementation issue on my platform:
 CoS (pool and queue) assignment is the last step of classification, so
 CoS'es cannot be cascaded directly. Instead PMRs (LUT entries) are
 cascaded. CoS cascading can be mapped to PMRs cascading in simple
 use-cases, but if a first CoS has a several PMRs pointing to it, then
 next cascaded CoS will lead to creation of multiple LUT entries.
 I'll think how it can mapped in a better way.



 diff --git a/include/odp/api/classification.h
 b/include/odp/api/classification.h
 index 7db3645..b511cb4 100644
 --- a/include/odp/api/classification.h
 +++ b/include/odp/api/classification.h
 @@ -232,6 +232,9 @@ typedef enum odp_pmr_term {
* @param[in] val_sz  Size of the val and mask arguments,
*that must match the value size requirement of the
*specific term.
 + * @param[in]  src_cos CoS to be filtered
 + * @param[in]  dst_cos CoS to be assigned to packets filtered
 + * from src_cos that match pmr_id.
*
* @returnHandle of the matching rule
* @retvalODP_PMR_INVAL on failure
 @@ -239,7 +242,9 @@ typedef enum odp_pmr_term {
   odp_pmr_t odp_pmr_create_match(odp_pmr_term_e term,
const void *val,
const void *mask,
 -  uint32_t val_sz);
 +  uint32_t val_sz,
 +  odp_cos_t src_cos,
 +  odp_cos_t dst_cos);

   /**
* Create a packet match rule with value range
 @@ -250,6 +255,9 @@ odp_pmr_t odp_pmr_create_match(odp_pmr_term_e term,
* @param[in] val_sz  Size of the val1 and val2 arguments,
*that must match the value size requirement of the
*specific term.
 + * @param[in]  src_cos CoS to be filtered
 + * @param[in]  dst_cos CoS to be assigned to packets filtered
 + * from src_cos that match pmr_id.
*
* @returnHandle of the matching rule
* @retvalODP_PMR_INVAL on failure
 @@ -258,7 +266,10 @@ odp_pmr_t odp_pmr_create_match(odp_pmr_term_e term,
   odp_pmr_t odp_pmr_create_range(odp_pmr_term_e term,
const void *val1,
const void *val2,
 -  uint32_t val_sz);
 +  uint32_t val_sz,
 +  odp_cos_t src_cos,
 +  odp_cos_t dst_cos);
 +
   /**
* Invalidate a packet match rule and vacate its resources
*
 @@ -270,30 +281,20 @@ odp_pmr_t odp_pmr_create_range(odp_pmr_term_e term,
   int odp_pmr_destroy(odp_pmr_t pmr_id);

   /**
 - * Apply a PMR to a pktio to assign a CoS.
 - *
 - * @param[in]  pmr_id  PMR to be activated
 - * @param[in]  src_pktio   pktio to which this PMR is to be applied
 - * @param[in]  dst_cos CoS to be assigned by this PMR
 + * Apply an initial CoS to a pktio.
*
 - * @retval 0 on success
 - * @retval <0 on failure
 - */
 -int odp_pktio_pmr_cos(odp_pmr_t pmr_id,
 - odp_pktio_t src_pktio, odp_cos_t dst_cos);
 -
 -/**
 - * Cascade a PMR to refine packets from one CoS to another.
 + * This is the initial CoS that is assigned to packets arriving
 + * from the 'pktio' channel, that may later be changed by any
 + * Packet matching rules (PMRs) that have the 'pktio_cos' assigned
 + * as its 'src_cos'.
*
 - * @param[in]  pmr_id  PMR to be activated
 - * @param[in]  src_cos CoS to be filtered
 - * @param[in]  dst_cos CoS to be assigned to packets filtered
 - * from src_cos that match pmr_id.
 + * @param[in]  src_pktio   

[lng-odp] [PATCHv2] example: ODP classifier example

2015-04-08 Thread bala . manoharan
From: Balasubramanian Manoharan bala.manoha...@linaro.org

ODP Classifier example

This program gets PMR rules as command-line parameters and configures the
classification engine in the system.

This initial version supports the following
* ODP_PMR_SIP_ADDR pmr term
* PMR term MATCH and RANGE type
* Multiple PMR rules can be set on a single pktio interface with different
queues associated to each PMR rule
* Automatically configures a default queue and provides statistics for the same
* Prints statistics in terms of the number of packets dispatched to each queue


Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
---
v2: Incorporates review comments from Maxim
Resending V2 as there were some issues in the Linaro mail server

 configure.ac|   1 +
 example/Makefile.am |   2 +-
 example/classifier/Makefile.am  |  10 +
 example/classifier/odp_classifier.c | 789 
 4 files changed, 801 insertions(+), 1 deletion(-)
 create mode 100644 example/classifier/Makefile.am
 create mode 100644 example/classifier/odp_classifier.c

diff --git a/configure.ac b/configure.ac
index 57054c5..d8d8ca5 100644
--- a/configure.ac
+++ b/configure.ac
@@ -250,6 +250,7 @@ AM_CXXFLAGS=-std=c++11
 AC_CONFIG_FILES([Makefile
 doc/Makefile
 example/Makefile
+example/classifier/Makefile
 example/generator/Makefile
 example/ipsec/Makefile
 example/l2fwd/Makefile
diff --git a/example/Makefile.am b/example/Makefile.am
index 3021571..33b60c1 100644
--- a/example/Makefile.am
+++ b/example/Makefile.am
@@ -1 +1 @@
-SUBDIRS = generator ipsec l2fwd packet timer
+SUBDIRS = classifier generator ipsec l2fwd packet timer
diff --git a/example/classifier/Makefile.am b/example/classifier/Makefile.am
new file mode 100644
index 000..938f094
--- /dev/null
+++ b/example/classifier/Makefile.am
@@ -0,0 +1,10 @@
+include $(top_srcdir)/example/Makefile.inc
+
+bin_PROGRAMS = odp_classifier
+odp_classifier_LDFLAGS = $(AM_LDFLAGS) -static
+odp_classifier_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example
+
+noinst_HEADERS = \
+ $(top_srcdir)/example/example_debug.h
+
+dist_odp_classifier_SOURCES = odp_classifier.c
diff --git a/example/classifier/odp_classifier.c b/example/classifier/odp_classifier.c
new file mode 100644
index 000..f3dcbbb
--- /dev/null
+++ b/example/classifier/odp_classifier.c
@@ -0,0 +1,789 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#include <stdlib.h>
+#include <string.h>
+#include <getopt.h>
+#include <unistd.h>
+#include <example_debug.h>
+
+#include <odp.h>
+#include <odp/helper/linux.h>
+#include <odp/helper/eth.h>
+#include <odp/helper/ip.h>
+#include <strings.h>
+#include <stdio.h>
+
+/** @def MAX_WORKERS
+ * @brief Maximum number of worker threads
+ */
+#define MAX_WORKERS 32
+
+/** @def SHM_PKT_POOL_SIZE
+ * @brief Size of the shared memory block
+ */
+#define SHM_PKT_POOL_SIZE  (512*2048)
+
+/** @def SHM_PKT_POOL_BUF_SIZE
+ * @brief Buffer size of the packet pool buffer
+ */
+#define SHM_PKT_POOL_BUF_SIZE  1856
+
+/** @def MAX_PMR_COUNT
+ * @brief Maximum number of Classification Policy
+ */
+#define MAX_PMR_COUNT  8
+
+/** @def DISPLAY_STRING_LEN
+ * @brief Length of string used to display term value
+ */
+#define DISPLAY_STRING_LEN 32
+
+/** Get rid of path in filename - only for unix-type paths using '/' */
+#define NO_PATH(file_name) (strrchr((file_name), '/') ? \
+   strrchr((file_name), '/') + 1 : (file_name))
+
+typedef struct {
+	odp_queue_t queue;	/**< Associated queue handle */
+	odp_cos_t cos;	/**< Associated cos handle */
+	odp_pmr_t pmr;	/**< Associated pmr handle */
+	odp_atomic_u64_t packet_count;	/**< count of received packets */
+	odp_pmr_term_e term;	/**< odp pmr term value */
+	char queue_name[ODP_QUEUE_NAME_LEN];	/**< queue name */
+	odp_pmr_match_type_e match_type;	/**< pmr match type */
+	int val_sz;	/**< size of the pmr term */
+	union {
+		struct {
+			uint32_t val;	/**< pmr term value */
+			uint32_t mask;	/**< pmr term mask */
+		} match;
+		struct {
+			uint32_t val1;	/**< pmr term start range */
+			uint32_t val2;	/**< pmr term end range */
+		} range;
+	};
+	char value1[DISPLAY_STRING_LEN];	/**< Display string1 */
+	char value2[DISPLAY_STRING_LEN];	/**< Display string2 */
+} global_statistics;
+
+typedef struct {
+	global_statistics stats[MAX_PMR_COUNT];
+	int policy_count;	/**< global policy count */
+	int appl_mode;	/**< application mode */
+	odp_atomic_u64_t total_packets;	/**< total received packets */
+   int cpu_count;

Re: [lng-odp] NO ODP API for Packet classification and Packet Shaping

2015-04-06 Thread Bala Manoharan
I would like to get application use-cases for the different scenarios
where this would be useful before finalizing pool groups and their API
signature.
Also, in the above proposal it is not possible to combine multiple
existing pools to form a pool group; I like this idea as it gives the
implementation freedom to allocate and optimize the individual pools.

Regards,
Bala

On 6 April 2015 at 22:11, Bill Fischofer bill.fischo...@linaro.org wrote:
 I would call these pool groups for symmetry with queue groups and so the
 API would be odp_pool_create_group(), odp_pool_destroy_group(), etc.

 On Mon, Apr 6, 2015 at 11:35 AM, Taras Kondratiuk
 taras.kondrat...@linaro.org wrote:

 On 04/06/2015 03:31 PM, Bill Fischofer wrote:
  The /expression/ may be linear, but that doesn't imply that is how any
  given implementation needs to realize the expression.  Since PMRs are
  reasonably static, I'd expect them to be compiled into whatever native
  classification capabilities are present in the platform.  Real
  classification is typically done in parallel by the HW as the packet is
  coming off the wire.  This is necessary because one of the outputs of
  classification is pool selection, so all of this happens while the
  packet is in the HW's receive FIFO before the DMA engines are told where
  to store it.

 Following our discussion on a call.

 Choosing a pool on ingress solves two separate issues:
 1. Isolation - packets that belong to different CoS'es may need to use
separate packet pools. Pool choice is strictly deterministic.
This case is covered by current API.
 2. Segmentation optimization - application may want to sort packets of
different size into pools with different segment size. In this case
pool choice is not very strict. For example a small packet can be
allocated from a pool with big segments if small-segment pool is
empty. This case can't be expressed with a current API.

 Assigning several pools to CoS and allowing implementation to pick an
 optimal pool would be one option to solve #2.
 During a meeting Maxim proposed to use a 'composite' pool instead, and
 I like this idea more. I imagine something like this:

 /**
  * Create a composite pool
  *
  * This routine is used to create a pool which logically consists of
  * a set of sub-pools. Each sub-pool has its own configuration
  * parameters. Only pool of ODP_POOL_PACKET type can be created.
  * On allocation an implementation will choose an optimal sub-pool to
  * allocate a packet from.
  * A composite pool can be used to optimize memory usage and minimize
  * packet segmentation.
  *
  * @param name  Name of the pool, max ODP_POOL_NAME_LEN-1 chars.
  *  May be specified as NULL for anonymous pools.
  *
  * @param shm   The shared memory object in which to create the pool.
  *  Use ODP_SHM_NULL to reserve default memory type
  *  for the pool type.
  *
  * @param num_pools Number of sub-pools in a composite pool
  *
  * @param paramsArray of sub-pools' parameters.
  *
  * @return Handle of the created pool
  * @retval ODP_POOL_INVALID  Pool could not be created
  */

 odp_pool_t odp_pool_create_composite(const char *name,
odp_shm_t shm,
uint32_t num_pools,
odp_pool_param_t *params[]);


___
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp


Re: [lng-odp] NO ODP API for Packet classification and Packet Shaping

2015-04-03 Thread Bala Manoharan
On 3 April 2015 at 22:00, Taras Kondratiuk taras.kondrat...@linaro.org wrote:
 On 03/30/2015 03:34 PM, Zhujianhua wrote:

 @ Jerin Jacob:
 What will happen if odp_cos_set_pool(odp_cos_t cos_id, odp_buffer_pool_t
 pool_id) was called twice?
 Will the new pool_id replace the old one or the CoS have two pools?

 @Taras:
 Assume: set several pools per pktio interface.
 What will happen if two data plane applications share one physical
 Ethernet port to receive packets?
 Since the pool is per pktio interface, will these two applications share
 the same buffer pool?
 If there is memory leak in one application, the other application will be
 disturbed.
 Correct me if my understanding were wrong.


 That's a nice question. I'm afraid this use-case was not considered before.
 Do you want to split traffic between two applications via classifier?


 Maybe to let each CoS have more than one pool and limit the max number of
 Pool to for example 4 (Let the application designer decide how many pools
 are needed for each CoS) could be one option.

Hi,

If we are attaching multiple pools per CoS, what will be the
distribution algorithm for packets to each of the associated pools?
Will it be a simple round-robin? In that case, wouldn't it be better to
attach a single pool of bigger size to the specific CoS?

Since we are attaching pools per CoS object the application can
configure the PMR rule in such a way that packets which come from the
same interface and belong to different service can be configured to be
allocated from multiple pools by attaching to multiple CoS objects.
Pls let me know if my understanding is wrong.

Regards,
Bala


 That is a possible solution.


Re: [lng-odp] [RFC 5/8] api: packet_io: added packet input queue API definitions

2015-03-31 Thread Bala Manoharan
Hi,

Since we have the concept of a default CoS in classification, why can't
we associate the above APIs with the default CoS? That way we can avoid
linking this hash function directly to pktio.

Otherwise there is now a discrepancy as to what happens when both this
API and classification are configured at the same time.

Regards,
Bala

On 31 March 2015 at 18:57, Alexandru Badicioiu
alexandru.badici...@linaro.org wrote:


 On 31 March 2015 at 16:09, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 31 March 2015 at 14:49, Alexandru Badicioiu
 alexandru.badici...@linaro.org wrote:

 There are some issues queue groups in the classification API :

 Then I think we should clean up those issues in that API. Not invent
 similar (queue distribution) support in a different API.



 Distribution is too restrictive (based on ranges) and it may not be
 widely supported.

 Exactly to what level the classification API is supported is
 implementation specific. But perhaps some things are too specific and will
 not be widely supported. We need feedback from the SoC vendors.

 I can imagine that when a packet has been successfully matched, the CoS
 could specify a contiguous set of bits (i.e. a bit field) in the packet
 which is used for distributing matching packets to a range of queues. I know
 we discussed this but currently it is only possible to assign *one* queue to
 a CoS. Lots of things didn't make it to the 1.0 API.

 This means that when the classifier matches an inbound packet to the
 specified CoS, the packet will be distributed among the queues of the
 specified queue group based on the index value of the PMR range.  If the
 range value runs [startval..endval] for a field then a packet assigned to
 this CoS that has a field value of val in this range will be enqueued to
 queue val-startval of the queue group.

 Where is this text from? I can't find it in the header files. Is it from
 the original classification design document?

 Queue & scheduler API -
 https://docs.google.com/a/linaro.org/document/d/1h0g7OEQst_PlOauNIHKmIyqK9Bofiv9L-bn9slRaBZ8/edit#




 Packet field value matching has to be performed before doing the
 distribution. Using only packet field type in the distribution spec does not
 involve any matching (only information the provided by parser - field
 present/not present).

 Alex



 On 31 March 2015 at 14:48, Ola Liljedahl ola.liljed...@linaro.org
 wrote:

 On 31 March 2015 at 13:39, Alexandru Badicioiu
 alexandru.badici...@linaro.org wrote:

 In my understanding of the terms, distribution computes the queue id
 from the various packet fields/bytes (and maybe other data like port id)
 usually with a hashing function. Classification uses table lookup to find
 the queue id associated with a given key.Classification tables can be
 cascaded while distributions cannot. Distribution is used typically for
 spreading the load to a group of cores while classification deals with
 separating logical flows with different processing requirements (e.g.
 different IPsec SA, VLANs, etc).

 It is not a question which terms to use or what they mean.
 I claim that this distribution functionality is already supported in the
 classification API (where you can specify a range of queues as the
 destination, not only one queue). So do we need to add a similar feature to
 the pktio API?

 I think the abstraction level is too low if we think we need to add such
 a feature to different API's. How different ODP implementations implement
 this feature is a different thing. Maybe some ODP implementation only
 supports five-tuple hashing in the NIC (which corresponds to the pktio) but
 this should still be possible to control using the classification API. The
 idea is that ODP abstracts the underlying HW, not exposes it.

 Perhaps we should rename it to classification and distribution API if
 this eases the confusion?


 Alex

 On 31 March 2015 at 14:15, Ola Liljedahl ola.liljed...@linaro.org
 wrote:

 On 31 March 2015 at 10:56, Savolainen, Petri (Nokia - FI/Espoo)
 petri.savolai...@nokia.com wrote:

 Hi Alex,



 Could you explain a bit more what you mean with extracts here.
 Detailed classification (packet field + mask/range = queue / set of 
 queues)
 would be still handled with classification API. This API would offer an 
 easy
 way to spread typical flows (e.g. 5-tuple) into multiple queues (same 
 queue
 type, priority, sync, group).

 Yes I was thinking that hash-based distribution is already supported
 (or at least the rudiments are in place to support it) by the 
 classification
 API.
 Isn't this pktio support redundant? Why have it in both places?

 -- Ola





 -Petri





 From: ext Alexandru Badicioiu [mailto:alexandru.badici...@linaro.org]
 Sent: Tuesday, March 31, 2015 11:37 AM
 To: Savolainen, Petri (Nokia - FI/Espoo)
 Cc: LNG ODP Mailman List
 Subject: Re: [lng-odp] [RFC 5/8] api: packet_io: added packet input
 queue API definitions



 Hi Petri,

 I think it would be useful (for hardware which supports) to be 

[lng-odp] [PATCHv2] fix: incorrect pmr_term_value update in odp_pmr_create_xxx() function

2015-03-26 Thread bala . manoharan
From: Balasubramanian Manoharan bala.manoha...@linaro.org

Fix for incorrect pmr_term_value update in odp_pmr_create_match() and 
odp_pmr_create_range() functions.
Fixes https://bugs.linaro.org/show_bug.cgi?id=1381

Signed-off-by: Balasubramanian Manoharan bala.manoha...@linaro.org
---
v2: Fixes checkpatch issue pointed by Bill

 platform/linux-generic/odp_classification.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/platform/linux-generic/odp_classification.c b/platform/linux-generic/odp_classification.c
index 9fb034f..609faa9 100644
--- a/platform/linux-generic/odp_classification.c
+++ b/platform/linux-generic/odp_classification.c
@@ -438,6 +438,7 @@ odp_pmr_t odp_pmr_create_match(odp_pmr_term_e term,
 	pmr->s.pmr_term_value[0].mask.mask = 0;
 	memcpy(&pmr->s.pmr_term_value[0].mask.val, val, val_sz);
 	memcpy(&pmr->s.pmr_term_value[0].mask.mask, mask, val_sz);
+	pmr->s.pmr_term_value[0].mask.val &= pmr->s.pmr_term_value[0].mask.mask;
 	UNLOCK(&pmr->s.lock);
 	return id;
 }
@@ -460,7 +461,7 @@ odp_pmr_t odp_pmr_create_range(odp_pmr_term_e term,
 		return ODP_PMR_INVAL;
 
 	pmr->s.num_pmr = 1;
-	pmr->s.pmr_term_value[0].match_type = ODP_PMR_MASK;
+	pmr->s.pmr_term_value[0].match_type = ODP_PMR_RANGE;
 	pmr->s.pmr_term_value[0].term = term;
 	pmr->s.pmr_term_value[0].range.val1 = 0;
 	pmr->s.pmr_term_value[0].range.val2 = 0;
@@ -601,6 +602,8 @@ int odp_pmr_match_set_create(int num_terms, odp_pmr_match_t *terms,
 			       terms[i].mask.val, val_sz);
 			memcpy(&pmr->s.pmr_term_value[i].mask.mask,
 			       terms[i].mask.mask, val_sz);
+			pmr->s.pmr_term_value[i].mask.val &= pmr->s
+				.pmr_term_value[i].mask.mask;
 		} else {
 			val_sz = terms[i].range.val_sz;
 			if (val_sz > ODP_PMR_TERM_BYTES_MAX)
-- 
2.0.1.472.g6f92e5f




Re: [lng-odp] [RFC v3 0/4] Move the definition of odp syncronizers abstract types to platform

2015-03-24 Thread Bala Manoharan
For this series:
Reviewed-by: Bala Manoharan bala.manoha...@linaro.org

On 20 March 2015 at 17:44, Jerin Jacob jerin.ja...@caviumnetworks.com wrote:
 On Wed, Mar 18, 2015 at 11:29:03AM -0500, Bill Fischofer wrote:

 Ping

 This version looks good.  For this series:

 Reviewed-and-tested-by: Bill Fischofer bill.fischo...@linaro.org

 On Wed, Mar 18, 2015 at 9:11 AM, Jerin Jacob jerin.ja...@caviumnetworks.com
  wrote:

  Move the definition of the ODP synchronizer abstract types to the platform
  from the public header file.
  This will allow the platform to define the ODP synchronizer abstract types.
  Useful when the native SDK has definitions of the ODP synchronizers and
  the ODP implementation decides to reuse them.
 
  v1..v2 Corrected the Doxygen documentation issues identified by Petri
  v2..v3 Fixed compilation issues in 'make distcheck' identified by Bill
 
 
  Jerin Jacob (4):
spinlock: allow platform to override odp_spinlock_t
rwlock: allow platform to override odp_rwlock_t
ticketlock: allow platform to override odp_ticketlock_t
barrier: allow platform to override odp_barrier_t
 
   include/odp/api/barrier.h  |  7 +---
   include/odp/api/rwlock.h   | 11 +
   include/odp/api/spinlock.h | 10 +
   include/odp/api/ticketlock.h   | 12 +-
   platform/linux-generic/Makefile.am |  4 ++
   platform/linux-generic/include/odp/barrier.h   |  1 +
   .../linux-generic/include/odp/plat/barrier_types.h | 47
  +
   .../linux-generic/include/odp/plat/rwlock_types.h  | 48
  ++
   .../include/odp/plat/spinlock_types.h  | 46
  +
   .../include/odp/plat/ticketlock_types.h| 46
  +
   platform/linux-generic/include/odp/rwlock.h|  2 +
   platform/linux-generic/include/odp/spinlock.h  |  2 +
   platform/linux-generic/include/odp/ticketlock.h|  2 +
   13 files changed, 205 insertions(+), 33 deletions(-)
   create mode 100644 platform/linux-generic/include/odp/plat/barrier_types.h
   create mode 100644 platform/linux-generic/include/odp/plat/rwlock_types.h
   create mode 100644
  platform/linux-generic/include/odp/plat/spinlock_types.h
   create mode 100644
  platform/linux-generic/include/odp/plat/ticketlock_types.h
 
  --
  2.1.0
 
 

