Re: [lng-odp] [COMPRESSION,RFCv1]

2017-09-11 Thread Verma, Shally
Thanks Barry. Will go through these.  

Thanks
Shally

-Original Message-
From: Barry Spinney [mailto:spin...@mellanox.com] 
Sent: 09 September 2017 03:37
To: lng-odp@lists.linaro.org
Cc: Bill Fischofer ; Verma, Shally 
; Barry Spinney 
Subject: [lng-odp][COMPRESSION,RFCv1]


Here is my promised document on "Lossless Data Compression Requirements and 
Use Cases", in response to Shally's request for compression use cases.  It is 
mostly a complete first draft, with just a couple of "To Be Done" parts 
remaining.  I seem to recall some constraints on the mailing lists regarding 
the use of email attachments and rich document formats, so I chose a simple 
text file format - despite the fact that it looks a little ugly.

**


  Lossless Data Compression
 Requirements and Use Cases

   Sep 8/2017   Barry Spinney


This document describes the Use Cases that would justify a HW based 
compression/decompression engine on Mellanox SoC chips.  An API for such a 
compression engine is currently being considered for addition to the 
OpenDataPlane (ODP) standard.  For these Use Cases some basic requirements are 
also listed.

This document considers a number of uses of data compression in computer 
systems - and especially those uses that are networking related.  For a number 
of such uses, we note whether that usage would justify the addition of 
compression/decompression HW for Mellanox chips, as well as how this use case 
would affect the ODP API.  This document is organized as a taxonomy or flow 
chart of the compression/decompression categories.


1 Low Speed Uses:
-
Note that we ignore the many uses of compression that are usually done 
offline (often in SW) OR that happen in real time but at low speeds.
Here we define low speeds as those use cases that can be handled reasonably 
in SW and so don't justify special compression/decompression HW.
Today this usually means speeds below 1 Gbps.
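To make the SW-speed point concrete, here is a minimal Python sketch 
(illustrative only - not the ODP API) of a lossless DEFLATE roundtrip using 
the stdlib zlib module; at rates below roughly 1 Gbps this kind of SW path is 
usually adequate:

```python
import zlib

# A repetitive payload compresses well under DEFLATE, the algorithm
# behind zlib/gzip and one of the schemes proposed for the ODP comp API.
data = b"lossless compression test " * 1000

compressed = zlib.compress(data, level=6)
restored = zlib.decompress(compressed)

# Lossless: the roundtrip reproduces the input bit-for-bit.
assert restored == data
print(len(data), len(compressed))
```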


2 High Speed Uses:
--


2.1 Lossy versus Lossless:
--
First we distinguish the two major categories: lossy compression use cases 
and lossless compression use cases.


2.1.1 Lossy Compression:
------------------------
Lossy compression/decompression is an important use of compression on the 
Internet.  In particular, lossy audio and video compression is widespread; 
however, the algorithms used here - like those described in the MPEG standards 
- are both very different from and far more complex than the algorithms used 
for lossless data compression.  Often the compression can be done offline 
rather than in real time - though decompression generally needs to be done in 
real time.

Since the current ODP compression API is designed for lossless compression, 
the lossy use cases are ignored/irrelevant.


2.1.2 Lossless Compression:
---
Lossless data compression generally uses a small set of simpler algorithms 
- which are more amenable to HW implementation.  This is the only category 
considered for the current ODP HW.


2.1.2.1 Offline vs Real-time:
-
Lossless data compression/decompression can be done "offline" or in 
"real-time".  By real-time we mean uses where the data to be compressed or 
decompressed is arriving over a high speed network port and needs to be dealt 
with at network speeds and with low latency.


2.1.2.2 Offline:
----------------
An important offline use case could be whole file compression.  However 
this is currently often done by the OS (somewhat transparently) and implemented 
in Kernel SW.

Another example is compression/decompression of large files/archives for 
download (e.g. a compressed ISO file or a compressed tar file).
Unfortunately, while gzip and zip are still used, it is more common today to 
instead use algorithms like bzip2 and xz.  But these newer algorithms are much 
more difficult to implement in HW because of the massive history buffers that 
are used.
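The history-buffer difference can be seen directly from stdlib codecs; a 
hedged Python illustration (sizes vary by input and build) where a 64 KB block 
repeats 64 KB later - beyond DEFLATE's 32 KB window but well within the LZMA 
dictionary that xz uses:

```python
import os
import zlib
import lzma

block = os.urandom(64 * 1024)                  # incompressible 64 KB block
data = block + os.urandom(64 * 1024) + block   # same block repeats far apart

deflated = zlib.compress(data, 9)              # DEFLATE: 32 KB window
xzed = lzma.compress(data, preset=9)           # LZMA: much larger dictionary

# Both are lossless; only the ratio differs.
assert zlib.decompress(deflated) == data
assert lzma.decompress(xzed) == data

# DEFLATE cannot reference the first copy of the block (too far back),
# so it stores roughly all three 64 KB pieces; LZMA encodes the repeat
# cheaply - which is why its history buffer is so costly to put in HW.
print(len(deflated), len(xzed))
```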

While these use cases cannot justify the addition of compression/ 
decompression HW, if such HW exists for other uses, then it might be nice if 
the HW could handle this case as well.  However this should be considered a low 
priority.


2.1.2.3 Real-time:
--
Real-time use cases occur as a result of some (standardized?) network 
protocol.  These protocols can be divided into:
a) those that compress at the packet level - like IPComp and IPsec
b) those that compress at the disk sector/block level
c) those that compress at the file level
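As an illustration of category (a): IPComp (RFC 3173) compresses each IP 
datagram independently, since packets may be lost or reordered, so no history 
may be carried between packets. A Python sketch (illustrative only - not the 
ODP API) of such stateless per-packet raw DEFLATE:

```python
import zlib

def pkt_compress(payload: bytes) -> bytes:
    # Raw DEFLATE (wbits=-15): no zlib/gzip framing, and a fresh
    # compressor per packet so no history crosses packet boundaries.
    c = zlib.compressobj(level=6, wbits=-15)
    return c.compress(payload) + c.flush()

def pkt_decompress(payload: bytes) -> bytes:
    return zlib.decompress(payload, wbits=-15)

packets = [b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" * 4,
           b"\x00" * 1400]
for pkt in packets:
    wire = pkt_compress(pkt)
    # A real IPComp sender skips compression when it would not shrink
    # the packet; here we only check the lossless roundtrip.
    assert pkt_decompress(wire) == pkt
```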


2.1.2.3.1 Packet Level Compression:
---
Packet level compression/decompression - on high speed networks - is not 
that common, with perhaps

[lng-odp] Regarding github pull (RE: [Linaro/odp] [PATCH API-NEXT v12] comp: compression spec (#102))

2017-09-05 Thread Verma, Shally
HI Maxim

So that I can better take care of the problem raised below, help me 
understand this.

Currently I have two commits in my local branch: the 1st is as of today, the 
2nd is an older commit with the older spec version (already pushed to the 
branch, with pull request v1 raised on it).

So, before I push to my pull request branch, do I need to squash them, or 
will a plain push do?

As I understand it, the current push should generate v2 with the latest commit 
log, and that should suffice.
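For reference, one way to do the squash locally before updating the PR branch 
- sketched on a throwaway repo, with placeholder identity and commit messages; 
the final force-push to the real PR branch is shown only as a comment:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email shally@example.com      # placeholder identity
git config user.name "Shally Verma"

echo v1 > comp.h; git add comp.h; git commit -qm "comp: older spec version"
echo v2 > comp.h; git add comp.h; git commit -qm "comp: spec as of today"

# Squash: step the branch back one commit keeping the newer tree staged,
# then fold it into the older commit with a fresh message.
git reset --soft HEAD~1
git commit -q --amend -m "api: comp: introduce compression API"

git rev-list --count HEAD      # prints 1: a single squashed commit
# then: git push -f origin <pr-branch>   (history was rewritten)
```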


Thanks
Shally

From: Shally Verma [mailto:notificati...@github.com]
Sent: 31 August 2017 14:00
To: Linaro/odp 
Cc: Verma, Shally ; Your activity 

Subject: Re: [Linaro/odp] [PATCH API-NEXT v12] comp: compression spec (#102)

Yes, I realise that, and I tried to squash them; however, since I am new to 
GitHub, all this is taking me more time. I am trying to understand how best to 
resolve this and make it one smooth flow; until then, please bear with me on 
these issues.

If Maxim (or anyone else) doesn't mind sharing his chat ID with me, I can 
work with him and get past these issues faster.

Thanks
Shally

From: Petri Savolainen [mailto:notificati...@github.com]
Sent: 31 August 2017 13:54
To: Linaro/odp 
Cc: Verma, Shally ; Mention 

Subject: Re: [Linaro/odp] [PATCH API-NEXT v12] comp: compression spec (#102)


There should be only one API spec commit. It seems that there are now 6 
commits. Please squash all commits into one; it's pretty much impossible to 
review or comment on this spec as it stands.

Change commit subject to "api: comp: introduce compression API". All API spec 
file changing commits MUST start with "api: " prefix.

It seems that the commit log text is empty. It should give the rationale for 
the change - in this case, describe the intended usage of the new API, etc. 
E.g. five sentences on what problem the comp API solves, for which problems an 
application should use it, and so on.



Re: [lng-odp] api-next broken?

2017-08-31 Thread Verma, Shally

-Original Message-
From: Dmitry Eremin-Solenikov [mailto:dmitry.ereminsoleni...@linaro.org] 
Sent: 31 August 2017 16:13
To: shally verma ; lng-odp-forward 

Cc: Pimpalkar, Shrutika ; Verma, Shally 
; Nilla, Subrahmanyam 
Subject: Re: [lng-odp] api-next broken?

On 31/08/17 13:25, shally verma wrote:
> I was trying api-next from linaro/odp as of today and am seeing this 
> error. Am I missing anything here?
> 
> I simply use
> ./bootstrap
> ./configure
> ./make
> 
> make[2]: Entering directory
> `/home/shrutika/zip/zip-linux/83xx/odp/odp/example/ipfragreass'
>   CC   odp_ipfragreass-odp_ipfragreass.o
>   CC   odp_ipfragreass-odp_ipfragreass_fragment.o
>   CC   odp_ipfragreass-odp_ipfragreass_helpers.o
>   CC   odp_ipfragreass-odp_ipfragreass_reassemble.o
>   CCLD odp_ipfragreass
> odp_ipfragreass-odp_ipfragreass_reassemble.o: In function
> `atomic_strong_cas_dblptr':
> /home/shrutika/zip/zip-linux/83xx/odp/odp/example/ipfragreass/odp_ipfragreass_atomics.h:51:
> undefined reference to `__atomic_compare_exchange_16'
> /home/shrutika/zip/zip-linux/83xx/odp/odp/example/ipfragreass/odp_ipfragreass_atomics.h:51:
> undefined reference to `__atomic_compare_exchange_16'
> /home/shrutika/zip/zip-linux/83xx/odp/odp/example/ipfragreass/odp_ipfragreass_atomics.h:51:
> undefined reference to `__atomic_compare_exchange_16'
> collect2: error: ld returned 1 exit status
> make[2]: *** [odp_ipfragreass] Error 1

Please provide config.log and make V=1 log. Thanks.

It was resolved by linking with -latomic.
Thanks
Shally

--
With best wishes
Dmitry


Re: [lng-odp] Regarding github pull request

2017-08-03 Thread Verma, Shally


From: Bill Fischofer [mailto:bill.fischo...@linaro.org]
Sent: 03 August 2017 17:54
To: shally verma 
Cc: Dmitry Eremin-Solenikov ; Challa, 
Mahipal ; lng-odp-forward 
; Narayana, Prasad Athreya 
; Verma, Shally ; 
Attunuru, Vamsi 
Subject: Re: [lng-odp] Regarding github pull request

GitHub comments should be echoed to the ODP mailing list, so if you're 
subscribed you should see them.

I too assumed so, but it didn't work that way. I got the PATCH email but not 
the review comments on it. I had to add my email ID to the GitHub email 
settings.


On Thu, Aug 3, 2017 at 12:40 AM, shally verma 
mailto:shallyvermacav...@gmail.com>> wrote:
On Wed, Aug 2, 2017 at 10:04 PM, shally verma
mailto:shallyvermacav...@gmail.com>> wrote:
> On Wed, Aug 2, 2017 at 6:03 PM, Dmitry Eremin-Solenikov
> mailto:dmitry.ereminsoleni...@linaro.org>> 
> wrote:
>> On 2 August 2017 at 15:18, shally verma 
>> mailto:shallyvermacav...@gmail.com>> wrote:
>>> On Wed, Aug 2, 2017 at 4:37 PM, Maxim Uvarov 
>>> mailto:maxim.uva...@linaro.org>> wrote:
>>>>
>>>>
>>>> On 2 August 2017 at 14:05, Dmitry Eremin-Solenikov
>>>> mailto:dmitry.ereminsoleni...@linaro.org>>
>>>>  wrote:
>>>>>
>>>>> On 02/08/17 13:56, shally verma wrote:
>>>>> > On Wed, Aug 2, 2017 at 4:15 PM, Maxim Uvarov 
>>>>> > mailto:maxim.uva...@linaro.org>>
>>>>> > wrote:
>>>>> >>
>>>>> >>
>>>>> >> On 2 August 2017 at 13:35, shally verma 
>>>>> >> mailto:shallyvermacav...@gmail.com>>
>>>>> >> wrote:
>>>>> >>>
>>>>> >>> Hi
>>>>> >>>
>>>>> >>> Based on discussion in yesterday's odp public call, I was trying to
>>>>> >>> exercise github pull request feature for patch submission. But running
>>>>> >>> into some doubts.
>>>>> >>>
>>>>> >>> -Should I create a pull request from a main odp.git repository like as
>>>>> >>> explained here
>>>>> >>> https://help.github.com/articles/creating-a-pull-request-from-a-fork/
>>>>> >>> OR from my forked repository?
>>>>> >>>
>>>>> >>> Ex. I forked odp.git --> 1234sv/odp, where upstream is set to odp.git,
>>>>> >>> then both repos allow me to create a pull request to main odp.git repo
>>>>> >>> but I dont know which way is preferred over other.
>>>>> >>>
>>>>> >>
>>>>> >> pull request to main repo. Link above is correct.
>>>>> >>
>>>>> >>
>>>>> >>>
>>>>> >>> - In another email, Bill mentioned:
>>>>> >>>
>>>>> >>> "Every time a revision of a patch is submitted, increment the version.
>>>>> >>> This
>>>>> >>> is done automatically if you use a GitHub pull request, "
>>>>> >>>
>>>>> >>> So, if I create a pull request  where previous versions were
>>>>> >>> manual(like in case of comp spec) then we need to mention subject
>>>>> >>> [API-NEXT PATCH v5] on pull request edit?
>>>>> >>
>>>>> >>
>>>>> >> yes, if you want to start with v5 then set this version in subject of
>>>>> >> pull
>>>>> >> request.
>>>>> >>
>>>>> > I just created sample pull request with Subject [API-NEXT PATCH v5]
>>>>> > but it changed to [PATCH API-NEXT v6] followed by my subject . So it
>>>>> > look like:
>>>>> >
>>>>> > [PATCH API-NEXT v6] comp: compression spec.
>>>>>
>>>>> Version will autoincrement on each pull request update (initial
>>>>> submission also counts). So, if you would like to send your PR as v5,
>>>>> please change title now to v5. If you will push an update, title will
>>>>> automatically be updated to v6
>>>>>
>>>>> > Also I did not get any email for same. Something is pending?
>>>>>
>>>>> Yes. Maxim's script sends e-mails hourly.
>>>>>
>>>>> > How do we add other intended recipient to this patch ? outside of

Re: [lng-odp] [RFC, API-NEXT v3 1/1] comp: compression interface

2017-06-01 Thread Verma, Shally
Regards
Shally

-Original Message-
From: Savolainen, Petri (Nokia - FI/Espoo) [mailto:petri.savolai...@nokia.com] 
Sent: 01 June 2017 18:20
To: Shally Verma ; lng-odp@lists.linaro.org
Cc: Challa, Mahipal ; Narayana, Prasad Athreya 
; Verma, Shally 
Subject: RE: [lng-odp] [RFC, API-NEXT v3 1/1] comp: compression interface



> -Original Message-
> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of 
> Shally Verma
> Sent: Monday, May 22, 2017 9:55 AM
> To: lng-odp@lists.linaro.org
> Cc: Mahipal Challa ; pathr...@cavium.com; Shally 
> Verma 
> Subject: [lng-odp] [RFC, API-NEXT v3 1/1] comp: compression interface
> 

A short description (5-10 lines) about the new API should be added here.

For example: what kind of compression this API offers. How it's used. Which 
kind of applications typical use it.

The current ODP comp interface is designed to support widely used lossless 
data compression schemes: deflate, zlib and lzs.
Typical applications include (but are not limited to) IPComp and other 
application- and transport-layer protocols such as HTTP compression.

Changes from V2:
- Added separate APIs for sync and async operation
- Added a separate API to load a dictionary
- Removed SHA-only support from the compression interface
- Renamed SHA operations as hash operations
- Added a reference to odp_packet_data_range_t for accessing the packet buffer


> Signed-off-by: Shally Verma 
> Signed-off-by: Mahipal Challa 
> ---
>  include/odp/api/spec/comp.h | 740
> 
>  1 file changed, 740 insertions(+)
> 
> diff --git a/include/odp/api/spec/comp.h b/include/odp/api/spec/comp.h 
> new file mode 100644 index 000..6c13ad4
> --- /dev/null
> +++ b/include/odp/api/spec/comp.h
> @@ -0,0 +1,740 @@
> +/*
> + */
> +
> +/**
> + * @file
> + *
> + * ODP Compression
> + */
> +
> +#ifndef ODP_API_COMP_H_
> +#define ODP_API_COMP_H_
> +
> +#include 
> +#include 
> +#include 
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/** @defgroup odp_compression ODP COMP
> + *  ODP Compression defines interface to compress/decompress along 
> +with
> hash
> + *  operations.
> + *
> + *  Compression here implicilty refer to both compression and
> decompression.

Typo: implicitly.

Actually, you could just say: "Compression API supports both compression and 
decompression operations."


> + *  Example, compression algo 'deflate' mean both 'deflate' and
> 'inflate'.
> + *
> + *  if opcode = ODP_COMP_COMPRESS, then it will Compress and/or apply
> hash,
> + *  if opcode = ODP_COMP_DECOMPRESS, then it will Decompress and/or 
> + apply
> + *  hash.

I think it would be better to have separate operations for compress and 
decompress. An application would typically call one on the ingress side and the 
other on the egress side, or maybe only one of them. Dual functionality would 
make sense if every second packet on the same code path were compressed and 
every second one decompressed, but I think that's not the case.


> + *
> + *  Current version of interface allow Compression ONLY,and
> + *  both Compression + hash ONLY sessions.

What is this comment trying to say? That decompression is not supported at 
all? Or that "hashing only" is not supported yet? "Hashing only" should not be 
a target for this API.



> + *
> + *  Macros, enums, types and operations to utilise compression.
> + *  @{
> + */
> +
> +/**
> + * @def ODP_COMP_SESSION_INVALID
> + * Invalid session handle
> + */
> +
> +/**
> + * @typedef odp_comp_session_t (platform dependent)
> + * Comp API opaque session handle

"Compression session handle"

No need to mention opaque, as all ODP handles are opaque.


> + */
> +
> +/**
> + * @typedef odp_comp_compl_t
> +* Compression API completion event (platform dependent)

Remove "platform dependent", all opaque types are like that.


> +*/
> +
> +/**
> + * Compression API operation mode
> + */
> +typedef enum {
> + /** Synchronous, return results immediately */
> + ODP_COMP_SYNC,
> + /** Asynchronous, return results via queue event */
> + ODP_COMP_ASYNC,
> +} odp_comp_op_mode_t;
> +
> +/**
> + * Comp API operation type.
> + *
> + */
> +typedef enum {
> + /** Compress and/or Compute digest  */
> + ODP_COMP_OP_COMPRESS,
> + /** Decompress and/or Compute digest */
> + ODP_COMP_OP_DECOMPRESS,
> +} odp_comp_op_t;

Maybe we should get rid of this. In any case, "*or* compute digest" is wrong - 
digests don't need to be mentioned here at all.

> +
> +/**
> + * Comp API hash algorithm
> + *
> + */
> +typedef enum {
> + /** ODP_COMP_HASH_ALG_NONE*/
> + ODP_COMP_HASH_ALG_NONE,

Re: [lng-odp] [RFC, API-NEXT v3 1/1] comp: compression interface

2017-05-23 Thread Verma, Shally


-Original Message-
From: Dmitry Eremin-Solenikov [mailto:dmitry.ereminsoleni...@linaro.org] 
Sent: 23 May 2017 20:54
To: Shally Verma ; lng-odp@lists.linaro.org
Cc: Challa, Mahipal ; Narayana, Prasad Athreya 
; Verma, Shally 
Subject: Re: [lng-odp] [RFC, API-NEXT v3 1/1] comp: compression interface

On 22.05.2017 09:54, Shally Verma wrote:
> Signed-off-by: Shally Verma 
> Signed-off-by: Mahipal Challa 

> +/**
> + * Comp API operation return codes
> + *
> + */
> +typedef enum {
> + /** Operation completed successfully*/
> + ODP_COMP_ERR_NONE,
> + /** Invalid user data pointers*/
> + ODP_COMP_ERR_DATA_PTR,
> + /** Invalid input data size*/
> + ODP_COMP_ERR_DATA_SIZE,
> + /**  Compression and/or hash Algo failure*/
> + ODP_COMP_ERR_ALGO_FAIL,
> + /** Operation paused due to insufficient output buffer.
> + *
> + * This is not an error condition. On seeing this situation,
> + * Implementation should maintain context of in-progress operation and
> + * application should call packet processing API again with valid
> + * output buffer but no other altercation to operation params

Probably you meant alteration here?
Ya, my bad. Thanks for pointing that out. 

> + * (odp_comp_op_param_t).
> + *
> + * if using async mode, application should either make sure to
> + * provide sufficient output buffer size OR maintain relevant
> + * context (or ordering) information with respect to each input packet
> + * enqueued for processing
> + *
> + */
> + ODP_COMP_ERR_OUT_OF_SPACE,
> + /** Error if operation has been requested in an invalid state */
> + ODP_COMP_ERR_INV_STATE,
> + /** Error if API call does not support input params or mode. */
> + ODP_COMP_ERR_NOT_SUPPORTED
> +} odp_comp_err_t;
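The resumable OUT_OF_SPACE semantics described above have a direct analogue in 
zlib's streaming interface; a Python sketch (illustrative only - not the ODP 
API) where decompression pauses whenever the bounded output buffer fills, 
keeps its context, and is simply called again with no other parameter changes:

```python
import zlib

data = b"x" * 100_000
compressed = zlib.compress(data)

d = zlib.decompressobj()
out = bytearray()
# Cap each call at 4 KB of output, mimicking a small output buffer:
# the codec retains its in-progress context between calls.
out += d.decompress(compressed, 4096)
while not d.eof:
    # Resume with the not-yet-consumed input and another 4 KB budget.
    out += d.decompress(d.unconsumed_tail, 4096)

assert bytes(out) == data
```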

[...]

> +typedef struct odp_comp_capability_t {
> + /** Maximum number of  sessions*/
> + uint32_t max_sessions;
> +
> + /** Supported compression algorithms*/
> + odp_comp_algos_t comps;
> +
> + /** Supported hash algorithms*/
> + odp_comp_hash_algos_t   hash;
> +
> + /** Support level for synchronous operation mode (ODP_COMP_SYNC)
> + *   User should set odp_comp_session_param_t:mode based on
> + *   support level as indicated by this param.
> + *   true - mode supported,
> + *   false - mode not supported

True/false part is no longer relevant
Ack. Will rectify.
> + */
> + odp_support_t  sync;
> +
> + /** Support level for asynchronous operation mode (ODP_COMP_ASYNC)
> + *   User should set odp_comp_session_param_t:mode param based on
> + *   support level as indicated by this param.
> + *   true - mode supported,
> + *   false - mode not supported

Same
Ack. 
> + *
> + */
> + odp_support_t async;
> +} odp_comp_capability_t;
> +
> +/**
> + * Hash algorithm capabilities
> + *
> + */
> +typedef struct odp_comp_hash_alg_capability_t {
> + /** Digest length in bytes */
> + uint32_t digest_len;
> +} odp_comp_hash_alg_capability_t;
> +
> +/**
> + * Compression algorithm capabilities
> + * structure for each algorithm.
> + *
> + */
> +typedef struct odp_comp_alg_capability_t {
> + /** Boolean indicating alg support dictionary load
> + *
> + * true: yes
> + * false : no
> + *
> + * the dictionary, if supported, consists of a pointer to a character
> + * array
> + */
> + odp_bool_t support_dict;

Can you switch to odp_support_t here?
Ack.
> +/**
> + * Comp API algorithm specific parameters
> + *
> + */
> +typedef struct odp_comp_alg_param_t {
> + struct comp_alg_def_param {
> + /** compression level where
> + ODP_COMP_LEVEL_MIN <= level <= ODP_COMP_LEVEL_MAX
> + */
> + odp_comp_level_t level;
> + /** huffman code to use */
> + odp_comp_huffman_code_t comp_code;
> + } deflate;
> + struct comp_alg_zlib_param {
> + /** deflate algo params */
> + struct comp_alg_def_param def;
> + } zlib;
> +} odp_comp_alg_param_t;

Would it be more logical to change this to union?
No. I need this as a structure, as I anticipate many other algorithms having 
deflate as one of their fields. 
> +
> +/**
> + * Comp API data range specifier
> + *
> + */
> +typedef union odp_comp_data_t {
> + struct odp_comp_pkt {
> + /** Packet handle for the input data */
> + odp_packet_t packet;
> +
> + /** Range of data within the packet to operate on */
> + odp_packet_data_range_t data_range;
> + } packet;
> +} odp_comp_data_t;

The plan is probably to support buf+len here?
Yes. At later stage.

Thanks
Shally

I have to think about this.


-- 
With best wishes
Dmitry


Re: [lng-odp] [Bug 2895] odp_crypto_operation() does not work with multi-segment packets

2017-05-22 Thread Verma, Shally


-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of 
bugzilla-dae...@bugs.linaro.org
Sent: 22 May 2017 11:09
To: lng-odp@lists.linaro.org
Subject: [lng-odp] [Bug 2895] odp_crypto_operation() does not work with 
multi-segment packets

https://bugs.linaro.org/show_bug.cgi?id=2895

--- Comment #4 from Dmitry Eremin-Solenikov  
--- While working on this issue, I have hit an OpenSSL bug (see 
https://github.com/openssl/openssl/issues/3516). Basically this means that it 
is not possible to properly decode packets segment-by-segment: OpenSSL will lie 
about the amount of data it has decoded.
Do we know the reason behind this? Why does it fail to return the correct 
length?

Thanks
Shally
In our case we can do one of the following things:

- Copy the packet data to a temp buffer, then decode it in a single pass.
- Or switch to another crypto library. The bug is present in at least both the 
1.0.x and 1.1.x branches. I suspect it will take ages to get all systems 
upgraded after OpenSSL provides a fix.



Re: [lng-odp] Per-packet IV specification

2017-05-15 Thread Verma, Shally

-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Dmitry 
Eremin-Solenikov
Sent: 16 May 2017 09:09
To: lng-odp-forward 
Subject: [lng-odp] Per-packet IV specification

Hello,

I'd like to clarify the crypto API regarding per-packet IV specification. Is 
it expected that a meaningful iv.length is specified at session creation time 
(with NULL == iv.data)?

Current linux-generic crypto code expects that if (NULL == iv.data) then also 
(0 == iv.length).

Two possible proposals:

 - Add override_iv_length to odp_crypto_op_param_t structure

OR

 - Enforce that an application always provides a correct iv.length at session 
creation time.
I don't see a reason why the length should be considered if the IV itself is 
NULL. The usage is that the user passes the IV in a buffer with its length 
specified.
So to me this second option looks like a valid approach.

Thanks
Shally
Any thoughts?

--
With best wishes
Dmitry


Re: [lng-odp] [API-NEXT PATCH] api: packet: introduce odp_packet_data_range_t

2017-05-02 Thread Verma, Shally
Hi Dmitry

We are okay with the proposed change. We will change the compression interface 
accordingly.
Any idea when it is planned to be accepted and merged into api-next?

Thanks
Shally

-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Verma, 
Shally
Sent: 26 April 2017 16:53
To: Dmitry Eremin-Solenikov ; 
lng-odp@lists.linaro.org
Cc: Challa, Mahipal ; Narayana, Prasad Athreya 

Subject: Re: [lng-odp] [API-NEXT PATCH] api: packet: introduce 
odp_packet_data_range_t


-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Dmitry 
Eremin-Solenikov
Sent: 25 April 2017 22:01
To: lng-odp@lists.linaro.org
Subject: [lng-odp] [API-NEXT PATCH] api: packet: introduce 
odp_packet_data_range_t

Rename odp_crypto_data_range_t to odp_packet_data_range_t, as it is relevant 
not only to the crypto interface.

Signed-off-by: Dmitry Eremin-Solenikov 
---
 include/odp/api/spec/crypto.h   | 17 +++--
 include/odp/api/spec/packet.h   | 12 
 .../validation/api/crypto/odp_crypto_test_inp.c |  4 ++--
 test/common_plat/validation/api/crypto/test_vectors.h   |  4 ++--
 4 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/include/odp/api/spec/crypto.h b/include/odp/api/spec/crypto.h 
index d30f050f..c216c017 100644
--- a/include/odp/api/spec/crypto.h
+++ b/include/odp/api/spec/crypto.h
@@ -19,6 +19,8 @@
 extern "C" {
 #endif
 
+#include 
+
 /** @defgroup odp_crypto ODP CRYPTO
  *  Macros, enums, types and operations to utilise crypto.
  *  @{
@@ -238,15 +240,10 @@ typedef struct odp_crypto_iv {
 
 /**
  * Crypto API data range specifier
+ *
+ * @deprecated  Use odp_packet_data_range_t instead
  */
-typedef struct odp_crypto_data_range {
-   /** Offset from beginning of packet */
-   uint32_t offset;
-
-   /** Length of data to operate on */
-   uint32_t length;
-
-} odp_crypto_data_range_t;
+#define odp_crypto_data_range_t odp_packet_data_range_t

This can be: typedef odp_packet_data_range_t odp_crypto_data_range_t;
No other change to the code should be required after this.

Will get back on this after internal review.
Thanks
Shally

 /**
  * Crypto API session creation parameters @@ -365,10 +362,10 @@ typedef struct 
odp_crypto_op_param_t {
uint32_t hash_result_offset;
 
/** Data range to apply cipher */
-   odp_crypto_data_range_t cipher_range;
+   odp_packet_data_range_t cipher_range;
 
/** Data range to authenticate */
-   odp_crypto_data_range_t auth_range;
+   odp_packet_data_range_t auth_range;
 
 } odp_crypto_op_param_t;
 
diff --git a/include/odp/api/spec/packet.h b/include/odp/api/spec/packet.h 
index 5439f234..95f5349b 100644
--- a/include/odp/api/spec/packet.h
+++ b/include/odp/api/spec/packet.h
@@ -71,6 +71,18 @@ extern "C" {
   * Packet is red
   */
 
+/**
+ * Packet API data range specifier
+ */
+typedef struct odp_packet_data_range {
+   /** Offset from beginning of packet */
+   uint32_t offset;
+
+   /** Length of data to operate on */
+   uint32_t length;
+
+} odp_packet_data_range_t;
+
 /*
  *
  * Alloc and free
diff --git a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c 
b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c
index 42149ac6..dfd4f122 100644
--- a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c
+++ b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c
@@ -71,8 +71,8 @@ static void alg_test(odp_crypto_op_t op,
 odp_crypto_key_t cipher_key,
 odp_auth_alg_t auth_alg,
 odp_crypto_key_t auth_key,
-odp_crypto_data_range_t *cipher_range,
-odp_crypto_data_range_t *auth_range,
+odp_packet_data_range_t *cipher_range,
+odp_packet_data_range_t *auth_range,
 const uint8_t *plaintext,
 unsigned int plaintext_len,
 const uint8_t *ciphertext,
diff --git a/test/common_plat/validation/api/crypto/test_vectors.h 
b/test/common_plat/validation/api/crypto/test_vectors.h
index da4610f3..a1cf4faf 100644
--- a/test/common_plat/validation/api/crypto/test_vectors.h
+++ b/test/common_plat/validation/api/crypto/test_vectors.h
@@ -139,14 +139,14 @@ static uint8_t 
aes128_gcm_reference_iv[][AES128_GCM_IV_LEN] = {
 
 static uint32_t aes128_gcm_reference_length[] = { 84, 72, 72, 40};
 
-static odp_crypto_data_range_t aes128_gcm_cipher_range[] = {
+static odp_packet_data_range_t aes128_gcm_cipher_range[] = {
{ .offset = 12, .length = 72 },
{ .offset = 8, .length = 64 },
{ .offset = 8, .length = 64 },
{ .offset = 12, .length = 28 },
 };
 
-static odp_crypto_data_range_t aes128_gcm_auth_range[] = {
+static odp_packet_data_range_t aes128_gcm_auth_range[] = {
{ .offset = 0, .length = 84 },
 

Re: [lng-odp] [API-NEXT PATCH] api: packet: introduce odp_packet_data_range_t

2017-04-26 Thread Verma, Shally

-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Dmitry 
Eremin-Solenikov
Sent: 25 April 2017 22:01
To: lng-odp@lists.linaro.org
Subject: [lng-odp] [API-NEXT PATCH] api: packet: introduce 
odp_packet_data_range_t

Rename odp_crypto_data_range_t to odp_packet_data_range_t, as it is relevant 
not only to the crypto interface.

Signed-off-by: Dmitry Eremin-Solenikov 
---
 include/odp/api/spec/crypto.h   | 17 +++--
 include/odp/api/spec/packet.h   | 12 
 .../validation/api/crypto/odp_crypto_test_inp.c |  4 ++--
 test/common_plat/validation/api/crypto/test_vectors.h   |  4 ++--
 4 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/include/odp/api/spec/crypto.h b/include/odp/api/spec/crypto.h 
index d30f050f..c216c017 100644
--- a/include/odp/api/spec/crypto.h
+++ b/include/odp/api/spec/crypto.h
@@ -19,6 +19,8 @@
 extern "C" {
 #endif
 
+#include 
+
 /** @defgroup odp_crypto ODP CRYPTO
  *  Macros, enums, types and operations to utilise crypto.
  *  @{
@@ -238,15 +240,10 @@ typedef struct odp_crypto_iv {
 
 /**
  * Crypto API data range specifier
+ *
+ * @deprecated  Use odp_packet_data_range_t instead
  */
-typedef struct odp_crypto_data_range {
-   /** Offset from beginning of packet */
-   uint32_t offset;
-
-   /** Length of data to operate on */
-   uint32_t length;
-
-} odp_crypto_data_range_t;
+#define odp_crypto_data_range_t odp_packet_data_range_t

This can be: typedef odp_packet_data_range_t odp_crypto_data_range_t;
No other change to the code should be required after this.

Will get back on this after internal review.
Thanks
Shally

 /**
  * Crypto API session creation parameters @@ -365,10 +362,10 @@ typedef struct 
odp_crypto_op_param_t {
uint32_t hash_result_offset;
 
/** Data range to apply cipher */
-   odp_crypto_data_range_t cipher_range;
+   odp_packet_data_range_t cipher_range;
 
/** Data range to authenticate */
-   odp_crypto_data_range_t auth_range;
+   odp_packet_data_range_t auth_range;
 
 } odp_crypto_op_param_t;
 
diff --git a/include/odp/api/spec/packet.h b/include/odp/api/spec/packet.h 
index 5439f234..95f5349b 100644
--- a/include/odp/api/spec/packet.h
+++ b/include/odp/api/spec/packet.h
@@ -71,6 +71,18 @@ extern "C" {
   * Packet is red
   */
 
+/**
+ * Packet API data range specifier
+ */
+typedef struct odp_packet_data_range {
+   /** Offset from beginning of packet */
+   uint32_t offset;
+
+   /** Length of data to operate on */
+   uint32_t length;
+
+} odp_packet_data_range_t;
+
 /*
  *
  * Alloc and free
diff --git a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c 
b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c
index 42149ac6..dfd4f122 100644
--- a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c
+++ b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c
@@ -71,8 +71,8 @@ static void alg_test(odp_crypto_op_t op,
 odp_crypto_key_t cipher_key,
 odp_auth_alg_t auth_alg,
 odp_crypto_key_t auth_key,
-odp_crypto_data_range_t *cipher_range,
-odp_crypto_data_range_t *auth_range,
+odp_packet_data_range_t *cipher_range,
+odp_packet_data_range_t *auth_range,
 const uint8_t *plaintext,
 unsigned int plaintext_len,
 const uint8_t *ciphertext,
diff --git a/test/common_plat/validation/api/crypto/test_vectors.h 
b/test/common_plat/validation/api/crypto/test_vectors.h
index da4610f3..a1cf4faf 100644
--- a/test/common_plat/validation/api/crypto/test_vectors.h
+++ b/test/common_plat/validation/api/crypto/test_vectors.h
@@ -139,14 +139,14 @@ static uint8_t 
aes128_gcm_reference_iv[][AES128_GCM_IV_LEN] = {
 
 static uint32_t aes128_gcm_reference_length[] = { 84, 72, 72, 40};
 
-static odp_crypto_data_range_t aes128_gcm_cipher_range[] = {
+static odp_packet_data_range_t aes128_gcm_cipher_range[] = {
{ .offset = 12, .length = 72 },
{ .offset = 8, .length = 64 },
{ .offset = 8, .length = 64 },
{ .offset = 12, .length = 28 },
 };
 
-static odp_crypto_data_range_t aes128_gcm_auth_range[] = {
+static odp_packet_data_range_t aes128_gcm_auth_range[] = {
{ .offset = 0, .length = 84 },
{ .offset = 0, .length = 72 },
{ .offset = 0, .length = 72 },
--
2.11.0



Re: [lng-odp] [RFC, API-NEXT v2 1/1] comp:compression interface

2017-04-19 Thread Verma, Shally


-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Verma, 
Shally
Sent: 19 April 2017 19:30
To: Dmitry Eremin-Solenikov ; Shally Verma 
; lng-odp@lists.linaro.org
Cc: Challa, Mahipal ; Masood, Faisal 
; Narayana, Prasad Athreya 

Subject: Re: [lng-odp] [RFC, API-NEXT v2 1/1] comp:compression interface



-Original Message-
From: Dmitry Eremin-Solenikov [mailto:dmitry.ereminsoleni...@linaro.org]
Sent: 19 April 2017 17:36
To: Shally Verma ; lng-odp@lists.linaro.org
Cc: Challa, Mahipal ; Masood, Faisal 
; Narayana, Prasad Athreya 
; Verma, Shally 
Subject: Re: [lng-odp] [RFC, API-NEXT v2 1/1] comp:compression interface

On 19.04.2017 13:00, Shally Verma wrote:
> An API set to add compression/decompression support in ODP interface.
> 
> Signed-off-by: Shally Verma 
> Signed-off-by: Mahipal Challa 
> ---
>  include/odp/api/spec/comp.h | 748
> 
>  1 file changed, 748 insertions(+)
> 
> diff --git a/include/odp/api/spec/comp.h b/include/odp/api/spec/comp.h 
> new file mode 100644 index 000..65feacf
> --- /dev/null
> +++ b/include/odp/api/spec/comp.h
> @@ -0,0 +1,748 @@
> +/*
> + */
> +
> +/**
> + * @file
> + *
> + * ODP Compression
> + */
> +
> +#ifndef ODP_API_COMP_H_
> +#define ODP_API_COMP_H_
> +#include 
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/** @defgroup odp_compression ODP COMP
> + *  ODP Compression defines interface to compress/decompress and 
> +authenticate
> + *  data.
> + *
> + *  Compression here implicitly refers to both compression and decompression.
> + *  For example, compression algo 'deflate' means both 'deflate' and 'inflate'.
> + *
> + *  if opcode = ODP_COMP_COMPRESS, then it will Compress,
> + *  if opcode = ODP_COMP_DECOMPRESS, then it will Decompress.
> + *
> + *  The current version of the interface allows Compression-only,
> + *  Authentication-only, or combined Compression + Auth sessions.
> + *
> + *  Macros, enums, types and operations to utilise compression.
> + *  @{
> + */
> +
> +/**
> + * @def ODP_COMP_SESSION_INVALID
> + * Invalid session handle
> + */
> +
> +/**
> + * @typedef odp_comp_session_t (platform dependent)
> + * Comp API opaque session handle
> + */
> +
> +/**
> + * @typedef odp_comp_compl_t
> +* Compression API completion event (platform dependent) */
> +
> +/**
> + * Compression API operation mode
> + */
> +typedef enum {
> + /** Synchronous, return results immediately */
> + ODP_COMP_SYNC,
> + /** Asynchronous, return results via queue event */
> + ODP_COMP_ASYNC,
> +} odp_comp_op_mode_t;
> +
> +/**
> + * Comp API operation type.
> + * Ignored for Authentication ONLY.
> + *
> + */
> +typedef enum {
> + /** Compress and/or Compute digest  */
> + ODP_COMP_OP_COMPRESS,
> + /** Decompress and/or Compute digest */
> + ODP_COMP_OP_DECOMPRESS,
> +} odp_comp_op_t;
> +
> +/**
> + * Comp API authentication algorithm
> + *
> + */
> +typedef enum {
> + /** No authentication algorithm specified */
> + ODP_COMP_AUTH_ALG_NULL,
> + /** ODP_COMP_AUTH_ALG_SHA1*/
> + ODP_COMP_AUTH_ALG_SHA1,
> + /**  ODP_COMP_AUTH_ALG_SHA256*/
> + ODP_COMP_AUTH_ALG_SHA256
> +} odp_comp_auth_alg_t;

Why do you need special comp_auth_alg instead of just odp_auth_alg_t?
Shally - To differentiate that these algorithms are specifically part of the 
compression interface, as crypto also has them enumerated.
Correction - Crypto doesn't enumerate them "as is" in current api-next as I see 
it.
> +
> +/**
> + * Comp API compression algorithm
> + *
> + */
> +typedef enum {
> + /** No algorithm specified.
> + * Means no compression, no output provided.
> + */
> + ODP_COMP_ALG_NULL,
> + /** DEFLATE -
> + * implicit Inflate in case of decode operation.
> + */
> + ODP_COMP_ALG_DEFLATE,
> + /** ZLIB-RFC1950 */
> + ODP_COMP_ALG_ZLIB,
> + /** LZS*/
> + ODP_COMP_ALG_LZS,
> +} odp_comp_alg_t;
> +
> +/**
> + * Comp API session creation return code
> + *
> + */
> +typedef enum {
> + /** Session created */
> + ODP_COMP_SES_CREATE_ERR_NONE,
> + /** Creation failed, no resources */
> + ODP_COMP_SES_CREATE_ERR_ENOMEM,
> + /** Creation failed, bad compression params */
> + ODP_COMP_SES_CREATE_ERR_INV_COMP,
> + /** Creation failed, bad auth params */
> + ODP_COMP_SES_CREATE_ERR_INV_AUTH,
> + /** Creation failed,requested configuration not supported*/
> + ODP_COMP_SES_CREATE_ERR_NOT_SUPPORTED
> +} odp_comp_ses_create_er

Re: [lng-odp] [RFC, API-NEXT v2 1/1] comp:compression interface

2017-04-19 Thread Verma, Shally


-Original Message-
From: Dmitry Eremin-Solenikov [mailto:dmitry.ereminsoleni...@linaro.org] 
Sent: 19 April 2017 17:36
To: Shally Verma ; lng-odp@lists.linaro.org
Cc: Challa, Mahipal ; Masood, Faisal 
; Narayana, Prasad Athreya 
; Verma, Shally 
Subject: Re: [lng-odp] [RFC, API-NEXT v2 1/1] comp:compression interface

On 19.04.2017 13:00, Shally Verma wrote:
> An API set to add compression/decompression support in ODP interface.
> 
> Signed-off-by: Shally Verma 
> Signed-off-by: Mahipal Challa 
> ---
>  include/odp/api/spec/comp.h | 748 
> 
>  1 file changed, 748 insertions(+)
> 
> diff --git a/include/odp/api/spec/comp.h b/include/odp/api/spec/comp.h 
> new file mode 100644 index 000..65feacf
> --- /dev/null
> +++ b/include/odp/api/spec/comp.h
> @@ -0,0 +1,748 @@
> +/*
> + */
> +
> +/**
> + * @file
> + *
> + * ODP Compression
> + */
> +
> +#ifndef ODP_API_COMP_H_
> +#define ODP_API_COMP_H_
> +#include 
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/** @defgroup odp_compression ODP COMP
> + *  ODP Compression defines interface to compress/decompress and 
> +authenticate
> + *  data.
> + *
> + *  Compression here implicitly refers to both compression and decompression.
> + *  For example, compression algo 'deflate' means both 'deflate' and 'inflate'.
> + *
> + *  if opcode = ODP_COMP_COMPRESS, then it will Compress,
> + *  if opcode = ODP_COMP_DECOMPRESS, then it will Decompress.
> + *
> + *  The current version of the interface allows Compression-only,
> + *  Authentication-only, or combined Compression + Auth sessions.
> + *
> + *  Macros, enums, types and operations to utilise compression.
> + *  @{
> + */
> +
> +/**
> + * @def ODP_COMP_SESSION_INVALID
> + * Invalid session handle
> + */
> +
> +/**
> + * @typedef odp_comp_session_t (platform dependent)
> + * Comp API opaque session handle
> + */
> +
> +/**
> + * @typedef odp_comp_compl_t
> +* Compression API completion event (platform dependent) */
> +
> +/**
> + * Compression API operation mode
> + */
> +typedef enum {
> + /** Synchronous, return results immediately */
> + ODP_COMP_SYNC,
> + /** Asynchronous, return results via queue event */
> + ODP_COMP_ASYNC,
> +} odp_comp_op_mode_t;
> +
> +/**
> + * Comp API operation type.
> + * Ignored for Authentication ONLY.
> + *
> + */
> +typedef enum {
> + /** Compress and/or Compute digest  */
> + ODP_COMP_OP_COMPRESS,
> + /** Decompress and/or Compute digest */
> + ODP_COMP_OP_DECOMPRESS,
> +} odp_comp_op_t;
> +
> +/**
> + * Comp API authentication algorithm
> + *
> + */
> +typedef enum {
> + /** No authentication algorithm specified */
> + ODP_COMP_AUTH_ALG_NULL,
> + /** ODP_COMP_AUTH_ALG_SHA1*/
> + ODP_COMP_AUTH_ALG_SHA1,
> + /**  ODP_COMP_AUTH_ALG_SHA256*/
> + ODP_COMP_AUTH_ALG_SHA256
> +} odp_comp_auth_alg_t;

Why do you need special comp_auth_alg instead of just odp_auth_alg_t?
Shally - To differentiate that these algorithms are specifically part of the 
compression interface, as crypto also has them enumerated.
> +
> +/**
> + * Comp API compression algorithm
> + *
> + */
> +typedef enum {
> + /** No algorithm specified.
> + * Means no compression, no output provided.
> + */
> + ODP_COMP_ALG_NULL,
> + /** DEFLATE -
> + * implicit Inflate in case of decode operation.
> + */
> + ODP_COMP_ALG_DEFLATE,
> + /** ZLIB-RFC1950 */
> + ODP_COMP_ALG_ZLIB,
> + /** LZS*/
> + ODP_COMP_ALG_LZS,
> +} odp_comp_alg_t;
> +
> +/**
> + * Comp API session creation return code
> + *
> + */
> +typedef enum {
> + /** Session created */
> + ODP_COMP_SES_CREATE_ERR_NONE,
> + /** Creation failed, no resources */
> + ODP_COMP_SES_CREATE_ERR_ENOMEM,
> + /** Creation failed, bad compression params */
> + ODP_COMP_SES_CREATE_ERR_INV_COMP,
> + /** Creation failed, bad auth params */
> + ODP_COMP_SES_CREATE_ERR_INV_AUTH,
> + /** Creation failed,requested configuration not supported*/
> + ODP_COMP_SES_CREATE_ERR_NOT_SUPPORTED
> +} odp_comp_ses_create_err_t;
> +
> +/**
> + * Comp API operation return codes
> + *
> + */
> +typedef enum {
> + /** Operation completed successfully*/
> + ODP_COMP_ERR_NONE,
> + /** Invalid user data pointers*/
> + ODP_COMP_ERR_DATA_PTR,
> + /** Invalid input data size*/
> + ODP_COMP_ERR_DATA_SIZE,
> + /**  Compression and/or Auth Algo failure*/
> + ODP_COMP_ERR_ALGO_FAIL,
> + /** Error detected durin

Re: [lng-odp] [Linaro/odp] 6ce267: Revert "api: ipsec: factor out definitions for fea...

2017-04-19 Thread Verma, Shally
I was preparing for compression patch2 and updated it today again to use 
odp_feature_t to indicate sync/async in capability (as per last but one 
check-in).

So what is the proposal meanwhile? Should we instead use odp_bool_t to indicate 
feature (like async/sync mode) support until it is finalized?

Thanks
Shally

-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of GitHub
Sent: 19 April 2017 13:32
To: lng-odp@lists.linaro.org
Subject: [lng-odp] [Linaro/odp] 6ce267: Revert "api: ipsec: factor out 
definitions for fea...

  Branch: refs/heads/api-next
  Home:   https://github.com/Linaro/odp
  Commit: 6ce267197a66180042916368a469ed9dcc33eaa7
  
https://github.com/Linaro/odp/commit/6ce267197a66180042916368a469ed9dcc33eaa7
  Author: Maxim Uvarov 
  Date:   2017-04-19 (Wed, 19 Apr 2017)

  Changed paths:
R include/odp/api/spec/feature.h
M include/odp/api/spec/ipsec.h
M include/odp_api.h
M platform/Makefile.inc
M platform/linux-generic/Makefile.am
R platform/linux-generic/include/odp/api/feature.h

  Log Message:
  ---
  Revert "api: ipsec: factor out definitions for feature support levels"

This reverts
commit d025907602c5 ("api: ipsec: factor out definitions for feature
  support levels")
Petri rejected this patch and plans to write some common solution

Signed-off-by: Maxim Uvarov 




Re: [lng-odp] [API-NEXT PATCHv2 00/23] driver items registration and probing

2017-04-11 Thread Verma, Shally


-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Yi He
Sent: 11 April 2017 19:36
To: lng-odp 
Subject: Re: [lng-odp] [API-NEXT PATCHv2 00/23] driver items registration and 
probing

Hi, team

Today in the ODP cloud meeting we talked about the DDF status. Before the new 
LNG colleague joins to take over the DDF work, I'll take ownership of the DDF 
and continue to move this patch series forward, along with the related bugs and 
comments.

This Thursday François would like to have a call (1:30 hours) to share the 
knowledge, background and discussions he has had with Christophe around DDF. 
All are welcome if you are interested in DDF topics; please reply before early 
Wednesday afternoon and I'll send a meeting invitation to you.

I would be interested in joining. However, I am based out of India, so please 
see if it would be feasible to schedule the meeting at a time that accommodates 
the India time zone.
Thanks
Shally

thanks and best regards, Yi

On 22 March 2017 at 22:48, Christophe Milard 
wrote:

> This patch series can be pulled from:
> https://git.linaro.org/people/christophe.milard/odp.git/log/
> ?h=drv_framework_v2
>
> Since V1: Fixes following Bill's comments.
>
> Note: I am not really sure this is still in phase with what was 
> discussed at Connect, since I couldn't attend. But at least I did the 
> changes following the comments I received. Hope that still makes 
> sense.
> Also, I am aware that patch 1 generates a warning: I copied this part 
> from the north API so I assume this is an agreed decision.
>
> This patch series implements the driver interface, i.e.
> enumerator class, enumerator, devio and drivers registration and probing.
> This interface is depicted in:
> https://docs.google.com/document/d/1eCKPJF6uSlOllXi_
> sKDvRwUD2BXm-ZzxZoKT0nVEsl4/edit
> The associated tests are testing these mechanisms. Note that these 
> tests are testing staticaly linked modules only (hence avoiding the 
> module/platform/test debate). Also note that these tests are gathering 
> all the elements (enumerators, enumerator classes, devio, drivers) 
> making up the driver interface so as their interactions can be checked.
> Real elements (pci enumerators, drivers...) will likely be written in 
> a much more stand-alone way.
>
> Christophe Milard (23):
>   drv: adding compiler hints in the driver interface
>   linux-gen: adding compiler hints in the driver interface
>   drv: making parameter strings dynamically computable
>   linux-gen: drv: enumerator_class registration
>   test: drv: enumerator_class registration tests
>   linux-gen: drv: enumerator registration
>   test: drv: enumerator registration tests
>   drv: driver: change drv unbind function name and pass correct
> parameter
>   drv: driver: add callback function for device destruction
>   linux-gen: drv: device creation and deletion
>   drv: driver: adding device query function
>   linux-gen: drv: driver: adding device querry function
>   test: drv: device creation and destruction
>   drv: driver: adding a probe and remove callback for devio
>   linux-gen: drv: devio registration
>   test: drv: devio creation and destruction
>   drv: adding driver remove function
>   drv: complement parameters to the driver probe() function
>   linux-gen: driver registration and probing
>   test: drv: driver registration and probing
>   drv: driver: adding functions to attach driver's data to the device
>   linux-gen: adding functions to attach driver's data to the device
>   test: drv: test for setting and retrieving driver's data
>
>  include/odp/drv/spec/driver.h  |  132 ++-
>  include/odp/drv/spec/hints.h   |  119 +++
>  include/odp_drv.h  |1 +
>  platform/Makefile.inc  |1 +
>  platform/linux-generic/Makefile.am |1 +
>  platform/linux-generic/_modules.c  |4 +
>  platform/linux-generic/drv_driver.c| 1051
> +++-
>  .../linux-generic/include/drv_driver_internal.h|   22 +
>  platform/linux-generic/include/odp/drv/hints.h |   34 +
>  platform/linux-generic/include/odp_internal.h  |5 +
>  platform/linux-generic/odp_init.c  |   21 +-
>  test/common_plat/m4/configure.m4   |1 +
>  test/common_plat/validation/drv/Makefile.am|1 +
>  .../validation/drv/drvdriver/.gitignore|5 +
>  .../validation/drv/drvdriver/Makefile.am   |   60 ++
>  .../validation/drv/drvdriver/drvdriver_device.c|  218 
>  .../validation/drv/drvdriver/drvdriver_device.h|   24 +
>  .../drv/drvdriver/drvdriver_device_main.c  |   12 +
>  .../validation/drv/drvdriver/drvdriver_devio.c |  209 
>  .../validation/drv/drvdriver/drvdriver_devio.h |   24 +
>  .../drv/drvdriver/drvdriver_devio_main.c   |   12 +
>  .../validation/drv/drvdriver/drvdriver_driver.c|  518 ++
> 

Re: [lng-odp] [API-NEXT] API: IPSEC: Updating ipsec APIs to support sNIC implementation.

2017-04-07 Thread Verma, Shally


-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Nikhil 
Agarwal
Sent: 07 April 2017 15:30
To: lng-odp@lists.linaro.org
Subject: [lng-odp] [API-NEXT] API: IPSEC: Updating ipsec APIs to support sNIC 
implementation.

Signed-off-by: Nikhil Agarwal 
---
 include/odp/api/spec/ipsec.h | 62 +++-
 1 file changed, 26 insertions(+), 36 deletions(-)

diff --git a/include/odp/api/spec/ipsec.h b/include/odp/api/spec/ipsec.h index 
b3dc0ca..d39e65d 100644
--- a/include/odp/api/spec/ipsec.h
+++ b/include/odp/api/spec/ipsec.h
@@ -58,8 +58,10 @@ typedef enum odp_ipsec_op_mode_t {
/** Inline IPSEC operation
  *
  * Packet input/output is connected directly to IPSEC inbound/outbound
- * processing. Application uses asynchronous or inline IPSEC
- * operations.
+ * processing. Application may use asynchronous IPSEC operations.
+ * Packet post IPSEC operations are delivered to PKTIO queues. Further
+ * classification/Hashing(inbound) will be applicaed to packet post 
IPSEC as
+ * defined in PKTIO configuration.
  */
ODP_IPSEC_OP_MODE_INLINE,
 
@@ -225,6 +227,24 @@ typedef struct odp_ipsec_outbound_config_t {
 
 } odp_ipsec_outbound_config_t;
 
+typedef union odp_ipsec_protocols_t {
+   /** Cipher algorithms */
+   struct {
+   /** ODP_IPSEC_ESP */
+   uint32_t esp: 1;
+
+   /** ODP_IPSEC_AH */
+   uint32_t ah : 1;
+
+   } bit;
+
+   /** All bits of the bit field structure
+*
+* This field can be used to set/clear all flags, or bitwise
+* operations over the entire structure. */
+   uint32_t all_bits;
+} odp_ipsec_protocols_t;
+
 /**
  * IPSEC capability
  */
@@ -279,6 +299,9 @@ typedef struct odp_ipsec_capability_t {
 */
uint8_t hard_limit_sec;
 
+   /** Supported ipsec Protocols */
+   odp_ipsec_protocols_t protocols;
+
/** Supported cipher algorithms */
odp_crypto_cipher_algos_t ciphers;
 
@@ -568,21 +591,6 @@ typedef enum odp_ipsec_lookup_mode_t {
 } odp_ipsec_lookup_mode_t;
 
 /**
- * Result event pipeline configuration
- */
-typedef enum odp_ipsec_pipeline_t {
-   /** Do not pipeline */
-   ODP_IPSEC_PIPELINE_NONE = 0,
-
-   /** Send IPSEC result events to the classifier.
-*
-*  IPSEC capability 'pipeline_cls' determines if pipelined
-*  classification is supported. */
-   ODP_IPSEC_PIPELINE_CLS
-
-} odp_ipsec_pipeline_t;
-
-/**
  * IPSEC Security Association (SA) parameters
  */
 typedef struct odp_ipsec_sa_param_t {
@@ -646,31 +654,13 @@ typedef struct odp_ipsec_sa_param_t {
 */
uint32_t mtu;
 
-   /** Select pipelined destination for IPSEC result events
-*
-*  Asynchronous and inline modes generate result events. Select where
-*  those events are sent. Inbound SAs may choose to use pipelined
-*  classification. The default value is ODP_IPSEC_PIPELINE_NONE.
-*/
-   odp_ipsec_pipeline_t pipeline;
-
/** Destination queue for IPSEC events
 *
-*  Operations in asynchronous or inline mode enqueue resulting events
+*  Operations in asynchronous mode enqueue resulting events
 *  into this queue.
 */
odp_queue_t dest_queue;
 Shally - enqueue in order of completion or in order of submission? What does 
this mean in the context of IPsec?
In general, there is a bit of confusion when we use the term async in the ODP 
context here.
It implies that a queue is used to output events. An async implementation can 
queue events in the order of their completion (which may differ from the order 
of their submission).
If we are queuing events in order of submission, then it is actually a 
"synchronous queued operation" (as we block outputs until the previous ones are 
complete).
Can we make it a bit more explicit?

Thanks
Shally

-   /** Classifier destination CoS for IPSEC result events
-*
-*  Result events for successfully decapsulated packets are sent to
-*  classification through this CoS. Other result events are sent to
-*  'dest_queue'. This field is considered only when 'pipeline' is
-*  ODP_IPSEC_PIPELINE_CLS. The CoS must not be shared between any pktio
-*  interface default CoS.
-*/
-   odp_cos_t dest_cos;
-
/** User defined SA context pointer
 *
 *  User defined context pointer associated with the SA.
--
2.9.3



Re: [lng-odp] [RFC, API-NEXT v1 1/1] comp:compression interface

2017-04-07 Thread Verma, Shally
We are sharing an overview document of the patch-V1-based comp interface: 
https://docs.google.com/document/d/1aaEo8oTvZvm2096GJzl165L571tcsKqfcaD5yypiiCA/edit?usp=sharing
It outlines possible applications, expected API usage and a few key points. 

Please give your suggestions/comments w.r.t. this to help us identify the 
change requirements better (including the sync/async proposal). 

Thanks
Shally

-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Nikhil 
Agarwal
Sent: 21 March 2017 16:50
To: Bala Manoharan ; Mahipal Challa 

Cc: Narayana, Prasad Athreya ; Masood, Faisal 
; Challa, Mahipal ; 
lng-odp-forward ; Challa, Mahipal 

Subject: Re: [lng-odp] [RFC, API-NEXT v1 1/1] comp:compression interface

Moreover, if both modes need to be added it is better to follow IPSEC APIs 
approach instead of crypto APIs, where SYNC and ASYNC APIs are different and 
user explicitly control mode of operation based on capabilities.

Regards
Nikhil

-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Bala 
Manoharan
Sent: Tuesday, March 21, 2017 4:18 PM
To: Mahipal Challa 
Cc: prasad.athr...@cavium.com; faisal.mas...@cavium.com; Mahipal Challa 
; mcha...@cavium.com; lng-odp-forward 

Subject: Re: [lng-odp] [RFC, API-NEXT v1 1/1] comp:compression interface

Hi,

This proposal supports both sync and async compression offload in a single API 
following the crypto model, IMO We can separate SYNC and ASYNC compression 
offload as two different APIs and only add ASYNC compression offload in the 
first version and if required we can add SYNC compression mode as a later 
enhancement.

By following this approach we can simplify the module and will be easier to 
adapt.

Regards,
Bala

On 17 March 2017 at 17:50, Mahipal Challa 
wrote:

> From: Shally Verma 
>
> An API set to add compression/decompression support in ODP interface.
>
> Signed-off-by: Shally Verma 
> Signed-off-by: Mahipal Challa 
> ---
>  include/odp/api/spec/comp.h | 668 ++
> ++
>  1 file changed, 668 insertions(+)
>
> diff --git a/include/odp/api/spec/comp.h b/include/odp/api/spec/comp.h 
> new file mode 100644 index 000..d8f6c68
> --- /dev/null
> +++ b/include/odp/api/spec/comp.h
> @@ -0,0 +1,668 @@
> +/* Copyright (c) 2017, Linaro Limited
> + * All rights reserved.
> + *
> + * SPDX-License-Identifier:BSD-3-Clause
> + */
> +
> +/**
> + * @file
> + *
> + * ODP Compression
> + */
> +
> +#ifndef ODP_API_COMP_H_
> +#define ODP_API_COMP_H_
> +#include 
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/** @defgroup odp_compression ODP COMP
> + *  ODP Compression defines interface to compress/decompress and
> authenticate
> + *  data.
> + *
> + *  Compression here implicitly refers to both compression and decompression.
> + *  For example, compression algo 'deflate' means both 'deflate' and 'inflate'.
> + *
> + *  if opcode = ODP_COMP_COMPRESS, then it will Compress,
> + *  if opcode = ODP_COMP_DECOMPRESS, then it will Decompress.
> + *
> + *  The current version of the interface allows Compression-only,
> + *  Authentication-only, or combined Compression + Auth sessions.
> + *
> + *  Macros, enums, types and operations to utilise compression.
> + *  @{
> + */
> +
> +/**
> + * @def ODP_COMP_SESSION_INVALID
> + * Invalid session handle
> + */
> +
> +/**
> + * @typedef odp_comp_session_t (platform dependent)
> + * Comp API opaque session handle
> + */
> +
> +/**
> + * @typedef odp_comp_compl_t
> +* Compression API completion event (platform dependent) */
> +
> +/**
> + * Compression API operation mode
> + */
> +typedef enum {
> +   /** Synchronous, return results immediately */
> +   ODP_COMP_SYNC,
> +   /** Asynchronous, return results via event */
> +   ODP_COMP_ASYNC,
> +} odp_comp_op_mode_t;
> +
> +/**
> + * Comp API operation type
> + */
> +typedef enum {
> +   /** Compress and/or Compute ICV  */
> +   ODP_COMP_OP_COMPRESS,
> +   /** Decompress and/or Compute ICV */
> +   ODP_COMP_OP_DECOMPRESS,
> +} odp_comp_op_t;
> +
> +/**
> + * Comp API compression algorithm
> + *
> + *  Enum listing support Compression algo. Currently one
> + *  Compressor corresponds to 1 compliant decompressor.
> + *
> + */
> +typedef enum {
> +   /** No algorithm specified */
> +   ODP_COMP_ALG_NULL,
> +   /** DEFLATE -
> +   *
> +   * implicit Inflate in case of decode operation
> +   *
> +   */
> +   ODP_COMP_ALG_DEFLATE,
> +   /** ZLIB */
> +   ODP_COMP_ALG_ZLIB,
> +   /** LZS*/
> +   ODP_COMP_ALG_LZS,
> +   /** SHA1
> +   *
> +   * When given, imply Authentication ONLY operation
> +   *
> +   */
> +   ODP_COMP_ALG_SHA1,
> +   /** SHA256
> +   *
> +   * When given, imply Authentication ONLY operation
> +   *
> +   */
> +   ODP_COMP_ALG_SHA256,
> +   /** DEFLATE+SHA1  */
> +   ODP_COMP_ALG_DEFLATE_SHA1,
> +   /** DEFLATE+SHA256 

[lng-odp] CRC/Adler requirement in comp interface

2017-04-05 Thread Verma, Shally
From yesterday's meeting minutes, I see a note on this feedback on compression:
Consider adding additional "hashes" (e.g., CRC, Adler)

As we mentioned, the comp interface does not provide CRC. Also, Adler comes as 
output of the zlib format, and CRC can be made available through helper 
functions. So is there any identified use case where the user needs Adler as an 
explicit algorithm in the compression interface?

Thanks
Shally



Re: [lng-odp] odp_queue_enq semantics

2017-03-28 Thread Verma, Shally


-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Bill 
Fischofer
Sent: 28 March 2017 16:44
To: Ola Liljedahl 
Cc: nd ; lng-odp@lists.linaro.org
Subject: Re: [lng-odp] odp_queue_enq semantics

On Tue, Mar 28, 2017 at 4:10 AM, Ola Liljedahl  wrote:
> On 28 March 2017 at 10:41, Joe Savage  wrote:
>> Hey,
>>
>> I just wanted to clarify something about the expected behaviour of 
>> odp_queue_enq. In the following code snippet, is it acceptable for 
>> the assert to fire? (i.e. for a dequeue after a successful enqueue to 
>> fail, with only a single thread of execution)
>>
>> odp_queue_t queue;
>> odp_event_t ev1, ev2;
>> /* ... */
>> if (odp_queue_enq(queue, ev1) == 0) {
>> ev2 = odp_queue_deq(queue);
>> assert(ev2 != ODP_EVENT_INVALID);
>> }


rc == 0 from odp_queue_enq() simply means that the enqueue request has been 
accepted.

odp_queue_deq() removes the first element available on the specified queue.

As Ola points out, depending on the implementation there may be some latency 
associated with queue operations so it is possible for the assert to fire. Of 
course, in a multi-threaded environment some other thread may have dequeued the 
event first as well, so this sort of code is inherently brittle.

Shally - Sounds to me that, based on the implementation, odp_queue_enq() can be 
an async call, so applications dequeuing events should always check whether the 
dequeued event is the valid, intended one? And if it is not the intended event, 
should the app put the event back on the queue? Is that understanding correct?

>>
>> That is, can the "success" status code from odp_queue_enq be used to 
>> indicate a delayed enqueue ("This event will be added to the queue at 
>> some point soon-ish"), or should it only be used to communicate an 
>> immediate successful addition to the queue? The documentation seems 
>> unclear on this point while the validation tests suggest the latter, 
>> but I thought it worth checking up on.
> Some code in odp_scheduling test/benchmark also expects an enqueued 
> event to be immediately dequeuable by the same thread.
>
> I think this is not something you can require, neither from a HW queue 
> manager or from a SW implementation. HW implementations can always 
> have associated latencies visible to the thread that 
> enqueues/dequeues. Also for a SW implementation with loosely coupled 
> parts (e.g. producer & consumer head & tail pointers) it can be 
> possible for an enqueued event to not be immediately available when 
> other threads are doing concurrent operations on the same queue. Only 
> if a lock protects the whole data structure can you enforce one global 
> view of this data structure. You don't want to use a lock.
>
>>
>> Thanks,
>>
>> Joe


Re: [lng-odp] [RFC, API-NEXT v1 1/1] comp:compression interface

2017-03-24 Thread Verma, Shally
Please see in-line.

-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Bogdan 
Pricope
Sent: 22 March 2017 14:17
To: Mahipal Challa 
Cc: Narayana, Prasad Athreya ; Masood, Faisal 
; Challa, Mahipal ; 
Challa, Mahipal ; lng-odp-forward 

Subject: Re: [lng-odp] [RFC, API-NEXT v1 1/1] comp:compression interface

Hi,

My understanding is that:
 - compression can be used in stateless or stateful mode. How can we 
differentiate between the two configurations: by specifying a special algorithm 
on session creation or by setting the "last" field to "true" for every operation? 
Do we need this setting per operation or per session?
>> 'last' or 'current enumerated algo' are not sufficient parameters to decide 
>> stateless operation. A pure stateless operation depends on the app: it should 
>> pass data which is guaranteed to decompress into a known size, and thus pass 
>> a sufficient output buffer size.

Take a case where the user wants a stateless operation and sets parameters:
In_pkt length 32KB,
Out_pkt length 64KB,
Last=true // indicating this is the first and only data to process, 
Call odp_comp_operation() to decompress.

Now, during the decompress operation, if at any point odp_comp_operation() 
returns ODP_COMP_ERR_OUT_OF_SPACE (in case the uncompressed data is > 64KB), it 
enters stateful mode.
Though in the IPComp context, since IP packets are known to be at most 64KB and 
need to be stateless, every call should be made with last=1, indicating this is 
the last chunk.

 - stateless mode may be used by IPComp (RFC3173) - it will be the 
application's job to add the IPComp header between the IP header and payload
>> yes
 - stateful mode requires connection-oriented protocol (TCP) support to 
take care of delivering data in order and without drops.
How will the receiver know if the payload is clear text or compressed? Should 
it use a well-known port?
>> In such a case, the protocol must have the capability to distinguish 
>> compressed/uncompressed text (could be use of a dedicated port or of some 
>> negotiation protocol). The comp spec is oblivious to the protocol/networking 
>> aspects of the data.

- in stateful mode, will there always be an output for every input, or may the 
algorithm decide to concatenate consecutive packets and output fewer?
>> These details are implementation dependent. An implementation may accumulate 
>> more data in order to provide better results; in such a case it is the 
>> implementation's responsibility to return ODP_COMP_ERR_NONE to let the user 
>> continue feeding more data, and to keep track of the output buffer where it 
>> starts producing data (as the application may change the output packet with 
>> each subsequent call of odp_comp_operation()). 


- in-place vs. new packet: in-place processing will be added later - you should 
take care of what you copy between input and output packets - user area, pkt 
user pointer, etc.
>> Yes. However, in-place support requires a bit more feasibility study as it 
>> heavily depends on the implementation. This becomes an issue if:
- the implementation may accumulate data, as you mentioned above 
- the app does not know the length of the decompressed data beforehand. In 
such a case, the output buffer may grow unboundedly.
- also, algos like deflate and LZS are based on a sliding window, in which case 
the implementation would need to keep track of how much to retain in the input 
buffer.
Thus, we have left it open to discussion, subject to requirement/platform 
support.

Is my understanding correct?

BR,
Bogdan

On 17 March 2017 at 14:20, Mahipal Challa  wrote:
> From: Shally Verma 
>
> An API set to add compression/decompression support in ODP interface.
>
> Signed-off-by: Shally Verma 
> Signed-off-by: Mahipal Challa 
> ---
>  include/odp/api/spec/comp.h | 668 
> 
>  1 file changed, 668 insertions(+)
>
> diff --git a/include/odp/api/spec/comp.h b/include/odp/api/spec/comp.h 
> new file mode 100644 index 000..d8f6c68
> --- /dev/null
> +++ b/include/odp/api/spec/comp.h
> @@ -0,0 +1,668 @@
> +/* Copyright (c) 2017, Linaro Limited
> + * All rights reserved.
> + *
> + * SPDX-License-Identifier:BSD-3-Clause
> + */
> +
> +/**
> + * @file
> + *
> + * ODP Compression
> + */
> +
> +#ifndef ODP_API_COMP_H_
> +#define ODP_API_COMP_H_
> +#include 
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/** @defgroup odp_compression ODP COMP
> + *  ODP Compression defines interface to compress/decompress and 
> +authenticate
> + *  data.
> + *
> + *  Compression here implicitly refers to both compression and decompression.
> + *  Example: compression algo 'deflate' means both 'deflate' and 'inflate'.
> + *
> + *  if opcode = ODP_COMP_COMPRESS, then it will Compress,
> + *  if opcode = ODP_COMP_DECOMPRESS, then it will Decompress.
> + *
> + *  Current version of Interface allow Compression 
> +ONLY,Authentication ONLY or
> + *  both Compression + Auth ONLY sessions.
> + *
> + *  Macros, enums, types and operations to utilise compression.
> + *  @{
> + */
> +
> +/**
> + * @def ODP_COMP_SESSI

Re: [lng-odp] Regarding Chained Buffers and Crypto

2017-03-16 Thread Verma, Shally
A question: by chained, do you mean "segmented" packets?

-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Bala 
Manoharan
Sent: 16 March 2017 14:01
To: Nanda Gopal 
Cc: lng-odp-forward 
Subject: Re: [lng-odp] Regarding Chained Buffers and Crypto

On 15 March 2017 at 18:42, Nanda Gopal  wrote:

> Hi,
>
>
>
> When we pass odp_crypto_op_params_t params to odp_crypto_operation(), as per 
> the documentation, if we set params->out_pkt to INVALID, a new pkt will be 
> created from the original pkt pool, data copied from the original pkt and 
> then passed on to DPDK.
>
> In our test case, we are passing a chained buffer to this crypto API, so in 
> order to send the same buffer to the lower layer without a copy, are there 
> any flags to be set?
>

If your requirement is to support in-place crypto processing you can specify 
the output packet as same as input packet (i.e out_pkt = pkt).

Regards,
Bala

>
>
> Regards,
>
> Nanda
>


Re: [lng-odp] Generic handle in ODP

2017-03-02 Thread Verma, Shally
Hmm, I see - that is a valid point. Yes, I agree this proposal would add 
limitations on implementations and apps.

Thanks everyone for your time.

From: Francois Ozog [mailto:francois.o...@linaro.org]
Sent: 02 March 2017 14:44
To: Verma, Shally 
Cc: Maxim Uvarov ; lng-odp@lists.linaro.org
Subject: Re: [lng-odp] Generic handle in ODP

Hi Shally,

I think Bill stated an important aspect. Due to the abstract nature of 
odp_packet_t and odp_buffer_t that can represent VERY different things 
depending on the underlying hardware, we want a strongly typed ODP.

So regardless of advantages of a common piece of code, your proposal 
contradicts this strong typed objective.

And if I look at it from a performance perspective, where the specialized 
function would just do the work, may be even inlined, your proposal would need 
type checks and other "safety" checks to perform the task.

Cordially,

FF



On 2 March 2017 at 08:30, Verma, Shally 
mailto:shally.ve...@cavium.com>> wrote:
Please see my response inline.

-Original Message-
From: lng-odp 
[mailto:lng-odp-boun...@lists.linaro.org<mailto:lng-odp-boun...@lists.linaro.org>]
 On Behalf Of Maxim Uvarov
Sent: 01 March 2017 18:38
To: lng-odp@lists.linaro.org<mailto:lng-odp@lists.linaro.org>
Subject: Re: [lng-odp] Generic handle in ODP

btw,

if you need to get pool you can use functions:

odp_pool_t odp_packet_pool(odp_packet_t pkt); odp_pool_t 
odp_buffer_pool(odp_buffer_t buf);

If you know event type and pool then you should be able do all you need.

Maxim.

On 03/01/17 15:47, Bill Fischofer wrote:
> ODP handles are by design abstract types that may have very different
> internal representations between different ODP implementations. When
> asking for a generic handle type the question is which types would you
> expect that to encompass?
>
> When we originally started defining the ODP type system we avoided
> having a generic odp_handle_t supertype specifically to avoid C's
> issues with weak typing. We wanted ODP to be strongly typed so that,
> for example, trying to pass an odp_queue_t to an API that expects an
> odp_packet_t would be flagged at compile time.
>
Shally ≫ So you mean doing 'handle = (odp_handle_t)pkt;' is not desirable in 
the example code snippet below. It maintains strong typing at the API level 
while letting the app use it:
int odp_xxx_process_data(odp_handle_t handle, );

main()
{
    odp_buffer_t buf = odp_buffer_alloc(buf_pool);
    odp_packet_t pkt = odp_packet_alloc(pkt_pool);
    odp_handle_t handle;

    // process pkt
    handle = (odp_handle_t)pkt; // is this allowed?
    odp_xxx_process_data(handle, ...);

    // process buffer
    handle = (odp_handle_t)buf; // is this allowed?
    odp_xx_process_buf(handle, ...);
}
> As noted in this thread odp_event_t is a generic type that is used to
> represent entities that can be transmitted through queues and ODP
> provides type conversion APIs to and from this container type to the
> specific types (buffer, packet, timeout, crypto completions) that are
> able to be carried by an event. The intent is that as different event
> types are added they will similarly be added to this container along
> with converter APIs. But trying to fit all types into this model seems
> unnecessary. If you have a use case for wanting to treat some other
> type as an event we'd be interested in hearing that.
>
Shally ≫ The event module serves a different purpose in the system. Events are 
raised as the outcome of actions initiated on different modules (and thus need 
converter APIs). However, I did not have the same purpose and would not like to 
use or extend events for this. Mine was a simple use case: initiating some 
common action on different kinds of data while avoiding code duplication.

Thanks
Shally

> On Wed, Mar 1, 2017 at 5:56 AM, Verma, Shally
> mailto:shally.ve...@cavium.com>>
> wrote:
>
>> Francois
>>
>> The base assumption is that an ODP interface/implementation supporting the
>> generic handle concept takes due care with record keeping to determine the
>> proper type cast; keeping pool info is one such mechanism.
>>
>> Petri
>> memcpy() is just an example to explain the use case.
>>
>> Packet APIs are fine for an interface which always and only processes data
>> of type odp_packet_t; however, if anyone wanted to extend the same API to
>> support plain buffer-type memory as well (thus avoiding
>> packet_copy_to/from_mem()), then the generic handle concept may be helpful.
>>
>>
>> Though it is not a MUST requirement, the flexibility of having a generic
>> handle in ODP may help enable flexible implementations wherever it is
>> desirable/needed (of course, with due care).
>>
>> Thanks
>> Shally
>>
>>
>> From: Francois Ozog 
>> [mailto:francois.o...@linaro.org<mailto:francois.o.

Re: [lng-odp] Generic handle in ODP

2017-03-01 Thread Verma, Shally
Please see my response inline.

-Original Message-
From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Maxim 
Uvarov
Sent: 01 March 2017 18:38
To: lng-odp@lists.linaro.org
Subject: Re: [lng-odp] Generic handle in ODP

btw,

if you need to get pool you can use functions:

odp_pool_t odp_packet_pool(odp_packet_t pkt); odp_pool_t 
odp_buffer_pool(odp_buffer_t buf);

If you know event type and pool then you should be able do all you need.

Maxim.

On 03/01/17 15:47, Bill Fischofer wrote:
> ODP handles are by design abstract types that may have very different 
> internal representations between different ODP implementations. When 
> asking for a generic handle type the question is which types would you 
> expect that to encompass?
> 
> When we originally started defining the ODP type system we avoided 
> having a generic odp_handle_t supertype specifically to avoid C's 
> issues with weak typing. We wanted ODP to be strongly typed so that, 
> for example, trying to pass an odp_queue_t to an API that expects an 
> odp_packet_t would be flagged at compile time.
> 
Shally ≫ So you mean doing 'handle = (odp_handle_t)pkt;' is not desirable in 
the example code snippet below. It maintains strong typing at the API level 
while letting the app use it:
int odp_xxx_process_data(odp_handle_t handle, );

main()
{
    odp_buffer_t buf = odp_buffer_alloc(buf_pool);
    odp_packet_t pkt = odp_packet_alloc(pkt_pool);
    odp_handle_t handle;

    // process pkt
    handle = (odp_handle_t)pkt; // is this allowed?
    odp_xxx_process_data(handle, ...);

    // process buffer
    handle = (odp_handle_t)buf; // is this allowed?
    odp_xx_process_buf(handle, ...);
}
> As noted in this thread odp_event_t is a generic type that is used to 
> represent entities that can be transmitted through queues and ODP 
> provides type conversion APIs to and from this container type to the 
> specific types (buffer, packet, timeout, crypto completions) that are 
> able to be carried by an event. The intent is that as different event 
> types are added they will similarly be added to this container along 
> with converter APIs. But trying to fit all types into this model seems 
> unnecessary. If you have a use case for wanting to treat some other 
> type as an event we'd be interested in hearing that.
> 
Shally ≫ The event module serves a different purpose in the system. Events are 
raised as the outcome of actions initiated on different modules (and thus need 
converter APIs). However, I did not have the same purpose and would not like to 
use or extend events for this. Mine was a simple use case: initiating some 
common action on different kinds of data while avoiding code duplication.

Thanks
Shally

> On Wed, Mar 1, 2017 at 5:56 AM, Verma, Shally 
> 
> wrote:
> 
>> Francois
>>
>> The base assumption is that an ODP interface/implementation supporting the
>> generic handle concept takes due care with record keeping to determine the
>> proper type cast; keeping pool info is one such mechanism.
>>
>> Petri
>> memcpy() is just an example to explain the use case.
>>
>> Packet APIs are fine for an interface which always and only processes data
>> of type odp_packet_t; however, if anyone wanted to extend the same API to
>> support plain buffer-type memory as well (thus avoiding
>> packet_copy_to/from_mem()), then the generic handle concept may be helpful.
>>
>>
>> Though it is not a MUST requirement, the flexibility of having a generic
>> handle in ODP may help enable flexible implementations wherever it is
>> desirable/needed (of course, with due care).
>>
>> Thanks
>> Shally
>>
>>
>> From: Francois Ozog [mailto:francois.o...@linaro.org]
>> Sent: 01 March 2017 16:22
>> To: Verma, Shally 
>> Cc: Savolainen, Petri (Nokia - FI/Espoo) 
>> ; lng-odp@lists.linaro.org
>> Subject: Re: [lng-odp] Generic handle in ODP
>>
>> I see the point but still, I don't feel comfortable with the approach 
>> as we don't know if we have access to the pool originating the handle 
>> when you want to do the copy.
>>
>> It is good to avoid code duplication but in that particular case, it 
>> looks opening dangerous directions. (a gut feel for the moment, not a 
>> documented statement).
>>
>> FF
>>
>> On 1 March 2017 at 10:38, Verma, Shally > shally.ve...@cavium.com>> wrote:
>>
>> HI Petri/Maxim
>>
>> Please see my response below.
>>
>> -Original Message-
>> From: Savolainen, Petri (Nokia - FI/Espoo) [mailto:petri.savolainen@ 
>> nokia-bell-labs.com<mailto:petri.savolai...@nokia-bell-labs.com>]
>> Sent: 01 March 2017 14:38
>> To: Verma, Shally 
>> mailto:shally.ve.

Re: [lng-odp] Generic handle in ODP

2017-03-01 Thread Verma, Shally
Francois

The base assumption is that an ODP interface/implementation supporting the 
generic handle concept takes due care with record keeping to determine the 
proper type cast; keeping pool info is one such mechanism.

Petri
memcpy() is just an example to explain the use case.

Packet APIs are fine for an interface which always and only processes data of 
type odp_packet_t; however, if anyone wanted to extend the same API to support 
plain buffer-type memory as well (thus avoiding packet_copy_to/from_mem()), 
then the generic handle concept may be helpful.


Though it is not a MUST requirement, the flexibility of having a generic handle 
in ODP may help enable flexible implementations wherever it is desirable/needed 
(of course, with due care).

Thanks
Shally


From: Francois Ozog [mailto:francois.o...@linaro.org]
Sent: 01 March 2017 16:22
To: Verma, Shally 
Cc: Savolainen, Petri (Nokia - FI/Espoo) 
; lng-odp@lists.linaro.org
Subject: Re: [lng-odp] Generic handle in ODP

I see the point but still, I don't feel comfortable with the approach as we 
don't know if we have access to the pool originating the handle when you want 
to do the copy.

It is good to avoid code duplication but in that particular case, it looks 
opening dangerous directions. (a gut feel for the moment, not a documented 
statement).

FF

On 1 March 2017 at 10:38, Verma, Shally 
mailto:shally.ve...@cavium.com>> wrote:

HI Petri/Maxim

Please see my response below.

-Original Message-
From: Savolainen, Petri (Nokia - FI/Espoo) 
[mailto:petri.savolai...@nokia-bell-labs.com<mailto:petri.savolai...@nokia-bell-labs.com>]
Sent: 01 March 2017 14:38
To: Verma, Shally mailto:shally.ve...@cavium.com>>; 
Francois Ozog mailto:francois.o...@linaro.org>>
Cc: lng-odp@lists.linaro.org<mailto:lng-odp@lists.linaro.org>
Subject: RE: [lng-odp] Generic handle in ODP



> -Original Message-
> From: lng-odp 
> [mailto:lng-odp-boun...@lists.linaro.org<mailto:lng-odp-boun...@lists.linaro.org>]
>  On Behalf Of
> Verma, Shally
> Sent: Wednesday, March 01, 2017 10:38 AM
> To: Francois Ozog mailto:francois.o...@linaro.org>>
> Cc: lng-odp@lists.linaro.org<mailto:lng-odp@lists.linaro.org>
> Subject: Re: [lng-odp] Generic handle in ODP
>
> HI Francois
>
> What you said is correct and in such case API should only have
> odp_packet_t as an its type signature and access memory as per
> implementation policy.
>
> I am talking about use case where an API can input data both as  a
> plain buffer (which is a contiguous memory) *OR* as a packet (which is
> variable, scattered/non-scattered segmented memory). So when API sees
> that input data  is  from buffer pool, can simply use  address
> returned by odp_buf_addr as memory pointer and do direct read/write
> (as its not segmented and contiguous memory) and when it sees chunk is
> from packet pool , then access data according to its base hw implementation.
>
> I am taking here a simple memcpy() pseudo example to explain case .
> Say, if I want to enable memcpy from both packet and buffer memory,
> then there are 2 ways of doing it:
>
> 1.   Add two separate APIs , say memcpy_from_buffer(odp_buffer_t
> buf,size_t len, void *dst) and memcpy_from_packet(odp_packet_t packet,
> size_t len, void *) OR
>
> 2.   Or, make one API say memcpy(odp_handle_t handle, odp_pool_t pool,
> size_t len, void *dst)
>
> {
>
> if (pool type== odp_buffer_t ) then
>
> addr=odp_buffer_addr((odp_buffer_t)handle);
>
> else
>
>addr=odp_packet_data((odp_packet_t)handle);
>
>
>
>   memcpy(dst,addr,len);
>
> }
>
> Hope this could explain intended use case to an extent.
>
> Thanks
> Shally


As Maxim mentioned, odp_event_t is the single type (that is passed through 
queues). An event can be buffer, packet, timeout, crypto/ipsec completion, etc. 
Application needs to check event type and convert it to correct sub-type (e.g. 
packet) to access the data/metadata.

Application needs to be careful to handle buffers vs. packets correctly. 
Buffers are simple, always contiguous memory blocks - whereas packets may be 
fragmented. The example above would break if a packet segment boundary is hit 
between addr[0]...addr[len-1]. There are a bunch of packet_copy functions which 
handle segmentation correctly.


if (odp_event_type(ev) == ODP_EVENT_PACKET) {
    pkt = odp_packet_from_event(ev);
    odp_packet_copy_to_mem(pkt, 0, len, dst);
} else if (odp_event_type(ev) == ODP_EVENT_BUFFER) {
    buf = odp_buffer_from_event(ev);
    addr = odp_buffer_addr(buf);
    memcpy(dst, addr, len);
} else {
    // BAD EVENT TYPE. NO DATA TO COPY.
}
Shally >> This is understood. However, it applies to event-based actions where 
we can typecast between event, packet and buffer.
I am asking f

Re: [lng-odp] Generic handle in ODP

2017-03-01 Thread Verma, Shally

HI Petri/Maxim

Please see my response below.

-Original Message-
From: Savolainen, Petri (Nokia - FI/Espoo) 
[mailto:petri.savolai...@nokia-bell-labs.com] 
Sent: 01 March 2017 14:38
To: Verma, Shally ; Francois Ozog 

Cc: lng-odp@lists.linaro.org
Subject: RE: [lng-odp] Generic handle in ODP



> -Original Message-
> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of 
> Verma, Shally
> Sent: Wednesday, March 01, 2017 10:38 AM
> To: Francois Ozog 
> Cc: lng-odp@lists.linaro.org
> Subject: Re: [lng-odp] Generic handle in ODP
> 
> HI Francois
> 
> What you said is correct and in such case API should only have 
> odp_packet_t as an its type signature and access memory as per 
> implementation policy.
> 
> I am talking about use case where an API can input data both as  a 
> plain buffer (which is a contiguous memory) *OR* as a packet (which is 
> variable, scattered/non-scattered segmented memory). So when API sees 
> that input data  is  from buffer pool, can simply use  address 
> returned by odp_buf_addr as memory pointer and do direct read/write 
> (as its not segmented and contiguous memory) and when it sees chunk is 
> from packet pool , then access data according to its base hw implementation.
> 
> I am taking here a simple memcpy() pseudo example to explain case . 
> Say, if I want to enable memcpy from both packet and buffer memory, 
> then there are 2 ways of doing it:
> 
> 1.   Add two separate APIs , say memcpy_from_buffer(odp_buffer_t
> buf,size_t len, void *dst) and memcpy_from_packet(odp_packet_t packet, 
> size_t len, void *) OR
> 
> 2.   Or, make one API say memcpy(odp_handle_t handle, odp_pool_t pool,
> size_t len, void *dst)
> 
> {
> 
> if (pool type== odp_buffer_t ) then
> 
> addr=odp_buffer_addr((odp_buffer_t)handle);
> 
> else
> 
>addr=odp_packet_data((odp_packet_t)handle);
> 
> 
> 
>   memcpy(dst,addr,len);
> 
> }
> 
> Hope this could explain intended use case to an extent.
> 
> Thanks
> Shally


As Maxim mentioned, odp_event_t is the single type (that is passed through 
queues). An event can be buffer, packet, timeout, crypto/ipsec completion, etc. 
Application needs to check event type and convert it to correct sub-type (e.g. 
packet) to access the data/metadata.

Application needs to be careful to handle buffers vs. packets correctly. 
Buffers are simple, always contiguous memory blocks - whereas packets may be 
fragmented. The example above would break if a packet segment boundary is hit 
between addr[0]...addr[len-1]. There are a bunch of packet_copy functions which 
handle segmentation correctly.


if (odp_event_type(ev) == ODP_EVENT_PACKET) {
    pkt = odp_packet_from_event(ev);
    odp_packet_copy_to_mem(pkt, 0, len, dst);
} else if (odp_event_type(ev) == ODP_EVENT_BUFFER) {
    buf = odp_buffer_from_event(ev);
    addr = odp_buffer_addr(buf);
    memcpy(dst, addr, len);
} else {
    // BAD EVENT TYPE. NO DATA TO COPY.
}

Shally >> This is understood. However, it applies to event-based actions where 
we can typecast between event, packet and buffer.
I am asking about the scenario where the user can initiate some action on data, 
and that data can come either from a buffer or a packet pool (which may result 
in some event generation).  As you mentioned above -
" Application needs to be careful to handle buffers vs. packets correctly. 
Buffers are simple, always contiguous memory blocks - whereas packets may be 
fragmented. The example above would break if a packet segment boundary is hit 
between addr[0]...addr[len-1]. There are a bunch of packet_copy functions which 
handle segmentation correctly."
For the same reason, I am asking if we could add a generic handle, so that an 
API can understand what memory type it is dealing with.

I rewrite my example here to explain a bit more:

if (pool type == odp_buffer_t) {
    addr = odp_buffer_addr((odp_buffer_t)handle);
    memcpy(dst, addr, len);
} else {
    offset = 0;
    while (offset
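The truncated sketch above is, in effect, a buffer-vs-packet dispatch followed
by a segment-walking copy loop. A self-contained illustration of the same idea
in plain C, with hypothetical mem_handle and seg types standing in for
odp_handle_t and packet segments (this is not ODP code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-ins for the ODP types discussed above (hypothetical names). */
typedef struct { const unsigned char *data; size_t len; } seg;

typedef struct {
    int is_packet;             /* 0: contiguous buffer, 1: segmented packet */
    const unsigned char *addr; /* valid when is_packet == 0 */
    const seg *segs;           /* valid when is_packet == 1 */
    size_t nsegs;
} mem_handle;

/* Copy len bytes out of either memory type; walks segments for packets,
 * so a copy never runs across a segment boundary in one memcpy. */
static void copy_generic(const mem_handle *h, size_t len, unsigned char *dst)
{
    if (!h->is_packet) {
        memcpy(dst, h->addr, len); /* buffer: one contiguous block */
        return;
    }
    size_t off = 0, i = 0;
    while (off < len && i < h->nsegs) { /* packet: hop across segments */
        size_t n = h->segs[i].len;
        if (n > len - off)
            n = len - off;
        memcpy(dst + off, h->segs[i].data, n);
        off += n;
        i++;
    }
}
```

This shows why the thread keeps returning to segmentation: the buffer branch is
a single memcpy, while the packet branch must bound each copy by the current
segment length, which is what odp_packet_copy_to_mem() does internally.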

Re: [lng-odp] Generic handle in ODP

2017-03-01 Thread Verma, Shally
HI Francois

What you said is correct, and in such a case the API should only have 
odp_packet_t as its type signature and access memory per the implementation 
policy.

I am talking about a use case where an API can take input data both as a plain 
buffer (which is contiguous memory) *OR* as a packet (which is variable, 
scattered/non-scattered segmented memory). When the API sees that the input 
data is from a buffer pool, it can simply use the address returned by 
odp_buffer_addr() as a memory pointer and do direct reads/writes (as it is 
contiguous, non-segmented memory); when it sees the chunk is from a packet 
pool, it accesses data according to the underlying hw implementation.

I am taking a simple memcpy() pseudo-example to explain the case. Say I want 
to enable memcpy from both packet and buffer memory; then there are 2 ways of 
doing it:

1.   Add two separate APIs, say memcpy_from_buffer(odp_buffer_t buf, size_t 
len, void *dst) and memcpy_from_packet(odp_packet_t packet, size_t len, void *) 
OR

2.   Make one API, say memcpy(odp_handle_t handle, odp_pool_t pool, 
size_t len, void *dst)

{
    if (pool type == odp_buffer_t)
        addr = odp_buffer_addr((odp_buffer_t)handle);
    else
        addr = odp_packet_data((odp_packet_t)handle);

    memcpy(dst, addr, len);
}

Hope this could explain intended use case to an extent.

Thanks
Shally



From: Francois Ozog [mailto:francois.o...@linaro.org]
Sent: 01 March 2017 13:28
To: Verma, Shally 
Cc: lng-odp@lists.linaro.org
Subject: Re: [lng-odp] Generic handle in ODP

Hi Shally,

I am unsure this leads to a safe direction and here is why.

Depending on the hardware, software provides either

1) a bunch of buffers where the HW will place packets (1 buffer will receive 
one packet)

or 2) a large memory zone in which the hardware will place the packets 
according to a silicon specific policy.

In the case of 2, I am not sure a packet has to be contained in a odp_buffer_t. 
And even so, freeing such an odp_buffer_t will not result in freeing the 
underlying memory because allocation unit is memory region not odp_buffer_t.

Could you elaborate on the use case you have in mind?

Cordially,

FF

On 1 March 2017 at 07:41, Verma, Shally 
mailto:shally.ve...@cavium.com>> wrote:
I wanted to check if we could introduce a generic handle concept in ODP 
(something like odp_handle_t, serving a purpose similar to void *) which can be 
used by APIs that need to handle multiple types in one call, e.g. an API that 
works on both packet and buffer types. Such use cases can take a generic handle 
as input and map it to either type per requirement.

If agreed, the following defines the purpose:

odp_handle_t is a generic handle type which the implementation should implement 
in such a way that it can be mapped to represent any other type.
It is to aid an API which handles multiple types. Example: odp_handle_t can 
represent an odp_buffer_t or odp_packet_t

Thanks
Shally








--
[Linaro]<http://www.linaro.org/>

François-Frédéric Ozog | Director Linaro Networking Group

T: +33.67221.6485
francois.o...@linaro.org<mailto:francois.o...@linaro.org> | Skype: ffozog





[lng-odp] Generic handle in ODP

2017-02-28 Thread Verma, Shally
I wanted to check if we could introduce a generic handle concept in ODP 
(something like odp_handle_t, serving a purpose similar to void *) which can be 
used by APIs that need to handle multiple types in one call, e.g. an API that 
works on both packet and buffer types. Such use cases can take a generic handle 
as input and map it to either type per requirement.

If agreed, the following defines the purpose:

odp_handle_t is a generic handle type which the implementation should implement 
in such a way that it can be mapped to represent any other type.
It is to aid an API which handles multiple types. Example: odp_handle_t can 
represent an odp_buffer_t or odp_packet_t

Thanks
Shally
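For illustration, the trade-off debated in these threads - distinct handle
types vs. a void*-like odp_handle_t - can be sketched outside ODP. The my_*
names below are hypothetical stand-ins, not ODP definitions; the point is that
distinct incomplete-struct pointer typedefs make a wrong-type argument a
compile-time error, while a generic handle compiles and must be checked at run
time:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for strongly typed ODP-style handles:
 * distinct incomplete-struct pointers cannot be mixed up by accident. */
typedef struct my_buffer_s *my_buffer_t;
typedef struct my_packet_s *my_packet_t;

/* A generic handle erases the distinction, like void *. */
typedef void *my_handle_t;

/* Explicit conversions, analogous to the casts in the snippets above. */
static my_handle_t pkt_to_handle(my_packet_t pkt) { return (my_handle_t)pkt; }
static my_packet_t handle_to_pkt(my_handle_t h)   { return (my_packet_t)h; }

/* Accepts only packets; passing a my_buffer_t here would not compile. */
static int packet_is_null(my_packet_t pkt) { return pkt == NULL; }
```

With the generic handle, packet_is_null(handle_to_pkt(h)) compiles even when h
actually wraps a buffer - which is exactly the safety gap Bill and Francois
point out, and why the proposal shifts type checking from compile time to the
implementation's own record keeping.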








Re: [lng-odp] odp_buffer_t usage

2017-02-23 Thread Verma, Shally


From: Bill Fischofer [mailto:bill.fischo...@linaro.org]
Sent: 23 February 2017 18:31
To: Verma, Shally 
Cc: lng-odp@lists.linaro.org
Subject: Re: [lng-odp] odp_buffer_t usage



On Thu, Feb 23, 2017 at 6:56 AM, Verma, Shally 
mailto:shally.ve...@cavium.com>> wrote:


From: Bill Fischofer 
[mailto:bill.fischo...@linaro.org<mailto:bill.fischo...@linaro.org>]
Sent: 23 February 2017 18:17
To: Verma, Shally mailto:shally.ve...@cavium.com>>
Cc: lng-odp@lists.linaro.org<mailto:lng-odp@lists.linaro.org>
Subject: Re: [lng-odp] odp_buffer_t usage



On Thu, Feb 23, 2017 at 6:28 AM, Verma, Shally 
mailto:shally.ve...@cavium.com>> wrote:


From: Bill Fischofer 
[mailto:bill.fischo...@linaro.org<mailto:bill.fischo...@linaro.org>]
Sent: 23 February 2017 17:51
To: Verma, Shally mailto:shally.ve...@cavium.com>>
Cc: lng-odp@lists.linaro.org<mailto:lng-odp@lists.linaro.org>
Subject: Re: [lng-odp] odp_buffer_t usage

ODP pools provide an abstraction for various types of managed storage. There 
are currently three types of pools supported:

- ODP_POOL_BUFFER
- ODP_POOL_PACKET
- ODP_POOL_TIMEOUT

A buffer pool is simply a collection of fixed-sized blocks represented by the 
odp_buffer_t abstraction.
>> So is it safe to assume the implementation will always allocate a fixed-size 
>> *contiguous* block? Can it be used to pass on plain bulk data?
Example: if ODP Crypto is to encrypt large data in chunks of 16K, where a chunk 
is *not* of packet type but needs to be contiguous memory, then is it legal to 
get a buffer of length 16K through odp_buffer_alloc() and pass it to ODP Crypto 
for such a use case?

Each odp_buffer_t contained in a buffer pool is contiguous. It's up to the 
implementation, however whether the pool itself is a single contiguous block of 
storage as this is not specified by ODP.

Sha>> So that implies to me that one single  alloc request (odp_buffer_alloc()) 
should return a contiguous chunk ptr. Agreed and understood that Implementation 
can maintain pool anyways.

Yes, that is correct.


The ODP crypto APIs operate on packet types, not buffer types, as the pkt and 
out_pkt fields of the odp_crypto_op_param_t struct that is passed to 
odp_crypto_operation() are of type odp_packet_t.
Sha>> I took crypto as an example to explain a use case I am looking at / 
trying to enable. All I want is to make sure that it is perfectly legal to pass 
an odp_buffer_t to ODP APIs and access it as contiguous memory.

You can only pass an odp_buffer_t to ODP APIs whose type signatures are 
specified to accept odp_buffer_t inputs. Or are you talking about proposing 
adding a new ODP API?

 Sha >> Yes. Looking at feasibility of doing so.
Note that while packets in ODP are composed of one or more segments, each 
implementation determines the segmentation model that it uses. Some 
implementations may only use a single segment for packets while others may use 
multiple segments (varying based on packet length). At odp_pool_create() time, 
the application simply indicates the minimum segment size it requires, however 
the implementation is free to use a larger segment size if that is more 
convenient for it.


A packet pool stores packets represented by the odp_packet_t abstraction, which 
provide a rich set of semantics for manipulating packets. A timeout pool stores 
timeout events that are used as part of the timer management APIs.



On Thu, Feb 23, 2017 at 12:42 AM, Verma, Shally 
mailto:shally.ve...@cavium.com>> wrote:
Hi

I was looking into odp_buffer_t to understand its use case from an application 
standpoint.  While it is clear from the odp_packet_t description that it can be 
segmented/non-segmented, contiguous/non-contiguous memory, and APIs are 
provided to query and hop across segments to access data, it is not clear how 
odp_buffer_t is supposed to be allocated and accessed, and what an app can use 
it for, as the API set is very minimalistic - just getting the address and 
length of data.

So, couple of questions comes :

-  Can an ODP *Buffer* pool be both linear / scatter-gather memory, or is 
it always supposed to be one contiguous piece of memory?

Implementations are free to realize any ODP abstract type however they wish. 
There is no requirement that pools themselves be a single block of memory since 
individual buffers/packet/timeout objects are allocated and freed via their own 
APIs. Individual odp_buffer_t objects, however, are fixed-sized blocks of 
contiguous memory as segmentation is not part of the odp_buffer_t semantics. 
Every odp_buffer_t object contained in a buffer pool is the same size, as 
determined at odp_pool_create() time.


-  Is it safe to assume that memory of type odp_buffer_t is a plain 
contiguous memory chunk (as with malloc)? And that the data ptr retrieved 
through odp_buffer_addr() can be directly read/written?

Yes. odp_buffer_addr() returns a pointer to a contiguous memory area of size 
odp_buffer_size(), which is f

Re: [lng-odp] odp_buffer_t usage

2017-02-23 Thread Verma, Shally


From: Bill Fischofer [mailto:bill.fischo...@linaro.org]
Sent: 23 February 2017 18:17
To: Verma, Shally 
Cc: lng-odp@lists.linaro.org
Subject: Re: [lng-odp] odp_buffer_t usage



On Thu, Feb 23, 2017 at 6:28 AM, Verma, Shally 
mailto:shally.ve...@cavium.com>> wrote:


From: Bill Fischofer 
[mailto:bill.fischo...@linaro.org<mailto:bill.fischo...@linaro.org>]
Sent: 23 February 2017 17:51
To: Verma, Shally mailto:shally.ve...@cavium.com>>
Cc: lng-odp@lists.linaro.org<mailto:lng-odp@lists.linaro.org>
Subject: Re: [lng-odp] odp_buffer_t usage

ODP pools provide an abstraction for various types of managed storage. There 
are currently three types of pools supported:

- ODP_POOL_BUFFER
- ODP_POOL_PACKET
- ODP_POOL_TIMEOUT

A buffer pool is simply a collection of fixed-sized blocks represented by the 
odp_buffer_t abstraction.
>> So is it safe to assume the implementation will always allocate a fixed-size 
>> *contiguous* block? Can it be used to pass on plain bulk data?
Example: if ODP Crypto is to encrypt large data in chunks of 16K, where a chunk 
is *not* of packet type but needs to be contiguous memory, then is it legal to 
get a buffer of length 16K through odp_buffer_alloc() and pass it to ODP Crypto 
for such a use case?

Each odp_buffer_t contained in a buffer pool is contiguous. It's up to the 
implementation, however whether the pool itself is a single contiguous block of 
storage as this is not specified by ODP.

Sha>> So that implies to me that one single  alloc request (odp_buffer_alloc()) 
should return a contiguous chunk ptr. Agreed and understood that Implementation 
can maintain pool anyways.

The ODP crypto APIs operate on packet types, not buffer types, as the pkt and 
out_pkt fields of the odp_crypto_op_param_t struct that is passed to 
odp_crypto_operation() are of type odp_packet_t.
Sha>> I took crypto as an example to explain a use case I am looking at / 
trying to enable. All I want is to make sure that it is perfectly legal to pass 
an odp_buffer_t to ODP APIs and access it as contiguous memory.

Note that while packets in ODP are composed of one or more segments, each 
implementation determines the segmentation model that it uses. Some 
implementations may only use a single segment for packets while others may use 
multiple segments (varying based on packet length). At odp_pool_create() time, 
the application simply indicates the minimum segment size it requires, however 
the implementation is free to use a larger segment size if that is more 
convenient for it.


A packet pool stores packets represented by the odp_packet_t abstraction, which 
provide a rich set of semantics for manipulating packets. A timeout pool stores 
timeout events that are used as part of the timer management APIs.



On Thu, Feb 23, 2017 at 12:42 AM, Verma, Shally 
mailto:shally.ve...@cavium.com>> wrote:
Hi

I was looking into odp_buffer_t to understand its use case from an application 
standpoint.  While it is clear from the odp_packet_t description that it can be 
segmented/non-segmented, contiguous/non-contiguous memory, and APIs are 
provided to query and hop across segments to access data, it is not clear how 
odp_buffer_t is supposed to be allocated and accessed, and what an app can use 
it for, as the API set is very minimalistic - just getting the address and 
length of data.

So, a couple of questions come up:

-  Can an ODP *Buffer* pool be either linear or scatter-gather memory, or is it 
always supposed to be one contiguous piece of memory?

Implementations are free to realize any ODP abstract type however they wish. 
There is no requirement that pools themselves be a single block of memory since 
individual buffer/packet/timeout objects are allocated and freed via their own 
APIs. Individual odp_buffer_t objects, however, are fixed-sized blocks of 
contiguous memory as segmentation is not part of the odp_buffer_t semantics. 
Every odp_buffer_t object contained in a buffer pool is the same size, as 
determined at odp_pool_create() time.


-  Is it safe to assume that odp_buffer_t memory is a plain contiguous memory 
chunk (as with malloc), and that the data pointer retrieved through 
odp_buffer_addr() can be directly read and written?

Yes. odp_buffer_addr() returns a pointer to a contiguous memory area of size 
odp_buffer_size(), which is fixed for all odp_buffer_t objects drawn from the 
same pool.
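As a sketch of the 16K-chunk use case discussed in this thread (ODP C API; assumes ODP is already initialized, and the pool name and buffer count are arbitrary assumptions):

```c
#include <odp_api.h>
#include <string.h>

/* Sketch: allocate a fixed-size contiguous 16K buffer and access it
 * directly through odp_buffer_addr(), as with malloc'd memory. */
static odp_buffer_t alloc_chunk(void)
{
	odp_pool_param_t params;
	odp_pool_t pool;
	odp_buffer_t buf;

	odp_pool_param_init(&params);
	params.type     = ODP_POOL_BUFFER;
	params.buf.size = 16 * 1024;  /* every buffer in this pool is 16K */
	params.buf.num  = 64;

	pool = odp_pool_create("chunk_pool", &params);
	buf  = odp_buffer_alloc(pool);

	if (buf != ODP_BUFFER_INVALID) {
		uint8_t *data = odp_buffer_addr(buf);  /* contiguous area */
		uint32_t len  = odp_buffer_size(buf);  /* fixed pool size */

		memset(data, 0, len);  /* directly readable/writable */
	}
	return buf;
}
```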


-  If an odp_buffer_t is supposed to carry metadata info, how does the user 
know the metadata length and the actual data length?

The only metadata associated with an odp_buffer_t is its size, obtained via 
the odp_buffer_size() API.


-  Is it valid to use odp_buffer_t in ODP API prototypes, or are APIs expected 
to use odp_packet_t ONLY?

Buffers and packets are different typed objects. Both can be converted to event 
types so they can be passed through queues that expect parameters of type 
odp_event_t, but they are not interchangeable. If an API expects an odp_packet_t 
as a parameter, attempting to pass an odp_buffer_t to it will be flagged as a 
type mismatch at compile time, as if you tried to pass an int to a routine that 
expected a char variable.

Re: [lng-odp] odp_buffer_t usage

2017-02-23 Thread Verma, Shally


From: Bill Fischofer [mailto:bill.fischo...@linaro.org]
Sent: 23 February 2017 17:51
To: Verma, Shally 
Cc: lng-odp@lists.linaro.org
Subject: Re: [lng-odp] odp_buffer_t usage

ODP pools provide an abstraction for various types of managed storage. There 
are currently three types of pools supported:

- ODP_POOL_BUFFER
- ODP_POOL_PACKET
- ODP_POOL_TIMEOUT

A buffer pool is simply a collection of fixed-sized blocks represented by the 
odp_buffer_t abstraction.
>> So is it safe to assume the implementation will always allocate a fixed-size 
>> *contiguous* block? Can it be used to pass plain bulk data?
For example, if ODP Crypto is to encrypt a large piece of data in chunks of 16K, 
where a chunk is *not* a packet type but needs to be contiguous memory, is it 
then legal to get a buffer of length 16K through odp_buffer_alloc() and pass it 
to ODP Crypto for such a use case?

A packet pool stores packets represented by the odp_packet_t abstraction, which 
provides a rich set of semantics for manipulating packets. A timeout pool stores 
timeout events that are used as part of the timer management APIs.



On Thu, Feb 23, 2017 at 12:42 AM, Verma, Shally 
<shally.ve...@cavium.com> wrote:
Hi

I was looking into odp_buffer_t to understand its use case from an application 
standpoint. While the odp_packet_t description makes it clear that a packet can 
be segmented or non-segmented, contiguous or non-contiguous memory, and that 
APIs are provided to query and hop across segments to access data, it is not 
clear how an odp_buffer_t is supposed to be allocated and accessed, and what an 
application can use it for, as the API set is very minimalistic: just getting 
the address and length of the data.

So, a couple of questions come up:

-  Can an ODP *Buffer* pool be either linear or scatter-gather memory, or is it 
always supposed to be one contiguous piece of memory?

Implementations are free to realize any ODP abstract type however they wish. 
There is no requirement that pools themselves be a single block of memory since 
individual buffer/packet/timeout objects are allocated and freed via their own 
APIs. Individual odp_buffer_t objects, however, are fixed-sized blocks of 
contiguous memory as segmentation is not part of the odp_buffer_t semantics. 
Every odp_buffer_t object contained in a buffer pool is the same size, as 
determined at odp_pool_create() time.


-  Is it safe to assume that odp_buffer_t memory is a plain contiguous memory 
chunk (as with malloc), and that the data pointer retrieved through 
odp_buffer_addr() can be directly read and written?

Yes. odp_buffer_addr() returns a pointer to a contiguous memory area of size 
odp_buffer_size(), which is fixed for all odp_buffer_t objects drawn from the 
same pool.


-  If an odp_buffer_t is supposed to carry metadata info, how does the user 
know the metadata length and the actual data length?

The only metadata associated with an odp_buffer_t is its size, obtained via 
the odp_buffer_size() API.


-  Is it valid to use odp_buffer_t in ODP API prototypes, or are APIs expected 
to use odp_packet_t ONLY?

Buffers and packets are different typed objects. Both can be converted to event 
types so they can be passed through queues that expect parameters of type 
odp_event_t, but they are not interchangeable. If an API expects an odp_packet_t 
as a parameter, attempting to pass an odp_buffer_t to it will be flagged as a 
type mismatch at compile time, as if you tried to pass an int to a routine that 
expected a char variable.
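A minimal sketch of the event round trip described above, using standard ODP C API calls (the handler name is a made-up example):

```c
#include <odp_api.h>

/* Sketch: buffers and packets both convert to odp_event_t for queueing,
 * and the receiver recovers the concrete type via odp_event_type(). */
static void handle_event(odp_event_t ev)
{
	if (odp_event_type(ev) == ODP_EVENT_BUFFER) {
		odp_buffer_t buf = odp_buffer_from_event(ev);
		/* ... access contiguous data via odp_buffer_addr(buf) ... */
		odp_buffer_free(buf);
	} else if (odp_event_type(ev) == ODP_EVENT_PACKET) {
		odp_packet_t pkt = odp_packet_from_event(ev);
		/* ... use the packet manipulation APIs ... */
		odp_packet_free(pkt);
	}
}
```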


Any info will be of help here.

Happy to help. If you have further questions, just ask.


Thanks
Shally



[lng-odp] odp_buffer_t usage

2017-02-22 Thread Verma, Shally
Hi

I was looking into odp_buffer_t to understand its use case from an application 
standpoint. While the odp_packet_t description makes it clear that a packet can 
be segmented or non-segmented, contiguous or non-contiguous memory, and that 
APIs are provided to query and hop across segments to access data, it is not 
clear how an odp_buffer_t is supposed to be allocated and accessed, and what an 
application can use it for, as the API set is very minimalistic: just getting 
the address and length of the data.

So, a couple of questions come up:

-  Can an ODP *Buffer* pool be either linear or scatter-gather memory, or is it 
always supposed to be one contiguous piece of memory?

-  Is it safe to assume that odp_buffer_t memory is a plain contiguous memory 
chunk (as with malloc), and that the data pointer retrieved through 
odp_buffer_addr() can be directly read and written?

-  If an odp_buffer_t is supposed to carry metadata info, how does the user 
know the metadata length and the actual data length?

-  Is it valid to use odp_buffer_t in ODP API prototypes, or are APIs expected 
to use odp_packet_t ONLY?

Any info will be of help here.

Thanks
Shally



[lng-odp] FW: odp-crypto operations for segmented packets.

2017-02-17 Thread Verma, Shally
Hi

I was looking at the linux-generic/odp_crypto.c implementation, and it looks 
like each odp_crypto_operation() call assumes that each packet is contained 
within one segment, or that the user passes either the packet length or the 
segment length, whichever is smaller.

As I understand the ODP packet structure, a packet may be scattered across 
segments, and those segments may not be contiguous. So, say the application 
sets the segment length to 32K and passes a packet of length 64K with 
cipher_range.len = 64K; the current linux-generic ODP crypto implementation may 
then not work correctly.
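For reference, processing a possibly-segmented packet without assuming one contiguous block can be sketched with the standard ODP segment-walking calls (the per-segment processing here is only a placeholder; a crypto implementation would feed each piece to its cipher):

```c
#include <odp_api.h>

/* Sketch: walk every segment of a packet, since segment data areas
 * need not be contiguous with each other. */
static uint32_t walk_segments(odp_packet_t pkt)
{
	uint32_t total = 0;
	odp_packet_seg_t seg = odp_packet_first_seg(pkt);

	while (seg != ODP_PACKET_SEG_INVALID) {
		uint8_t *data = odp_packet_seg_data(pkt, seg);
		uint32_t len  = odp_packet_seg_data_len(pkt, seg);

		/* process data[0 .. len) here */
		(void)data;
		total += len;

		seg = odp_packet_next_seg(pkt, seg);
	}
	return total;  /* sums the data length across all segments */
}
```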

Could anyone please confirm this? Or am I missing a baseline assumption here 
about how packets are supposed to be passed to linux-generic crypto ops?

Thanks
Shally