Btw,

if you need to get the pool you can use these functions:

odp_pool_t odp_packet_pool(odp_packet_t pkt);
odp_pool_t odp_buffer_pool(odp_buffer_t buf);

If you know the event type and the pool, then you should be able to do all you need.
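
For example, a minimal sketch of that pattern (the event_pool() helper is made
up for illustration; the ODP calls are the ones mentioned here and earlier in
the thread):

#include <odp_api.h>

/* Return the pool that a received event was allocated from, or
 * ODP_POOL_INVALID for event types not handled here. */
static odp_pool_t event_pool(odp_event_t ev)
{
        if (odp_event_type(ev) == ODP_EVENT_PACKET)
                return odp_packet_pool(odp_packet_from_event(ev));

        if (odp_event_type(ev) == ODP_EVENT_BUFFER)
                return odp_buffer_pool(odp_buffer_from_event(ev));

        return ODP_POOL_INVALID;
}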

Maxim.

On 03/01/17 15:47, Bill Fischofer wrote:
> ODP handles are by design abstract types that may have very different
> internal representations across different ODP implementations. When asking
> for a generic handle type, the question is: which types would you expect it
> to encompass?
> 
> When we originally started defining the ODP type system we avoided having a
> generic odp_handle_t supertype specifically to avoid C's issues with weak
> typing. We wanted ODP to be strongly typed so that, for example, trying to
> pass an odp_queue_t to an API that expects an odp_packet_t would be flagged
> at compile time.
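> 
> As a small illustrative sketch of that compile-time check (the variable
> names are made up; odp_queue_create() and odp_packet_len() are existing ODP
> APIs):
> 
> /* With distinct abstract handle types, mixing them up fails to compile
>  * instead of failing at run time. */
> odp_queue_t q = odp_queue_create("plain_q", NULL);
> 
> /* odp_packet_len(q);   <-- compile-time error: odp_queue_t is not odp_packet_t */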
> 
> As noted in this thread, odp_event_t is a generic type that is used to
> represent entities that can be transmitted through queues, and ODP provides
> type conversion APIs between this container type and the specific types
> (buffer, packet, timeout, crypto completion) that can be carried by an
> event. The intent is that as different event types are added they will
> similarly be added to this container along with converter APIs. But trying
> to fit all types into this model seems unnecessary. If you have a use case
> for wanting to treat some other type as an event, we'd be interested in
> hearing it.
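> 
> For instance, a minimal sketch of that converter pattern (it assumes
> pkt_pool and queue were created earlier; the conversion calls are the
> standard ODP event APIs):
> 
> odp_packet_t pkt = odp_packet_alloc(pkt_pool, 256);
> 
> odp_queue_enq(queue, odp_packet_to_event(pkt));      /* packet -> event */
> 
> odp_event_t ev = odp_queue_deq(queue);
> 
> if (odp_event_type(ev) == ODP_EVENT_PACKET)
>         pkt = odp_packet_from_event(ev);              /* event -> packet */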
> 
> On Wed, Mar 1, 2017 at 5:56 AM, Verma, Shally <shally.ve...@cavium.com>
> wrote:
> 
>> Francois
>>
>> The base assumption is that an ODP interface/implementation supporting a
>> generic handle concept takes due care and does the record keeping needed to
>> determine the proper type cast; keeping the pool info is one such measure.
>>
>> Petri
>> memcpy() is just an example to explain the use case.
>>
>> Packet APIs are fine for an interface which always and only processes data
>> of type odp_packet_t. However, if anyone wanted to extend the same API to
>> also support plain buffer type memory (thus avoiding
>> packet_copy_to/from_mem()), then a generic handle concept may be helpful.
>>
>>
>> This does not come as a MUST requirement, but I am wondering whether the
>> flexibility of having a generic handle in ODP would help in more flexible
>> implementations wherever it is desirable/needed (of course with due care).
>>
>> Thanks
>> Shally
>>
>>
>> From: Francois Ozog [mailto:francois.o...@linaro.org]
>> Sent: 01 March 2017 16:22
>> To: Verma, Shally <shally.ve...@cavium.com>
>> Cc: Savolainen, Petri (Nokia - FI/Espoo) <petri.savolainen@nokia-bell-
>> labs.com>; lng-odp@lists.linaro.org
>> Subject: Re: [lng-odp] Generic handle in ODP
>>
>> I see the point, but I still don't feel comfortable with the approach, as
>> we don't know whether we have access to the pool the handle originated from
>> when you want to do the copy.
>>
>> It is good to avoid code duplication, but in this particular case it looks
>> like it opens dangerous directions (a gut feeling for the moment, not a
>> documented statement).
>>
>> FF
>>
>> On 1 March 2017 at 10:38, Verma, Shally <shally.ve...@cavium.com> wrote:
>>
>> Hi Petri/Maxim
>>
>> Please see my response below.
>>
>> -----Original Message-----
>> From: Savolainen, Petri (Nokia - FI/Espoo) [mailto:petri.savolainen@nokia-bell-labs.com]
>> Sent: 01 March 2017 14:38
>> To: Verma, Shally <shally.ve...@cavium.com>; Francois Ozog <francois.o...@linaro.org>
>> Cc: lng-odp@lists.linaro.org
>> Subject: RE: [lng-odp] Generic handle in ODP
>>
>>
>>
>>> -----Original Message-----
>>> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
>>> Verma, Shally
>>> Sent: Wednesday, March 01, 2017 10:38 AM
>>> To: Francois Ozog <francois.o...@linaro.org>
>>> Cc: lng-odp@lists.linaro.org
>>> Subject: Re: [lng-odp] Generic handle in ODP
>>>
>>> Hi Francois
>>>
>>> What you said is correct, and in that case the API should have only
>>> odp_packet_t in its type signature and access memory as per the
>>> implementation policy.
>>>
>>> I am talking about a use case where an API can take input data both as a
>>> plain buffer (which is contiguous memory) *OR* as a packet (which is
>>> variable-size memory that may or may not be scattered/segmented). So when
>>> the API sees that the input data is from a buffer pool, it can simply use
>>> the address returned by odp_buffer_addr() as the memory pointer and do
>>> direct reads/writes (as the memory is contiguous and not segmented), and
>>> when it sees that the chunk is from a packet pool, it accesses the data
>>> according to its underlying hw implementation.
>>>
>>> I am taking a simple memcpy() pseudo-example here to explain the case.
>>> Say I want to enable memcpy from both packet and buffer memory; then
>>> there are two ways of doing it:
>>>
>>> 1.       Add two separate APIs, say memcpy_from_buffer(odp_buffer_t buf,
>>> size_t len, void *dst) and memcpy_from_packet(odp_packet_t packet,
>>> size_t len, void *dst), OR
>>>
>>> 2.       Make one API, say memcpy(odp_handle_t handle, odp_pool_t pool,
>>> size_t len, void *dst)
>>>
>>> {
>>>     if (pool type is buffer pool)
>>>         addr = odp_buffer_addr((odp_buffer_t)handle);
>>>     else
>>>         addr = odp_packet_data((odp_packet_t)handle);
>>>
>>>     memcpy(dst, addr, len);
>>> }
>>>
>>> Hope this explains the intended use case to an extent.
>>>
>>> Thanks
>>> Shally
>>
>>
>> As Maxim mentioned, odp_event_t is the single type that is passed through
>> queues. An event can be a buffer, packet, timeout, crypto/ipsec completion,
>> etc. The application needs to check the event type and convert it to the
>> correct sub-type (e.g. packet) to access the data/metadata.
>>
>> The application needs to be careful to handle buffers vs. packets correctly.
>> Buffers are simple, always contiguous memory blocks, whereas packets may
>> be segmented. The example above would break if a packet segment boundary
>> is hit between addr[0]...addr[len-1]. There are a bunch of packet_copy
>> functions which handle segmentation correctly.
>>
>>
>> if (odp_event_type(ev) == ODP_EVENT_PACKET) {
>>         pkt = odp_packet_from_event(ev);
>>         odp_packet_copy_to_mem(pkt, 0, len, dst);
>> } else if (odp_event_type(ev) == ODP_EVENT_BUFFER) {
>>         buf = odp_buffer_from_event(ev);
>>         addr = odp_buffer_addr(buf);
>>         memcpy(dst, addr, len);
>> } else {
>>         /* Bad event type, no data to copy. */
>> }
>> Shally >> This is understood. However, it is applicable to the event-based
>> flow, where we can convert between event/packet and buffer. I am asking
>> about the scenario where the user can initiate some action on data, and
>> that data can come from either a buffer or a packet pool (which may result
>> in some event generation). As you mentioned above:
>> " Application needs to be careful to handle buffers vs. packets correctly.
>> Buffers are simple, always contiguous memory blocks - whereas packets may
>> be fragmented. The example above would break if a packet segment boundary
>> is hit between addr[0]...addr[len-1]. There are a bunch of packet_copy
>> functions which handle segmentation correctly."
>> For the same reason, I am asking if we could add a generic handle, so that
>> the API can understand what memory type it is dealing with.
>>
>> I rewrite my example here to explain a bit more:
>>
>>  if (pool type is buffer pool)
>>  {
>>      addr = odp_buffer_addr((odp_buffer_t)handle);
>>      memcpy(dst, addr, len);
>>  }
>>  else
>>  {
>>      offset = 0;
>>      while (offset < len)
>>      {
>>          /* seg_len returns the contiguous bytes available at this offset */
>>          addr = odp_packet_offset(packet, offset, &seg_len, &cur_seg);
>>          if (seg_len > len - offset)
>>              seg_len = len - offset;
>>          memcpy((uint8_t *)dst + offset, addr, seg_len);
>>          offset += seg_len;
>>      }
>>  }
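>>
>> For reference, a minimal sketch of the same copy done with the segment-aware
>> helper mentioned above (it assumes len does not exceed the packet data
>> length):
>>
>> /* odp_packet_copy_to_mem() walks the packet segments internally;
>>  * it returns 0 on success, <0 if offset/len exceed the packet data. */
>> int ret = odp_packet_copy_to_mem(packet, 0, len, dst);
>>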
>> Hope this helps.
>>
>> -Petri
>>
>>
>>
>>
>> --
>> Linaro <http://www.linaro.org/>
>>
>> François-Frédéric Ozog | Director Linaro Networking Group
>>
>> T: +33.67221.6485
>> francois.o...@linaro.org | Skype: ffozog
>>
>>
>>
>>
