I think it's fair to note, though, that applications that use any API outside 
ODP may not be portable across other ODP platforms, and may even exhibit 
different performance characteristics from the ODP APIs on the same platform.

Therefore I expect that, *all other things being equal*, there is an advantage 
for the application writer to use ODP APIs where applicable APIs exist.

If you're faced with an already existing hunk of code, that probably doesn't 
matter at all. If you're writing a new application and you are targeting ODP 
anyway, I would find it odd to use a non-ODP API for something ODP has an API 
for, unless the ODP API's semantics are somehow missing something.

Gilad

Gilad Ben-Yossef
Software Architect
EZchip Technologies Ltd.
37 Israel Pollak Ave, Kiryat Gat 82025 ,Israel
Tel: +972-4-959-6666 ext. 576, Fax: +972-8-681-1483
Mobile: +972-52-826-0388, US Mobile: +1-973-826-0388
Email: gil...@ezchip.com, Web: http://www.ezchip.com

"Ethernet always wins."
        — Andy Bechtolsheim

From: lng-odp-boun...@lists.linaro.org 
[mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Bill Fischofer
Sent: Monday, November 03, 2014 8:11 PM
To: Shmulik Ladkani
Cc: lng-odp@lists.linaro.org
Subject: Re: [lng-odp] [Q] Memory allocation in ODP applications

ODP APIs are designed to be used a la carte by applications, as ODP is a 
framework, not a platform.  So feel free to mix in malloc(), your own memory 
management, or other API calls as needed.

What ODP does require is that you use the types specified in its APIs. For 
example, the only way to get an odp_buffer_t is via the odp_buffer_alloc() 
call.  odp_buffer_alloc() in turn requires an odp_buffer_pool_t, which in turn 
requires an odp_buffer_pool_create() call.
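For illustration, that dependency chain looks roughly like this. This is a 
sketch only: it follows the pre-1.0 ODP API being discussed in this thread, 
and pool/buffer signatures have changed across ODP releases, so treat the 
exact arguments as illustrative rather than authoritative:

```c
#include <odp.h>

/* Sketch: signatures follow the pre-1.0 ODP API of this era and may
 * differ in your ODP release. Pool name and sizes are made up. */
#define POOL_SIZE (512 * 1024)
#define BUF_SIZE  256

odp_buffer_pool_t pool;
odp_buffer_t buf;

/* 1. A pool must exist before any buffer can be allocated. */
pool = odp_buffer_pool_create("ctx_pool", NULL, POOL_SIZE,
                              BUF_SIZE, ODP_CACHE_LINE_SIZE,
                              ODP_BUFFER_TYPE_RAW);

/* 2. odp_buffer_t values come only from odp_buffer_alloc(). */
buf = odp_buffer_alloc(pool);
if (buf != ODP_BUFFER_INVALID) {
        void *data = odp_buffer_addr(buf); /* raw memory, use freely */
        /* ... application use ... */
        odp_buffer_free(buf);
}
```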

ODP_BUFFER_TYPE_RAW simply exposes the basic block manager functions of the ODP 
buffer APIs.  Again, you're free to use them for whatever purpose the 
application wants.  Obviously one reason for doing so is to gain portability 
across potentially different memory management implementations.
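To make the distinction concrete between using a RAW pool and reserving shared 
memory and managing it yourself: a RAW pool is essentially a fixed-size block 
allocator carved out of one contiguous region, which is what an application 
ends up hand-rolling over odp_shm_reserve memory. Below is a plain-C analogy 
(no ODP calls; the pool_t type and function names are invented for 
illustration):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only: a fixed-size block allocator over one contiguous
 * region, mimicking what an ODP_BUFFER_TYPE_RAW pool provides. */
#define BLOCK_SIZE 64
#define NUM_BLOCKS 8

typedef struct pool {
	uint8_t mem[BLOCK_SIZE * NUM_BLOCKS];
	void   *free_list;	/* singly linked list threaded through free blocks */
} pool_t;

static void pool_init(pool_t *p)
{
	p->free_list = NULL;
	for (int i = 0; i < NUM_BLOCKS; i++) {
		void *blk = &p->mem[i * BLOCK_SIZE];
		/* Store the next-pointer inside the free block itself. */
		memcpy(blk, &p->free_list, sizeof(void *));
		p->free_list = blk;
	}
}

static void *pool_alloc(pool_t *p)	/* ~ odp_buffer_alloc() */
{
	void *blk = p->free_list;
	if (blk)
		memcpy(&p->free_list, blk, sizeof(void *));
	return blk;	/* NULL when the pool is exhausted */
}

static void pool_free(pool_t *p, void *blk)	/* ~ odp_buffer_free() */
{
	memcpy(blk, &p->free_list, sizeof(void *));
	p->free_list = blk;
}
```

The point of the analogy: allocation and free are O(1) pointer swaps with no 
system calls, which is why either a RAW pool or a hand-managed shm region is 
preferred over malloc() in data-path code.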

On Mon, Nov 3, 2014 at 11:02 AM, Shmulik Ladkani 
<shmulik.ladk...@gmail.com> wrote:
Hi,

As demonstrated by odp/examples, libc memory allocation routines (malloc
and friends) aren't used by ODP apps (except within a very tight
temporary scope, usually in the control thread).

Applications' data structures are allocated either using the odp_buffer_pool
interface (e.g. odp_ipsec.c allocating its private per-packet context
structure from an ODP_BUFFER_TYPE_RAW pool), or directly using
odp_shm_reserve and managing the memory internally (e.g. odp_ipsec_fwd_db.c,
odp_ipsec_sp_db.c, etc.).

Questions:

Must the application use ODP memory allocation interfaces at all times?
Or only for data structures accessed in data-path routines?
Or only when allocating them from within data-path routines?

Any preference for when to use an ODP_BUFFER_TYPE_RAW pool vs. reserving
shared memory and managing data structures internally?

Thanks,
Shmulik

_______________________________________________
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp
