On Mon, Oct 23, 2017 at 2:22 PM, Maxim Uvarov <maxim.uva...@linaro.org>
wrote:

> On 10/23/17 18:23, Francois Ozog wrote:
> > Hi Maxim,
> >
> > Is this regular memory or DMA capable memory?
> >
> > This would mean the application would know how, for instance, to
> > handle things when memory is lower than a threshold or something
> > similar; or it may decide to use a percentage of memory for packet
> > pools; or many other policies. I don't see that feature in "mature"
> > applications like VPP.
> >
> > Applications that want to do that may still use platform information
> > directly, but I think it is too early to work on an ODP API for that.
> > An ODP memory API will also have to deal with NUMA...
> >
> > FF
>
>
> I came to that question after Petri's api to print ODP shm info.
>
> We have pool capabilities which describe how many pools of which sizes
> can be created. DMA memory should be connected to the pool capabilities
> info.
>

I had this discussion with Yi earlier today and we'll be covering it during
tomorrow's ODP 2.0 call. The question of adding pool attributes to cover
the type of storage, whether it needs to be physically contiguous, etc. is
a reasonable one to explore. Certainly the long-delayed NUMA topic is
relevant here. Petri has indicated he has more ideas to follow in this
area as well.


>
> But odp_shm_reserve() (which is also connected to pool resources) does
> not have a maximum value for a reservation. On some platforms not all
> system RAM can be used as shared memory, so to run apps there you will
> need to specify somewhere the amount of RAM available to the ODP
> application.
>
> If the query of total/free memory for odp_shm_reserve() is platform
> specific, then a single binary will not work on different platforms, or
> it will require some start-up script to specify this amount. If it is a
> script, then it will have a bunch of 'if platform' conditions. If we
> place it inside ODP, then it will be easy to get this number.
>

The problem with memory is it's not so simple to come up with a single
number. That's why odp_pool_create() exists and lets the application know
up-front whether it succeeds or fails. If it succeeds, then the application
doesn't have to worry about runtime surprises in not being able to allocate
a buffer/packet, as long as it's maintaining a steady-state of allocates
and frees. Remember, in the data plane the idea is to move packets through
the application rapidly to minimize overall latency. Storing huge numbers
of "pending" packets is contrary to this goal. Hence the number of "in
flight" packets (plus a safety margin) is what determines the needed pool
size, and that's more a function of the number of pktios and their I/O
rates than the available RAM.


>
> Maxim.
>
>
> In general we need allocated
> >
> > On 23 October 2017 at 14:26, Bill Fischofer <bill.fischo...@linaro.org>
> > wrote:
> >
> >> Applications should request the amount of storage they need (possibly as
> >> configured) rather than trying to grab everything they can find "just in
> >> case". Especially in an NFV environment that's not very neighborly
> >> behavior.
> >>
> >> On Mon, Oct 23, 2017 at 2:44 AM, Dmitry Eremin-Solenikov <
> >> dmitry.ereminsoleni...@linaro.org> wrote:
> >>
> >>> On 23/10/17 10:39, Maxim Uvarov wrote:
> >>>> It might be reasonable to also add an API call to return free
> >>>> memory, so that the application can adjust pool/buffer sizes
> >>>> according to hardware or VM settings, which might be a good fit
> >>>> for an NFV setup.
> >>>> Any opinions on that?
> >>>
> >>> It would depend on the platform too much. Also remember, that in some
> >>> cases buffers/packets will use separate memory, not main RAM.
> >>>
> >>> --
> >>> With best wishes
> >>> Dmitry
> >>>
> >>
> >
> >
> >
>
>
