HW is free to organize whatever underlying facilities it has at its
disposal to match ODP API semantics.  That's the whole point of having ODP
APIs specify only what's actually needed at the application level and
giving the implementation freedom in how to provide the needed semantics.
For example, we've already discussed how an implementation that supports
only a single physical pool might partition that into multiple logical
pools using usage counters or some other scheme.  So there's no assurance
that the ODP concept of pools map one-to-one to underlying HW concepts.
The actual structure of a pool is completely opaque.  The only observable
semantics are those that the ODP APIs define.  If we take the
API/Implementation separation model seriously, we really need to take care
to ensure that ODP API definitions do not assume a particular underlying
implementation.
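
To make that concrete, here is a minimal, purely hypothetical sketch of the
kind of thing an implementation might do internally: one physical buffer
pool presented as several logical pools by charging allocations against
per-pool usage counters.  None of this is ODP API; the names are
illustrative only and hw_buf_alloc() is just a stand-in for whatever the
platform really does.

#include <stdint.h>
#include <stdlib.h>
#include <stdatomic.h>

struct logical_pool {
    uint32_t quota;              /* max buffers this logical pool may own */
    atomic_uint_fast32_t used;   /* buffers currently allocated from it   */
};

/* Stand-in for the platform's single physical pool allocator. */
static void *hw_buf_alloc(void)
{
    return malloc(2048);
}

/* Allocate on behalf of one logical pool: the physical pool is shared,
 * only the accounting is per logical pool. */
static void *logical_pool_alloc(struct logical_pool *lp)
{
    if (atomic_fetch_add(&lp->used, 1) >= lp->quota) {
        atomic_fetch_sub(&lp->used, 1);  /* this logical pool is "empty" */
        return NULL;
    }
    return hw_buf_alloc();
}

The observable behavior (a pool that can run dry independently of the
others) is all the application ever sees; the single physical pool stays
invisible.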

In the case of segmentation, if the application is allowed to specify pool
segment sizes, then what the API is actually specifying is what the
application wishes to observe--it has nothing to do with how the underlying
implementation may actually work on a given platform.  So we can easily get
into a situation with this or other APIs that try to specify implementation
behavior where the application believes it is helping to "optimize" things
but is only causing additional work and overhead at the implementation
level to present the requested "optimized" observable behavior back to the
application.
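
To illustrate that input/output pattern (using the init_seg_len field from
the proposal quoted below; the struct here is a hypothetical stand-in, not
an existing ODP type), the application only ever states what it wishes to
observe and then inspects what the implementation reported back:

#include <stdint.h>

/* Hypothetical mirror of the proposed in/out fields; not an ODP type. */
struct proposed_pkt_pool_param {
    uint32_t init_seg_len;   /* in: desired, out: what was granted */
    uint32_t seg_len;        /* in: desired, out: what was granted */
};

static int check_pool_config(struct proposed_pkt_pool_param *p,
                             uint32_t wanted_init_seg_len)
{
    p->init_seg_len = wanted_init_seg_len;

    /* ... pool creation would happen here; the implementation rewrites
     *     the fields with whatever it could actually provide ... */

    if (p->init_seg_len < wanted_init_seg_len) {
        /* Request not met: the application decides whether this is a
         * fatal error or just means a slower SW workaround. */
        return -1;
    }
    return 0;
}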

This is why I keep coming back to fundamentals.  Either the application is
or is not prepared to deal with segments.  If it is not, just say so and
let the implementation do whatever it needs to do to hide segments from the
application.  If it is, then just deal with whatever segment sizes are
native to that platform, trusting that the designers of that platform know
what they are doing.  Writing segment-size independent code is no different
than writing endian-neutral code or code that can run on both 32-bit and
64-bit platforms.  Trying to control segment sizes at the application level is
effectively saying that the application really can't deal with segments but
somehow feels guilty about that and hence wants to "support" segment
processing by mandating sizes such that it won't ever actually have to deal
with them.
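
For what "segment-size independent" looks like in practice, here is a rough
sketch that walks whatever segments a packet happens to have.  The
segment-iteration calls are written from memory of the ODP packet API and
the exact names may differ between API versions:

#include <odp.h>

/* Touch every byte of a packet without assuming any particular segment
 * size; the loop simply follows whatever segmentation the platform uses. */
static uint32_t sum_packet_bytes(odp_packet_t pkt)
{
    uint32_t sum = 0;
    odp_packet_seg_t seg = odp_packet_first_seg(pkt);

    while (seg != ODP_PACKET_SEG_INVALID) {
        const uint8_t *data = odp_packet_seg_data(pkt, seg);
        uint32_t len = odp_packet_seg_data_len(pkt, seg);

        for (uint32_t i = 0; i < len; i++)
            sum += data[i];

        seg = odp_packet_next_seg(pkt, seg);
    }
    return sum;
}

Nothing in this loop changes if the platform uses 256-byte segments,
2 KB segments, or no segmentation at all.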

Taras rightly points out that we already have a perfectly usable model for
sorting packets into unsegmented pools of varying sizes based on the
lengths of incoming packets. This seems the perfect compromise between
application simplicity and memory efficiency, and I'm not sure why we
wouldn't want to promote this approach for applications that don't want to
deal with segmentation.
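
A hedged sketch of that model: two unsegmented pools sized for short and
long frames, with the classifier steering received packets to one or the
other by length.  The pool-parameter fields and the odp_pool_create()
arguments are written from memory and the pool/classification API is still
in flux, so treat the calls as illustrative; steer_by_len() in particular
is a purely hypothetical stand-in for whatever classification rule does the
sorting.

#include <string.h>
#include <odp.h>

/* Hypothetical stand-in, not an ODP call: real code would use the
 * classification API to sort frames of length <= cutoff into 'small'
 * and everything else into 'large'. */
static void steer_by_len(odp_pktio_t pktio, uint32_t cutoff,
                         odp_pool_t small, odp_pool_t large)
{
    (void)pktio; (void)cutoff; (void)small; (void)large;
}

static void setup_length_sorted_pools(odp_pktio_t pktio)
{
    odp_pool_param_t sp, lp;

    memset(&sp, 0, sizeof(sp));
    memset(&lp, 0, sizeof(lp));

    sp.type    = ODP_POOL_PACKET;
    sp.pkt.len = 256;      /* short frames: one 256-byte buffer each */
    sp.pkt.num = 8192;

    lp.type    = ODP_POOL_PACKET;
    lp.pkt.len = 9000;     /* jumbo frames: still one buffer each    */
    lp.pkt.num = 1024;

    odp_pool_t small_pool = odp_pool_create("small_pkts", ODP_SHM_NULL, &sp);
    odp_pool_t large_pool = odp_pool_create("large_pkts", ODP_SHM_NULL, &lp);

    steer_by_len(pktio, 256, small_pool, large_pool);
}

The application never sees a segment in either pool; memory efficiency
comes from the length-based sorting rather than from dictating segment
sizes.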

On Thu, Feb 5, 2015 at 4:18 PM, Ola Liljedahl <ola.liljed...@linaro.org>
wrote:

> On 5 February 2015 at 22:12, Taras Kondratiuk
> <taras.kondrat...@linaro.org> wrote:
> > On 02/03/2015 11:04 PM, Ola Liljedahl wrote:
> >> My alternative approach that should achieve the same goals as Petri's
> >> but give more freedom to implementations. You don't have to approve of
> >> it, I just want to show that given a defined and understood problem,
> >> many potential solutions exist and the first alternative might not be
> >> the best. Let's work together.
> >>
> >> * init_seg_len:
> >>         On input: user's required (desired) minimal length of initial
> >> segment (including headroom)
> >>         On output: implementation's best effort to match user's request
> >>         Purpose: ensure that those packet headers the application
> >> normally will process are stored in a consecutive memory area.
> >>         Applications do not have to check any configuration in order
> >> to initialize a configuration which the implementation anyway has to
> >> check if it can support.
> >>         Applications should check output values to see if their desired
> >> values were matched. The application decides whether a failure to
> >> match is a fatal error or the application can handle the situation
> >> anyway (e.g. with degraded performance because it has to do some
> >> workarounds in SW).
> >>
> >> * seg_len:
> >>         On input: user's desired length of other segments
> >>         On output: implementation's best effort to match user's request
> >>         Purpose: a hint from the user how to partition the pool into
> >> segments for best trade-off between memory utilization and SW
> >> processing performance.
> >>         Note: I know some HW can support multiple segment sizes so I
> >> think it is useful for the API to allow for this. Targets which only
> >> support one segment size (per packet pool) could use e.g.
> >> max(init_seg_len, seg_len). Some targets may not allow user-defined
> >> segment sizes at all, the ODP implementation will just return the
> >> actual values and the application can check whether those are
> >> acceptable.
> >>
> >> * init_seg_num:
> >>        On input: Number of initial segments.
> >>        On output: Updated with actual number of segments if a shared
> >> memory region was specified?
> >> * seg_num:
> >>         On input: Number of other segments.
> >>        On output: Updated with actual number of segments if a shared
> >> memory region was specified?
> >
> > Wouldn't it be simpler to create two pools and assign both to Pktio?
> Exactly the kind of constructive comments I want to enable by starting
> with defining and agreeing on the problem!
>
> If this is not an acceptable solution because of some other (currently
> unknown) requirement, we need to understand and formalize that
> requirement as well.
>
> Is there any disadvantage of pre-segregating the buffers into
> different pools (with different segment sizes)? Could this limit how
> HW uses those segments for different purposes? Can a multi-segment
> packet contain segments from different pools?
>