On 23 July 2015 at 12:09, Nicolas Morey-Chaisemartin <nmo...@kalray.eu>
wrote:

>
>
> On 07/23/2015 07:43 AM, Bala Manoharan wrote:
>
>
>
> On 21 July 2015 at 13:05, Nicolas Morey-Chaisemartin <nmo...@kalray.eu> wrote:
>
>>
>>
>> On 07/20/2015 07:24 PM, Bala Manoharan wrote:
>>
>> Hi,
>>
>>  Few comments inline
>>
>> On 20 July 2015 at 22:38, Nicolas Morey-Chaisemartin <nmo...@kalray.eu> wrote:
>>
>>> Replace the current implicit segmentation limit with an explicit define.
>>> This mainly means two things:
>>>  - All code can now test and check the maximum number of segments, which
>>>    will prove useful for tests and opens the way for many code
>>>    optimizations.
>>>  - The minimum segment length and the maximum buffer length can now be
>>>    decoupled. This means that pools with very small footprints can be
>>>    allocated for small packets, while pools for jumbo frames will still
>>>    work as long as seg_len * ODP_CONFIG_PACKET_MAX_SEG >= packet_len.
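>>>
>>>    For example, with hypothetical values seg_len = 2048 and
>>>    ODP_CONFIG_PACKET_MAX_SEG = 6, a pool of small 2 KB segments can
>>>    still hold packets up to 2048 * 6 = 12288 bytes, so a 9000-byte
>>>    jumbo frame fits.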
>>>
>>>  Signed-off-by: Nicolas Morey-Chaisemartin <nmo...@kalray.eu>
>>> ---
>>>  include/odp/api/config.h                             | 10 +++++++++-
>>>  platform/linux-generic/include/odp_buffer_internal.h |  9 +++------
>>>  platform/linux-generic/odp_pool.c                    |  4 ++--
>>>  test/validation/packet/packet.c                      |  3 ++-
>>>  4 files changed, 16 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/include/odp/api/config.h b/include/odp/api/config.h
>>> index b5c8fdd..1f44db6 100644
>>> --- a/include/odp/api/config.h
>>> +++ b/include/odp/api/config.h
>>> @@ -108,6 +108,13 @@ extern "C" {
>>>  #define ODP_CONFIG_PACKET_SEG_LEN_MAX (64*1024)
>>>
>>>  /**
>>> + * Maximum number of segments in a packet
>>> + *
>>> + * This defines the maximum number of segment buffers in a packet
>>> + */
>>> +#define ODP_CONFIG_PACKET_MAX_SEG 6
>>>
>>
>>  What is the use case of the above define? Does it mean that the packet
>> should not be stored in a pool if the maximum number of segments is
>> reached? If this is something used only in linux-generic, we can define
>> it in an internal header file.
>>
>>  The reason is that the #defines in the config.h file have to be defined
>> by all platforms.
>>
>>  Regards,
>> Bala
>>
>>    This may be a little too linux-generic oriented, I guess. What I'm
>> looking for is a clean way to handle segment length vs. packet length
>> in pools.
>>
>
>  Optimisations specific to linux-generic should be in an internal header
> and not in the config file, as any change in the config file will have
> to be handled by all platforms.
>
>
> Agreed. But supporting segmentation is still a platform/HW-related
> feature, and I think it has its place somewhere in config.h, although
> probably not in this form.
>
> Maybe something as simple as ODP_CONFIG_SEGMENTATION_SUPPORT ?
>
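> Purely as a sketch of how such a define could gate a fast path
> (ODP_CONFIG_SEGMENTATION_SUPPORT is only the name proposed above, not
> an existing ODP symbol):
>
>     #if ODP_CONFIG_SEGMENTATION_SUPPORT
>             /* generic multi-segment handling */
>     #else
>             /* one segment per packet: segment walking can be compiled out */
>     #endif
>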

IMO, this configuration is not needed, as the application need not know
whether the underlying HW supports the requested pool configuration
through segments or unsegmented pools. Since all the APIs for getting a
packet pointer return the length of the current segment, the application
can be agnostic about how the packet is stored in the pool.
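
For example, here is a minimal sketch (assuming only the standard
segment accessors from the ODP packet API; error handling omitted) of
how an application can consume a packet without knowing how many
segments the implementation created:

    #include <odp.h>

    static void consume_packet(odp_packet_t pkt)
    {
            odp_packet_seg_t seg = odp_packet_first_seg(pkt);

            while (seg != ODP_PACKET_SEG_INVALID) {
                    uint8_t *data = odp_packet_seg_data(pkt, seg);
                    uint32_t len = odp_packet_seg_data_len(pkt, seg);

                    /* process 'len' bytes at 'data' */

                    seg = odp_packet_next_seg(pkt, seg);
            }
    }

The same loop runs exactly one iteration on an unsegmented packet, so
it is correct whether or not the pool creates segments.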

>
>>   I was trying to kill two birds with one stone in this patch:
>> - Be able to disable segmentation completely and add compile-time fast
>> paths in the code to avoid segment computations.
>> - Fix the packet validation test (and maybe enhance my proposal for
>> pktio/segmentation), which relies heavily on the number of supported
>> segments.
>>
>> For testing, the main issue I guess is that there is no way to know the
>> actual segment length and total length used by the pool. We could dig
>> into the internals, but that would make the tests platform-specific.
>> Something like odp_pool_get_seg_len() and odp_pool_get_len() could be
>> quite useful for building tests, but not very interesting for end users...
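>>
>> Hypothetical signatures, just to make the idea concrete (neither
>> function exists in the ODP API today):
>>
>>     uint32_t odp_pool_get_seg_len(odp_pool_t pool); /* actual segment length */
>>     uint32_t odp_pool_get_len(odp_pool_t pool);     /* actual max packet length */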
>>
>
>  IMO, the tests for segmentation should be written in such a way that
> the validation suite does not fail if the implementation has handled the
> given requirement without creating segments, as technically segmentation
> is an implementation optimisation, not a requirement.
>
>  The validation suite should try to allocate a large packet from a pool
> with a small segment size, but it cannot simply expect that the
> implementation has stored it as segments. If the packet is segmented,
> the segment tests should be run; otherwise the suite should not throw an
> error, since by not creating segments the implementation has not
> violated any ODP requirement.
>
>  Regards,
> Bala
>
>   Yes, we can probably try to allocate a packet starting from
> MAX_BUF_LEN and retry with a reduced size until we get a success.
> Definitely not very pretty, but it should be portable.
>
>
Yes. This approach should be fine.

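A rough sketch of that probe (MAX_LEN is a hypothetical upper bound for
the test, not an ODP symbol; pool creation and cleanup omitted):

    #include <odp.h>

    #define MAX_LEN (64 * 1024) /* hypothetical starting point for the probe */

    static odp_packet_t alloc_largest(odp_pool_t pool, uint32_t *out_len)
    {
            uint32_t len = MAX_LEN;

            while (len > 0) {
                    odp_packet_t pkt = odp_packet_alloc(pool, len);

                    if (pkt != ODP_PACKET_INVALID) {
                            *out_len = len;
                            return pkt;
                    }
                    len /= 2; /* retry with a reduced size */
            }
            return ODP_PACKET_INVALID;
    }

The segment tests would then run only when odp_packet_is_segmented()
returns non-zero for the packet actually obtained, matching the rule
above that an unsegmented result is not an error.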

> Having a define for SEGMENTATION support would also make it possible to
> disable/skip tests that specifically rely on segmentation.
>
> Nicolas
>

Regards,
Bala