> On 17 Dec 2015, at 16:26, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
> 
> On 12/17/2015 08:56, HePeng wrote:
>> Hi, Maxim
>> 
>> I will present a quick patch that will not pass the kernel style
>> check, as this one is just for evaluation.
> OK, in that case you can add an RFC tag to the patch.

Sorry, I misunderstood your decision. I will provide a
patch that passes the style check soon.

Another question: last time you told me about moving
the cuckoo hash into ODP/, not helper/. Has anything
changed? If not, I am going to add it in ODP/.

> 
> Maxim.
>> 
>>> On 16 Dec 2015, at 20:22, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
>>> 
>>> In the meeting we decided that we want a solution based on the pool. Can you
>>> post those patches? We will try to help improve the performance.
>>> 
>>> Maxim.
>>> 
>>> On 12/14/2015 22:30, Mike Holmes wrote:
>>>> Added to Tuesday agenda
>>>> 
>>>> On 12 December 2015 at 22:18, HePeng <xnhp0...@icloud.com> wrote:
>>>> 
>>>>    Ping.
>>>> 
>>>>    So what is your decision on this?
>>>> 
>>>>>    On 10 Dec 2015, at 13:06, HePeng <xnhp0...@icloud.com> wrote:
>>>>> 
>>>>>>    On 9 Dec 2015, at 18:49, Ola Liljedahl <ola.liljed...@linaro.org> wrote:
>>>>>> 
>>>>>>    On 9 December 2015 at 06:31, HePeng <xnhp0...@icloud.com> wrote:
>>>>>> 
>>>>>> 
>>>>>>>        On 8 Dec 2015, at 21:34, Ola Liljedahl
>>>>>>>        <ola.liljed...@linaro.org> wrote:
>>>>>>> 
>>>>>>>        On 8 December 2015 at 12:42, Bill Fischofer
>>>>>>>        <bill.fischo...@linaro.org> wrote:
>>>>>>> 
>>>>>>>            This is an interesting topic.  I'd like to discuss this
>>>>>>>            a bit during today's ODP public call.
>>>>>>> 
>>>>>>>            I think the issue is that while a ring is a very useful
>>>>>>>            implementation construct, its semantics are very SW-centric.
>>>>>>> 
>>>>>>>        But isn't the use case here SW-centric?
>>>>>>> 
>>>>>>>            Perhaps there's an opportunity here for a new queue type
>>>>>>>            that would permit an implementation to provide the
>>>>>>>            queue API as a ring for additional performance?
>>>>>>> 
>>>>>>>        I think it is a bad idea to overload the ODP queue with
>>>>>>>        another type of usage and implied implementation. Better to
>>>>>>>        define a new ODP object.
>>>>>>> 
>>>>>>>        What are the real requirements of the "ring" as used by the
>>>>>>>        cuckoo hash design? Enqueue/dequeue operations. Fixed size
>>>>>>>        is OK? Storing arbitrary objects (not just ODP events)?
>>>>>>        The real requirement is to use the ring as a sort of container.
>>>>>>        A fixed size is OK. And the ring should be able to
>>>>>>        store any ODP events, since it is used as a container,
>>>>>>        like a vector in the C++ STL.
>>>>>> 
>>>>>>    What if I would like to store objects that are not ODP events in
>>>>>>    the cuckoo hash table? E.g. arbitrary pointers (to memory that
>>>>>>    is managed in some other way).
>>>>>> 
>>>>>    Yes, that is what the current ring implementation does.
>>>>> 
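>>>>>    It just holds plain pointers; a minimal sketch of that kind of
>>>>>    fixed-size pointer container (hypothetical names, single-threaded,
>>>>>    for illustration only) would be:
>>>>> 
>>>>>    #include <stdint.h>
>>>>> 
>>>>>    typedef struct {
>>>>>        uint32_t size;   /* capacity, a power of two        */
>>>>>        uint32_t mask;   /* size - 1, for cheap wrap-around */
>>>>>        uint32_t head;   /* next slot to dequeue from       */
>>>>>        uint32_t tail;   /* next slot to enqueue to         */
>>>>>        void    *objs[]; /* stored objects: any pointers    */
>>>>>    } ring_t;
>>>>> 
>>>>>    /* Enqueue one arbitrary pointer; returns 0 on success, -1 if full. */
>>>>>    static inline int ring_enq(ring_t *r, void *obj)
>>>>>    {
>>>>>        if (r->tail - r->head == r->size)
>>>>>            return -1;                        /* full */
>>>>>        r->objs[r->tail++ & r->mask] = obj;
>>>>>        return 0;
>>>>>    }
>>>>> 
>>>>>    /* Dequeue one pointer; returns NULL if the ring is empty. */
>>>>>    static inline void *ring_deq(ring_t *r)
>>>>>    {
>>>>>        if (r->tail == r->head)
>>>>>            return NULL;                      /* empty */
>>>>>        return r->objs[r->head++ & r->mask];
>>>>>    }
>>>>> 
>>>>>    (The real ring implementation also has to handle concurrent
>>>>>    producers and consumers, which this sketch ignores.)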
>>>>> 
>>>>>> 
>>>>>>>              The scheduler itself could use this since its use of
>>>>>>>            queues is subject to the same issues.
>>>>>>> 
>>>>>>>            On Mon, Dec 7, 2015 at 11:39 PM,
>>>>>>>            HePeng <xnhp0...@icloud.com> wrote:
>>>>>>> 
>>>>>>>                Hi Maxim,
>>>>>>>                        I have implemented a new version of the cuckoo
>>>>>>>                hash based on the ODP buffer/pool.
>>>>>>> 
>>>>>>>                        As I’ve mentioned earlier, the ring in the
>>>>>>>                cuckoo hash is used as a container, e.g. like a queue
>>>>>>>                in the C++ STL. In the current ODP implementation, I
>>>>>>>                found that the ODP queue is implemented more as a
>>>>>>>                facility for transmitting objects than as a container
>>>>>>>                for storing them. Look at the ODP queue enqueue
>>>>>>>                interface:
>>>>>>> 
>>>>>>>                        int odp_queue_enq(odp_queue_t queue,
>>>>>>>                odp_event_t ev);
>>>>>>> 
>>>>>>>                the *odp_event_t* parameter is tied to events, but I
>>>>>>>                only want to use odp_queue for storing and retrieving
>>>>>>>                objects, any objects.
>>>>>>> 
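>>>>>>>                With the current API, the only way I see to push an
>>>>>>>                arbitrary pointer through a queue is to wrap it in a
>>>>>>>                buffer event first, roughly like this sketch (error
>>>>>>>                handling trimmed, untested; the pool is assumed to be
>>>>>>>                an ODP_POOL_BUFFER pool with room for a pointer):
>>>>>>> 
>>>>>>>                #include <odp.h>
>>>>>>> 
>>>>>>>                /* Wrap an arbitrary pointer in a buffer and enqueue
>>>>>>>                 * it as an event. */
>>>>>>>                static int enq_ptr(odp_queue_t q, odp_pool_t pool,
>>>>>>>                                   void *ptr)
>>>>>>>                {
>>>>>>>                    odp_buffer_t buf = odp_buffer_alloc(pool);
>>>>>>> 
>>>>>>>                    if (buf == ODP_BUFFER_INVALID)
>>>>>>>                        return -1;
>>>>>>>                    *(void **)odp_buffer_addr(buf) = ptr;
>>>>>>>                    return odp_queue_enq(q, odp_buffer_to_event(buf));
>>>>>>>                }
>>>>>>> 
>>>>>>>                /* Dequeue, recover the pointer, free the buffer. */
>>>>>>>                static void *deq_ptr(odp_queue_t q)
>>>>>>>                {
>>>>>>>                    odp_event_t ev = odp_queue_deq(q);
>>>>>>>                    odp_buffer_t buf;
>>>>>>>                    void *ptr;
>>>>>>> 
>>>>>>>                    if (ev == ODP_EVENT_INVALID)
>>>>>>>                        return NULL;
>>>>>>>                    buf = odp_buffer_from_event(ev);
>>>>>>>                    ptr = *(void **)odp_buffer_addr(buf);
>>>>>>>                    odp_buffer_free(buf);
>>>>>>>                    return ptr;
>>>>>>>                }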
>>>>>>> 
>>>>>>>                        So I used the ODP buffer/pool interfaces
>>>>>>>                instead of the ODP queue for the new cuckoo hash
>>>>>>>                implementation.
>>>>>>> 
>>>>>>>                However, compared to the previous ring-based
>>>>>>>                implementation, this version suffers a serious
>>>>>>>                performance degradation. The evaluation was carried
>>>>>>>                out on an Intel server. I measured the time for 1000
>>>>>>>                lookups on a table storing 1 million items. The ODP
>>>>>>>                buffer/pool version suffers at least a 2x performance
>>>>>>>                degradation.
>>>>>>> 
>>>>>>>                This is the buffer/pool version, for 1M inserts and
>>>>>>>                1000 lookups:
>>>>>>> 
>>>>>>>                Average insert time = 2.383836, lookup time = 0.000353
>>>>>>> 
>>>>>>>                This is the ring version:
>>>>>>> 
>>>>>>>                Average insert time = 1.629115, lookup time = 0.000098
>>>>>>> 
>>>>>>>                        This performance degradation stems from the
>>>>>>>                heavier implementation of the ODP buffer/pool. In the
>>>>>>>                ring-based one, all the keys are stored in one big
>>>>>>>                array, and the ring only stores the array index of
>>>>>>>                each key. Keys are retrieved using array indexes. In
>>>>>>>                the new one, I use an ODP buffer to store each key's
>>>>>>>                content. Keys are retrieved by dereferencing an
>>>>>>>                *odp_buffer_t* handle.
>>>>>>> 
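>>>>>>>                The two key-retrieval paths differ roughly like this
>>>>>>>                sketch (hypothetical names, simplified; KEY_LEN is an
>>>>>>>                illustrative constant):
>>>>>>> 
>>>>>>>                #include <stdint.h>
>>>>>>>                #include <odp.h>
>>>>>>> 
>>>>>>>                #define KEY_LEN 16   /* hypothetical key size */
>>>>>>> 
>>>>>>>                struct hash_key {
>>>>>>>                    uint8_t data[KEY_LEN];
>>>>>>>                };
>>>>>>> 
>>>>>>>                /* Ring version: the bucket slot holds an index into
>>>>>>>                 * one contiguous key array; retrieval is a plain
>>>>>>>                 * array access. */
>>>>>>>                static inline struct hash_key *
>>>>>>>                key_from_index(struct hash_key *key_array, uint32_t idx)
>>>>>>>                {
>>>>>>>                    return &key_array[idx];
>>>>>>>                }
>>>>>>> 
>>>>>>>                /* Buffer/pool version: the bucket slot holds an
>>>>>>>                 * odp_buffer_t handle; retrieval goes through the
>>>>>>>                 * handle-to-address translation. */
>>>>>>>                static inline struct hash_key *
>>>>>>>                key_from_buffer(odp_buffer_t buf)
>>>>>>>                {
>>>>>>>                    return (struct hash_key *)odp_buffer_addr(buf);
>>>>>>>                }
>>>>>>> 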
>>>>>>>                        Given this large performance degradation, I
>>>>>>>                suggest moving the ring into odp/api as a container
>>>>>>>                implementation, and using the previous cuckoo hash
>>>>>>>                implementation.
>>>>>>> 
>>>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> -- 
>>>> Mike Holmes
>>>> Technical Manager - Linaro Networking Group
>>>> Linaro.org <http://www.linaro.org/> │ Open source software for ARM SoCs
>>>> 
>>>> 
>>>> 
> 
