On 10/21/2015 10:28 AM, Savolainen, Petri (Nokia - FI/Espoo) wrote:
>
>> -----Original Message-----
>> From: EXT Nicolas Morey-Chaisemartin [mailto:nmo...@kalray.eu]
>> Sent: Wednesday, October 21, 2015 11:03 AM
>> To: Savolainen, Petri (Nokia - FI/Espoo); lng-odp@lists.linaro.org
>> Subject: Re: [lng-odp] [PATCH 2/2] linux-generic: pktio: replace pktio
>> ticketlock by a rwlock
>>
>>
>>
>> On 10/21/2015 09:35 AM, Savolainen, Petri (Nokia - FI/Espoo) wrote:
>>> It would be better to move send / recv locking inside the callbacks. E.g.
>>> netmap with multiple queues could reserve a single thread per input queue
>>> and remove the need for any receive side locking (instead of the double
>>> locking of this patch). Locking for pktio status data and send/recv could
>>> be separated. Send/recv is performance critical, whereas e.g. reading or
>>> writing the MTU is not. Pktio status checks could be done only in a debug
>>> build.
>>>
>>> -Petri
>> We still need a lock at the pktio level to make sure the interface is not
>> being updated (close might be a user issue, but start/stop/defq less so).
>> The double locking is a temporary thing until netmap can handle concurrent
>> rx and/or tx. As soon as the implementation supports that, only the RW lock
>> remains, and it is not that costly and scales quite well with the number of
>> cores.
> The pending pktio start/stop spec says that by default any interface config
> must not be changed while the interface is active (started). When send/recv
> locking is inside the calls, those can decide whether e.g. a read lock of
> the interface object is needed or not (it may be needed only in a DEBUG
> build).
>
> -Petri
>
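For reference, here is a minimal sketch of what locking inside the send/recv
callbacks could look like for a netmap-style backend. The type and function
names are hypothetical (not the actual linux-generic internals); it only
illustrates the idea: the interface state is checked only in debug builds,
and no RX lock is taken because each input queue is owned by a single thread.

#include <assert.h>

/* Hypothetical types/names, for illustration only */
typedef struct {
        int started;             /* set/cleared by start/stop */
        /* per-queue ring state; one RX queue is bound to one thread */
} my_pktio_entry_t;

/* netmap-style recv callback: the calling thread owns its input queue,
 * so no RX lock is needed; the interface state is checked only when
 * assertions are enabled (debug build) */
static int my_netmap_recv(my_pktio_entry_t *entry, void *pkts[], int num)
{
#ifndef NDEBUG
        assert(entry->started);
#endif
        /* ... poll this thread's own ring and fill pkts[0..num-1] ... */
        (void)pkts;
        (void)num;
        return 0;                /* number of packets received */
}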
OK, that would work. But do you feel the performance gain would outweigh the
added "complexity" it brings?
IMHO RW locks are very cheap when you only take the read side.
But removing them moves all the risk and the careful handling of race
conditions into each implementation.
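For comparison, a minimal sketch of the pktio-level RW lock approach (again
with hypothetical names; the real patch would presumably use ODP's
odp_rwlock_t rather than pthreads):

#include <pthread.h>

typedef struct {
        pthread_rwlock_t lock;  /* read: send/recv fast path, write: state changes */
        int started;
        int (*recv)(void *backend, void *pkts[], int num); /* backend callback */
        void *backend;
} my_pktio_t;

/* Fast path: any number of threads can hold the read lock concurrently,
 * so contention only appears when start/stop/close takes the write lock */
int my_pktio_recv(my_pktio_t *pktio, void *pkts[], int num)
{
        int ret = -1;

        pthread_rwlock_rdlock(&pktio->lock);
        if (pktio->started)
                ret = pktio->recv(pktio->backend, pkts, num);
        pthread_rwlock_unlock(&pktio->lock);

        return ret;
}

/* Slow path: the write lock waits for in-flight send/recv to drain before
 * the interface state is changed */
int my_pktio_stop(my_pktio_t *pktio)
{
        pthread_rwlock_wrlock(&pktio->lock);
        pktio->started = 0;
        pthread_rwlock_unlock(&pktio->lock);
        return 0;
}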

Nicolas