On 08/17/2011 04:46 PM, Paolo Bonzini wrote:
> On 08/16/2011 12:58 AM, Lai Jiangshan wrote:
>> This patch series implements a priority-boost urcu
>> based on pi-lock.
>>
>> Some other locks (especially rcu_gp_lock) should also be
>> priority-aware; these patches did touch them, which keeps
>> the patchset simpler.
> 
> While really cool, I found this patchset overly complex.
> 
> What we should introduce is abstractions over futexes. 

I think any general-purpose abstraction over futexes should be included in 
the pthread library.
A manual-reset event can/should be implemented over the pthread APIs.
But I don't want to add overhead on the read side.
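
For reference, a minimal sketch of what I mean by "over the pthread APIs":
a manual-reset event built only on a mutex and a condvar (the mre_* names
are mine, nothing that exists today):

#include <pthread.h>
#include <stdbool.h>

/* Manual-reset event over pthread mutex + condvar (illustrative only). */
struct mre {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        bool            signaled;
};

static void mre_init(struct mre *e)
{
        pthread_mutex_init(&e->lock, NULL);
        pthread_cond_init(&e->cond, NULL);
        e->signaled = false;
}

/* Set: release every current waiter and let future waiters pass
 * until mre_reset() is called. */
static void mre_set(struct mre *e)
{
        pthread_mutex_lock(&e->lock);
        e->signaled = true;
        pthread_cond_broadcast(&e->cond);
        pthread_mutex_unlock(&e->lock);
}

static void mre_reset(struct mre *e)
{
        pthread_mutex_lock(&e->lock);
        e->signaled = false;
        pthread_mutex_unlock(&e->lock);
}

static void mre_wait(struct mre *e)
{
        pthread_mutex_lock(&e->lock);
        while (!e->signaled)
                pthread_cond_wait(&e->cond, &e->lock);
        pthread_mutex_unlock(&e->lock);
}

The mutex acquisition in mre_wait() is exactly the kind of cost I do not
want anywhere near the read-side fast path.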

> This is what I did to experimentally port URCU to QEMU---my secret goal since 
> commit 806f811 (use kernel style makefile output, 2010-03-01). :)  Our use of 
> futexes is exceptionally similar to a Windows manual-reset event (yes, 
> Windows: 
> http://msdn.microsoft.com/en-us/library/system.threading.manualresetevent%28v=vs.80%29.aspx).
>   In QEMU I added the manual-reset event and use it in the implementation of 
> RCU.
> 
> By introducing an abstraction for this, we can make the code a lot clearer 
> and secondarily gain in portability.  For QEMU portability was actually my 
> primary goal, but URCU might have different priorities. :)
> 
> PI futex support can also be implemented in the same framework.
> 
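
If I read the futex-as-manual-reset-event idea correctly, the shape would be
roughly the sketch below. This is my guess at it, not your QEMU code; the
event_* names and the raw syscall wrapper are made up for illustration:

#define _GNU_SOURCE
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <limits.h>
#include <stdint.h>

/* Futex-backed manual-reset event: state 0 = reset, 1 = set. */
struct event {
        int32_t state;
};

static long sys_futex(int32_t *uaddr, int op, int32_t val)
{
        return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

static void event_set(struct event *e)
{
        __atomic_store_n(&e->state, 1, __ATOMIC_SEQ_CST);
        sys_futex(&e->state, FUTEX_WAKE, INT_MAX);      /* wake all waiters */
}

static void event_reset(struct event *e)
{
        __atomic_store_n(&e->state, 0, __ATOMIC_SEQ_CST);
}

static void event_wait(struct event *e)
{
        /* FUTEX_WAIT returns immediately if state is no longer 0 by the
         * time the kernel rechecks it, so wake-ups cannot be lost. */
        while (__atomic_load_n(&e->state, __ATOMIC_SEQ_CST) == 0)
                sys_futex(&e->state, FUTEX_WAIT, 0);
}

If the interface stays this small (set/reset/wait), I agree a PI-futex
flavour could presumably be swapped in behind it without touching callers.
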
> By the way, it is my impression that MB (perhaps MEMBARRIER too?) is way way 
> more similar to QSBR than to SIGNAL:
> 
>    MB rcu_read_unlock = QSBR rcu_thread_offline + nesting count
>    MB rcu_read_lock   = QSBR rcu_thread_online + nesting count
> 
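
Written out, that analogy would look roughly like the simplified sketch
below. This is not the real urcu code: the counter encoding, the writer-side
registry that scans the per-thread counters, and many details are omitted.

/* Per-thread reader state; 0 means offline/quiescent. */
static __thread unsigned long reader_ctr;
static __thread unsigned int  nesting;

static unsigned long gp_ctr = 1;        /* global grace-period counter (placeholder) */

static inline void thread_online(void)
{
        /* QSBR-style: publish a snapshot of the global counter ("I am reading"). */
        __atomic_store_n(&reader_ctr,
                         __atomic_load_n(&gp_ctr, __ATOMIC_RELAXED),
                         __ATOMIC_RELAXED);
        __atomic_thread_fence(__ATOMIC_SEQ_CST);        /* the "MB" in the MB flavour */
}

static inline void thread_offline(void)
{
        __atomic_thread_fence(__ATOMIC_SEQ_CST);
        __atomic_store_n(&reader_ctr, 0, __ATOMIC_RELAXED);     /* quiescent again */
}

/* MB rcu_read_lock  ~= QSBR rcu_thread_online  + nesting count */
static inline void mb_rcu_read_lock(void)
{
        if (nesting++ == 0)
                thread_online();
}

/* MB rcu_read_unlock ~= QSBR rcu_thread_offline + nesting count */
static inline void mb_rcu_read_unlock(void)
{
        if (--nesting == 0)
                thread_offline();
}
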
> Perhaps moving code around could make it simpler?  Following the 
> master/slave memory barrier functions is quite hard, and this is further 
> complicated by KICK_READER_LOOPS, which (if I understand correctly) makes 
> little sense for non-SIGNAL models.
> 
> Paolo
> 

