On 23/04/15 13:03, Tim Deegan wrote:
> Hi,
> 
> At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
>>  void _spin_lock(spinlock_t *lock)
>>  {
>> +    spinlock_tickets_t tickets = { .tail = 1, };
>>      LOCK_PROFILE_VAR;
>>  
>>      check_lock(&lock->debug);
>> -    while ( unlikely(!_raw_spin_trylock(&lock->raw)) )
>> +    tickets.head_tail = xadd(&lock->tickets.head_tail, tickets.head_tail);
>> +    while ( tickets.tail != observe_head(&lock->tickets) )
>>      {
>>          LOCK_PROFILE_BLOCK;
>> -        while ( likely(_raw_spin_is_locked(&lock->raw)) )
>> -            cpu_relax();
>> +        cpu_relax();
>>      }
>>      LOCK_PROFILE_GOT;
>>      preempt_disable();
> 
> I think you need an smp_mb() here to make sure that locked accesses
> don't get hoisted past the wait-for-my-ticket loop by an out-of-order
> (ARM) cpu.

OK, but smp_mb() turns into an mfence on x86.  Is that a problem, or
merely sub-optimal?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel