Hi Peter,

On 2018/11/6 17:06, Peter Zijlstra wrote:
> On Mon, Nov 05, 2018 at 10:49:21PM +0000, Will Deacon wrote:
>> diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
>> index 3ae021368f48..9de8d3544630 100644
>> --- a/include/asm-generic/bitops/lock.h
>> +++ b/include/asm-generic/bitops/lock.h
>> @@ -6,6 +6,15 @@
>>  #include <linux/compiler.h>
>>  #include <asm/barrier.h>
>>  
>> +static inline void spin_until_bit_unlock(unsigned int nr,
>> +                                     volatile unsigned long *p)
>> +{
>> +    unsigned long mask = BIT_MASK(nr);
>> +
>> +    p += BIT_WORD(nr);
>> +    smp_cond_load_relaxed(p, !(VAL & mask));
>> +}
>> +
>>  /**
>>   * test_and_set_bit_lock - Set a bit and return its old value, for lock
>>   * @nr: Bit to set
>> diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
>> index bbc4730a6505..d711c62e718c 100644
>> --- a/include/linux/bit_spinlock.h
>> +++ b/include/linux/bit_spinlock.h
>> @@ -26,9 +26,7 @@ static inline void bit_spin_lock(int bitnum, unsigned long *addr)
>>  #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
>>      while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
>>              preempt_enable();
>> -            do {
>> -                    cpu_relax();
>> -            } while (test_bit(bitnum, addr));
>> +            spin_until_bit_unlock(bitnum, addr);
>>              preempt_disable();
>>      }
>>  #endif
> 
> Yes, that's much better. Ideally though, we'd get rid of bit spinlocks
> that have significant enough contention for this to matter.

OK, I will send a v3 with the fix above.
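
For reference, my understanding (from reading the generic code, so take this
as a sketch rather than a guarantee) is that on architectures without a
native smp_cond_load_relaxed(), the fallback in include/asm-generic/barrier.h
looks roughly like:

/* paraphrased from include/asm-generic/barrier.h; exact layout may differ */
#ifndef smp_cond_load_relaxed
#define smp_cond_load_relaxed(ptr, cond_expr) ({	\
	typeof(ptr) __PTR = (ptr);			\
	typeof(*ptr) VAL;				\
	for (;;) {					\
		VAL = READ_ONCE(*__PTR);		\
		if (cond_expr)				\
			break;				\
		cpu_relax();				\
	}						\
	VAL;						\
})
#endif

so with cond_expr = !(VAL & mask) the generic expansion is the same
READ_ONCE()/cpu_relax() loop as the old do/while, i.e. no behavioural change
there, while arm64 can instead wait in WFE via __cmpwait_relaxed() rather
than spinning.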

Thanks,
Gao Xiang
