>> My apologies, the e-mail editor was not configured properly.
>> CC'ed the relevant maintainers and reposting once again with proper
>> formatting.
>> 
>> Since a 16-bit halfword exchange was not available, and the MCS-based
>> qspinlock from Waiman requires an atomic exchange on a halfword in its
>> xchg_tail(), here is a small modification to the __xchg() code to
>> support that exchange. ARMv6 and lower do not support LDREXH, so we
>> need to make sure things do not break when compiling for ARMv6.
>> 
>> Signed-off-by: Sarbojit Ganguly <gangul...@samsung.com>
>> ---
>>  arch/arm/include/asm/cmpxchg.h | 18 ++++++++++++++++++
>>  1 file changed, 18 insertions(+)
>> 
>> diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
>> index 1692a05..547101d 100644
>> --- a/arch/arm/include/asm/cmpxchg.h
>> +++ b/arch/arm/include/asm/cmpxchg.h
>> @@ -50,6 +50,24 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
>>                         : "r" (x), "r" (ptr)
>>                         : "memory", "cc");
>>                 break;
>> +#if !defined(CONFIG_CPU_V6)
>> +               /*
>> +                * Halfword exclusive exchange
>> +                * This is a new implementation, as qspinlock
>> +                * wants a 16-bit atomic exchange.
>> +                * This is not supported on ARMv6.
>> +                */

>I don't think you need this comment. We don't use qspinlock on arch/arm/.

Yes, to date mainline ARM does not support qspinlock, but I have ported
qspinlock to ARM, hence I think that comment might be required.
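
For context on why qspinlock needs this: when the tail field fits in 16
bits, xchg_tail() publishes the new tail with a single atomic exchange on
that halfword. A minimal sketch, using the field names from mainline
kernel/locking/qspinlock.c (details may differ between kernel versions):

	/*
	 * Sketch of qspinlock's xchg_tail(): atomically store the new tail
	 * and return the previous one. lock->tail is a 16-bit field, so
	 * xchg_relaxed() expands to a size-2 __xchg() -- exactly the case
	 * this patch adds for arch/arm.
	 */
	static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
	{
		return (u32)xchg_relaxed(&lock->tail,
					 tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
	}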

>> +       case 2:
>> +               asm volatile("@ __xchg2\n"
>> +               "1:     ldrexh  %0, [%3]\n"
>> +               "       strexh  %1, %2, [%3]\n"
>> +               "       teq     %1, #0\n"
>> +               "       bne     1b"
>> +               : "=&r" (ret), "=&r" (tmp)
>> +               : "r" (x), "r" (ptr)
>> +               : "memory", "cc");
>> +               break;
>> +#endif
>>         case 4:
>>                 asm volatile("@ __xchg4\n"
>>                 "1:     ldrex   %0, [%3]\n"

>We have the same issue with the byte exclusives, so I think you need to extend 
>the guard you're adding to cover that case too (which is a bug in current 
>mainline).

Ok, I will work on this and release a v2 soon. 
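
For reference, a sketch of what the v2 guard might look like: move the
existing byte-exclusive case under the same guard as the new halfword
case, so LDREXB/STREXB and LDREXH/STREXH are only emitted when the
minimum architecture is ARMv6K or later. This is only a sketch of the
switch inside __xchg() (with ret and tmp as already declared there), not
the final patch:

	switch (size) {
#if __LINUX_ARM_ARCH__ >= 6
#if !defined(CONFIG_CPU_V6)	/* byte/halfword exclusives need >= ARMv6K */
	case 1:
		asm volatile("@ __xchg1\n"
		"1:	ldrexb	%0, [%3]\n"
		"	strexb	%1, %2, [%3]\n"
		"	teq	%1, #0\n"
		"	bne	1b"
			: "=&r" (ret), "=&r" (tmp)
			: "r" (x), "r" (ptr)
			: "memory", "cc");
		break;
	case 2:
		asm volatile("@ __xchg2\n"
		"1:	ldrexh	%0, [%3]\n"
		"	strexh	%1, %2, [%3]\n"
		"	teq	%1, #0\n"
		"	bne	1b"
			: "=&r" (ret), "=&r" (tmp)
			: "r" (x), "r" (ptr)
			: "memory", "cc");
		break;
#endif

The existing word case (case 4) would stay outside the inner guard,
since plain LDREX/STREX are available on all ARMv6 and later cores.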

>Will

- Sarbojit