2017-11-27 21:57 GMT+08:00 Mark Rutland <mark.rutl...@arm.com>:
> Hi,
>
> On Mon, Nov 27, 2017 at 08:27:57PM +0800, Greentime Hu wrote:
>> +static inline void arch_spin_unlock(arch_spinlock_t * lock)
>> +{
>> +     asm volatile(
>> +             "xor    $r15,  $r15, $r15\n"
>> +             "swi    $r15, [%0]\n"
>> +             :
>> +             :"r"(&lock->lock)
>> +             :"memory");
>> +}
>
> This looks suspicious. There's no clobber for $r15, so isn't this
> corrupting a GPR behind the back of the compiler?
>

Thanks.
$r15 is reserved for the assembler.
For safety, the compiler always puts $r15 in the clobber register list by default.


> Shouldn't there be a tmp variable for the register allocation rather
> than hard-coding $r15?
>
>> +static inline void arch_write_unlock(arch_rwlock_t * rw)
>> +{
>> +     asm volatile(
>> +             "xor    $r15, $r15, $r15\n"
>> +             "swi    $r15, [%0]\n"
>> +             :
>> +             :"r"(&rw->lock)
>> +             :"memory","$r15");
>> +}
>
> This time you have a clobber, but it's still not clear to me why you
> don't use a tmp variable and leave the register allocation to the
> compiler.
>

Thanks.
When I rechecked the code, I found that spinlock.h is not needed, because we
do not support SMP.
So we have decided to remove spinlock.h in the next version of the patch.
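For reference, the tmp-variable approach you suggested would look roughly
like this (an untested sketch; it assumes the compiler zeroes the temporary
and picks its own scratch register for the store):

static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	unsigned long tmp = 0;

	/*
	 * Let the compiler allocate the scratch register instead of
	 * hard-coding $r15; tmp is zeroed before the swi store.
	 */
	asm volatile(
		"swi	%0, [%1]\n"
		:
		: "r"(tmp), "r"(&lock->lock)
		: "memory");
}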

Vincent
> Thanks,
> Mark.
