Excerpts from Christophe Leroy's message of March 15, 2021 5:50 pm:
> 
> 
> Le 15/03/2021 à 04:17, Nicholas Piggin a écrit :
>> Update the new C and asm interrupt return code to account for 64e
>> specifics, switch over to use it.
>> 
>> The now-unused old ret_from_except code, that was moved to 64e after the
>> 64s conversion, is removed.
>> 
>> Signed-off-by: Nicholas Piggin <npig...@gmail.com>
>> ---
>>   arch/powerpc/include/asm/asm-prototypes.h |   2 -
>>   arch/powerpc/kernel/entry_64.S            |   9 +-
>>   arch/powerpc/kernel/exceptions-64e.S      | 321 ++--------------------
>>   arch/powerpc/kernel/interrupt.c           |  27 +-
>>   arch/powerpc/kernel/irq.c                 |  76 -----
>>   5 files changed, 56 insertions(+), 379 deletions(-)
>> 

...

>> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
>> index da78eb6ab92f..1bb4e9b37748 100644
>> --- a/arch/powerpc/kernel/exceptions-64e.S
>> +++ b/arch/powerpc/kernel/exceptions-64e.S
>> @@ -139,7 +139,8 @@ ret_from_level_except:
>>      ld      r3,_MSR(r1)
>>      andi.   r3,r3,MSR_PR
>>      beq     1f
>> -    b       ret_from_except
>> +    REST_NVGPRS(r1)
> 
> Could this be in a separate preceding patch? It would add only the
> REST_NVGPRS(); the call to ret_from_except can remain as is once you
> remove the REST_NVGPRS() that is there to make ret_from_except and
> ret_from_except_lite identical.
> 
> Or maybe you could also do the rename to interrupt_return in that
> preceding patch, so that the "use new interrupt return" patch only
> contains the interesting parts.

I don't like that so much; maybe the better split is to first change the
common code to add the 64e bits, and then convert 64e from
ret_from_except to interrupt_return and remove the old code.

...

>> @@ -1016,284 +1021,8 @@ alignment_more:
> 
> ...
> 
>> -fast_exception_return:
>> -    wrteei  0
>> -1:  mr      r0,r13
>> -    ld      r10,_MSR(r1)
>> -    REST_4GPRS(2, r1)
>> -    andi.   r6,r10,MSR_PR
>> -    REST_2GPRS(6, r1)
>> -    beq     1f
>> -    ACCOUNT_CPU_USER_EXIT(r13, r10, r11)
> 
> Then ACCOUNT_CPU_USER_EXIT can be removed from asm/ppc_asm.h

Will do.

>> @@ -387,7 +396,11 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>>      while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
>>              local_irq_enable(); /* returning to user: may enable */
>>              if (ti_flags & _TIF_NEED_RESCHED) {
>> +#ifdef CONFIG_PPC_BOOK3E_64
>> +                    schedule_user();
>> +#else
>>                      schedule();
>> +#endif
>>              } else {
>>                      if (ti_flags & _TIF_SIGPENDING)
>>                              ret |= _TIF_RESTOREALL;
>> @@ -435,7 +448,10 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>>      /*
>>       * We do this at the end so that we do context switch with KERNEL AMR
>>       */
>> +#ifndef CONFIG_PPC_BOOK3E_64
>>      kuap_user_restore(regs);
> 
> Why do you need to ifdef this out?
> Only PPC_8xx, PPC_BOOK3S_32 and PPC_RADIX_MMU select PPC_HAVE_KUAP.
> When PPC_KUAP is not selected, kuap_user_restore() is a static inline {}
> defined in asm/kup.h

It came in from an old patch rebase. I'll get rid of them.

...

Thanks,
Nick
