Excerpts from Peter Zijlstra's message of July 23, 2020 9:40 pm:
> On Thu, Jul 23, 2020 at 08:56:14PM +1000, Nicholas Piggin wrote:
> 
>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>> index 3a0db7b0b46e..35060be09073 100644
>> --- a/arch/powerpc/include/asm/hw_irq.h
>> +++ b/arch/powerpc/include/asm/hw_irq.h
>> @@ -200,17 +200,14 @@ static inline bool arch_irqs_disabled(void)
>>  #define powerpc_local_irq_pmu_save(flags)                   \
>>       do {                                                   \
>>              raw_local_irq_pmu_save(flags);                  \
>> -            trace_hardirqs_off();                           \
>> +            if (!raw_irqs_disabled_flags(flags))            \
>> +                    trace_hardirqs_off();                   \
>>      } while(0)
>>  #define powerpc_local_irq_pmu_restore(flags)                        \
>>      do {                                                    \
>> -            if (raw_irqs_disabled_flags(flags)) {           \
>> -                    raw_local_irq_pmu_restore(flags);       \
>> -                    trace_hardirqs_off();                   \
>> -            } else {                                        \
>> +            if (!raw_irqs_disabled_flags(flags))            \
>>                      trace_hardirqs_on();                    \
>> -                    raw_local_irq_pmu_restore(flags);       \
>> -            }                                               \
>> +            raw_local_irq_pmu_restore(flags);               \
>>      } while(0)
> 
> You shouldn't be calling lockdep from NMI context!

After this patch it doesn't.

The trace_hardirqs_on/off implementation does appear to expect to be called
from NMI context, though, for some reason.

> That is, I recently
> added suport for that on x86:
> 
>   https://lkml.kernel.org/r/20200623083721.155449...@infradead.org
>   https://lkml.kernel.org/r/20200623083721.216740...@infradead.org
> 
> But you need to be very careful on how you order things, as you can see
> the above relies on preempt_count() already having been incremented with
> NMI_MASK.

Hmm. My patch seems simpler.

I don't know this stuff very well, and I don't really understand what your
patch enables for x86, but at least it shouldn't be incompatible with this
one AFAIKS.

Thanks,
Nick
