Re: [RFC/WIP] powerpc: Fix 32-bit handling of MSR_EE on exceptions

2019-03-15 Thread Christophe Leroy




On 02/05/2019 10:10 AM, Michael Ellerman wrote:

Christophe Leroy  writes:

On 20/12/2018 at 23:35, Benjamin Herrenschmidt wrote:



/*
 * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
@@ -205,20 +208,46 @@ transfer_to_handler_cont:
	mflr	r9
	lwz	r11,0(r9)	/* virtual address of handler */
	lwz	r9,4(r9)	/* where to go when done */
+#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
+   mtspr   SPRN_NRI, r0
+#endif


That's not part of your patch, it's already in the tree.


Yup rebase glitch.

   .../...


I tested it on the 8xx with the below changes in addition. No issue seen
so far.


Thanks !

I'll merge that in.


I'm currently working on a refactoring and simplification of
exception and syscall entry on ppc32.

I plan to take your patch in my series as it helps quite a bit. I hope
you don't mind. I expect to come out with a series this week.


Ben's AFK so go ahead and pull it in to your series if that helps you.
  

The main obscure area is that business with the irqsoff tracer and thus
the need to create stack frames around calls to trace_hardirqs_* ... we
do it in some places and not others, but I've not managed to make it
crash either. I need to get to the bottom of that, and possibly provide
proper macro helpers like ppc64 has to do it.


I can't see anything special around this in ppc32 code. As far as I
understand, a stack frame is put in place when there is a need to
save and restore some volatile registers. At the places where nothing
needs to be saved, nothing is done. I think that's the normal way for
any function call, isn't it?


The concern was that the irqsoff tracer was doing
__builtin_return_address(1) (or some number > 0) and that crashes if
there aren't sufficiently many stack frames available.

See ftrace_return_address.

Possibly the answer is that we don't have CONFIG_FRAME_POINTER and so we
get the empty version of that.



Yes indeed, ftrace_return_address(1) is not __builtin_return_address(1) 
but 0ul, as CONFIG_FRAME_POINTER is not defined. So the crash can't be 
due to that, as it would then crash regardless of whether we set up a stack 
frame or not.
And anyway, as far as I understand, if the stack is properly 
initialised, __builtin_return_address(X) returns NULL and doesn't crash 
when the top of the backtrace is reached.
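
For reference, the definition in include/linux/ftrace.h (a sketch from 
kernels of that era; surrounding guards elided):

#ifdef CONFIG_FRAME_POINTER
/* with frame pointers we can walk real stack frames */
# define ftrace_return_address(n) __builtin_return_address(n)
#else
/* without them, callers get 0UL instead of a frame walk */
# define ftrace_return_address(n) 0UL
#endif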


Do you have more details about that crash? I think we should file 
an issue for it in our issue database.


For the time being, I'll get rid of that unnecessary stack frame in 
entry_32.S as part of my syscall prolog optimising series.


Christophe


Re: [RFC/WIP] powerpc: Fix 32-bit handling of MSR_EE on exceptions

2019-02-05 Thread Benjamin Herrenschmidt
On Tue, 2019-02-05 at 10:45 +0100, Christophe Leroy wrote:
> > > I tested it on the 8xx with the below changes in addition. No issue seen
> > > so far.
> > 
> > Thanks !
> > 
> > I'll merge that in.
> 
> I'm currently working on a refactoring and simplification of
> exception and syscall entry on ppc32.
> 
> I plan to take your patch in my series as it helps quite a bit. I hope
> you don't mind. I expect to come out with a series this week.

Ah ok, you want to take over the series then? We still need to convert
all the other CPU variants... to be honest I've been distracted, and
taking some time off. I'll be leaving IBM by the end of next week, so I
don't really see myself finishing this work properly.

> > The main obscure area is that business with the irqsoff tracer and thus
> > the need to create stack frames around calls to trace_hardirqs_* ... we
> > do it in some places and not others, but I've not managed to make it
> > crash either. I need to get to the bottom of that, and possibly provide
> > proper macro helpers like ppc64 has to do it.
> 
> I can't see anything special around this in ppc32 code. As far as I 
> understand, a stack frame is put in place when there is a need to
> save and restore some volatile registers. At the places where nothing 
> needs to be saved, nothing is done. I think that's the normal way for 
> any function call, isn't it?

Not exactly. There's an issue with one of the tracers using
__builtin_return_address(1) which can crash afaik if we don't have
"enough" stack frames on the stack, so there are cases where we need to
create one explicitly around the tracing calls because there's only one on
the actual stack.
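
For reference, the pattern in question (a sketch of the irqsoff tracer 
entry points in kernel/trace/trace_irqsoff.c of that era; CALLER_ADDR1 
expands to ftrace_return_address(1)):

void trace_hardirqs_off(void)
{
	if (!preempt_trace() && irq_trace())
		/* CALLER_ADDR1 reaches one frame past our immediate
		 * caller; only safe if that frame actually exists */
		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
}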

I don't know the full details; I was planning on doing a bunch of tests
in sim to figure out exactly what happens and what needs to be done
(and whether our existing code is correct or not), but didn't get to it
so far.

Cheers,
Ben.
 



Re: [RFC/WIP] powerpc: Fix 32-bit handling of MSR_EE on exceptions

2019-02-05 Thread Christophe Leroy




On 20/12/2018 at 06:40, Benjamin Herrenschmidt wrote:

Hi folks !

While trying to figure out why we occasionally had lockdep barf about
interrupt state on ppc32 (440 in my case but I could reproduce on e500
as well using qemu), I realized that we are still doing something
rather gothic and wrong on 32-bit which we stopped doing on 64-bit
a while ago.

We have that thing where some handlers "copy" the EE value from the
original stack frame into the new MSR before transferring to the
handler.
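
(Concretely, for Book-E this is the COPY_EE business in head_booke.h; a 
sketch, assuming the definitions of that era — d is the MSR the handler 
will run with, s is the MSR saved at exception entry:)

#define COPY_EE(d, s)	rlwimi d,s,0,16,16	/* insert MSR_EE (bit 16) of s into d */
#define NOCOPY(d, s)				/* leave d (MSR_KERNEL) untouched */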

Thus for a number of exceptions, we enter the handlers with interrupts
enabled.

This is rather fishy, some of the stuff that handlers might do early
on such as irq_enter/exit or user_exit, context tracking, etc... should
be run with interrupts off afaik.

Generally our handlers know when to re-enable interrupts if needed
(though some of the FSL specific SPE ones don't).

The problem we were having is that we assumed these interrupts would
return with interrupts enabled. However that isn't the case.

Instead, this changes things so that we always enter exception handlers
with interrupts *off* with the notable exception of syscalls which are
special (and get a fast path).

Currently, the patch only changes BookE (440 and E5xx tested in qemu);
the same recipe needs to be applied to 6xx, 8xx and 40x.

Also I'm not sure whether we need to create a stack frame around some
of the calls to trace_hardirqs_* in asm. ppc64 does it, due to problems
with the irqsoff tracer, but I haven't managed to reproduce those
issues. We need to look into it a bit more.

I'll work more on this in the next few days, comments appreciated.

Not-signed-off-by: Benjamin Herrenschmidt 

---
  arch/powerpc/kernel/entry_32.S   | 113 ++-
  arch/powerpc/kernel/head_44x.S   |   9 +--
  arch/powerpc/kernel/head_booke.h |  34 ---
  arch/powerpc/kernel/head_fsl_booke.S |  28 -
  arch/powerpc/kernel/traps.c  |   8 +++
  5 files changed, 111 insertions(+), 81 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 3841d74..39b4cb5 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -34,6 +34,9 @@
  #include 
  #include 
  #include 
+#include 
+#include 
+#include 
  
  /*
   * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
@@ -205,20 +208,46 @@ transfer_to_handler_cont:
	mflr	r9
	lwz	r11,0(r9)	/* virtual address of handler */
	lwz	r9,4(r9)	/* where to go when done */
+#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
+   mtspr   SPRN_NRI, r0
+#endif
+
  #ifdef CONFIG_TRACE_IRQFLAGS
+   /*
+* When tracing IRQ state (lockdep) we enable the MMU before we call
+* the IRQ tracing functions as they might access vmalloc space or
+* perform IOs for console output.
+*
+* To speed up the syscall path where interrupts stay on, let's check
+* first if we are changing the MSR value at all.
+*/
+   lwz r12,_MSR(r1)


This one cannot work. MMU is not reenabled yet, so r1 cannot be used. 
And r11 now holds the virtual address of the handler, so it can't be used 
either.
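
In C terms, the test the patch is trying to make (a sketch; new_msr is 
what r10 holds, saved_msr is _MSR(r1)):

	if (((new_msr ^ saved_msr) & MSR_EE) == 0) {
		/* EE unchanged: rfi straight to the handler */
	} else {
		/* EE changes: re-enable the MMU with EE still off and
		 * tell lockdep before an interrupt can actually occur */
	}

so presumably the fix is doing that comparison while the registers 
involved still hold usable values, not changing the logic itself.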


Christophe


+   xor r0,r10,r12
+   andi.   r0,r0,MSR_EE
+   bne 1f
+
+   /* MSR isn't changing, just transition directly */
+   lwz r0,GPR0(r1)
+   mtspr   SPRN_SRR0,r11
+   mtspr   SPRN_SRR1,r10
+   mtlr	r9
+   SYNC
+   RFI
+
+1: /* MSR is changing, re-enable MMU so we can notify lockdep. We need to
+* keep interrupts disabled at this point otherwise we might risk
+* taking an interrupt before we tell lockdep they are enabled.
+*/
lis r12,reenable_mmu@h
ori r12,r12,reenable_mmu@l
+   lis r0,MSR_KERNEL@h
+   ori r0,r0,MSR_KERNEL@l
mtspr   SPRN_SRR0,r12
-   mtspr   SPRN_SRR1,r10
+   mtspr   SPRN_SRR1,r0
SYNC
RFI
-reenable_mmu:  /* re-enable mmu so we can */
-   mfmsr   r10
-   lwz r12,_MSR(r1)
-   xor r10,r10,r12
-   andi.   r10,r10,MSR_EE  /* Did EE change? */
-   beq 1f
  
+reenable_mmu:

/*
 * The trace_hardirqs_off will use CALLER_ADDR0 and CALLER_ADDR1.
 * If from user mode there is only one stack frame on the stack, and
@@ -239,8 +268,29 @@ reenable_mmu:  /* re-enable mmu so we can */
stw r3,16(r1)
stw r4,20(r1)
stw r5,24(r1)
-   bl  trace_hardirqs_off
-   lwz r5,24(r1)
+
+   /* Are we enabling or disabling interrupts ? */
+   andi.   r0,r10,MSR_EE
+   beq 1f
+
+   /* If we are enabling interrupt, this is a syscall. They shouldn't
+* happen while interrupts are disabled, so let's do a warning here.
+*/
+0: trap
+   EMIT_BUG_ENTRY 0b,__FILE__,__LINE__, 

Re: [RFC/WIP] powerpc: Fix 32-bit handling of MSR_EE on exceptions

2019-02-05 Thread Michael Ellerman
Christophe Leroy  writes:
> On 20/12/2018 at 23:35, Benjamin Herrenschmidt wrote:
>> 
/*
 * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
 @@ -205,20 +208,46 @@ transfer_to_handler_cont:
	mflr	r9
	lwz	r11,0(r9)	/* virtual address of handler */
	lwz	r9,4(r9)	/* where to go when done */
 +#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
 +  mtspr   SPRN_NRI, r0
 +#endif
>>>
>>> That's not part of your patch, it's already in the tree.
>> 
>> Yup rebase glitch.
>> 
>>   .../...
>> 
>>> I tested it on the 8xx with the below changes in addition. No issue seen
>>> so far.
>> 
>> Thanks !
>> 
>> I'll merge that in.
>
> I'm currently working on a refactoring and simplification of
> exception and syscall entry on ppc32.
>
> I plan to take your patch in my series as it helps quite a bit. I hope
> you don't mind. I expect to come out with a series this week.

Ben's AFK so go ahead and pull it in to your series if that helps you.
 
>> The main obscure area is that business with the irqsoff tracer and thus
>> the need to create stack frames around calls to trace_hardirqs_* ... we
>> do it in some places and not others, but I've not managed to make it
>> crash either. I need to get to the bottom of that, and possibly provide
>> proper macro helpers like ppc64 has to do it.
>
> I can't see anything special around this in ppc32 code. As far as I 
> understand, a stack frame is put in place when there is a need to
> save and restore some volatile registers. At the places where nothing 
> needs to be saved, nothing is done. I think that's the normal way for 
> any function call, isn't it?

The concern was that the irqsoff tracer was doing
__builtin_return_address(1) (or some number > 0) and that crashes if
there aren't sufficiently many stack frames available.

See ftrace_return_address.

Possibly the answer is that we don't have CONFIG_FRAME_POINTER and so we
get the empty version of that.

cheers


Re: [RFC/WIP] powerpc: Fix 32-bit handling of MSR_EE on exceptions

2019-02-05 Thread Christophe Leroy




On 20/12/2018 at 23:35, Benjamin Herrenschmidt wrote:



   /*
* MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
@@ -205,20 +208,46 @@ transfer_to_handler_cont:
	mflr	r9
	lwz	r11,0(r9)	/* virtual address of handler */
	lwz	r9,4(r9)	/* where to go when done */
+#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
+   mtspr   SPRN_NRI, r0
+#endif


That's not part of your patch, it's already in the tree.


Yup rebase glitch.

  .../...


I tested it on the 8xx with the below changes in addition. No issue seen
so far.


Thanks !

I'll merge that in.


I'm currently working on a refactoring and simplification of
exception and syscall entry on ppc32.

I plan to take your patch in my series as it helps quite a bit. I hope
you don't mind. I expect to come out with a series this week.




The main obscure area is that business with the irqsoff tracer and thus
the need to create stack frames around calls to trace_hardirqs_* ... we
do it in some places and not others, but I've not managed to make it
crash either. I need to get to the bottom of that, and possibly provide
proper macro helpers like ppc64 has to do it.


I can't see anything special around this in ppc32 code. As far as I 
understand, a stack frame is put in place when there is a need to
save and restore some volatile registers. At the places where nothing 
needs to be saved, nothing is done. I think that's the normal way for 
any function call, isn't it?


Christophe


Re: [RFC/WIP] powerpc: Fix 32-bit handling of MSR_EE on exceptions

2019-01-27 Thread christophe leroy




On 20/12/2018 at 06:40, Benjamin Herrenschmidt wrote:

Hi folks !

While trying to figure out why we occasionally had lockdep barf about
interrupt state on ppc32 (440 in my case but I could reproduce on e500
as well using qemu), I realized that we are still doing something
rather gothic and wrong on 32-bit which we stopped doing on 64-bit
a while ago.

We have that thing where some handlers "copy" the EE value from the
original stack frame into the new MSR before transferring to the
handler.

Thus for a number of exceptions, we enter the handlers with interrupts
enabled.

This is rather fishy, some of the stuff that handlers might do early
on such as irq_enter/exit or user_exit, context tracking, etc... should
be run with interrupts off afaik.

Generally our handlers know when to re-enable interrupts if needed
(though some of the FSL specific SPE ones don't).

The problem we were having is that we assumed these interrupts would
return with interrupts enabled. However that isn't the case.

Instead, this changes things so that we always enter exception handlers
with interrupts *off* with the notable exception of syscalls which are
special (and get a fast path).

Currently, the patch only changes BookE (440 and E5xx tested in qemu);
the same recipe needs to be applied to 6xx, 8xx and 40x.


As part of implementing vmapped stacks, I'm preparing a 
series to refactor the EXCEPTION_PROLOG on the 40x/8xx/6xx. I plan to 
send it out this week.
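
(For context, the 6xx flavour of that macro — a sketch from head_32.S of 
that era; the 40x and 8xx variants differ in detail:)

#define EXCEPTION_PROLOG	\
	mtspr	SPRN_SPRG_SCRATCH0,r10;	\
	mtspr	SPRN_SPRG_SCRATCH1,r11;	\
	mfcr	r10;			\
	EXCEPTION_PROLOG_1;		\
	EXCEPTION_PROLOG_2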


Christophe



Also I'm not sure whether we need to create a stack frame around some
of the calls to trace_hardirqs_* in asm. ppc64 does it, due to problems
with the irqsoff tracer, but I haven't managed to reproduce those
issues. We need to look into it a bit more.

I'll work more on this in the next few days, comments appreciated.

Not-signed-off-by: Benjamin Herrenschmidt 

---
  arch/powerpc/kernel/entry_32.S   | 113 ++-
  arch/powerpc/kernel/head_44x.S   |   9 +--
  arch/powerpc/kernel/head_booke.h |  34 ---
  arch/powerpc/kernel/head_fsl_booke.S |  28 -
  arch/powerpc/kernel/traps.c  |   8 +++
  5 files changed, 111 insertions(+), 81 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 3841d74..39b4cb5 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -34,6 +34,9 @@
  #include 
  #include 
  #include 
+#include 
+#include 
+#include 
  
  /*
   * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
@@ -205,20 +208,46 @@ transfer_to_handler_cont:
	mflr	r9
	lwz	r11,0(r9)	/* virtual address of handler */
	lwz	r9,4(r9)	/* where to go when done */
+#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
+   mtspr   SPRN_NRI, r0
+#endif
+
  #ifdef CONFIG_TRACE_IRQFLAGS
+   /*
+* When tracing IRQ state (lockdep) we enable the MMU before we call
+* the IRQ tracing functions as they might access vmalloc space or
+* perform IOs for console output.
+*
+* To speed up the syscall path where interrupts stay on, let's check
+* first if we are changing the MSR value at all.
+*/
+   lwz r12,_MSR(r1)
+   xor r0,r10,r12
+   andi.   r0,r0,MSR_EE
+   bne 1f
+
+   /* MSR isn't changing, just transition directly */
+   lwz r0,GPR0(r1)
+   mtspr   SPRN_SRR0,r11
+   mtspr   SPRN_SRR1,r10
+   mtlr	r9
+   SYNC
+   RFI
+
+1: /* MSR is changing, re-enable MMU so we can notify lockdep. We need to
+* keep interrupts disabled at this point otherwise we might risk
+* taking an interrupt before we tell lockdep they are enabled.
+*/
lis r12,reenable_mmu@h
ori r12,r12,reenable_mmu@l
+   lis r0,MSR_KERNEL@h
+   ori r0,r0,MSR_KERNEL@l
mtspr   SPRN_SRR0,r12
-   mtspr   SPRN_SRR1,r10
+   mtspr   SPRN_SRR1,r0
SYNC
RFI
-reenable_mmu:  /* re-enable mmu so we can */
-   mfmsr   r10
-   lwz r12,_MSR(r1)
-   xor r10,r10,r12
-   andi.   r10,r10,MSR_EE  /* Did EE change? */
-   beq 1f
  
+reenable_mmu:

/*
 * The trace_hardirqs_off will use CALLER_ADDR0 and CALLER_ADDR1.
 * If from user mode there is only one stack frame on the stack, and
@@ -239,8 +268,29 @@ reenable_mmu:  /* re-enable mmu so we can */
stw r3,16(r1)
stw r4,20(r1)
stw r5,24(r1)
-   bl  trace_hardirqs_off
-   lwz r5,24(r1)
+
+   /* Are we enabling or disabling interrupts ? */
+   andi.   r0,r10,MSR_EE
+   beq 1f
+
+   /* If we are enabling interrupt, this is a syscall. They shouldn't
+* happen while interrupts are disabled, so let's do a warning here.
+*/
+0: trap
+   EMIT_BUG_ENTRY 

Re: [RFC/WIP] powerpc: Fix 32-bit handling of MSR_EE on exceptions

2019-01-27 Thread christophe leroy




On 20/12/2018 at 06:40, Benjamin Herrenschmidt wrote:

Hi folks !

While trying to figure out why we occasionally had lockdep barf about
interrupt state on ppc32 (440 in my case but I could reproduce on e500
as well using qemu), I realized that we are still doing something
rather gothic and wrong on 32-bit which we stopped doing on 64-bit
a while ago.

We have that thing where some handlers "copy" the EE value from the
original stack frame into the new MSR before transferring to the
handler.

Thus for a number of exceptions, we enter the handlers with interrupts
enabled.

This is rather fishy, some of the stuff that handlers might do early
on such as irq_enter/exit or user_exit, context tracking, etc... should
be run with interrupts off afaik.

Generally our handlers know when to re-enable interrupts if needed
(though some of the FSL specific SPE ones don't).

The problem we were having is that we assumed these interrupts would
return with interrupts enabled. However that isn't the case.

Instead, this changes things so that we always enter exception handlers
with interrupts *off* with the notable exception of syscalls which are
special (and get a fast path).

Currently, the patch only changes BookE (440 and E5xx tested in qemu);
the same recipe needs to be applied to 6xx, 8xx and 40x.

Also I'm not sure whether we need to create a stack frame around some
of the calls to trace_hardirqs_* in asm. ppc64 does it, due to problems
with the irqsoff tracer, but I haven't managed to reproduce those
issues. We need to look into it a bit more.

I'll work more on this in the next few days, comments appreciated.

Not-signed-off-by: Benjamin Herrenschmidt 

---
  arch/powerpc/kernel/entry_32.S   | 113 ++-
  arch/powerpc/kernel/head_44x.S   |   9 +--
  arch/powerpc/kernel/head_booke.h |  34 ---
  arch/powerpc/kernel/head_fsl_booke.S |  28 -
  arch/powerpc/kernel/traps.c  |   8 +++
  5 files changed, 111 insertions(+), 81 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 3841d74..39b4cb5 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -34,6 +34,9 @@
  #include 
  #include 
  #include 
+#include 
+#include 
+#include 
  
  /*
   * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
@@ -205,20 +208,46 @@ transfer_to_handler_cont:
	mflr	r9
	lwz	r11,0(r9)	/* virtual address of handler */
	lwz	r9,4(r9)	/* where to go when done */
+#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
+   mtspr   SPRN_NRI, r0
+#endif


Was already there before your patch.


+
  #ifdef CONFIG_TRACE_IRQFLAGS
+   /*
+* When tracing IRQ state (lockdep) we enable the MMU before we call
+* the IRQ tracing functions as they might access vmalloc space or
+* perform IOs for console output.
+*
+* To speed up the syscall path where interrupts stay on, let's check
+* first if we are changing the MSR value at all.
+*/
+   lwz r12,_MSR(r1)
+   xor r0,r10,r12
+   andi.   r0,r0,MSR_EE
+   bne 1f
+
+   /* MSR isn't changing, just transition directly */
+   lwz r0,GPR0(r1)
+   mtspr   SPRN_SRR0,r11
+   mtspr   SPRN_SRR1,r10
+   mtlr	r9
+   SYNC
+   RFI
+
+1: /* MSR is changing, re-enable MMU so we can notify lockdep. We need to
+* keep interrupts disabled at this point otherwise we might risk
+* taking an interrupt before we tell lockdep they are enabled.
+*/
lis r12,reenable_mmu@h
ori r12,r12,reenable_mmu@l
+   lis r0,MSR_KERNEL@h
+   ori r0,r0,MSR_KERNEL@l


You should use LOAD_MSR_KERNEL(), not all targets need an upper part.
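
(For reference, a sketch of that helper as defined near the top of 
entry_32.S: only targets whose MSR_KERNEL value doesn't fit in 16 bits 
need the lis/ori pair.)

#if MSR_KERNEL >= 0x10000
#define LOAD_MSR_KERNEL(r, x)	lis r,(x)@h; ori r,r,(x)@l
#else
#define LOAD_MSR_KERNEL(r, x)	li r,(x)
#endif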


mtspr   SPRN_SRR0,r12
-   mtspr   SPRN_SRR1,r10
+   mtspr   SPRN_SRR1,r0
SYNC
RFI
-reenable_mmu:  /* re-enable mmu so we can */
-   mfmsr   r10
-   lwz r12,_MSR(r1)
-   xor r10,r10,r12
-   andi.   r10,r10,MSR_EE  /* Did EE change? */
-   beq 1f
  
+reenable_mmu:

/*
 * The trace_hardirqs_off will use CALLER_ADDR0 and CALLER_ADDR1.
 * If from user mode there is only one stack frame on the stack, and
@@ -239,8 +268,29 @@ reenable_mmu:  /* re-enable mmu so we can */
stw r3,16(r1)
stw r4,20(r1)
stw r5,24(r1)
-   bl  trace_hardirqs_off
-   lwz r5,24(r1)
+
+   /* Are we enabling or disabling interrupts ? */
+   andi.   r0,r10,MSR_EE
+   beq 1f


This branch is the likely one; could we avoid it?

For instance by moving the below part after the bctr and branching to it 
with a bne-?



+
+   /* If we are enabling interrupt, this is a syscall. They shouldn't
+* happen while interrupts are disabled, so 

Re: [RFC/WIP] powerpc: Fix 32-bit handling of MSR_EE on exceptions

2018-12-20 Thread Benjamin Herrenschmidt


> >   /*
> >    * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
> > @@ -205,20 +208,46 @@ transfer_to_handler_cont:
> > 	mflr	r9
> > 	lwz	r11,0(r9)	/* virtual address of handler */
> > 	lwz	r9,4(r9)	/* where to go when done */
> > +#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
> > +   mtspr   SPRN_NRI, r0
> > +#endif
> 
> That's not part of your patch, it's already in the tree.

Yup rebase glitch.

 .../...

> I tested it on the 8xx with the below changes in addition. No issue seen 
> so far.

Thanks !

I'll merge that in.

The main obscure area is that business with the irqsoff tracer and thus
the need to create stack frames around calls to trace_hardirqs_* ... we
do it in some places and not others, but I've not managed to make it
crash either. I need to get to the bottom of that, and possibly provide
proper macro helpers like ppc64 has to do it.
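
For reference, creating such a frame around a tracing call on ppc32 
looks roughly like this (a sketch of the usual SVR4 ABI sequence, not 
code from the patch):

	stwu	r1,-16(r1)	/* new frame, back chain at 0(r1) */
	mflr	r0
	stw	r0,20(r1)	/* LR save word in the caller's frame */
	bl	trace_hardirqs_off
	lwz	r0,20(r1)
	mtlr	r0
	addi	r1,r1,16	/* pop the frame */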

Cheers,
Ben.



Re: [RFC/WIP] powerpc: Fix 32-bit handling of MSR_EE on exceptions

2018-12-20 Thread Christophe Leroy




On 12/20/2018 05:40 AM, Benjamin Herrenschmidt wrote:

Hi folks !

While trying to figure out why we occasionally had lockdep barf about
interrupt state on ppc32 (440 in my case but I could reproduce on e500
as well using qemu), I realized that we are still doing something
rather gothic and wrong on 32-bit which we stopped doing on 64-bit
a while ago.

We have that thing where some handlers "copy" the EE value from the
original stack frame into the new MSR before transferring to the
handler.

Thus for a number of exceptions, we enter the handlers with interrupts
enabled.

This is rather fishy, some of the stuff that handlers might do early
on such as irq_enter/exit or user_exit, context tracking, etc... should
be run with interrupts off afaik.

Generally our handlers know when to re-enable interrupts if needed
(though some of the FSL specific SPE ones don't).

The problem we were having is that we assumed these interrupts would
return with interrupts enabled. However that isn't the case.

Instead, this changes things so that we always enter exception handlers
with interrupts *off* with the notable exception of syscalls which are
special (and get a fast path).

Currently, the patch only changes BookE (440 and E5xx tested in qemu);
the same recipe needs to be applied to 6xx, 8xx and 40x.

Also I'm not sure whether we need to create a stack frame around some
of the calls to trace_hardirqs_* in asm. ppc64 does it, due to problems
with the irqsoff tracer, but I haven't managed to reproduce those
issues. We need to look into it a bit more.

I'll work more on this in the next few days, comments appreciated.

Not-signed-off-by: Benjamin Herrenschmidt 

---
  arch/powerpc/kernel/entry_32.S   | 113 ++-
  arch/powerpc/kernel/head_44x.S   |   9 +--
  arch/powerpc/kernel/head_booke.h |  34 ---
  arch/powerpc/kernel/head_fsl_booke.S |  28 -
  arch/powerpc/kernel/traps.c  |   8 +++
  5 files changed, 111 insertions(+), 81 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 3841d74..39b4cb5 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -34,6 +34,9 @@
  #include 
  #include 
  #include 
+#include 
+#include 
+#include 
  
  /*
   * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
@@ -205,20 +208,46 @@ transfer_to_handler_cont:
	mflr	r9
	lwz	r11,0(r9)	/* virtual address of handler */
	lwz	r9,4(r9)	/* where to go when done */
+#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
+   mtspr   SPRN_NRI, r0
+#endif


That's not part of your patch, it's already in the tree.


+
  #ifdef CONFIG_TRACE_IRQFLAGS
+   /*
+* When tracing IRQ state (lockdep) we enable the MMU before we call
+* the IRQ tracing functions as they might access vmalloc space or
+* perform IOs for console output.
+*
+* To speed up the syscall path where interrupts stay on, let's check
+* first if we are changing the MSR value at all.
+*/
+   lwz r12,_MSR(r1)
+   xor r0,r10,r12
+   andi.   r0,r0,MSR_EE
+   bne 1f
+
+   /* MSR isn't changing, just transition directly */
+   lwz r0,GPR0(r1)
+   mtspr   SPRN_SRR0,r11
+   mtspr   SPRN_SRR1,r10
+   mtlr	r9
+   SYNC
+   RFI
+
+1: /* MSR is changing, re-enable MMU so we can notify lockdep. We need to
+* keep interrupts disabled at this point otherwise we might risk
+* taking an interrupt before we tell lockdep they are enabled.
+*/
lis r12,reenable_mmu@h
ori r12,r12,reenable_mmu@l
+   lis r0,MSR_KERNEL@h
+   ori r0,r0,MSR_KERNEL@l
mtspr   SPRN_SRR0,r12
-   mtspr   SPRN_SRR1,r10
+   mtspr   SPRN_SRR1,r0
SYNC
RFI
-reenable_mmu:  /* re-enable mmu so we can */
-   mfmsr   r10
-   lwz r12,_MSR(r1)
-   xor r10,r10,r12
-   andi.   r10,r10,MSR_EE  /* Did EE change? */
-   beq 1f
  
+reenable_mmu:

/*
 * The trace_hardirqs_off will use CALLER_ADDR0 and CALLER_ADDR1.
 * If from user mode there is only one stack frame on the stack, and
@@ -239,8 +268,29 @@ reenable_mmu:  /* re-enable mmu so we can */
stw r3,16(r1)
stw r4,20(r1)
stw r5,24(r1)
-   bl  trace_hardirqs_off
-   lwz r5,24(r1)
+
+   /* Are we enabling or disabling interrupts ? */
+   andi.   r0,r10,MSR_EE
+   beq 1f
+
+   /* If we are enabling interrupt, this is a syscall. They shouldn't
+* happen while interrupts are disabled, so let's do a warning here.
+*/
+0: trap
+   EMIT_BUG_ENTRY 0b,__FILE__,__LINE__, BUGFLAG_WARNING
+   bl  trace_hardirqs_on
+
+   /* Now enable for real */
+   mfmsr  
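
(A note on the trap/EMIT_BUG_ENTRY pair above: the unconditional trap 
raises a program check, and the bug-table entry tagged BUGFLAG_WARNING 
lets the handler treat it as a warning rather than a BUG(). A sketch of 
the consuming side in traps.c, program_check_exception():)

	/* on a trap-type program check, look the address up in the bug
	 * table; a BUGFLAG_WARNING entry prints a WARN splat and we
	 * resume at the instruction after the trap */
	if (report_bug(regs->nip, regs) == BUG_TRAP_TYPE_WARN) {
		regs->nip += 4;		/* skip over the trap */
		goto bail;
	}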

[RFC/WIP] powerpc: Fix 32-bit handling of MSR_EE on exceptions

2018-12-19 Thread Benjamin Herrenschmidt
Hi folks !

While trying to figure out why we occasionally had lockdep barf about
interrupt state on ppc32 (440 in my case but I could reproduce on e500
as well using qemu), I realized that we are still doing something
rather gothic and wrong on 32-bit which we stopped doing on 64-bit
a while ago.

We have that thing where some handlers "copy" the EE value from the
original stack frame into the new MSR before transferring to the
handler.

Thus for a number of exceptions, we enter the handlers with interrupts
enabled.

This is rather fishy, some of the stuff that handlers might do early
on such as irq_enter/exit or user_exit, context tracking, etc... should
be run with interrupts off afaik.

Generally our handlers know when to re-enable interrupts if needed
(though some of the FSL specific SPE ones don't).

The problem we were having is that we assumed these interrupts would
return with interrupts enabled. However that isn't the case.

Instead, this changes things so that we always enter exception handlers
with interrupts *off* with the notable exception of syscalls which are
special (and get a fast path).

Currently, the patch only changes BookE (440 and E5xx tested in qemu);
the same recipe needs to be applied to 6xx, 8xx and 40x.

Also I'm not sure whether we need to create a stack frame around some
of the calls to trace_hardirqs_* in asm. ppc64 does it, due to problems
with the irqsoff tracer, but I haven't managed to reproduce those
issues. We need to look into it a bit more.

I'll work more on this in the next few days, comments appreciated.

Not-signed-off-by: Benjamin Herrenschmidt 

---
 arch/powerpc/kernel/entry_32.S   | 113 ++-
 arch/powerpc/kernel/head_44x.S   |   9 +--
 arch/powerpc/kernel/head_booke.h |  34 ---
 arch/powerpc/kernel/head_fsl_booke.S |  28 -
 arch/powerpc/kernel/traps.c  |   8 +++
 5 files changed, 111 insertions(+), 81 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 3841d74..39b4cb5 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -34,6 +34,9 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+#include 
 
 /*
  * MSR_KERNEL is > 0x10000 on 4xx/Book-E since it include MSR_CE.
@@ -205,20 +208,46 @@ transfer_to_handler_cont:
	mflr	r9
	lwz	r11,0(r9)	/* virtual address of handler */
	lwz	r9,4(r9)	/* where to go when done */
+#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
+   mtspr   SPRN_NRI, r0
+#endif
+
 #ifdef CONFIG_TRACE_IRQFLAGS
+   /*
+* When tracing IRQ state (lockdep) we enable the MMU before we call
+* the IRQ tracing functions as they might access vmalloc space or
+* perform IOs for console output.
+*
+* To speed up the syscall path where interrupts stay on, let's check
+* first if we are changing the MSR value at all.
+*/
+   lwz r12,_MSR(r1)
+   xor r0,r10,r12
+   andi.   r0,r0,MSR_EE
+   bne 1f
+
+   /* MSR isn't changing, just transition directly */
+   lwz r0,GPR0(r1)
+   mtspr   SPRN_SRR0,r11
+   mtspr   SPRN_SRR1,r10
+   mtlr	r9
+   SYNC
+   RFI
+
+1: /* MSR is changing, re-enable MMU so we can notify lockdep. We need to
+* keep interrupts disabled at this point otherwise we might risk
+* taking an interrupt before we tell lockdep they are enabled.
+*/
lis r12,reenable_mmu@h
ori r12,r12,reenable_mmu@l
+   lis r0,MSR_KERNEL@h
+   ori r0,r0,MSR_KERNEL@l
mtspr   SPRN_SRR0,r12
-   mtspr   SPRN_SRR1,r10
+   mtspr   SPRN_SRR1,r0
SYNC
RFI
-reenable_mmu:  /* re-enable mmu so we can */
-   mfmsr   r10
-   lwz r12,_MSR(r1)
-   xor r10,r10,r12
-   andi.   r10,r10,MSR_EE  /* Did EE change? */
-   beq 1f
 
+reenable_mmu:
/*
 * The trace_hardirqs_off will use CALLER_ADDR0 and CALLER_ADDR1.
 * If from user mode there is only one stack frame on the stack, and
@@ -239,8 +268,29 @@ reenable_mmu:  /* re-enable mmu so we can */
stw r3,16(r1)
stw r4,20(r1)
stw r5,24(r1)
-   bl  trace_hardirqs_off
-   lwz r5,24(r1)
+
+   /* Are we enabling or disabling interrupts ? */
+   andi.   r0,r10,MSR_EE
+   beq 1f
+
+   /* If we are enabling interrupt, this is a syscall. They shouldn't
+* happen while interrupts are disabled, so let's do a warning here.
+*/
+0: trap
+   EMIT_BUG_ENTRY 0b,__FILE__,__LINE__, BUGFLAG_WARNING
+   bl  trace_hardirqs_on
+
+   /* Now enable for real */
+   mfmsr   r10
+   ori r10,r10,MSR_EE
+   mtmsr   r10
+   b   2f
+
+   /* If we are disabling interrupts (normal case),