[PATCH v4] powerpc: Avoid code patching freed init sections

2018-09-13 Thread Michael Neuling
This stops us from doing code patching in init sections after they've
been freed.

In this chain:
  kvm_guest_init() ->
kvm_use_magic_page() ->
  fault_in_pages_readable() ->
 __get_user() ->
   __get_user_nocheck() ->
 barrier_nospec();

We have a code patching location at barrier_nospec() and
kvm_guest_init() is an init function. This whole chain gets inlined,
so when we free the init section (hence kvm_guest_init()), this code
goes away and hence should no longer be patched.

We saw this as userspace memory corruption when using a memory
checker while doing partition migration testing on powervm (this
starts the code patching post migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.

cc: sta...@vger.kernel.org # 4.13+
Signed-off-by: Michael Neuling 

---
For stable I've marked this as v4.13+ since that's when we refactored
code-patching.c but it could go back even further than that. In
reality though, I think we can only hit this since the first
spectre/meltdown changes.

v4:
 Feedback from Christophe Leroy:
   - init_mem_free -> init_mem_is_free
   - prlog %lx -> %px

v3:
 Add init_mem_free flag to avoid potential race.
 Feedback from Christophe Leroy:
   - use init_section_contains()
   - change order of init test for performance
   - use pr_debug()
   - remove blank line

v2:
  Print when we skip an address
---
 arch/powerpc/include/asm/setup.h | 1 +
 arch/powerpc/lib/code-patching.c | 6 ++
 arch/powerpc/mm/mem.c            | 2 ++
 3 files changed, 9 insertions(+)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 1a951b0046..1fffbba8d6 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
 
 extern unsigned int rtas_data;
 extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
 extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 850f3b8f4d..6ae2777c22 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
 {
	int err;

+	/* Make sure we aren't patching a freed init section */
+	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
+		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
+		return 0;
+	}
+
__put_user_size(instr, patch_addr, 4, err);
if (err)
return err;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 5c8530d0c6..04ccb274a6 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -63,6 +63,7 @@
 #endif
 
 unsigned long long memory_limit;
+bool init_mem_is_free;
 
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;
@@ -396,6 +397,7 @@ void free_initmem(void)
 {
ppc_md.progress = ppc_printk_progress;
mark_initmem_nx();
+   init_mem_is_free = true;
free_initmem_default(POISON_FREE_INITMEM);
 }
 
-- 
2.17.1



Re: [v4] powerpc: Avoid code patching freed init sections

2018-09-21 Thread Michael Ellerman
On Fri, 2018-09-14 at 01:14:11 UTC, Michael Neuling wrote:
> This stops us from doing code patching in init sections after they've
> been freed.
> 
> In this chain:
>   kvm_guest_init() ->
> kvm_use_magic_page() ->
>   fault_in_pages_readable() ->
>__get_user() ->
>  __get_user_nocheck() ->
>barrier_nospec();
> 
> We have a code patching location at barrier_nospec() and
> kvm_guest_init() is an init function. This whole chain gets inlined,
> so when we free the init section (hence kvm_guest_init()), this code
> goes away and hence should no longer be patched.
> 
> We saw this as userspace memory corruption when using a memory
> checker while doing partition migration testing on powervm (this
> starts the code patching post migration via
> /sys/kernel/mobility/migration). In theory, it could also happen when
> using /sys/kernel/debug/powerpc/barrier_nospec.
> 
> cc: sta...@vger.kernel.org # 4.13+
> Signed-off-by: Michael Neuling 
> Reviewed-by: Nicholas Piggin 
> Reviewed-by: Christophe Leroy 

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/51c3c62b58b357e8d35e4cc32f7b4e

cheers


Re: [v4] powerpc: Avoid code patching freed init sections

2018-10-01 Thread Christophe LEROY

On 21/09/2018 at 13:59, Michael Ellerman wrote:

On Fri, 2018-09-14 at 01:14:11 UTC, Michael Neuling wrote:

This stops us from doing code patching in init sections after they've
been freed.

In this chain:
   kvm_guest_init() ->
 kvm_use_magic_page() ->
   fault_in_pages_readable() ->
 __get_user() ->
   __get_user_nocheck() ->
 barrier_nospec();

We have a code patching location at barrier_nospec() and
kvm_guest_init() is an init function. This whole chain gets inlined,
so when we free the init section (hence kvm_guest_init()), this code
goes away and hence should no longer be patched.

We saw this as userspace memory corruption when using a memory
checker while doing partition migration testing on powervm (this
starts the code patching post migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.

cc: sta...@vger.kernel.org # 4.13+
Signed-off-by: Michael Neuling 
Reviewed-by: Nicholas Piggin 
Reviewed-by: Christophe Leroy 


Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/51c3c62b58b357e8d35e4cc32f7b4e



This patch breaks booting on my MPC83xx board (book3s/32) very early 
(before the console is active), provoking a restart.

u-boot reports a checkstop reset at restart.

Reverting this commit fixes the issue.

The following patch fixes the issue as well, but I think it is not the 
best solution. I still think the test should be in patch_instruction() 
instead of in __patch_instruction(); see my comment on v2.


Christophe

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 6ae2777..6192fda 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -29,7 +29,7 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
 	int err;

 	/* Make sure we aren't patching a freed init section */
-	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
+	if (*PTRRELOC(&init_mem_is_free) && init_section_contains(exec_addr, 4)) {
 		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
 		return 0;
 	}


Christophe


Re: [v4] powerpc: Avoid code patching freed init sections

2018-10-01 Thread Michael Neuling
On Mon, 2018-10-01 at 13:25 +0200, Christophe LEROY wrote:
> 
> On 21/09/2018 at 13:59, Michael Ellerman wrote:
> > On Fri, 2018-09-14 at 01:14:11 UTC, Michael Neuling wrote:
> > > This stops us from doing code patching in init sections after they've
> > > been freed.
> > > 
> > > In this chain:
> > >kvm_guest_init() ->
> > >  kvm_use_magic_page() ->
> > >fault_in_pages_readable() ->
> > >__get_user() ->
> > >  __get_user_nocheck() ->
> > >barrier_nospec();
> > > 
> > > We have a code patching location at barrier_nospec() and
> > > kvm_guest_init() is an init function. This whole chain gets inlined,
> > > so when we free the init section (hence kvm_guest_init()), this code
> > > goes away and hence should no longer be patched.
> > > 
> > > We saw this as userspace memory corruption when using a memory
> > > checker while doing partition migration testing on powervm (this
> > > starts the code patching post migration via
> > > /sys/kernel/mobility/migration). In theory, it could also happen when
> > > using /sys/kernel/debug/powerpc/barrier_nospec.
> > > 
> > > cc: sta...@vger.kernel.org # 4.13+
> > > Signed-off-by: Michael Neuling 
> > > Reviewed-by: Nicholas Piggin 
> > > Reviewed-by: Christophe Leroy 
> > 
> > Applied to powerpc fixes, thanks.
> > 
> > https://git.kernel.org/powerpc/c/51c3c62b58b357e8d35e4cc32f7b4e
> > 
> 
> This patch breaks booting on my MPC83xx board (book3s/32) very early 
> (before console is active), provoking restart.
> u-boot reports a checkstop reset at restart.
> 
> Reverting this commit fixes the issue.
> 
> The following patch fixes the issue as well, but I think it is not the 
> best solution. I still think the test should be in patch_instruction() 
> instead of being in __patch_instruction(), see my comment on v2

Arrh, sorry.

Can you write this up as a real patch with a Signed-off-by so mpe can take it?

Mikey

> 
> Christophe
> 
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index 6ae2777..6192fda 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -29,7 +29,7 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
>  	int err;
> 
>  	/* Make sure we aren't patching a freed init section */
> -	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
> +	if (*PTRRELOC(&init_mem_is_free) && init_section_contains(exec_addr, 4)) {
>  		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
>  		return 0;
>  	}
> 
> 
> Christophe
> 


Re: [PATCH v4] powerpc: Avoid code patching freed init sections

2018-09-13 Thread Nicholas Piggin
On Fri, 14 Sep 2018 11:14:11 +1000
Michael Neuling  wrote:

> This stops us from doing code patching in init sections after they've
> been freed.
> 
> In this chain:
>   kvm_guest_init() ->
> kvm_use_magic_page() ->
>   fault_in_pages_readable() ->
>__get_user() ->
>  __get_user_nocheck() ->
>barrier_nospec();
> 
> We have a code patching location at barrier_nospec() and
> kvm_guest_init() is an init function. This whole chain gets inlined,
> so when we free the init section (hence kvm_guest_init()), this code
> goes away and hence should no longer be patched.
> 
> We saw this as userspace memory corruption when using a memory
> checker while doing partition migration testing on powervm (this
> starts the code patching post migration via
> /sys/kernel/mobility/migration). In theory, it could also happen when
> using /sys/kernel/debug/powerpc/barrier_nospec.
> 
> cc: sta...@vger.kernel.org # 4.13+
> Signed-off-by: Michael Neuling 
> 
> ---
> For stable I've marked this as v4.13+ since that's when we refactored
> code-patching.c but it could go back even further than that. In
> reality though, I think we can only hit this since the first
> spectre/meltdown changes.
> 
> v4:
>  Feedback from Christophe Leroy:
>- init_mem_free -> init_mem_is_free
>- prlog %lx -> %px
> 
> v3:
>  Add init_mem_free flag to avoid potential race.
>  Feedback from Christophe Leroy:
>- use init_section_contains()
>- change order of init test for performance
>- use pr_debug()
>- remove blank line
> 
> v2:
>   Print when we skip an address
> ---
>  arch/powerpc/include/asm/setup.h | 1 +
>  arch/powerpc/lib/code-patching.c | 6 ++
>  arch/powerpc/mm/mem.c| 2 ++
>  3 files changed, 9 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
> index 1a951b0046..1fffbba8d6 100644
> --- a/arch/powerpc/include/asm/setup.h
> +++ b/arch/powerpc/include/asm/setup.h
> @@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
>  
>  extern unsigned int rtas_data;
>  extern unsigned long long memory_limit;
> +extern bool init_mem_is_free;
>  extern unsigned long klimit;
>  extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
>  
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index 850f3b8f4d..6ae2777c22 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
>  {
>  	int err;
>  
> +	/* Make sure we aren't patching a freed init section */
> +	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
> +		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
> +		return 0;
> +	}

What we should do is a whitelist, make sure it's only patching the
sections we want it to.

That's a bigger job when you consider modules and things too though,
so this looks good for now. Thanks,

Reviewed-by: Nicholas Piggin 


Re: [PATCH v4] powerpc: Avoid code patching freed init sections

2018-09-13 Thread Christophe LEROY

On 14/09/2018 at 03:14, Michael Neuling wrote:

This stops us from doing code patching in init sections after they've
been freed.

In this chain:
   kvm_guest_init() ->
 kvm_use_magic_page() ->
   fault_in_pages_readable() ->
 __get_user() ->
   __get_user_nocheck() ->
 barrier_nospec();

We have a code patching location at barrier_nospec() and
kvm_guest_init() is an init function. This whole chain gets inlined,
so when we free the init section (hence kvm_guest_init()), this code
goes away and hence should no longer be patched.

We saw this as userspace memory corruption when using a memory
checker while doing partition migration testing on powervm (this
starts the code patching post migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.

cc: sta...@vger.kernel.org # 4.13+
Signed-off-by: Michael Neuling 

---
For stable I've marked this as v4.13+ since that's when we refactored
code-patching.c but it could go back even further than that. In
reality though, I think we can only hit this since the first
spectre/meltdown changes.

v4:
  Feedback from Christophe Leroy:
- init_mem_free -> init_mem_is_free
- prlog %lx -> %px

v3:
  Add init_mem_free flag to avoid potential race.
  Feedback from Christophe Leroy:
- use init_section_contains()
- change order of init test for performance
- use pr_debug()
- remove blank line

v2:
   Print when we skip an address
---
  arch/powerpc/include/asm/setup.h | 1 +
  arch/powerpc/lib/code-patching.c | 6 ++
  arch/powerpc/mm/mem.c| 2 ++
  3 files changed, 9 insertions(+)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 1a951b0046..1fffbba8d6 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
  
  extern unsigned int rtas_data;

  extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
  extern unsigned long klimit;
  extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
  
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c

index 850f3b8f4d..6ae2777c22 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
  {
 	int err;
 
+	/* Make sure we aren't patching a freed init section */
+	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
+		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
+		return 0;
+	}
+
__put_user_size(instr, patch_addr, 4, err);
if (err)
return err;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 5c8530d0c6..04ccb274a6 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -63,6 +63,7 @@
  #endif
  
  unsigned long long memory_limit;

+bool init_mem_is_free;
  
  #ifdef CONFIG_HIGHMEM

  pte_t *kmap_pte;
@@ -396,6 +397,7 @@ void free_initmem(void)
  {
ppc_md.progress = ppc_printk_progress;
mark_initmem_nx();
+   init_mem_is_free = true;
free_initmem_default(POISON_FREE_INITMEM);
  }
  



Reviewed-by: Christophe Leroy 


Re: [PATCH v4] powerpc: Avoid code patching freed init sections

2018-09-18 Thread Christophe LEROY

On 14/09/2018 at 06:22, Nicholas Piggin wrote:

On Fri, 14 Sep 2018 11:14:11 +1000
Michael Neuling  wrote:


This stops us from doing code patching in init sections after they've
been freed.

In this chain:
   kvm_guest_init() ->
 kvm_use_magic_page() ->
   fault_in_pages_readable() ->
 __get_user() ->
   __get_user_nocheck() ->
 barrier_nospec();

We have a code patching location at barrier_nospec() and
kvm_guest_init() is an init function. This whole chain gets inlined,
so when we free the init section (hence kvm_guest_init()), this code
goes away and hence should no longer be patched.

We saw this as userspace memory corruption when using a memory
checker while doing partition migration testing on powervm (this
starts the code patching post migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.

cc: sta...@vger.kernel.org # 4.13+
Signed-off-by: Michael Neuling 

---
For stable I've marked this as v4.13+ since that's when we refactored
code-patching.c but it could go back even further than that. In
reality though, I think we can only hit this since the first
spectre/meltdown changes.

v4:
  Feedback from Christophe Leroy:
- init_mem_free -> init_mem_is_free
- prlog %lx -> %px

v3:
  Add init_mem_free flag to avoid potential race.
  Feedback from Christophe Leroy:
- use init_section_contains()
- change order of init test for performance
- use pr_debug()
- remove blank line

v2:
   Print when we skip an address
---
  arch/powerpc/include/asm/setup.h | 1 +
  arch/powerpc/lib/code-patching.c | 6 ++
  arch/powerpc/mm/mem.c| 2 ++
  3 files changed, 9 insertions(+)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 1a951b0046..1fffbba8d6 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
  
  extern unsigned int rtas_data;

  extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
  extern unsigned long klimit;
  extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
  
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c

index 850f3b8f4d..6ae2777c22 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
  {
 	int err;
 
+	/* Make sure we aren't patching a freed init section */
+	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
+		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
+		return 0;
+	}


What we should do is a whitelist, make sure it's only patching the
sections we want it to.

That's a bigger job when you consider modules and things too though,
so this looks good for now. Thanks,


What about using kernel_text_address() for it then? It also handles 
modules; is it more complicated than that?


Christophe


Re: [PATCH v4] powerpc: Avoid code patching freed init sections

2018-09-18 Thread Michal Suchánek
On Tue, 18 Sep 2018 10:52:09 +0200
Christophe LEROY  wrote:

> 
> 
> > On 14/09/2018 at 06:22, Nicholas Piggin wrote:
> > On Fri, 14 Sep 2018 11:14:11 +1000
> > Michael Neuling  wrote:
> > 
> >> This stops us from doing code patching in init sections after
> >> they've been freed.
> >>
> >> In this chain:
> >>kvm_guest_init() ->
> >>  kvm_use_magic_page() ->
> >>fault_in_pages_readable() ->
> >> __get_user() ->
> >>   __get_user_nocheck() ->
> >> barrier_nospec();
> >>
> >> We have a code patching location at barrier_nospec() and
> >> kvm_guest_init() is an init function. This whole chain gets
> >> inlined, so when we free the init section (hence
> >> kvm_guest_init()), this code goes away and hence should no longer
> >> be patched.
> >>
> >> We saw this as userspace memory corruption when using a memory
> >> checker while doing partition migration testing on powervm (this
> >> starts the code patching post migration via
> >> /sys/kernel/mobility/migration). In theory, it could also happen
> >> when using /sys/kernel/debug/powerpc/barrier_nospec.
> >>
> >> cc: sta...@vger.kernel.org # 4.13+
> >> Signed-off-by: Michael Neuling 
> >>
> >> ---
> >> For stable I've marked this as v4.13+ since that's when we
> >> refactored code-patching.c but it could go back even further than
> >> that. In reality though, I think we can only hit this since the
> >> first spectre/meltdown changes.
> >>
> >> v4:
> >>   Feedback from Christophe Leroy:
> >> - init_mem_free -> init_mem_is_free
> >> - prlog %lx -> %px
> >>
> >> v3:
> >>   Add init_mem_free flag to avoid potential race.
> >>   Feedback from Christophe Leroy:
> >> - use init_section_contains()
> >> - change order of init test for performance
> >> - use pr_debug()
> >> - remove blank line
> >>
> >> v2:
> >>Print when we skip an address
> >> ---
> >>   arch/powerpc/include/asm/setup.h | 1 +
> >>   arch/powerpc/lib/code-patching.c | 6 ++
> >>   arch/powerpc/mm/mem.c| 2 ++
> >>   3 files changed, 9 insertions(+)
> >>
> >> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
> >> index 1a951b0046..1fffbba8d6 100644
> >> --- a/arch/powerpc/include/asm/setup.h
> >> +++ b/arch/powerpc/include/asm/setup.h
> >> @@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
> >>   extern unsigned int rtas_data;
> >>   extern unsigned long long memory_limit;
> >> +extern bool init_mem_is_free;
> >>   extern unsigned long klimit;
> >>   extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
> >>   
> >> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> >> index 850f3b8f4d..6ae2777c22 100644
> >> --- a/arch/powerpc/lib/code-patching.c
> >> +++ b/arch/powerpc/lib/code-patching.c
> >> @@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
> >> {
> >>	int err;
> >>
> >> +	/* Make sure we aren't patching a freed init section */
> >> +	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
> >> +		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
> >> +		return 0;
> >> +	}
> > 
> > What we should do is a whitelist, make sure it's only patching the
> > sections we want it to.
> > 
> > That's a bigger job when you consider modules and things too though,
> > so this looks good for now. Thanks,
> 
> What about using kernel_text_address() for it then? It also handles 
> modules; is it more complicated than that?

Modules are patched separately so should not need to be excluded here.

There is a different problem with modules: when the mitigation type
changes the modules are not re-patched with the new settings.

Thanks

Michal


Re: [PATCH v4] powerpc: Avoid code patching freed init sections

2018-10-02 Thread Andreas Schwab
On Sep 14 2018, Michael Neuling  wrote:

> This stops us from doing code patching in init sections after they've
> been freed.

This breaks booting on PowerBook6,7, crashing very early.

Andreas.

-- 
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
"And now for something completely different."


Re: [PATCH v4] powerpc: Avoid code patching freed init sections

2018-10-02 Thread Michael Neuling
On Tue, 2018-10-02 at 23:35 +0200, Andreas Schwab wrote:
> On Sep 14 2018, Michael Neuling  wrote:
> 
> > This stops us from doing code patching in init sections after they've
> > been freed.
> 
> This breaks booting on PowerBook6,7, crashing very early.

Sorry. Can you try this?

http://patchwork.ozlabs.org/patch/977195/

Mikey


Re: [PATCH v4] powerpc: Avoid code patching freed init sections

2018-10-02 Thread Michael Ellerman
Andreas Schwab  writes:

> On Sep 14 2018, Michael Neuling  wrote:
>
>> This stops us from doing code patching in init sections after they've
>> been freed.
>
> This breaks booting on PowerBook6,7, crashing very early.

Crud, sorry.

My CI setup tests with the mac99 qemu model, but that boots happily, not
sure why.

cheers


Re: [PATCH v4] powerpc: Avoid code patching freed init sections

2018-10-02 Thread Christophe LEROY

On 03/10/2018 at 05:20, Michael Ellerman wrote:

Andreas Schwab  writes:


On Sep 14 2018, Michael Neuling  wrote:


This stops us from doing code patching in init sections after they've
been freed.


This breaks booting on PowerBook6,7, crashing very early.


Crud, sorry.

My CI setup tests with the mac99 qemu model, but that boots happily, not
sure why.


Maybe mac99 doesn't use relocation?

The issue is that during early boot, 'init_mem_is_free' is not yet at 
its final address.


I sent a fix for it: https://patchwork.ozlabs.org/patch/977195/

Christophe