Re: [PATCH 4/4] powerpc/64s: Support shrinking the SLB for debugging

2019-01-23 Thread Aneesh Kumar K.V
Michael Ellerman  writes:

> On machines with 1TB segments and a 32-entry SLB it's quite hard to
> cause sufficient SLB pressure to trigger bugs caused by badly
> timed SLB faults.
>
> We have seen this in the past and a few years ago added the
> disable_1tb_segments command line option to force the use of 256MB
> segments. However even this allows some bugs to slip through testing
> if the SLB entry in question was recently accessed.
>
> So add a new command line parameter for debugging which shrinks the
> SLB to the minimum size we can support. Currently that size is 3: two
> bolted entries and one for dynamic use. This creates the maximal SLB
> pressure while still allowing the kernel to operate.
>

Should we put this within DEBUG_VM? 

Reviewed-by: Aneesh Kumar K.V  


> Signed-off-by: Michael Ellerman 
> ---
>  arch/powerpc/mm/slb.c | 14 ++
>  1 file changed, 14 insertions(+)
>
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index 61450a9cf30d..0f33e28f97da 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -506,10 +506,24 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
>   asm volatile("isync" : : : "memory");
>  }
>  
> +static bool shrink_slb = false;
> +
> +static int __init parse_shrink_slb(char *p)
> +{
> + shrink_slb = true;
> + slb_set_size(0);

Why do we need to call slb_set_size(0) here? htab_dt_scan_seg_sizes should
find shrink_slb = true anyway?

> +
> + return 0;
> +}
> +early_param("shrink_slb", parse_shrink_slb);
> +
>  static u32 slb_full_bitmap;
>  
>  void slb_set_size(u16 size)
>  {
> + if (shrink_slb)
> + size = SLB_NUM_BOLTED + 1;
> +
>   mmu_slb_size = size;
>  
>   if (size >= 32)
> -- 
> 2.20.1

-aneesh



Re: [PATCH 4/4] powerpc/64s: Support shrinking the SLB for debugging

2019-01-23 Thread Nicholas Piggin
Michael Ellerman wrote on January 19, 2019 8:27 pm:
> Michal Suchánek  writes:
> 
>> On Thu, 17 Jan 2019 23:13:28 +1100
>> Michael Ellerman  wrote:
>>
>>> On machines with 1TB segments and a 32-entry SLB it's quite hard to
>>> cause sufficient SLB pressure to trigger bugs caused by badly
>>> timed SLB faults.
>>> 
>>> We have seen this in the past and a few years ago added the
>>> disable_1tb_segments command line option to force the use of 256MB
>>> segments. However even this allows some bugs to slip through testing
>>> if the SLB entry in question was recently accessed.
>>> 
>>> So add a new command line parameter for debugging which shrinks the
>>> SLB to the minimum size we can support. Currently that size is 3: two
>>> bolted entries and one for dynamic use. This creates the maximal SLB
>>
>> Doesn't this violate the power of 2 requirement stated in 2/4?
> 
> Yes. Good point. This was originally a hack patch in my tree, back when
> SLB_NUM_BOLTED was 3 and before Nick even added the slb_used_bitmap, so
> back then it was a power of 2 but it also didn't matter :)
> 
> I think I'll rework the slb_full_bitmap patch anyway and remove the
> power of 2 requirement, so then this patch will be OK.

Thanks, good patch and it will really help testing. Even when you can get
enough pressure with 256MB segments, you still want to test the 1TB
segment code paths too.

Thanks,
Nick
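For reference, exercising this in testing just means putting the flag on the
kernel command line; the device and paths below are hypothetical, and the
combination with the existing disable_1tb_segments option is only an
illustration of the two debugging knobs mentioned in the changelog:

```
# grub.cfg excerpt (hypothetical paths/devices): clamp the SLB to
# 3 entries (2 bolted + 1 dynamic) for maximal SLB pressure.
linux /boot/vmlinux root=/dev/sda2 shrink_slb

# Optionally combined with the older option that forces 256MB segments:
linux /boot/vmlinux root=/dev/sda2 shrink_slb disable_1tb_segments
```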



Re: [PATCH 4/4] powerpc/64s: Support shrinking the SLB for debugging

2019-01-19 Thread Michael Ellerman
Michal Suchánek  writes:

> On Thu, 17 Jan 2019 23:13:28 +1100
> Michael Ellerman  wrote:
>
>> On machines with 1TB segments and a 32-entry SLB it's quite hard to
>> cause sufficient SLB pressure to trigger bugs caused by badly
>> timed SLB faults.
>> 
>> We have seen this in the past and a few years ago added the
>> disable_1tb_segments command line option to force the use of 256MB
>> segments. However even this allows some bugs to slip through testing
>> if the SLB entry in question was recently accessed.
>> 
>> So add a new command line parameter for debugging which shrinks the
>> SLB to the minimum size we can support. Currently that size is 3: two
>> bolted entries and one for dynamic use. This creates the maximal SLB
>
> Doesn't this violate the power of 2 requirement stated in 2/4?

Yes. Good point. This was originally a hack patch in my tree, back when
SLB_NUM_BOLTED was 3 and before Nick even added the slb_used_bitmap, so
back then it was a power of 2 but it also didn't matter :)

I think I'll rework the slb_full_bitmap patch anyway and remove the
power of 2 requirement, so then this patch will be OK.

Thanks for the review!

cheers


Re: [PATCH 4/4] powerpc/64s: Support shrinking the SLB for debugging

2019-01-18 Thread Michal Suchánek
On Thu, 17 Jan 2019 23:13:28 +1100
Michael Ellerman  wrote:

> On machines with 1TB segments and a 32-entry SLB it's quite hard to
> cause sufficient SLB pressure to trigger bugs caused by badly
> timed SLB faults.
> 
> We have seen this in the past and a few years ago added the
> disable_1tb_segments command line option to force the use of 256MB
> segments. However even this allows some bugs to slip through testing
> if the SLB entry in question was recently accessed.
> 
> So add a new command line parameter for debugging which shrinks the
> SLB to the minimum size we can support. Currently that size is 3: two
> bolted entries and one for dynamic use. This creates the maximal SLB

Doesn't this violate the power of 2 requirement stated in 2/4?

Thanks

Michal

> pressure while still allowing the kernel to operate.
> 
> Signed-off-by: Michael Ellerman 
> ---
>  arch/powerpc/mm/slb.c | 14 ++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index 61450a9cf30d..0f33e28f97da 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -506,10 +506,24 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
>   asm volatile("isync" : : : "memory");
>  }
>  
> +static bool shrink_slb = false;
> +
> +static int __init parse_shrink_slb(char *p)
> +{
> + shrink_slb = true;
> + slb_set_size(0);
> +
> + return 0;
> +}
> +early_param("shrink_slb", parse_shrink_slb);
> +
>  static u32 slb_full_bitmap;
>  
>  void slb_set_size(u16 size)
>  {
> + if (shrink_slb)
> + size = SLB_NUM_BOLTED + 1;
> +
>   mmu_slb_size = size;
>  
>   if (size >= 32)



[PATCH 4/4] powerpc/64s: Support shrinking the SLB for debugging

2019-01-17 Thread Michael Ellerman
On machines with 1TB segments and a 32-entry SLB it's quite hard to
cause sufficient SLB pressure to trigger bugs caused by badly
timed SLB faults.

We have seen this in the past and a few years ago added the
disable_1tb_segments command line option to force the use of 256MB
segments. However even this allows some bugs to slip through testing
if the SLB entry in question was recently accessed.

So add a new command line parameter for debugging which shrinks the
SLB to the minimum size we can support. Currently that size is 3: two
bolted entries and one for dynamic use. This creates the maximal SLB
pressure while still allowing the kernel to operate.

Signed-off-by: Michael Ellerman 
---
 arch/powerpc/mm/slb.c | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 61450a9cf30d..0f33e28f97da 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -506,10 +506,24 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
asm volatile("isync" : : : "memory");
 }
 
+static bool shrink_slb = false;
+
+static int __init parse_shrink_slb(char *p)
+{
+   shrink_slb = true;
+   slb_set_size(0);
+
+   return 0;
+}
+early_param("shrink_slb", parse_shrink_slb);
+
 static u32 slb_full_bitmap;
 
 void slb_set_size(u16 size)
 {
+   if (shrink_slb)
+   size = SLB_NUM_BOLTED + 1;
+
mmu_slb_size = size;
 
if (size >= 32)
-- 
2.20.1