Re: [PATCH v3] powerpc/mm: Implemented default_hugepagesz verification for powerpc

2017-08-14 Thread Victor Aoqui

On 2017-08-04 15:17, Mike Kravetz wrote:

On 07/24/2017 04:52 PM, Victor Aoqui wrote:

Implemented default hugepage size verification (default_hugepagesz=)
in order to allow allocation of defined number of pages (hugepages=)
only for supported hugepage sizes.

Signed-off-by: Victor Aoqui 
---
v2:

- Renamed default_hugepage_setup_sz function to hugetlb_default_size_setup;

- Added powerpc string to error message.

v3:

- Renamed hugetlb_default_size_setup() to hugepage_default_setup_sz();
- Implemented hugetlb_bad_default_size();
- Reimplemented hugepage_setup_sz() to just parse default_hugepagesz= and
check if it's a supported size;
- Added verification of default_hugepagesz= value on hugetlb_nrpages_setup()
before allocating hugepages.

 arch/powerpc/mm/hugetlbpage.c | 15 +++
 include/linux/hugetlb.h   |  1 +
 mm/hugetlb.c  | 17 +++--
 3 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index e1bf5ca..5990381 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -780,6 +780,21 @@ static int __init hugepage_setup_sz(char *str)
 }
 __setup("hugepagesz=", hugepage_setup_sz);

+static int __init hugepage_default_setup_sz(char *str)
+{
+   unsigned long long size;
+
+   size = memparse(str, &str);
+
+   if (add_huge_page_size(size) != 0) {
+   hugetlb_bad_default_size();
+   pr_err("Invalid ppc default huge page size specified(%llu)\n", size);
+   }
+
+   return 1;
+}
+__setup("default_hugepagesz=", hugepage_default_setup_sz);
+
 struct kmem_cache *hugepte_cache;
 static int __init hugetlbpage_init(void)
 {
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 0ed8e41..2927200 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -361,6 +361,7 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 int __init alloc_bootmem_huge_page(struct hstate *h);

 void __init hugetlb_bad_size(void);
+void __init hugetlb_bad_default_size(void);
 void __init hugetlb_add_hstate(unsigned order);
 struct hstate *size_to_hstate(unsigned long size);

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bc48ee7..3c24266 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -54,6 +54,7 @@
 static unsigned long __initdata default_hstate_max_huge_pages;
 static unsigned long __initdata default_hstate_size;
 static bool __initdata parsed_valid_hugepagesz = true;
+static bool __initdata parsed_valid_default_hugepagesz = true;

 /*
 * Protects updates to hugepage_freelists, hugepage_activelist, nr_huge_pages,
@@ -2804,6 +2805,12 @@ void __init hugetlb_bad_size(void)
parsed_valid_hugepagesz = false;
 }

+/* Should be called on processing a default_hugepagesz=... option */
+void __init hugetlb_bad_default_size(void)
+{
+   parsed_valid_default_hugepagesz = false;
+}
+
 void __init hugetlb_add_hstate(unsigned int order)
 {
struct hstate *h;
@@ -2846,8 +2853,14 @@ static int __init hugetlb_nrpages_setup(char *s)
 * !hugetlb_max_hstate means we haven't parsed a hugepagesz= parameter yet,
 * so this hugepages= parameter goes to the "default hstate".
 */
-   else if (!hugetlb_max_hstate)
-   mhp = &default_hstate_max_huge_pages;
+   else if (!hugetlb_max_hstate) {
+   if (!parsed_valid_default_hugepagesz) {
+   pr_warn("hugepages = %s cannot be allocated for "
+   "unsupported default_hugepagesz, ignoring\n", s);
+   parsed_valid_default_hugepagesz = true;
+   } else
+   mhp = &default_hstate_max_huge_pages;
+   }
else
mhp = &parsed_hstate->max_huge_pages;




My compiler tells me,

mm/hugetlb.c: In function ‘hugetlb_nrpages_setup’:
mm/hugetlb.c:2873:8: warning: ‘mhp’ may be used uninitialized in
this function [-Wmaybe-uninitialized]

You have added a way of getting out of that big if/else if statement without
setting mhp.  mhp will be examined later in the code, so this is indeed a bug.


Like Aneesh, I am not sure if there is great benefit in this patch.

You added this change in functionality only for powerpc.  IMO, it would be
best if behavior was consistent in all architectures.  So, if we change it
for powerpc we may want to change everywhere.


Hi Mike,

Yes, the patch mentioned by Aneesh solves the issue.

Thanks

--
Victor Aoqui



Re: [PATCH v3] powerpc/mm: Implemented default_hugepagesz verification for powerpc

2017-08-14 Thread Victor Aoqui

On 2017-08-04 02:57, Aneesh Kumar K.V wrote:

Victor Aoqui  writes:


Implemented default hugepage size verification (default_hugepagesz=)
in order to allow allocation of defined number of pages (hugepages=)
only for supported hugepage sizes.

Signed-off-by: Victor Aoqui 


I am still not sure about this. With current upstream we get

 PCI: Probing PCI hardware
 PCI: Probing PCI hardware done


 HugeTLB: unsupported default_hugepagesz 2097152. Reverting to 16777216

 HugeTLB registered 16.0 MiB page size, pre-allocated 0 pages


 HugeTLB registered 16.0 GiB page size, pre-allocated 0 pages

That warning is added by

d715cf804a0318e83c75c0a7abd1a4b9ce13e8da

Which should be good enough, right?

-aneesh


Hi Aneesh,

Sorry for the delay. I was on vacation.
Yes, that solves the issue. That patch was accepted while I was still fixing
the latest version of mine.

Sorry for the inconvenience.

--
Victor Aoqui



Re: [PATCH] drivers: cpuidle: Disable preemption before get_lppaca function call in pseries_idle_probe function

2017-08-01 Thread Victor Aoqui

On 2017-07-20 18:21, Benjamin Herrenschmidt wrote:

On Thu, 2017-07-20 at 14:57 -0300, Victor Aoqui wrote:

When CONFIG_PREEMPT=y, the following warning shows up:

BUG: using smp_processor_id() in preemptible [] code: swapper/0/1

caller is pseries_processor_idle_init+0x58/0x21c

This warning shows up because preemption cannot occur when using
get_paca(), otherwise the paca_struct it points to may be the wrong one
just after.

For this reason, preemption needs to be disabled before
lppaca_shared_proc(get_lppaca()).


Also check the generated assembly. We had all sorts of interesting
issues where gcc would copy the paca pointer or the lppaca pointer
to a GPR *outside* of the preempt disabled section...

In that specific case it's not a big deal but overall, I am not
comfortable with PREEMPT on powerpc until we do something a bit
more drastic...

I would like to remove all such direct accesses to paca, instead have a
"new" get_paca() written in asm that does the preempt disable then
returns the PACA in a GPR (not directly use r13, hide that from gcc),
and which is paired with a put_paca().

The few places where we want to directly access r13 should be hand
written in asm too to hide r13 from gcc, for accessing the irq_happened
in the fast path of local_irq_enable/disable/... we should do the same
with lock tokens.

Ben.


Hi Benjamin,

Sorry for the delay. I was a little bit busy last days.
I took note of your comments and I will work on those changes.
I will let you know soon when it's done.

Thanks

--
Victor Aoqui



[PATCH v3] powerpc/mm: Implemented default_hugepagesz verification for powerpc

2017-07-24 Thread Victor Aoqui
Implemented default hugepage size verification (default_hugepagesz=)
in order to allow allocation of defined number of pages (hugepages=)
only for supported hugepage sizes.

Signed-off-by: Victor Aoqui 
---
v2:

- Renamed default_hugepage_setup_sz function to hugetlb_default_size_setup;
- Added powerpc string to error message.

v3:

- Renamed hugetlb_default_size_setup() to hugepage_default_setup_sz();
- Implemented hugetlb_bad_default_size();
- Reimplemented hugepage_setup_sz() to just parse default_hugepagesz= and
check if it's a supported size;
- Added verification of default_hugepagesz= value on hugetlb_nrpages_setup()
before allocating hugepages.

 arch/powerpc/mm/hugetlbpage.c | 15 +++
 include/linux/hugetlb.h   |  1 +
 mm/hugetlb.c  | 17 +++--
 3 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index e1bf5ca..5990381 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -780,6 +780,21 @@ static int __init hugepage_setup_sz(char *str)
 }
 __setup("hugepagesz=", hugepage_setup_sz);
 
+static int __init hugepage_default_setup_sz(char *str)
+{
+   unsigned long long size;
+
+   size = memparse(str, &str);
+
+   if (add_huge_page_size(size) != 0) {
+   hugetlb_bad_default_size();
+   pr_err("Invalid ppc default huge page size specified(%llu)\n", size);
+   }
+
+   return 1;
+}
+__setup("default_hugepagesz=", hugepage_default_setup_sz);
+
 struct kmem_cache *hugepte_cache;
 static int __init hugetlbpage_init(void)
 {
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 0ed8e41..2927200 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -361,6 +361,7 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 int __init alloc_bootmem_huge_page(struct hstate *h);
 
 void __init hugetlb_bad_size(void);
+void __init hugetlb_bad_default_size(void);
 void __init hugetlb_add_hstate(unsigned order);
 struct hstate *size_to_hstate(unsigned long size);
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bc48ee7..3c24266 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -54,6 +54,7 @@
 static unsigned long __initdata default_hstate_max_huge_pages;
 static unsigned long __initdata default_hstate_size;
 static bool __initdata parsed_valid_hugepagesz = true;
+static bool __initdata parsed_valid_default_hugepagesz = true;
 
 /*
  * Protects updates to hugepage_freelists, hugepage_activelist, nr_huge_pages,
@@ -2804,6 +2805,12 @@ void __init hugetlb_bad_size(void)
parsed_valid_hugepagesz = false;
 }
 
+/* Should be called on processing a default_hugepagesz=... option */
+void __init hugetlb_bad_default_size(void)
+{
+   parsed_valid_default_hugepagesz = false;
+}
+
 void __init hugetlb_add_hstate(unsigned int order)
 {
struct hstate *h;
@@ -2846,8 +2853,14 @@ static int __init hugetlb_nrpages_setup(char *s)
 * !hugetlb_max_hstate means we haven't parsed a hugepagesz= parameter yet,
 * so this hugepages= parameter goes to the "default hstate".
 */
-   else if (!hugetlb_max_hstate)
-   mhp = &default_hstate_max_huge_pages;
+   else if (!hugetlb_max_hstate) {
+   if (!parsed_valid_default_hugepagesz) {
+   pr_warn("hugepages = %s cannot be allocated for "
+   "unsupported default_hugepagesz, ignoring\n", s);
+   parsed_valid_default_hugepagesz = true;
+   } else
+   mhp = &default_hstate_max_huge_pages;
+   }
else
mhp = &parsed_hstate->max_huge_pages;
 
-- 
1.8.3.1



[PATCH] drivers: cpuidle: Disable preemption before get_lppaca function call in pseries_idle_probe function

2017-07-20 Thread Victor Aoqui
When CONFIG_PREEMPT=y, the following warning shows up:

BUG: using smp_processor_id() in preemptible [] code: swapper/0/1
caller is pseries_processor_idle_init+0x58/0x21c

This warning shows up because preemption cannot occur when using
get_paca(), otherwise the paca_struct it points to may be the wrong one
just after.

For this reason, preemption needs to be disabled before
lppaca_shared_proc(get_lppaca()).

Signed-off-by: Victor Aoqui 
---
 drivers/cpuidle/cpuidle-pseries.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
index e9b3853..d6dda8c 100644
--- a/drivers/cpuidle/cpuidle-pseries.c
+++ b/drivers/cpuidle/cpuidle-pseries.c
@@ -233,12 +233,16 @@ static int pseries_cpuidle_driver_init(void)
  */
 static int pseries_idle_probe(void)
 {
+   int retval;
 
if (cpuidle_disable != IDLE_NO_OVERRIDE)
return -ENODEV;
 
if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
-   if (lppaca_shared_proc(get_lppaca())) {
+   preempt_disable();
+   retval = lppaca_shared_proc(get_lppaca());
+   preempt_enable();
+   if (retval) {
cpuidle_state_table = shared_states;
max_idle_state = ARRAY_SIZE(shared_states);
} else {
-- 
1.8.3.1



[PATCH] powerpc/kernel: Avoid preemption check during iommu_range_alloc

2017-07-20 Thread Victor Aoqui
Replaced __this_cpu_read function call by raw_cpu_read in
iommu_range_alloc function.
Preemption doesn't need to be disabled since any CPU can safely
use IOMMU pool.

Signed-off-by: Victor Aoqui 
---
 arch/powerpc/kernel/iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 233ca3f..0e49a45 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -208,7 +208,7 @@ static unsigned long iommu_range_alloc(struct device *dev,
 * We don't need to disable preemption here because any CPU can
 * safely use any IOMMU pool.
 */
-   pool_nr = __this_cpu_read(iommu_pool_hash) & (tbl->nr_pools - 1);
+   pool_nr = raw_cpu_read(iommu_pool_hash) & (tbl->nr_pools - 1);
 
if (largealloc)
pool = &(tbl->large_pool);
-- 
1.8.3.1



[PATCH v2] powerpc/mm: Implemented default_hugepagesz verification for powerpc

2017-07-20 Thread Victor Aoqui
Implemented default hugepage size verification (default_hugepagesz=)
in order to allow allocation of defined number of pages (hugepages=)
only for supported hugepage sizes.

Signed-off-by: Victor Aoqui 
---
v2:

- Renamed default_hugepage_setup_sz function to hugetlb_default_size_setup;

- Added powerpc string to error message.

 arch/powerpc/mm/hugetlbpage.c | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index e1bf5ca..3a142fe 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -780,6 +780,21 @@ static int __init hugepage_setup_sz(char *str)
 }
 __setup("hugepagesz=", hugepage_setup_sz);
 
+static int __init hugetlb_default_size_setup(char *str)
+{
+   unsigned long long size;
+
+   size = memparse(str, &str);
+
+   if (add_huge_page_size(size) != 0) {
+   hugetlb_bad_size();
+   pr_err("Invalid powerpc default huge page size specified(%llu)\n", size);
+   }
+
+   return 1;
+}
+__setup("default_hugepagesz=", hugetlb_default_size_setup);
+
 struct kmem_cache *hugepte_cache;
 static int __init hugetlbpage_init(void)
 {
-- 
1.8.3.1



[PATCH] powerpc/mm: Implemented default_hugepagesz verification for powerpc

2017-07-03 Thread Victor Aoqui
Implemented default hugepage size verification (default_hugepagesz=)
in order to allow allocation of defined number of pages (hugepages=)
only for supported hugepage sizes.

Signed-off-by: Victor Aoqui 
---
 arch/powerpc/mm/hugetlbpage.c | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index a4f33de..464e72e 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -797,6 +797,21 @@ static int __init hugepage_setup_sz(char *str)
 }
 __setup("hugepagesz=", hugepage_setup_sz);
 
+static int __init default_hugepage_setup_sz(char *str)
+{
+   unsigned long long size;
+
+   size = memparse(str, &str);
+
+   if (add_huge_page_size(size) != 0) {
+   hugetlb_bad_size();
+   pr_err("Invalid default huge page size specified(%llu)\n", size);
+   }
+
+   return 1;
+}
+__setup("default_hugepagesz=", default_hugepage_setup_sz);
+
 struct kmem_cache *hugepte_cache;
 static int __init hugetlbpage_init(void)
 {
-- 
1.8.3.1