Re: [PATCH 3/8] hugetlb: add per-hstate mutex to synchronize user adjustments

2021-03-25 Thread Miaohe Lin
On 2021/3/25 8:28, Mike Kravetz wrote:
> The helper routine hstate_next_node_to_alloc accesses and modifies the
> hstate variable next_nid_to_alloc.  The helper is used by the routines
> alloc_pool_huge_page and adjust_pool_surplus.  adjust_pool_surplus is
> called with hugetlb_lock held.  However, alloc_pool_huge_page can not
> be called with the hugetlb lock held as it will call the page allocator.
> Two instances of alloc_pool_huge_page could be run in parallel or
> alloc_pool_huge_page could run in parallel with adjust_pool_surplus
> which may result in the variable next_nid_to_alloc becoming invalid
> for the caller and pages being allocated on the wrong node.
> 
> Both alloc_pool_huge_page and adjust_pool_surplus are only called from
> the routine set_max_huge_pages after boot.  set_max_huge_pages is only
> called as the result of a user writing to the proc/sysfs nr_hugepages,
> or nr_hugepages_mempolicy file to adjust the number of hugetlb pages.
> 
> It makes little sense to allow multiple adjustments to the number of
> hugetlb pages in parallel.  Add a mutex to the hstate and use it to only
> allow one hugetlb page adjustment at a time.  This will synchronize
> modifications to the next_nid_to_alloc variable.
> 
> Signed-off-by: Mike Kravetz 
> ---
>  include/linux/hugetlb.h | 1 +
>  mm/hugetlb.c            | 5 +++++
>  2 files changed, 6 insertions(+)
> 
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index a7f7d5f328dc..8817ec987d68 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -566,6 +566,7 @@ HPAGEFLAG(Freed, freed)
>  #define HSTATE_NAME_LEN 32
>  /* Defines one hugetlb page size */
>  struct hstate {
> + struct mutex mutex;

I am also with Michal and Oscar here: renaming the mutex to something closer to
its function would make its purpose clearer.

Reviewed-by: Miaohe Lin 

>   int next_nid_to_alloc;
>   int next_nid_to_free;
>   unsigned int order;
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f9ba63fc1747..404b0b1c5258 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2616,6 +2616,8 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>   else
>   return -ENOMEM;
>  
> + /* mutex prevents concurrent adjustments for the same hstate */
> + mutex_lock(&h->mutex);
>   spin_lock(&hugetlb_lock);
>  
>   /*
> @@ -2648,6 +2650,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>   if (hstate_is_gigantic(h) && !IS_ENABLED(CONFIG_CONTIG_ALLOC)) {
>   if (count > persistent_huge_pages(h)) {
> spin_unlock(&hugetlb_lock);
> + mutex_unlock(&h->mutex);
>   NODEMASK_FREE(node_alloc_noretry);
>   return -EINVAL;
>   }
> @@ -2722,6 +2725,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>  out:
>   h->max_huge_pages = persistent_huge_pages(h);
> spin_unlock(&hugetlb_lock);
> + mutex_unlock(&h->mutex);
>  
>   NODEMASK_FREE(node_alloc_noretry);
>  
> @@ -3209,6 +3213,7 @@ void __init hugetlb_add_hstate(unsigned int order)
>   BUG_ON(hugetlb_max_hstate >= HUGE_MAX_HSTATE);
>   BUG_ON(order == 0);
> h = &hstates[hugetlb_max_hstate++];
> + mutex_init(&h->mutex);
>   h->order = order;
>   h->mask = ~(huge_page_size(h) - 1);
>   for (i = 0; i < MAX_NUMNODES; ++i)
> 



Re: [PATCH 3/8] hugetlb: add per-hstate mutex to synchronize user adjustments

2021-03-25 Thread Oscar Salvador
On Wed, Mar 24, 2021 at 05:28:30PM -0700, Mike Kravetz wrote:
> The helper routine hstate_next_node_to_alloc accesses and modifies the
> hstate variable next_nid_to_alloc.  The helper is used by the routines
> alloc_pool_huge_page and adjust_pool_surplus.  adjust_pool_surplus is
> called with hugetlb_lock held.  However, alloc_pool_huge_page can not
> be called with the hugetlb lock held as it will call the page allocator.
> Two instances of alloc_pool_huge_page could be run in parallel or
> alloc_pool_huge_page could run in parallel with adjust_pool_surplus
> which may result in the variable next_nid_to_alloc becoming invalid
> for the caller and pages being allocated on the wrong node.

Is this something you have seen happening? If so, is it easy to
trigger? I doubt so, as I have not seen any bug report, but I am just
wondering whether a Fixes tag is needed, or probably not worth it, right?

> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -566,6 +566,7 @@ HPAGEFLAG(Freed, freed)
>  #define HSTATE_NAME_LEN 32
>  /* Defines one hugetlb page size */
>  struct hstate {
> + struct mutex mutex;

I am also with Michal here: renaming the mutex to something closer to
its function might make it easier to understand without diving too deeply
into the code.

>   int next_nid_to_alloc;
>   int next_nid_to_free;
>   unsigned int order;
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f9ba63fc1747..404b0b1c5258 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2616,6 +2616,8 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>   else
>   return -ENOMEM;
>  
> + /* mutex prevents concurrent adjustments for the same hstate */
> + mutex_lock(&h->mutex);
>   spin_lock(&hugetlb_lock);

I find the above comment a bit misleading.
AFAIK, hugetlb_lock also protects against concurrent adjustments to the
same hstate (hugepage_activelist, free_huge_pages, surplus_huge_pages,
etc.).
Would it be more appropriate to say that mutex_lock() only prevents
simultaneous sysfs/proc operations?

Reviewed-by: Oscar Salvador 


-- 
Oscar Salvador
SUSE L3


Re: [PATCH 3/8] hugetlb: add per-hstate mutex to synchronize user adjustments

2021-03-25 Thread Michal Hocko
On Wed 24-03-21 17:28:30, Mike Kravetz wrote:
> The helper routine hstate_next_node_to_alloc accesses and modifies the
> hstate variable next_nid_to_alloc.  The helper is used by the routines
> alloc_pool_huge_page and adjust_pool_surplus.  adjust_pool_surplus is
> called with hugetlb_lock held.  However, alloc_pool_huge_page can not
> be called with the hugetlb lock held as it will call the page allocator.
> Two instances of alloc_pool_huge_page could be run in parallel or
> alloc_pool_huge_page could run in parallel with adjust_pool_surplus
> which may result in the variable next_nid_to_alloc becoming invalid
> for the caller and pages being allocated on the wrong node.
> 
> Both alloc_pool_huge_page and adjust_pool_surplus are only called from
> the routine set_max_huge_pages after boot.  set_max_huge_pages is only
> called as the result of a user writing to the proc/sysfs nr_hugepages,
> or nr_hugepages_mempolicy file to adjust the number of hugetlb pages.
> 
> It makes little sense to allow multiple adjustments to the number of
> hugetlb pages in parallel.  Add a mutex to the hstate and use it to only
> allow one hugetlb page adjustment at a time.  This will synchronize
> modifications to the next_nid_to_alloc variable.
> 
> Signed-off-by: Mike Kravetz 

Acked-by: Michal Hocko 

I would just recommend s@mutex@resize_lock@ so that the intention is
clearer from the name.
> ---
>  include/linux/hugetlb.h | 1 +
>  mm/hugetlb.c            | 5 +++++
>  2 files changed, 6 insertions(+)
> 
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index a7f7d5f328dc..8817ec987d68 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -566,6 +566,7 @@ HPAGEFLAG(Freed, freed)
>  #define HSTATE_NAME_LEN 32
>  /* Defines one hugetlb page size */
>  struct hstate {
> + struct mutex mutex;
>   int next_nid_to_alloc;
>   int next_nid_to_free;
>   unsigned int order;
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f9ba63fc1747..404b0b1c5258 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2616,6 +2616,8 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>   else
>   return -ENOMEM;
>  
> + /* mutex prevents concurrent adjustments for the same hstate */
> + mutex_lock(&h->mutex);
>   spin_lock(&hugetlb_lock);
>  
>   /*
> @@ -2648,6 +2650,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>   if (hstate_is_gigantic(h) && !IS_ENABLED(CONFIG_CONTIG_ALLOC)) {
>   if (count > persistent_huge_pages(h)) {
> spin_unlock(&hugetlb_lock);
> + mutex_unlock(&h->mutex);
>   NODEMASK_FREE(node_alloc_noretry);
>   return -EINVAL;
>   }
> @@ -2722,6 +2725,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>  out:
>   h->max_huge_pages = persistent_huge_pages(h);
> spin_unlock(&hugetlb_lock);
> + mutex_unlock(&h->mutex);
>  
>   NODEMASK_FREE(node_alloc_noretry);
>  
> @@ -3209,6 +3213,7 @@ void __init hugetlb_add_hstate(unsigned int order)
>   BUG_ON(hugetlb_max_hstate >= HUGE_MAX_HSTATE);
>   BUG_ON(order == 0);
> h = &hstates[hugetlb_max_hstate++];
> + mutex_init(&h->mutex);
>   h->order = order;
>   h->mask = ~(huge_page_size(h) - 1);
>   for (i = 0; i < MAX_NUMNODES; ++i)
> -- 
> 2.30.2
> 

-- 
Michal Hocko
SUSE Labs


[PATCH 3/8] hugetlb: add per-hstate mutex to synchronize user adjustments

2021-03-24 Thread Mike Kravetz
The helper routine hstate_next_node_to_alloc accesses and modifies the
hstate variable next_nid_to_alloc.  The helper is used by the routines
alloc_pool_huge_page and adjust_pool_surplus.  adjust_pool_surplus is
called with hugetlb_lock held.  However, alloc_pool_huge_page can not
be called with the hugetlb lock held as it will call the page allocator.
Two instances of alloc_pool_huge_page could be run in parallel or
alloc_pool_huge_page could run in parallel with adjust_pool_surplus
which may result in the variable next_nid_to_alloc becoming invalid
for the caller and pages being allocated on the wrong node.

Both alloc_pool_huge_page and adjust_pool_surplus are only called from
the routine set_max_huge_pages after boot.  set_max_huge_pages is only
called as the result of a user writing to the proc/sysfs nr_hugepages,
or nr_hugepages_mempolicy file to adjust the number of hugetlb pages.

It makes little sense to allow multiple adjustments to the number of
hugetlb pages in parallel.  Add a mutex to the hstate and use it to only
allow one hugetlb page adjustment at a time.  This will synchronize
modifications to the next_nid_to_alloc variable.

Signed-off-by: Mike Kravetz 
---
 include/linux/hugetlb.h | 1 +
 mm/hugetlb.c            | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index a7f7d5f328dc..8817ec987d68 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -566,6 +566,7 @@ HPAGEFLAG(Freed, freed)
 #define HSTATE_NAME_LEN 32
 /* Defines one hugetlb page size */
 struct hstate {
+   struct mutex mutex;
int next_nid_to_alloc;
int next_nid_to_free;
unsigned int order;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f9ba63fc1747..404b0b1c5258 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2616,6 +2616,8 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
else
return -ENOMEM;
 
+   /* mutex prevents concurrent adjustments for the same hstate */
+   mutex_lock(&h->mutex);
    spin_lock(&hugetlb_lock);
 
/*
@@ -2648,6 +2650,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
if (hstate_is_gigantic(h) && !IS_ENABLED(CONFIG_CONTIG_ALLOC)) {
if (count > persistent_huge_pages(h)) {
spin_unlock(&hugetlb_lock);
+   mutex_unlock(&h->mutex);
NODEMASK_FREE(node_alloc_noretry);
return -EINVAL;
}
@@ -2722,6 +2725,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 out:
h->max_huge_pages = persistent_huge_pages(h);
spin_unlock(&hugetlb_lock);
+   mutex_unlock(&h->mutex);
 
NODEMASK_FREE(node_alloc_noretry);
 
@@ -3209,6 +3213,7 @@ void __init hugetlb_add_hstate(unsigned int order)
BUG_ON(hugetlb_max_hstate >= HUGE_MAX_HSTATE);
BUG_ON(order == 0);
h = &hstates[hugetlb_max_hstate++];
+   mutex_init(&h->mutex);
h->order = order;
h->mask = ~(huge_page_size(h) - 1);
for (i = 0; i < MAX_NUMNODES; ++i)
-- 
2.30.2