On 3/19/21 3:42 PM, Mike Kravetz wrote:
> Commit c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in
> non-task context") was added to address the issue of free_huge_page
> being called from irq context.  That commit hands off free_huge_page
> processing to a workqueue if !in_task.  However, as seen in [1], this
> does not cover all cases.  Instead, make the locks taken in the
> free_huge_page path irq safe.
> 
> This patch does the following:
> - Make hugetlb_lock irq safe.  This is mostly a simple process of
>   changing spin_*lock calls to spin_*lock_irq* calls.
> - Make subpool lock irq safe in a similar manner.
> - Revert the !in_task check and workqueue handoff.
> 
> [1] https://lore.kernel.org/linux-mm/000000000000f1c03b05bc43a...@google.com/
> 
> Signed-off-by: Mike Kravetz <mike.krav...@oracle.com>
> ---
>  mm/hugetlb.c        | 206 ++++++++++++++++++++------------------------
>  mm/hugetlb_cgroup.c |  10 ++-
>  2 files changed, 100 insertions(+), 116 deletions(-)

I missed the following changes:

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5efff5ce337f..13d77d94d185 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2803,10 +2803,10 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
                        break;
 
                /* Drop lock as free routines may sleep */
-               spin_unlock(&hugetlb_lock);
+               spin_unlock_irqrestore(&hugetlb_lock, flags);
                update_and_free_page(h, page);
                cond_resched();
-               spin_lock(&hugetlb_lock);
+               spin_lock_irqsave(&hugetlb_lock, flags);
 
                /* Recompute min_count in case hugetlb_lock was dropped */
                min_count = min_hp_count(h, count);
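
To illustrate the locking rule behind this fixup: once hugetlb_lock can
also be taken from irq context, any process-context holder must save and
disable interrupts, and a drop/reacquire around a call that may sleep has
to use the irqsave/irqrestore pair so the saved flags survive across the
sleep.  Below is a minimal, self-contained sketch of that pattern;
demo_lock and the demo_* functions are hypothetical stand-ins for
illustration only, not code from this series.

#include <linux/spinlock.h>
#include <linux/sched.h>

static DEFINE_SPINLOCK(demo_lock);	/* stand-in for hugetlb_lock */
static unsigned long demo_pages;

/*
 * Process-context path: must save and disable irqs while holding the
 * lock; otherwise an interrupt that takes demo_lock on the same CPU
 * would deadlock.
 */
static void demo_adjust(void)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	demo_pages++;

	/* Drop lock (restoring irqs) before anything that may sleep. */
	spin_unlock_irqrestore(&demo_lock, flags);
	cond_resched();
	spin_lock_irqsave(&demo_lock, flags);

	spin_unlock_irqrestore(&demo_lock, flags);
}

/*
 * Irq-context path (e.g. a free routine invoked from interrupt
 * context): irqs are already off, so plain spin_lock()/spin_unlock()
 * suffices.
 */
static void demo_free(void)
{
	spin_lock(&demo_lock);
	demo_pages--;
	spin_unlock(&demo_lock);
}

This is why the hunk above switches the drop/reacquire in
set_max_huge_pages() to the irqsave/irqrestore variants rather than the
plain spin_unlock()/spin_lock() pair.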
