On 07/26/2016 08:44 AM, Jia He wrote:
> This patch is to fix such a soft lockup. I thought it is safe to call 
> cond_resched() because alloc_fresh_gigantic_page and alloc_fresh_huge_page 
> are outside the spin_lock/unlock section.

Yikes.  So the call site for both the things you patch is this:

>         while (count > persistent_huge_pages(h)) {
...
>                 spin_unlock(&hugetlb_lock);
>                 if (hstate_is_gigantic(h))
>                         ret = alloc_fresh_gigantic_page(h, nodes_allowed);
>                 else
>                         ret = alloc_fresh_huge_page(h, nodes_allowed);
>                 spin_lock(&hugetlb_lock);

and you chose to patch both of the alloc_*() functions.  Why not just
fix it at the common call site?  Seems like that
spin_lock(&hugetlb_lock) could become a cond_resched_lock(), which would
fix both cases.
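Roughly something like this, completely untested (and note that
cond_resched_lock() must be called with the lock held, so it would slot
in after the re-acquire rather than literally replacing it):

```diff
 		spin_lock(&hugetlb_lock);
+		/* lock is held here; drop it briefly if a resched is due */
+		cond_resched_lock(&hugetlb_lock);
```

That keeps the fix in one place instead of duplicating it in both
allocation paths.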

Also, putting that cond_resched() inside the for_each_node*() loops is an
odd choice.  It seems to indicate that the loops themselves can take a
long time, which really isn't the case.  The _loop_ isn't the long part,
right?
