A user can change the node-specific hugetlb count via, e.g.,
/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
The calculated value of count is the total number of huge pages. It can
overflow when a user enters an extremely large value. If it does, the
total number of huge pages wraps around to a small value the user did
not expect. Fix this by clamping count to ULONG_MAX on overflow and
continuing, which is more in line with the user's intention of
allocating as many huge pages as possible.
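
To illustrate the wrap-around, here is a minimal userspace sketch (not
kernel code; the page counts below are made-up values, chosen only to
show the arithmetic):

	#include <stdio.h>
	#include <limits.h>

	int main(void)
	{
		unsigned long count = ULONG_MAX - 10;	/* huge user input */
		unsigned long global = 100, node = 50;	/* hypothetical counts */
		unsigned long old_count = count;

		/* Mirrors count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
		 * ULONG_MAX - 10 + 50 wraps past ULONG_MAX to 39. */
		count += global - node;
		printf("wrapped count = %lu\n", count);	/* 39, not what user meant */

		/* The fix: a wrapped sum is smaller than the starting value,
		 * so detect that and clamp. */
		if (count < old_count)
			count = ULONG_MAX;
		printf("clamped count = %lu\n", count);
		return 0;
	}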

Signed-off-by: Jing Xiangfeng <jingxiangf...@huawei.com>
---
 mm/hugetlb.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index afef616..6688894 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2423,7 +2423,14 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
                 * per node hstate attribute: adjust count to global,
                 * but restrict alloc/free to the specified node.
                 */
+               unsigned long old_count = count;
                count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
+               /*
+                * If the user-specified count caused overflow, clamp
+                * it to the largest possible value.
+                */
+               if (count < old_count)
+                       count = ULONG_MAX;
                init_nodemask_of_node(nodes_allowed, nid);
        } else
                nodes_allowed = &node_states[N_MEMORY];
-- 
2.7.4
