On Mon, 10 Feb 2014, Luiz Capitulino wrote:

> HugeTLB command-line option hugepages= allows the user to specify how many
> huge pages should be allocated at boot. On NUMA systems, this argument
> automatically distributes huge pages allocation among nodes, which can
> be undesirable.
And when hugepages can no longer be allocated on a node because it is too
small, the remaining hugepages are distributed over nodes with memory
available, correct?

> The hugepagesnid= option introduced by this commit allows the user
> to specify which NUMA nodes should be used to allocate boot-time HugeTLB
> pages. For example, hugepagesnid=0,2,2G will allocate two 2G huge pages
> from node 0 only. More details on patch 3/4 and patch 4/4.

Strange, it would seem better to just reserve as many hugepages as you want,
so that you get the desired number on each node, and then free the ones you
don't need at runtime.

That probably doesn't work because we can't free very large hugepages that
are reserved at boot; would fixing that issue reduce the need for this
patchset?
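For concreteness, the runtime approach suggested above would look roughly
like this, using the existing per-node sysfs hugepage interface (the node
numbers and page counts here are made up for illustration, and this assumes
2M pages; these writes need root and a NUMA machine):

```shell
# Boot with a surplus of 2M hugepages distributed across nodes,
# e.g. kernel command line: hugepages=512

# Then trim each node to the desired count at runtime: keep 256
# pages on node 0 and release all of them on node 1 (illustrative).
echo 256 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 0   > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
```

The catch is exactly the one raised above: this works for 2M pages, but
gigantic (e.g. 1G) pages reserved at boot cannot currently be freed this
way, so the per-node counts cannot be corrected after boot.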