On Thu, Aug 8, 2019 at 1:23 PM shuah <sh...@kernel.org> wrote:
>
> On 8/8/19 1:40 PM, Mina Almasry wrote:
> > Problem:
> > Currently, tasks attempting to allocate more hugetlb memory than is
> > available get a failure at mmap/shmget time. This is thanks to
> > Hugetlbfs Reservations [1]. However, if a task attempts to allocate
> > more hugetlb memory than its hugetlb_cgroup limit allows (but not
> > more than is available globally), the kernel allows the mmap/shmget
> > call, but SIGBUSes the task when it attempts to fault the memory in.
> >
> > We have developers interested in using hugetlb_cgroups, and they
> > have expressed dissatisfaction with this behavior. We'd like to
> > improve it such that tasks violating the hugetlb_cgroup limits get
> > an error at mmap/shmget time, rather than getting SIGBUS'd when
> > they try to fault the excess memory in.
> >
> > The underlying problem is that today's hugetlb_cgroup accounting
> > happens at hugetlb memory *fault* time, rather than at
> > *reservation* time. Thus, the hugetlb_cgroup limit is only enforced
> > at fault time, and the offending task gets SIGBUS'd.
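> >
> > As a purely hypothetical illustration (not part of this patch), a
> > minimal reproducer of the current behavior might look like the
> > sketch below, assuming the task runs in a cgroup whose
> > hugetlb.2MB.limit_in_bytes is smaller than the mapping:
> >
> >   /* Sketch: mmap() succeeds because enough hugetlb pages are
> >    * available globally, but faulting them in exceeds the cgroup's
> >    * fault-time limit, so the task is SIGBUS'd at the memset. */
> >   #define _GNU_SOURCE
> >   #include <stdio.h>
> >   #include <string.h>
> >   #include <sys/mman.h>
> >
> >   int main(void)
> >   {
> >           size_t len = 4UL << 20; /* two 2MB huge pages */
> >           char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
> >                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
> >                          -1, 0);
> >
> >           if (p == MAP_FAILED) {
> >                   perror("mmap"); /* global shortage fails here */
> >                   return 1;
> >           }
> >
> >           memset(p, 0, len); /* fault time: SIGBUS if over limit */
> >           munmap(p, len);
> >           return 0;
> >   }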
> >
> > Proposed Solution:
> > A new page counter named
> > hugetlb.xMB.reservation_[limit|usage]_in_bytes. This counter has
> > slightly different semantics from
> > hugetlb.xMB.[limit|usage]_in_bytes:
> >
> > - While usage_in_bytes tracks all *faulted* hugetlb memory,
> >   reservation_usage_in_bytes tracks all *reserved* hugetlb memory.
> >
> > - If a task attempts to reserve more memory than limit_in_bytes
> >   allows, the kernel will allow it to do so. But if a task attempts
> >   to reserve more memory than reservation_limit_in_bytes allows,
> >   the kernel will fail this reservation.
> >
> > This proposal is implemented in this patch, with tests to verify
> > the functionality and show the usage.
> >
> > Alternatives considered:
> > 1. A new cgroup, instead of only a new page_counter attached to the
> >    existing hugetlb_cgroup. Adding a new cgroup seemed like a lot
> >    of code duplication with hugetlb_cgroup, and keeping
> >    hugetlb-related page counters under hugetlb_cgroup seemed
> >    cleaner as well.
> >
> > 2. Instead of adding a new counter, we considered adding a sysctl
> >    that modifies the behavior of
> >    hugetlb.xMB.[limit|usage]_in_bytes to do accounting at
> >    reservation time rather than fault time. Adding a new
> >    page_counter seems better, as userspace could, if it wants,
> >    choose to enforce different cgroups differently: one via
> >    limit_in_bytes, and another via reservation_limit_in_bytes.
> >    This could be very useful if you're transitioning how hugetlb
> >    memory is partitioned on your system one cgroup at a time, for
> >    example. Also, someone may find use for both limit_in_bytes and
> >    reservation_limit_in_bytes concurrently, and this approach gives
> >    them the option to do so.
> >
> > Caveats:
> > 1. This support is implemented for cgroups-v1. I have not tried
> >    hugetlb_cgroups with cgroups-v2, and AFAICT it's not supported
> >    yet. This is largely because we use cgroups-v1 for now. If
> >    required, I can add hugetlb_cgroup support to cgroups-v2 in this
> >    patch or a follow-up.
> > 2. The most complicated bit of this patch, I believe, is where to
> >    store the pointer to the hugetlb_cgroup to uncharge at
> >    unreservation time. Normally the cgroup pointers hang off the
> >    struct page. But with hugetlb_cgroup reservations, one task can
> >    reserve a specific page and another task may fault it in (I
> >    believe), so storing the pointer in struct page is not
> >    appropriate. The proposed approach here is to store the pointer
> >    in the resv_map, as in the sketch below; see the patch for
> >    details.
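> >
> > As a hypothetical sketch of that idea (field and type names
> > illustrative, not the actual patch), it amounts to something like:
> >
> >   /* Sketch: resv_map remembers which cgroup to uncharge when the
> >    * reservation is released, since the task that reserved the
> >    * pages may not be the task that faults them in. */
> >   struct resv_map {
> >           struct kref refs;
> >           spinlock_t lock;
> >           struct list_head regions;
> >           long adds_in_progress;
> >           struct list_head region_cache;
> >           long region_cache_count;
> >   #ifdef CONFIG_CGROUP_HUGETLB
> >           /* Charged at reservation time; uncharged when the
> >            * reservation (and this map) is torn down. */
> >           struct hugetlb_cgroup *reservation_cgroup;
> >   #endif
> >   };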
> >
> > [1]: https://www.kernel.org/doc/html/latest/vm/hugetlbfs_reserv.html
> >
> > Signed-off-by: Mina Almasry <almasrym...@google.com>
> > ---
> >  include/linux/hugetlb.h        |  10 +-
> >  include/linux/hugetlb_cgroup.h |  19 +-
> >  mm/hugetlb.c                   | 256 ++++++++--
> >  mm/hugetlb_cgroup.c            | 153 +++++-
>
> Is there a reason why all these changes are in a single patch?
> I can see these split into at least 2 or 3 patches, with the test
> as a separate patch.
>

Only because I was expecting feedback on the approach and the
alternative approaches before an in-detail review. But, no problem;
I'll break it into smaller patches now.

> Makes it a lot easier to review.
>
> thanks,
> -- Shuah