On 11/21/2017 11:59 AM, Roman Gushchin wrote:
> On Tue, Nov 21, 2017 at 11:19:07AM -0800, Andrew Morton wrote:
>>
>> Why not
>>
>>      seq_printf(m,
>>                      "HugePages_Total:   %5lu\n"
>>                      "HugePages_Free:    %5lu\n"
>>                      "HugePages_Rsvd:    %5lu\n"
>>                      "HugePages_Surp:    %5lu\n"
>>                      "Hugepagesize:   %8lu kB\n",
>>                      h->nr_huge_pages,
>>                      h->free_huge_pages,
>>                      h->resv_huge_pages,
>>                      h->surplus_huge_pages,
>>                      1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
>>
>>      for_each_hstate(h)
>>              total += (PAGE_SIZE << huge_page_order(h)) * h->nr_huge_pages;
>>      seq_printf(m, "Hugetlb:        %8lu kB\n", total / 1024);
>>      
>> ?
> 
> The idea was that the local variable guarantees the consistency
> between Hugetlb and HugePages_Total numbers. Otherwise we have
> to take hugetlb_lock.

Most importantly, it prevents HugePages_Total from being larger than
Hugetlb: with two separate reads of h->nr_huge_pages, pages freed between
the reads would leave the Hugetlb total accounting for less memory than
HugePages_Total reports.

> What we can do, is to rename "count" into "nr_huge_pages", like:
> 
>       for_each_hstate(h) {
>               unsigned long nr_huge_pages = h->nr_huge_pages;
> 
>               total += (PAGE_SIZE << huge_page_order(h)) * nr_huge_pages;
> 
>               if (h == &default_hstate)
>                       seq_printf(m,
>                                  "HugePages_Total:   %5lu\n"
>                                  "HugePages_Free:    %5lu\n"
>                                  "HugePages_Rsvd:    %5lu\n"
>                                  "HugePages_Surp:    %5lu\n"
>                                  "Hugepagesize:   %8lu kB\n",
>                                  nr_huge_pages,
>                                  h->free_huge_pages,
>                                  h->resv_huge_pages,
>                                  h->surplus_huge_pages,
>                                  (PAGE_SIZE << huge_page_order(h)) / 1024);
>       }
> 
>       seq_printf(m, "Hugetlb:        %8lu kB\n", total / 1024);
> 
> But maybe taking a lock is not a bad idea, because it will also
> guarantee consistency between other numbers (like HugePages_Free) as well,
> which is not true right now.

You are correct in that there is no consistency guarantee for the numbers
with the default huge page size today.  However, I am not really a fan of
taking the lock for that guarantee.  IMO, the above code is fine.
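
For reference, the lock-based variant under discussion would look roughly
like the sketch below.  This is only an illustration of what the stronger
consistency guarantee would involve (assuming hugetlb_lock is what covers
these counters), not a proposal:

	struct hstate *h;
	unsigned long nr_total, nr_free, nr_rsvd, nr_surp, total = 0;

	/* snapshot everything under the lock so all reported lines agree */
	spin_lock(&hugetlb_lock);
	nr_total = default_hstate.nr_huge_pages;
	nr_free  = default_hstate.free_huge_pages;
	nr_rsvd  = default_hstate.resv_huge_pages;
	nr_surp  = default_hstate.surplus_huge_pages;
	for_each_hstate(h)
		total += (PAGE_SIZE << huge_page_order(h)) * h->nr_huge_pages;
	spin_unlock(&hugetlb_lock);

	seq_printf(m,
		   "HugePages_Total:   %5lu\n"
		   "HugePages_Free:    %5lu\n"
		   "HugePages_Rsvd:    %5lu\n"
		   "HugePages_Surp:    %5lu\n"
		   "Hugepagesize:   %8lu kB\n"
		   "Hugetlb:        %8lu kB\n",
		   nr_total, nr_free, nr_rsvd, nr_surp,
		   (PAGE_SIZE << huge_page_order(&default_hstate)) / 1024,
		   total / 1024);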

This discussion reminds me that ideally there should be a per-hstate lock.
My guess is that the global hugetlb_lock is a carryover from the days when
only a single huge page size was supported.  In practice, I don't think this
is much of an issue, as people typically use only a single huge page size.
But if anyone thinks it is or may become an issue, I am happy to make the
changes.
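
To make that idea concrete, a per-hstate lock might be used roughly as in
the sketch below.  Note that h->lock is a made-up field here (struct hstate
has no such member today); the point is just that each hstate's counters
would be sampled under their own lock rather than the global hugetlb_lock:

	for_each_hstate(h) {
		unsigned long nr_huge_pages;

		spin_lock(&h->lock);	/* hypothetical per-hstate lock */
		nr_huge_pages = h->nr_huge_pages;
		spin_unlock(&h->lock);

		total += (PAGE_SIZE << huge_page_order(h)) * nr_huge_pages;
	}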

-- 
Mike Kravetz

> 
> Thanks!
> 
