On Wed, Nov 15, 2017 at 02:46:00PM -0800, David Rientjes wrote:
> On Wed, 15 Nov 2017, Michal Hocko wrote:
> 
> > > > >       if (!hugepages_supported())
> > > > >               return;
> > > > >       seq_printf(m,
> > > > > @@ -2987,6 +2989,11 @@ void hugetlb_report_meminfo(struct seq_file *m)
> > > > >                       h->resv_huge_pages,
> > > > >                       h->surplus_huge_pages,
> > > > >                       1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
> > > > > +
> > > > > +     for_each_hstate(h)
> > > > > +             total += (PAGE_SIZE << huge_page_order(h)) * h->nr_huge_pages;
> > > > 
> > > > Please keep the total calculation consistent with what we have there
> > > > already.
> > > > 
> > > 
> > > Yeah, and I'm not sure if your comment alludes to this being racy, but it 
> > > would be better to store the default size for default_hstate during the 
> > > iteration that totals the size for all hstates.
> > 
> > I just meant to have the code consistent. I do not prefer one or other
> > option.
> 
> It's always nice when HugePages_Total * Hugepagesize cannot become greater 
> than Hugetlb.  Roman, could you factor something like this into your 
> change, accompanied by a documentation update as suggested by Dave?

Hi David!

Working on it... I'll post an update soon.

Thanks!
