On Mon, Feb 18, 2019 at 07:11:55PM +0100, Michal Hocko wrote:
> On Mon 18-02-19 09:57:26, Matthew Wilcox wrote:
> > On Mon, Feb 18, 2019 at 06:05:58PM +0100, Michal Hocko wrote:
> > > + end_pfn = min(start_pfn + nr_pages,
> > > +                 zone_end_pfn(page_zone(pfn_to_page(start_pfn))));
> > >  
> > >   /* Check the starting page of each pageblock within the range */
> > > - for (; page < end_page; page = next_active_pageblock(page)) {
> > > -         if (!is_pageblock_removable_nolock(page))
> > > + for (; start_pfn < end_pfn; start_pfn = next_active_pageblock(start_pfn)) {
> > > +         if (!is_pageblock_removable_nolock(start_pfn))
> > 
> > If you have a zone which contains pfns that run from ULONG_MAX-n to
> > ULONG_MAX, end_pfn is going to wrap around to 0 and this loop won't
> > execute.
> 
> Is this a realistic situation to bother?

How insane do you think hardware manufacturers are ... ?  I don't know
of one today, but I wouldn't bet on something like that never existing.

> > I think
> > you should use:
> > 
> >     max_pfn = min(start_pfn + nr_pages,
> >                     zone_end_pfn(page_zone(pfn_to_page(start_pfn)))) - 1;
> > 
> >     for (; start_pfn <= max_pfn; ...)
> 
> I do not really care strongly, but we have more places where we do
> start_pfn + nr_pages and then use it in a pfn < end_pfn construct. I
> suspect we would need a larger audit to make the code consistent, so
> unless there are major concerns I would stick with what I have for now
> and leave the rest for the cleanup. Does that sound reasonable?

Yes, I think so.  There are a number of other places where we can wrap
around from ULONG_MAX to 0 fairly easily (eg page offsets in a file on
32-bit machines).  I started thinking about this with the XArray and
rapidly convinced myself we have a problem throughout Linux.
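
For anyone following along, here is a minimal userspace sketch of the two
comparisons discussed above (hypothetical pfn values, purely illustrative,
not kernel code):

	#include <stdio.h>
	#include <limits.h>

	int main(void)
	{
		/* Hypothetical zone whose last pfn is ULONG_MAX. */
		unsigned long start_pfn = ULONG_MAX - 3;
		unsigned long nr_pages = 4;

		/*
		 * Exclusive bound: start_pfn + nr_pages wraps around to 0,
		 * so a "pfn < end_pfn" loop body never runs.
		 */
		unsigned long end_pfn = start_pfn + nr_pages;
		printf("end_pfn = %lu, loop runs: %s\n", end_pfn,
		       start_pfn < end_pfn ? "yes" : "no");

		/*
		 * Inclusive bound: subtracting 1 keeps the limit
		 * representable, and "pfn <= max_pfn" still covers the
		 * last pageblock.
		 */
		unsigned long max_pfn = start_pfn + nr_pages - 1;
		printf("max_pfn = %lu, loop runs: %s\n", max_pfn,
		       start_pfn <= max_pfn ? "yes" : "no");

		return 0;
	}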
