On Sun, Jun 12, 2016 at 03:33:25PM +0800, Hillf Danton wrote:
> > @@ -3207,15 +3228,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
> >                     sc.may_writepage = 1;
> > 
> >             /*
> > -            * Now scan the zone in the dma->highmem direction, stopping
> > -            * at the last zone which needs scanning.
> > -            *
> > -            * We do this because the page allocator works in the opposite
> > -            * direction.  This prevents the page allocator from allocating
> > -            * pages behind kswapd's direction of progress, which would
> > -            * cause too much scanning of the lower zones.
> > +            * Continue scanning in the highmem->dma direction stopping at
> > +            * the last zone which needs scanning. This may reclaim lowmem
> > +            * pages that are not necessary for zone balancing but it
> > +            * preserves LRU ordering. It is assumed that the bulk of
> > +            * allocation requests can use arbitrary zones with the
> > +            * possible exception of big highmem:lowmem configurations.
> >              */
> > -           for (i = 0; i <= end_zone; i++) {
> > +           for (i = end_zone; i >= end_zone; i--) {
> 
> s/i >= end_zone;/i >= 0;/ ?
> 

Yes, although that loop is eliminated by "mm, vmscan: Make kswapd reclaim
in terms of nodes".
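
For illustration only, here is a minimal userspace sketch of the corrected
reverse scan, assuming end_zone is the index of the last zone that needs
scanning as in the quoted hunk. The zone names and the standalone program
structure are made up for the example; only the loop condition reflects the
fix discussed above.

	#include <stdio.h>

	int main(void)
	{
		/* Illustrative zone names, lowest index first as in zone ordering. */
		static const char *zone_names[] = { "DMA", "DMA32", "Normal", "HighMem" };
		int end_zone = 3;	/* assumed: highest zone that needs scanning */
		int i;

		/*
		 * Scan in the highmem->dma direction, i.e. from end_zone down
		 * to zone 0. The terminating condition must be i >= 0; with
		 * i >= end_zone the loop body would only execute once.
		 */
		for (i = end_zone; i >= 0; i--)
			printf("scanning zone %d (%s)\n", i, zone_names[i]);

		return 0;
	}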

-- 
Mel Gorman
SUSE Labs
