On Fri, 06 Mar 2015, David Rientjes wrote:

> On Fri, 6 Mar 2015, Eric B Munson wrote:
> 
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index 8c0d945..33c81e1 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -1056,7 +1056,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
> >  {
> >     unsigned long low_pfn, end_pfn;
> >     struct page *page;
> > -   const isolate_mode_t isolate_mode =
> > +   const isolate_mode_t isolate_mode = ISOLATE_UNEVICTABLE |
> >             (cc->mode == MIGRATE_ASYNC ? ISOLATE_ASYNC_MIGRATE : 0);
> >  
> >     /*
> 
> I agree that memory compaction should be isolating and migrating 
> unevictable memory for better results, and we have been running with a 
> similar patch internally for about a year for the same purpose as you, 
> higher probability of allocating hugepages.
> 
> This would be better off removing the notion of ISOLATE_UNEVICTABLE 
> entirely, however, since CMA and now memory compaction would be using it, 
> so the check in __isolate_lru_page() is no longer necessary.  Has the 
> added bonus of removing about 10 lines of source code.

Thanks for having a look. I will send out a V2 that removes
ISOLATE_UNEVICTABLE and the check in __isolate_lru_page().
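
For reference, a rough sketch of what that removal might look like, assuming
the flag definition and its only remaining check still live where they did at
the time (include/linux/mmzone.h and mm/vmscan.c); exact context lines are
from memory and may not match the tree:

```diff
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ ... @@
-/* Isolate unevictable pages */
-#define ISOLATE_UNEVICTABLE	((__force isolate_mode_t)0x8)
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ ... @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
-	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
-		return ret;
```

With the check gone, __isolate_lru_page() isolates unevictable pages
unconditionally, so neither compaction nor CMA needs to set the flag at all.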

Eric
