On Tue, Jun 09, 2020 at 10:27:47PM +0800, Baoquan He wrote:
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 13cc653122b7..00869378d387 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3553,6 +3553,11 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
> >  {
> >     long free_pages = zone_page_state(z, NR_FREE_PAGES);
> >     long cma_pages = 0;
> > +   long highatomic = 0;
> > +   const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
> > +
> > +   if (likely(!alloc_harder))
> > +           highatomic = z->nr_reserved_highatomic;
> >  
> >  #ifdef CONFIG_CMA
> >     /* If allocation can't use CMA areas don't use free CMA pages */
> > @@ -3567,8 +3572,12 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
> >      * the caller is !atomic then it'll uselessly search the free
> >      * list. That corner case is then slower but it is harmless.
> >      */
> > -   if (!order && (free_pages - cma_pages) > mark + z->lowmem_reserve[classzone_idx])
> > -           return true;
> > +   if (!order) {
> > +           long fast_free = free_pages - cma_pages - highatomic;
> > +
> > +           if (fast_free > mark + z->lowmem_reserve[classzone_idx])
> > +                   return true;
> > +   }
> 
> This looks reasonable to me. However, this change may not be based on
> the latest mainline or mm tree. E.g. in commit 97a225e69a1f8
> ("mm/page_alloc: integrate classzone_idx and high_zoneidx"), classzone_idx
> was renamed to highest_zoneidx.
> 

That's fine, I simply wanted to illustrate where I thought the check
should go to minimise the impact on the majority of allocations.

-- 
Mel Gorman
SUSE Labs
