On Sun, Jul 19, 2020 at 2:12 AM Alex Shi <alex....@linux.alibaba.com> wrote:
>
>
>
> On 2020/7/18 10:15 PM, Alex Shi wrote:
> >>>
> >>>  struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb);
> >>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> >>> index 14c668b7e793..36c1680efd90 100644
> >>> --- a/include/linux/mmzone.h
> >>> +++ b/include/linux/mmzone.h
> >>> @@ -261,6 +261,8 @@ struct lruvec {
> >>>         atomic_long_t                   nonresident_age;
> >>>         /* Refaults at the time of last reclaim cycle */
> >>>         unsigned long                   refaults;
> >>> +       /* per lruvec lru_lock for memcg */
> >>> +       spinlock_t                      lru_lock;
> >>>         /* Various lruvec state flags (enum lruvec_flags) */
> >>>         unsigned long                   flags;
> >> Any reason for placing this here instead of at the end of the
> >> structure? From what I can tell it looks like lruvec is already 128B
> >> long so placing the lock on the end would put it into the next
> >> cacheline which may provide some performance benefit since it is
> >> likely to be bounced quite a bit.
> > Rong Chen (Cc'ed) once reported a performance regression when the lock was
> > at the end of the struct, and moving it here removed that regression.
> > Although I can't reproduce it myself, I trust his report.
> >
> Oops, Rong's report was about a different member than the one in the current
> struct.
>
> Compared to moving it to the tail, how about moving it to the head of the
> struct, close to the lru lists? Do you have any data on the placement change?

I don't have specific data, just anecdotal evidence from the past that you
usually want to keep locks away from read-mostly items since they cause
obvious cache thrash. My concern was more with the other fields in the
structure, such as pgdat: it should be a static value, and having it evicted
would likely be more expensive than just leaving the cacheline as it is.

> > ...
> >
> >>>  putback:
> >>> -               spin_unlock_irq(&zone->zone_pgdat->lru_lock);
> >>>                 pagevec_add(&pvec_putback, pvec->pages[i]);
> >>>                 pvec->pages[i] = NULL;
> >>>         }
> >>> -       /* tempary disable irq, will remove later */
> >>> -       local_irq_disable();
> >>>         __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
> >>> -       local_irq_enable();
> >>> +       if (lruvec)
> >>> +               unlock_page_lruvec_irq(lruvec);
> >> So I am not a fan of this change. You went to all the trouble of
> >> reducing the lock scope just to bring it back out here again. In
> >> addition it implies there is a path where you might try to update the
> >> page state without disabling interrupts.
> > Right, but any idea to avoid this other than an extra local_irq_disable()?
> >
>
> The following changes would resolve the problem. Is this ok?
> @@ -324,7 +322,8 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>                 pagevec_add(&pvec_putback, pvec->pages[i]);
>                 pvec->pages[i] = NULL;
>         }
> -       __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
> +       if (delta_munlocked)
> +               __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
>         if (lruvec)
>                 unlock_page_lruvec_irq(lruvec);

Why not just wrap the entire thing in a check for "lruvec"? Yes, you could
theoretically be modding with a value of 0, but it avoids a second
unnecessary check and branch.
