On Sun 17-09-17 10:45:34, Jerome Glisse wrote:
> On Fri, Sep 15, 2017 at 09:01:00AM +0200, Michal Hocko wrote:
> > On Thu 14-09-17 15:00:11, jgli...@redhat.com wrote:
> > > From: Jérôme Glisse <jgli...@redhat.com>
> > > 
> > > Fix for 4.14, zone device page always have an elevated refcount
> > > of one and thus page count sanity check in uncharge_page() is
> > > inappropriate for them.
> > > 
> > > Signed-off-by: Jérôme Glisse <jgli...@redhat.com>
> > > Reported-by: Evgeny Baskakov <ebaska...@nvidia.com>
> > > Cc: Andrew Morton <a...@linux-foundation.org>
> > > Cc: Johannes Weiner <han...@cmpxchg.org>
> > > Cc: Michal Hocko <mho...@kernel.org>
> > > Cc: Vladimir Davydov <vdavydov....@gmail.com>
> > 
> > Acked-by: Michal Hocko <mho...@suse.com>
> > 
> > Side note. Wouldn't it be better to re-organize the check a bit? It is
> > true that this is a VM_BUG so it is not usually compiled in, but when
> > it is, it checks for the unlikely cases first while the ref count will
> > be 0 in the prevailing case. So can we have
> >     VM_BUG_ON_PAGE(page_count(page) && !is_zone_device_page(page) &&
> >                     !PageHWPoison(page), page);
> > 
> > I would simply fold this nano optimization into the patch as you are
> > touching it already. Not sure it is worth a separate commit.
> 
> I am traveling, sorry for the late answer. This nano optimization makes
> sense. Andrew, do you want me to respin or should we leave it be? I
> don't mind either way.

Andrew, could you fold this into the patch then?
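
For anyone wondering why the order matters: it is plain C short-circuit
evaluation, so with page_count() checked first the prevailing
refcount==0 case bails out after a single cheap test and never evaluates
the rarer device/poison checks. A minimal userspace sketch of the idea
(illustrative only -- the struct and helper below are made up for this
sketch and are not kernel code):

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the few page attributes the check cares about. */
struct fake_page {
        int refcount;
        bool zone_device;
        bool hwpoison;
};

static bool check_would_fire(const struct fake_page *p)
{
        return p->refcount != 0 &&      /* prevailing case: 0, stop here */
               !p->zone_device &&       /* rare: device pages keep a ref */
               !p->hwpoison;            /* rarest: HW poisoned pages */
}

int main(void)
{
        struct fake_page normal = { .refcount = 0 };
        struct fake_page device = { .refcount = 1, .zone_device = true };

        /* Neither should trigger the (would-be) assertion. */
        printf("normal page fires: %d\n", check_would_fire(&normal));
        printf("device page fires: %d\n", check_would_fire(&device));
        return 0;
}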
---
From 73b5c07aed76aa68b413e708852da63ed9eb965c Mon Sep 17 00:00:00 2001
From: Michal Hocko <mho...@suse.com>
Date: Mon, 18 Sep 2017 08:27:43 +0200
Subject: [PATCH] memcg: nano-optimize VM_BUG_ON in uncharge_page

Even though VM_BUG* is usually not compiled in, there are systems which
enable CONFIG_DEBUG_VM by default. The VM_BUG_ON_PAGE in uncharge_page
is not very optimal for the normal case. All pages should have
counter==0, so that is the first thing to check. is_zone_device_page is
nicely a noop if ZONE_DEVICE is not compiled in, and finally HW poison
pages should be the least probable. So reorder the check to bail out as
early as possible in the normal case.

Signed-off-by: Michal Hocko <mho...@suse.com>
---
 mm/memcontrol.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ae37b5624eb2..d5f3a62887cf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5658,8 +5658,8 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 {
        VM_BUG_ON_PAGE(PageLRU(page), page);
-       VM_BUG_ON_PAGE(!PageHWPoison(page) && !is_zone_device_page(page) &&
-                       page_count(page), page);
+       VM_BUG_ON_PAGE(page_count(page) && !is_zone_device_page(page) &&
+                       !PageHWPoison(page), page);
 
        if (!page->mem_cgroup)
                return;
-- 
2.14.1

-- 
Michal Hocko
SUSE Labs
