On 11/5/20 9:55 AM, Alex Shi wrote:
This patch moves the per-node lru_lock into lruvec, providing one lru_lock
for each memcg on each node. On a large machine, memcgs no longer have to
contend on the single per-node pgdat->lru_lock; each can proceed quickly
under its own lru_lock.
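Conceptually the lock moves from struct pglist_data into struct lruvec; a
minimal sketch of the structural change (field placement simplified, not
the exact diff):

	struct lruvec {
		struct list_head	lists[NR_LRU_LISTS];
		/* per-lruvec lru_lock, replaces pgdat->lru_lock for LRU pages */
		spinlock_t		lru_lock;
		/* ... remaining fields unchanged ... */
	};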
Since memcg charging was moved to before LRU insertion, page isolation now
serializes a page's memcg: once a page is isolated its memcg cannot change,
so the per-memcg lruvec lock is stable and can replace the per-node lru lock.
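For illustration, the new lock helper is roughly of the following shape (a
sketch only, assuming the mem_cgroup_page_lruvec(page, pgdat) lookup used by
this series; the exact code may differ):

	struct lruvec *lock_page_lruvec_irqsave(struct page *page,
						unsigned long *flags)
	{
		struct pglist_data *pgdat = page_pgdat(page);
		struct lruvec *lruvec;

		/* Stable because the caller has isolated the page. */
		lruvec = mem_cgroup_page_lruvec(page, pgdat);
		spin_lock_irqsave(&lruvec->lru_lock, *flags);

		return lruvec;
	}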
In isolate_migratepages_block(), compact_unlock_should_abort() and
lock_page_lruvec_irqsave() are open coded so they can work with
compact_control.
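The open-coded path follows the usual relock pattern; a rough sketch of the
relevant fragment (the surrounding scan loop and abort handling elided):

	/* inside the scan loop of isolate_migratepages_block() */
	lruvec = mem_cgroup_page_lruvec(page, pgdat);

	/* If we already hold this lruvec's lock, skip the relock. */
	if (lruvec != locked) {
		if (locked)
			unlock_page_lruvec_irqrestore(locked, flags);

		compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
		locked = lruvec;
	}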
Also add a debug function in the locking path that may give some clues if
something gets out of hand.
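The debug check is essentially a consistency assertion that the locked
lruvec really belongs to the page's memcg; a sketch (the exact helper used
to read the page's memcg may differ):

	#ifdef CONFIG_DEBUG_VM
	void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page)
	{
		if (mem_cgroup_disabled())
			return;

		if (!page_memcg(page))
			VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != root_mem_cgroup, page);
		else
			VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != page_memcg(page), page);
	}
	#endif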
Daniel Jordan's testing shows a 62% improvement on a modified readtwice
case on his 2P * 10 core * 2 HT Broadwell box.
https://lore.kernel.org/lkml/20200915165807.kpp7uhiw7l3lo...@ca-dmjordan1.us.oracle.com/
On a large machine with memcg enabled but not used, looking up a page's
lruvec chases a few extra pointers, which may increase lru_lock hold time
and cause a slight regression.
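The extra cost comes from the indirection page -> memcg -> per-node info ->
lruvec, versus a single &pgdat->__lruvec dereference when memcg is disabled;
roughly (simplified, root/!memcg handling omitted):

	struct lruvec *mem_cgroup_page_lruvec(struct page *page,
					      struct pglist_data *pgdat)
	{
		struct mem_cgroup_per_node *mz;
		struct mem_cgroup *memcg;

		if (mem_cgroup_disabled())
			return &pgdat->__lruvec;

		memcg = page_memcg(page);
		mz = memcg->nodeinfo[pgdat->node_id];
		return &mz->lruvec;
	}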
Hugh Dickins helped polish the patch, thanks!
Signed-off-by: Alex Shi <alex....@linux.alibaba.com>
Acked-by: Hugh Dickins <hu...@google.com>
Cc: Rong Chen <rong.a.c...@intel.com>
Cc: Hugh Dickins <hu...@google.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Johannes Weiner <han...@cmpxchg.org>
Cc: Michal Hocko <mho...@kernel.org>
Cc: Vladimir Davydov <vdavydov....@gmail.com>
Cc: Yang Shi <yang....@linux.alibaba.com>
Cc: Matthew Wilcox <wi...@infradead.org>
Cc: Konstantin Khlebnikov <khlebni...@yandex-team.ru>
Cc: Tejun Heo <t...@kernel.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux...@kvack.org
Cc: cgro...@vger.kernel.org
Acked-by: Vlastimil Babka <vba...@suse.cz>