From: Huang Ying <ying.hu...@intel.com>

This patch is a code cleanup only, with no functional change.  It
moves the CONFIG_THP_SWAP check from inside swap slot allocation to
before we start swapping the THP.  This makes the code path a little
easier to follow and understand.

Signed-off-by: "Huang, Ying" <ying.hu...@intel.com>
Cc: Johannes Weiner <han...@cmpxchg.org>
Cc: Minchan Kim <minc...@kernel.org>
---
 mm/swap_slots.c | 3 +--
 mm/vmscan.c     | 3 ++-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 90c1032a8ac3..14c2a91289e5 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -310,8 +310,7 @@ swp_entry_t get_swap_page(struct page *page)
        entry.val = 0;
 
        if (PageTransHuge(page)) {
-               if (IS_ENABLED(CONFIG_THP_SWAP))
-                       get_swap_pages(1, true, &entry);
+               get_swap_pages(1, true, &entry);
                return entry;
        }
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f7e949ac9756..90722afd4916 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1134,7 +1134,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
                                 * away. Chances are some or all of the
                                 * tail pages can be freed without IO.
                                 */
-                               if (!compound_mapcount(page) &&
+                               if ((!IS_ENABLED(CONFIG_THP_SWAP) ||
+                                    !compound_mapcount(page)) &&
                                    split_huge_page_to_list(page, page_list))
                                        goto activate_locked;
                        }
-- 
2.11.0
