[PATCH v3 for v5.9] mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore} APIs

2020-09-29 Thread js1304
From: Joonsoo Kim The memalloc_nocma_{save/restore} APIs can be used to skip page allocation from the CMA area, but there is a missing case where a page on the CMA area can still be allocated even when the APIs are used. This patch handles this case to fix the potential issue. For now, these APIs are used to prevent
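For reference, a minimal usage sketch of the scope API named above, as it exists in v5.9-era kernels; the surrounding allocation call and the helper name are illustrative, not taken from the patch:

/* illustrative helper, not from the patch */
static struct page *alloc_non_cma_page(void)
{
	unsigned int flags;
	struct page *page;

	flags = memalloc_nocma_save();	/* open the task's no-CMA allocation scope */
	page = alloc_pages(GFP_HIGHUSER_MOVABLE, 0);	/* should not come from CMA */
	memalloc_nocma_restore(flags);	/* close the scope */

	return page;
}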

[PATCH v2 for v5.9] mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore} APIs

2020-09-28 Thread js1304
From: Joonsoo Kim The memalloc_nocma_{save/restore} APIs can be used to skip page allocation from the CMA area, but there is a missing case where a page on the CMA area can still be allocated even when the APIs are used. This patch handles this case to fix the potential issue. The missing case is an allocation from the

[PATCH for v5.9] mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore} APIs

2020-08-24 Thread js1304
From: Joonsoo Kim The memalloc_nocma_{save/restore} APIs can be used to skip page allocation from the CMA area, but there is a missing case where a page on the CMA area can still be allocated even when the APIs are used. This patch handles this case to fix the potential issue. The missing case is an allocation from the

[PATCH v3 1/3] mm/gup: restrict CMA region by using allocation scope API

2020-07-31 Thread js1304
From: Joonsoo Kim We have a well-defined scope API to exclude the CMA region. Use it rather than manipulating gfp_mask manually. With this change, we can restore __GFP_MOVABLE in gfp_mask as for a usual migration target allocation, which means ZONE_MOVABLE is also searched by the page

[PATCH v3 3/3] mm/gup: use a standard migration target allocation callback

2020-07-31 Thread js1304
From: Joonsoo Kim There is a well-defined migration target allocation callback. Use it. Acked-by: Vlastimil Babka Acked-by: Michal Hocko Signed-off-by: Joonsoo Kim --- mm/gup.c | 54 ++ 1 file changed, 6 insertions(+), 48 deletions(-)
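The standard callback referred to here is alloc_migration_target() driven by struct migration_target_control; the sketch below shows the general call pattern a converted caller ends up with. The list name and the gfp flags are illustrative assumptions, not the exact values used in gup.c:

struct migration_target_control mtc = {
	.nid = NUMA_NO_NODE,			/* let the callback prefer the source page's node */
	.gfp_mask = GFP_USER | __GFP_NOWARN,	/* example flags only */
};

/* migrate the pages collected on the (illustrative) cma_page_list */
migrate_pages(&cma_page_list, alloc_migration_target, NULL,
	      (unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE);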

[PATCH v3 2/3] mm/hugetlb: make hugetlb migration callback CMA aware

2020-07-31 Thread js1304
From: Joonsoo Kim new_non_cma_page() in gup.c needs to allocate a new page that is not on the CMA area. new_non_cma_page() implements this by using the allocation scope APIs. However, there is a work-around for hugetlb. The normal hugetlb page allocation API for migration is
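A hedged sketch of the idea behind making the hugetlb path CMA aware: while dequeueing from the per-node hugetlb free list, skip pages that sit on CMA whenever the task is inside a no-CMA scope. The exact placement and checks in the patch may differ:

bool nocma = !!(current->flags & PF_MEMALLOC_NOCMA);	/* set by memalloc_nocma_save() */
struct page *page;

list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
	if (nocma && is_migrate_cma_page(page))
		continue;	/* CMA page: unusable in this scope */
	if (!PageHWPoison(page))
		break;		/* usable free hugetlb page found */
}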

[PATCH v7 6/6] mm/vmscan: restore active/inactive ratio for anonymous LRU

2020-07-23 Thread js1304
From: Joonsoo Kim Now that workingset detection is implemented for the anonymous LRU, we no longer need a large inactive list to detect frequently accessed pages before they are reclaimed. This effectively reverts the temporary measure put in by commit "mm/vmscan: make active/inactive

[PATCH v7 1/6] mm/vmscan: make active/inactive ratio as 1:1 for anon lru

2020-07-23 Thread js1304
From: Joonsoo Kim The current implementation of LRU management for anonymous pages has some problems. The most important one is that it doesn't protect the workingset, that is, the pages on the active LRU list. Although this problem will be fixed in the following patchset, preparation is required and

[PATCH v7 4/6] mm/swapcache: support to handle the shadow entries

2020-07-23 Thread js1304
From: Joonsoo Kim Workingset detection for anonymous pages will be implemented in the following patch, and it requires storing shadow entries in the swapcache. This patch implements the infrastructure to store shadow entries in the swapcache. Acked-by: Johannes Weiner Signed-off-by:
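For context, a shadow entry is an xarray value entry left in the swap-cache mapping when a page is evicted. The helper below is purely illustrative (it is not the function added by the patch) and only shows how such an entry is recognized:

/* illustrative helper, not from the patch */
static void *lookup_swap_shadow(swp_entry_t entry)
{
	struct address_space *mapping = swap_address_space(entry);
	void *old = xa_load(&mapping->i_pages, swp_offset(entry));

	return xa_is_value(old) ? old : NULL;	/* value entry == shadow */
}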

[PATCH v7 3/6] mm/workingset: prepare the workingset detection infrastructure for anon LRU

2020-07-23 Thread js1304
From: Joonsoo Kim To prepare workingset detection for the anon LRU, this patch splits the workingset event counters for refault, activate and restore into anon and file variants, as well as the refaults counter in struct lruvec. Acked-by: Johannes Weiner Acked-by: Vlastimil Babka Signed-off-by:

[PATCH v7 0/6] workingset protection/detection on the anonymous LRU list

2020-07-23 Thread js1304
From: Joonsoo Kim Hello, This patchset implements workingset protection and detection on the anonymous LRU list. * Changes on v7 - fix a bug in clear_shadow_from_swap_cache() - enhance the commit description - fix the workingset detection formula * Changes on v6 - rework to reflect a new LRU

[PATCH v7 5/6] mm/swap: implement workingset detection for anonymous LRU

2020-07-23 Thread js1304
From: Joonsoo Kim This patch implements workingset detection for the anonymous LRU. All the infrastructure was implemented by the previous patches, so this patch just activates workingset detection by installing/retrieving the shadow entries and adding the refault calculation. Acked-by: Johannes Weiner

[PATCH v7 2/6] mm/vmscan: protect the workingset on anonymous LRU

2020-07-23 Thread js1304
From: Joonsoo Kim In the current implementation, a newly created or swapped-in anonymous page starts on the active list. A growing active list triggers rebalancing of the active/inactive lists, so old pages on the active list are demoted to the inactive list. Hence, pages on the active list aren't protected at all.
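A hedged sketch of the fault-path effect described above: a freshly mapped anonymous page is added to the inactive LRU rather than the active one (the helper name matches what was merged in v5.9-era kernels; the surrounding fault-handler context is abbreviated):

page_add_new_anon_rmap(page, vma, vmf->address, false);
lru_cache_add_inactive_or_unevictable(page, vma);	/* start on the inactive list */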

[PATCH v2] mm/page_alloc: fix memalloc_nocma_{save/restore} APIs

2020-07-22 Thread js1304
From: Joonsoo Kim Currently, the memalloc_nocma_{save/restore} API, which excludes the CMA area from page allocation, is implemented using current_gfp_context(). However, this implementation has two problems. First, it doesn't work for the allocation fastpath. In the fastpath, the original gfp_mask is

[PATCH] mm/page_alloc: fix memalloc_nocma_{save/restore} APIs

2020-07-20 Thread js1304
From: Joonsoo Kim Currently, the memalloc_nocma_{save/restore} API, which excludes the CMA area from page allocation, is implemented using current_gfp_context(). However, this implementation has two problems. First, it doesn't work for the allocation fastpath. In the fastpath, the original gfp_mask is

[PATCH v2 4/4] mm/gup: use a standard migration target allocation callback

2020-07-19 Thread js1304
From: Joonsoo Kim There is a well-defined migration target allocation callback. Use it. Acked-by: Vlastimil Babka Acked-by: Michal Hocko Signed-off-by: Joonsoo Kim --- mm/gup.c | 54 ++ 1 file changed, 6 insertions(+), 48 deletions(-)

[PATCH v2 3/4] mm/hugetlb: make hugetlb migration callback CMA aware

2020-07-19 Thread js1304
From: Joonsoo Kim new_non_cma_page() in gup.c needs to allocate a new page that is not on the CMA area. new_non_cma_page() implements this by using the allocation scope APIs. However, there is a work-around for hugetlb. The normal hugetlb page allocation API for migration is

[PATCH v2 1/4] mm/page_alloc: fix non cma alloc context

2020-07-19 Thread js1304
From: Joonsoo Kim Currently, excluding the CMA area from page allocation is implemented using current_gfp_context(). However, this implementation has two problems. First, it doesn't work for the allocation fastpath. In the fastpath, the original gfp_mask is used since current_gfp_context() is

[PATCH v2 2/4] mm/gup: restrict CMA region by using allocation scope API

2020-07-19 Thread js1304
From: Joonsoo Kim We have a well-defined scope API to exclude the CMA region. Use it rather than manipulating gfp_mask manually. With this change, we can restore __GFP_MOVABLE in gfp_mask as for a usual migration target allocation, which means ZONE_MOVABLE is also searched by the page

[PATCH 2/4] mm/gup: restrict CMA region by using allocation scope API

2020-07-14 Thread js1304
From: Joonsoo Kim We have a well-defined scope API to exclude the CMA region. Use it rather than manipulating gfp_mask manually. With this change, we can use __GFP_MOVABLE in gfp_mask, and ZONE_MOVABLE is also searched by the page allocator. For hugetlb, gfp_mask is redefined since it has a

[PATCH 3/4] mm/hugetlb: make hugetlb migration callback CMA aware

2020-07-14 Thread js1304
From: Joonsoo Kim new_non_cma_page() in gup.c needs to allocate a new page that is not on the CMA area. new_non_cma_page() implements this by using the allocation scope APIs. However, there is a work-around for hugetlb. The normal hugetlb page allocation API for migration is

[PATCH 4/4] mm/gup: use a standard migration target allocation callback

2020-07-14 Thread js1304
From: Joonsoo Kim There is a well-defined migration target allocation callback. Use it. Acked-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- mm/gup.c | 54 ++ 1 file changed, 6 insertions(+), 48 deletions(-) diff --git a/mm/gup.c

[PATCH 1/4] mm/page_alloc: fix non cma alloc context

2020-07-14 Thread js1304
From: Joonsoo Kim Currently, excluding the CMA area from page allocation is implemented using current_gfp_context(). However, this implementation has two problems. First, it doesn't work for the allocation fastpath. In the fastpath, the original gfp_mask is used since current_gfp_context() is

[PATCH v5 5/9] mm/migrate: make a standard migration target allocation function

2020-07-13 Thread js1304
From: Joonsoo Kim There are several similar functions for migration target allocation. Since there is no fundamental difference between them, it's better to keep just one rather than all the variants. This patch implements the base migration target allocation function. In the following patches, the variants will

[PATCH v5 8/9] mm/memory-failure: remove a wrapper for alloc_migration_target()

2020-07-13 Thread js1304
From: Joonsoo Kim There is a well-defined standard migration target callback. Use it directly. Acked-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- mm/memory-failure.c | 18 ++ 1 file changed, 6 insertions(+), 12 deletions(-) diff --git a/mm/memory-failure.c

[PATCH v5 6/9] mm/mempolicy: use a standard migration target allocation callback

2020-07-13 Thread js1304
From: Joonsoo Kim There is a well-defined migration target allocation callback. Use it. Acked-by: Michal Hocko Acked-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- mm/internal.h | 1 - mm/mempolicy.c | 31 ++- mm/migrate.c | 8 ++-- 3 files changed,

[PATCH v5 2/9] mm/migrate: move migration helper from .h to .c

2020-07-13 Thread js1304
From: Joonsoo Kim It's not a performance-sensitive function. Move it to a .c file. This is a preparation step for a future change. Acked-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- include/linux/migrate.h | 33 +

[PATCH v5 9/9] mm/memory_hotplug: remove a wrapper for alloc_migration_target()

2020-07-13 Thread js1304
From: Joonsoo Kim To calculate the correct node to migrate the page to for hotplug, we need to check the node id of the page. A wrapper for alloc_migration_target() exists for this purpose. However, Vlastimil points out that all migration source pages come from a single node. In this case, we don't need to

[PATCH v5 0/9] clean-up the migration target allocation functions

2020-07-13 Thread js1304
From: Joonsoo Kim This patchset cleans up the migration target allocation functions. * Changes on v5 - remove the new_non_cma_page() related patches (the memalloc_nocma_{save,restore} implementation has a critical bug that cannot exclude CMA memory in some cases, so they cannot be used here. Need to fix

[PATCH v5 1/9] mm/page_isolation: prefer the node of the source page

2020-07-13 Thread js1304
From: Joonsoo Kim For locality, it's better to migrate the page to the node of the source page rather than to the node of the current caller's CPU. Acked-by: Roman Gushchin Acked-by: Michal Hocko Reviewed-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- mm/page_isolation.c | 4 +++- 1 file changed, 3
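An illustrative migration-target callback embodying the locality rule above: allocate on the node of the page being migrated rather than on the caller's node. The function is a sketch, not the code from the patch:

static struct page *new_page_on_src_node(struct page *page, unsigned long private)
{
	int nid = page_to_nid(page);	/* node of the source page */

	return alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE, 0);
}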

[PATCH v5 4/9] mm/migrate: clear __GFP_RECLAIM to make the migration callback consistent with regular THP allocations

2020-07-13 Thread js1304
From: Joonsoo Kim new_page_nodemask() is a migration callback, and it tries to use common gfp flags for the target page allocation whether it is a base page or a THP. The latter only adds GFP_TRANSHUGE to the given mask. This results in the allocation being slightly more aggressive than necessary
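A hedged sketch of the gfp adjustment this implies for the THP case; where exactly it lands inside the migration callback may differ from the patch:

if (PageTransHuge(page)) {
	gfp_mask &= ~__GFP_RECLAIM;	/* drop the caller's reclaim behaviour */
	gfp_mask |= GFP_TRANSHUGE;	/* let the THP policy decide reclaim/compaction */
	order = HPAGE_PMD_ORDER;
}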

[PATCH v5 3/9] mm/hugetlb: unify migration callbacks

2020-07-13 Thread js1304
From: Joonsoo Kim There is no difference between the two migration callback functions, alloc_huge_page_node() and alloc_huge_page_nodemask(), except for the __GFP_THISNODE handling. It's redundant to have two nearly identical functions just to handle this flag. So, this patch removes one by
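A hedged sketch of what the unification looks like for callers: the nodemask variant gains a gfp_mask argument, and a former alloc_huge_page_node(h, nid) user becomes roughly:

gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;	/* node-bound allocation */

page = alloc_huge_page_nodemask(h, nid, NULL, gfp_mask);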

[PATCH v5 7/9] mm/page_alloc: remove a wrapper for alloc_migration_target()

2020-07-13 Thread js1304
From: Joonsoo Kim There is a well-defined standard migration target callback. Use it directly. Acked-by: Michal Hocko Acked-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- mm/page_alloc.c | 8 ++-- mm/page_isolation.c | 10 -- 2 files changed, 6 insertions(+), 12

[PATCH v4 11/11] mm/memory_hotplug: remove a wrapper for alloc_migration_target()

2020-07-07 Thread js1304
From: Joonsoo Kim To calculate the correct node to migrate the page to for hotplug, we need to check the node id of the page. A wrapper for alloc_migration_target() exists for this purpose. However, Vlastimil points out that all migration source pages come from a single node. In this case, we don't need to

[PATCH v4 10/11] mm/memory-failure: remove a wrapper for alloc_migration_target()

2020-07-07 Thread js1304
From: Joonsoo Kim There is a well-defined standard migration target callback. Use it directly. Signed-off-by: Joonsoo Kim --- mm/memory-failure.c | 18 ++ 1 file changed, 6 insertions(+), 12 deletions(-) diff --git a/mm/memory-failure.c b/mm/memory-failure.c index

[PATCH v4 09/11] mm/page_alloc: remove a wrapper for alloc_migration_target()

2020-07-07 Thread js1304
From: Joonsoo Kim There is a well-defined standard migration target callback. Use it directly. Acked-by: Michal Hocko Acked-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- mm/page_alloc.c | 8 ++-- mm/page_isolation.c | 10 -- 2 files changed, 6 insertions(+), 12

[PATCH v4 08/11] mm/mempolicy: use a standard migration target allocation callback

2020-07-07 Thread js1304
From: Joonsoo Kim There is a well-defined migration target allocation callback. Use it. Acked-by: Michal Hocko Acked-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- mm/internal.h | 1 - mm/mempolicy.c | 31 ++- mm/migrate.c | 8 ++-- 3 files changed,

[PATCH v4 00/11] clean-up the migration target allocation functions

2020-07-07 Thread js1304
From: Joonsoo Kim This patchset cleans up the migration target allocation functions. * Changes on v4 - use the full gfp_mask - use memalloc_nocma_{save,restore} to exclude CMA memory - separate __GFP_RECLAIM handling for THP allocation - remove more wrapper functions * Changes on v3 - As Vlastimil

[PATCH v4 01/11] mm/page_isolation: prefer the node of the source page

2020-07-07 Thread js1304
From: Joonsoo Kim For locality, it's better to migrate the page to the node of the source page rather than to the node of the current caller's CPU. Acked-by: Roman Gushchin Acked-by: Michal Hocko Reviewed-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- mm/page_isolation.c | 4 +++- 1 file changed, 3

[PATCH v4 07/11] mm/gup: use a standard migration target allocation callback

2020-07-07 Thread js1304
From: Joonsoo Kim There is a well-defined migration target allocation callback. It's mostly similar to new_non_cma_page() except for the handling of CMA pages. This patch adds CMA awareness to the standard migration target allocation callback and uses it in gup.c. Acked-by: Vlastimil Babka

[PATCH v4 05/11] mm/migrate: clear __GFP_RECLAIM for THP allocation for migration

2020-07-07 Thread js1304
From: Joonsoo Kim In mm/migrate.c, THP allocation for migration is performed with the provided gfp_mask | GFP_TRANSHUGE. This gfp_mask contains __GFP_RECLAIM, which conflicts with the intention of GFP_TRANSHUGE. GFP_TRANSHUGE/GFP_TRANSHUGE_LIGHT were introduced to control the reclaim

[PATCH v4 06/11] mm/migrate: make a standard migration target allocation function

2020-07-07 Thread js1304
From: Joonsoo Kim There are several similar functions for migration target allocation. Since there is no fundamental difference between them, it's better to keep just one rather than all the variants. This patch implements the base migration target allocation function. In the following patches, the variants will

[PATCH v4 04/11] mm/hugetlb: make hugetlb migration callback CMA aware

2020-07-07 Thread js1304
From: Joonsoo Kim new_non_cma_page() in gup.c, which tries to allocate a migration target page, needs to allocate the new page outside the CMA area. new_non_cma_page() implements this by removing the __GFP_MOVABLE flag. This approach works well for a THP or a normal page, but not for a hugetlb page.

[PATCH v4 03/11] mm/hugetlb: unify migration callbacks

2020-07-07 Thread js1304
From: Joonsoo Kim There is no difference between the two migration callback functions, alloc_huge_page_node() and alloc_huge_page_nodemask(), except for the __GFP_THISNODE handling. It's redundant to have two nearly identical functions just to handle this flag. So, this patch removes one by

[PATCH v4 02/11] mm/migrate: move migration helper from .h to .c

2020-07-07 Thread js1304
From: Joonsoo Kim It's not a performance-sensitive function. Move it to a .c file. This is a preparation step for a future change. Acked-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- include/linux/migrate.h | 33 +

[PATCH v3 6/8] mm/gup: use a standard migration target allocation callback

2020-06-23 Thread js1304
From: Joonsoo Kim There is a well-defined migration target allocation callback. It's mostly similar to new_non_cma_page() except for the handling of CMA pages. This patch adds CMA awareness to the standard migration target allocation callback and uses it in gup.c. Signed-off-by: Joonsoo Kim

[PATCH v3 2/8] mm/migrate: move migration helper from .h to .c

2020-06-23 Thread js1304
From: Joonsoo Kim It's not a performance-sensitive function. Move it to a .c file. This is a preparation step for a future change. Acked-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- include/linux/migrate.h | 33 +

[PATCH v3 3/8] mm/hugetlb: unify migration callbacks

2020-06-23 Thread js1304
From: Joonsoo Kim There is no difference between the two migration callback functions, alloc_huge_page_node() and alloc_huge_page_nodemask(), except for the __GFP_THISNODE handling. This patch adds a gfp_mask argument to alloc_huge_page_nodemask() and replaces the callsites of alloc_huge_page_node() with

[PATCH v3 5/8] mm/migrate: make a standard migration target allocation function

2020-06-23 Thread js1304
From: Joonsoo Kim There are several similar functions for migration target allocation. Since there is no fundamental difference between them, it's better to keep just one rather than all the variants. This patch implements the base migration target allocation function. In the following patches, the variants will be

[PATCH v3 7/8] mm/mempolicy: use a standard migration target allocation callback

2020-06-23 Thread js1304
From: Joonsoo Kim There is a well-defined migration target allocation callback. Use it. Signed-off-by: Joonsoo Kim --- mm/internal.h | 1 - mm/mempolicy.c | 30 ++ mm/migrate.c | 8 ++-- 3 files changed, 12 insertions(+), 27 deletions(-) diff --git

[PATCH v3 4/8] mm/hugetlb: make hugetlb migration callback CMA aware

2020-06-23 Thread js1304
From: Joonsoo Kim new_non_cma_page() in gup.c, which tries to allocate a migration target page, needs to allocate the new page outside the CMA area. new_non_cma_page() implements this by removing the __GFP_MOVABLE flag. This approach works well for a THP or a normal page, but not for a hugetlb page.

[PATCH v3 8/8] mm/page_alloc: remove a wrapper for alloc_migration_target()

2020-06-23 Thread js1304
From: Joonsoo Kim There is a well-defined standard migration target callback. Use it directly. Signed-off-by: Joonsoo Kim --- mm/page_alloc.c | 9 +++-- mm/page_isolation.c | 11 --- 2 files changed, 7 insertions(+), 13 deletions(-) diff --git a/mm/page_alloc.c

[PATCH v3 1/8] mm/page_isolation: prefer the node of the source page

2020-06-23 Thread js1304
From: Joonsoo Kim For locality, it's better to migrate the page to the node of the source page rather than to the node of the current caller's CPU. Acked-by: Roman Gushchin Acked-by: Michal Hocko Reviewed-by: Vlastimil Babka Signed-off-by: Joonsoo Kim --- mm/page_isolation.c | 4 +++- 1 file changed, 3

[PATCH v3 0/8] clean-up the migration target allocation functions

2020-06-23 Thread js1304
From: Joonsoo Kim This patchset cleans up the migration target allocation functions. * Changes on v3 - do not introduce alloc_control for the hugetlb functions - do not change the signature of migrate_pages() - rename alloc_control to migration_target_control * Changes on v2 - add acked-by tags -

[PATCH v6 0/6] workingset protection/detection on the anonymous LRU list

2020-06-16 Thread js1304
From: Joonsoo Kim Hello, This patchset implements workingset protection and detection on the anonymous LRU list. * Changes on v6 - rework to reflect the new LRU balance model - remove the memcg charge timing changes from v5 since an alternative is already merged in mainline - remove the readahead changes from v5

[PATCH v6 2/6] mm/vmscan: protect the workingset on anonymous LRU

2020-06-16 Thread js1304
From: Joonsoo Kim In the current implementation, a newly created or swapped-in anonymous page starts on the active list. A growing active list triggers rebalancing of the active/inactive lists, so old pages on the active list are demoted to the inactive list. Hence, pages on the active list aren't protected at all.

[PATCH v6 6/6] mm/vmscan: restore active/inactive ratio for anonymous LRU

2020-06-16 Thread js1304
From: Joonsoo Kim Now, workingset detection is implemented for the anonymous LRU. We don't have to worry about missing the workingset due to the active/inactive ratio. Let's restore the ratio. Acked-by: Johannes Weiner Signed-off-by: Joonsoo Kim --- mm/vmscan.c | 2 +- 1 file changed, 1

[PATCH v6 3/6] mm/workingset: extend the workingset detection for anon LRU

2020-06-16 Thread js1304
From: Joonsoo Kim In the following patch, workingset detection will be applied to the anonymous LRU. To prepare for it, this patch adds some code to distinguish and handle both LRUs. v6: do not introduce a new nonresident_age for the anon LRU since we need to use a *unified* nonresident_age to implement

[PATCH v6 1/6] mm/vmscan: make active/inactive ratio as 1:1 for anon lru

2020-06-16 Thread js1304
From: Joonsoo Kim The current implementation of LRU management for anonymous pages has some problems. The most important one is that it doesn't protect the workingset, that is, the pages on the active LRU list. Although this problem will be fixed in the following patchset, preparation is required and

[PATCH v6 4/6] mm/swapcache: support to handle the exceptional entries in swapcache

2020-06-16 Thread js1304
From: Joonsoo Kim The swapcache doesn't handle exceptional entries since there has been no case that uses them. In the following patch, workingset detection for anonymous pages will be implemented, and it stores shadow entries as exceptional entries in the swapcache. So, we need to handle the

[PATCH v6 5/6] mm/swap: implement workingset detection for anonymous LRU

2020-06-16 Thread js1304
From: Joonsoo Kim This patch implements workingset detection for the anonymous LRU. All the infrastructure was implemented by the previous patches, so this patch just activates workingset detection by installing/retrieving the shadow entries. Signed-off-by: Joonsoo Kim --- include/linux/swap.h |

[PATCH for v5.8 1/3] mm: workingset: age nonresident information alongside anonymous pages

2020-06-16 Thread js1304
From: Johannes Weiner After ("mm: workingset: let cache workingset challenge anon fix"), we compare refault distances to active_file + anon. But the age of the non-resident information is only driven by the file LRU. As a result, we may overestimate the recency of any incoming refaults and activate

[PATCH for v5.8 2/3] mm/swap: fix for "mm: workingset: age nonresident information alongside anonymous pages"

2020-06-16 Thread js1304
From: Joonsoo Kim A non-file-LRU page can also be activated in mark_page_accessed(), and we need to count this activation toward nonresident_age. Note that it's better for this patch to be squashed into the patch "mm: workingset: age nonresident information alongside anonymous pages".

[PATCH for v5.8 0/3] fix for "mm: balance LRU lists based on relative thrashing" patchset

2020-06-16 Thread js1304
From: Joonsoo Kim This patchset fixes some problems in the patchset "mm: balance LRU lists based on relative thrashing", which is now merged in mainline. The patch "mm: workingset: let cache workingset challenge anon fix" is the result of a discussion with Johannes. See the following link.

[PATCH for v5.8 3/3] mm/memory: fix IO cost for anonymous page

2020-06-16 Thread js1304
From: Joonsoo Kim With a synchronous IO swap device, swap-in is handled directly in the fault code. Since the IO cost accounting isn't added there, LRU balancing could be wrongly biased with such a device. Fix it by counting the cost in the fault code. Signed-off-by: Joonsoo Kim --- mm/memory.c | 8

[PATCH v2 12/12] mm/page_alloc: use standard migration target allocation function directly

2020-05-27 Thread js1304
From: Joonsoo Kim There is no need to define a wrapper function just to call the standard migration target allocation function. Use the standard one directly. Signed-off-by: Joonsoo Kim --- include/linux/page-isolation.h | 2 -- mm/page_alloc.c| 9 +++-- mm/page_isolation.c

[PATCH v2 07/12] mm/hugetlb: do not modify user provided gfp_mask

2020-05-27 Thread js1304
From: Joonsoo Kim It's not good practice to modify user input. Instead of reusing it to build the correct gfp_mask for the APIs, this patch introduces another gfp_mask field, __gfp_mask, for internal usage. Signed-off-by: Joonsoo Kim --- mm/hugetlb.c | 19 ++- mm/internal.h | 2 ++ 2

[PATCH v2 10/12] mm/gup: use standard migration target allocation function

2020-05-27 Thread js1304
From: Joonsoo Kim There is no reason to implement its own function for migration target allocation. Use the standard one. Signed-off-by: Joonsoo Kim --- mm/gup.c | 61 ++--- 1 file changed, 10 insertions(+), 51 deletions(-) diff --git

[PATCH v2 11/12] mm/mempolicy: use standard migration target allocation function

2020-05-27 Thread js1304
From: Joonsoo Kim There is no reason to implement its own function for migration target allocation. Use the standard one. Signed-off-by: Joonsoo Kim --- mm/internal.h | 3 --- mm/mempolicy.c | 32 +++- mm/migrate.c | 3 ++- 3 files changed, 5 insertions(+), 33

[PATCH v2 08/12] mm/migrate: change the interface of the migration target alloc/free functions

2020-05-27 Thread js1304
From: Joonsoo Kim To prepare for unifying the duplicated functions in the following patches, this patch changes the interface of the migration target alloc/free functions. The functions now take struct alloc_control as an argument. There is no functional change. Signed-off-by: Joonsoo Kim ---

[PATCH v2 02/12] mm/migrate: move migration helper from .h to .c

2020-05-27 Thread js1304
From: Joonsoo Kim It's not a performance-sensitive function. Move it to a .c file. This is a preparation step for a future change. Acked-by: Mike Kravetz Signed-off-by: Joonsoo Kim --- include/linux/migrate.h | 33 + mm/migrate.c| 29

[PATCH v2 09/12] mm/migrate: make standard migration target allocation functions

2020-05-27 Thread js1304
From: Joonsoo Kim There are several similar functions for migration target allocation. Since there is no fundamental difference between them, it's better to keep just one rather than all the variants. This patch implements the base migration target allocation function. In the following patches, the variants will be

[PATCH v2 00/12] clean-up the migration target allocation functions

2020-05-27 Thread js1304
From: Joonsoo Kim This patchset cleans up the migration target allocation functions. * Changes on v2 - add acked-by tags - fix a missing compound_head() call in patch #3 - remove the thisnode field from alloc_control and use __GFP_THISNODE directly - fix the missing __gfp_mask setup for the patch

[PATCH v2 03/12] mm/hugetlb: introduce alloc_control structure to simplify migration target allocation APIs

2020-05-27 Thread js1304
From: Joonsoo Kim Currently, the page allocation functions for migration require several arguments. Worse, in the following patch, more arguments will be needed to unify the similar functions. To simplify them, this patch introduces a unified data structure that controls the allocation behaviour.
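An illustrative layout of the control structure, pieced together from the series description (later versions of the series rename it migration_target_control); the exact field set is an assumption:

struct alloc_control {
	int nid;		/* preferred node for the target page */
	nodemask_t *nmask;	/* allowed nodes, NULL means any */
	gfp_t gfp_mask;		/* base gfp flags for the allocation */
};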

[PATCH v2 04/12] mm/hugetlb: use provided ac->gfp_mask for allocation

2020-05-27 Thread js1304
From: Joonsoo Kim The gfp_mask handling in alloc_huge_page_(node|nodemask) is slightly changed, from assignment to OR. It's safe since the callers of these functions don't pass an extra gfp_mask beyond htlb_alloc_mask(). This is a preparation step for the following patches. Signed-off-by: Joonsoo Kim ---

[PATCH v2 01/12] mm/page_isolation: prefer the node of the source page

2020-05-27 Thread js1304
From: Joonsoo Kim For locality, it's better to migrate the page to the node of the source page rather than to the node of the current caller's CPU. Acked-by: Roman Gushchin Signed-off-by: Joonsoo Kim --- mm/page_isolation.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git

[PATCH v2 06/12] mm/hugetlb: make hugetlb migration target allocation APIs CMA aware

2020-05-27 Thread js1304
From: Joonsoo Kim There is a user who does not want to use CMA memory for migration. Until now, this has been implemented on the caller side, but that's not optimal since the caller has limited information. This patch implements it on the callee side to get a better result. Acked-by: Mike Kravetz Signed-off-by:

[PATCH v2 05/12] mm/hugetlb: unify hugetlb migration callback function

2020-05-27 Thread js1304
From: Joonsoo Kim There is no difference between the two migration callback functions, alloc_huge_page_node() and alloc_huge_page_nodemask(), except for the __GFP_THISNODE handling. This patch moves this handling to alloc_huge_page_nodemask() and its callers, then removes alloc_huge_page_node().

[PATCH 06/11] mm/hugetlb: do not modify user provided gfp_mask

2020-05-17 Thread js1304
From: Joonsoo Kim It's not good practice to modify user input. Instead of reusing it to build the correct gfp_mask for the APIs, this patch introduces another gfp_mask field, __gfp_mask, for internal usage. Signed-off-by: Joonsoo Kim --- mm/hugetlb.c | 15 --- mm/internal.h | 2 ++ 2

[PATCH 10/11] mm/mempolicy: use standard migration target allocation function

2020-05-17 Thread js1304
From: Joonsoo Kim There is no reason to implement its own function for migration target allocation. Use the standard one. Signed-off-by: Joonsoo Kim --- mm/internal.h | 3 --- mm/mempolicy.c | 33 - mm/migrate.c | 4 +++- 3 files changed, 7 insertions(+), 33

[PATCH 03/11] mm/hugetlb: introduce alloc_control structure to simplify migration target allocation APIs

2020-05-17 Thread js1304
From: Joonsoo Kim Currently, the page allocation functions for migration require several arguments. Worse, in the following patch, more arguments will be needed to unify the similar functions. To simplify them, this patch introduces a unified data structure that controls the allocation behaviour.

[PATCH 02/11] mm/migrate: move migration helper from .h to .c

2020-05-17 Thread js1304
From: Joonsoo Kim It's not a performance-sensitive function. Move it to a .c file. This is a preparation step for a future change. Signed-off-by: Joonsoo Kim --- include/linux/migrate.h | 33 + mm/migrate.c| 29 + 2 files changed,

[PATCH 00/11] clean-up the migration target allocation functions

2020-05-17 Thread js1304
From: Joonsoo Kim This patchset cleans up the migration target allocation functions. The contributions of this patchset are: 1. unify the two hugetlb alloc functions, so that only one remains; 2. turn one external hugetlb alloc function into an internal one; 3. unify three functions for migration target

[PATCH 11/11] mm/page_alloc: use standard migration target allocation function directly

2020-05-17 Thread js1304
From: Joonsoo Kim There is no need to define a wrapper function just to call the standard migration target allocation function. Use the standard one directly. Signed-off-by: Joonsoo Kim --- include/linux/page-isolation.h | 2 -- mm/page_alloc.c| 9 +++-- mm/page_isolation.c

[PATCH 07/11] mm/migrate: change the interface of the migration target alloc/free functions

2020-05-17 Thread js1304
From: Joonsoo Kim To prepare for unifying the duplicated functions in the following patches, this patch changes the interface of the migration target alloc/free functions. The functions now take struct alloc_control as an argument. There is no functional change. Signed-off-by: Joonsoo Kim ---

[PATCH 01/11] mm/page_isolation: prefer the node of the source page

2020-05-17 Thread js1304
From: Joonsoo Kim For locality, it's better to migrate the page to the node of the source page rather than to the node of the current caller's CPU. Signed-off-by: Joonsoo Kim --- mm/page_isolation.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/mm/page_isolation.c

[PATCH 05/11] mm/hugetlb: make hugetlb migration target allocation APIs CMA aware

2020-05-17 Thread js1304
From: Joonsoo Kim There is a user who does not want to use CMA memory for migration. Until now, this has been implemented on the caller side, but that's not optimal since the caller has limited information. This patch implements it on the callee side to get a better result. Signed-off-by: Joonsoo Kim ---

[PATCH 04/11] mm/hugetlb: unify hugetlb migration callback function

2020-05-17 Thread js1304
From: Joonsoo Kim There is no difference between the two migration callback functions, alloc_huge_page_node() and alloc_huge_page_nodemask(), except for the __GFP_THISNODE handling. This patch adds one more field to alloc_control and handles this exception. Signed-off-by: Joonsoo Kim ---

[PATCH 09/11] mm/gup: use standard migration target allocation function

2020-05-17 Thread js1304
From: Joonsoo Kim There is no reason to implement its own function for migration target allocation. Use the standard one. Signed-off-by: Joonsoo Kim --- mm/gup.c | 61 ++--- 1 file changed, 10 insertions(+), 51 deletions(-) diff --git

[PATCH 08/11] mm/migrate: make standard migration target allocation functions

2020-05-17 Thread js1304
From: Joonsoo Kim There are several similar functions for migration target allocation. Since there is no fundamental difference between them, it's better to keep just one rather than all the variants. This patch implements the base migration target allocation function. In the following patches, the variants will be

[PATCH v2 05/10] mm/gup: separate PageHighMem() and PageHighMemZone() use case

2020-04-28 Thread js1304
From: Joonsoo Kim Until now, PageHighMem() has been used for two different purposes. One is to check whether there is a direct mapping for the page. The other is to check the zone of the page, that is, whether it is a highmem-type zone or not. Now, we have separate functions, PageHighMem() and
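A sketch of the distinction drawn above. PageHighMemZone() is the macro proposed by this patchset, shown here in an assumed form; PageHighMem() keeps answering the direct-mapping question:

#define PageHighMemZone(page)	is_highmem_idx(page_zonenum(page))	/* assumed form */

void *vaddr;

if (PageHighMem(page))		/* direct-mapping question: needs kmap() */
	vaddr = kmap(page);
else
	vaddr = page_address(page);

if (PageHighMemZone(page)) {	/* zone question: the page lives in a highmem zone */
	/* zone-specific handling */
}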

[PATCH v2 06/10] mm/hugetlb: separate PageHighMem() and PageHighMemZone() use case

2020-04-28 Thread js1304
From: Joonsoo Kim Until now, PageHighMem() has been used for two different purposes. One is to check whether there is a direct mapping for the page. The other is to check the zone of the page, that is, whether it is a highmem-type zone or not. Now, we have separate functions, PageHighMem() and

[PATCH v2 04/10] power: separate PageHighMem() and PageHighMemZone() use case

2020-04-28 Thread js1304
From: Joonsoo Kim Until now, PageHighMem() has been used for two different purposes. One is to check whether there is a direct mapping for the page. The other is to check the zone of the page, that is, whether it is a highmem-type zone or not. Now, we have separate functions, PageHighMem() and

[PATCH v2 08/10] mm/page_alloc: correct the use of is_highmem_idx()

2020-04-28 Thread js1304
From: Joonsoo Kim What we'd like to check here is whether the page has a direct mapping or not. Use PageHighMem() since it exactly matches this purpose. Acked-by: Roman Gushchin Signed-off-by: Joonsoo Kim --- mm/page_alloc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff

[PATCH v2 10/10] mm/page-flags: change the implementation of the PageHighMem()

2020-04-28 Thread js1304
From: Joonsoo Kim Until now, PageHighMem() has been used for two different purposes. One is to check whether there is a direct mapping for the page. The other is to check the zone of the page, that is, whether it is a highmem-type zone or not. The previous patches introduce the PageHighMemZone() macro and

[PATCH v2 09/10] mm/migrate: replace PageHighMem() with open-code

2020-04-28 Thread js1304
From: Joonsoo Kim The implementation of PageHighMem() will be changed in the following patches. Before that, open-code the check to avoid side effects from the PageHighMem() implementation change. Acked-by: Roman Gushchin Signed-off-by: Joonsoo Kim --- include/linux/migrate.h | 4 +++- 1 file changed, 3

[PATCH v2 07/10] mm: separate PageHighMem() and PageHighMemZone() use case

2020-04-28 Thread js1304
From: Joonsoo Kim Until now, PageHighMem() has been used for two different purposes. One is to check whether there is a direct mapping for the page. The other is to check the zone of the page, that is, whether it is a highmem-type zone or not. Now, we have separate functions, PageHighMem() and

[PATCH v2 03/10] kexec: separate PageHighMem() and PageHighMemZone() use case

2020-04-28 Thread js1304
From: Joonsoo Kim Until now, PageHighMem() has been used for two different purposes. One is to check whether there is a direct mapping for the page. The other is to check the zone of the page, that is, whether it is a highmem-type zone or not. Now, we have separate functions, PageHighMem() and

[PATCH v2 02/10] drm/ttm: separate PageHighMem() and PageHighMemZone() use case

2020-04-28 Thread js1304
From: Joonsoo Kim Until now, PageHighMem() has been used for two different purposes. One is to check whether there is a direct mapping for the page. The other is to check the zone of the page, that is, whether it is a highmem-type zone or not. Now, we have separate functions, PageHighMem() and

[PATCH v2 00/10] change the implementation of the PageHighMem()

2020-04-28 Thread js1304
From: Joonsoo Kim Changes on v2 - add "acked-by" and "reviewed-by" tags - replace PageHighMem() with open-coded checks instead of using the new PageHighMemZone() macro. The related file is "include/linux/migrate.h". Hello, This patchset separates the two use cases of PageHighMem() by introducing