is changed to before unlocking sub-pages, so that all sub-pages are kept
locked from when the THP is split until the huge swap cluster is split.
This makes the code much easier to reason about.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Hi, Andrew, could you help me to check whether the overall design is
reasonable?
Hi, Hugh, Shaohua, Minchan and Rik, could you help me to review the
swap part of the patchset? Especially [02/21], [03/21], [04/21],
[05/21], [06/21], [07/21], [08/21], [09/21], [10/21], [11/21],
[12/21], [20/21],
refactoring; there is no functional change in this patch.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horig
will be split if
its PMD swap mapping count is 0.
The first parameter of swap_duplicate() is changed to return the swap
entry to call add_swap_count_continuation() for, because we may need
to call it for a swap entry in the middle of a huge swap cluster.
Signed-off-by: "Huang, Ying"
C
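The changed calling convention can be sketched in plain user-space C. Everything below (the types, the toy swap map, the count limit) is an invented stand-in rather than the actual kernel code; it only illustrates how passing the entry by pointer lets the function report back which sub-entry of a cluster overflowed:

```c
/* Hypothetical sketch of the changed swap_duplicate() convention:
 * the caller passes the entry by pointer, and on -ENOMEM the
 * function writes back the exact sub-entry that needs
 * add_swap_count_continuation(). All names/values are stand-ins. */

typedef struct { unsigned long val; } swp_entry_t;

#define ENOMEM 12
#define SWAP_MAP_MAX 0x3e

/* toy per-entry count table standing in for the swap map */
static unsigned char swap_map[512];

/* duplicate nr_entries consecutive entries; report the one that
 * overflowed via *entry */
static int swap_duplicate(swp_entry_t *entry, int nr_entries)
{
    for (int i = 0; i < nr_entries; i++) {
        unsigned long off = entry->val + i;

        if (swap_map[off] >= SWAP_MAP_MAX) {
            /* write back the overflowing sub-entry so the caller
             * can call add_swap_count_continuation() on it */
            entry->val = off;
            return -ENOMEM;
        }
        swap_map[off]++;
    }
    return 0;
}
```

So a caller duplicating a whole huge swap cluster learns which middle entry needs a continuation, instead of only knowing that some entry failed.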
ead. Some functions enabled by CONFIG_ARCH_ENABLE_THP_MIGRATION
are for page migration only; they remain enabled only for that.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickin
Christopher Lameter writes:
> On Thu, 6 Sep 2018, Huang, Ying wrote:
>
>> > Certainly interested in attending but this overlaps supercomputing 2018 in
>> > Dallas Texas...
>>
>> Sorry to know this. It appears that there are too many conferences in
&
rested in attending but this overlaps supercomputing 2018 in
> Dallas Texas...
Sorry to know this. It appears that there are too many conferences in
November...
Best Regards,
Huang, Ying
to PMD swap mappings to the corresponding swap cluster. So when
clearing the SWAP_HAS_CACHE flag, the huge swap cluster will only be
split if the PMD swap mapping count is 0. Otherwise, we keep it as a
huge swap cluster, so that we can swap in a THP in one piece later.
Signed-off-by: "
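As a rough illustration of the decision described above (struct and function names are invented here, not the kernel's):

```c
/* Hedged sketch of "split only when no PMD swap mapping remains":
 * when SWAP_HAS_CACHE is cleared on a huge swap cluster, split the
 * cluster only if its PMD swap mapping count has dropped to 0.
 * Field names are invented for illustration. */

struct toy_cluster {
    int pmd_map_count;  /* PMD swap mappings still pointing here */
    int is_huge;        /* 1 while this is still a huge cluster  */
};

static void toy_split_cluster(struct toy_cluster *c)
{
    c->is_huge = 0;
}

static void toy_clear_has_cache(struct toy_cluster *c)
{
    if (c->pmd_map_count == 0)
        toy_split_cluster(c);  /* nobody can swap it in as a THP */
    /* otherwise keep it huge so a later swapin can be one piece */
}
```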
to fall back to normal page swapin.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
Cc: Daniel
be freed.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
Cc: Daniel Jordan
---
arch/s39
continuation failed to allocate a page with
GFP_ATOMIC, we need to unlock the spinlock and try again with
GFP_KERNEL.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minc
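The unlock-and-retry pattern described above can be sketched in user-space C, with a mutex standing in for the spinlock and malloc() standing in for the two allocation modes. All names here are stand-ins, not kernel APIs:

```c
/* Sketch: try a non-blocking ("GFP_ATOMIC"-like) allocation under
 * the lock; if it fails, drop the lock and retry with a blocking
 * ("GFP_KERNEL"-like) allocation. User-space analogue only. */
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *alloc_atomic(size_t n)   { return malloc(n); } /* may fail  */
static void *alloc_blocking(size_t n) { return malloc(n); } /* may sleep */

/* atomic_fails simulates GFP_ATOMIC exhaustion for the sketch */
static void *alloc_with_retry(size_t n, int atomic_fails)
{
    void *p;

    pthread_mutex_lock(&lock);
    p = atomic_fails ? NULL : alloc_atomic(n);
    if (!p) {
        /* cannot sleep while holding the lock: drop it first */
        pthread_mutex_unlock(&lock);
        p = alloc_blocking(n);
        pthread_mutex_lock(&lock);
        /* real code must revalidate any state read earlier:
         * it may have changed while the lock was dropped */
    }
    pthread_mutex_unlock(&lock);
    return p;
}
```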
For a PMD swap mapping, zap_huge_pmd() will clear the PMD and call
free_swap_and_cache() to decrease the swap reference count and maybe
free or split the huge swap cluster and the THP in swap cache.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
to PTE processing.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
Cc: Daniel Jordan
During MADV_WILLNEED, for a PMD swap mapping, if THP swapin is enabled
for the VMA, the whole swap cluster will be swapped in. Otherwise, the
huge swap cluster and the PMD swap mapping will be split and we fall
back to PTE swap mapping.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutem
is
disabled, the huge swap cluster and the PMD swap mapping will be split
and we fall back to normal page swapin.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Ri
During mincore(), for a PMD swap mapping, the swap cache is looked up.
If the resulting page isn't a compound page, the PMD swap mapping will
be split and we fall back to PTE swap mapping processing.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
swapping is
used, so that we can take full advantage of THP including its high
performance for swapout/swapin.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
mapping count and probably free the swap space
and the THP in swap cache too.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
The help of CONFIG_THP_SWAP is updated to reflect the latest progress
of THP (Transparent Huge Page) swap optimization.
Signed-off-by: "Huang, Ying"
Reviewed-by: Dan Williams
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua
The original code is only for PMD migration entries; it is revised to
support PMD swap mappings.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Two new /proc/vmstat fields, "thp_swapin" and "thp_swapin_fallback",
are added to count swapping in a THP from the swap device in one piece
and falling back to normal page swapin.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Ho
is split already, we will split the PMD swap mapping and
unuse the PTEs.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Nao
swap cluster too. If the PMD swap mapping count
becomes 0, the huge swap cluster will be split.
Notice: is_huge_zero_pmd() and pmd_page() don't work well with a swap
PMD, so a pmd_present() check is done before calling them.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc:
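A toy user-space sketch of why that ordering matters (the bit layout and all names below are invented for illustration only; they are not the kernel's):

```c
/* Helpers that decode a PMD as a page pointer are only meaningful
 * for present PMDs, so presence must be checked first: a swap PMD
 * reuses the bits for a swap entry and must not be decoded. */

typedef struct { unsigned long val; } pmd_t;

#define TOY_PMD_PRESENT 0x1UL   /* invented present bit   */
#define TOY_HUGE_ZERO   0x1000UL /* invented zero-page pfn */

static int toy_pmd_present(pmd_t pmd)
{
    return pmd.val & TOY_PMD_PRESENT;
}

/* only valid to call on a present PMD */
static int toy_is_huge_zero_pmd(pmd_t pmd)
{
    return (pmd.val & ~TOY_PMD_PRESENT) == TOY_HUGE_ZERO;
}

/* safe wrapper: a swap PMD (not present) is never treated as
 * mapping the huge zero page, even if its bits happen to match */
static int toy_zero_check(pmd_t pmd)
{
    if (!toy_pmd_present(pmd))
        return 0;   /* swap PMD: don't decode it as a page */
    return toy_is_huge_zero_pmd(pmd);
}
```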
a THP, add it into the swap cache, so that later the contents of the
huge swap cluster can be read into the THP.
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
the code.
Signed-off-by: "Huang, Ying"
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
---
mm/swapfile.c | 10 --
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 40
ode is moved to swap_free_cluster() to avoid
the downside.
Signed-off-by: "Huang, Ying"
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
---
mm/swapfile.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/mm/sw
To improve code readability, some swap-free-related functions are
refactored.
This patchset is based on 8/23 HEAD of mmotm tree.
Best Regards,
Huang, Ying
and the swap entry can be reclaimed
later eventually.
Signed-off-by: "Huang, Ying"
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
---
mm/swapfile.c | 57 +
1 file c
e information about what your use case is? Is it a
kernel driver? Then it is better to submit the patch together with its
user.
And I have a concern similar to Steven Rostedt's: if you are
the only user forever, it's not necessary to change the common code.
Best Regards,
Huang, Ying
Byungchul Park writes:
> On Tue, Jul 31, 2018 at 09:37:50AM +0800, Huang, Ying wrote:
>> Byungchul Park writes:
>>
>> > Hello folks,
>> >
>> > I'm careful in saying.. and curious about..
>> >
>> > In restrictive cases like only addti
r locks could be used to provide mutual exclusion between
- llist add, llist traverse
and
- llist delete
Is this your use case?
Best Regards,
Huang, Ying
7 when stressed.
>> >
>> >
>> > PAGES_BETWEEN_RESCHED               state    AVG     stddev
>> >      1       4 KiB                  idle    36.086    1.920
>> >     16      64 KiB                  idle    34.797    1.702
>> >     32     128 KiB                  idle    35.104    1.752
>> >     64     256 KiB                  idle    34.468    0.661
>> >    512    2048 KiB                  idle    36.427    0.946
>> >   2048    8192 KiB                  idle    34.988    2.406
>> > 262144 1048576 KiB                  idle    36.792    0.193
>> >  infin     512 GiB                  idle    38.817    0.238  [causes softlockup]
>> >      1       4 KiB                  stress  55.562    0.661
>> >     16      64 KiB                  stress  57.509    0.248
>> >     32     128 KiB                  stress  69.265    3.913
>> >     64     256 KiB                  stress  70.217    4.534
>> >    512    2048 KiB                  stress  68.474    1.708
>> >   2048    8192 KiB                  stress  70.806    1.068
>> > 262144 1048576 KiB                  stress  55.217    1.184
>> >  infin     512 GiB                  stress  55.062    0.291  [causes softlockup]
I think it may be good to separate the two optimizations into two
patches. This would make it easier to evaluate the benefit of each
optimization individually.
Best Regards,
Huang, Ying
. More difference comes from the ORC unwinder
segments: (1480 + 2220) - (1380 + 2070) = 250. If the frame pointer
unwinder is used, this costs nothing.
Signed-off-by: "Huang, Ying"
Reviewed-by: Daniel Jordan
Acked-by: Dave Hansen
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
=n.
Signed-off-by: "Huang, Ying"
Suggested-and-acked-by: Dave Hansen
Reviewed-by: Daniel Jordan
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dan Williams
---
mm/swapf
The part of __swap_entry_free() executed with the lock held is
separated into a new function, __swap_entry_free_locked(), because we
want to reuse that piece of code in some other places.
Just mechanical code refactoring; there is no functional change in
this function.
Signed-off-by: "Huang,
        text    data    bss     dec     hex filename
base   24215    2028    340   26583    67d7 mm/swapfile.o
head   24123    2004    340   26467    6763 mm/swapfile.o
Signed-off-by: "Huang, Ying"
Acked-by: Dave Hansen
Cc: Daniel Jordan
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
C
out by Daniel, it is better to use "swap_count(map[i])"
here, because it works for "map[i] == 0" case too.
And this makes the implementation more consistent between normal and
huge swap entries.
Signed-off-by: "Huang, Ying"
Suggested-and-reviewed-by: Daniel Jordan
it is a public
function with a stub implementation for CONFIG_THP_SWAP=n in swap.h.
Signed-off-by: "Huang, Ying"
Suggested-and-acked-by: Dave Hansen
Reviewed-by: Daniel Jordan
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Da
To improve the code readability.
Signed-off-by: "Huang, Ying"
Suggested-and-acked-by: Dave Hansen
Reviewed-by: Daniel Jordan
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dan Williams
---
mm/swapfile.c | 9 +++
-by: "Huang, Ying"
Suggested-and-acked-by: Dave Hansen
Reviewed-by: Daniel Jordan
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dan Williams
---
mm/swapfile.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff
via merging huge/normal
code path/functions if possible.
One concern is that this may cause the code size to grow when
!CONFIG_TRANSPARENT_HUGEPAGE. The data shows that most of the
refactoring causes only a slight code size increase.
Best Regards,
Huang, Ying
Christoph Hellwig writes:
> On Thu, Jul 19, 2018 at 04:48:35PM +0800, Huang Ying wrote:
>> +/*
>> + * Determine the locking method in use for this device. Return
>> + * swap_cluster_info if SSD-style cluster-based locking is in place.
>> + */
>> stati
E;
>> +else
>> +return false;
>
> Nitpick: no need for an else after a return:
>
> if (IS_ENABLED(CONFIG_THP_SWAP))
> return info->flags & CLUSTER_FLAG_HUGE;
> return false;
Sure. Will change this in next version.
Best Regards,
Huang, Ying
. More difference comes from the ORC unwinder
segments: (1480 + 2220) - (1380 + 2070) = 250. If the frame pointer
unwinder is used, this costs nothing.
Signed-off-by: "Huang, Ying"
Reviewed-by: Daniel Jordan
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh D
To improve the code readability.
Signed-off-by: "Huang, Ying"
Suggested-by: Dave Hansen
Reviewed-by: Daniel Jordan
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dan Williams
---
mm/swapfile.c | 6 ++
1 file
=n.
Signed-off-by: "Huang, Ying"
Suggested-by: Dave Hansen
Reviewed-by: Daniel Jordan
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dan Williams
---
mm/swapf