Wei Wang writes:
> Android device boot time benefits from a bigger readahead window set from
> init. This patch makes the readahead window a config option so that early
> boot can benefit from it as well.
Can you change the source code of init to call ioctl(BLKRASET) early?
Best Regards,
Huang, Ying
From: Huang Ying
The address argument passed to madvise_free_huge_pmd() may not be THP
aligned. But some THP operations, such as pmdp_invalidate(),
set_pmd_at(), and tlb_remove_pmd_tlb_entry(), need the address to be
THP aligned. Fix this by using a THP aligned address for these
functions in
From: Huang Ying
From commit 4b3ef9daa4fc ("mm/swap: split swap cache into 64MB
trunks") on, after swapoff, the address_space associated with the swap
device will be freed. So swap_address_space() users which touch the
address_space need some kind of mechanism to prevent the addre
"Huang, Ying" writes:
> From: Huang Ying
>
> From commit 4b3ef9daa4fc ("mm/swap: split swap cache into 64MB
> trunks") on, after swapoff, the address_space associated with the swap
> device will be freed. So page_mapping() users which may touch the
> add
From: Huang Ying
From commit 4b3ef9daa4fc ("mm/swap: split swap cache into 64MB
trunks") on, after swapoff, the address_space associated with the swap
device will be freed. So page_mapping() users which may touch the
address_space need some kind of mechanism to prevent the addre
Andrew Morton writes:
> On Fri, 2 Mar 2018 16:04:26 +0800 "Huang, Ying" wrote:
>
>> From: Huang Ying
>>
>> From commit 4b3ef9daa4fc ("mm/swap: split swap cache into 64MB
>> trunks") on, after swapoff, the address_space associated wi
From: Huang Ying
From commit 4b3ef9daa4fc ("mm/swap: split swap cache into 64MB
trunks") on, after swapoff, the address_space associated with the swap
device will be freed. So page_mapping() users which may touch the
address_space need some kind of mechanism to prevent the addre
have such a large impact.
The test is run under full load; this means close to or more than 100
processes allocate memory in parallel. According to Amdahl's law, the
performance of a parallel program is dominated by its serial part,
which in this case is the code protected by zone->lock. So small
changes to the code under zone->lock can cause large changes to the
overall score.
Best Regards,
Huang, Ying
From: Huang Ying
From commit 4b3ef9daa4fc ("mm/swap: split swap cache into 64MB
trunks") on, after swapoff, the address_space associated with the swap
device will be freed. So page_mapping() users which may touch the
address_space need some kind of mechanism to prevent the addre
Minchan Kim writes:
> On Mon, Feb 26, 2018 at 01:18:50PM +0800, Huang, Ying wrote:
>> Minchan Kim writes:
>>
>> > On Fri, Feb 23, 2018 at 04:02:27PM +0800, Huang, Ying wrote:
>> >> writes:
>> >> [snip]
>> >>
>> >> >
Minchan Kim writes:
> Hi Jan,
>
> On Mon, Feb 19, 2018 at 11:57:35AM +0100, Jan Kara wrote:
>> Hi Minchan,
>>
>> On Sun 18-02-18 18:22:45, Minchan Kim wrote:
>> > On Mon, Feb 12, 2018 at 04:12:27PM +0800, Huang, Ying wrote:
>> > > From: Huang Yi
Minchan Kim writes:
> On Fri, Feb 23, 2018 at 04:02:27PM +0800, Huang, Ying wrote:
>> writes:
>> [snip]
>>
>> > diff --git a/mm/swap_state.c b/mm/swap_state.c
>> > index 39ae7cfad90f..c56cce64b2c3 100644
>> > --- a/mm/swap_state.c
>> >
ble_vma_readahead' was not
> declared. Should it be static?
> mm/swap_state.c:742:13: warning: symbol 'swap_vma_readahead' was not
> declared. Should it be static?
>
> Signed-off-by: Colin Ian King
Acked-by: "Huang, Ying"
> ---
> mm/swap_state.
NO_COMPOUND)
> INC_CACHE_INFO(find_success);
> if (unlikely(PageTransCompound(page)))
> return page;
> - readahead = TestClearPageReadahead(page);
So we can only call it here, after checking whether the page is compound.
Best Regards,
Hu
From: Huang Ying
When swapin is performed, after getting the swap entry information
from the page table, the system will swap in the swap entry without any
lock held to prevent the swap device from being swapped off. This may
cause the following race,
CPU 1 CPU 2
On Wed, Feb 21, 2018 at 7:38 AM, Andrew Morton
wrote:
> On Sun, 18 Feb 2018 09:06:47 +0800 huang ying
> wrote:
>
>> >> >> +struct swap_info_struct *get_swap_device(swp_entry_t entry)
>> >> >> +{
>> >> >> + stru
On Sat, Feb 17, 2018 at 7:38 AM, Andrew Morton
wrote:
> On Wed, 14 Feb 2018 08:38:00 +0800 "Huang\, Ying"
> wrote:
>
>> Andrew Morton writes:
>>
>> > On Tue, 13 Feb 2018 09:42:20 +0800 "Huang, Ying"
>> > wrote:
>> >
>
Andrew Morton writes:
> On Tue, 13 Feb 2018 09:42:20 +0800 "Huang, Ying" wrote:
>
>> From: Huang Ying
>>
>> When the swapin is performed, after getting the swap entry information
>> from the page table, system will swap in the swap entry, without any
From: Huang Ying
When swapin is performed, after getting the swap entry information
from the page table, the system will swap in the swap entry without any
lock held to prevent the swap device from being swapped off. This may
cause the following race,
CPU 1 CPU 2
From: Huang Ying
When page_mapping() is called and the mapping is dereferenced in
page_evictable() through shrink_active_list(), it is possible for the
inode to be truncated and the embedded address space to be freed at
the same time. This may lead to the following race.
CPU1
er architectures follow similar logic. Would it be
better for page_mapping() here to return NULL for anonymous pages even
if they are in the swap cache? Of course, we would need to change the
function name. page_file_mapping() appears to be a good name, but that
has been used already. Any suggestion?
Is my understanding correct? Could you help me on this?
Best Regards,
Huang, Ying
From: Huang Ying
It was reported by Sergey Senozhatsky that if THP (Transparent Huge
Page) and frontswap (via zswap) are both enabled, when memory goes low
so that swap is triggered, segfaults and memory corruption will occur
in random user space applications as follows,
kernel: urxvt[338
Minchan Kim writes:
> Hi Huang,
>
> On Thu, Feb 08, 2018 at 11:27:50PM +0800, huang ying wrote:
>> On Wed, Feb 7, 2018 at 3:00 PM, Huang, Ying wrote:
>> > From: Huang Ying
>> >
>> > It was reported by Sergey Senozhatsky that if THP (Transparent Hug
On Wed, Feb 7, 2018 at 3:00 PM, Huang, Ying wrote:
> From: Huang Ying
>
> It was reported by Sergey Senozhatsky that if THP (Transparent Huge
> Page) and frontswap (via zswap) are both enabled, when memory goes low
> so that swap is triggered, segfault and memory corruption
On Thu, Feb 8, 2018 at 6:17 PM, Minchan Kim wrote:
> On Wed, Feb 07, 2018 at 03:00:35PM +0800, Huang, Ying wrote:
>> From: Huang Ying
>>
>> It was reported by Sergey Senozhatsky that if THP (Transparent Huge
>> Page) and frontswap (via zswap) are both enabled, when
Andrew Morton writes:
> On Wed, 7 Feb 2018 15:00:35 +0800 "Huang, Ying" wrote:
>
>> From: Huang Ying
>>
>> It was reported by Sergey Senozhatsky that if THP (Transparent Huge
>> Page) and frontswap (via zswap) are both enabled, when memory goes low
&g
From: Huang Ying
It was reported by Sergey Senozhatsky that if THP (Transparent Huge
Page) and frontswap (via zswap) are both enabled, when memory goes low
so that swap is triggered, segfaults and memory corruption will occur
in random user space applications as follows,
kernel: urxvt[338
Minchan Kim writes:
> On Tue, Feb 06, 2018 at 09:34:44PM +0800, huang ying wrote:
>> On Tue, Feb 6, 2018 at 5:02 PM, Minchan Kim wrote:
>> > On Tue, Feb 06, 2018 at 04:39:18PM +0800, Huang, Ying wrote:
>> >> Hi, Minchan,
>> >>
>>
On Tue, Feb 6, 2018 at 5:02 PM, Minchan Kim wrote:
> On Tue, Feb 06, 2018 at 04:39:18PM +0800, Huang, Ying wrote:
>> Hi, Minchan,
>>
>> Minchan Kim writes:
>>
>> > Hi Huang,
>> >
>> > On Tue, Feb 06, 2018 at 02:54:04PM +0800, Huang, Ying wrot
Hi, Minchan,
Minchan Kim writes:
> Hi Huang,
>
> On Tue, Feb 06, 2018 at 02:54:04PM +0800, Huang, Ying wrote:
>> From: Huang Ying
>>
>> It was reported by Sergey Senozhatsky that if THP (Transparent Huge
>> Page) and frontswap (via zswap) are both enabled, whe
From: Huang Ying
It was reported by Sergey Senozhatsky that if THP (Transparent Huge
Page) and frontswap (via zswap) are both enabled, when memory goes low
so that swap is triggered, segfaults and memory corruption will occur
in random user space applications as follows,
kernel: urxvt[338
Andrew Morton writes:
> On Mon, 5 Feb 2018 21:39:47 +0900 Sergey Senozhatsky
> wrote:
>
>> > -8<---
>> > From 4c52d531680f91572ebc6f4525a018e32a934ef0 Mon Sep 17 00:00:00 2001
>> > From: Huang Yi
Sergey Senozhatsky writes:
> Hi,
>
> On (02/04/18 22:21), huang ying wrote:
> [..]
>> >> After disabling zswap no crashes at all.
>> >>
>> >> /etc/systemd/swap.conf
>> >> zswap_enabled=1
>> >> zswap_compressor=lz4
Sergey Senozhatsky writes:
> Hi,
>
> On (02/04/18 22:21), huang ying wrote:
> [..]
>> >> After disabling zswap no crashes at all.
>> >>
>> >> /etc/systemd/swap.conf
>> >> zswap_enabled=1
>> >> zswap_compressor=lz4
73cf5e0d0a3] memcg, THP, swap: support
> move mem cgroup charge for THP swapped out
> git bisect good 3e14a57b2416b7c94189b95baffd673cf5e0d0a3
> # good: [d6810d730022016d9c0f389452b86b035dba1492] memcg, THP, swap: make
> mem_cgroup_swapout() support THP
> git bisect good d6810d
Mel Gorman writes:
> On Wed, Jan 03, 2018 at 08:42:15AM +0800, Huang, Ying wrote:
>> Mel Gorman writes:
>>
>> > On Tue, Jan 02, 2018 at 12:29:55PM +0100, Jan Kara wrote:
>> >> On Tue 02-01-18 10:21:03, Mel Gorman wrote:
>> >> > On Sat,
ing which is potentially already free.
>>
>
> Hmm, possible if unlikely.
>
> Before delete_from_page_cache, we called truncate_cleanup_page so the
> page is likely to be !PageDirty or PageWriteback which gets skipped by
> the only caller that checks the mapping in __isolate_lru_page. The race
> is tiny but it does exist. One way of closing it is to check the mapping
> under the page lock which will prevent races with truncation. The
> overhead is minimal as the calling context (compaction) is quite a heavy
> operation anyway.
>
I think another possible fix is to use call_rcu_sched() to free the
inode (and address_space). Because __isolate_lru_page() will be called
with the LRU spinlock held and IRQs disabled, the call_rcu_sched()
callbacks will wait until the LRU lock is released and IRQs are enabled.
Best Regards,
Huang, Ying
"Huang, Ying" writes:
> From: Huang Ying
>
> When the swapin is performed, after getting the swap entry information
> from the page table, system will swap in the swap entry, without any
> lock held to prevent the swap device from being swapoff. This may
> cause th
From: Huang Ying
When swapin is performed, after getting the swap entry information
from the page table, the system will swap in the swap entry without any
lock held to prevent the swap device from being swapped off. This may
cause the following race,
CPU 1 CPU 2
From: Huang Ying
When swapin is performed, after getting the swap entry information
from the page table, the system will swap in the swap entry without any
lock held to prevent the swap device from being swapped off. This may
cause the following race,
CPU 1 CPU 2
Vitaly Wool writes:
> 2017-12-22 14:57 GMT+01:00 Huang, Ying :
>
>> Vitaly Wool writes:
>>
>> > 2017-12-20 1:57 GMT+01:00 Huang, Ying :
>> >
>> >
>> >
>> >>
>> >> > Could you please elaborate how this would be i
Minchan Kim writes:
> On Fri, Dec 22, 2017 at 10:14:43PM +0800, Huang, Ying wrote:
>> Minchan Kim writes:
>>
>> > On Thu, Dec 21, 2017 at 03:48:56PM +0800, Huang, Ying wrote:
>> >> Minchan Kim writes:
>> >>
>> >> > On Wed, Dec
"Paul E. McKenney" writes:
> On Fri, Dec 22, 2017 at 10:14:43PM +0800, Huang, Ying wrote:
>> Minchan Kim writes:
>>
>> > On Thu, Dec 21, 2017 at 03:48:56PM +0800, Huang, Ying wrote:
>> >> Minchan Kim writes:
>> >>
>>
Minchan Kim writes:
> On Thu, Dec 21, 2017 at 03:48:56PM +0800, Huang, Ying wrote:
>> Minchan Kim writes:
>>
>> > On Wed, Dec 20, 2017 at 09:26:32AM +0800, Huang, Ying wrote:
>> >> From: Huang Ying
>> >>
>> >> When the swapin is pe
Vitaly Wool writes:
> 2017-12-20 1:57 GMT+01:00 Huang, Ying :
>
>
>
>>
>> > Could you please elaborate how this would be implemented "on top"?
>>
>> struct llist_node *my_del_first_exclusive(struct llist_head *head)
>> {
&g
From: Huang Ying
When swapin is performed, after getting the swap entry information
from the page table, the system will swap in the swap entry without any
lock held to prevent the swap device from being swapped off. This may
cause the following race,
CPU 1 CPU 2
Vitaly Wool writes:
> 2017-12-19 2:35 GMT+01:00 Huang, Ying :
>
>> Vitaly Wool writes:
>>
>> > It sometimes is necessary to be able to use llist in
>> > the following manner:
>> > if (node_unlisted(node))
>> >
"Paul E. McKenney" writes:
> On Tue, Dec 19, 2017 at 09:57:21AM +0800, Huang, Ying wrote:
>> "Paul E. McKenney" writes:
>>
>> > On Mon, Dec 18, 2017 at 03:41:41PM +0800, Huang, Ying wrote:
>> >> "Huang, Ying" writes:
>&
node->next, next, entry)) != next)
> + return false;
> + next = entry;
> + } while (cmpxchg(&head->first, entry, node) != entry);
> + return true;
> +}
> +EXPORT_SYMBOL_GPL(llist_add_exclusive);
I think this could be implemented on top of llist; why add it into
llist itself?
Best Regards,
Huang, Ying
"Huang, Ying" writes:
> From: Huang Ying
>
> When the swapin is performed, after getting the swap entry information
> from the page table, system will swap in the swap entry, without any
> lock held to prevent the swap device from being swapoff. This may
> cause th
From: Huang Ying
When swapin is performed, after getting the swap entry information
from the page table, the system will swap in the swap entry without any
lock held to prevent the swap device from being swapped off. This may
cause the following race,
CPU 1 CPU 2
From: Huang Ying
When swapin is performed, after getting the swap entry information
from the page table, the system will swap in the swap entry without any
lock held to prevent the swap device from being swapped off. This may
cause the following race,
CPU 1 CPU 2
will be notified of TLB flushes caused by all the ways that the kernel
changes the page tables, including vmalloc, kmap, etc.
Best Regards,
Huang, Ying
Minchan Kim writes:
> Hi Huang,
>
> Sorry for the late response. I'm in middle of long vacation.
>
> On Fri, Dec 08, 2017 at 08:32:16PM +0800, Huang, Ying wrote:
>> Minchan Kim writes:
>>
>> > On Fri, Dec 08, 2017 at 04:41:38PM +0800, Hua
"Paul E. McKenney" writes:
> On Tue, Dec 12, 2017 at 09:12:20AM +0800, Huang, Ying wrote:
>> Hi, Paul,
>>
>> "Paul E. McKenney" writes:
>>
>> > On Mon, Dec 11, 2017 at 01:30:03PM +0800, Huang, Ying wrote:
>> >> Andrew Mort
Hi, Paul,
"Paul E. McKenney" writes:
> On Mon, Dec 11, 2017 at 01:30:03PM +0800, Huang, Ying wrote:
>> Andrew Morton writes:
>>
>> > On Fri, 08 Dec 2017 16:41:38 +0800 "Huang\, Ying"
>> > wrote:
>> >
>> >> > W
Andrew Morton writes:
> On Fri, 08 Dec 2017 16:41:38 +0800 "Huang\, Ying"
> wrote:
>
>> > Why do we need srcu here? Is it enough with rcu like below?
>> >
>> > It might have a bug/room to be optimized about performance/naming.
>> > I ju
Minchan Kim writes:
> On Fri, Dec 08, 2017 at 04:41:38PM +0800, Huang, Ying wrote:
>> Minchan Kim writes:
>>
>> > On Fri, Dec 08, 2017 at 01:41:10PM +0800, Huang, Ying wrote:
>> >> Minchan Kim writes:
>> >>
>> >> > On Thu, Dec
Minchan Kim writes:
> On Fri, Dec 08, 2017 at 01:41:10PM +0800, Huang, Ying wrote:
>> Minchan Kim writes:
>>
>> > On Thu, Dec 07, 2017 at 04:29:37PM -0800, Andrew Morton wrote:
>> >> On Thu, 7 Dec 2017 09:14:26 +0800 "Huang, Ying"
>> &
From: Huang Ying
If THP migration is enabled, for a VMA handled by userfaultfd,
consider the following situation,
do_page_fault()
__do_huge_pmd_anonymous_page()
handle_userfault()
userfault_msg()
/* a huge page is allocated and mapped at fault address */
/* the huge page
From: Huang Ying
When swapin is performed, after getting the swap entry information
from the page table, the PTL (page table lock) will be released, then
the system will swap in the swap entry without any lock held to
prevent the swap device from being swapped off. This may cause the race
The resulting bug will cause the
> memory cgroups whose THPs were swapped out to become zombies on
> deletion.
Good catch! Thanks a lot for fixing!
Best Regards,
Huang, Ying
> Fixes: d6810d730022 ("memcg, THP, swap: make mem_cgroup_swapout() support
> THP")
> Signed-off
Andrea Arcangeli writes:
> Hello,
>
> On Sun, Nov 05, 2017 at 11:01:05AM +0800, huang ying wrote:
>> On Fri, Nov 3, 2017 at 11:00 PM, Zi Yan wrote:
>> > On 3 Nov 2017, at 3:52, Huang, Ying wrote:
>> >
>> >> From: Huang Ying
>> >>
>
On Fri, Nov 3, 2017 at 11:00 PM, Zi Yan wrote:
> On 3 Nov 2017, at 3:52, Huang, Ying wrote:
>
>> From: Huang Ying
>>
>> If THP migration is enabled, the following situation is possible,
>>
>> - A THP is mapped at source address
>> - Migration is started
From: Huang Ying
If THP migration is enabled, the following situation is possible,
- A THP is mapped at source address
- Migration is started to move the THP to another node
- Page fault occurs
- The PMD (migration entry) is copied to the destination address in mremap
That is, it is possible
From: Huang Ying
When a page fault occurs for a swap entry, the physical swap readahead
(not the VMA based swap readahead) may read ahead several swap entries
after the faulting swap entry. The readahead algorithm calculates some
of the swap entries to read ahead by increasing the offset of the
u need this? You saved copying one page from memory to memory
(COW) now, at the cost of reading a page from disk to memory later?
Best Regards,
Huang, Ying
> Signed-off-by: zhouxianrong
> ---
> mm/swapfile.c |9 +++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
&g
Minchan Kim writes:
> This patch makes do_swap_page() no longer need to be aware of two
> different swap readahead algorithms. Just unify the cluster-based and
> vma-based readahead function calls.
>
> Signed-off-by: Minchan Kim
> ---
> include/linux/swap.h | 17 -
> mm/memory.c | 11
Minchan Kim writes:
> Hi Huang,
>
> On Wed, Nov 01, 2017 at 01:41:00PM +0800, Huang, Ying wrote:
>> Hi, Minchan,
>>
>> Minchan Kim writes:
>>
>> > When I see recent change of swap readahead, I am very unhappy
>> > about current code structu
e)
> - put_page(page);
> + if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
> goto out;
> - }
The page table holding the PTE may be unmapped in pte_unmap_same(), so
is it safe for us to access the page table after this in d
Minchan Kim writes:
> Hi Huang,
>
> On Tue, Oct 31, 2017 at 01:32:32PM +0800, Huang, Ying wrote:
>> Hi, Minchan,
>>
>> Minchan Kim writes:
>>
>> > Hi Huang,
>> >
>> > On Fri, Oct 27, 2017 at 01:53:27PM +0800, Huang, Ying wrote:
>&
Hi, Minchan,
Minchan Kim writes:
> Hi Huang,
>
> On Fri, Oct 27, 2017 at 01:53:27PM +0800, Huang, Ying wrote:
>> From: Huang Ying
>>
>> When a page fault occurs for a swap entry, the physical swap readahead
>> (not the VMA base swap readahead) may readahead
From: Huang Ying
When a page fault occurs for a swap entry, the physical swap readahead
(not the VMA based swap readahead) may read ahead several swap entries
after the faulting swap entry. The readahead algorithm calculates some
of the swap entries to read ahead by increasing the offset of the
Minchan Kim writes:
> On Tue, Oct 24, 2017 at 10:47:00AM +0800, Huang, Ying wrote:
>> From: Ying Huang
>>
>> __swp_swapcount() is used in __read_swap_cache_async(), where an
>> invalid swap entry (offset > max) may be supplied during swap
>> readahead. Bu
Michal Hocko writes:
> On Tue 24-10-17 23:15:32, Huang, Ying wrote:
>> Hi, Michal,
>>
>> Michal Hocko writes:
>>
>> > On Tue 24-10-17 10:47:00, Huang, Ying wrote:
>> >> From: Ying Huang
>> >>
>> >> __swp_swapcount() i
Hi, Michal,
Michal Hocko writes:
> On Tue 24-10-17 10:47:00, Huang, Ying wrote:
>> From: Ying Huang
>>
>> __swp_swapcount() is used in __read_swap_cache_async(), where an
>> invalid swap entry (offset > max) may be supplied during swap
>> readahead. Bu
Tim Chen
Cc: Minchan Kim
Cc: Michal Hocko
Cc: # 4.11-4.13
Reported-by: Christian Kujau
Fixes: e8c26ab60598 ("mm/swap: skip readahead for unreferenced swap slots")
Signed-off-by: "Huang, Ying"
---
mm/swapfile.c | 42 --
1 file change
On Sat, Oct 21, 2017 at 9:07 AM, Christian Kujau wrote:
> On Fri, 20 Oct 2017, huang ying wrote:
>> > 4 May < Linux version 4.11.2-1-ARCH
>> > 4 Jun < Linux version 4.11.3-1-ARCH
>> > 7 Jul < Linux version 4.11.9-1-ARCH
>>
rsion 4.12.13-1-ARCH
> 158 Oct < Linux version 4.13.5-1-ARCH
So you have never seen this before 4.11, e.g. on 4.10? Which
operations trigger these error messages? Is it possible for you to
check whether the error exists for a normal swap device (not ZRAM)?
32bit or 64bit kernel do
From: Huang Ying
One page may store a set of entries of the
sis->swap_map (swap_info_struct->swap_map) in multiple swap clusters.
If some of the entries have sis->swap_map[offset] > SWAP_MAP_MAX,
multiple pages will be used to store the set of entries of the
sis->swap_map. An
From: Huang Ying
Now, when the page table is walked in the implementation of
/proc/<pid>/pagemap, pmd_soft_dirty() is used for both the PMD huge
page map and the PMD migration entries. That is wrong;
pmd_swp_soft_dirty() should be used for the PMD migration entries
instead, because the different page
Zi Yan writes:
> Huang, Ying wrote:
>> "Kirill A. Shutemov" writes:
>>
>>> On Tue, Oct 17, 2017 at 04:18:18PM +0800, Huang, Ying wrote:
>>>> From: Huang Ying
>>>>
>>>> Now, when the page table is walked in the implementa
Michal Hocko writes:
> On Tue 17-10-17 16:13:20, Huang, Ying wrote:
>> From: Huang Ying
>>
>> One page may store a set of entries of the
>> sis->swap_map (swap_info_struct->swap_map) in multiple swap clusters.
>> If some of the entries has sis->sw
From: Huang Ying
Now, when the page table is walked in the implementation of
/proc/<pid>/pagemap, pmd_soft_dirty() is used for both the PMD huge
page map and the PMD migration entries. That is wrong;
pmd_swp_soft_dirty() should be used for the PMD migration entries
instead, because the different page
From: Huang Ying
One page may store a set of entries of the
sis->swap_map (swap_info_struct->swap_map) in multiple swap clusters.
If some of the entries have sis->swap_map[offset] > SWAP_MAP_MAX,
multiple pages will be used to store the set of entries of the
sis->swap_map. An
Minchan Kim writes:
> On Wed, Oct 11, 2017 at 03:08:47PM +0800, Huang, Ying wrote:
>> From: Huang Ying
>>
>> When the VMA based swap readahead was introduced, a new knob
>>
>> /sys/kernel/mm/swap/vma_ra_max_order
>>
>> was added as the max wind
From: Huang Ying
When the VMA based swap readahead was introduced, a new knob
/sys/kernel/mm/swap/vma_ra_max_order
was added as the max window of VMA swap readahead. This was to make it
possible to use a different max window for VMA based readahead than for
the original physical readahead. But Minchan
Minchan Kim writes:
> On Tue, Oct 10, 2017 at 04:50:10PM +0800, Huang, Ying wrote:
>> Minchan Kim writes:
>>
>> > On Tue, Oct 10, 2017 at 02:08:55PM +0800, Huang, Ying wrote:
>> >> From: Huang Ying
>> >>
>> >> When the VMA based s
Minchan Kim writes:
> On Tue, Oct 10, 2017 at 02:08:55PM +0800, Huang, Ying wrote:
>> From: Huang Ying
>>
>> When the VMA based swap readahead was introduced, a new knob
>>
>> /sys/kernel/mm/swap/vma_ra_max_order
>>
>> was added as the max wind
From: Huang Ying
When the VMA based swap readahead was introduced, a new knob
/sys/kernel/mm/swap/vma_ra_max_order
was added as the max window of VMA swap readahead. This was to make it
possible to use a different max window for VMA based readahead than for
the original physical readahead. But Minchan
Minchan Kim writes:
> Hi Huang,
>
> Sorry for the late response. It was long national holiday.
>
> On Fri, Sep 29, 2017 at 04:51:17PM +0800, huang ying wrote:
>> On Wed, Sep 20, 2017 at 1:43 PM, Minchan Kim wrote:
>> > With fast swap storage, platform want to use s
On Fri, Sep 29, 2017 at 4:51 PM, huang ying
wrote:
> On Wed, Sep 20, 2017 at 1:43 PM, Minchan Kim wrote:
[snip]
>> diff --git a/mm/memory.c b/mm/memory.c
>> index ec4e15494901..163ab2062385 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@
On Tue, Oct 3, 2017 at 5:49 AM, Andrew Morton wrote:
> On Mon, 2 Oct 2017 08:45:40 -0700 Dave Hansen wrote:
>
>> On 09/27/2017 06:02 PM, Huang, Ying wrote:
>> > I still think there may be a performance regression for some users
>> > because of the change of the al
le, because of fork). With the swap cache, after swapping out
and swapping in, the page will still be shared by these processes.
But with your changes, it appears that there will be multiple pages
with the same contents mapped in multiple processes, even if the page
isn't written in these processes. So
ly ack it.
I still think there may be a performance regression for some users
because of the change of the algorithm and the knobs, and the
performance regression can be resolved by setting the new knob. But I
don't think there will be a functionality regression. Do you agree?
Best Regar
Michal Hocko writes:
> On Thu 21-09-17 09:33:10, Huang, Ying wrote:
>> From: Huang Ying
>>
>> This patch adds a new Kconfig option VMA_SWAP_READAHEAD and wraps VMA
>> based swap readahead code inside #ifdef CONFIG_VMA_SWAP_READAHEAD/#endif.
>> This is more f
"Byungchul Park / Senior Researcher / SW Platform (Research) AOT Team (byungchul.p...@lge.com)"
writes:
>> -Original Message-
>> From: Huang, Ying [mailto:ying.hu...@intel.com]
>> Sent: Tuesday, September 26, 2017 4:02 PM
>> To: Byungchul Park
>> Cc: pet...@infradead.org; mi...@
> true); \
> + (pos) = (n))
>
> /**
> * llist_empty - tests whether a lock-less list is empty
The original code follows the style of list_for_each_entry_safe(). The
parameters "pos" and "n" must be variables. Because the list_xxx family
of functions has worked well so far, I think we needn't change it either.
Best Regards,
Huang, Ying
Minchan Kim writes:
> On Mon, Sep 25, 2017 at 01:54:42PM +0800, Huang, Ying wrote:
>> Hi, Minchan,
>>
>> Minchan Kim writes:
>>
>> > Hi Huang,
>> >
>> > On Thu, Sep 21, 2017 at 09:33:10AM +0800, Huang, Ying wrote:
>> >> From
Hi, Minchan,
Minchan Kim writes:
> Hi Huang,
>
> On Thu, Sep 21, 2017 at 09:33:10AM +0800, Huang, Ying wrote:
>> From: Huang Ying
[snip]
>> diff --git a/mm/Kconfig b/mm/Kconfig
>> index 9c4b80c2..e62c8e2e34ef 100644
>> --- a/mm/Kconfig
>> +++ b/mm