From: zhouxianrong
The last partial object in the last subpage of a zspage should not be linked
into the allocation list. Otherwise it can trigger the explicit BUG_ON in
zs_map_object, although this happens only rarely.
Signed-off-by: zhouxianrong
---
 mm/zsmalloc.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 8d87e973a4f5..24dd8da0aa59 100644
--- a/mm/zsmalloc.c
+++ b/mm
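A minimal sketch of the constraint, with a hypothetical link_free_object()
helper (this is not the actual mm/zsmalloc.c change): when initializing the
free list, an object whose body would run past the end of the zspage's last
subpage must not be linked.

    /* Sketch: link objects of class_size bytes in the last subpage of a
     * zspage; a trailing partial object must stay unlinked, otherwise
     * zs_map_object() can hit its BUG_ON when mapping it. */
    unsigned int off = start_off;          /* first object offset in subpage */

    while (off + class_size <= PAGE_SIZE) {
        link_free_object(page, off);       /* hypothetical helper */
        off += class_size;
    }
    /* bytes in [off, PAGE_SIZE) belong to a partial object: not linked */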
Signed-off-by: zhouxianrong <zhouxianr...@tom.com>
---
 drivers/block/zram/zram_drv.c  |  3 +++
 include/linux/mm.h             |  1 +
 include/linux/page-flags.h     | 10 ++
 include/linux/swap.h           | 17 +
 include/trace/events/mmflags.h |  9 -
 mm/Kconfig                     | 12
 mm
originally.
On 2017/11/2 21:22, Michal Hocko wrote:
On Thu 02-11-17 20:35:19, zhouxianr...@huawei.com wrote:
From: zhouxianrong <zhouxianr...@huawei.com>
The purpose of this patch is that when a read swap fault
happens on a clean swap cache page whose swap count is equal
to one, try_to_free_swap can remove this page from
the swap cache and mark it dirty, so that if we later reclaim
this page we can page it out
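A sketch of the intended flow in the read-fault path (simplified; the helper
calls are the stock kernel ones, but their placement here is illustrative):

    /* read fault on a clean swap-cache page with swap count == 1 */
    if (trylock_page(page)) {
        if (try_to_free_swap(page))   /* drop the swap slot + swap cache */
            set_page_dirty(page);     /* later reclaim must page it out fresh */
        unlock_page(page);
    }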
From: zhouxianrong <zhouxianr...@huawei.com>
Originally reuse_swap_page required that the sum of the page's
mapcount and swapcount be less than or equal to one;
in that case we can reuse the page and avoid COW.
Now reuse_swap_page requires only that the page's mapcount
be less than or equal to one and that the page is not dirty
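The change to the reuse predicate, roughly (sketch; not the literal diff):

    /* before: every reference, mapped or swapped, must be ours */
    reuse = page_mapcount(page) + page_swapcount(page) <= 1;

    /* after: only the mapping must be exclusive, and the page clean */
    reuse = page_mapcount(page) <= 1 && !PageDirty(page);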
Every 2s I sampled /proc/buddyinfo during the whole test process;
the last ~90 samples were taken after the test was done.
Node 0, zone DMA
4706 2099838266 50 5 3 2 1 2 38
0395 1261211 57 6 1 0 0 0
I will do the test again; I will tell you the result in a few minutes.
On 2017/7/4 14:52, Michal Hocko wrote:
On Tue 04-07-17 09:21:00, zhouxianrong wrote:
The test was done as follows:
1. The environment is Android 7.0, the kernel is 4.1, and managed memory is 3.5GB.
There have been many changes
-17 20:02:16, zhouxianrong wrote:
[...]
From the above I think the result is better after applying the patch.
You haven't described your testing methodology, nor the workload that was
tested. As such this data is completely meaningless.
On 2017/7/3 15:48, Michal Hocko wrote:
On Fri 30-06-17 19:25:41, zhouxianr...@huawei.com wrote:
From: zhouxianrong <zhouxianr...@huawei.com>
When the buddy system is fragmented I find that there are still some pages
in an AFFA pattern: A is allocated, F is free, AF is a buddy pair for
order n, and FA is a buddy pair for order n as well. I want to compose the
FF as order n + 1 and align it to n other
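Why FF neighbors can fail to merge, in buddy arithmetic (illustrative; pfn
and n stand for a block's page frame number and order):

    unsigned long buddy_pfn    = pfn ^ (1UL << n);  /* the only mergeable partner */
    unsigned long neighbor_pfn = pfn + (1UL << n);  /* the physically next block */

    /* In AFFA, the two middle blocks (FF) are both free and adjacent, but
     * each one's real buddy is the allocated A beside it, so the allocator
     * never composes them into an order n + 1 block; doing so would need a
     * block that is not aligned to 1 << (n + 1). */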
From: z00281421
Signed-off-by: z00281421
---
 include/linux/gfp.h         |  8 +-
 include/linux/mmzone.h      |  2 +
 include/linux/page-flags.h  |  9 ++
 include/linux/thread_info.h |  5 +-
 mm/compaction.c             | 17
 mm/internal.h               |  7 ++
*** Remember to use Documentation/SubmitChecklist when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------
From: zhouxianrong <zhouxianr...@huawei.com>
Subject: compaction: add def_blk_aops migrate function for memory
From: zhouxianrong <zhouxianr...@huawei.com>
The reasons for doing this are the following.
1. Large file read/write operations at order 0 can fragment
memory rapidly.
2. When a filesystem does not supply a migratepage callback,
the kernel falls back to the default function fallback_migrate_page
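What such a hook-up would look like (sketch; buffer_migrate_page is the stock
helper for buffer-head-backed pages in kernels of that era, and the other
fields shown already exist in fs/block_dev.c):

    /* fs/block_dev.c (sketch) */
    static const struct address_space_operations def_blk_aops = {
        .readpage    = blkdev_readpage,
        .writepage   = blkdev_writepage,
        .migratepage = buffer_migrate_page, /* avoid fallback_migrate_page */
    };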
On 2017/3/1 18:41, Jisheng Zhang wrote:
Add Chen, Catalin
On Thu, 16 Feb 2017 09:11:29 +0800 zhouxianrong wrote:
On 2017/2/15 15:10, Ard Biesheuvel wrote:
On 15 February 2017 at 01:44, zhouxianrong wrote:
On 2017/2/14 17:03, Ard Biesheuvel wrote:
On 14 February 2017 at 06:53, wrote
On 2017/2/15 15:10, Ard Biesheuvel wrote:
On 15 February 2017 at 01:44, zhouxianrong <zhouxianr...@huawei.com> wrote:
On 2017/2/14 17:03, Ard Biesheuvel wrote:
On 14 February 2017 at 06:53, <zhouxianr...@huawei.com> wrote:
From: zhouxianrong <zhouxianr...@huawei.com>
Just like freeing the no-map area's memmap (gaps of memblock.memory),
we could free the reserved
On 2017/2/14 17:03, Ard Biesheuvel wrote:
On 14 February 2017 at 06:53, <zhouxianr...@huawei.com> wrote:
From: zhouxianrong <zhouxianr...@huawei.com>
Just like freeing the no-map area's memmap (gaps of memblock.memory),
we could free the reserved area's memmap (areas of memblock.reserved)
as well, but only when the user of the reserved area indicates that we can
From: zhouxianrong <zhouxianr...@huawei.com>
Just like freeing the no-map area's memmap (gaps of memblock.memory),
we could free the reserved area's memmap (areas of memblock.reserved)
as well, but only when the user of the reserved area indicates in the
driver that we can do this; that is, when the user of the reserved area
knows how to use the reserved area who could
From: zhouxianrong <zhouxianr...@huawei.com>
Just like freeing the no-map area's memmap (gaps of memblock.memory),
we could free the reserved area's memmap (areas of memblock.reserved)
as well, but only when the user of the reserved area indicates in the
driver that we can do this; that is, when the user of the reserved area
knows how to use the reserved area
From: zhouxianrong <zhouxianr...@huawei.com>
Just like freeing the no-map area's memmap, we could free the reserved
area's memmap as well, but only when the user of the reserved area
indicates in the dts or driver that we can do this; that is, when the
user of the reserved area knows how to use it and could not memblock_free
or free_reserved_xxx
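The shape of such a loop, under the assumption of a hypothetical opt-in flag
on the region (free_memmap here mirrors the static helper arm already uses
for gaps in memblock.memory; the flag name is invented for illustration):

    struct memblock_region *reg;

    for_each_memblock(reserved, reg) {
        if (!(reg->flags & MEMBLOCK_MEMMAP_FREE))  /* hypothetical flag */
            continue;
        /* free the struct pages covering this opted-in reserved area */
        free_memmap(PFN_UP(reg->base), PFN_DOWN(reg->base + reg->size));
    }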
On 2017/2/7 10:54, Minchan Kim wrote:
On Tue, Feb 07, 2017 at 10:20:57AM +0800, zhouxianrong wrote:
< snip >
3. the below should be modified.
static inline bool zram_meta_get(struct zram *zram)
@@ -495,11 +553,17 @@ static void zram_meta_free(struct zram_meta *meta, u64
di
On 2017/2/6 22:49, Matthew Wilcox wrote:
[adding linux-arch to see if anyone there wants to do an optimised
version of memfill for their CPU]
On Mon, Feb 06, 2017 at 12:16:44AM +0900, Minchan Kim wrote:
+static inline void zram_fill_page(char *ptr, unsigned long len,
+
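The memfill under discussion is essentially a long-granular memset; a generic
C fallback could look like this (sketch, not a proposed kernel interface):

    /* fill count words with the same long-sized pattern */
    static void memfill(unsigned long *dst, size_t count, unsigned long value)
    {
        while (count--)
            *dst++ = value;
    }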
On 2017/2/7 7:48, Minchan Kim wrote:
Hi
On Mon, Feb 06, 2017 at 09:28:18AM +0800, zhouxianrong wrote:
On 2017/2/5 22:21, Minchan Kim wrote:
Hi zhouxianrong,
On Fri, Feb 03, 2017 at 04:42:27PM +0800, zhouxianr...@huawei.com wrote:
From: zhouxianrong <zhouxianr...@huawei.com>
Test result as listed below:
zero
On 2017/2/5 22:21, Minchan Kim wrote:
Hi zhouxianrong,
On Fri, Feb 03, 2017 at 04:42:27PM +0800, zhouxianr...@huawei.com wrote:
From: zhouxianrong <zhouxianr...@huawei.com>
Test result as listed below:
zero pattern_char pattern_short pattern_int pattern_long total (unit)
162989 144543534 23516 2769 3294399 (page)
right, thanks.
On 2017/2/3 23:33, Matthew Wilcox wrote:
On Fri, Feb 03, 2017 at 04:42:27PM +0800, zhouxianr...@huawei.com wrote:
+static inline void zram_fill_page_partial(char *ptr, unsigned int size,
+ unsigned long value)
+{
+ int i;
+ unsigned long *page;
+
+
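The quoted snippet is cut off; a plausible completion under the same
signature (illustrative only, assuming ptr is long-aligned and size is a
multiple of sizeof(long)):

    static inline void zram_fill_page_partial(char *ptr, unsigned int size,
                                              unsigned long value)
    {
        int i;
        unsigned long *page = (unsigned long *)ptr;

        if (likely(value == 0)) {
            memset(ptr, 0, size);   /* common case: zero fill */
            return;
        }
        for (i = 0; i < size / sizeof(unsigned long); i++)
            page[i] = value;        /* replicate the long-sized pattern */
    }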
From: zhouxianrong <zhouxianr...@huawei.com>
Test result as listed below:
zero pattern_char pattern_short pattern_int pattern_long total (unit)
162989 144543534 23516 2769 3294399 (page)
statistics for the result:
zero pattern_char pattern_short pattern_int pattern_long
ng dedup ratio
if memset is really fast rather than open-coding the loop. So in future,
if we can prove a bigger pattern increases the dedup ratio a lot, then
we could consider extending it at the cost of making that path slow.
In summary, zhouxianrong, please test the patterns as Joonsoo asked.
So if there are not
A_lw, A_lw, A_lw, lsl #16
orr A_l, A_l, A_l, lsl #32
-----Original Message-----
From: Matthew Wilcox [mailto:wi...@infradead.org]
Sent: 23 January 2017 14:26
To: zhouxianrong
Cc: Sergey Senozhatsky; linux...@kvack.org; linux-kernel@vger.kernel.org;
a...@linux-foundation.org; sergey.senozhat
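The quoted arm64 fragment is broadcasting a fill pattern up to register
width; the same expansion in C (64-bit, starting from a byte pattern c) is:

    unsigned long v = (unsigned char)c; /* 0x00000000000000ab */
    v |= v << 8;                        /* 0x000000000000abab */
    v |= v << 16;                       /* 0x00000000abababab */
    v |= v << 32;                       /* 0xabababababababab */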
articles have been said
the decrement loop is faster, as zhouxianrong mentioned, although I don't
think it makes more than a marginal difference.
Joonsoo, why do you think incrementing is faster?
zhouxianrong, why do you think decrementing loops cause a cache problem?
I'm okay either way. Just want to know why you
, Jan 23, 2017 at 10:55:23AM +0900, Minchan Kim wrote:
From: zhouxianrong <zhouxianr...@huawei.com>
The idea is that, without doing more calculations, we extend zero pages
to same-element pages for zram; a zero page is the special case of a
same-element page whose element is zero.
1. the test is done under android 7.0
2. startup too many
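Detection is one pass comparing every long-sized word against the first,
close to what zram's page_same_filled ended up doing (sketch):

    static bool page_same_filled(void *ptr, unsigned long *element)
    {
        unsigned long *page = ptr;
        unsigned long val = page[0];
        unsigned int pos;

        for (pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++)
            if (page[pos] != val)
                return false;

        *element = val; /* the repeated element; a zero page has val == 0 */
        return true;
    }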
hey Joonsoo:
I will test and report on the same-element type later.
On 2017/1/23 10:58, Joonsoo Kim wrote:
Hello,
On Sun, Jan 22, 2017 at 10:58:38AM +0800, zhouxianrong wrote:
1. memset just sets an int value but I want to set a long value.
Sorry for the late review.
Do we really need
1. memset just sets an int value but I want to set a long value.
2. using clear_page rather than memset MAYBE because on arm64
it is a 64-byte operation.
6.6.4. Data Cache Zero
The ARMv8-A architecture introduces a Data Cache Zero by Virtual Address (DC
ZVA) instruction. This enables
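DC ZVA zeroes a whole block (commonly 64 bytes, advertised via DCZID_EL0) per
instruction, which is why clear_page can beat a word-at-a-time memset on
arm64; a minimal use looks like this (sketch):

    static inline void zero_dczva_block(void *addr)
    {
        /* zero one Data Cache Zero block at addr (block-aligned) */
        asm volatile("dc zva, %0" : : "r" (addr) : "memory");
    }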
From: zhouxianrong <zhouxianr...@huawei.com>
The idea is that, without doing more calculations, we extend zero pages
to same-element pages for zram; a zero page is the special case of a
same-element page whose element is zero.
1. the test is done under android 7.0
2. startup too many applications circularly
3. sample the zero pages
From: z00281421
Signed-off-by: z00281421
---
 drivers/block/zram/zram_drv.c | 67 ++---
 drivers/block/zram/zram_drv.h | 11 ---
 2 files changed, 49 insertions(+), 29 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c
From: z00281421
The bdi flusher should be throttled depending only on its
own bdi and should be decoupled from the others.
Separate PGDAT_WRITEBACK into PGDAT_ANON_WRITEBACK and
PGDAT_FILE_WRITEBACK so that the anon LRU can still be scanned while
reclaim is throttled on file WRITEBACK.
I think the above may not be right.
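The proposed split, roughly (sketch of the flag change only; PGDAT_WRITEBACK
is the existing flag being divided):

    enum pgdat_flags {
        PGDAT_ANON_WRITEBACK, /* many anon pages found under writeback */
        PGDAT_FILE_WRITEBACK, /* many file pages found under writeback */
        /* ...remaining flags unchanged... */
    };

    /* reclaim would then throttle on PGDAT_FILE_WRITEBACK without
     * blocking scans of the anon LRU, and vice versa */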
Call trace:
[] __switch_to+0x80/0x98
[] __schedule+0x314/0x854
[] schedule+0x48/0xa4
[] schedule_timeout+0x158/0x2c8
[] io_schedule_timeout+0xbc/0x14c
[] wait_iff_congested+0x1d4/0x1ec
[] shrink_inactive_list+0x530/0x760
[] shrink_lruvec+0x534/0x76c
[] shrink_zone+0x88/0x1b8
[]
From: z00281421
A bdi flusher may enter the page-alloc slow path due to writepage and kmalloc.
In that case the flusher, as a direct reclaimer, should not be throttled here,
because it cannot reclaim clean file pages or anonymous pages
at the next moment; furthermore the writeback rate of dirty pages
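The check being argued for would sit in the reclaim throttle path, along
these lines (PF_LESS_THROTTLE is one existing marker for such tasks; the
placement is illustrative, not the posted patch):

    /* direct-reclaim throttling path (sketch) */
    if (current->flags & PF_LESS_THROTTLE) /* e.g. a bdi flusher in writepage */
        return false;                      /* don't throttle; let it progress */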
hey Hellwig:
cond_resched_lock is a good choice. I mixed cond_resched_lock with
batching to balance realtime against performance, and I am resubmitting
this patch.
On 2016/10/16 0:55, Christoph Hellwig wrote:
On Sat, Oct 15, 2016 at 10:12:48PM +0800, zhouxianr...@huawei.com wrote:
From:
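Mixing cond_resched_lock with batching, roughly (sketch against the old
purge_lock design in mm/vmalloc.c; the batch size is illustrative):

    struct vmap_area *va, *n_va;
    int batch = 0;

    spin_lock(&purge_lock);
    list_for_each_entry_safe(va, n_va, &valist, purge_list) {
        __free_vmap_area(va);
        if (++batch >= 64) {                /* batch size is illustrative */
            batch = 0;
            cond_resched_lock(&purge_lock); /* drop lock briefly if contended */
        }
    }
    spin_unlock(&purge_lock);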
From: z00281421
Signed-off-by: z00281421
---
 mm/vmalloc.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 91f44e7..e9c9c04 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -661,13 +661,18 @@ static void
From: z00281421
I think there is no need to place the __free_vmap_area loop inside purge_lock;
__free_vmap_area can be non-atomic with respect to flushing the TLB, as long
as it runs after the TLB flush, and the whole __free_vmap_area loop can also
be non-atomic. If so we could improve realtime
because the
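The reordering being proposed, in outline (sketch; in the code of that era
__free_vmap_area itself requires vmap_area_lock, not purge_lock):

    struct vmap_area *va, *n_va;
    unsigned long start, end;

    spin_lock(&purge_lock);
    /* ...collect lazily-freed areas into valist, compute [start, end)... */
    flush_tlb_kernel_range(start, end);
    spin_unlock(&purge_lock);

    /* freeing can run after the flush without holding purge_lock */
    spin_lock(&vmap_area_lock);
    list_for_each_entry_safe(va, n_va, &valist, purge_list)
        __free_vmap_area(va);
    spin_unlock(&vmap_area_lock);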
From: z00281421
Set the anon_vma of the first rmap_item of a KSM page to the page's
anon_vma rather than the vma's anon_vma, so that we can look up all the
forked vmas of the kpage via the reverse map. Thus we can try_to_unmap a
KSM page completely, reclaim or migrate the KSM page successfully, and
need not merge other
.
Is this a problem? What do you think about it?
On 2016/6/22 9:39, Hugh Dickins wrote:
On Tue, 21 Jun 2016, zhouxianrong wrote:
hey hugh:
could you please give me some suggestions about this?
I must ask you to be more patient: everyone would like me to be
quicker, but I cannot; and this does
hey hugh:
could you please give me some suggestions about this?
On 2016/6/15 9:56, zhouxianr...@huawei.com wrote:
From: z00281421
A page with a higher mapcount as the kpage reduces the total number of
replacements compared with a lower-mapcount one when ksmd later scans
and replaces among the forked pages.
Signed-off-by:
From: z00281421
A page with a higher mapcount as the kpage reduces the total number of
replacements compared with a lower-mapcount one when ksmd later scans
and replaces among the forked pages.
Signed-off-by: z00281421
---
 mm/ksm.c | 8
 1 file changed, 8 insertions(+)
diff --git a/mm/ksm.c b/mm/ksm.c
index
From: z00281421
A page with a higher mapcount as the kpage reduces the total number of
replacements compared with a lower-mapcount one when ksmd later scans
and replaces among the forked pages.
Signed-off-by: z00281421
---
 mm/ksm.c | 15 +++
 1 file changed, 15 insertions(+)
diff --git a/mm/ksm.c b/mm/ksm.c
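The selection rule amounts to preferring the more-mapped page when two pages
merge (sketch; the real decision lives in ksmd's merge path):

    /* pick the page whose replacement touches fewer ptes overall */
    struct page *kpage =
        page_mapcount(page1) >= page_mapcount(page2) ? page1 : page2;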