[PATCH] zsmalloc: fix linking bug in init_zspage

2018-08-09 Thread zhouxianrong
From: zhouxianrong The last partial object in the last subpage of a zspage should not be linked into the allocation list. Otherwise it could trigger a BUG_ON in zs_map_object, though this happened rarely. Signed-off-by: zhouxianrong --- mm/zsmalloc.c | 2 ++ 1 file changed, 2 insertions

[PATCH] zsmalloc: fix linking bug in init_zspage

2018-08-09 Thread zhouxianrong
The last partial object in last subpage of zspage should not be linked in allocation list. Signed-off-by: zhouxianrong --- mm/zsmalloc.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 8d87e973a4f5..24dd8da0aa59 100644 --- a/mm/zsmalloc.c +++ b/mm

[PATCH] mm: Reset zero swap page to empty_zero_page for reading swap fault.

2018-02-10 Thread zhouxianrong
-by: zhouxianrong <zhouxianr...@tom.com> --- drivers/block/zram/zram_drv.c | 3 +++ include/linux/mm.h | 1 + include/linux/page-flags.h | 10 ++ include/linux/swap.h | 17 + include/trace/events/mmflags.h | 9 - mm/K

Re: [PATCH] mm: try to free swap only for reading swap fault

2017-11-02 Thread zhouxianrong
originally. On 2017/11/2 21:22, Michal Hocko wrote: On Thu 02-11-17 20:35:19, zhouxianr...@huawei.com wrote: From: zhouxianrong <zhouxianr...@huawei.com> the purpose of this patch is that when a reading swap fault happens on a clean swap cache page whose swap count is equal

[PATCH] mm: try to free swap only for reading swap fault

2017-11-02 Thread zhouxianrong
From: zhouxianrong <zhouxianr...@huawei.com> the purpose of this patch is that when a reading swap fault happens on a clean swap cache page whose swap count is equal to one, then try_to_free_swap could remove this page from swap cache and mark this page dirty. so if later we reclaimed this page then we could pageout

[PATCH] mm: extend reuse_swap_page range as much as possible

2017-11-01 Thread zhouxianrong
From: zhouxianrong <zhouxianr...@huawei.com> originally reuse_swap_page requires that the sum of the page's mapcount and swapcount is less than or equal to one. in this case we can reuse this page and avoid COW currently. now reuse_swap_page requires only that the page's mapcount is less than or equal to one and the page is not dirty

Re: [PATCH mm] introduce reverse buddy concept to reduce buddy fragment

2017-07-04 Thread zhouxianrong
every 2s i sample /proc/buddyinfo in the whole test process. the last about 90 samples were sampled after the test was done. Node 0, zone DMA 4706 2099838266 50 5 3 2 1 2 38 0395 1261211 57 6 1 0 0 0

Re: [PATCH mm] introduce reverse buddy concept to reduce buddy fragment

2017-07-04 Thread zhouxianrong
i will do the test again. after some minutes i will tell you the result. On 2017/7/4 14:52, Michal Hocko wrote: On Tue 04-07-17 09:21:00, zhouxianrong wrote: the test was done as follows: 1. the environment is android 7.0 and kernel is 4.1 and managed memory is 3.5GB There have been many changes

Re: [PATCH mm] introduce reverse buddy concept to reduce buddy fragment

2017-07-03 Thread zhouxianrong
-17 20:02:16, zhouxianrong wrote: [...] from above i think after applying the patch the result is better. You haven't described your testing methodology, nor the workload that was tested. As such this data is completely meaningless.

Re: [PATCH mm] introduce reverse buddy concept to reduce buddy fragment

2017-07-03 Thread zhouxianrong
On 2017/7/3 15:48, Michal Hocko wrote: On Fri 30-06-17 19:25:41, zhouxianr...@huawei.com wrote: From: zhouxianrong <zhouxianr...@huawei.com> when buddy is under fragment i find that still there are some pages just like AFFA mode. A is allocated, F is free, AF is buddy pair for order n, FA is buddy pair for order n

[PATCH mm] introduce reverse buddy concept to reduce buddy fragment

2017-06-30 Thread zhouxianrong
From: zhouxianrong <zhouxianr...@huawei.com> when buddy is under fragment i find that still there are some pages just like AFFA mode. A is allocated, F is free, AF is buddy pair for order n, FA is buddy pair for order n as well. I want to compose the FF as order n + 1 and align to n other

[PATCH mm] introduce reverse buddy concept to reduce buddy fragment

2017-06-30 Thread zhouxianrong
From: z00281421 Signed-off-by: z00281421 --- include/linux/gfp.h |8 +- include/linux/mmzone.h |2 + include/linux/page-flags.h |9 ++ include/linux/thread_info.h |5 +- mm/compaction.c |

Re: + compaction-add-def_blk_aops-migrate-function-for-memory-compaction.patch added to -mm tree

2017-03-09 Thread zhouxianrong
* Remember to use Documentation/SubmitChecklist when testing your code *** The -mm tree is included into linux-next and is updated there every 3-4 working days ------ From: zhouxianrong <zhouxianr...@huawei.com> Subject: compaction: add def_blk_aops migrate function for memory compaction

[PATCH] compaction: add def_blk_aops migrate function for memory compaction

2017-03-07 Thread zhouxianrong
From: zhouxianrong <zhouxianr...@huawei.com> the reason for why to do this is based on below factors. 1. large file read/write operations with order 0 can fragmentize memory rapidly. 2. when a special filesystem does not supply migratepage callback, kernel would fallback to default function fallback_migrate_page

Re: [PATCH] mm: free reserved area's memmap if possiable

2017-03-01 Thread zhouxianrong
On 2017/3/1 18:41, Jisheng Zhang wrote: Add Chen, Catalin On Thu, 16 Feb 2017 09:11:29 +0800 zhouxianrong wrote: On 2017/2/15 15:10, Ard Biesheuvel wrote: On 15 February 2017 at 01:44, zhouxianrong wrote: On 2017/2/14 17:03, Ard Biesheuvel wrote: On 14 February 2017 at 06:53

Re: [PATCH] mm: free reserved area's memmap if possiable

2017-02-15 Thread zhouxianrong
On 2017/2/15 15:10, Ard Biesheuvel wrote: On 15 February 2017 at 01:44, zhouxianrong <zhouxianr...@huawei.com> wrote: On 2017/2/14 17:03, Ard Biesheuvel wrote: On 14 February 2017 at 06:53, <zhouxianr...@huawei.com> wrote: From: zhouxianrong <zhouxianr...@huawei.com>

Re: [PATCH] mm: free reserved area's memmap if possiable

2017-02-14 Thread zhouxianrong
On 2017/2/14 17:03, Ard Biesheuvel wrote: On 14 February 2017 at 06:53, <zhouxianr...@huawei.com> wrote: From: zhouxianrong <zhouxianr...@huawei.com> just like freeing no-map area's memmap (gaps of memblock.memory) we could free reserved area's memmap (areas of memblock.reserved) as well only when user of reserved area indicate that we can

Re: [PATCH] mm: free reserved area's memmap if possiable

2017-02-13 Thread zhouxianrong
: zhouxianrong <zhouxianr...@huawei.com> just like freeing no-map area's memmap (gaps of memblock.memory) we could free reserved area's memmap (areas of memblock.reserved) as well only when user of reserved area indicate that we can do this in drivers. that is, user of reserved area know how

[PATCH] mm: free reserved area's memmap if possiable

2017-02-13 Thread zhouxianrong
From: zhouxianrong <zhouxianr...@huawei.com> just like freeing no-map area's memmap (gaps of memblock.memory) we could free reserved area's memmap (areas of memblock.reserved) as well only when user of reserved area indicate that we can do this in drivers. that is, user of reserved area know how to use the reserved area

[PATCH] mm: free reserved area's memmap if possiable

2017-02-13 Thread zhouxianrong
From: zhouxianrong <zhouxianr...@huawei.com> just like freeing no-map area's memmap we could free reserved area's memmap as well only when user of reserved area indicate that we can do this in dts or drivers. that is, user of reserved area know how to use the reserved area who could not memblock_free or free_reserved_xxx

Re: [PATCH] mm: extend zero pages to same element pages for zram

2017-02-06 Thread zhouxianrong
On 2017/2/7 10:54, Minchan Kim wrote: On Tue, Feb 07, 2017 at 10:20:57AM +0800, zhouxianrong wrote: < snip > 3. the below should be modified. static inline bool zram_meta_get(struct zram *zram) @@ -495,11 +553,17 @@ static void zram_meta_free(struct zram_meta *meta, u64 di

Re: memfill

2017-02-06 Thread zhouxianrong
On 2017/2/6 22:49, Matthew Wilcox wrote: [adding linux-arch to see if anyone there wants to do an optimised version of memfill for their CPU] On Mon, Feb 06, 2017 at 12:16:44AM +0900, Minchan Kim wrote: +static inline void zram_fill_page(char *ptr, unsigned long len, +

Re: [PATCH] mm: extend zero pages to same element pages for zram

2017-02-06 Thread zhouxianrong
On 2017/2/7 7:48, Minchan Kim wrote: Hi On Mon, Feb 06, 2017 at 09:28:18AM +0800, zhouxianrong wrote: On 2017/2/5 22:21, Minchan Kim wrote: Hi zhouxianrong, On Fri, Feb 03, 2017 at 04:42:27PM +0800, zhouxianr...@huawei.com wrote: From: zhouxianrong <zhouxianr...@huawei.com> test

Re: [PATCH] mm: extend zero pages to same element pages for zram

2017-02-05 Thread zhouxianrong
On 2017/2/5 22:21, Minchan Kim wrote: Hi zhouxianrong, On Fri, Feb 03, 2017 at 04:42:27PM +0800, zhouxianr...@huawei.com wrote: From: zhouxianrong <zhouxianr...@huawei.com> test result as listed below: zero pattern_char pattern_short pattern_int pattern_long total (unit) 162989

Re: [PATCH] mm: extend zero pages to same element pages for zram

2017-02-03 Thread zhouxianrong
right, thanks. On 2017/2/3 23:33, Matthew Wilcox wrote: On Fri, Feb 03, 2017 at 04:42:27PM +0800, zhouxianr...@huawei.com wrote: +static inline void zram_fill_page_partial(char *ptr, unsigned int size, + unsigned long value) +{ + int i; + unsigned long *page; + +

[PATCH] mm: extend zero pages to same element pages for zram

2017-02-03 Thread zhouxianrong
From: zhouxianrong <zhouxianr...@huawei.com> test result as listed below: zero pattern_char pattern_short pattern_int pattern_long total (unit) 162989 144543534 23516 2769 3294399 (page) statistics for the result: zero pattern_char pattern_short pattern_int pattern_long

Re: [PATCH] mm: extend zero pages to same element pages for zram

2017-01-24 Thread zhouxianrong
ng dedup ratio if memset is really fast rather than open-looping. So in future, if we can prove a bigger pattern can increase dedup ratio a lot, then we could consider extending it at the cost of making that path slow. In summary, zhouxianrong, please test pattern as Joonsoo asked. So if there are not

Reply: [PATCH] mm: extend zero pages to same element pages for zram

2017-01-22 Thread zhouxianrong
A_lw, A_lw, A_lw, lsl #16 orr A_l, A_l, A_l, lsl #32 -Original Message- From: Matthew Wilcox [mailto:wi...@infradead.org] Sent: 23 January 2017 14:26 To: zhouxianrong Cc: Sergey Senozhatsky; linux...@kvack.org; linux-kernel@vger.kernel.org; a...@linux-foundation.org; sergey.senozhat

Re: [PATCH v3] zram: extend zero pages to same element pages

2017-01-22 Thread zhouxianrong
articles have been said the decrement loop is faster, as zhouxianrong mentioned, although I don't think it makes marginal difference. Joonsoo, why do you think incremental is faster? zhouxianrong, why do you think decrement loops makes cache problem? I'm okay either way. Just want to know why you

Re: [PATCH v3] zram: extend zero pages to same element pages

2017-01-22 Thread zhouxianrong
, Jan 23, 2017 at 10:55:23AM +0900, Minchan Kim wrote: From: zhouxianrong <zhouxianr...@huawei.com> the idea is that without doing more calculations we extend zero pages to same element pages for zram. zero page is special case of same element page with zero element. 1. the test is done under android 7.0 2. startup too many

Re: [PATCH] mm: extend zero pages to same element pages for zram

2017-01-22 Thread zhouxianrong
hey Joonsoo: i would test and give the same element type later. On 2017/1/23 10:58, Joonsoo Kim wrote: Hello, On Sun, Jan 22, 2017 at 10:58:38AM +0800, zhouxianrong wrote: 1. memset is just set a int value but i want to set a long value. Sorry for late review. Do we really need

Re: [PATCH] mm: extend zero pages to same element pages for zram

2017-01-21 Thread zhouxianrong
1. memset just sets an int value but i want to set a long value. 2. using clear_page rather than memset MAYBE due to in arm64 arch it is a 64-bytes operation. 6.6.4. Data Cache Zero The ARMv8-A architecture introduces a Data Cache Zero by Virtual Address (DC ZVA) instruction. This enables

[PATCH] mm: extend zero pages to same element pages for zram

2017-01-13 Thread zhouxianrong
From: zhouxianrong <zhouxianr...@huawei.com> the idea is that without doing more calculations we extend zero pages to same element pages for zram. zero page is special case of same element page with zero element. 1. the test is done under android 7.0 2. startup too many applications circularly 3. sample the zero pages

[PATCH] mm: extend zero pages to same element pages for zram

2017-01-06 Thread zhouxianrong
From: zhouxianrong <zhouxianr...@huawei.com> the idea is that without doing more calculations we extend zero pages to same element pages for zram. zero page is special case of same element page with zero element. 1. the test is done under android 7.0 2. startup too many applications circularly 3. sample the zero pages

[PATCH zram] extend zero pages to same element pages

2017-01-03 Thread zhouxianrong
From: z00281421 Signed-off-by: z00281421 --- drivers/block/zram/zram_drv.c | 67 ++--- drivers/block/zram/zram_drv.h | 11 --- 2 files changed, 49 insertions(+), 29 deletions(-) diff

[PATCH] bdi flusher should not be throttled here when it fall into buddy slow path

2016-10-20 Thread zhouxianrong
From: z00281421 The bdi flusher should be throttled only depends on own bdi and is decoupled with others. separate PGDAT_WRITEBACK into PGDAT_ANON_WRITEBACK and PGDAT_FILE_WRITEBACK avoid scanning anon lru and it is ok then throttled on file WRITEBACK. i think above may be not right.

Re: [PATCH] bdi flusher should not be throttled here when it fall into buddy slow path

2016-10-18 Thread zhouxianrong
Call trace: [] __switch_to+0x80/0x98 [] __schedule+0x314/0x854 [] schedule+0x48/0xa4 [] schedule_timeout+0x158/0x2c8 [] io_schedule_timeout+0xbc/0x14c [] wait_iff_congested+0x1d4/0x1ec [] shrink_inactive_list+0x530/0x760 [] shrink_lruvec+0x534/0x76c [] shrink_zone+0x88/0x1b8 []

[PATCH] bdi flusher should not be throttled here when it fall into buddy slow path

2016-10-18 Thread zhouxianrong
From: z00281421 bdi flusher may enter page alloc slow path due to writepage and kmalloc. in that case the flusher as a direct reclaimer should not be throttled here because it can not reclaim clean file pages or anonymous pages for next moment; furthermore writeback rate of dirty pages

Re: [PATCH vmalloc] reduce purge_lock range and hold time of

2016-10-17 Thread zhouxianrong
hey Hellwig: cond_resched_lock is a good choice. i mixed cond_resched_lock and batching to balance realtime and performance and resubmitted this patch. On 2016/10/16 0:55, Christoph Hellwig wrote: On Sat, Oct 15, 2016 at 10:12:48PM +0800, zhouxianr...@huawei.com wrote: From:

[PATCH vmalloc] reduce purge_lock range and hold time of vmap_area_lock

2016-10-17 Thread zhouxianrong
From: z00281421 Signed-off-by: z00281421 --- mm/vmalloc.c |9 +++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 91f44e7..e9c9c04 100644 --- a/mm/vmalloc.c +++

[PATCH vmalloc] reduce purge_lock range and hold time of

2016-10-15 Thread zhouxianrong
From: z00281421 i think there is no need to place the __free_vmap_area loop inside purge_lock; __free_vmap_area could be non-atomic operations with respect to flushing tlb but must be done after the tlb flush. and the whole __free_vmap_area loop also could be non-atomic operations. if so we could improve realtime because the

[PATCH] ksm: set anon_vma of first rmap_item of ksm page to page's anon_vma other than vma's anon_vma

2016-06-23 Thread zhouxianrong
From: z00281421 set the anon_vma of the first rmap_item of a ksm page to the page's anon_vma rather than the vma's anon_vma so that we can look up all the forked vmas of the kpage via the reverse map. thus we can try_to_unmap the ksm page completely and reclaim or migrate the ksm page

Re: [PATCH v2] more mapcount page as kpage could reduce total replacement times than fewer mapcount one in probability.

2016-06-22 Thread zhouxianrong
. is this a problem ? do you think about this ? On 2016/6/22 9:39, Hugh Dickins wrote: On Tue, 21 Jun 2016, zhouxianrong wrote: hey hugh: could you please give me some suggestion about this ? I must ask you to be more patient: everyone would like me to be quicker, but I cannot; and this does

Re: [PATCH v2] more mapcount page as kpage could reduce total replacement times than fewer mapcount one in probability.

2016-06-20 Thread zhouxianrong
hey hugh: could you please give me some suggestion about this ? On 2016/6/15 9:56, zhouxianr...@huawei.com wrote: From: z00281421 more mapcount page as kpage could reduce total replacement times than fewer mapcount one when ksmd scan and replace among

[PATCH v2] more mapcount page as kpage could reduce total replacement times than fewer mapcount one in probability.

2016-06-14 Thread zhouxianrong
From: z00281421 choosing the page with the higher mapcount as kpage could reduce total replacement times compared with a lower-mapcount one when ksmd scans and replaces among forked pages later. Signed-off-by: z00281421 --- mm/ksm.c |8 1 file

[PATCH] more mapcount page as kpage could reduce total replacement times than fewer mapcount one in probability.

2016-06-14 Thread zhouxianrong
From: z00281421 choosing the page with the higher mapcount as kpage could reduce total replacement times compared with a lower-mapcount one when ksmd scans and replaces among forked pages later. Signed-off-by: z00281421 --- mm/ksm.c | 15 +++ 1 file
