[PATCH] exit: avoid undefined behaviour when calling wait4

2017-06-12 Thread zhongjiang
x1cb/0x1e0 [518871.543999] [] ? SyS_waitid+0x220/0x220 [518871.549661] [] ? __audit_syscall_entry+0x1f7/0x2a0 [518871.556278] [] system_call_fastpath+0x16/0x1b The patch avoids the UBSAN warning by excluding the overflowing case. Signed-off-by: zhongjiang --- kernel/exit.c | 4 1 file changed, 4 insertion
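
(Context, not part of the posted patch: the UBSAN report is about negating a negative pid argument, and negating INT_MIN is undefined behaviour in C. A minimal sketch of the kind of guard such a fix needs, using a hypothetical helper name:)

#include <limits.h>
#include <stdbool.h>

/* Illustrative only: reject the one value whose negation would overflow. */
static bool pgid_from_upid(int upid, int *pgid)
{
        if (upid >= -1)
                return false;   /* not the "negative pid means process group" case */
        if (upid == INT_MIN)
                return false;   /* -INT_MIN does not fit in an int: undefined behaviour */
        *pgid = -upid;          /* safe: upid is in [INT_MIN + 1, -2] */
        return true;
}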

[PATCH] mm: correct the comment when reclaimed pages exceed the scanned pages

2017-06-07 Thread zhongjiang
ned-off-by: zhongjiang --- mm/vmpressure.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/mm/vmpressure.c b/mm/vmpressure.c index 6063581..0e91ba3 100644 --- a/mm/vmpressure.c +++ b/mm/vmpressure.c @@ -116,8 +116,9 @@ static enum vmpressure_levels vmpressure_calc_leve

[PATCH] Revert "mm: vmpressure: fix sending wrong events on underflow"

2017-06-06 Thread zhongjiang
This reverts commit e1587a4945408faa58d0485002c110eb2454740c. When a THP LRU page is reclaimed, the THP is split into normal pages and the loop runs again. The number of reclaimed pages should not be bigger than nr_scan, because each loop iteration increases the nr_scan counter. Signed-off-by: zhongjiang --- mm/vmpressure.c | 10
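
(For context: the level calculation derives pressure from a scanned/reclaimed ratio roughly like the simplified sketch below; it is not the exact mm/vmpressure.c code.)

/* Simplified sketch of a vmpressure-style level calculation. */
static unsigned long pressure_percent(unsigned long scanned, unsigned long reclaimed)
{
        unsigned long scale = scanned + reclaimed;
        unsigned long pressure;

        /*
         * The guard the reverted commit had added: without it, reclaimed >
         * scanned would make the subtraction below wrap around on unsigned
         * types.  The revert argues that case cannot occur, because nr_scan
         * grows on every loop iteration.
         */
        if (scanned == 0 || reclaimed >= scanned)
                return 0;

        pressure = scale - (reclaimed * scale / scanned);
        return pressure * 100 / scale;
}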

[PATCH v3] signal: Avoid undefined behaviour in kill_something_info

2017-06-05 Thread zhongjiang
l+0xe/0x10 [ 304.803859] [] system_call_fastpath+0x16/0x1b The patch adds a particular case to avoid the UBSAN detection. Signed-off-by: zhongjiang --- kernel/signal.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/kernel/signal.c b/kernel/signal.c index ca92bcf..1c3fd9a 100644 --- a/kernel/sign

[PATCH v2] signal: Avoid undefined behaviour in kill_something_info

2017-06-05 Thread zhongjiang
l+0xe/0x10 [ 304.803859] [] system_call_fastpath+0x16/0x1b The patch adds a particular case to avoid the UBSAN detection. Signed-off-by: zhongjiang --- kernel/signal.c | 6 ++ 1 file changed, 6 insertions(+) diff --git a/kernel/signal.c b/kernel/signal.c index ca92bcf..63148f7 100644 --- a/kernel/sign

[PATCH] signal: Avoid undefined behaviour in kill_something_info

2017-06-05 Thread zhongjiang
l+0xe/0x10 [ 304.803859] [] system_call_fastpath+0x16/0x1b The patch assigns the particular pid to INT_MAX to avoid the overflow issue. Signed-off-by: zhongjiang --- kernel/signal.c | 8 ++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/kernel/signal.c b/kernel/signal.c ind
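
(To make the overflow concrete: kill(2) treats pid < -1 as "signal the process group -pid", and -pid overflows when pid == INT_MIN. A hedged sketch of the clamping approach the changelog describes, using a hypothetical helper, not the posted patch:)

#include <limits.h>

/* Hypothetical helper: map a kill() pid argument to a process-group id. */
static int pid_to_pgrp(int pid)
{
        /*
         * pid == INT_MIN cannot be negated without signed overflow.  Clamping
         * it to INT_MAX keeps the value in range; no such process group
         * exists, so the lookup simply fails later.
         */
        if (pid == INT_MIN)
                return INT_MAX;
        return -pid;
}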

[RESEND PATCH] x86/mem: fix the offset overflow when reading/writing mem

2017-04-27 Thread zhongjiang
From: zhong jiang Recently, I found the following issue, which results in a panic. [ 168.739152] mmap1: Corrupted page table at address 7f3e6275a002 [ 168.745039] PGD 61f4a1067 [ 168.745040] PUD 61ab19067 [ 168.747730] PMD 61fb8b067 [ 168.750418] PTE 80001225 [ 168.753109] [ 16

[PATCH] mm: do not export ioremap_page_range symbol for external modules

2017-01-22 Thread zhongjiang
From: zhong jiang Recently, I found that ioremap_page_range had been abused. Improper address mapping is an issue that can result in a crash, so remove the exported symbol. It can be replaced by ioremap_cache or other symbols. Signed-off-by: zhong jiang --- lib/ioremap.c | 1 - 1 file changed,

[RESEND PATCH 0/2] fix some trivial bugs involving the contiguous bit

2016-12-14 Thread zhongjiang
From: zhong jiang Hi, I sent the following patches last week, but they did not receive any reply. The patches are simple but reasonable, and I hope they can be merged into the next version. So, if anyone has any objection, please let me know. Thanks, zhongjiang zhong jiang (2): arm64: change

[RESEND PATCH 1/2] arm64: change from CONT_PMD_SHIFT to CONT_PTE_SHIFT

2016-12-14 Thread zhongjiang
From: zhong jiang I think that CONT_PTE_SHIFT is more reasonable even though they have the same value, and the patch makes no functional change. Signed-off-by: zhong jiang --- arch/arm64/mm/hugetlbpage.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/mm/hugetlbpage.c b

[RESEND PATCH 2/2] arm64: make WANT_HUGE_PMD_SHARE depend on HUGETLB_PAGE

2016-12-14 Thread zhongjiang
From: zhong jiang When HUGETLB_PAGE is disabled, WANT_HUGE_PMD_SHARE contains functions that should not be used; therefore, we add the dependency. Signed-off-by: zhong jiang --- arch/arm64/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 969

[RFC PATCH] arm64: make WANT_HUGE_PMD_SHARE depend on HUGETLB_PAGE

2016-12-10 Thread zhongjiang
From: zhong jiang When HUGETLB_PAGE is disabled, WANT_HUGE_PMD_SHARE contains functions that should not be used; therefore, we add the dependency. Signed-off-by: zhong jiang --- arch/arm64/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 969

[RFC PATCH] arm64: change from CONT_PMD_SHIFT to CONT_PTE_SHIFT

2016-12-09 Thread zhongjiang
From: zhong jiang I think that CONT_PTE_SHIFT is more reasonable even though they have the same value, and the patch makes no functional change. Signed-off-by: zhong jiang --- arch/arm64/mm/hugetlbpage.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/mm/hugetlbpage.c b

[PATCH v2] kexec: add cond_resched into kimage_alloc_crash_control_pages

2016-12-07 Thread zhongjiang
From: zhong jiang A soft lockup occurs when I run trinity on the kexec_load syscall; the corresponding stack trace is as follows. [ 237.235937] BUG: soft lockup - CPU#6 stuck for 22s! [trinity-c6:13859] [ 237.242699] Kernel panic - not syncing: softlockup: hung tasks [ 237.248573] CPU:

[PATCH] kexec: add cond_resched into kimage_alloc_crash_control_pages

2016-12-07 Thread zhongjiang
From: zhong jiang A soft lockup occurs when I run trinity on the kexec_load syscall; the corresponding stack trace is as follows. [ 237.235937] BUG: soft lockup - CPU#6 stuck for 22s! [trinity-c6:13859] [ 237.242699] Kernel panic - not syncing: softlockup: hung tasks [ 237.248573] CPU:
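
(The lockup comes from a long page-allocation loop that never yields the CPU; the fix described is to drop a cond_resched() into that loop. A stripped-down sketch with a hypothetical allocator, not the real kimage_alloc_crash_control_pages, which also has to keep pages inside the crashkernel region:)

#include <linux/sched.h>        /* cond_resched() */
#include <linux/errno.h>
#include <linux/types.h>

static bool alloc_one_control_page(void);       /* hypothetical, stands in for the real logic */

static int alloc_many_control_pages(unsigned long nr_pages)
{
        unsigned long i;

        for (i = 0; i < nr_pages; i++) {
                if (!alloc_one_control_page())
                        return -ENOMEM;
                cond_resched();         /* let other tasks run; avoids the soft lockup */
        }
        return 0;
}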

[RFC PATCH] hugetlbfs: fix hugetlbfs failing to mount

2016-10-28 Thread zhongjiang
From: zhong jiang Since commit 3e89e1c5ea84 ("hugetlb: make mm and fs code explicitly non-modular") was brought into mainline, mounting hugetlbfs results in the following issue: mount: unknown filesystem type 'hugetlbfs' because that patch removed the MODULE_ALIAS_FS, when we mount the fs ty

[PATCH] net: avoid uninitialized variable

2016-10-26 Thread zhongjiang
From: zhong jiang When I compile the newest kernel, I hit the following error with -Werror=maybe-uninitialized. net/core/flow_dissector.c: In function '__skb_flow_dissect': include/uapi/linux/swab.h:100:46: error: 'vlan' may be used uninitialized in this function [-Werror=maybe-uninitialized] net/c
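
(The warning pattern is generic: a variable is assigned on only some control-flow paths and then read unconditionally, so -Wmaybe-uninitialized cannot prove every path assigns it. A tiny userspace reproduction and the usual fix of initializing at declaration; this is illustrative only and unrelated to the actual flow_dissector code:)

#include <stdio.h>

static int classify(int proto, int has_tag)
{
        int tag = 0;    /* the fix: give the variable a defined value up front */

        if (has_tag)
                tag = proto >> 16;      /* only some paths assign it */

        /* Without the initializer above, this read trips -Wmaybe-uninitialized. */
        return tag & 0xffff;
}

int main(void)
{
        printf("%d\n", classify(0x12340042, 1));
        return 0;
}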

[PATCH] z3fold: limit first_num to the actual range of possible buddy indexes

2016-10-18 Thread zhongjiang
From: zhong jiang At present, tying the first_num size to NCHUNKS_ORDER is confusing; the number of chunks is completely unrelated to the number of buddies. The patch limits first_num to the actual range of possible buddy indexes, which is more reasonable and obvious, with no functional change.

[PATCH] z3fold: remove the unnecessary limit in z3fold_compact_page

2016-10-14 Thread zhongjiang
From: zhong jiang z3fold page compaction has nothing to do with last_chunks; even if last_chunks is not free, compaction will proceed. The patch just removes the limit, with no functional change. Signed-off-by: zhong jiang --- mm/z3fold.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) di

[PATCH v2] z3fold: fix the potential encode bug in encode_handle

2016-10-12 Thread zhongjiang
From: zhong jiang At present, zhdr->first_num plus bud can exceed BUDDY_MASK in encode_handle, which leads the caller handle_to_buddy to return a wrong value. The patch fixes the issue by changing BUDDY_MASK to PAGE_MASK, making it consistent with handle_to_z3fold_header. At the sa

[PATCH] z3fold: fix the potential encode bug in encode_handle

2016-10-12 Thread zhongjiang
From: zhong jiang At present, zhdr->first_num plus bud can exceed BUDDY_MASK in encode_handle, which leads the caller handle_to_buddy to return a wrong value. The patch fixes the issue by changing BUDDY_MASK to PAGE_MASK, making it consistent with handle_to_z3fold_header. At the sa
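
(A toy model of the handle packing involved, to show why the encode and decode sides must reduce the buddy index with the same mask; the constants and names below are illustrative and do not match the real mm/z3fold.c:)

#include <stdio.h>

#define TOY_PAGE_MASK   (~0xfffUL)      /* header is page aligned */
#define TOY_BUDDY_MASK  0x3UL           /* 4 possible buddies: 0..3 */

static unsigned long toy_encode(unsigned long hdr, unsigned int first_num, unsigned int bud)
{
        /* Both sides must reduce (first_num + bud) with the same mask ... */
        return hdr | ((first_num + bud) & TOY_BUDDY_MASK);
}

static unsigned int toy_handle_to_buddy(unsigned long handle, unsigned int first_num)
{
        /* ... otherwise this subtraction recovers the wrong index. */
        return (handle - first_num) & TOY_BUDDY_MASK;
}

static unsigned long toy_handle_to_header(unsigned long handle)
{
        return handle & TOY_PAGE_MASK;
}

int main(void)
{
        unsigned long h = toy_encode(0x1000, 3, 2);

        printf("header %#lx buddy %u\n", toy_handle_to_header(h), toy_handle_to_buddy(h, 3));
        return 0;
}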

[PATCH] mm,numa: boot cpu should be bound to node0 when node_off is enabled

2016-08-18 Thread zhongjiang
start_kernel+0x1a0/0x414 The patch fixes it by falling back to node 0; therefore, the cpu will be bound to the node correctly. Signed-off-by: zhongjiang --- arch/arm64/mm/numa.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c index 4dcd7d6..1f8f5da

[PATCH] mm: fix the incorrect hugepages count

2016-08-07 Thread zhongjiang
From: zhong jiang When memory hotplug is enabled, free hugepages are freed if a movable node goes offline; therefore, /proc/sys/vm/nr_hugepages will be incorrect. The patch fixes it by reducing max_huge_pages when the node goes offline. Signed-off-by: zhong jiang --- mm/hugetlb.c | 1 + 1 file changed,
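
(The bookkeeping issue: dissolving a free huge page during node offline removes it from the pool without shrinking the configured maximum, so the value reported by /proc/sys/vm/nr_hugepages no longer matches reality. A hedged sketch of the counter adjustment using a toy structure, not the actual mm/hugetlb.c:)

/* Toy, simplified hstate-like bookkeeping. */
struct toy_hstate {
        unsigned long free_huge_pages;
        unsigned long nr_huge_pages;
        unsigned long max_huge_pages;   /* what the nr_hugepages sysctl reports */
};

static void toy_dissolve_free_huge_page(struct toy_hstate *h)
{
        h->free_huge_pages--;
        h->nr_huge_pages--;
        /*
         * Also shrink the configured maximum: the page is gone for good once
         * its node goes offline, so the sysctl value must reflect that.
         */
        h->max_huge_pages--;
}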

[PATCH] mm: optimize find_zone_movable_pfns_for_nodes to avoid unnecessary loop.

2016-08-05 Thread zhongjiang
From: zhong jiang When required_kernelcore decreases to zero, we should exit the loop promptly, because scanning the remaining nodes wastes time. Signed-off-by: zhong jiang --- mm/page_alloc.c | 10 +++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/mm/page_alloc.c b/m
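
(This is the usual "stop scanning once the remaining budget hits zero" pattern; a minimal standalone sketch with a hypothetical node loop, not the real find_zone_movable_pfns_for_nodes:)

/*
 * Hand out "required" pages across nodes, stopping as soon as the remaining
 * requirement reaches zero instead of visiting every node.  The caller is
 * assumed to have zeroed node_taken[] beforehand.
 */
static void spread_kernelcore(const unsigned long *node_pages, unsigned long *node_taken,
                              int nr_nodes, unsigned long required)
{
        int nid;

        for (nid = 0; nid < nr_nodes; nid++) {
                unsigned long take = node_pages[nid];

                if (take > required)
                        take = required;
                node_taken[nid] = take;
                required -= take;

                if (!required)
                        break;  /* nothing left to place: skip the remaining nodes */
        }
}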

[PATCH] fs: fix a bug when new_insert_key is not initialized

2016-07-29 Thread zhongjiang
From: zhong jiang When compiling the kernel code, I hit the following warning. fs/reiserfs/ibalance.c:1156:2: warning: ‘new_insert_key’ may be used uninitialized in this function. memcpy(new_insert_key_addr, &new_insert_key, KEY_SIZE); The patch fixes it by checking new_insert_ptr; if new_inser
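
(The shape of the fix described, guarding the copy on the pointer that signals whether a key was actually produced, looks roughly like the following; the names echo the warning text, but this is a hedged sketch, not the actual reiserfs change:)

#include <string.h>

struct toy_key { unsigned char bytes[16]; };
#define TOY_KEY_SIZE sizeof(struct toy_key)

static void publish_key(struct toy_key *new_insert_key_addr,
                        const struct toy_key *new_insert_key,
                        const void *new_insert_ptr)
{
        /* The key is only meaningful when a node was actually inserted. */
        if (new_insert_ptr)
                memcpy(new_insert_key_addr, new_insert_key, TOY_KEY_SIZE);
}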

[PATCH] fs: wipe off the compiler warning

2016-07-29 Thread zhongjiang
From: zhong jiang When compiling the kernel code, I hit the following warning. fs/reiserfs/ibalance.c:1156:2: warning: ‘new_insert_key’ may be used uninitialized in this function. memcpy(new_insert_key_addr, &new_insert_key, KEY_SIZE); ^ The patch just fixes it to avoid the warning. Signed-off-by:

[PATCH 1/2] kexec: remove unnecessary unusable_pages

2016-07-10 Thread zhongjiang
From: zhong jiang In general, kexec allocates pages from the buddy system, so they cannot exceed the physical addresses present in the system. The patch just removes this code; no functional change. Signed-off-by: zhong jiang --- include/linux/kexec.h | 1 - kernel/kexec_core.c | 13 - 2 files changed

[PATCH 2/2] kexec: add a pmd huge entry condition during page table setup

2016-07-10 Thread zhongjiang
From: zhong jiang When the image is loaded into the kernel, we need to set up page tables for it, and all valid pfns also get a new mapping. It will set up a pmd huge entry if pud_present is true. The code segment that relocate_kernel points to can fall within a pmd huge entry in init_transition_pgtable. therefore, w

[PATCH] mm/huge_memory: fix a memory leak due to a race

2016-06-21 Thread zhongjiang
From: zhong jiang Under heavy memory pressure, I ran some test cases. As a result, I found that a THP is not freed; this is detected by check_mm(). BUG: Bad rss-counter state mm:8827edb7 idx:1 val:512 Consider the following race: CPU0 CPU1 __handle_mm_f

[PATCH] mm: update the comment in __isolate_free_page

2016-06-17 Thread zhongjiang
From: zhong jiang We need to ensure the code is consistent with the comment; otherwise, newcomers will find it hard to understand. Signed-off-by: zhong jiang --- mm/page_alloc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 6903b69..3842400 100644

[PATCH v2] mm: fix pmd page accounting for the process

2016-06-17 Thread zhongjiang
From: zhong jiang huge_pmd_share accounts the number of pmds incorrectly when it races with a parallel pud instantiation. vma_interval_tree_foreach will increase the counter, but it then has to recheck the pud with the pte lock held, and the back-off path should drop the increment. The previous code w
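
(A hedged sketch of the accounting rule the changelog describes: the pmd count should only go up once the shared table is actually installed, i.e. after rechecking the pud under the lock, and the back-off path must not keep the increment. The types and names are illustrative, not the real mm/hugetlb.c:)

#include <linux/spinlock.h>

/* Illustrative stand-ins; the real code uses mm_struct, pud_t, page refcounts, etc. */
struct toy_pud  { void *table; };
struct toy_mm   { spinlock_t page_table_lock; unsigned long nr_pmds; };

static void install_shared_pmd(struct toy_mm *mm, struct toy_pud *pud, void *shared_table)
{
        spin_lock(&mm->page_table_lock);
        if (!pud->table) {
                pud->table = shared_table;      /* we won the race: install and account */
                mm->nr_pmds++;
        }
        /*
         * Otherwise another task populated the pud first: back off without
         * keeping the increment (the real code also drops its reference on
         * the shared page table page here).
         */
        spin_unlock(&mm->page_table_lock);
}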

[PATCH] mm: fix pmd page accounting for the process

2016-06-17 Thread zhongjiang
From: zhong jiang When a process acquires a pmd table shared by another process, we increase the accounting for the current process. Otherwise, a race means another task has already set the pud entry, so there is no need to increase it. Signed-off-by: zhong jiang --- mm/hugetlb.c | 2 +- 1 file changed, 1 insertion

[PATCH] mm: fix pmd page accounting for the process

2016-06-16 Thread zhongjiang
From: zhong jiang When a process acquires a pmd table shared by another process, we increase the accounting for the current process. Otherwise, a race means another task has already set the pud entry, so there is no need to increase it. Signed-off-by: zhong jiang --- mm/hugetlb.c | 5 ++--- 1 file changed, 2 inser

[PATCH] arm64: fix stack size when KASAN is enabled

2015-12-31 Thread zhongjiang
From: zhong jiang In general, each process has 16KB of stack space to use, but the stack needs extra space to store the redzone when KASAN is enabled. The patch fixes the above issue. Signed-off-by: zhong jiang --- arch/arm64/include/asm/thread_info.h | 15 +-- 1 file changed, 13 insertions(+), 2 d
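
(The idea is simply that KASAN's redzones eat into the fixed per-task stack, so the stack size has to grow when KASAN is compiled in. A hedged sketch of that kind of Kconfig-dependent sizing with illustrative constants, not the actual arch/arm64 header:)

/* Illustrative only: bump the stack order when KASAN instrumentation is on. */
#ifdef CONFIG_KASAN
#define TOY_THREAD_SHIFT        15      /* 32 KB stacks, leaving room for redzones */
#else
#define TOY_THREAD_SHIFT        14      /* the usual 16 KB stacks */
#endif

#define TOY_THREAD_SIZE         (1UL << TOY_THREAD_SHIFT)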

[PATCH] arm64: add a function to show the different types of pagetable

2015-12-05 Thread zhongjiang
there is large page splitting and merging. Large pages significantly reduce TLB misses and improve system performance. Signed-off-by: zhongjiang --- arch/arm64/include/asm/pgtable-types.h | 19 + arch/arm64/mm/mmu.c| 12 +++ arch/arm64

[PATCH] arm64: add a function to show the different types of pagetable

2015-12-04 Thread zhongjiang
there is large page splitting and merging. Large pages significantly reduce TLB misses and improve system performance. Signed-off-by: zhongjiang --- arch/arm64/include/asm/pgtable-types.h | 19 + arch/arm64/mm/mmu.c| 12 +++ arch/arm64

[PATCH] arm64: calculate the number of various page types to show

2015-11-25 Thread zhongjiang
This patch adds an interface to show the number of 4KB or 64KB pages, aiming to provide statistics on the number of different page types. Signed-off-by: zhongjiang --- arch/arm64/include/asm/pgtable-types.h | 24 arch/arm64/mm/mmu.c| 28