x1cb/0x1e0
[518871.543999] [] ? SyS_waitid+0x220/0x220
[518871.549661] [] ? __audit_syscall_entry+0x1f7/0x2a0
[518871.556278] [] system_call_fastpath+0x16/0x1b
The patch fixes this by excluding the overflow case to avoid the UBSAN warning.
Signed-off-by: zhongjiang
---
kernel/exit.c | 4
1 file changed, 4 insertion
Signed-off-by: zhongjiang
---
mm/vmpressure.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/vmpressure.c b/mm/vmpressure.c
index 6063581..0e91ba3 100644
--- a/mm/vmpressure.c
+++ b/mm/vmpressure.c
@@ -116,8 +116,9 @@ static enum vmpressure_levels vmpressure_calc_leve
This reverts commit e1587a4945408faa58d0485002c110eb2454740c.
When a THP lru page is reclaimed, the THP is split into normal pages and the
loop runs again. Reclaimed pages should not exceed nr_scan, because each
loop iteration increases the nr_scan counter.
Signed-off-by: zhongjiang
---
mm/vmpressure.c | 10
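For context, a minimal sketch of the computation this revert touches, following the shape of vmpressure_calc_level() in mm/vmpressure.c; it is simplified, not the exact upstream code, and assumes scanned is non-zero.

static unsigned long vmpressure_pressure(unsigned long scanned,
                                         unsigned long reclaimed)
{
        unsigned long scale = scanned + reclaimed;
        unsigned long pressure;

        /*
         * The reverted commit bailed out before this point whenever
         * reclaimed >= scanned; the changelog above argues reclaimed
         * cannot exceed the scanned count, since each split-THP loop
         * iteration increments it.
         */
        pressure = scale - (reclaimed * scale / scanned);
        pressure = pressure * 100 / scale;
        return pressure;        /* 0..100, later mapped to low/medium/critical */
}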
l+0xe/0x10
[ 304.803859] [] system_call_fastpath+0x16/0x1b
The patch adds a special case to avoid the UBSAN detection.
Signed-off-by: zhongjiang
---
kernel/signal.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/signal.c b/kernel/signal.c
index ca92bcf..1c3fd9a 100644
--- a/kernel/sign
l+0xe/0x10
[ 304.803859] [] system_call_fastpath+0x16/0x1b
The patch adds a special case to avoid the UBSAN detection.
Signed-off-by: zhongjiang
---
kernel/signal.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/kernel/signal.c b/kernel/signal.c
index ca92bcf..63148f7 100644
--- a/kernel/sign
l+0xe/0x10
[ 304.803859] [] system_call_fastpath+0x16/0x1b
The patch assigns INT_MAX to this particular pid to avoid the overflow issue.
Signed-off-by: zhongjiang
---
kernel/signal.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/kernel/signal.c b/kernel/signal.c
ind
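A minimal userspace sketch of the failure mode these signal patches address: negating a pid of INT_MIN is undefined behaviour, which is what UBSAN flags, and the changelog's clamp to INT_MAX sidesteps it. The helper name is hypothetical.

#include <limits.h>

/* Hypothetical helper mirroring the clamp described above: kill(-pid, sig)
 * style code negates pid, and -INT_MIN overflows a signed int. */
static int negate_pid_clamped(int pid)
{
        if (pid == INT_MIN)
                return INT_MAX; /* clamp instead of overflowing */
        return -pid;
}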
From: zhong jiang
Recently, I found the following issue, which results in a panic.
[ 168.739152] mmap1: Corrupted page table at address 7f3e6275a002
[ 168.745039] PGD 61f4a1067
[ 168.745040] PUD 61ab19067
[ 168.747730] PMD 61fb8b067
[ 168.750418] PTE 80001225
[ 168.753109]
[ 16
From: zhong jiang
Recently, I found that ioremap_page_range has been misused. Improper
address mapping is an issue that can result in a crash, so remove
the exported symbol. It can be replaced by ioremap_cache or other symbols.
Signed-off-by: zhong jiang
---
lib/ioremap.c | 1 -
1 file changed,
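As a hedged illustration of the suggested replacement, a driver-style fragment that maps a region with ioremap_cache() instead of building the mapping by hand with ioremap_page_range(); phys_base and len are placeholders.

void __iomem *regs;

regs = ioremap_cache(phys_base, len);   /* instead of ioremap_page_range() */
if (!regs)
        return -ENOMEM;
/* ... access the region through readl()/writel() ... */
iounmap(regs);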
From: zhong jiang
Hi,
I sent the following patches last week, but they have not received any
reply.
These patches are simple but reasonable, and I hope they can be merged
into the next version. So, if anyone has any objection, please let me know.
Thanks
zhongjiang
zhong jiang (2):
arm64: change
From: zhong jiang
I think that CONT_PTE_SHIFT is more reasonable here even though the two
have the same value. The patch makes no functional change.
Signed-off-by: zhong jiang
---
arch/arm64/mm/hugetlbpage.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/mm/hugetlbpage.c b
From: zhong jiang
When HUGETLB_PAGE is disabled, the functions guarded by WANT_HUGE_PMD_SHARE
should not be used; therefore, we add the dependency.
Signed-off-by: zhong jiang
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 969
From: zhong jiang
A soft lockup occurs when I run trinity against the kexec_load syscall;
the corresponding stack trace is as follows.
[ 237.235937] BUG: soft lockup - CPU#6 stuck for 22s! [trinity-c6:13859]
[ 237.242699] Kernel panic - not syncing: softlockup: hung tasks
[ 237.248573] CPU:
From: zhong jiang
Since commit 3e89e1c5ea84 ("hugetlb: make mm and fs code explicitly
non-modular") was brought into the mainline, mounting hugetlbfs results
in the following issue:
mount: unknown filesystem type 'hugetlbfs'
because that patch removed the MODULE_ALIAS_FS, so when we mount the fs ty
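Judging from the description, the fix restores the filesystem alias so module autoloading can resolve the mount again; a one-line sketch based on the changelog, not necessarily the actual patch:

MODULE_ALIAS_FS("hugetlbfs");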
From: zhong jiang
When I compile the newest kernel, I hit the following error with
-Werror=maybe-uninitialized.
net/core/flow_dissector.c: In function '__skb_flow_dissect':
include/uapi/linux/swab.h:100:46: error: 'vlan' may be used uninitialized in
this function [-Werror=maybe-uninitialized]
net/c
From: zhong jiang
At present, tying the first_num size to NCHUNKS_ORDER is confusing;
the number of chunks is completely unrelated to the number of buddies.
The patch limits first_num to the actual range of possible buddy indexes,
which is more reasonable and obvious, with no functional change.
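A rough sketch of what this implies in mm/z3fold.c terms; the field layout is simplified and surrounding members are elided.

#define BUDDY_MASK      0x3     /* two bits cover the possible buddy indexes */

struct z3fold_header {
        /* ... other members elided ... */
        unsigned short first_num:2;     /* previously sized by NCHUNKS_ORDER */
};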
From: zhong jiang
z3fold page compaction has nothing to do with last_chunks: even if
last_chunks is not free, compaction will proceed.
The patch just removes the limit, with no functional change.
Signed-off-by: zhong jiang
---
mm/z3fold.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
di
From: zhong jiang
At present, zhdr->first_num plus bud can exceed BUDDY_MASK
in encode_handle, which leads the caller of handle_to_buddy to
get back a wrong value.
The patch fixes the issue by changing BUDDY_MASK to PAGE_MASK,
making it consistent with handle_to_z3fold_header. At the sa
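A simplified sketch of the encode/decode pair in question, with the masking the changelog proposes (page-offset bits rather than BUDDY_MASK) so the decode matches handle_to_z3fold_header(), which does handle & PAGE_MASK; this is simplified from mm/z3fold.c and not the exact patch.

static unsigned long encode_handle(struct z3fold_header *zhdr, enum buddy bud)
{
        unsigned long handle = (unsigned long)zhdr;

        if (bud != HEADLESS)
                handle += (bud + zhdr->first_num) & ~PAGE_MASK;
        return handle;
}

static enum buddy handle_to_buddy(unsigned long handle)
{
        struct z3fold_header *zhdr = (void *)(handle & PAGE_MASK);

        return (handle - zhdr->first_num) & ~PAGE_MASK;
}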
start_kernel+0x1a0/0x414
The patch fixes it by falling back to node 0; therefore, the cpu will be
bound to the node correctly.
Signed-off-by: zhongjiang
---
arch/arm64/mm/numa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
index 4dcd7d6..1f8f5da
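A sketch of the fallback described above, assuming it lands in the early_cpu_to_node() lookup in arch/arm64/mm/numa.c; simplified, not necessarily the exact hunk.

int __init early_cpu_to_node(int cpu)
{
        int nid = cpu_to_node_map[cpu];

        /* Fall back to node 0 so the cpu is bound to a valid node. */
        return nid == NUMA_NO_NODE ? 0 : nid;
}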
From: zhong jiang
When memory hotplug is enabled, free hugepages are freed when a movable
node goes offline; therefore, /proc/sys/vm/nr_hugepages becomes incorrect.
The patch fixes it by reducing max_huge_pages when the node goes offline.
Signed-off-by: zhong jiang
---
mm/hugetlb.c | 1 +
1 file changed,
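A simplified sketch of the accounting fix as described, following the shape of dissolve_free_huge_page() in mm/hugetlb.c, with locking elided.

static void dissolve_free_huge_page(struct page *page)
{
        struct hstate *h = page_hstate(page);
        int nid = page_to_nid(page);

        list_del(&page->lru);
        h->free_huge_pages--;
        h->free_huge_pages_node[nid]--;
        h->max_huge_pages--;    /* keep /proc/sys/vm/nr_hugepages accurate */
        update_and_free_page(h, page);
}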
From: zhong jiang
When required_kernelcore decreases to zero, we should exit the loop
promptly, because scanning the remaining nodes wastes time.
Signed-off-by: zhong jiang
---
mm/page_alloc.c | 10 +++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/m
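A heavily simplified sketch of the early exit described, inside the node loop of find_zone_movable_pfns_for_nodes() in mm/page_alloc.c.

for_each_node_state(nid, N_MEMORY) {
        /* ... carve this node's share of kernelcore pages ... */
        required_kernelcore -= min(required_kernelcore, kernelcore_node);
        if (!required_kernelcore)
                break;  /* done; skip scanning the remaining nodes */
}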
From: zhong jiang
When compiling the kernel code, I hit the following warning:
fs/reiserfs/ibalance.c:1156:2: warning: ‘new_insert_key’ may be used
uninitialized in this function.
memcpy(new_insert_key_addr, &new_insert_key, KEY_SIZE);
The patch fixes it by checking new_insert_ptr; if new_inser
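A sketch of the guard the changelog describes: copy new_insert_key out only when new_insert_ptr shows it was actually populated; the exact condition in the real patch may differ.

if (new_insert_ptr) /* new_insert_key is only valid when this was set */
        memcpy(new_insert_key_addr, &new_insert_key, KEY_SIZE);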
From: zhong jiang
When compiling the kernel code, I hit the following warning:
fs/reiserfs/ibalance.c:1156:2: warning: ‘new_insert_key’ may be used
uninitialized in this function.
memcpy(new_insert_key_addr, &new_insert_key, KEY_SIZE);
^
The patch just fixes it to avoid the warning.
Signed-off-by:
From: zhong jiang
In general, kexec allocates pages from the buddy system, so they cannot
exceed the maximum physical address present in the system.
The patch just removes this code; no functional change.
Signed-off-by: zhong jiang
---
include/linux/kexec.h | 1 -
kernel/kexec_core.c | 13 -
2 files changed
From: zhong jiang
When an image is loaded into the kernel, we need to set up page tables
for it, and all valid pfns also get a new mapping. A pmd huge entry will
be set up if pud_present is true. relocate_kernel points to a code segment
that can fall within a pmd huge entry in init_transition_pgtable. Therefore,
w
From: zhong jiang
Under heavy pressure, I ran some test cases. As a result, I found
that a THP is not freed; this is detected by check_mm().
BUG: Bad rss-counter state mm:8827edb7 idx:1 val:512
Consider the following race:
CPU0 CPU1
__handle_mm_f
From: zhong jiang
We need to ensure the code is consistent with its comment; otherwise,
newcomers will find it hard to follow.
Signed-off-by: zhong jiang
---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6903b69..3842400 100644
From: zhong jiang
huge_pmd_share accounts the number of pmds incorrectly when it races
with a parallel pud instantiation. vma_interval_tree_foreach will
increase the counter but then has to recheck the pud with the pte lock
held and the back off path should drop the increment. The previous
code w
From: zhong jiang
When a process acquires a pmd table shared by another process, we
increase the count for the current process. Otherwise, a race means
other tasks have already set the pud entry, so there is no need to
increase it.
Signed-off-by: zhong jiang
---
mm/hugetlb.c | 2 +-
1 file changed, 1 insertion
From: zhong jiang
When a process acquires a pmd table shared by another process, we
increase the count for the current process. Otherwise, a race means
other tasks have already set the pud entry, so there is no need to
increase it.
Signed-off-by: zhong jiang
---
mm/hugetlb.c | 5 ++---
1 file changed, 2 inser
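A simplified sketch of the accounting these changelogs describe, following huge_pmd_share() in mm/hugetlb.c: bump the pmd count only when we actually populate the pud; if a racing task already set it, drop the speculative reference without accounting.

spin_lock(ptl);
if (pud_none(*pud)) {
        pud_populate(mm, pud,
                     (pmd_t *)((unsigned long)spte & PAGE_MASK));
        mm_inc_nr_pmds(mm);     /* we own the newly shared pmd table */
} else {
        put_page(virt_to_page(spte));   /* lost the race; no accounting */
}
spin_unlock(ptl);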
From: zhong jiang
In general, each process has 16kb of stack space to use, but the
stack needs extra space to store the red zone when KASAN is enabled.
The patch fixes the above issue.
Signed-off-by: zhong jiang
---
arch/arm64/include/asm/thread_info.h | 15 +--
1 file changed, 13 insertions(+), 2 d
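A sketch of the idea in arch/arm64/include/asm/thread_info.h terms: reserve a larger stack when KASAN is enabled so the red zones fit; the constants are illustrative, not the exact patch.

#ifdef CONFIG_KASAN
#define THREAD_SIZE_ORDER       3       /* extra room for KASAN red zones */
#else
#define THREAD_SIZE_ORDER       2       /* the usual 16KB with 4KB pages */
#endif
#define THREAD_SIZE             (PAGE_SIZE << THREAD_SIZE_ORDER)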
there is large page splitting and merging.
Large pages will significantly reduce TLB misses and improve system
performance.
Signed-off-by: zhongjiang
---
arch/arm64/include/asm/pgtable-types.h | 19 +
arch/arm64/mm/mmu.c | 12 +++
arch/arm64
This patch adds an interface to show the number of 4KB and 64KB pages,
aiming to gather statistics on the number of different types of pages.
Signed-off-by: zhongjiang
---
arch/arm64/include/asm/pgtable-types.h | 24
arch/arm64/mm/mmu.c | 28