[PATCH] metag/kernel/stacktrace.c: Use current_stack_pointer instead of current_sp
From: Chen Gang

Since we have defined current_stack_pointer, current_sp is redundant, so
remove it.

Signed-off-by: Chen Gang
---
 arch/metag/kernel/stacktrace.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/metag/kernel/stacktrace.c b/arch/metag/kernel/stacktrace.c
index 5510361..5db1176 100644
--- a/arch/metag/kernel/stacktrace.c
+++ b/arch/metag/kernel/stacktrace.c
@@ -165,11 +165,9 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 		frame.pc = thread_saved_pc(tsk);
 #endif
 	} else {
-		register unsigned long current_sp asm ("A0StP");
-
 		data.no_sched_functions = 0;
 		frame.fp = (unsigned long)__builtin_frame_address(0);
-		frame.sp = current_sp;
+		frame.sp = current_stack_pointer;
 		frame.lr = (unsigned long)__builtin_return_address(0);
 		frame.pc = (unsigned long)save_stack_trace_tsk;
 	}
-- 
1.9.3

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
[PATCH] mm/mmap.c: Remove redundant 'get_area' function pointer in get_unmapped_area()
From: Chen Gang

Call the function pointer directly, which makes the code a bit simpler.

Signed-off-by: Chen Gang
---
 mm/mmap.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 4db7cf0..39fd727 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2012,10 +2012,8 @@ unsigned long
 get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		unsigned long pgoff, unsigned long flags)
 {
-	unsigned long (*get_area)(struct file *, unsigned long,
-				  unsigned long, unsigned long, unsigned long);
-
 	unsigned long error = arch_mmap_check(addr, len, flags);
+
 	if (error)
 		return error;

@@ -2023,10 +2021,12 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 	if (len > TASK_SIZE)
 		return -ENOMEM;

-	get_area = current->mm->get_unmapped_area;
 	if (file && file->f_op->get_unmapped_area)
-		get_area = file->f_op->get_unmapped_area;
-	addr = get_area(file, addr, len, pgoff, flags);
+		addr = file->f_op->get_unmapped_area(file, addr, len,
+						     pgoff, flags);
+	else
+		addr = current->mm->get_unmapped_area(file, addr, len,
+						      pgoff, flags);
 	if (IS_ERR_VALUE(addr))
 		return addr;
-- 
1.9.3
[PATCH] mm/mmap.c: Remove useless statement "vma = NULL" in find_vma()
From: Chen Gang

Before the main loop, vma is already NULL, so there is no need to set it
to NULL again.

Signed-off-by: Chen Gang
---
 mm/mmap.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index df6d5f0..4db7cf0 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2054,7 +2054,6 @@ struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
 		return vma;

 	rb_node = mm->mm_rb.rb_node;
-	vma = NULL;

 	while (rb_node) {
 		struct vm_area_struct *tmp;
-- 
1.9.3
Re: [PATCH] mm/mmap.c: Only call vma_unlock_anon_vma() when failure occurs in expand_upwards() and expand_downwards()
At present, I can use git client via 21cn mail address. Hope it can be
accepted by our mailing list.

Thanks.

On 9/1/15 21:49, Chen Gang wrote:
>
> Sorry for the incorrect format of the patch. So I put the patch into the
> attachment, which was generated by "git format-patch -M HEAD^". Please
> help check, thanks.
>
> Next, I shall try to find another mail address which can be accepted by
> both China and our mailing list.
>
> Thanks.
>
> On 9/1/15 04:54, Chen Gang wrote:
>> When failure occurs, we need not call khugepaged_enter_vma_merge() or
>> validate_mm().
>>
>> Also simplify do_munmap(): declare 'error' 1 time instead of 2 times in
>> sub-blocks.
>>
>> Signed-off-by: Chen Gang
>> ---
>>  mm/mmap.c | 116 ++++++++++++++++++++++++++---------------------------
>>  1 file changed, 58 insertions(+), 58 deletions(-)
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index df6d5f0..d32199a 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -2182,10 +2182,9 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>>  	if (address < PAGE_ALIGN(address+4))
>>  		address = PAGE_ALIGN(address+4);
>>  	else {
>> -		vma_unlock_anon_vma(vma);
>> -		return -ENOMEM;
>> +		error = -ENOMEM;
>> +		goto err;
>>  	}
>> -	error = 0;
>>
>>  	/* Somebody else might have raced and expanded it already */
>>  	if (address > vma->vm_end) {
>> @@ -2194,38 +2193,39 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>>  		size = address - vma->vm_start;
>>  		grow = (address - vma->vm_end) >> PAGE_SHIFT;
>>
>> -		error = -ENOMEM;
>> -		if (vma->vm_pgoff + (size >> PAGE_SHIFT) >= vma->vm_pgoff) {
>> -			error = acct_stack_growth(vma, size, grow);
>> -			if (!error) {
>> -				/*
>> -				 * vma_gap_update() doesn't support concurrent
>> -				 * updates, but we only hold a shared mmap_sem
>> -				 * lock here, so we need to protect against
>> -				 * concurrent vma expansions.
>> -				 * vma_lock_anon_vma() doesn't help here, as
>> -				 * we don't guarantee that all growable vmas
>> -				 * in a mm share the same root anon vma.
>> -				 * So, we reuse mm->page_table_lock to guard
>> -				 * against concurrent vma expansions.
>> -				 */
>> -				spin_lock(&vma->vm_mm->page_table_lock);
>> -				anon_vma_interval_tree_pre_update_vma(vma);
>> -				vma->vm_end = address;
>> -				anon_vma_interval_tree_post_update_vma(vma);
>> -				if (vma->vm_next)
>> -					vma_gap_update(vma->vm_next);
>> -				else
>> -					vma->vm_mm->highest_vm_end = address;
>> -				spin_unlock(&vma->vm_mm->page_table_lock);
>> -
>> -				perf_event_mmap(vma);
>> -			}
>> +		if (vma->vm_pgoff + (size >> PAGE_SHIFT) < vma->vm_pgoff) {
>> +			error = -ENOMEM;
>> +			goto err;
>>  		}
>> +		error = acct_stack_growth(vma, size, grow);
>> +		if (error)
>> +			goto err;
>> +		/*
>> +		 * vma_gap_update() doesn't support concurrent updates, but we
>> +		 * only hold a shared mmap_sem lock here, so we need to protect
>> +		 * against concurrent vma expansions. vma_lock_anon_vma()
>> +		 * doesn't help here, as we don't guarantee that all growable
>> +		 * vmas in a mm share the same root anon vma. So, we reuse mm->
>> +		 * page_table_lock to guard against concurrent vma expansions.
>> +		 */
>> +		spin_lock(&vma->vm_mm->page_table_lock);
>> +		anon_vma_interval_tree_pre_update_vma(vma);
>> +		vma->vm_end = address;
>> +		anon_vma_interval_tree_post_update_vma(vma);
>> +		if (vma->vm_next)
>> +			vma_gap_update(vma->vm_next);
>> +		else
>> +			vma->vm_mm->highest_vm_end = address;
>> +		spin_unlock(&vma->vm_mm->page_table_lock);
>> +
>> +		perf_event_mmap(vma);
>>  	}
>>  	vma_unlock_anon_vma(vma);
>>  	khugepaged_enter_vma_merge(vma, vma->vm_flags);
>>  	validate_mm(vma->vm_mm);
>> +	return 0;
>> +err:
>> +	vma_unlock_anon_vma(vma);
>>  	return error;
>>  }
>>  #endif /* CONFIG_STACK_GROWSUP || CONFIG_IA64 */
>> @@ -2265,36 +2265,37 @@ int expand_downwards(struct vm_area_struct *vma,
>>  		size = vma->vm_end - address;
>>  		grow = (vma->vm_start - address) >> PAGE_SHIFT;
>>
>> -		error = -ENOMEM;
>> -		if (grow <= vma->vm_pgoff) {
>> -			error = acct_stack_growth(vma, size, grow);
>> -			if (!error) {
>> -				/*
>> -				 * vma_gap_update() doesn't support concurrent
>> -				 * updates, but we only hold a shared mmap_sem
>> -				 * lock here, so we need to protect against
>> -				 * concurrent vma expansions.
>> -				 * vma_lock_anon_vma() doesn't help here, as
>> -				 * we don't guarantee that all growable vmas
>> -				 * in a mm share the same root anon vma.
>> -				 * So, we reuse mm->page_table_lock to guard
>> -				 * against concurrent vma expansions.
>> -				 */
>> -				spin_lock(&vma->vm_mm->page_table_lock);
>> -				anon_vma_interval_tree_pre_update_vma(vma);
>> -				vma->vm_start = address;
>> -				vma->vm_pgoff -= grow;
>> -				anon_vma_interval_tree_post_update_vma(vma);
>> -				vma_gap_update(vma);
>> -				spin_unlock(&vma->vm_mm->page_table_lock);
>> -
>> -				perf_event_mmap(vma);
>> -			}
>> +		if (grow > vma->vm_pgoff) {
>> +			error = -ENOMEM;
>> +			goto err;
>>  		}
>> +		error = acct_stack_growth(vma, size, grow);
>> +		if (error)
>> +			goto err;
>> +		/*
>> +		 * vma_gap_update() doesn't support concurrent updates, but we
>> +		 * only hold a shared mmap_sem lock