Hi Fan Du,
I think we should change the print in mminit_verify_zonelist too.
This patch changes the order of ZONELIST_FALLBACK, so the default NUMA policy
will allocate from DRAM first, then PMEM, right?
Thanks,
Xishi Qiu
> On system with heterogeneous memory, reasonable fall back lists woul
e info,
so how about not clearing the flag after setup_vmalloc_vm, and just
updating the print in s_show.
...
if (v->flags & VM_ALLOC)
	seq_puts(m, " vmalloc");
+ if (v->flags & VM_MAP_RAM) /* add a new flag for vm_map_ram? */
+	seq_puts(m, " vm_map_ram");
Hi, I find that active file + inactive file is about 5G larger than
cached + buffers,
and I cannot free it by "echo 3 > /proc/sys/vm/drop_caches".
The meminfo shows that Mapped is also very small, so maybe something holds
a reference on the pages? (e.g. get_user_pages())
Then it will dec the count of NR_FILE_PAGES w
On 2018/4/12 9:49, Xishi Qiu wrote:
> Hi, I find CONFIG_X86_RESERVE_LOW=64 in my system, so trim_low_memory_range()
> will reserve the low 64KB of memory. But efi_free_boot_services() will free it
> to the buddy system again later, because the BIOS set the type to EFI_BOOT_SERVICES_CODE.
>
>
Hi, I find CONFIG_X86_RESERVE_LOW=64 in my system, so trim_low_memory_range()
will reserve the low 64KB of memory. But efi_free_boot_services() will free it
to the buddy system again later, because the BIOS set the type to EFI_BOOT_SERVICES_CODE.
Here is the log:
...
efi: mem03: type=3, attr=0xf, range=[0x000
On 2018/1/17 17:16, Vlastimil Babka wrote:
> On 12/29/2017 09:58 AM, Xishi Qiu wrote:
>> When calling vfree(), it calls unmap_vmap_area() to clear the page table,
>> but does not free the page table memory. Why? Just for performance?
>
> I guess it's expected that
On 2017/12/29 16:58, Xishi Qiu wrote:
> When calling vfree(), it calls unmap_vmap_area() to clear the page table,
> but does not free the page table memory. Why? Just for performance?
>
> If a driver uses vmalloc() and vfree() frequently, we will lose a lot of
> page table memory,
On 2018/1/6 2:33, Jiri Kosina wrote:
> On Fri, 5 Jan 2018, Xishi Qiu wrote:
>
>> I run the latest RHEL 7.2 with the KAISER/KPTI patch, and boot failed.
>>
>> ...
>> [0.00] PM: Registered nosave memory: [mem
>> 0x810-0x8ff]
>> [
I run the latest RHEL 7.2 with the KAISER/KPTI patch, and boot failed.
...
[0.00] PM: Registered nosave memory: [mem 0x810-0x8ff]
[0.00] PM: Registered nosave memory: [mem 0x910-0xfff]
[0.00] PM: Registered nosave memory: [mem 0x1010-
When calling vfree(), it calls unmap_vmap_area() to clear the page table,
but it does not free the page table memory. Why? Just for performance?
If a driver uses vmalloc() and vfree() frequently, we will lose a lot of
page table memory, and maybe OOM later.
Thanks,
Xishi Qiu
On 2017/12/21 16:55, Xishi Qiu wrote:
> When we use iounmap() to free the mapping, it calls unmap_vmap_area() to
> clear the page table,
> but does not free the page table memory, right?
>
> So when we use ioremap() to map another area (including the previous area), it
> may use
memory (e.g. pte memory)
will be lost, causing a memory leak, right?
Thanks,
Xishi Qiu
ill work for memory hotplug because it requires
>> MIGRATE_MOVABLE.
>
> Unfortunately, alloc_contig_range() can be called with
> MIGRATE_MOVABLE so this patch cannot perfectly fix the problem.
>
> I did a more thinking and found that it's strange to check if there is
> unmovable page in the pageblock during the set_migratetype_isolate().
> set_migratetype_isolate() should be just for setting the migratetype
> of the pageblock. Checking other things should be done by another
> place, for example, before calling the start_isolate_page_range() in
> __offline_pages().
>
> Thanks.
>
Hi Joonsoo,
How about adding a flag to skip has_unmovable_pages() in
set_migratetype_isolate()?
Something like skip_hwpoisoned_pages.
Thanks,
Xishi Qiu
On 2017/10/10 2:26, Michal Hocko wrote:
> On Wed 27-09-17 13:51:09, Xishi Qiu wrote:
>> On 2017/9/26 19:00, Michal Hocko wrote:
>>
>>> On Tue 26-09-17 11:45:16, Vlastimil Babka wrote:
>>>> On 09/26/2017 11:22 AM, Xishi Qiu wrote:
>>>>> On 2017
On 2017/9/26 19:00, Michal Hocko wrote:
> On Tue 26-09-17 11:45:16, Vlastimil Babka wrote:
>> On 09/26/2017 11:22 AM, Xishi Qiu wrote:
>>> On 2017/9/26 17:13, Xishi Qiu wrote:
>>>>> This is still very fuzzy. What are you actually trying to achieve?
>>>
On 2017/9/26 17:13, Xishi Qiu wrote:
> On 2017/9/26 17:02, Michal Hocko wrote:
>
>> On Tue 26-09-17 16:39:56, Xishi Qiu wrote:
>>> On 2017/9/26 16:17, Michal Hocko wrote:
>>>
>>>> On Tue 26-09-17 15:56:55, Xishi Qiu wrote:
>>>>> When we ca
On 2017/9/26 17:02, Michal Hocko wrote:
> On Tue 26-09-17 16:39:56, Xishi Qiu wrote:
>> On 2017/9/26 16:17, Michal Hocko wrote:
>>
>>> On Tue 26-09-17 15:56:55, Xishi Qiu wrote:
>>>> When we call mlockall(), we will add VM_LOCKED to the vma,
>>>> if
On 2017/9/26 16:17, Michal Hocko wrote:
> On Tue 26-09-17 15:56:55, Xishi Qiu wrote:
>> When we call mlockall(), we will add VM_LOCKED to the vma,
>> if the vma prot is ---p,
>
> not sure what you mean here. apply_mlockall_flags will set the flag on
> all vmas exc
/* Ignore errors */
(void) __mm_populate(addr, len, 1);
}
And later we call mprotect() to change the prot; then it
still does not allocate memory for the mlocked vma.
My question is: shall we allocate memory when the prot is changed,
and who (kernel, glibc, user) should allocate the memory?
Thanks,
Xishi Qiu
x7fca226a5fb0,
flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID,
parent_tidptr=0x7fca226a69d0, tls=0x7fca226a6700, child_tidptr=0x7fca226a69d0)
= 21043
...
Thanks,
Xishi Qiu
On 2017/9/4 17:01, Michal Hocko wrote:
> On Mon 04-09-17 16:58:30, Xishi Qiu wrote:
>> On 2017/9/4 16:21, Michal Hocko wrote:
>>
>>> From: Michal Hocko
>>>
>>> We have a hardcoded 120s timeout after which the memory offline fails
>>> basicall
by a signal so if userspace wants
Hi Michal,
If the user knows what to do when migration takes a long time,
it is OK, but I don't think all users know this operation
(e.g. Ctrl+C) and its effect.
Thanks,
Xishi Qiu
> some timeout based termination this can be done trivially by send
On 2017/7/19 16:40, Vlastimil Babka wrote:
> On 07/18/2017 12:59 PM, Xishi Qiu wrote:
>> Hi,
>>
>> Unfortunately, this patch(mm: thp: fix SMP race condition between
>> THP page fault and MADV_DONTNEED) didn't help, I got the panic again.
>
> Too bad then. I
On 2017/6/8 21:59, Vlastimil Babka wrote:
> On 06/08/2017 03:44 PM, Xishi Qiu wrote:
>> On 2017/5/23 17:33, Vlastimil Babka wrote:
>>
>>> On 05/23/2017 11:21 AM, zhong jiang wrote:
>>>> On 2017/5/23 0:51, Vlastimil Babka wrote:
>>>>> On 05/20/20
On 2017/6/29 16:22, Xishi Qiu wrote:
> CentOS 7.2. I got some oopses from my production line.
> Has anybody seen these errors before?
>
Here is another one
[ 703.025737] BUG: unable to handle kernel NULL pointer dereference at
0d68
[ 703.026008] IP: [] mlx4_en_QUERY_PORT+0
CentOS 7.2. I got some oopses from my production line.
Has anybody seen these errors before?
1)
2017-06-28T02:18:16.461384+08:00[880983.488036] do nothing after die!
2017-06-28T02:18:16.462068+08:00[880983.488723] Modules linked in: fuse
iptable_filter sha512_generic icp_qa_al_vf(OVE) vfat fat isof
On 2017/6/24 19:12, Greg KH wrote:
> On Sat, Jun 24, 2017 at 05:52:23PM +0800, Yisheng Xie wrote:
>> hi all,
>>
>> I met an Oops problem with linux-3.10. The RIP is sysfs_open_file+0x46/0x2b0
>> (I will and the full
>> crash log in the end of this mail).
>
> 3.10 is _very_ old and obsolete, can
On 2017/5/23 17:33, Vlastimil Babka wrote:
> On 05/23/2017 11:21 AM, zhong jiang wrote:
>> On 2017/5/23 0:51, Vlastimil Babka wrote:
>>> On 05/20/2017 05:01 AM, zhong jiang wrote:
>>>> On 2017/5/20 10:40, Hugh Dickins wrote:
>>>>> On Sat, 20 May 2017,
On 2017/6/4 23:06, Thomas Gleixner wrote:
> On Thu, 1 Jun 2017, Xishi Qiu wrote:
>
> Cc'ed John Stultz
>
>> Hi, this is the test case, and then I got ubsan error
>> (signed integer overflow) report, so the root cause is from
>> user or kernel? Shall w
I got some error report during boot from ubsan,
kernel version is v4.12
[0.001000]
[0.001000] UBSAN: Undefined behaviour in
arch/x86/kernel/apic/apic_flat_64.c:49:11
[0.001000] shift exponent 64 is too
Hi, this is the test case, and then I got ubsan error
(signed integer overflow) report, so the root cause is from
user or kernel? Shall we change something in timeval_valid()?
struct itimerval new_value;
int ret;
new_value.it_interval.tv_sec = 140673496649799L;
new_value.it_interval.tv_usec = 6;
On 2017/5/24 21:16, Vlastimil Babka wrote:
> On 05/24/2017 02:10 PM, Xishi Qiu wrote:
>> On 2017/5/24 19:52, Vlastimil Babka wrote:
>>
>>> On 05/24/2017 01:38 PM, Xishi Qiu wrote:
>>>>>
>>>>> Race condition with what? Who else would isolat
On 2017/5/24 19:52, Vlastimil Babka wrote:
> On 05/24/2017 01:38 PM, Xishi Qiu wrote:
>>>
>>> Race condition with what? Who else would isolate our pages?
>>>
>>
>> Hi Vlastimil,
>>
>> I find the root cause, if the page was not cached on the c
@@ void mlock_vma_page(struct page *page)
 		count_vm_event(UNEVICTABLE_PGMLOCKED);
 		if (!isolate_lru_page(page))
 			putback_lru_page(page);
+		else {
+			ClearPageMlocked(page);
+			mod_zone_page_state(page_zone(page), NR_MLOCK,
+					    -hpage_nr_pages(page));
+		}
 	}
 }
Thanks,
Xishi Qiu
at.
>
Hi Vlastimil,
Why is the page marked Mlocked, but not on the LRU list?
if (TestClearPageMlocked(page)) {
/*
* We already have pin from follow_page_mask()
* so we can spare the get_page() here.
*/
On 2017/5/24 15:49, Vlastimil Babka wrote:
> On 05/24/2017 06:40 AM, Xishi Qiu wrote:
>> On 2017/5/24 9:40, Xishi Qiu wrote:
>>
>> Hi, I find we use RCU to access task_struct in mm_match_cgroup(), but do not
>> use RCU free in free_task_struct(); is that right?
>
On 2017/5/24 9:40, Xishi Qiu wrote:
> Hi, I find we use RCU to access task_struct in mm_match_cgroup(), but do not
> use RCU free in free_task_struct(); is that right?
>
> Here is the backtrace.
>
> PID: 2133 TASK: 881fe3353300 CPU: 2 COMMAND: "CPU 15/KVM"
Hi, I find we use RCU to access task_struct in mm_match_cgroup(), but do not
use RCU free in free_task_struct(); is that right?
Here is the backtrace.
PID: 2133 TASK: 881fe3353300 CPU: 2 COMMAND: "CPU 15/KVM"
#0 [881fe276b528] machine_kexec at 8105280b
#1 [881fe276b588] crash_k
On 2017/5/23 3:26, Hugh Dickins wrote:
> On Mon, 22 May 2017, Xishi Qiu wrote:
>> On 2017/5/20 10:40, Hugh Dickins wrote:
>>> On Sat, 20 May 2017, Xishi Qiu wrote:
>>>>
>>>> Here is a bug report from Red Hat:
>>>> https://bugzilla.redhat.c
On 2017/5/20 10:40, Hugh Dickins wrote:
> On Sat, 20 May 2017, Xishi Qiu wrote:
>>
>> Here is a bug report from Red Hat:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1305620
>> And I meet the bug too. However it is hard to reproduce, and
>> 624483f3ea82598
On 2017/5/20 10:02, Hugh Dickins wrote:
> On Sat, 20 May 2017, Xishi Qiu wrote:
>> On 2017/5/20 6:00, Hugh Dickins wrote:
>>>
>>> You're ignoring the rcu_read_lock() on entry to page_lock_anon_vma_read(),
>>> and the SLAB_DESTROY_BY_RCU (recently rename
On 2017/5/20 6:00, Hugh Dickins wrote:
> On Fri, 19 May 2017, Xishi Qiu wrote:
>> On 2017/5/19 16:52, Xishi Qiu wrote:
>>> On 2017/5/18 17:46, Xishi Qiu wrote:
>>>
>>>> Hi, my system triggers this bug, and the vmcore shows the anon_vma seems
>>>
On 2017/5/19 16:52, Xishi Qiu wrote:
> On 2017/5/18 17:46, Xishi Qiu wrote:
>
>> Hi, my system triggers this bug, and the vmcore shows the anon_vma seems to
>> be freed.
>> The kernel is RHEL 7.2, and the bug is hard to reproduce, so I don't know if
>> it
On 2017/5/18 17:46, Xishi Qiu wrote:
> Hi, my system triggers this bug, and the vmcore shows the anon_vma seems to
> be freed.
> The kernel is RHEL 7.2, and the bug is hard to reproduce, so I don't know if
> it
> exists in mainline, any reply is welcome!
>
When we alloc
Hi, my system triggers this bug, and the vmcore shows the anon_vma seems to be
freed.
The kernel is RHEL 7.2, and the bug is hard to reproduce, so I don't know if it
exists in mainline; any reply is welcome!
[35030.332666] general protection fault: [#1] SMP
[35030.333016] Modules linked in: vet
2017-05-12T04:46:36.373001+08:00 [ OK ] Reached target System
Initialization.
2017-05-12T04:46:36.385253+08:00 [ OK ] Reached target Basic System.
2017-05-12T04:46:43.049936+08:00 [ 25.839157] BUG: unable to handle kernel [
25.841509] floppy0: no floppy controllers found
20
ems simply have large
> gaps in physical memory access. Their memory map
> may look like this:
>
> |MM|IO||..||
>
> Where M is memory, IO is IO space, and the
> dots are simply a gap in physical address
> space with no valid accesses at all.
On 2017/5/2 17:16, Michal Hocko wrote:
> On Tue 02-05-17 16:52:00, Xishi Qiu wrote:
>> On 2017/5/2 16:43, Michal Hocko wrote:
>>
>>> On Tue 02-05-17 15:59:23, Xishi Qiu wrote:
>>>> Hi, I use "memtester -p 0x6c800 10G" to test physical a
On 2017/5/2 16:43, Michal Hocko wrote:
> On Tue 02-05-17 15:59:23, Xishi Qiu wrote:
>> Hi, I use "memtester -p 0x6c800 10G" to test physical address
>> 0x6c800
>> Because this physical address is invalid, and valid_mmap_phys_addr_range()
>>
169.147578] ? panic+0x1f1/0x239
[ 169.150789] oops_end+0xb8/0xd0
[ 169.153910] pgtable_bad+0x8a/0x95
[ 169.157294] __do_page_fault+0x3aa/0x4a0
[ 169.161194] do_page_fault+0x30/0x80
[ 169.164750] ? do_syscall_64+0x175/0x180
[ 169.168649] page_fault+0x28/0x30
Thanks,
Xishi Qiu
On 2017/4/10 17:37, Hillf Danton wrote:
> On April 10, 2017 4:57 PM Xishi Qiu wrote:
>> On 2017/4/10 14:42, Hillf Danton wrote:
>>
>>> On April 08, 2017 9:40 PM zhong Jiang wrote:
>>>>
>>>> when runing the stabile docker cases in the vm. The f
>> }
>>
>> This makes me wonder: the anon_vma does not come from the slab structure,
>> and the content is abnormal. IMO, at least anon_vma->root should not be NULL.
>> The issue can be reproduced every other week.
>>
> Check please if commit
> 624483f3ea8 ("mm: rmap: fix use-after-free in __put_anon_vma")
> is included in the 3.10 you are running.
>
Hi Hillf,
We missed this patch in RHEL 7.2.
Could you please give more details on how it is triggered?
Thanks,
Xishi Qiu
> btw, why not run the mainline?
>
> Hillf
On 2017/3/2 14:55, Xishi Qiu wrote:
ping
> Hi, I test Trinity, and got the following log.
> My OS version is RHEL 7.2, and I'm not sure if it has been fixed in mainline.
> Any comment is welcome.
>
> [57676.532593] [ cut here ]
> [57676.537415] WARNING:
On 2017/3/7 18:47, Michal Hocko wrote:
> On Tue 07-03-17 18:33:53, Xishi Qiu wrote:
>> MIGRATE_HIGHATOMIC page blocks are reserved for atomic
>> high-order allocations, so use them as late as possible.
>
> Why is this better? Are you seeing any problem which this patch
>
If direct reclaim fails, unreserving the highatomic pageblock
immediately is better than unreserving it in should_reclaim_retry().
We may get the page on the next try rather than reclaim-compact-reclaim-compact...
Signed-off-by: Xishi Qiu
---
mm/page_alloc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion
MIGRATE_HIGHATOMIC page blocks are reserved for atomic
high-order allocations, so use them as late as possible.
Signed-off-by: Xishi Qiu
---
mm/page_alloc.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 40d79a6..2331840 100644
Introduce two helpers, is_migrate_highatomic() and is_migrate_highatomic_page().
Simplify the code, no functional changes.
Signed-off-by: Xishi Qiu
---
include/linux/mmzone.h | 5 +
mm/page_alloc.c | 14 ++
2 files changed, 11 insertions(+), 8 deletions(-)
diff --git a
Use is_migrate_isolate_page() to simplify the code, no functional changes.
Signed-off-by: Xishi Qiu
---
mm/page_isolation.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index f4e17a5..7927bbb 100644
--- a/mm/page_isolation.c
Hi, I test Trinity, and got the following log.
My OS version is RHEL 7.2, and I'm not sure if it has been fixed in mainline.
Any comment is welcome.
[57676.532593] [ cut here ]
[57676.537415] WARNING: at arch/x86/kernel/cpu/perf_event_intel_cqm.c:186
__put_rmid+0x28/0x80()
[57676.5
On 2017/2/15 18:47, Vlastimil Babka wrote:
> On 02/14/2017 11:07 AM, Xishi Qiu wrote:
>> On 2017/2/11 1:23, Vlastimil Babka wrote:
>>
>>> When stealing pages from pageblock of a different migratetype, we count how
>>> many free pages were stolen, and change th
ype
expand // split the largest order
list_add // add to the list of start_migratetype
So how about using list_add_tail instead of list_add? Then we can merge the large
block again as soon as the page is freed.
Thanks,
Xishi Qiu
From: Tiantian Feng
We need to disable VMX on all CPUs before stopping them when the OS panics,
otherwise we risk hanging the machine, because the CPU ignores INIT
signals while VMX is enabled. This issue exists in mainline.
Signed-off-by: Tiantian Feng
Signed-off-by: Xishi Qiu
---
arch/x86
On 2017/1/17 23:18, Paolo Bonzini wrote:
>
>
> On 14/01/2017 02:42, Xishi Qiu wrote:
>> From: Tiantian Feng
>>
>> We need to disable VMX on all CPUs before stopping them when the OS panics,
>> otherwise we risk hanging the machine, because the CPU ignores INIT
>>
From: Tiantian Feng
We need to disable VMX on all CPUs before stopping them when the OS panics,
otherwise we risk hanging the machine, because the CPU ignores INIT
signals while VMX is enabled. This issue exists in mainline.
Signed-off-by: Tiantian Feng
---
arch/x86/kernel/smp.c | 3 +++
1 fil
On 2017/1/14 9:36, Xishi Qiu wrote:
> From: Tiantian Feng
>
> We need to disable VMX on all CPUs before stopping them when the OS panics,
> otherwise we risk hanging the machine, because the CPU ignores INIT signals
> while VMX is enabled.
> This issue exists in mainline.
From: Tiantian Feng
We need to disable VMX on all CPUs before stopping them when the OS panics,
otherwise we risk hanging the machine, because the CPU ignores INIT signals
while VMX is enabled.
This issue exists in mainline.
Signed-off-by: Tiantian Feng
---
arch/x86/kernel/smp.c | 2 ++
1 fil
Delete extra semicolon, and fix some typos.
Signed-off-by: Xishi Qiu
Reviewed-by: Sergey Senozhatsky
---
mm/zsmalloc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9cc3c0b..a1f2498 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
From: f00186668
We need to disable VMX on all CPUs before stopping them when the OS panics,
otherwise we risk hanging the machine, because the CPU ignores INIT
signals while VMX is enabled. This issue exists in mainline.
Signed-off-by: f00186668
---
arch/x86/kernel/smp.c | 3 +++
1 file change
From: f00186668
We need to disable VMX on all CPUs before stopping them when the OS panics,
otherwise we risk hanging the machine, because the CPU ignores INIT signals
while VMX is enabled.
This issue exists in mainline.
Signed-off-by: f00186668
---
arch/x86/kernel/smp.c | 2 ++
1 file change
entation fixes
>> : - Non-portable code replaced by portable code (even in arch-specific,
>> : since people copy, as long as it's trivial)
>> : - Any fix by the author/maintainer of the file (ie. patch monkey
>> : in re-transmission mode)
>>
>>
>
Delete an extra semicolon; it was introduced in
commit 3783689 ("zsmalloc: introduce zspage structure").
Signed-off-by: Xishi Qiu
---
mm/zsmalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9cc3c0b..2d6c92e 100644
--- a/mm/zsmalloc.c
+++ b/mm
Signed-off-by: Xishi Qiu
---
mm/zsmalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9cc3c0b..2d6c92e 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -364,7 +364,7 @@ static struct zspage *cache_alloc_zspage(struct zs_pool
*pool
On 2016/12/16 2:31, Andrea Reale wrote:
> Hi Xishi Qiu,
>
> thanks for your comments.
>
> The short anwser to your question is the following. As you hinted,
> it is related to the way pfn_valid() is implemented in arm64 when
> CONFIG_HAVE_ARCH_PFN_VALID is true (default
On 2016/12/15 14:18, Xishi Qiu wrote:
> On 2016/12/14 20:16, Maciej Bielski wrote:
>
>>
>>
>> -#ifdef CONFIG_MEMORY_HOTREMOVE
>> -int arch_remove_memory(u64 start, u64 size)
>> -{
>> -unsigned long start_pfn = start >> PAGE_SHIFT;
>
struct zone *zone;
> - int ret;
> + SetPageReserved(pfn_to_page(pfn));
> + }
Hi Maciej,
Why do we need to set reserved here?
I think the new pages are already reserved in __add_zone() ->
memmap_init_zone(), right?
Thanks,
Xishi Qiu
>
> - zone = page_zone(pfn_
help clean up the patchset.
>
> On 16-12-01 07:11 PM, Xishi Qiu wrote:
>> On 2016/12/2 10:38, Scott Branden wrote:
>>
>>> Hi Xishi,
>>>
>>> Thanks for the reply - please see comments below.
>>>
>>> On 16-12-01 05:49 PM, Xishi Qiu wrote:
>>
The compiler could re-read "old_flags" from the memory location after reading
it and calculating "flags", and pass a newer value into the cmpxchg, making
the comparison succeed when it should actually fail.
Signed-off-by: Xishi Qiu
Suggested-by: Christian Borntraeger
---
m
The compiler could re-read "old_flags" from the memory location after reading
it and calculating "flags", and pass a newer value into the cmpxchg, making
the comparison succeed when it should actually fail.
Signed-off-by: Xishi Qiu
Suggested-by: Christian Borntraeger
---
m
On 2016/12/5 16:50, Christian Borntraeger wrote:
> On 12/05/2016 09:31 AM, Christian Borntraeger wrote:
>> On 12/05/2016 09:23 AM, Xishi Qiu wrote:
>>> By reading the code, I find the following code may be optimized by the
>>> compiler: page->flags and old_flags may use
By reading the code, I find the following code may be optimized by the
compiler: page->flags and old_flags may use the same register,
so use ACCESS_ONCE in page_cpupid_xchg_last() to fix the problem.
Signed-off-by: Xishi Qiu
---
mm/mmzone.c | 2 +-
1 file changed, 1 insertion(+), 1 delet
; 18446744073709551615
>>
>> It looks OK to me; however, I am not sure whether other code in the kernel
>> will also use its complement if the user writes a negative number for an
>> unsigned long. Does anyone have another opinion?
>
> Largely we need to be very careful with changing these functions as
> they have been around for a long time, and have a very diverse set of
> users.
>
> So while changes are possible a reasonable argument needs to be made
> that nothing in userspace cares.
>
> Eric
>
Hi Eric,
This patch aims to change the return value when an invalid value is written to
a ulong-type sysctl, to keep it the same as the int-type sysctls.
Thanks,
Xishi Qiu
On 2016/12/2 10:38, Scott Branden wrote:
> Hi Xishi,
>
> Thanks for the reply - please see comments below.
>
> On 16-12-01 05:49 PM, Xishi Qiu wrote:
>> On 2016/12/2 8:19, Scott Branden wrote:
>>
>>> This patchset is sent for comment to add memory hotplug s
memory is added to the kernel memory
> pool for normal allocation?
>
Hi Scott,
Do you mean it still doesn't support hot-add after applying this patchset?
Thanks,
Xishi Qiu
> Scott Branden (2):
> arm64: memory-hotplug: Add MEMORY_HOTPLUG, MEMORY_HOTREMOVE,
> MEMORY_PROBE
>
The kernel version is v4.1, and I find some error reports from kasan.
I'm not sure whether it is a false positive.
11-29 07:57:26.513 <3>[12507.758056s][pid:0,cpu3,swapper/3]BUG: KASAN:
stack-out-of-bounds in trace_event_buffer_lock_reserve+0x50/0x170 at addr
ffc035903bf0
11-29 07:57:26.513 <3
On 2016/11/9 19:58, Mel Gorman wrote:
> On Tue, Nov 08, 2016 at 12:43:17PM +0800, Xishi Qiu wrote:
>> On mem-hotplug system, there is a problem, please see the following case.
>>
>> memtester xxG, the memory will be alloced on a movable node. And after numa
>> b
if (!populated_zone(zone))
+ ret = -1;
+ }
+
return ret;
}
Thanks,
Xishi Qiu
On 2016/11/5 20:29, Anshuman Khandual wrote:
> On 11/05/2016 01:27 PM, Xishi Qiu wrote:
>> Usually the memory of Android phones is very small, so after running for a
>> long time, fragmentation is very bad. The kernel stack, which is allocated
>> by alloc_thread_stack_node(), usually allocates 16K
Node 6, zone Normal 0.000 0.000 0.001 0.003 0.004 0.006 0.007 0.008 0.009
0.010 0.010
Node 7, zone Normal 0.000 0.000 0.000 0.000 0.000 0.001 0.002 0.002 0.003
0.004 0.005
Signed-off-by: Xishi Qiu
---
mm/page_alloc.c | 29 +
1 file changed, 29 insertions
On 2016/10/26 13:59, Joonsoo Kim wrote:
> On Wed, Oct 26, 2016 at 01:50:37PM +0800, Xishi Qiu wrote:
>> On 2016/10/26 12:37, Joonsoo Kim wrote:
>>
>>> On Mon, Oct 17, 2016 at 05:21:54PM +0800, Xishi Qiu wrote:
>>>> On 2016/10/13 16:08, js1...@gmail.com
On 2016/10/26 12:37, Joonsoo Kim wrote:
> On Mon, Oct 17, 2016 at 05:21:54PM +0800, Xishi Qiu wrote:
>> On 2016/10/13 16:08, js1...@gmail.com wrote:
>>
>>> From: Joonsoo Kim
>>>
>>> Currently, freeing page can stay longer in the buddy list if next hig
the tail, then the rest
of the pages
will be hard to allocate, and we can merge them again as soon as the page is
freed.
Thanks,
Xishi Qiu
On 2016/10/10 14:40, Vlastimil Babka wrote:
> On 10/10/2016 05:35 AM, Xishi Qiu wrote:
>> We will use gfp_mask in the following path, but it's not initialized.
>>
>> kcompactd_do_work
>> compact_zone
>> gfpflags_to_migratetype
>>
>> However
It's a little confusing, so init it first.
Signed-off-by: Xishi Qiu
---
mm/compaction.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 9affb29..4b9a9d1 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1895,10 +1895,10 @@ s
On 2016/9/28 13:52, Joonsoo Kim wrote:
> On Mon, Sep 26, 2016 at 01:02:31PM +0200, Michal Hocko wrote:
>> On Mon 26-09-16 18:17:50, Xishi Qiu wrote:
>>> On 2016/9/26 17:43, Michal Hocko wrote:
>>>
>>>> On Mon 26-09-16 17:16:54, Xishi Qiu wrote:
>&
On 2016/9/26 17:43, Michal Hocko wrote:
> On Mon 26-09-16 17:16:54, Xishi Qiu wrote:
>> On 2016/9/26 16:58, Michal Hocko wrote:
>>
>>> On Mon 26-09-16 16:47:57, Xishi Qiu wrote:
>>>> commit 97a16fc82a7c5b0cfce95c05dfb9561e306ca1b1
>>>> (mm,
On 2016/9/26 16:58, Michal Hocko wrote:
> On Mon 26-09-16 16:47:57, Xishi Qiu wrote:
>> commit 97a16fc82a7c5b0cfce95c05dfb9561e306ca1b1
>> (mm, page_alloc: only enforce watermarks for order-0 allocations)
>> rewrite the high-order check in __zone_watermark_ok(), but I thin
__zone_watermark_ok() always returns true, which leads to a failed high-order
unmovable page allocation and then direct reclaim.
Thanks,
Xishi Qiu
On 2016/9/19 10:39, Xishi Qiu wrote:
> On my system, I set HugePages_Total to 2G(1024 x 2M), and I use 1G hugetlb,
> but HugePages_Free is not 1G (512 x 2M); only 280 (280 x 2M) are left, and
> HugePages_Rsvd is 0, so it seems something uses 232 (232 x 2M) hugetlb pages additionally.
>
> So how
and find the total hugetlb size is only 1G:
cat /proc/xx/smaps | grep KernelPageSize, then account the vma sizes
whose KernelPageSize is 2048 kB.
Thanks,
Xishi Qiu
() return 1 and store_mem_state()
return -EINVAL even without this patch, as Reza described in v2.
1. store_mem_state() called with buf="online"
2. device_online() returns 1 because device is already online
3. store_mem_state() returns 1
4. calling code interprets this as 1-byte buf