Signed-off-by: Tang Chen
---
mm/page_alloc.c | 23 ++-
1 files changed, 14 insertions(+), 9 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 00037a3..cd6f8a6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4372,7 +4372,7 @@ static unsigned long
When CONFIG_HAVE_MEMBLOCK_NODE_MAP is not defined, sanitize_zone_movable_limit()
is also not used. So remove it.
Signed-off-by: Tang Chen
---
mm/page_alloc.c |5 -
1 files changed, 0 insertions(+), 5 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cd6f8a6..2bd529e
Since "core" could be confused with cpu cores, but here it is memory,
so rename the boot option movablecore_map to movablemem_map.
Signed-off-by: Tang Chen
---
Documentation/kernel-parameters.txt | 8 ++--
include/linux/memblock.h | 2 +-
include/
Hi Andrew,
patch1 ~ patch3 fix some problems with the movablecore_map boot option.
And since the name "core" could be confusing, patch4 renames this option
to movablemem_map.
All these patches are based on the latest -mm tree.
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
d.
This patch only clears apicid-to-node mapping when the cpu is hotremoved.
Cc: Yasuaki Ishimatsu
Cc: David Rientjes
Cc: Jiang Liu
Cc: Minchan Kim
Cc: KOSAKI Motohiro
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Peter Zijlstra
Cc: Ta
.
Else, an online cpu on another node will be picked.
Cc: Yasuaki Ishimatsu
Cc: David Rientjes
Cc: Jiang Liu
Cc: Minchan Kim
Cc: KOSAKI Motohiro
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Peter Zijlstra
Signed-off-by: Tang Chen
Sig
BLE should be a valid zone
for policies, so is_valid_nodemask() needs to be changed to match.
Fix: check all zones, even those whose zone id is greater than policy_zone.
Use nodes_intersects() instead of open-coding the check.
Reported-by: Wen Congyang
Signed-off-by: Lai Jiangshan
Signed-off-by: Tang Chen
---
mm/mempolicy.c
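A minimal sketch of the idea (illustrative only, not the exact mm/mempolicy.c
change; the helper name is made up): build the set of nodes that have
populated memory in any zone, and test the policy's nodemask against it with
nodes_intersects() instead of an open-coded loop limited to zones below
policy_zone.

#include <linux/mmzone.h>
#include <linux/nodemask.h>

static bool nodemask_has_usable_memory(const nodemask_t *nodemask)
{
	nodemask_t has_memory = NODE_MASK_NONE;
	int nid, zid;

	for_each_node_state(nid, N_MEMORY)
		for (zid = 0; zid < MAX_NR_ZONES; zid++)
			if (populated_zone(&NODE_DATA(nid)->node_zones[zid]))
				node_set(nid, has_memory);

	/* valid iff the requested nodes overlap nodes that really have memory */
	return nodes_intersects(*nodemask, has_memory);
}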
The function release_firmware_map_entry() references the __meminit function
firmware_map_find_entry_in_list(), so it should also be marked __meminit.
And since firmware_map_entry->kobj is initialized with memmap_ktype,
memmap_ktype should also be marked __refdata.
Signed-off-by: T
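A simplified sketch of the annotations described above (memmap_attr_ops and
def_attrs are assumed to be the existing objects in drivers/firmware/memmap.c;
this is not the exact patch):

static void __meminit release_firmware_map_entry(struct kobject *kobj)
{
	struct firmware_map_entry *entry =
		container_of(kobj, struct firmware_map_entry, kobj);

	/* may call the __meminit helper firmware_map_find_entry_in_list() */
	kfree(entry);
}

/* The ktype references an __meminit function, so mark it __refdata to
 * avoid a section-mismatch warning. */
static struct kobj_type __refdata memmap_ktype = {
	.release	= release_firmware_map_entry,
	.sysfs_ops	= &memmap_attr_ops,
	.default_attrs	= def_attrs,
};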
patch will fix the problem based on the above patches.
Reported-by: Wen Congyang
Signed-off-by: Tang Chen
---
arch/x86/mm/init_64.c | 148 +++-
1 files changed, 46 insertions(+), 102 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm
+0xef/0x166
[ 696.960128] RSP
[ 697.001768] CR2: ea004000
[ 697.041336] ---[ end trace e7f94e3a34c442d4 ]---
[ 697.096474] Kernel panic - not syncing: Fatal exception
Signed-off-by: Wen Congyang
Signed-off-by: Tang Chen
---
mm/sparse.c |2 +-
1 files changed, 1 insertions(+), 1 dele
Make the comments in drivers/firmware/memmap.c kernel-doc compliant.
Reported-by: Julian Calaby
Signed-off-by: Tang Chen
---
drivers/firmware/memmap.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/firmware/memmap.c b/drivers/firmware/memmap.c
index
will panic the next time memory is hot-added.
patch3: the old way of freeing pagetable pages was wrong. We should never
split larger pages into small ones.
Lai Jiangshan (1):
Bug-fix: mempolicy: fix is_valid_nodemask()
Tang Chen (3):
Bug fix: Do not split pages when freeing
This patch makes sure bootmem will not allocate memory from areas that
may be ZONE_MOVABLE. The map info comes from the movablecore_map boot option.
Signed-off-by: Tang Chen
Reviewed-by: Wen Congyang
Reviewed-by: Lai Jiangshan
Tested-by: Lin Feng
---
include/linux/memblock.h |2 +
mm/memblock.c
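Roughly, the check looks like this (a sketch assuming movablecore_map is an
array of {start_pfn, end_pfn} ranges as introduced earlier in the series;
range_overlaps_movable() is an illustrative name): memblock skips any
candidate range that overlaps a user-specified movable range.

static bool __init range_overlaps_movable(phys_addr_t base, phys_addr_t size)
{
	unsigned long start_pfn = PFN_DOWN(base);
	unsigned long end_pfn = PFN_UP(base + size);
	int i;

	for (i = 0; i < movablecore_map.nr_map; i++) {
		if (start_pfn < movablecore_map.map[i].end_pfn &&
		    end_pfn > movablecore_map.map[i].start_pfn)
			return true;	/* overlaps a would-be ZONE_MOVABLE area */
	}
	return false;
}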
Hi Stephen,
On 01/21/2013 02:08 PM, Stephen Rothwell wrote:
Hi all,
After merging the final tree, today's linux-next build (arm defconfig)
failed like this:
mm/memblock.c: In function 'memblock_find_in_range_node':
mm/memblock.c:104:2: error: invalid use of undefined type 'struct
movablecore_
On 01/18/2013 03:38 PM, Yasuaki Ishimatsu wrote:
2013/01/18 15:25, H. Peter Anvin wrote:
We already do DMI parsing in the kernel...
Thank you for the information.
Do you mean /sys/firmware/dmi/entries?
If so, my box does not have memory information.
My box has only type 0, 1, 2, 3,
On 01/17/2013 06:52 AM, H. Peter Anvin wrote:
On 01/16/2013 01:29 PM, Andrew Morton wrote:
Yes. If SRAT support is available, all memory with the hotpluggable
bit enabled is managed by ZONE_MOVABLE. But performance degradation may
occur with NUMA because we can only allocate anonymous pages and page-
On 01/16/2013 06:26 AM, Julian Calaby wrote:
Hi Tang,
One minor point.
/*
- * Search memmap entry
+ * firmware_map_find_entry: Search memmap entry.
+ * @start: Start of the memory range.
+ * @end: End of the memory range (exclusive).
+ * @type: Type of the memory range.
+ *
+ * This func
Now we have a map_entries_lock to protect the map_entries list.
So we need to update the comments.
Signed-off-by: Tang Chen
---
drivers/firmware/memmap.c |6 +-
1 files changed, 1 insertions(+), 5 deletions(-)
diff --git a/drivers/firmware/memmap.c b/drivers/firmware/memmap.c
index 940c4e9
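For reference, the locking pattern the updated comments describe is simply
(sketch; firmware_map_add_entry_locked is an illustrative name, not the
driver's API):

static LIST_HEAD(map_entries);
static DEFINE_SPINLOCK(map_entries_lock);

static void firmware_map_add_entry_locked(struct firmware_map_entry *entry)
{
	spin_lock(&map_entries_lock);
	list_add_tail(&entry->list, &map_entries);
	spin_unlock(&map_entries_lock);
}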
Signed-off-by: Tang Chen
Signed-off-by: Wen Congyang
---
arch/x86/mm/init_64.c | 80 +++-
1 files changed, 71 insertions(+), 9 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index e77d312..ff0206b 100644
--- a/arch/x86/mm/ini
link the map entries
allocated by bootmem when they are removed, and a lock to protect it.
These entries will be reused when the memory is hot-added again.
The idea was suggested by Andrew Morton
Signed-off-by: Tang Chen
---
drivers/firmware/memmap.c |
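The reuse scheme, sketched (list and function names are illustrative; callers
are assumed to hold map_entries_lock): bootmem-allocated entries cannot be
kfree()d, so on removal they are parked on a separate list and picked up
again if the same range is hot-added later.

static LIST_HEAD(map_entries_bootmem);

static struct firmware_map_entry * __meminit
firmware_map_find_removed_entry(u64 start, u64 end, const char *type)
{
	struct firmware_map_entry *entry;

	list_for_each_entry(entry, &map_entries_bootmem, list)
		if (entry->start == start && entry->end == end &&
		    !strcmp(entry->type, type))
			return entry;	/* reuse instead of allocating */
	return NULL;
}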
Hi Andrew,
Here are some bug-fix patches for the physical hot-remove patches.
There are also some new issues reported by others; I'll fix them soon.
Thanks. :)
Tang Chen (6):
Bug fix: Hold spinlock across find|remove /sys/firmware/memmap/X
operation.
Bug fix: Do not calculate direct ma
these two functions need to be careful to hold the lock when using
these two functions.
The suggestion is from Andrew Morton
Signed-off-by: Tang Chen
---
drivers/firmware/memmap.c | 25 +
1 files changed, 17 insertions(+), 8 deletions(-)
diff --git a/drivers/firmware
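A sketch of the race fix (the helper signature is assumed; the real patch may
differ in detail): find and remove must happen in one critical section,
otherwise another CPU can free the entry between the two calls.

int __meminit firmware_map_remove(u64 start, u64 end, const char *type)
{
	struct firmware_map_entry *entry;

	spin_lock(&map_entries_lock);
	entry = firmware_map_find_entry_in_list(start, end, type, &map_entries);
	if (!entry) {
		spin_unlock(&map_entries_lock);
		return -EINVAL;
	}
	list_del(&entry->list);
	spin_unlock(&map_entries_lock);

	kobject_put(&entry->kobj);	/* release outside the spinlock */
	return 0;
}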
Direct mapped pages were already freed when they were offlined, or they
were never allocated. So we only need to free vmemmap pages; there is no
need to free direct mapped pages.
Signed-off-by: Tang Chen
---
arch/x86/mm/init_64.c |9 +++--
1 files changed, 7 insertions(+), 2 deletions(-)
diff --git a
We only need to update direct_pages_count[level] when we are freeing direct
mapped pagetables.
Signed-off-by: Tang Chen
---
arch/x86/mm/init_64.c | 17 +++--
1 files changed, 7 insertions(+), 10 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index e829113
sure ZONE_MOVABLE won't use these addresses. Suggested by Wu Jianguo
3) Add a lowmem address check: when the system has highmem, make sure
ZONE_MOVABLE won't use lowmem. Suggested by Liu Jiang
4) Fix misuse of pfns in movablecore_map.map[] as physical addresses.
Tang Chen
find_usable_zone_for_movable() to free_area_init_nodes()
so that sanitize_zone_movable_limit() in patch 3 could use the
initialized movable_zone.
Reported-by: Wu Jianguo
Signed-off-by: Tang Chen
Reviewed-by: Wen Congyang
Reviewed-by: Lai Jiangshan
Tested-by: Lin Feng
---
mm/page_alloc.c | 28
each node.
change log:
Call find_usable_zone_for_movable() to initialize movable_zone
so that sanitize_zone_movable_limit() could use it.
Reported-by: Wu Jianguo
Signed-off-by: Tang Chen
Signed-off-by: Liu Jiang
Reviewed-by: Wen Congyang
Reviewed-by: Lai Jiangshan
Tested-by: Lin Feng
---
mm
fails.
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Lai Jiangshan
Signed-off-by: Tang Chen
Signed-off-by: Jiang Liu
---
arch/x86/mm/numa.c |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 2d125be..db939b6 100644
--- a/arch/x
This patch makes sure bootmem will not allocate memory from areas that
may be ZONE_MOVABLE. The map info comes from the movablecore_map boot option.
Signed-off-by: Tang Chen
Reviewed-by: Wen Congyang
Reviewed-by: Lai Jiangshan
Tested-by: Lin Feng
---
include/linux/memblock.h |1 +
mm/memblock.c
.
Signed-off-by: Tang Chen
Signed-off-by: Lai Jiangshan
Reviewed-by: Wen Congyang
Tested-by: Lin Feng
---
Documentation/kernel-parameters.txt | 17 +
include/linux/mm.h | 11 +++
mm/page_alloc.c | 126 +++
3 files
On 01/11/2013 08:12 PM, Michal Hocko wrote:
On Fri 11-01-13 20:06:25, Tang Chen wrote:
On 01/11/2013 06:47 PM, Michal Hocko wrote:
Darn! And now that I am looking at the patch closer it is too x86
centric so this cannot be in the generic code. I will try to cook
something better. Sorry about
On 01/11/2013 06:47 PM, Michal Hocko wrote:
Darn! And now that I am looking at the patch closer it is too x86
centric so this cannot be in the generic code. I will try to cook
something better. Sorry about the noise.
It is more complicated than I thought. One could say it's a mess.
The patch
Hi Michal,
Thank you very much for the nice catch. :)
On 01/11/2013 05:56 PM, Michal Hocko wrote:
Defconfig for x86_64 complains
arch/x86/mm/init_64.c: In function ‘register_page_bootmem_memmap’:
arch/x86/mm/init_64.c:1340: error: implicit declaration of function
‘get_page_bootmem’
arch/x86/mm
Hi Andrew,
On 01/10/2013 07:19 AM, Andrew Morton wrote:
...
+ entry = firmware_map_find_entry(start, end - 1, type);
+ if (!entry)
+ return -EINVAL;
+
+ firmware_map_remove_entry(entry);
...
The above code looks racy. After firmware_map_find_entry() does the
Hi Andrew,
On 01/10/2013 07:11 AM, Andrew Morton wrote:
On Wed, 9 Jan 2013 17:32:26 +0800
Tang Chen wrote:
We remove the memory like this:
1. lock memory hotplug
2. offline a memory block
3. unlock memory hotplug
4. repeat 1-3 to offline all memory blocks
5. lock memory hotplug
6. remove
Hi Andrew,
On 01/10/2013 06:49 AM, Andrew Morton wrote:
On Wed, 9 Jan 2013 17:32:28 +0800
Tang Chen wrote:
When (hot)adding memory into system, /sys/firmware/memmap/X/{end, start, type}
sysfs files are created. But there is no code to remove these files. The patch
implements the function to
Hi Andrew,
On 01/10/2013 06:50 AM, Andrew Morton wrote:
On Wed, 9 Jan 2013 17:32:29 +0800
Tang Chen wrote:
For removing memory, we need to remove the page table. But it depends
on the architecture. So the patch introduces arch_remove_memory() for
removing the page table. Now it only calls __remove_pages
Hi Andrew,
On 01/10/2013 07:33 AM, Andrew Morton wrote:
On Wed, 9 Jan 2013 17:32:24 +0800
Tang Chen wrote:
This patch-set aims to implement physical memory hot-removing.
As you were on the patch delivery path, all of these patches should have
your Signed-off-by:. But some were missing it
Hi Andrew,
Thank you very much for your pushing. :)
On 01/10/2013 06:23 AM, Andrew Morton wrote:
This does sound like a significant problem. We should assume that
memcg is available and in use.
In patch1, we provide a solution which is not good enough:
Iterate twice to offline the memory.
1
Hi Glauber,
On 01/09/2013 11:09 PM, Glauber Costa wrote:
We try to make all page_cgroup allocations local to the node they are describing
now. If the memory is the first memory onlined in this node, we will allocate
it from the other node.
For example, node1 has 4 memory blocks: 8-11, and we o
Currently __remove_section for SPARSEMEM_VMEMMAP does nothing. But even if
we use SPARSEMEM_VMEMMAP, we can unregister the memory_section.
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Wen Congyang
Signed-off-by: Tang Chen
---
mm/memory_hotplug.c | 11 ---
1 files changed, 0
when the node is online again.
The problem was pointed out by Kamezawa Hiroyuki
The idea is from Wen Congyang
NOTE: If we don't reset pgdat to 0, the WARN_ON in free_area_init_node()
will be triggered.
Signed-off-by: Tang Chen
Reviewed-by: Wen Congyang
---
mm/memory_hotplug.c |
From: Wen Congyang
memory can't be offlined when CONFIG_MEMCG is selected.
For example: there is a memory device on node 1. The address range
is [1G, 1.5G). You will find 4 new directories memory8, memory9, memory10,
and memory11 under the directory /sys/devices/system/memory/.
If CONFIG_MEMCG i
From: Wen Congyang
offlining memory blocks and checking whether memory blocks are offlined
are very similar. This patch introduces a new function to remove
redundant code.
Signed-off-by: Wen Congyang
Signed-off-by: Tang Chen
Reviewed-by: Kamezawa Hiroyuki
---
mm/memory_hotplug.c | 129
0x190
[ 457.764552] [] ? __init_kthread_worker+0x70/0x70
[ 457.839427] [] ret_from_fork+0x7c/0xb0
[ 457.903914] [] ? __init_kthread_worker+0x70/0x70
[ 457.978784] ---[ end trace 25e85300f542aa01 ]---
Signed-off-by: Tang Chen
Signed-off-by: Lai Jiangshan
Signed-off-by: Wen Congyang
Ack
.
Patch3: new patch, no logical change, just removes redundant code.
Patch9: merge the patch from wujianguo into this patch. Flush the TLB on all
CPUs after the pagetable is changed.
Patch12: new patch, free node_data when a node is offlined.
Tang Chen (6):
memory-hotplug: mo
cleared.
In this case, the page used as PT/PMD can be freed.
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Jianguo Wu
Signed-off-by: Wen Congyang
Signed-off-by: Tang Chen
---
arch/x86/include/asm/pgtable_types.h |1 +
arch/x86/mm/init_64.c | 299
From: Yasuaki Ishimatsu
For removing the memmap region of sparse-vmemmap which is allocated from
bootmem, the memmap region of sparse-vmemmap needs to be registered by
get_page_bootmem(). So the patch searches the pages of the virtual mapping
and registers them with get_page_bootmem().
Note: register_page_bootmem
bootmem.
So the patch introduces a memory leak. But I think the leaked memory is
very small, and it does not affect the system.
Signed-off-by: Wen Congyang
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Tang Chen
Reviewed-by: Kamezawa Hiroyuki
---
drivers/firmware/memmap.c | 96
memory. But we don't hold
the lock across the whole operation. So we should check whether all memory
blocks are offlined before step 6. Otherwise, the kernel may panic.
Signed-off-by: Wen Congyang
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Tang Chen
Acked-by: KAMEZAWA Hiroyuki
---
dr
From: Yasuaki Ishimatsu
When memory is added, we update the zone's and pgdat's start_pfn and
spanned_pages in the function __add_zone(). So we should revert them
when the memory is removed.
The patch adds a new function __remove_zone() to do this.
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by:
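A sketch of the reverse operation (shrink_zone_span()/shrink_pgdat_span() are
helpers assumed to be introduced by this patch; simplified to one section):

static void __remove_zone(struct zone *zone, unsigned long start_pfn)
{
	struct pglist_data *pgdat = zone->zone_pgdat;
	unsigned long nr_pages = PAGES_PER_SECTION;
	unsigned long flags;

	pgdat_resize_lock(pgdat, &flags);
	shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
	shrink_pgdat_span(pgdat, start_pfn, start_pfn + nr_pages);
	pgdat_resize_unlock(pgdat, &flags);
}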
sparc.
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Jianguo Wu
Signed-off-by: Wen Congyang
Signed-off-by: Tang Chen
---
arch/arm64/mm/mmu.c |3 +++
arch/ia64/mm/discontig.c |4
arch/powerpc/mm/init_64.c |4
arch/s390/mm/vmem.c |4
arch/sparc/mm
From: Wen Congyang
For removing memory, we need to remove the page table. But it depends
on the architecture. So the patch introduces arch_remove_memory() for
removing the page table. Now it only calls __remove_pages().
Note: __remove_pages() for some architectures is not implemented
(I don't know how t
This patch introduces a new function try_offline_node() to
remove the sysfs file of a node when all memory sections of this
node are removed. If some memory sections of this node are
not removed, this function does nothing.
Signed-off-by: Wen Congyang
Signed-off-by: Tang Chen
---
drivers/acpi
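The idea, roughly (a simplified sketch, not the exact patch):

void try_offline_node(int nid)
{
	pg_data_t *pgdat = NODE_DATA(nid);
	unsigned long start_pfn = pgdat->node_start_pfn;
	unsigned long end_pfn = start_pfn + pgdat->node_spanned_pages;
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
		unsigned long section_nr = pfn_to_section_nr(pfn);

		if (!present_section_nr(section_nr))
			continue;
		if (pfn_to_nid(pfn) != nid)
			continue;
		return;		/* this node still has memory; do nothing */
	}

	/* all memory sections of this node are gone */
	unregister_one_node(nid);
}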
From: Wen Congyang
We call hotadd_new_pgdat() to allocate memory to store node_data. So we
should free it when removing a node.
Signed-off-by: Wen Congyang
Reviewed-by: Kamezawa Hiroyuki
---
mm/memory_hotplug.c | 30 +++---
1 files changed, 27 insertions(+), 3 deleti
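A sketch of the complementary cleanup (free_pgdat() is an illustrative name;
arch_free_nodedata()/arch_refresh_nodedata() are the generic node_data
helpers): once the node is fully offline, give back the pgdat allocated by
hotadd_new_pgdat().

static void free_pgdat(int nid)
{
	pg_data_t *pgdat = NODE_DATA(nid);

	if (!pgdat)
		return;
	arch_free_nodedata(pgdat);
	arch_refresh_nodedata(nid, NULL);	/* clear node_data[nid] */
}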
This patch searches the page tables for the removed memory, and clears
the page table entries, for the x86_64 architecture.
Signed-off-by: Wen Congyang
Signed-off-by: Jianguo Wu
Signed-off-by: Jiang Liu
Signed-off-by: Tang Chen
---
arch/x86/mm/init_64.c | 10 ++
1 files changed, 10 insertions(+), 0
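A sketch of the x86_64 entry point (remove_pagetable() is the page-table
walker assumed to be added by this series; names may differ from the real
code):

static void __meminit kernel_physical_mapping_remove(unsigned long start,
						     unsigned long end)
{
	start = (unsigned long)__va(start);
	end = (unsigned long)__va(end);

	/* tear down the PTE/PMD/PUD entries covering the removed range */
	remove_pagetable(start, end, true);	/* true: direct mapping */
	flush_tlb_all();
}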
On 12/26/2012 11:10 AM, Kamezawa Hiroyuki wrote:
> (2012/12/24 21:09), Tang Chen wrote:
>> From: Yasuaki Ishimatsu
>>
>> We remove the memory like this:
>> 1. lock memory hotplug
>> 2. offline a memory block
>> 3. unlock memory hotplug
>> 4. repea
Hi Kamezawa-san,
Thanks for the reviewing. Please see below. :)
On 12/26/2012 11:20 AM, Kamezawa Hiroyuki wrote:
> (2012/12/24 21:09), Tang Chen wrote:
>> From: Wen Congyang
>>
>> offlining memory blocks and checking whether memory blocks are offlined
>> are very simil
On 12/26/2012 11:30 AM, Kamezawa Hiroyuki wrote:
>> @@ -41,6 +42,7 @@ struct firmware_map_entry {
>> const char *type; /* type of the memory range */
>> struct list_head list; /* entry for the linked list */
>> struct kobject kobj; /* kobject for eac
t a discussion soon.
Thanks. :)
> Thanks,
> Yasuaki Ishimatsu
>
> 2012/12/19 18:11, Tang Chen wrote:
>> The Hot Pluggable bit in the SRAT flags specifies whether the memory range
>> could be hotplugged.
>>
>> If user specified movablecore_map=nn[KMG]@ss[KMG], reset
>>
On 12/26/2012 11:47 AM, Kamezawa Hiroyuki wrote:
> (2012/12/24 21:09), Tang Chen wrote:
>> In __remove_section(), we locked pgdat_resize_lock when calling
>> sparse_remove_one_section(). This lock will disable irq. But we don't need
>> to lock the whole function.
On 12/25/2012 04:09 PM, Jianguo Wu wrote:
+
+ if (!cpu_has_pse) {
+ next = (addr + PAGE_SIZE) & PAGE_MASK;
+ pmd = pmd_offset(pud, addr);
+ if (pmd_none(*pmd))
+ continue;
+
On 12/26/2012 11:11 AM, Tang Chen wrote:
On 12/26/2012 10:49 AM, Tang Chen wrote:
On 12/25/2012 04:17 PM, Jianguo Wu wrote:
+
+static void __meminit free_pagetable(struct page *page, int order)
+{
+ struct zone *zone;
+ bool bootmem = false;
+ unsigned long magic;
+
+ /* bootmem page has
On 12/26/2012 10:49 AM, Tang Chen wrote:
On 12/25/2012 04:17 PM, Jianguo Wu wrote:
+
+static void __meminit free_pagetable(struct page *page, int order)
+{
+ struct zone *zone;
+ bool bootmem = false;
+ unsigned long magic;
+
+ /* bootmem page has reserved flag */
+ if (PageReserved(page
On 12/25/2012 04:17 PM, Jianguo Wu wrote:
+
+static void __meminit free_pagetable(struct page *page, int order)
+{
+ struct zone *zone;
+ bool bootmem = false;
+ unsigned long magic;
+
+ /* bootmem page has reserved flag */
+ if (PageReserved(page)) {
+
From: Wen Congyang
For removing memory, we need to remove the page table. But it depends
on the architecture. So the patch introduces arch_remove_memory() for
removing the page table. Now it only calls __remove_pages().
Note: __remove_pages() for some architectures is not implemented
(I don't know how t
From: Wen Congyang
offlining memory blocks and checking whether memory blocks are offlined
are very similar. This patch introduces a new function to remove
redundant code.
Signed-off-by: Wen Congyang
---
mm/memory_hotplug.c | 101 ---
1 files c
emory
block.
Patch3: new patch, no logical change, just removes redundant code.
Patch9: merge the patch from wujianguo into this patch. Flush the TLB on all
CPUs after the pagetable is changed.
Patch12: new patch, free node_data when a node is offlined.
Tang Chen (5):
memory-hotplug:
Otherwise the WARN_ON_ONCE() in smp_call_function_many()
will be triggered.
Signed-off-by: Tang Chen
Signed-off-by: Lai Jiangshan
Signed-off-by: Wen Congyang
---
mm/memory_hotplug.c |4
mm/sparse.c |5 -
2 files changed, 4 insertions(+), 5 deletions(-)
diff --git
This patch searches the page tables for the removed memory, and clears
the page table entries, for the x86_64 architecture.
Signed-off-by: Wen Congyang
Signed-off-by: Jianguo Wu
Signed-off-by: Jiang Liu
Signed-off-by: Tang Chen
---
arch/x86/mm/init_64.c | 10 ++
1 files changed, 10 insertions(+), 0
From: Yasuaki Ishimatsu
For removing the memmap region of sparse-vmemmap which is allocated from
bootmem, the memmap region of sparse-vmemmap needs to be registered by
get_page_bootmem(). So the patch searches the pages of the virtual mapping
and registers them with get_page_bootmem().
Note: register_page_bootmem
sparc.
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Jianguo Wu
Signed-off-by: Wen Congyang
Signed-off-by: Tang Chen
---
arch/arm64/mm/mmu.c |3 +++
arch/ia64/mm/discontig.c |4
arch/powerpc/mm/init_64.c |4
arch/s390/mm/vmem.c |4
arch/sparc/mm
cleared.
In this case, the page used as PT/PMD can be freed.
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Jianguo Wu
Signed-off-by: Wen Congyang
Signed-off-by: Tang Chen
---
arch/x86/include/asm/pgtable_types.h |1 +
arch/x86/mm/init_64.c | 297
From: Wen Congyang
We call hotadd_new_pgdat() to allocate memory to store node_data. So we
should free it when removing a node.
Signed-off-by: Wen Congyang
---
mm/memory_hotplug.c | 20 +++-
1 files changed, 19 insertions(+), 1 deletions(-)
diff --git a/mm/memory_hotplug.c b
From: Yasuaki Ishimatsu
When (hot)adding memory into system, /sys/firmware/memmap/X/{end, start, type}
sysfs files are created. But there is no code to remove these files. The patch
implements the function to remove them.
Note: The code does not free firmware_map_entry which is allocated by boot
Currently __remove_section for SPARSEMEM_VMEMMAP does nothing. But even if
we use SPARSEMEM_VMEMMAP, we can unregister the memory_section.
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Wen Congyang
Signed-off-by: Tang Chen
---
mm/memory_hotplug.c | 11 ---
1 files changed, 0
From: Wen Congyang
memory can't be offlined when CONFIG_MEMCG is selected.
For example: there is a memory device on node 1. The address range
is [1G, 1.5G). You will find 4 new directories memory8, memory9, memory10,
and memory11 under the directory /sys/devices/system/memory/.
If CONFIG_MEMCG i
From: Yasuaki Ishimatsu
When memory is added, we update the zone's and pgdat's start_pfn and
spanned_pages in the function __add_zone(). So we should revert them
when the memory is removed.
The patch adds a new function __remove_zone() to do this.
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by:
This patch introduces a new function try_offline_node() to
remove the sysfs file of a node when all memory sections of this
node are removed. If some memory sections of this node are
not removed, this function does nothing.
Signed-off-by: Wen Congyang
Signed-off-by: Tang Chen
---
drivers/acpi
From: Yasuaki Ishimatsu
We remove the memory like this:
1. lock memory hotplug
2. offline a memory block
3. unlock memory hotplug
4. repeat 1-3 to offline all memory blocks
5. lock memory hotplug
6. remove memory(TODO)
7. unlock memory hotplug
All memory blocks must be offlined before removing m
Hi Randy,
Thank you for your reviewing. :)
I think this boot option has been dropped. And we are implementing a new
boot option called "movablecore_map" to replace it.
Please refer to the following url if you like:
https://lkml.org/lkml/2012/12/19/51
Thanks. :)
On 12/20/2012 03:26 AM, Randy
order.
*/
-static void __init insert_movablecore_map(unsigned long start_pfn,
+void __init insert_movablecore_map(unsigned long start_pfn,
unsigned long end_pfn)
{
int pos, overlap;
-- 1.7.6.1
.
Thanks,
Jianguo Wu
On 2012-11-23 18:44, Tang Chen
On 12/19/2012 04:15 PM, Tang Chen wrote:
The Hot Pluggable bit in the SRAT flags specifies whether the memory range
could be hotplugged.
If user specified movablecore_map=nn[KMG]@ss[KMG], reset
movablecore_map.map to the intersection of hotpluggable ranges from
SRAT and old movablecore_map.map.
Else if
the hotpluggable
ranges from SRAT.
Otherwise, do nothing. The kernel will use all the memory in all nodes
evenly.
The idea "getting info from SRAT" was from Liu Jiang .
And the idea "do more limit for memblock" was from Wu Jianguo
Signed-off-by: Tang Chen
Tested-by: Gu Zhe
each node.
Signed-off-by: Tang Chen
Signed-off-by: Liu Jiang
Reviewed-by: Wen Congyang
Reviewed-by: Lai Jiangshan
Tested-by: Lin Feng
---
mm/page_alloc.c | 79 ++-
1 files changed, 78 insertions(+), 1 deletions(-)
diff --git a/mm
This patch makes sure bootmem will not allocate memory from areas that
may be ZONE_MOVABLE. The map info comes from the movablecore_map boot option.
Signed-off-by: Tang Chen
Signed-off-by: Lai Jiangshan
Reviewed-by: Wen Congyang
Tested-by: Lin Feng
---
include/linux/memblock.h |1 +
mm
core_map boot option.
Since the option could be specified more than once, all the maps will
be stored in the global variable movablecore_map.map array.
Also, we keep the array in monotonically increasing order by start_pfn
and merge all overlapped ranges.
Signed-off-by: Tang Chen
Signed-off-by: Lai
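The insert-and-merge behaviour described above, as a self-contained sketch
(the entry struct, array size and function name are illustrative, not the
patch's code):

#include <linux/kernel.h>
#include <linux/string.h>

struct movable_entry { unsigned long start_pfn, end_pfn; };
#define MOVABLE_MAX 32

static struct movable_entry movable_map[MOVABLE_MAX];
static int movable_nr;

/* Insert [start_pfn, end_pfn), keeping the array sorted by start_pfn and
 * merging any ranges that overlap or touch the new one. */
static void __init insert_movable_range(unsigned long start_pfn,
					unsigned long end_pfn)
{
	int i, j;

	/* first entry that ends at or after our start */
	for (i = 0; i < movable_nr && movable_map[i].end_pfn < start_pfn; i++)
		;

	/* absorb every entry overlapping [start_pfn, end_pfn) */
	j = i;
	while (j < movable_nr && movable_map[j].start_pfn <= end_pfn) {
		start_pfn = min(start_pfn, movable_map[j].start_pfn);
		end_pfn = max(end_pfn, movable_map[j].end_pfn);
		j++;
	}

	if (j == i && movable_nr >= MOVABLE_MAX)
		return;		/* table full; sketch only, no error handling */

	/* close the gap left by merged entries and store the result */
	memmove(&movable_map[i + 1], &movable_map[j],
		(movable_nr - j) * sizeof(movable_map[0]));
	movable_nr -= (j - i) - 1;
	movable_map[i].start_pfn = start_pfn;
	movable_map[i].end_pfn = end_pfn;
}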
the hotpluggable
ranges from SRAT.
Otherwise, do nothing. The kernel will use all the memory in all nodes
evenly.
The idea "getting info from SRAT" was from Liu Jiang .
And the idea "do more limit for memblock" was from Wu Jianguo
Signed-off-by: Tang Chen
Tested-by: Gu Zhe
If kernelcore or movablecore is specified at the same time
as movablecore_map, movablecore_map will have higher
priority to be satisfied.
This patch makes find_zone_movable_pfns_for_nodes()
calculate zone_movable_pfn[] with the limit from
zone_movable_limit[].
Signed-off-by: Tang Chen
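In effect (sketch only; zone_movable_limit[] is the per-node limit the
earlier patch computes from the movable map, and the function name is
illustrative): after kernelcore/movablecore have computed zone_movable_pfn[],
clamp it so ZONE_MOVABLE never starts above the limit, i.e. the user-requested
ranges always end up movable.

static void __init adjust_zone_movable_pfn(void)
{
	int nid;

	for_each_node_state(nid, N_MEMORY) {
		if (!zone_movable_limit[nid])
			continue;	/* no movable range requested here */
		if (!zone_movable_pfn[nid] ||
		    zone_movable_pfn[nid] > zone_movable_limit[nid])
			zone_movable_pfn[nid] = zone_movable_limit[nid];
	}
}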
lowmem. Suggested by Liu Jiang
4) Fix misuse of pfns in movablecore_map.map[] as physical addresses.
Tang Chen (5):
page_alloc: add movablecore_map kernel parameter
ACPI: Restructure movablecore_map with memory info from SRAT.
page_alloc: Introduce zone_movable_limit[] to keep movable li
fails.
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Lai Jiangshan
Signed-off-by: Tang Chen
Signed-off-by: Jiang Liu
---
arch/x86/mm/numa.c |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 2d125be..db939b6 100644
--- a/arch/x
rman
Cc: David Rientjes
Cc: Yinghai Lu
Cc: Rusty Russell
Cc: Greg KH
Signed-off-by: Lai Jiangshan
Signed-off-by: Wen Congyang
Signed-off-by: Andrew Morton
Signed-off-by: Tang Chen
Tested-by: Yasuaki Ishimatsu
---
mm/memory_hotplug.c | 22 ++
1 files changed, 22 inser
The first patch adds help info for CONFIG_MOVABLE_NODE option.
The second patch disables this option by default.
change log v1 -> v2:
Fix spelling comments from Ingo.
Tang Chen (2):
memory-hotplug: Add help info for CONFIG_MOVABLE_NODE option
memory-hotplug: Disable CONFIG_MOVABLE_NODE opt
Signed-off-by: Tang Chen
Reviewed-by: Yasuaki Ishimatsu
Acked-by: Ingo Molnar
---
mm/Kconfig | 10 ++
1 file changed, 10 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index 71259e0..491 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -150,6 +150,16 @@ config MOVABLE_NODE
This patch sets CONFIG_MOVABLE_NODE to "default n" instead of
"depends on BROKEN".
Signed-off-by: Tang Chen
Reviewed-by: Yasuaki Ishimatsu
---
mm/Kconfig |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 491..bbd6bfa 100
The first patch adds help info for CONFIG_MOVABLE_NODE option.
The second patch disables this option by default.
Tang Chen (2):
memory-hotplug: Add help info for CONFIG_MOVABLE_NODE option
memory-hotplug: Disable CONFIG_MOVABLE_NODE option by default.
mm/Kconfig | 12 +++-
1 file
Signed-off-by: Tang Chen
Reviewed-by: Yasuaki Ishimatsu
---
mm/Kconfig | 10 ++
1 file changed, 10 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index 71259e0..2ad51cb 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -150,6 +150,16 @@ config MOVABLE_NODE
depends on X86_64
This patch sets CONFIG_MOVABLE_NODE to "default n" instead of
"depends on BROKEN".
Signed-off-by: Tang Chen
Reviewed-by: Yasuaki Ishimatsu
---
mm/Kconfig |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 2ad51cb..1ed3295 100
On 12/13/2012 08:28 AM, Simon Jeons wrote:
On Wed, 2012-12-12 at 18:32 +0800, Tang Chen wrote:
Hi Simon,
On 12/12/2012 05:29 PM, Simon Jeons wrote:
Thanks for your clarify.
Enable PAE on x86 32bit kernel, 8G memory, movablecore=6.5G
Could you please provide more info?
Such as the whole
Hi Simon,
On 12/12/2012 05:29 PM, Simon Jeons wrote:
Thanks for your clarify.
Enable PAE on x86 32bit kernel, 8G memory, movablecore=6.5G
Could you please provide more info? Such as the whole kernel command line.
And did this happen after you applied these patches? What is the output witho