Re: [PATCH part5 1/7] x86: get pg_data_t's memory from other node
On 08/12/2013 10:39 PM, Tejun Heo wrote:
> Hello,
>
> The subject is a bit misleading. Maybe it should say "allow getting ..."
> rather than "get ..."?

OK, I'll change it accordingly.

> On Thu, Aug 08, 2013 at 06:16:13PM +0800, Tang Chen wrote:
> ..
>
> I suppose the above three paragraphs are trying to say
>
> * A hotpluggable NUMA node may be composed of multiple memory devices
>   which individually are hot-pluggable.
>
> * The pg_data_t and page tables serving a NUMA node may be located in
>   the same node they're serving; however, if the node is composed of
>   multiple hotpluggable memory devices, the device containing them
>   should be the last one to be removed.
>
> * For physical memory hotplug, whole NUMA node hot-unplugging is fine;
>   however, in virtualized environments, finer-grained hot-unplugging is
>   desirable. Unfortunately, there is currently no way to tell which
>   specific memory device the pg_data_t and page tables are allocated
>   in, making it impossible to order the unplugging of a NUMA node's
>   memory devices. To avoid the ordering problem while allowing removal
>   of a subset of a NUMA node, it has been decided that the pg_data_t
>   and page tables should be allocated on a different, non-hotpluggable
>   NUMA node.
>
> Am I following it correctly? If so, can you please update the
> description? It's quite confusing.

Yes, you are right. I'll update the description.

> Also, the decision seems rather poorly made. It should be trivial to
> allocate memory for pg_data_t and page tables at one end of the NUMA
> node and just record the boundary to distinguish between the area which
> can be removed at any time and the area which can only be removed as a
> unit in the last step.

We have tried, but the hot-remove path is difficult to fix. Please refer to:
https://lkml.org/lkml/2013/6/13/249

Actually, the above patch set can achieve the movable node you describe.
But we ran into the following problems:

1. The device holding the page tables cannot be removed before the other
   devices. In a virtualization environment, this could be problematic.
   (https://lkml.org/lkml/2013/6/18/527)

2. It would break the semantics of memory_block online/offline. If part
   of a memory_block holds page tables and the block is offlined, what
   state should it report? My patches set it to offline, but the kernel
   is still using the memory.

I'm not saying it is not fixable. But we finally concluded that we could
implement movable nodes in the current way first and then improve them,
including local pgdat and page tables. That needs more discussion, but it
should not block memory hotplug development. So I suggest doing movable
nodes in the current way first, and improving them after that is done.

Thanks.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH part5 1/7] x86: get pg_data_t's memory from other node
Hello,

The subject is a bit misleading. Maybe it should say "allow getting ..."
rather than "get ..."?

On Thu, Aug 08, 2013 at 06:16:13PM +0800, Tang Chen wrote:
> A node could have several memory devices. And the device which holds the
> node data should be hot-removed last. But at the NUMA level, we don't
> know which memory_block (/sys/devices/system/node/nodeX/memoryXXX)
> belongs to which memory device. We only have nodes. So we can only do
> node hotplug.
>
> But in virtualization, developers are now developing memory hotplug in
> qemu, which supports hotplug of a single memory device. So whole-node
> hotplug will not satisfy virtualization users.
>
> So in the end, we concluded that we'd better do memory hotplug and the
> local node things (local node data, page tables, vmemmap, ...) in two
> steps. Please refer to https://lkml.org/lkml/2013/6/19/73

I suppose the above three paragraphs are trying to say

* A hotpluggable NUMA node may be composed of multiple memory devices
  which individually are hot-pluggable.

* The pg_data_t and page tables serving a NUMA node may be located in
  the same node they're serving; however, if the node is composed of
  multiple hotpluggable memory devices, the device containing them
  should be the last one to be removed.

* For physical memory hotplug, whole NUMA node hot-unplugging is fine;
  however, in virtualized environments, finer-grained hot-unplugging is
  desirable. Unfortunately, there is currently no way to tell which
  specific memory device the pg_data_t and page tables are allocated in,
  making it impossible to order the unplugging of a NUMA node's memory
  devices. To avoid the ordering problem while allowing removal of a
  subset of a NUMA node, it has been decided that the pg_data_t and page
  tables should be allocated on a different, non-hotpluggable NUMA node.

Am I following it correctly? If so, can you please update the
description? It's quite confusing.

Also, the decision seems rather poorly made. It should be trivial to
allocate memory for pg_data_t and page tables at one end of the NUMA node
and just record the boundary to distinguish between the area which can be
removed at any time and the area which can only be removed as a unit in
the last step.

Thanks.

--
tejun
[PATCH part5 1/7] x86: get pg_data_t's memory from other node
From: Yasuaki Ishimatsu

If the system creates a movable node, in which all of the node's memory is
allocated as ZONE_MOVABLE, setup_node_data() cannot allocate memory for
the node's pg_data_t. So use memblock_alloc_try_nid() instead of
memblock_alloc_nid(), which falls back to other nodes when the node-local
allocation fails. Otherwise, the system could fail to boot.

The node_data could be put on a hotpluggable node, and so could the page
tables and vmemmap. But for now, doing so would break the memory
hot-remove path.

A node could have several memory devices. And the device which holds the
node data should be hot-removed last. But at the NUMA level, we don't know
which memory_block (/sys/devices/system/node/nodeX/memoryXXX) belongs to
which memory device. We only have nodes. So we can only do node hotplug.

But in virtualization, developers are now developing memory hotplug in
qemu, which supports hotplug of a single memory device. So whole-node
hotplug will not satisfy virtualization users.

So in the end, we concluded that we'd better do memory hotplug and the
local node things (local node data, page tables, vmemmap, ...) in two
steps. Please refer to https://lkml.org/lkml/2013/6/19/73

For now, we put the node_data of a movable node on another node, and will
improve this in the future. In later patches, a boot option will be
introduced to enable/disable this functionality. If users disable it, the
node_data will still be put on the local node.

Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Lai Jiangshan
Signed-off-by: Tang Chen
Signed-off-by: Jiang Liu
Reviewed-by: Wanpeng Li
Reviewed-by: Zhang Yanfei
Acked-by: Toshi Kani
---
 arch/x86/mm/numa.c | 5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 8bf93ba..d532b6d 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -209,10 +209,9 @@ static void __init setup_node_data(int nid, u64 start, u64 end)
 	 * Allocate node data.  Try node-local memory and then any node.
 	 * Never allocate in DMA zone.
 	 */
-	nd_pa = memblock_alloc_nid(nd_size, SMP_CACHE_BYTES, nid);
+	nd_pa = memblock_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
 	if (!nd_pa) {
-		pr_err("Cannot find %zu bytes in node %d\n",
-		       nd_size, nid);
+		pr_err("Cannot find %zu bytes in any node\n", nd_size);
 		return;
 	}
 	nd = __va(nd_pa);
--
1.7.1