On 04/03/2019 02:47 PM, Michal Hocko wrote:
> On Wed 03-04-19 10:00:04, Anshuman Khandual wrote:
>> Memory hot remove uses get_nid_for_pfn() while tearing down linked sysfs
>> entries between memory block and node. It first checks pfn validity with
>> pfn_valid_within() before fetching nid. With CONFIG_HOLES_IN_ZONE config
>> (arm64 has this enabled) pfn_valid_within() calls pfn_valid().
>>
>> pfn_valid() is an arch implementation on arm64 (CONFIG_HAVE_ARCH_PFN_VALID)
>> which scans all mapped memblock regions with memblock_is_map_memory(). This
>> creates a problem in memory hot remove path which has already removed given
>> memory range from memory block with memblock_[remove|free] before arriving
>> at unregister_mem_sect_under_nodes(). Hence get_nid_for_pfn() returns -1
>> skipping subsequent sysfs_remove_link() calls leaving node <-> memory block
>> sysfs entries as is. Subsequent memory add operation hits BUG_ON() because
>> of existing sysfs entries.
>>
>> [ 62.007176] NUMA: Unknown node for memory at 0x680000000, assuming node 0
>> [ 62.052517] ------------[ cut here ]------------
>> [ 62.053211] kernel BUG at mm/memory_hotplug.c:1143!
>> [ 62.053868] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
>> [ 62.054589] Modules linked in:
>> [ 62.054999] CPU: 19 PID: 3275 Comm: bash Not tainted
>> 5.1.0-rc2-00004-g28cea40b2683 #41
>> [ 62.056274] Hardware name: linux,dummy-virt (DT)
>> [ 62.057166] pstate: 40400005 (nZcv daif +PAN -UAO)
>> [ 62.058083] pc : add_memory_resource+0x1cc/0x1d8
>> [ 62.058961] lr : add_memory_resource+0x10c/0x1d8
>> [ 62.059842] sp : ffff0000168b3ce0
>> [ 62.060477] x29: ffff0000168b3ce0 x28: ffff8005db546c00
>> [ 62.061501] x27: 0000000000000000 x26: 0000000000000000
>> [ 62.062509] x25: ffff0000111ef000 x24: ffff0000111ef5d0
>> [ 62.063520] x23: 0000000000000000 x22: 00000006bfffffff
>> [ 62.064540] x21: 00000000ffffffef x20: 00000000006c0000
>> [ 62.065558] x19: 0000000000680000 x18: 0000000000000024
>> [ 62.066566] x17: 0000000000000000 x16: 0000000000000000
>> [ 62.067579] x15: ffffffffffffffff x14: ffff8005e412e890
>> [ 62.068588] x13: ffff8005d6b105d8 x12: 0000000000000000
>> [ 62.069610] x11: ffff8005d6b10490 x10: 0000000000000040
>> [ 62.070615] x9 : ffff8005e412e898 x8 : ffff8005e412e890
>> [ 62.071631] x7 : ffff8005d6b105d8 x6 : ffff8005db546c00
>> [ 62.072640] x5 : 0000000000000001 x4 : 0000000000000002
>> [ 62.073654] x3 : ffff8005d7049480 x2 : 0000000000000002
>> [ 62.074666] x1 : 0000000000000003 x0 : 00000000ffffffef
>> [ 62.075685] Process bash (pid: 3275, stack limit = 0x00000000d754280f)
>> [ 62.076930] Call trace:
>> [ 62.077411] add_memory_resource+0x1cc/0x1d8
>> [ 62.078227] __add_memory+0x70/0xa8
>> [ 62.078901] probe_store+0xa4/0xc8
>> [ 62.079561] dev_attr_store+0x18/0x28
>> [ 62.080270] sysfs_kf_write+0x40/0x58
>> [ 62.080992] kernfs_fop_write+0xcc/0x1d8
>> [ 62.081744] __vfs_write+0x18/0x40
>> [ 62.082400] vfs_write+0xa4/0x1b0
>> [ 62.083037] ksys_write+0x5c/0xc0
>> [ 62.083681] __arm64_sys_write+0x18/0x20
>> [ 62.084432] el0_svc_handler+0x88/0x100
>> [ 62.085177] el0_svc+0x8/0xc
>>
>> Re-ordering arch_remove_memory() with memblock_[free|remove] solves the
>> problem on arm64 as pfn_valid() behaves correctly and returns positive
>> as memblock for the address range still exists. arch_remove_memory()
>> removes applicable memory sections from zone with __remove_pages() and
>> tears down kernel linear mapping. Removing memblock regions afterwards
>> is consistent.
>
> consistent with what?
> Anyway, I believe you wanted to mention that this
> is safe because there is no other memblock (bootmem) allocator user that

Yes, that is what I intended, but I did not express it very well here.

> late. So nobody is going to allocate from the removed range just to blow
> up later. Also nobody should be using the bootmem allocated range else
> we wouldn't allow to remove it. So reordering is indeed safe.

Your wording looks better.

>
>> Signed-off-by: Anshuman Khandual <anshuman.khand...@arm.com>
>
> With a changelog updated to explain why this is safe

Sure, I will change it. Thanks for the commit message suggestion.
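
To make the updated changelog concrete, here is a minimal sketch of the
ordering being discussed. It is paraphrased from my reading of the 5.1-era
sources and this series, not the actual patch: the two wrapper functions,
the argument lists and the local names (nid, start, size) are illustrative
approximations only.

#include <linux/memblock.h>
#include <linux/memory_hotplug.h>
#include <linux/mm.h>

/*
 * arm64 pfn_valid() with CONFIG_HAVE_ARCH_PFN_VALID, roughly as in
 * arch/arm64/mm/init.c: the answer is derived from the memblock regions,
 * so it turns false as soon as the range has been memblock_remove()d.
 */
int pfn_valid(unsigned long pfn)
{
	phys_addr_t addr = pfn << PAGE_SHIFT;

	if ((addr >> PAGE_SHIFT) != pfn)
		return 0;

	return memblock_is_map_memory(addr);
}

/*
 * Illustrative sketch only, not the real hot-remove path.
 *
 * Current order: the memblock regions are dropped before
 * arch_remove_memory(), so by the time the sysfs teardown in
 * unregister_mem_sect_under_nodes() runs, get_nid_for_pfn() ->
 * pfn_valid_within() -> pfn_valid() fails and the node <-> memory
 * block links are left behind.
 */
static void teardown_current_order(int nid, u64 start, u64 size)
{
	memblock_free(start, size);
	memblock_remove(start, size);
	arch_remove_memory(nid, start, size, NULL);
}

/*
 * Proposed order: tear down the sections and the kernel linear mapping
 * first, while pfn_valid() still sees the range, and only then drop the
 * memblock regions. This is safe because nothing allocates from memblock
 * (bootmem) this late, so nobody can grab the removed range in the window
 * between the two steps.
 */
static void teardown_proposed_order(int nid, u64 start, u64 size)
{
	arch_remove_memory(nid, start, size, NULL);
	memblock_free(start, size);
	memblock_remove(start, size);
}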