Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Laurent Dufour

Le 10/09/2020 à 13:12, Michal Hocko a écrit :

On Thu 10-09-20 09:51:39, Laurent Dufour wrote:

Le 10/09/2020 à 09:23, Michal Hocko a écrit :

On Wed 09-09-20 18:07:15, Laurent Dufour wrote:

Le 09/09/2020 à 12:59, Michal Hocko a écrit :

On Wed 09-09-20 11:21:58, Laurent Dufour wrote:

[...]

For point a, using the enum makes it possible to know in
register_mem_sect_under_node() whether the link operation is due to a hotplug
operation or was done at boot time.


Yes, but let me repeat. We have a mess here and different paths check
for the very same condition in different ways. We need to unify those.


What are you suggesting to unify these checks (using a MP_* enum as
suggested by David, something else)?


We do have system_state checks spread across different places. I would use
this one and wrap it behind a helper. Or have I missed a reason why
that wouldn't work for this case?


That would not work in this case because memory can be hot-added in the
SYSTEM_SCHEDULING system state, and regular memory is also registered in
that system state. So the system state is not enough to distinguish
between the two.


If that is really the case, all other places need a fix as well.
Btw. could you be more specific about memory hotplug during early boot?
How does that happen? I am only aware of
https://lkml.kernel.org/r/20200818110046.6664-1-osalva...@suse.de
and that doesn't happen as early as SYSTEM_SCHEDULING.


That point was raised by David; quoting him here:


IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.

Ccing Oscar, I think he mentioned recently that this is the case with ACPI.


Oscar said that he needs to investigate this further.

On my side I can't get these ACPI "early" hot-plug operations to happen, so I
can't check that.


If it is clear that ACPI memory hotplug doesn't happen at SYSTEM_SCHEDULING,
the patch I proposed at first is enough to fix the issue.


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread David Hildenbrand
On 10.09.20 14:47, Michal Hocko wrote:
> On Thu 10-09-20 14:03:48, Oscar Salvador wrote:
>> On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
>>  
>>> That point was raised by David; quoting him here:
>>>
 IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.

 Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
>>>
>>> Oscar said that he needs to investigate this further.
>>
>> I think my reply got lost.
>>
>> We can see acpi hotplugs during SYSTEM_SCHEDULING:
>>
>> $QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host 
>> -monitor pty \
>> -m size=$MEM,slots=255,maxmem=4294967296k  \
>> -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
>> -object memory-backend-ram,id=memdimm0,size=134217728 -device 
>> pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
>> -object memory-backend-ram,id=memdimm1,size=134217728 -device 
>> pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
>> -object memory-backend-ram,id=memdimm2,size=134217728 -device 
>> pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
>> -object memory-backend-ram,id=memdimm3,size=134217728 -device 
>> pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
>> -object memory-backend-ram,id=memdimm4,size=134217728 -device 
>> pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
>> -object memory-backend-ram,id=memdimm5,size=134217728 -device 
>> pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
>> -object memory-backend-ram,id=memdimm6,size=134217728 -device 
>> pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \
>>
>> kernel: [0.753643] __add_memory: nid: 0 start: 01 - 010800 
>> (size: 134217728)
>> kernel: [0.756950] register_mem_sect_under_node: system_state= 1
>>
>> kernel: [0.760811]  register_mem_sect_under_node+0x4f/0x230
>> kernel: [0.760811]  walk_memory_blocks+0x80/0xc0
>> kernel: [0.760811]  link_mem_sections+0x32/0x40
>> kernel: [0.760811]  add_memory_resource+0x148/0x250
>> kernel: [0.760811]  __add_memory+0x5b/0x90
>> kernel: [0.760811]  acpi_memory_device_add+0x130/0x300
>> kernel: [0.760811]  acpi_bus_attach+0x13c/0x1c0
>> kernel: [0.760811]  acpi_bus_attach+0x60/0x1c0
>> kernel: [0.760811]  acpi_bus_scan+0x33/0x70
>> kernel: [0.760811]  acpi_scan_init+0xea/0x21b
>> kernel: [0.760811]  acpi_init+0x2f1/0x33c
>> kernel: [0.760811]  do_one_initcall+0x46/0x1f4
> 
> Is there any actual usecase for a configuration like this? What is the
> point of statically defining additional memory like this when the same can
> be achieved on the same command line?

You can online it as movable right away so you can unplug it later.

Also, under QEMU, just do a reboot with hotplugged memory and you're in
the very same situation.

-- 
Thanks,

David / dhildenb



Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Laurent Dufour

Le 10/09/2020 à 14:00, David Hildenbrand a écrit :

On 10.09.20 13:35, Laurent Dufour wrote:

Le 10/09/2020 à 13:12, Michal Hocko a écrit :

On Thu 10-09-20 09:51:39, Laurent Dufour wrote:

Le 10/09/2020 à 09:23, Michal Hocko a écrit :

On Wed 09-09-20 18:07:15, Laurent Dufour wrote:

Le 09/09/2020 à 12:59, Michal Hocko a écrit :

On Wed 09-09-20 11:21:58, Laurent Dufour wrote:

[...]

For point a, using the enum makes it possible to know in
register_mem_sect_under_node() whether the link operation is due to a hotplug
operation or was done at boot time.


Yes, but let me repeat. We have a mess here and different paths check
for the very same condition in different ways. We need to unify those.


What are you suggesting to unify these checks (using a MP_* enum as
suggested by David, something else)?


We do have system_state checks spread across different places. I would use
this one and wrap it behind a helper. Or have I missed a reason why
that wouldn't work for this case?


That would not work in this case because memory can be hot-added in the
SYSTEM_SCHEDULING system state, and regular memory is also registered in
that system state. So the system state is not enough to distinguish
between the two.


If that is really the case, all other places need a fix as well.
Btw. could you be more specific about memory hotplug during early boot?
How does that happen? I am only aware of
https://lkml.kernel.org/r/20200818110046.6664-1-osalva...@suse.de
and that doesn't happen as early as SYSTEM_SCHEDULING.


That point was raised by David; quoting him here:


IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.

Ccing Oscar, I think he mentioned recently that this is the case with ACPI.


Oscar said that he needs to investigate this further.

On my side I can't get these ACPI "early" hot-plug operations to happen, so I
can't check that.

If it is clear that ACPI memory hotplug doesn't happen at SYSTEM_SCHEDULING,
the patch I proposed at first is enough to fix the issue.



Booting a qemu guest with 4 coldplugged DIMMs gives me:

:/root# dmesg | grep link_mem
[0.302247] link_mem_sections() during 1
[0.445086] link_mem_sections() during 1
[0.445766] link_mem_sections() during 1
[0.446749] link_mem_sections() during 1
[0.447746] link_mem_sections() during 1

So AFAICS everything happens during SYSTEM_SCHEDULING - boot memory and
ACPI (cold)plug.

To make forward progress with this, relying on the system_state is
obviously not sufficient.

1. We have to fix this instance and the instance directly in
get_nid_for_pfn() by passing in the context (I once had a patch to clean
that up, to not have two state checks, but it got lost somewhere).

2. The "system_state < SYSTEM_RUNNING" check in
register_memory_resource() is correct. Actual memory hotplug after boot
is not impacted. (I remember we discussed this exact behavior back then)

3. build_all_zonelists() should work as expected, called from
start_kernel() before sched_init().


I'm a bit confused now.
Since the hotplug operation happens at SYSTEM_SCHEDULING just like regular
memory registration, would it be enough to add a parameter to
register_mem_sect_under_node() (reworking the memmap_context enum)?

That way the check is not based on the system state but on the calling path.


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread David Hildenbrand
On 10.09.20 14:36, Laurent Dufour wrote:
> Le 10/09/2020 à 14:00, David Hildenbrand a écrit :
>> On 10.09.20 13:35, Laurent Dufour wrote:
>>> Le 10/09/2020 à 13:12, Michal Hocko a écrit :
 On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
> Le 10/09/2020 à 09:23, Michal Hocko a écrit :
>> On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
>>> Le 09/09/2020 à 12:59, Michal Hocko a écrit :
 On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
>> [...]
> For point a, using the enum makes it possible to know in
> register_mem_sect_under_node() whether the link operation is due to a hotplug
> operation or was done at boot time.

 Yes, but let me repeat. We have a mess here and different paths check
 for the very same condition in different ways. We need to unify those.
>>>
>>> What are you suggesting to unify these checks (using a MP_* enum as
>>> suggested by David, something else)?
>>
>> We do have system_state check spread at different places. I would use
>> this one and wrap it behind a helper. Or have I missed any reason why
>> that wouldn't work for this case?
>
> That would not work in this case because memory can be hot-added in the
> SYSTEM_SCHEDULING system state, and regular memory is also registered in
> that system state. So the system state is not enough to distinguish
> between the two.

 If that is really the case, all other places need a fix as well.
 Btw. could you be more specific about memory hotplug during early boot?
 How does that happen? I am only aware of
 https://lkml.kernel.org/r/20200818110046.6664-1-osalva...@suse.de
 and that doesn't happen as early as SYSTEM_SCHEDULING.
>>>
>>> That point was raised by David; quoting him here:
>>>
 IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.

 Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
>>>
>>> Oscar said that he needs to investigate this further.
>>>
>>> On my side I can't get these ACPI "early" hot-plug operations to happen, so I
>>> can't check that.
>>>
>>> If it is clear that ACPI memory hotplug doesn't happen at SYSTEM_SCHEDULING,
>>> the patch I proposed at first is enough to fix the issue.
>>>
>>
>> Booting a qemu guest with 4 coldplugged DIMMs gives me:
>>
>> :/root# dmesg | grep link_mem
>> [0.302247] link_mem_sections() during 1
>> [0.445086] link_mem_sections() during 1
>> [0.445766] link_mem_sections() during 1
>> [0.446749] link_mem_sections() during 1
>> [0.447746] link_mem_sections() during 1
>>
>> So AFAICS everything happens during SYSTEM_SCHEDULING - boot memory and
>> ACPI (cold)plug.
>>
>> To make forward progress with this, relying on the system_state is
>> obviously not sufficient.
>>
>> 1. We have to fix this instance and the instance directly in
>> get_nid_for_pfn() by passing in the context (I once had a patch to clean
>> that up, to not have two state checks, but it got lost somewhere).
>>
>> 2. The "system_state < SYSTEM_RUNNING" check in
>> register_memory_resource() is correct. Actual memory hotplug after boot
>> is not impacted. (I remember we discussed this exact behavior back then)
>>
>> 3. build_all_zonelists() should work as expected, called from
>> start_kernel() before sched_init().
> 
> I'm a bit confused now.
> Since the hotplug operation happens at SYSTEM_SCHEDULING just like regular
> memory registration, would it be enough to add a parameter to
> register_mem_sect_under_node() (reworking the memmap_context enum)?
> That way the check is not based on the system state but on the calling path.
> 

That would have been my suggestion to definitively fix it - maybe
Michal/Oscar have a better suggestion now that we know what's going on.

-- 
Thanks,

David / dhildenb



Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Michal Hocko
On Thu 10-09-20 14:03:48, Oscar Salvador wrote:
> On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
>  
> > That point was raised by David; quoting him here:
> > 
> > > IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break 
> > > that.
> > > 
> > > Ccing Oscar, I think he mentioned recently that this is the case with 
> > > ACPI.
> > 
> > Oscar said that he needs to investigate this further.
> 
> I think my reply got lost.
> 
> We can see acpi hotplugs during SYSTEM_SCHEDULING:
> 
> $QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host 
> -monitor pty \
> -m size=$MEM,slots=255,maxmem=4294967296k  \
> -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
> -object memory-backend-ram,id=memdimm0,size=134217728 -device 
> pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
> -object memory-backend-ram,id=memdimm1,size=134217728 -device 
> pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
> -object memory-backend-ram,id=memdimm2,size=134217728 -device 
> pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
> -object memory-backend-ram,id=memdimm3,size=134217728 -device 
> pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
> -object memory-backend-ram,id=memdimm4,size=134217728 -device 
> pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
> -object memory-backend-ram,id=memdimm5,size=134217728 -device 
> pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
> -object memory-backend-ram,id=memdimm6,size=134217728 -device 
> pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \
> 
> kernel: [0.753643] __add_memory: nid: 0 start: 01 - 010800 
> (size: 134217728)
> kernel: [0.756950] register_mem_sect_under_node: system_state= 1
> 
> kernel: [0.760811]  register_mem_sect_under_node+0x4f/0x230
> kernel: [0.760811]  walk_memory_blocks+0x80/0xc0
> kernel: [0.760811]  link_mem_sections+0x32/0x40
> kernel: [0.760811]  add_memory_resource+0x148/0x250
> kernel: [0.760811]  __add_memory+0x5b/0x90
> kernel: [0.760811]  acpi_memory_device_add+0x130/0x300
> kernel: [0.760811]  acpi_bus_attach+0x13c/0x1c0
> kernel: [0.760811]  acpi_bus_attach+0x60/0x1c0
> kernel: [0.760811]  acpi_bus_scan+0x33/0x70
> kernel: [0.760811]  acpi_scan_init+0xea/0x21b
> kernel: [0.760811]  acpi_init+0x2f1/0x33c
> kernel: [0.760811]  do_one_initcall+0x46/0x1f4

Is there any actual usecase for a configuration like this? What is the
point of statically defining additional memory like this when the same can
be achieved on the same command line?
-- 
Michal Hocko
SUSE Labs


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Michal Hocko
On Thu 10-09-20 15:39:00, Oscar Salvador wrote:
> On Thu, Sep 10, 2020 at 02:48:47PM +0200, Michal Hocko wrote:
> > > Is there any actual usecase for a configuration like this? What is the
> > > point of statically defining additional memory like this when the same can
> > > be achieved on the same command line?
> 
> Well, for qemu I am not sure, but if David is right, it seems you can face
> the same if you reboot a vm with hotplugged memory.

OK, thanks for the clarification. I was not aware of the reboot.

> Moreover, it seems that the problem we spotted in [1] was on a VM running on
> Proxmox (KVM).
> The hypervisor probably said at boot time "Ey, I do have these ACPI devices,
> care to enable them now"?
> 
> As always, there are all sorts of configurations/scenarios out there in the 
> wild.
> 
> > Forgot to ask one more thing. Who is going to online that memory when
> > userspace is not running yet?
> 
> Depends: if you have CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE set or you specify
> memhp_default_online_type=[online,online_*], memory will get onlined right
> after the hot-add stage:
> 
> /* online pages if requested */
> if (memhp_default_online_type != MMOP_OFFLINE)
> walk_memory_blocks(start, size, NULL, online_memory_block);
> 
> If not, systemd-udev will do the magic once the system is up.

Does that imply that we need udev to scan all existing devices and
reprobe them?
-- 
Michal Hocko
SUSE Labs


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Michal Hocko
On Thu 10-09-20 14:49:28, David Hildenbrand wrote:
> On 10.09.20 14:47, Michal Hocko wrote:
> > On Thu 10-09-20 14:03:48, Oscar Salvador wrote:
> >> On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
> >>  
> >>> That point was raised by David; quoting him here:
> >>>
>  IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break 
>  that.
> 
>  Ccing Oscar, I think he mentioned recently that this is the case with 
>  ACPI.
> >>>
> >>> Oscar said that he needs to investigate this further.
> >>
> >> I think my reply got lost.
> >>
> >> We can see acpi hotplugs during SYSTEM_SCHEDULING:
> >>
> >> $QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host 
> >> -monitor pty \
> >> -m size=$MEM,slots=255,maxmem=4294967296k  \
> >> -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
> >> -object memory-backend-ram,id=memdimm0,size=134217728 -device 
> >> pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
> >> -object memory-backend-ram,id=memdimm1,size=134217728 -device 
> >> pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
> >> -object memory-backend-ram,id=memdimm2,size=134217728 -device 
> >> pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
> >> -object memory-backend-ram,id=memdimm3,size=134217728 -device 
> >> pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
> >> -object memory-backend-ram,id=memdimm4,size=134217728 -device 
> >> pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
> >> -object memory-backend-ram,id=memdimm5,size=134217728 -device 
> >> pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
> >> -object memory-backend-ram,id=memdimm6,size=134217728 -device 
> >> pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \
> >>
> >> kernel: [0.753643] __add_memory: nid: 0 start: 01 - 010800 
> >> (size: 134217728)
> >> kernel: [0.756950] register_mem_sect_under_node: system_state= 1
> >>
> >> kernel: [0.760811]  register_mem_sect_under_node+0x4f/0x230
> >> kernel: [0.760811]  walk_memory_blocks+0x80/0xc0
> >> kernel: [0.760811]  link_mem_sections+0x32/0x40
> >> kernel: [0.760811]  add_memory_resource+0x148/0x250
> >> kernel: [0.760811]  __add_memory+0x5b/0x90
> >> kernel: [0.760811]  acpi_memory_device_add+0x130/0x300
> >> kernel: [0.760811]  acpi_bus_attach+0x13c/0x1c0
> >> kernel: [0.760811]  acpi_bus_attach+0x60/0x1c0
> >> kernel: [0.760811]  acpi_bus_scan+0x33/0x70
> >> kernel: [0.760811]  acpi_scan_init+0xea/0x21b
> >> kernel: [0.760811]  acpi_init+0x2f1/0x33c
> >> kernel: [0.760811]  do_one_initcall+0x46/0x1f4
> > 
> > Is there any actual usecase for a configuration like this? What is the
> > point of statically defining additional memory like this when the same can
> > be achieved on the same command line?
> 
> You can online it as movable right away to unplug it later.

You can use movable_node for that. IIRC this would only mark all hotpluggable
memory as movable.

> Also, under QEMU, just do a reboot with hotplugged memory and you're in
> the very same situation.

OK, I didn't know that. I thought the memory would be presented as a
normal memory after reboot. Thanks for the clarification.

-- 
Michal Hocko
SUSE Labs


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Michal Hocko
On Thu 10-09-20 15:51:07, Michal Hocko wrote:
> On Thu 10-09-20 15:39:00, Oscar Salvador wrote:
> > On Thu, Sep 10, 2020 at 02:48:47PM +0200, Michal Hocko wrote:
[...]
> > > Forgot to ask one more thing. Who is going to online that memory when
> > > userspace is not running yet?
> > 
> > Depends, if you have CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE set or you specify
> > memhp_default_online_type=[online,online_*], memory will get onlined right
> > after hot-adding stage:
> > 
> > /* online pages if requested */
> > if (memhp_default_online_type != MMOP_OFFLINE)
> > walk_memory_blocks(start, size, NULL, online_memory_block);
> > 
> > If not, systemd-udev will do the magic once the system is up.
> 
> Does that imply that we need udev to scan all existing devices and
> reprobe them?

I've checked the sysfs side of things and it seems that the KOBJ_ADD
event gets lost because there are no listeners yet
(create_memory_block_devices -> ... -> device_register -> ... ->
device_add -> kobject_uevent(&dev->kobj, KOBJ_ADD) ->
kobject_uevent_net_broadcast). So the only way to find out about those
devices, once init is up and something can intercept those events, is
to rescan the devices.
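For completeness, such a rescan can be requested from userspace by replaying the uevents; this is roughly what systemd-udev does at startup (illustrative command, to be adapted to the distribution):

```shell
# Replay "add" uevents for all memory block devices so a udev that
# started after the devices were created still sees them.
udevadm trigger --subsystem-match=memory --action=add
```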

This is really unfortunate because this solution really doesn't scale:
most usecases do not do early boot hotplug, and it can get more than
interesting on machines like ppc, which have gazillions of memory block
devices because they use insanely small blocks. Just imagine how that
scales on a multi-TB machine. Sigh...
-- 
Michal Hocko
SUSE Labs


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread David Hildenbrand
>> Also, under QEMU, just do a reboot with hotplugged memory and you're in
>> the very same situation.
> 
> OK, I didn't know that. I thought the memory would be presented as a
> normal memory after reboot. Thanks for the clarification.

That's one of the cases where QEMU differs from actual hardware - it's not
added to e820, so ACPI always probes+detects+adds DIMMs during boot.

Some people (me :)) consider that a feature and not a BUG.

-- 
Thanks,

David / dhildenb



Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Oscar Salvador
On Thu, Sep 10, 2020 at 02:48:47PM +0200, Michal Hocko wrote:
> > Is there any actual usecase for a configuration like this? What is the
> > point of statically defining additional memory like this when the same can
> > be achieved on the same command line?

Well, for qemu I am not sure, but if David is right, it seems you can face
the same if you reboot a vm with hotplugged memory.
Moreover, it seems that the problem we spotted in [1] was on a VM running on
Proxmox (KVM).
The hypervisor probably said at boot time "Ey, I do have these ACPI devices,
care to enable them now"?

As always, there are all sorts of configurations/scenarios out there in the 
wild.

> Forgot to ask one more thing. Who is going to online that memory when
> userspace is not running yet?

Depends: if you have CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE set or you specify
memhp_default_online_type=[online,online_*], memory will get onlined right
after the hot-add stage:

/* online pages if requested */
if (memhp_default_online_type != MMOP_OFFLINE)
        walk_memory_blocks(start, size, NULL, online_memory_block);

If not, systemd-udev will do the magic once the system is up.

-- 
Oscar Salvador
SUSE L3


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Michal Hocko
On Thu 10-09-20 14:47:56, Michal Hocko wrote:
> On Thu 10-09-20 14:03:48, Oscar Salvador wrote:
> > On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
> >  
> > > That point was raised by David; quoting him here:
> > > 
> > > > IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break 
> > > > that.
> > > > 
> > > > Ccing Oscar, I think he mentioned recently that this is the case with 
> > > > ACPI.
> > > 
> > > Oscar said that he needs to investigate this further.
> > 
> > I think my reply got lost.
> > 
> > We can see acpi hotplugs during SYSTEM_SCHEDULING:
> > 
> > $QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host 
> > -monitor pty \
> > -m size=$MEM,slots=255,maxmem=4294967296k  \
> > -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
> > -object memory-backend-ram,id=memdimm0,size=134217728 -device 
> > pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
> > -object memory-backend-ram,id=memdimm1,size=134217728 -device 
> > pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
> > -object memory-backend-ram,id=memdimm2,size=134217728 -device 
> > pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
> > -object memory-backend-ram,id=memdimm3,size=134217728 -device 
> > pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
> > -object memory-backend-ram,id=memdimm4,size=134217728 -device 
> > pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
> > -object memory-backend-ram,id=memdimm5,size=134217728 -device 
> > pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
> > -object memory-backend-ram,id=memdimm6,size=134217728 -device 
> > pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \
> > 
> > kernel: [0.753643] __add_memory: nid: 0 start: 01 - 010800 
> > (size: 134217728)
> > kernel: [0.756950] register_mem_sect_under_node: system_state= 1
> > 
> > kernel: [0.760811]  register_mem_sect_under_node+0x4f/0x230
> > kernel: [0.760811]  walk_memory_blocks+0x80/0xc0
> > kernel: [0.760811]  link_mem_sections+0x32/0x40
> > kernel: [0.760811]  add_memory_resource+0x148/0x250
> > kernel: [0.760811]  __add_memory+0x5b/0x90
> > kernel: [0.760811]  acpi_memory_device_add+0x130/0x300
> > kernel: [0.760811]  acpi_bus_attach+0x13c/0x1c0
> > kernel: [0.760811]  acpi_bus_attach+0x60/0x1c0
> > kernel: [0.760811]  acpi_bus_scan+0x33/0x70
> > kernel: [0.760811]  acpi_scan_init+0xea/0x21b
> > kernel: [0.760811]  acpi_init+0x2f1/0x33c
> > kernel: [0.760811]  do_one_initcall+0x46/0x1f4
> 
> Is there any actual usecase for a configuration like this? What is the
> point of statically defining additional memory like this when the same can
> be achieved on the same command line?

Forgot to ask one more thing. Who is going to online that memory when
userspace is not running yet?
-- 
Michal Hocko
SUSE Labs


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Laurent Dufour

Le 10/09/2020 à 14:03, Oscar Salvador a écrit :

On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
  

That point was raised by David; quoting him here:


IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.

Ccing Oscar, I think he mentioned recently that this is the case with ACPI.


Oscar said that he needs to investigate this further.


I think my reply got lost.

We can see acpi hotplugs during SYSTEM_SCHEDULING:

$QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host 
-monitor pty \
 -m size=$MEM,slots=255,maxmem=4294967296k  \
 -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
 -object memory-backend-ram,id=memdimm0,size=134217728 -device 
pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
 -object memory-backend-ram,id=memdimm1,size=134217728 -device 
pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
 -object memory-backend-ram,id=memdimm2,size=134217728 -device 
pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
 -object memory-backend-ram,id=memdimm3,size=134217728 -device 
pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
 -object memory-backend-ram,id=memdimm4,size=134217728 -device 
pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
 -object memory-backend-ram,id=memdimm5,size=134217728 -device 
pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
 -object memory-backend-ram,id=memdimm6,size=134217728 -device 
pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \

kernel: [0.753643] __add_memory: nid: 0 start: 01 - 010800 
(size: 134217728)
kernel: [0.756950] register_mem_sect_under_node: system_state= 1

kernel: [0.760811]  register_mem_sect_under_node+0x4f/0x230
kernel: [0.760811]  walk_memory_blocks+0x80/0xc0
kernel: [0.760811]  link_mem_sections+0x32/0x40
kernel: [0.760811]  add_memory_resource+0x148/0x250
kernel: [0.760811]  __add_memory+0x5b/0x90
kernel: [0.760811]  acpi_memory_device_add+0x130/0x300
kernel: [0.760811]  acpi_bus_attach+0x13c/0x1c0
kernel: [0.760811]  acpi_bus_attach+0x60/0x1c0
kernel: [0.760811]  acpi_bus_scan+0x33/0x70
kernel: [0.760811]  acpi_scan_init+0xea/0x21b
kernel: [0.760811]  acpi_init+0x2f1/0x33c
kernel: [0.760811]  do_one_initcall+0x46/0x1f4


Thanks Oscar!


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Oscar Salvador
On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
 
> That point was raised by David; quoting him here:
> 
> > IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
> > 
> > Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
> 
> Oscar said that he needs to investigate this further.

I think my reply got lost.

We can see acpi hotplugs during SYSTEM_SCHEDULING:

$QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host 
-monitor pty \
-m size=$MEM,slots=255,maxmem=4294967296k  \
-numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
-object memory-backend-ram,id=memdimm0,size=134217728 -device 
pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
-object memory-backend-ram,id=memdimm1,size=134217728 -device 
pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
-object memory-backend-ram,id=memdimm2,size=134217728 -device 
pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
-object memory-backend-ram,id=memdimm3,size=134217728 -device 
pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
-object memory-backend-ram,id=memdimm4,size=134217728 -device 
pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
-object memory-backend-ram,id=memdimm5,size=134217728 -device 
pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
-object memory-backend-ram,id=memdimm6,size=134217728 -device 
pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \

kernel: [0.753643] __add_memory: nid: 0 start: 01 - 010800 
(size: 134217728)
kernel: [0.756950] register_mem_sect_under_node: system_state= 1

kernel: [0.760811]  register_mem_sect_under_node+0x4f/0x230
kernel: [0.760811]  walk_memory_blocks+0x80/0xc0
kernel: [0.760811]  link_mem_sections+0x32/0x40
kernel: [0.760811]  add_memory_resource+0x148/0x250
kernel: [0.760811]  __add_memory+0x5b/0x90
kernel: [0.760811]  acpi_memory_device_add+0x130/0x300
kernel: [0.760811]  acpi_bus_attach+0x13c/0x1c0
kernel: [0.760811]  acpi_bus_attach+0x60/0x1c0
kernel: [0.760811]  acpi_bus_scan+0x33/0x70
kernel: [0.760811]  acpi_scan_init+0xea/0x21b
kernel: [0.760811]  acpi_init+0x2f1/0x33c
kernel: [0.760811]  do_one_initcall+0x46/0x1f4



-- 
Oscar Salvador
SUSE L3


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Michal Hocko
On Thu 10-09-20 13:35:32, Laurent Dufour wrote:
> Le 10/09/2020 à 13:12, Michal Hocko a écrit :
> > On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
> > > Le 10/09/2020 à 09:23, Michal Hocko a écrit :
> > > > On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
> > > > > Le 09/09/2020 à 12:59, Michal Hocko a écrit :
> > > > > > On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
> > > > [...]
> > > > > > > For point a, using the enum makes it possible to know in
> > > > > > > register_mem_sect_under_node() whether the link operation is due to a
> > > > > > > hotplug operation or was done at boot time.
> > > > > > 
> > > > > > Yes, but let me repeat. We have a mess here and different paths check
> > > > > > for the very same condition in different ways. We need to unify those.
> > > > > 
> > > > > What are you suggesting to unify these checks (using a MP_* enum as
> > > > > suggested by David, something else)?
> > > > 
> > > > We do have system_state checks spread across different places. I would use
> > > > this one and wrap it behind a helper. Or have I missed a reason why
> > > > that wouldn't work for this case?
> > > 
> > > That would not work in this case because memory can be hot-added in the
> > > SYSTEM_SCHEDULING system state, and regular memory is also registered in
> > > that system state. So the system state is not enough to distinguish
> > > between the two.
> > 
> > If that is really the case, all other places need a fix as well.
> > Btw. could you be more specific about memory hotplug during early boot?
> > How does that happen? I am only aware of
> > https://lkml.kernel.org/r/20200818110046.6664-1-osalva...@suse.de
> > and that doesn't happen as early as SYSTEM_SCHEDULING.
> 
> That point has been raised by David, quoting him here:
> 
> > IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
> > 
> > Ccing Oscar, I think he mentioned recently that this is the case with ACPI.

: Please, note that upstream has fixed that differently (and unintentionally) by
: adding another boot state (SYSTEM_SCHEDULING), which is set before smp_init().
: That should happen before memory hotplug events even with memhp_default_state=online.
: Backporting that would be too intrusive.

Either I am confused or the above says that no hotplug should happen
during SYSTEM_SCHEDULING even in the above case. I really have a hard time
imagining how an early boot hotplug would even work. We start with a
memory layout provided by the BIOS/FW and initialize it statically. How
would a hotplug actually trigger that early?
-- 
Michal Hocko
SUSE Labs


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread David Hildenbrand
On 10.09.20 13:35, Laurent Dufour wrote:
> Le 10/09/2020 à 13:12, Michal Hocko a écrit :
>> On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
>>> Le 10/09/2020 à 09:23, Michal Hocko a écrit :
 On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
> Le 09/09/2020 à 12:59, Michal Hocko a écrit :
>> On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
 [...]
>>> For the point a, using the enum allows to know in
>>> register_mem_sect_under_node() if the link operation is due to a hotplug
>>> operation or done at boot time.
>>
>> Yes, but let me repeat. We have a mess here and different paths check
>> for the very same condition by different ways. We need to unify those.
>
> What are you suggesting to unify these checks (using a MP_* enum as
> suggested by David, something else)?

 We do have system_state check spread at different places. I would use
 this one and wrap it behind a helper. Or have I missed any reason why
 that wouldn't work for this case?
>>>
>>> That would not work in that case because memory can be hot-added at the
>>> SYSTEM_SCHEDULING system state and the regular memory is also registered at
>>> that system state too. So system state is not enough to discriminate between
>>> the two.
>>
>> If that is really the case all other places need a fix as well.
>> Btw. could you be more specific about memory hotplug during early boot?
>> How that happens? I am only aware of 
>> https://lkml.kernel.org/r/20200818110046.6664-1-osalva...@suse.de
>> and that doesn't happen as early as SYSTEM_SCHEDULING.
> 
> That point has been raised by David, quoting him here:
> 
>> IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
>>
>> Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
> 
> Oscar said that he needs to investigate that further.
> 
> On my side I can't get these ACPI "early" hot-plug operations to happen so I 
> can't check that.
> 
> If it is clear that ACPI memory hotplug doesn't happen at SYSTEM_SCHEDULING,
> the patch I proposed at first is enough to fix the issue.
> 

Booting a qemu guest with 4 coldplugged DIMMs gives me:

:/root# dmesg | grep link_mem
[0.302247] link_mem_sections() during 1
[0.445086] link_mem_sections() during 1
[0.445766] link_mem_sections() during 1
[0.446749] link_mem_sections() during 1
[0.447746] link_mem_sections() during 1

So AFAICS everything happens during SYSTEM_SCHEDULING - boot memory and
ACPI (cold)plug.

To make forward progress with this, relying on the system_state is
obviously not sufficient.

1. We have to fix this instance and the instance directly in
get_nid_for_pfn() by passing in the context (I once had a patch to clean
that up, to not have two state checks, but it got lost somewhere).

2. The "system_state < SYSTEM_RUNNING" check in
register_memory_resource() is correct. Actual memory hotplug after boot
is not impacted. (I remember we discussed this exact behavior back then)

3. build_all_zonelists() should work as expected, called from
start_kernel() before sched_init().

-- 
Thanks,

David / dhildenb



Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Michal Hocko
On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
> Le 10/09/2020 à 09:23, Michal Hocko a écrit :
> > On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
> > > Le 09/09/2020 à 12:59, Michal Hocko a écrit :
> > > > On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
> > [...]
> > > > > For the point a, using the enum allows to know in
> > > > > register_mem_sect_under_node() if the link operation is due to a hotplug
> > > > > operation or done at boot time.
> > > > 
> > > > Yes, but let me repeat. We have a mess here and different paths check
> > > > for the very same condition by different ways. We need to unify those.
> > > 
> > > What are you suggesting to unify these checks (using a MP_* enum as
> > > suggested by David, something else)?
> > 
> > We do have system_state check spread at different places. I would use
> > this one and wrap it behind a helper. Or have I missed any reason why
> > that wouldn't work for this case?
> 
> That would not work in that case because memory can be hot-added at the
> SYSTEM_SCHEDULING system state and the regular memory is also registered at
> that system state too. So system state is not enough to discriminate between
> the two.

If that is really the case all other places need a fix as well.
Btw. could you be more specific about memory hotplug during early boot?
How that happens? I am only aware of 
https://lkml.kernel.org/r/20200818110046.6664-1-osalva...@suse.de
and that doesn't happen as early as SYSTEM_SCHEDULING.

-- 
Michal Hocko
SUSE Labs


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Laurent Dufour

Le 10/09/2020 à 09:23, Michal Hocko a écrit :

On Wed 09-09-20 18:07:15, Laurent Dufour wrote:

Le 09/09/2020 à 12:59, Michal Hocko a écrit :

On Wed 09-09-20 11:21:58, Laurent Dufour wrote:

[...]

For the point a, using the enum allows to know in
register_mem_sect_under_node() if the link operation is due to a hotplug
operation or done at boot time.


Yes, but let me repeat. We have a mess here and different paths check
for the very same condition by different ways. We need to unify those.


What are you suggesting to unify these checks (using a MP_* enum as
suggested by David, something else)?


We do have system_state check spread at different places. I would use
this one and wrap it behind a helper. Or have I missed any reason why
that wouldn't work for this case?


That would not work in that case because memory can be hot-added at the 
SYSTEM_SCHEDULING system state and the regular memory is also registered at that 
system state too. So system state is not enough to discriminate between the two.


I think I'll go with the option suggested by David, replacing the enum
memmap_context with a new enum memplug_context and passing that context to
register_mem_sect_under_node() so that the function will know whether the node
id should be checked or not.


Cheers,
Laurent.


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-10 Thread Michal Hocko
On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
> Le 09/09/2020 à 12:59, Michal Hocko a écrit :
> > On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
[...]
> > > For the point a, using the enum allows to know in
> > > register_mem_sect_under_node() if the link operation is due to a hotplug
> > > operation or done at boot time.
> > 
> > Yes, but let me repeat. We have a mess here and different paths check
> > for the very same condition by different ways. We need to unify those.
> 
> What are you suggesting to unify these checks (using a MP_* enum as
> suggested by David, something else)?

We do have system_state check spread at different places. I would use
this one and wrap it behind a helper. Or have I missed any reason why
that wouldn't work for this case?

-- 
Michal Hocko
SUSE Labs


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread Michal Hocko
On Wed 09-09-20 14:32:57, David Hildenbrand wrote:
> On 09.09.20 14:30, Greg Kroah-Hartman wrote:
> > On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
>  I am not sure an enum is going to make the existing situation less
>  messy. Sure we somehow have to distinguish boot init and runtime hotplug
>  because they have different constrains. I am arguing that a) we should
>  have a consistent way to check for those and b) we shouldn't blow up
>  easily just because sysfs infrastructure has failed to initialize.
> >>>
> >>> For the point a, using the enum allows to know in 
> >>> register_mem_sect_under_node() 
> >>> if the link operation is due to a hotplug operation or done at boot time.
> >>>
> >>> For the point b, one option would be to ignore the link error when the link
> >>> already exists, but the BUG_ON() had the benefit of highlighting the root
> >>> issue.
> >>>
> >>
> >> WARN_ON_ONCE() would be preferred  - not crash the system but still
> >> highlight the issue.
> > 
> > Many many systems now run with 'panic on warn' enabled, so that wouldn't
> > change much :(
> > 
> > If you can warn, you can probably just print an error message and
> > recover from the problem.
> 
> Maybe VM_WARN_ON_ONCE() then to detect this during testing?
> 
> (we basically turned WARN_ON_ONCE() useless with 'panic on warn' getting
> used in production - behaves like BUG_ON and BUG_ON is frowned upon)

VM_WARN* is not that much different from panic on warn. Still one can
argue that many workloads enable it just because. And I would disagree
that we should care much about those because those are debugging
features and everybody has to take consequences.

On the other hand the question is whether WARN is giving us much. So
what is the advantage over a simple pr_err? We will get a backtrace.
Interesting but not really that useful because there are only few code
paths this can trigger from. Registers dump? Not really useful here.
Taint flag, probably useful because follow up problems might give us a
hint that this might be related. People tend to pay more attention to
WARN splat than a single line error. Well, not really a strong reason, I
would say.

So while I wouldn't argue against WARN* in general (just because somebody
might be setting the system to panic), I would also think of how much
useful the splat is.

-- 
Michal Hocko
SUSE Labs


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread Laurent Dufour

Le 09/09/2020 à 12:59, Michal Hocko a écrit :

On Wed 09-09-20 11:21:58, Laurent Dufour wrote:

Le 09/09/2020 à 11:09, Michal Hocko a écrit :

On Wed 09-09-20 09:48:59, Laurent Dufour wrote:

Le 09/09/2020 à 09:40, Michal Hocko a écrit :

[...]

In
that case, the system is able to boot but later hot-plug operation may lead
to this panic because the node's links are correctly broken:


Correctly broken? Could you provide more details on the inconsistency
please?


laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
total 0
lrwxrwxrwx 1 root root 0 Aug 24 05:27 node1 -> ../../node/node1
lrwxrwxrwx 1 root root 0 Aug 24 05:27 node2 -> ../../node/node2
-rw-r--r-- 1 root root 65536 Aug 24 05:27 online
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
drwxr-xr-x 2 root root 0 Aug 24 05:27 power
-r--r--r-- 1 root root 65536 Aug 24 05:27 removable
-rw-r--r-- 1 root root 65536 Aug 24 05:27 state
lrwxrwxrwx 1 root root 0 Aug 24 05:25 subsystem -> ../../../../bus/memory
-rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
-r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones


OK, so there are two nodes referenced here. Not terrible from the user
point of view. Such a memory block will refuse to offline or online
IIRC.


No, the memory block is still owned by one node, only the sysfs
representation is wrong. So the memory block can be hot unplugged, but only
one node's link will be cleaned, and a '/sys/devices/system/node#/memory21'
link will remain and that will be detected later when that memory block is
hot plugged again.


OK, so you need to hotremove first and hotadd again to trigger the
problem. It is not like you would be hot adding something new. This is
useful information to have in the changelog.


Which physical memory range you are trying to add here and what is the
node affinity?


None is added, the root cause of the issue is happening at boot time.


Let me clarify my question. The crash has clearly happened during the
hotplug add_memory_resource - which is clearly not a boot time path.
I was asking for more information about why this has failed. It is quite
clear that sysfs machinery has failed and that led to BUG_ON but we are
missing information on why. What was the physical memory range to be
added and why sysfs failed?


The BUG_ON is detecting a bad state generated earlier, at boot time because
register_mem_sect_under_node() didn't check for the block's node id.


[ cut here ]
kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
Oops: Exception in kernel mode, sig: 5 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul 
binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
NIP:  c0403f34 LR: c0403f2c CTR: 
REGS: c004876e3660 TRAP: 0700   Not tainted  (5.9.0-rc1+)
MSR:  8282b033   CR: 24000448  XER: 
2004
CFAR: c0846d20 IRQMASK: 0
GPR00: c0403f2c c004876e38f0 c12f6f00 ffef
GPR04: 0227 c004805ae680  0004886f
GPR08: 0226 0003 0002 fffd
GPR12: 88000484 c0001ec96280  
GPR16:   0004 0003
GPR20: c0047814ffe0 c0077c08 0010 c13332c8
GPR24:  c11f6cc0  
GPR28: ffef 0001 00015000 1000
NIP [c0403f34] add_memory_resource+0x244/0x340
LR [c0403f2c] add_memory_resource+0x23c/0x340
Call Trace:
[c004876e38f0] [c0403f2c] add_memory_resource+0x23c/0x340 
(unreliable)
[c004876e39c0] [c040408c] __add_memory+0x5c/0xf0
[c004876e39f0] [c00e2b94] dlpar_add_lmb+0x1b4/0x500
[c004876e3ad0] [c00e3888] dlpar_memory+0x1f8/0xb80
[c004876e3b60] [c00dc0d0] handle_dlpar_errorlog+0xc0/0x190
[c004876e3bd0] [c00dc398] dlpar_store+0x198/0x4a0
[c004876e3c90] [c072e630] kobj_attr_store+0x30/0x50
[c004876e3cb0] [c051f954] sysfs_kf_write+0x64/0x90
[c004876e3cd0] [c051ee40] kernfs_fop_write+0x1b0/0x290
[c004876e3d20] [c0438dd8] vfs_write+0xe8/0x290
[c004876e3d70] [c04391ac] ksys_write+0xdc/0x130
[c004876e3dc0] [c0034e40] system_call_exception+0x160/0x270
[c004876e3e20] [c000d740] system_call_common+0xf0/0x27c
Instruction dump:
48442e35 6000 0b03 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
78a58402 48442db1 6000 7c7c1b78 <0b03> 7f23cb78 4bda371d 6000
---[ end trace 562fd6c109cd0fb2 ]---


The BUG_ON on failure is absolutely horrendous. There must be a better
way to handle a 

Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread Greg Kroah-Hartman
On Wed, Sep 09, 2020 at 02:32:57PM +0200, David Hildenbrand wrote:
> On 09.09.20 14:30, Greg Kroah-Hartman wrote:
> > On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
>  I am not sure an enum is going to make the existing situation less
>  messy. Sure we somehow have to distinguish boot init and runtime hotplug
>  because they have different constrains. I am arguing that a) we should
>  have a consistent way to check for those and b) we shouldn't blow up
>  easily just because sysfs infrastructure has failed to initialize.
> >>>
> >>> For the point a, using the enum allows to know in 
> >>> register_mem_sect_under_node() 
> >>> if the link operation is due to a hotplug operation or done at boot time.
> >>>
> >>> For the point b, one option would be to ignore the link error when the link
> >>> already exists, but the BUG_ON() had the benefit of highlighting the root
> >>> issue.
> >>>
> >>
> >> WARN_ON_ONCE() would be preferred  - not crash the system but still
> >> highlight the issue.
> > 
> > Many many systems now run with 'panic on warn' enabled, so that wouldn't
> > change much :(
> > 
> > If you can warn, you can probably just print an error message and
> > recover from the problem.
> 
> Maybe VM_WARN_ON_ONCE() then to detect this during testing?

If you all use that, sure.

> (we basically turned WARN_ON_ONCE() useless with 'panic on warn' getting
> used in production - behaves like BUG_ON and BUG_ON is frowned upon)

Yes we have, but in the end, it's good, those things should be fixed and
not accessible by anything a user can trigger.

thanks,

greg k-h


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread David Hildenbrand
On 09.09.20 14:30, Greg Kroah-Hartman wrote:
> On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
 I am not sure an enum is going to make the existing situation less
 messy. Sure we somehow have to distinguish boot init and runtime hotplug
 because they have different constrains. I am arguing that a) we should
 have a consistent way to check for those and b) we shouldn't blow up
 easily just because sysfs infrastructure has failed to initialize.
>>>
>>> For the point a, using the enum allows to know in 
>>> register_mem_sect_under_node() 
>>> if the link operation is due to a hotplug operation or done at boot time.
>>>
>>> For the point b, one option would be to ignore the link error when the link
>>> already exists, but the BUG_ON() had the benefit of highlighting the root
>>> issue.
>>>
>>
>> WARN_ON_ONCE() would be preferred  - not crash the system but still
>> highlight the issue.
> 
> Many many systems now run with 'panic on warn' enabled, so that wouldn't
> change much :(
> 
> If you can warn, you can probably just print an error message and
> recover from the problem.

Maybe VM_WARN_ON_ONCE() then to detect this during testing?

(we basically turned WARN_ON_ONCE() useless with 'panic on warn' getting
used in production - behaves like BUG_ON and BUG_ON is frowned upon)

-- 
Thanks,

David / dhildenb



Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread Greg Kroah-Hartman
On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
> >> I am not sure an enum is going to make the existing situation less
> >> messy. Sure we somehow have to distinguish boot init and runtime hotplug
> >> because they have different constrains. I am arguing that a) we should
> >> have a consistent way to check for those and b) we shouldn't blow up
> >> easily just because sysfs infrastructure has failed to initialize.
> > 
> > For the point a, using the enum allows to know in 
> > register_mem_sect_under_node() 
> > if the link operation is due to a hotplug operation or done at boot time.
> > 
> > For the point b, one option would be to ignore the link error when the link
> > already exists, but the BUG_ON() had the benefit of highlighting the root
> > issue.
> > 
> 
> WARN_ON_ONCE() would be preferred  - not crash the system but still
> highlight the issue.

Many many systems now run with 'panic on warn' enabled, so that wouldn't
change much :(

If you can warn, you can probably just print an error message and
recover from the problem.

thanks,

greg k-h


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread Michal Hocko
On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
> Le 09/09/2020 à 11:09, Michal Hocko a écrit :
> > On Wed 09-09-20 09:48:59, Laurent Dufour wrote:
> > > Le 09/09/2020 à 09:40, Michal Hocko a écrit :
[...]
> > > > > In
> > > > > that case, the system is able to boot but later hot-plug operation may lead
> > > > > to this panic because the node's links are correctly broken:
> > > > 
> > > > Correctly broken? Could you provide more details on the inconsistency
> > > > please?
> > > 
> > > laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
> > > total 0
> > > lrwxrwxrwx 1 root root 0 Aug 24 05:27 node1 -> ../../node/node1
> > > lrwxrwxrwx 1 root root 0 Aug 24 05:27 node2 -> ../../node/node2
> > > -rw-r--r-- 1 root root 65536 Aug 24 05:27 online
> > > -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
> > > -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
> > > drwxr-xr-x 2 root root 0 Aug 24 05:27 power
> > > -r--r--r-- 1 root root 65536 Aug 24 05:27 removable
> > > -rw-r--r-- 1 root root 65536 Aug 24 05:27 state
> > > lrwxrwxrwx 1 root root 0 Aug 24 05:25 subsystem -> 
> > > ../../../../bus/memory
> > > -rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
> > > -r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
> > 
> > OK, so there are two nodes referenced here. Not terrible from the user
> > point of view. Such a memory block will refuse to offline or online
> > IIRC.
> 
> No, the memory block is still owned by one node, only the sysfs
> representation is wrong. So the memory block can be hot unplugged, but only
> one node's link will be cleaned, and a '/sys/devices/system/node#/memory21'
> link will remain and that will be detected later when that memory block is
> hot plugged again.

OK, so you need to hotremove first and hotadd again to trigger the
problem. It is not like you would be hot adding something new. This is
useful information to have in the changelog.

> > > > Which physical memory range you are trying to add here and what is the
> > > > node affinity?
> > > 
> > > None is added, the root cause of the issue is happening at boot time.
> > 
> > Let me clarify my question. The crash has clearly happened during the
> > hotplug add_memory_resource - which is clearly not a boot time path.
> > I was asking for more information about why this has failed. It is quite
> > clear that sysfs machinery has failed and that led to BUG_ON but we are
> > missing information on why. What was the physical memory range to be
> > added and why sysfs failed?
> 
> The BUG_ON is detecting a bad state generated earlier, at boot time because
> register_mem_sect_under_node() didn't check for the block's node id.
> 
> > > > > [ cut here ]
> > > > > kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
> > > > > Oops: Exception in kernel mode, sig: 5 [#1]
> > > > > LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
> > > > > Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto 
> > > > > gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum 
> > > > > autofs4
> > > > > CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
> > > > > NIP:  c0403f34 LR: c0403f2c CTR: 
> > > > > REGS: c004876e3660 TRAP: 0700   Not tainted  (5.9.0-rc1+)
> > > > > MSR:  8282b033   CR: 
> > > > > 24000448  XER: 2004
> > > > > CFAR: c0846d20 IRQMASK: 0
> > > > > GPR00: c0403f2c c004876e38f0 c12f6f00 
> > > > > ffef
> > > > > GPR04: 0227 c004805ae680  
> > > > > 0004886f
> > > > > GPR08: 0226 0003 0002 
> > > > > fffd
> > > > > GPR12: 88000484 c0001ec96280  
> > > > > 
> > > > > GPR16:   0004 
> > > > > 0003
> > > > > GPR20: c0047814ffe0 c0077c08 0010 
> > > > > c13332c8
> > > > > GPR24:  c11f6cc0  
> > > > > 
> > > > > GPR28: ffef 0001 00015000 
> > > > > 1000
> > > > > NIP [c0403f34] add_memory_resource+0x244/0x340
> > > > > LR [c0403f2c] add_memory_resource+0x23c/0x340
> > > > > Call Trace:
> > > > > [c004876e38f0] [c0403f2c] add_memory_resource+0x23c/0x340 
> > > > > (unreliable)
> > > > > [c004876e39c0] [c040408c] __add_memory+0x5c/0xf0
> > > > > [c004876e39f0] [c00e2b94] dlpar_add_lmb+0x1b4/0x500
> > > > > [c004876e3ad0] [c00e3888] dlpar_memory+0x1f8/0xb80
> > > > > [c004876e3b60] [c00dc0d0] handle_dlpar_errorlog+0xc0/0x190
> > > > > [c004876e3bd0] [c00dc398] dlpar_store+0x198/0x4a0
> > > > > [c004876e3c90] [c072e630] kobj_attr_store+0x30/0x50
> > > > > [c004876e3cb0] [c051f954] sysfs_kf_write+0x64/0x90

Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread Laurent Dufour

Le 09/09/2020 à 11:24, David Hildenbrand a écrit :

I am not sure an enum is going to make the existing situation less
messy. Sure we somehow have to distinguish boot init and runtime hotplug
because they have different constrains. I am arguing that a) we should
have a consistent way to check for those and b) we shouldn't blow up
easily just because sysfs infrastructure has failed to initialize.


For the point a, using the enum allows to know in register_mem_sect_under_node()
if the link operation is due to a hotplug operation or done at boot time.

For the point b, one option would be to ignore the link error when the link
already exists, but the BUG_ON() had the benefit of highlighting the root
issue.



WARN_ON_ONCE() would be preferred  - not crash the system but still
highlight the issue.


Indeed, calling sysfs_create_link() instead of sysfs_create_link_nowarn() in
register_mem_sect_under_node() and ignoring the EEXIST return value should do
the job.


I'll do that in a separate patch.


Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread David Hildenbrand
>> I am not sure an enum is going to make the existing situation less
>> messy. Sure we somehow have to distinguish boot init and runtime hotplug
>> because they have different constrains. I am arguing that a) we should
>> have a consistent way to check for those and b) we shouldn't blow up
>> easily just because sysfs infrastructure has failed to initialize.
> 
> For the point a, using the enum allows to know in 
> register_mem_sect_under_node() 
> if the link operation is due to a hotplug operation or done at boot time.
> 
> For the point b, one option would be to ignore the link error when the link
> already exists, but the BUG_ON() had the benefit of highlighting the root
> issue.
> 

WARN_ON_ONCE() would be preferred  - not crash the system but still
highlight the issue.

> Cheers,
> Laurent.
> 


-- 
Thanks,

David / dhildenb



Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread Laurent Dufour

Le 09/09/2020 à 11:09, Michal Hocko a écrit :

On Wed 09-09-20 09:48:59, Laurent Dufour wrote:

Le 09/09/2020 à 09:40, Michal Hocko a écrit :

[reposting because the malformed cc list confused my email client]

On Tue 08-09-20 19:08:35, Laurent Dufour wrote:

In register_mem_sect_under_node() the system_state's value is checked to
detect whether the call is made at boot time or during a
hot-plug operation. Unfortunately, that check is wrong on some
architectures, and may lead to sections being registered under multiple
nodes if a node's memory ranges are interleaved.


Why is this check arch specific?


I was wrong, the check is not arch specific.


This can be seen on PowerPC LPAR after multiple memory hot-plug and
hot-unplug operations are done. At the next reboot the node's memory ranges
can be interleaved


What is the exact memory layout?


For instance:
[0.00] Early memory node ranges
[0.00]   node   1: [mem 0x-0x00011fff]
[0.00]   node   2: [mem 0x00012000-0x00014fff]
[0.00]   node   1: [mem 0x00015000-0x0001]
[0.00]   node   0: [mem 0x0002-0x00048fff]
[0.00]   node   2: [mem 0x00049000-0x0007]


Include this into the changelog.


and since the call to link_mem_sections() is made in
topology_init() while the system is in the SYSTEM_SCHEDULING state, the
node's id is not checked, and the sections are registered multiple times.


So a single memory section/memblock belongs to two numa nodes?


If the node id is not checked in register_mem_sect_under_node(), yes, that's
the case.


I do not follow. register_mem_sect_under_node is about the user interface.
This is independent of the low level memory representation - aka memory
section. I do not think we can handle a section in multiple zones/nodes.
Memblock in multiple zones/nodes is a different story and interleaving
physical memory layout can indeed lead to it. This is something that we
do not allow for runtime hotplug but have to somehow live with that - at
least not crash.


register_mem_sect_under_node() is called at boot time and when memory is hot
added. In the latter case the assumption is made that all the pages of the added
block are in the same node, and that's a valid assumption. However at boot time
the call is made using the node's whole range, lowest address to highest address
for that node. In the case there are interleaved ranges, this means the
interleaved sections are registered for each node, which is not correct.



In
that case, the system is able to boot but later hot-plug operation may lead
to this panic because the node's links are correctly broken:


Correctly broken? Could you provide more details on the inconsistency
please?


laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
total 0
lrwxrwxrwx 1 root root 0 Aug 24 05:27 node1 -> ../../node/node1
lrwxrwxrwx 1 root root 0 Aug 24 05:27 node2 -> ../../node/node2
-rw-r--r-- 1 root root 65536 Aug 24 05:27 online
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
drwxr-xr-x 2 root root 0 Aug 24 05:27 power
-r--r--r-- 1 root root 65536 Aug 24 05:27 removable
-rw-r--r-- 1 root root 65536 Aug 24 05:27 state
lrwxrwxrwx 1 root root 0 Aug 24 05:25 subsystem -> ../../../../bus/memory
-rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
-r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones


OK, so there are two nodes referenced here. Not terrible from the user
point of view. Such a memory block will refuse to offline or online
IIRC.


No, the memory block is still owned by one node, only the sysfs representation is
wrong. So the memory block can be hot unplugged, but only one node's link will
be cleaned, and a '/sys/devices/system/node#/memory21' link will remain and
that will be detected later when that memory block is hot plugged again.


  

Which physical memory range you are trying to add here and what is the
node affinity?


None is added, the root cause of the issue is happening at boot time.


Let me clarify my question. The crash has clearly happened during the
hotplug add_memory_resource - which is clearly not a boot time path.
I was asking for more information about why this has failed. It is quite
clear that sysfs machinery has failed and that led to BUG_ON but we are
missing information on why. What was the physical memory range to be
added and why sysfs failed?


The BUG_ON is detecting a bad state generated earlier, at boot time because 
register_mem_sect_under_node() didn't check for the block's node id.


  

[ cut here ]
kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
Oops: Exception in kernel mode, sig: 5 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul 
binfmt_misc ip_tables x_tables xfs 

Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread Michal Hocko
On Wed 09-09-20 09:48:59, Laurent Dufour wrote:
> Le 09/09/2020 à 09:40, Michal Hocko a écrit :
> > [reposting because the malformed cc list confused my email client]
> > 
> > On Tue 08-09-20 19:08:35, Laurent Dufour wrote:
> > > In register_mem_sect_under_node() the system_state's value is checked to
> > > detect whether the call is made at boot time or during a
> > > hot-plug operation. Unfortunately, that check is wrong on some
> > > architectures, and may lead to sections being registered under multiple
> > > nodes if a node's memory ranges are interleaved.
> > 
> > Why is this check arch specific?
> 
> I was wrong, the check is not arch specific.
> 
> > > This can be seen on PowerPC LPAR after multiple memory hot-plug and
> > > hot-unplug operations are done. At the next reboot the node's memory 
> > > ranges
> > > can be interleaved
> > 
> > What is the exact memory layout?
> 
> For instance:
> [0.00] Early memory node ranges
> [0.00]   node   1: [mem 0x-0x00011fff]
> [0.00]   node   2: [mem 0x00012000-0x00014fff]
> [0.00]   node   1: [mem 0x00015000-0x0001]
> [0.00]   node   0: [mem 0x0002-0x00048fff]
> [0.00]   node   2: [mem 0x00049000-0x0007]

Include this into the changelog.

> > > and since the call to link_mem_sections() is made in
> > > topology_init() while the system is in the SYSTEM_SCHEDULING state, the
> > > node's id is not checked, and the sections are registered multiple times.
> > 
> > So a single memory section/memblock belongs to two numa nodes?
> 
> If the node id is not checked in register_mem_sect_under_node(), yes, that's 
> the case.

I do not follow. register_mem_sect_under_node is about the user interface.
This is independent of the low-level memory representation - aka the memory
section. I do not think we can handle a section in multiple zones/nodes.
A memblock in multiple zones/nodes is a different story and an interleaving
physical memory layout can indeed lead to it. This is something that we
do not allow for runtime hotplug but have to somehow live with - at
least not crash.
 
> > > In
> > > that case, the system is able to boot but a later hot-plug operation may 
> > > lead
> > > to this panic because the node's links are correctly broken:
> > 
> > Correctly broken? Could you provide more details on the inconsistency
> > please?
> 
> laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
> total 0
> lrwxrwxrwx 1 root root 0 Aug 24 05:27 node1 -> ../../node/node1
> lrwxrwxrwx 1 root root 0 Aug 24 05:27 node2 -> ../../node/node2
> -rw-r--r-- 1 root root 65536 Aug 24 05:27 online
> -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
> -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
> drwxr-xr-x 2 root root 0 Aug 24 05:27 power
> -r--r--r-- 1 root root 65536 Aug 24 05:27 removable
> -rw-r--r-- 1 root root 65536 Aug 24 05:27 state
> lrwxrwxrwx 1 root root 0 Aug 24 05:25 subsystem -> ../../../../bus/memory
> -rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
> -r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones

OK, so there are two nodes referenced here. Not terrible from the user
point of view. Such a memory block will refuse to offline or online
IIRC.
 
> > Which physical memory range you are trying to add here and what is the
> > node affinity?
> 
> None is added, the root cause of the issue is happening at boot time.

Let me clarify my question. The crash has clearly happened during the
hotplug add_memory_resource - which is clearly not a boot time path.
I was asking for more information about why this has failed. It is quite
clear that sysfs machinery has failed and that led to BUG_ON but we are
missing information on why. What was the physical memory range to be
added and why sysfs failed?
 
> > > [ cut here ]
> > > kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
> > > Oops: Exception in kernel mode, sig: 5 [#1]
> > > LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
> > > Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto 
> > > gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum 
> > > autofs4
> > > CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
> > > NIP:  c0403f34 LR: c0403f2c CTR: 
> > > REGS: c004876e3660 TRAP: 0700   Not tainted  (5.9.0-rc1+)
> > > MSR:  8282b033   CR: 24000448  
> > > XER: 2004
> > > CFAR: c0846d20 IRQMASK: 0
> > > GPR00: c0403f2c c004876e38f0 c12f6f00 ffef
> > > GPR04: 0227 c004805ae680  0004886f
> > > GPR08: 0226 0003 0002 fffd
> > > GPR12: 88000484 c0001ec96280  
> > > GPR16:   0004 0003
> 

Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread Laurent Dufour

Le 09/09/2020 à 09:40, Michal Hocko a écrit :

[reposting because the malformed cc list confused my email client]

On Tue 08-09-20 19:08:35, Laurent Dufour wrote:

In register_mem_sect_under_node() the system_state value is checked to
detect whether the call is made at boot time or during a
hot-plug operation. Unfortunately, that check is wrong on some
architectures, and may lead to sections being registered under multiple
nodes if the nodes' memory ranges are interleaved.


Why is this check arch specific?


I was wrong, the check is not arch specific.


This can be seen on PowerPC LPAR after multiple memory hot-plug and
hot-unplug operations are done. At the next reboot the node's memory ranges
can be interleaved


What is the exact memory layout?


For instance:
[0.00] Early memory node ranges
[0.00]   node   1: [mem 0x-0x00011fff]
[0.00]   node   2: [mem 0x00012000-0x00014fff]
[0.00]   node   1: [mem 0x00015000-0x0001]
[0.00]   node   0: [mem 0x0002-0x00048fff]
[0.00]   node   2: [mem 0x00049000-0x0007]




and since the call to link_mem_sections() is made in
topology_init() while the system is in the SYSTEM_SCHEDULING state, the
node's id is not checked, and the sections are registered multiple times.


So a single memory section/memblock belongs to two numa nodes?


If the node id is not checked in register_mem_sect_under_node(), yes, that's 
the case.




In
that case, the system is able to boot but a later hot-plug operation may lead
to this panic because the node's links are correctly broken:


Correctly broken? Could you provide more details on the inconsistency
please?


laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
total 0
lrwxrwxrwx 1 root root 0 Aug 24 05:27 node1 -> ../../node/node1
lrwxrwxrwx 1 root root 0 Aug 24 05:27 node2 -> ../../node/node2
-rw-r--r-- 1 root root 65536 Aug 24 05:27 online
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
drwxr-xr-x 2 root root 0 Aug 24 05:27 power
-r--r--r-- 1 root root 65536 Aug 24 05:27 removable
-rw-r--r-- 1 root root 65536 Aug 24 05:27 state
lrwxrwxrwx 1 root root 0 Aug 24 05:25 subsystem -> ../../../../bus/memory
-rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
-r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones



Which physical memory range you are trying to add here and what is the
node affinity?


None is added, the root cause of the issue is happening at boot time.




[ cut here ]
kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
Oops: Exception in kernel mode, sig: 5 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul 
binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
NIP:  c0403f34 LR: c0403f2c CTR: 
REGS: c004876e3660 TRAP: 0700   Not tainted  (5.9.0-rc1+)
MSR:  8282b033   CR: 24000448  XER: 
2004
CFAR: c0846d20 IRQMASK: 0
GPR00: c0403f2c c004876e38f0 c12f6f00 ffef
GPR04: 0227 c004805ae680  0004886f
GPR08: 0226 0003 0002 fffd
GPR12: 88000484 c0001ec96280  
GPR16:   0004 0003
GPR20: c0047814ffe0 c0077c08 0010 c13332c8
GPR24:  c11f6cc0  
GPR28: ffef 0001 00015000 1000
NIP [c0403f34] add_memory_resource+0x244/0x340
LR [c0403f2c] add_memory_resource+0x23c/0x340
Call Trace:
[c004876e38f0] [c0403f2c] add_memory_resource+0x23c/0x340 
(unreliable)
[c004876e39c0] [c040408c] __add_memory+0x5c/0xf0
[c004876e39f0] [c00e2b94] dlpar_add_lmb+0x1b4/0x500
[c004876e3ad0] [c00e3888] dlpar_memory+0x1f8/0xb80
[c004876e3b60] [c00dc0d0] handle_dlpar_errorlog+0xc0/0x190
[c004876e3bd0] [c00dc398] dlpar_store+0x198/0x4a0
[c004876e3c90] [c072e630] kobj_attr_store+0x30/0x50
[c004876e3cb0] [c051f954] sysfs_kf_write+0x64/0x90
[c004876e3cd0] [c051ee40] kernfs_fop_write+0x1b0/0x290
[c004876e3d20] [c0438dd8] vfs_write+0xe8/0x290
[c004876e3d70] [c04391ac] ksys_write+0xdc/0x130
[c004876e3dc0] [c0034e40] system_call_exception+0x160/0x270
[c004876e3e20] [c000d740] system_call_common+0xf0/0x27c
Instruction dump:
48442e35 6000 0b03 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
78a58402 48442db1 6000 7c7c1b78 <0b03> 7f23cb78 4bda371d 6000
---[ end trace 562fd6c109cd0fb2 ]---

Re: [PATCH] mm: don't rely on system state to detect hot-plug operations

2020-09-09 Thread Michal Hocko
[reposting because the malformed cc list confused my email client]

On Tue 08-09-20 19:08:35, Laurent Dufour wrote:
> In register_mem_sect_under_node() the system_state value is checked to
> detect whether the call is made at boot time or during a
> hot-plug operation. Unfortunately, that check is wrong on some
> architectures, and may lead to sections being registered under multiple
> nodes if the nodes' memory ranges are interleaved.

Why is this check arch specific?

> This can be seen on PowerPC LPAR after multiple memory hot-plug and
> hot-unplug operations are done. At the next reboot the node's memory ranges
> can be interleaved

What is the exact memory layout?

> and since the call to link_mem_sections() is made in
> topology_init() while the system is in the SYSTEM_SCHEDULING state, the
> node's id is not checked, and the sections are registered multiple times.

So a single memory section/memblock belongs to two numa nodes?

> In
> that case, the system is able to boot but a later hot-plug operation may lead
> to this panic because the node's links are correctly broken:

Correctly broken? Could you provide more details on the inconsistency
please?

Which physical memory range you are trying to add here and what is the
node affinity?

> [ cut here ]
> kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
> Oops: Exception in kernel mode, sig: 5 [#1]
> LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
> Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto 
> gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
> CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
> NIP:  c0403f34 LR: c0403f2c CTR: 
> REGS: c004876e3660 TRAP: 0700   Not tainted  (5.9.0-rc1+)
> MSR:  8282b033   CR: 24000448  XER: 
> 2004
> CFAR: c0846d20 IRQMASK: 0
> GPR00: c0403f2c c004876e38f0 c12f6f00 ffef
> GPR04: 0227 c004805ae680  0004886f
> GPR08: 0226 0003 0002 fffd
> GPR12: 88000484 c0001ec96280  
> GPR16:   0004 0003
> GPR20: c0047814ffe0 c0077c08 0010 c13332c8
> GPR24:  c11f6cc0  
> GPR28: ffef 0001 00015000 1000
> NIP [c0403f34] add_memory_resource+0x244/0x340
> LR [c0403f2c] add_memory_resource+0x23c/0x340
> Call Trace:
> [c004876e38f0] [c0403f2c] add_memory_resource+0x23c/0x340 
> (unreliable)
> [c004876e39c0] [c040408c] __add_memory+0x5c/0xf0
> [c004876e39f0] [c00e2b94] dlpar_add_lmb+0x1b4/0x500
> [c004876e3ad0] [c00e3888] dlpar_memory+0x1f8/0xb80
> [c004876e3b60] [c00dc0d0] handle_dlpar_errorlog+0xc0/0x190
> [c004876e3bd0] [c00dc398] dlpar_store+0x198/0x4a0
> [c004876e3c90] [c072e630] kobj_attr_store+0x30/0x50
> [c004876e3cb0] [c051f954] sysfs_kf_write+0x64/0x90
> [c004876e3cd0] [c051ee40] kernfs_fop_write+0x1b0/0x290
> [c004876e3d20] [c0438dd8] vfs_write+0xe8/0x290
> [c004876e3d70] [c04391ac] ksys_write+0xdc/0x130
> [c004876e3dc0] [c0034e40] system_call_exception+0x160/0x270
> [c004876e3e20] [c000d740] system_call_common+0xf0/0x27c
> Instruction dump:
> 48442e35 6000 0b03 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
> 78a58402 48442db1 6000 7c7c1b78 <0b03> 7f23cb78 4bda371d 6000
> ---[ end trace 562fd6c109cd0fb2 ]---

The BUG_ON on failure is absolutely horrendous. There must be a better
way to handle a failure like that. The failure means that
sysfs_create_link_nowarn has failed. Please describe why that is the
case.

> This patch addresses the root cause by not relying on the system_state
> value to detect whether the call is due to a hot-plug operation or not. An
> additional parameter is added to link_mem_sections() to tell the context of
> the call and this parameter is propagated to register_mem_sect_under_node()
> through the walk_memory_blocks() call.

This looks like a hack to me and it deserves a better explanation. The
existing code is a hack on its own and it is inconsistent with other
boot time detection. We are using (system_state < SYSTEM_RUNNING) at other
places IIRC. Would it help to use the same here as well? Maybe we want to
wrap that inside a helper (early_memory_init()) and use it at all
places.

> Fixes: 4fbce633910e ("mm/memory_hotplug.c: make 
> register_mem_sect_under_node() a callback of walk_memory_range()")
> Signed-off-by: Laurent Dufour 
> Cc: sta...@vger.kernel.org
> Cc: Greg Kroah-Hartman 
> Cc: "Rafael J. Wysocki" 
> Cc: Andrew Morton 
> ---
>  drivers/base/node.c  | 20 +++-
>