On 24.04.19 23:34, Pavel Tatashin wrote:
>>>> +static int
>>>> +offline_memblock_cb(struct memory_block *mem, void *arg)
>>>
>>> Function name suggests that you are actually trying to offline memory
>>> here. Maybe check_memblocks_offline_cb(), just like we have in
>>> mm/memory_hotplug.c.
> 
> Makes sense, I will rename to check_memblocks_offline_cb()
> 
>>>> +     lock_device_hotplug();
>>>> +     rc = walk_memory_range(start_pfn, end_pfn, dev, offline_memblock_cb);
>>>> +     unlock_device_hotplug();
>>>> +
>>>> +     /*
>>>> +      * If admin has not offlined memory beforehand, we cannot hotremove dax.
>>>> +      * Unfortunately, because unbind will still succeed there is no way for
>>>> +      * user to hotremove dax after this.
>>>> +      */
>>>> +     if (rc)
>>>> +             return rc;
>>>
>>> Can't it happen that there is a race between you checking if memory is
>>> offline and an admin onlining memory again? Maybe pull the
>>> remove_memory() into the locked region, using __remove_memory() instead.
>>
>> I think the race is ok. The admin gets to keep the pieces for allowing
>> racing updates to the state, and the kernel will keep the range active
>> until the next reboot.
> 
> Thank you for noticing this. I will pull it into the locked region,
> because __remove_memory() has this code:
> 
>   ret = walk_memory_range(PFN_DOWN(start), PFN_UP(start + size - 1), NULL,
>                           check_memblock_offlined_cb);
>   if (ret)
>           BUG();
> 
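
Sounds good. Pulled into the locked region, it would look roughly like
this (just a sketch; I'm assuming nid/start/size are available in the
caller alongside start_pfn/end_pfn from the quoted patch):

	lock_device_hotplug();
	rc = walk_memory_range(start_pfn, end_pfn, dev,
			       check_memblocks_offline_cb);
	if (!rc)
		__remove_memory(nid, start, size);
	unlock_device_hotplug();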

Yes, also I think you can let go of the device_lock in
check_memblocks_offline_cb; lock_device_hotplug() should take care of
this (see Documentation/core-api/memory-hotplug.rst, "locking internals").
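
For reference, a minimal sketch of what the callback could then look
like (modeled on check_memblock_offlined_cb in mm/memory_hotplug.c,
reusing the existing is_memblock_offlined() helper):

	static int check_memblocks_offline_cb(struct memory_block *mem,
					      void *arg)
	{
		/*
		 * The caller holds lock_device_hotplug(), so mem->state
		 * cannot change under us; no device_lock() needed.
		 */
		if (!is_memblock_offlined(mem))
			return -EBUSY;
		return 0;
	}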

Cheers!

-- 

Thanks,

David / dhildenb