On 05/29/2018 06:57 AM, Anand Jain wrote:
On 05/28/2018 11:40 PM, David Sterba wrote:
On Mon, May 28, 2018 at 10:43:29PM +0800, Anand Jain wrote:
btrfs_free_extra_devids() is called only in the mount context; it
traverses fs_devices::devices and frees any orphan devices found in
the given %fs_devices. As the search for orphan devices is limited to
fs_devices::devices, we don't need the global uuid_mutex.
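For reference, the shape of the change being described is roughly the
following (a condensed sketch, not the literal patch; the orphan test
and the seed traversal are elided):

void btrfs_free_extra_devids(struct btrfs_fs_devices *fs_devices, int step)
{
        struct btrfs_device *device, *next;

        /*
         * Scope the cleanup to this fs_devices' own lock rather than
         * the global uuid_mutex: only fs_devices->devices is touched.
         */
        mutex_lock(&fs_devices->device_list_mutex);
        list_for_each_entry_safe(device, next, &fs_devices->devices, dev_list) {
                /* ... unlink and free the orphan (never-opened) devices ... */
        }
        mutex_unlock(&fs_devices->device_list_mutex);
}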
There can't be any mount-point based ioctl threads in this context, as
the mount thread has not yet returned. But there can be a btrfs-control
based scan ioctl thread which calls device_list_add().
Here in the mount thread, fs_devices::opened is incremented well before
btrfs_free_extra_devids() is called, and in the scan context an
fs_devices which is already opened can neither be freed nor reallocated
by device_list_add().
But let's say you change the device path and call the scan again; the
scan would then update the new device path, and this operation could
race against the btrfs_free_extra_devids() thread, which might be in
the process of freeing the same device. So synchronize them using the
device_list_mutex.
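Concretely, the two racing sides look roughly like this (a simplified
sketch with bodies trimmed; the helper names are as I recall them from
fs/btrfs/volumes.c of this era):

/*
 * Scan side, in device_list_add(): an already-known device was
 * rescanned under a new path, so its name gets swapped.
 */
struct rcu_string *name;

mutex_lock(&fs_devices->device_list_mutex);
name = rcu_string_strdup(path, GFP_NOFS);
rcu_string_free(device->name);
rcu_assign_pointer(device->name, name);
mutex_unlock(&fs_devices->device_list_mutex);

/*
 * Mount side, in btrfs_free_extra_devids(): the same device may be
 * judged an orphan and torn down. Without taking the same mutex this
 * free can overlap the name update above.
 */
mutex_lock(&fs_devices->device_list_mutex);
list_del_init(&device->dev_list);
fs_devices->num_devices--;
btrfs_free_device(device);              /* frees device->name too */
mutex_unlock(&fs_devices->device_list_mutex);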
This scenario is a very narrow corner case, and in practice scan and
mount are serialized by usage anyway, so unless the race is
instrumented it is very difficult to hit.
Signed-off-by: Anand Jain <anand.j...@oracle.com>
---
(I didn't see this email in the mailing list, so trying again).
v3->v4: As we traverse through the seed devices, fs_devices gets
updated with the child seed fs_devices, so make sure we use the same
fs_devices pointer for the mutex_unlock as was used for the mutex_lock.
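That is, the loop reassigns the local fs_devices as it descends into
the seed clones, so the lock owner has to be pinned first (a sketch;
the local variable name here is illustrative):

struct btrfs_fs_devices *parent = fs_devices;   /* lock owner */

mutex_lock(&parent->device_list_mutex);
again:
        /* ... prune orphans on fs_devices->devices ... */
        if (fs_devices->seed) {
                fs_devices = fs_devices->seed;  /* now a cloned seed */
                goto again;
        }
mutex_unlock(&parent->device_list_mutex);       /* same lock we took */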
Well, now that I see the change, shouldn't we always hold the
device_list_mutex of the fs_devices that's being processed? Ie. each
time it's switched, the previous one is unlocked and the new one locked.
No David. That's because we organize seed devices under their parent
fs_devices ((fs_devices::seed)::seed) and so on, and they are a local
cloned copy of the original seed fs_devices. So the parent's
fs_devices::device_list_mutex lock will suffice.
On second thought, theoretically you are right. But practically there
isn't any use case which would benefit from the intricate locking you
suggested above.
I am following this method: holding the lock of the parent fs_devices
(which is also the mounted fs_devices) implicitly locks its dependent
cloned seed fs_devices, since to reach a cloned seed device we first
have to traverse from the parent fs_devices.
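To illustrate the organization (relevant fields only; the real struct
in fs/btrfs/volumes.h has many more):

/*
 * mounted fs_devices
 *     ->seed ---> cloned seed fs_devices
 *                     ->seed ---> next clone ... ---> NULL
 *
 * The clones are private to this mount and reachable only through the
 * parent, so holding the parent's device_list_mutex covers the whole
 * chain.
 */
struct btrfs_fs_devices {
        struct list_head devices;               /* btrfs_device list */
        struct mutex device_list_mutex;         /* protects ->devices */
        struct btrfs_fs_devices *seed;          /* chain of cloned seeds */
};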
Thanks, Anand