On 31.10.18 19:06, Omar Sandoval wrote:
> From: Omar Sandoval <osan...@fb.com>
>
> There's a race between close_ctree() and cleaner_kthread().
> close_ctree() sets btrfs_fs_closing(), and the cleaner stops when it
> sees it set, but this is racy; the cleaner might have already checked
> the bit and could be cleaning stuff. In particular, if it deletes unused
> block groups, it will create delayed iputs for the free space cache
> inodes. As of "btrfs: don't run delayed_iputs in commit", we're no
> longer running delayed iputs after a commit. Therefore, if the cleaner
> creates more delayed iputs after delayed iputs are run in
> btrfs_commit_super(), we will leak inodes on unmount and get a busy
> inode crash from the VFS.
>
> Fix it by parking the cleaner before we actually close anything. Then,
> any remaining delayed iputs will always be handled in
> btrfs_commit_super(). This also ensures that the commit in close_ctree()
> is really the last commit, so we can get rid of the commit in
> cleaner_kthread().
>
> Fixes: 30928e9baac2 ("btrfs: don't run delayed_iputs in commit")
> Signed-off-by: Omar Sandoval <osan...@fb.com>
Also, I believe this patch renders the wake_up_process() in
btrfs_commit_super() a no-op, so it could be removed as well, which
leaves a single place that can wake up the cleaner: transaction_kthread.
So couldn't we stop both the transaction and cleaner kthreads right
after setting BTRFS_FS_CLOSING_START, and commit the transaction in
close_ctree() wherever we deem necessary (in btrfs_commit_super(), for
example)?
> ---
> Changes from v1:
>
> - Add a comment explaining why it needs to be a kthread_park(), not
> kthread_stop()
> - Update later comment now that the cleaner thread is definitely stopped
>
> fs/btrfs/disk-io.c | 51 ++++++++++++++--------------------------------
> 1 file changed, 15 insertions(+), 36 deletions(-)
>
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index b0ab41da91d1..40bcc45d827d 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -1664,9 +1664,8 @@ static int cleaner_kthread(void *arg)
> struct btrfs_root *root = arg;
> struct btrfs_fs_info *fs_info = root->fs_info;
> int again;
> - struct btrfs_trans_handle *trans;
>
> - do {
> + while (1) {
> again = 0;
>
> /* Make the cleaner go to sleep early. */
> @@ -1715,42 +1714,16 @@ static int cleaner_kthread(void *arg)
> */
> btrfs_delete_unused_bgs(fs_info);
> sleep:
> + if (kthread_should_park())
> + kthread_parkme();
> + if (kthread_should_stop())
> + return 0;
> if (!again) {
> set_current_state(TASK_INTERRUPTIBLE);
> - if (!kthread_should_stop())
> - schedule();
> + schedule();
> __set_current_state(TASK_RUNNING);
> }
> - } while (!kthread_should_stop());
> -
> - /*
> - * Transaction kthread is stopped before us and wakes us up.
> - * However we might have started a new transaction and COWed some
> - * tree blocks when deleting unused block groups for example. So
> - * make sure we commit the transaction we started to have a clean
> - * shutdown when evicting the btree inode - if it has dirty pages
> - * when we do the final iput() on it, eviction will trigger a
> - * writeback for it which will fail with null pointer dereferences
> - * since work queues and other resources were already released and
> - * destroyed by the time the iput/eviction/writeback is made.
> - */
> - trans = btrfs_attach_transaction(root);
> - if (IS_ERR(trans)) {
> - if (PTR_ERR(trans) != -ENOENT)
> - btrfs_err(fs_info,
> - "cleaner transaction attach returned %ld",
> - PTR_ERR(trans));
> - } else {
> - int ret;
> -
> - ret = btrfs_commit_transaction(trans);
> - if (ret)
> - btrfs_err(fs_info,
> - "cleaner open transaction commit returned %d",
> - ret);
> }
> -
> - return 0;
> }
>
> static int transaction_kthread(void *arg)
> @@ -3931,6 +3904,13 @@ void close_ctree(struct btrfs_fs_info *fs_info)
> int ret;
>
> set_bit(BTRFS_FS_CLOSING_START, &fs_info->flags);
> + /*
> + * We don't want the cleaner to start new transactions, add more delayed
> + * iputs, etc. while we're closing. We can't use kthread_stop() yet
> + * because that frees the task_struct, and the transaction kthread might
> + * still try to wake up the cleaner.
> + */
> + kthread_park(fs_info->cleaner_kthread);
>
> /* wait for the qgroup rescan worker to stop */
> btrfs_qgroup_wait_for_completion(fs_info, false);
> @@ -3958,9 +3938,8 @@ void close_ctree(struct btrfs_fs_info *fs_info)
>
> if (!sb_rdonly(fs_info->sb)) {
> /*
> - * If the cleaner thread is stopped and there are
> - * block groups queued for removal, the deletion will be
> - * skipped when we quit the cleaner thread.
> + * The cleaner kthread is stopped, so do one final pass over
> + * unused block groups.
> */
> btrfs_delete_unused_bgs(fs_info);
>
>