wake_up() checks whether anyone is on the wait queue while holding the waitqueue's spin_lock().
In some btrfs code we don't check waitqueue_active() first, so the spin_lock()/spin_unlock() pair
in wake_up() is taken even when no one is waiting on the queue.  There are more wake_up()s without
waitqueue_active(), but these two are the hottest ones I've run into so far.

Signed-off-by: Liu Bo <bo.li....@oracle.com>
---
 fs/btrfs/extent_io.c    | 9 ++++++++-
 fs/btrfs/ordered-data.c | 8 +++++++-
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 825fad6..e2dc042 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -536,8 +536,15 @@ static struct extent_state *clear_state_bit(struct extent_io_tree *tree,
 	clear_state_cb(tree, state, bits);
 	add_extent_changeset(state, bits_to_clear, changeset, 0);
 	state->state &= ~bits_to_clear;
-	if (wake)
+
+	assert_spin_locked(&tree->lock);
+	/*
+	 * spin_lock is acquired by both waker and waiter, thus no
+	 * need to restrict the order.
+	 */
+	if (wake && waitqueue_active(&state->wq))
 		wake_up(&state->wq);
+
 	if (state->state == 0) {
 		next = next_state(state);
 		if (extent_state_in_tree(state)) {
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index a3aca49..e439fb4 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -647,7 +647,13 @@ void btrfs_remove_ordered_extent(struct inode *inode,
 		spin_unlock(&fs_info->ordered_root_lock);
 	}
 	spin_unlock(&root->ordered_extent_lock);
-	wake_up(&entry->wait);
+
+	/*
+	 * Setting the flag is protected by the spin_lock pair, which has
+	 * an implicit memory barrier.
+	 */
+	if (waitqueue_active(&entry->wait))
+		wake_up(&entry->wait);
 }
 
 static void btrfs_run_ordered_extent_work(struct btrfs_work *work)
-- 
2.9.4
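
For reference, a minimal kernel-style sketch (not part of the patch) of the waker/waiter
pattern that waitqueue_active() relies on.  The names my_wq, my_cond, my_waker and
my_waiter are hypothetical, and the explicit smp_mb() here stands in for the
spin_lock/spin_unlock pairs that the btrfs hunks above rely on for ordering:

	#include <linux/wait.h>
	#include <linux/sched.h>

	static DECLARE_WAIT_QUEUE_HEAD(my_wq);	/* hypothetical wait queue */
	static bool my_cond;			/* hypothetical condition  */

	static void my_waker(void)
	{
		my_cond = true;

		/*
		 * Pairs with the barrier in prepare_to_wait()/set_current_state()
		 * on the waiter side, so a waiter that is about to sleep either
		 * sees my_cond or is already visible on the queue.
		 */
		smp_mb();

		/* Skip wake_up()'s spinlock when nobody is queued. */
		if (waitqueue_active(&my_wq))
			wake_up(&my_wq);
	}

	static void my_waiter(void)
	{
		/* wait_event() re-checks my_cond after queueing itself. */
		wait_event(my_wq, my_cond);
	}

Without the barrier (or an equivalent lock pairing), the unlocked waitqueue_active()
check could race with a waiter that has tested the condition but not yet queued
itself, and the wakeup would be lost.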