Hi,

On Thu, Dec 14, 2017 at 04:23:43PM +0100, Christoph Hellwig wrote:
> From: Matthew Wilcox <mawil...@microsoft.com>
> 
> The epoll code currently uses the unlocked waitqueue helpers for managing

The userfaultfd code

> fault_wqh, but instead of holding the waitqueue lock for this waitqueue
> around these calls, it takes the waitqueue lock of fault_pending_wqh, which is
> a different waitqueue instance.  Given that the waitqueue is not exposed
> to the rest of the kernel this actually works ok at the moment, but
> prevents the epoll locking rules from being enforced using lockdep.

ditto: userfaultfd, not epoll

> Switch to the internally locked waitqueue helpers instead.  This means
> that the lock inside fault_wqh now nests inside the fault_pending_wqh
> lock, but that's not a problem since it was entireyl unused before.

spelling: entirely
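
FWIW, with that the resulting nesting is straightforward. A minimal
sketch of the pattern (not code from the patch; the two waitqueue
heads below are stand-ins for fault_pending_wqh and fault_wqh):

	#include <linux/wait.h>
	#include <linux/sched.h>

	/* stand-ins for ctx->fault_pending_wqh and ctx->fault_wqh */
	static DECLARE_WAIT_QUEUE_HEAD(pending_wqh);
	static DECLARE_WAIT_QUEUE_HEAD(wqh);

	static void wake_both(void *key)
	{
		spin_lock(&pending_wqh.lock);
		/* pending_wqh.lock is held, so the unlocked helper is
		 * still correct for this waitqueue */
		__wake_up_locked_key(&pending_wqh, TASK_NORMAL, key);
		/* __wake_up() takes wqh.lock internally, so it now nests
		 * inside pending_wqh.lock where lockdep can track it */
		__wake_up(&wqh, TASK_NORMAL, 1, key);
		spin_unlock(&pending_wqh.lock);
	}

The add_wait_queue() switch in the read path is the same idea: it
takes fault_wqh.lock itself rather than relying on the caller holding
it.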

> Signed-off-by: Matthew Wilcox <mawil...@microsoft.com>
> [hch: slight changelog updates]
> Signed-off-by: Christoph Hellwig <h...@lst.de>

Reviewed-by: Mike Rapoport <r...@linux.vnet.ibm.com>

> ---
>  fs/userfaultfd.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index ac9a4e65ca49..a39bc3237b68 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -879,7 +879,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
>        */
>       spin_lock(&ctx->fault_pending_wqh.lock);
>       __wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL, &range);
> -     __wake_up_locked_key(&ctx->fault_wqh, TASK_NORMAL, &range);
> +     __wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, &range);
>       spin_unlock(&ctx->fault_pending_wqh.lock);
> 
>       /* Flush pending events that may still wait on event_wqh */
> @@ -1045,7 +1045,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
>                        * anyway.
>                        */
>                       list_del(&uwq->wq.entry);
> -                     __add_wait_queue(&ctx->fault_wqh, &uwq->wq);
> +                     add_wait_queue(&ctx->fault_wqh, &uwq->wq);
> 
>                       write_seqcount_end(&ctx->refile_seq);
> 
> @@ -1194,7 +1194,7 @@ static void __wake_userfault(struct userfaultfd_ctx *ctx,
>               __wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL,
>                                    range);
>       if (waitqueue_active(&ctx->fault_wqh))
> -             __wake_up_locked_key(&ctx->fault_wqh, TASK_NORMAL, range);
> +             __wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, range);
>       spin_unlock(&ctx->fault_pending_wqh.lock);
>  }
> 
> -- 
> 2.14.2
> 

-- 
Sincerely yours,
Mike.
