On Thu, 2010-07-15 at 15:19 +0300, Michael S. Tsirkin wrote:
> We flush under vq mutex when changing backends.
> This creates a deadlock, as the workqueue being flushed
> needs this lock as well.
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=612421
> 
> Drop the vq mutex before flush: we have the device mutex
> which is sufficient to prevent another ioctl from touching
> the vq.

Why do we need to flush the vq when trying to set the backend and
we find that it is already set? Is this just an optimization?

Thanks
Sridhar
> 
> Signed-off-by: Michael S. Tsirkin <m...@redhat.com>
> ---
>  drivers/vhost/net.c |    5 +++++
>  1 files changed, 5 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 28d7786..50df58e6 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -534,11 +534,16 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
>       rcu_assign_pointer(vq->private_data, sock);
>       vhost_net_enable_vq(n, vq);
>  done:
> +     mutex_unlock(&vq->mutex);
> +
>       if (oldsock) {
>               vhost_net_flush_vq(n, index);
>               fput(oldsock->file);
>       }
> 
> +     mutex_unlock(&n->dev.mutex);
> +     return 0;
> +
>  err_vq:
>       mutex_unlock(&vq->mutex);
>  err:

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/virtualization
