On Thu, 24 Mar 2022 10:34:09 +0800 Jason Wang wrote:
> On Thu, Mar 24, 2022 at 8:54 AM Hillf Danton <hdan...@sina.com> wrote:
> >
> > On Tue, 22 Mar 2022 09:59:14 +0800 Jason Wang wrote:
> > >
> > > Yes, there will be no "infinite" loop, but since the loop is triggered
> > > by userspace, it looks to me like it can delay the flush/drain of the
> > > workqueue forever, which is still suboptimal.
> >
> > Usually it is barely possible to kill two birds with one stone.
> >
> > Given the "forever", I am inclined not to run faster, hehe, though
> > another stopgap is to add a check in the loop for whether mvdev has been
> > unregistered, and for example make mvdev->cvq unready before destroying
> > the workqueue.
> >
> > static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev,
> >                               struct vdpa_device *dev)
> > {
> >         struct mlx5_vdpa_mgmtdev *mgtdev = container_of(v_mdev,
> >                                         struct mlx5_vdpa_mgmtdev, mgtdev);
> >         struct mlx5_vdpa_dev *mvdev = to_mvdev(dev);
> >         struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
> >
> >         mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
> >         destroy_workqueue(mvdev->wq);
> >         _vdpa_unregister_device(dev);
> >         mgtdev->ndev = NULL;
> > }
> >
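
To make the placement concrete, that amounts to something like the hunk
below on top of mlx5_vdpa_dev_del() (the cvq.ready field name is from my
reading of mlx5_vnet.c, so take it as a sketch rather than a tested patch),
plus a matching READ_ONCE(mvdev->cvq.ready) check in the kick handler's
loop so it bails out once the device is being torn down:

         mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
+        /* make the control vq unready so the kick handler stops
+         * looping/requeueing before its workqueue is destroyed
+         */
+        mvdev->cvq.ready = false;
         destroy_workqueue(mvdev->wq);
         _vdpa_unregister_device(dev);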
> 
> Yes, so we have two options:
> 
> 1) using a quota for re-queueing the work
> 2) using something like
> 
> while (READ_ONCE(cvq->ready)) {
>         ...
>         cond_resched();
> }
> 
> There should not be too much difference, except that we need to use
> cancel_work_sync() instead of flush_work() for 1).
> 
> I would keep the code as is, but if you insist I can change it.
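
For the record, my reading of 1) is roughly the sketch below: handle at
most a fixed number of control commands per work invocation, requeue the
work when the quota is exhausted, and let the teardown path use
cancel_work_sync() rather than flush_work(). The quota value and the
handle_one_ctrl_cmd() helper are made up for illustration, and the
structure names follow my reading of mlx5_vnet.c, so treat this as a
sketch rather than the actual driver code.

#define MLX5_CVQ_BUDGET 16      /* illustrative quota, not a tuned value */

static void mlx5_cvq_kick_handler(struct work_struct *work)
{
        struct mlx5_vdpa_wq_ent *wqent = container_of(work,
                                        struct mlx5_vdpa_wq_ent, work);
        struct mlx5_vdpa_dev *mvdev = wqent->mvdev;
        int n = 0;

        while (n < MLX5_CVQ_BUDGET) {
                /* stop once the control vq is no longer ready */
                if (!READ_ONCE(mvdev->cvq.ready))
                        return;
                /* placeholder: returns false once no request is pending */
                if (!handle_one_ctrl_cmd(mvdev))
                        return;
                n++;
        }
        /* quota exhausted with requests possibly left: requeue instead of
         * monopolizing the worker, and rely on cancel_work_sync() at
         * teardown instead of flush_work()
         */
        queue_work(mvdev->wq, &wqent->work);
}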

No Sir, I would not - I am simply not a fan of work requeue.

Hillf