Re: [RFC V1 07/13] vhost-vdpa: flush workers on suspend
On 1/11/2024 9:28 PM, Jason Wang wrote:
> On Fri, Jan 12, 2024 at 12:18 AM Mike Christie wrote:
>>
>> On 1/10/24 9:09 PM, Jason Wang wrote:
>>> On Thu, Jan 11, 2024 at 4:40 AM Steve Sistare wrote:
>>>>
>>>> To pass ownership of a live vdpa device to a new process, the user
>>>> suspends the device, calls VHOST_NEW_OWNER to change the mm, and calls
>>>> VHOST_IOTLB_REMAP to change the user virtual addresses to match the new
>>>> mm.  Flush workers in suspend to guarantee that no worker sees the new
>>>> mm and old VA in between.
>>>>
>>>> Signed-off-by: Steve Sistare
>>>> ---
>>>>  drivers/vhost/vdpa.c | 4 ++++
>>>>  1 file changed, 4 insertions(+)
>>>>
>>>> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
>>>> index 8fe1562d24af..9673e8e20d11 100644
>>>> --- a/drivers/vhost/vdpa.c
>>>> +++ b/drivers/vhost/vdpa.c
>>>> @@ -591,10 +591,14 @@ static long vhost_vdpa_suspend(struct vhost_vdpa *v)
>>>>  {
>>>>         struct vdpa_device *vdpa = v->vdpa;
>>>>         const struct vdpa_config_ops *ops = vdpa->config;
>>>> +       struct vhost_dev *vdev = &v->vdev;
>>>>
>>>>         if (!ops->suspend)
>>>>                 return -EOPNOTSUPP;
>>>>
>>>> +       if (vdev->use_worker)
>>>> +               vhost_dev_flush(vdev);
>>>
>>> It looks to me like it's better to check use_worker in vhost_dev_flush.
>>
>> You can now just call vhost_dev_flush and it will do the right thing.
>> The xa_for_each loop will only flush workers if they have been setup,
>> so for vdpa it will not find/flush anything.

Very good, I will drop this patch.

- Steve
Re: [RFC V1 07/13] vhost-vdpa: flush workers on suspend
On Fri, Jan 12, 2024 at 12:18 AM Mike Christie wrote:
>
> On 1/10/24 9:09 PM, Jason Wang wrote:
> > On Thu, Jan 11, 2024 at 4:40 AM Steve Sistare wrote:
> >>
> >> To pass ownership of a live vdpa device to a new process, the user
> >> suspends the device, calls VHOST_NEW_OWNER to change the mm, and calls
> >> VHOST_IOTLB_REMAP to change the user virtual addresses to match the new
> >> mm.  Flush workers in suspend to guarantee that no worker sees the new
> >> mm and old VA in between.
> >>
> >> Signed-off-by: Steve Sistare
> >> ---
> >>  drivers/vhost/vdpa.c | 4 ++++
> >>  1 file changed, 4 insertions(+)
> >>
> >> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> >> index 8fe1562d24af..9673e8e20d11 100644
> >> --- a/drivers/vhost/vdpa.c
> >> +++ b/drivers/vhost/vdpa.c
> >> @@ -591,10 +591,14 @@ static long vhost_vdpa_suspend(struct vhost_vdpa *v)
> >>  {
> >>         struct vdpa_device *vdpa = v->vdpa;
> >>         const struct vdpa_config_ops *ops = vdpa->config;
> >> +       struct vhost_dev *vdev = &v->vdev;
> >>
> >>         if (!ops->suspend)
> >>                 return -EOPNOTSUPP;
> >>
> >> +       if (vdev->use_worker)
> >> +               vhost_dev_flush(vdev);
> >
> > It looks to me like it's better to check use_worker in vhost_dev_flush.
> >
>
> You can now just call vhost_dev_flush and it will do the right thing.
> The xa_for_each loop will only flush workers if they have been setup,
> so for vdpa it will not find/flush anything.

Right.

Thanks
Re: [RFC V1 07/13] vhost-vdpa: flush workers on suspend
On 1/10/24 9:09 PM, Jason Wang wrote:
> On Thu, Jan 11, 2024 at 4:40 AM Steve Sistare wrote:
>>
>> To pass ownership of a live vdpa device to a new process, the user
>> suspends the device, calls VHOST_NEW_OWNER to change the mm, and calls
>> VHOST_IOTLB_REMAP to change the user virtual addresses to match the new
>> mm.  Flush workers in suspend to guarantee that no worker sees the new
>> mm and old VA in between.
>>
>> Signed-off-by: Steve Sistare
>> ---
>>  drivers/vhost/vdpa.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
>> index 8fe1562d24af..9673e8e20d11 100644
>> --- a/drivers/vhost/vdpa.c
>> +++ b/drivers/vhost/vdpa.c
>> @@ -591,10 +591,14 @@ static long vhost_vdpa_suspend(struct vhost_vdpa *v)
>>  {
>>         struct vdpa_device *vdpa = v->vdpa;
>>         const struct vdpa_config_ops *ops = vdpa->config;
>> +       struct vhost_dev *vdev = &v->vdev;
>>
>>         if (!ops->suspend)
>>                 return -EOPNOTSUPP;
>>
>> +       if (vdev->use_worker)
>> +               vhost_dev_flush(vdev);
>
> It looks to me like it's better to check use_worker in vhost_dev_flush.
>

You can now just call vhost_dev_flush and it will do the right thing.
The xa_for_each loop will only flush workers if they have been setup,
so for vdpa it will not find/flush anything.
Re: [RFC V1 07/13] vhost-vdpa: flush workers on suspend
On Thu, Jan 11, 2024 at 4:40 AM Steve Sistare wrote:
>
> To pass ownership of a live vdpa device to a new process, the user
> suspends the device, calls VHOST_NEW_OWNER to change the mm, and calls
> VHOST_IOTLB_REMAP to change the user virtual addresses to match the new
> mm.  Flush workers in suspend to guarantee that no worker sees the new
> mm and old VA in between.
>
> Signed-off-by: Steve Sistare
> ---
>  drivers/vhost/vdpa.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index 8fe1562d24af..9673e8e20d11 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -591,10 +591,14 @@ static long vhost_vdpa_suspend(struct vhost_vdpa *v)
>  {
>         struct vdpa_device *vdpa = v->vdpa;
>         const struct vdpa_config_ops *ops = vdpa->config;
> +       struct vhost_dev *vdev = &v->vdev;
>
>         if (!ops->suspend)
>                 return -EOPNOTSUPP;
>
> +       if (vdev->use_worker)
> +               vhost_dev_flush(vdev);

It looks to me like it's better to check use_worker in vhost_dev_flush.

Thanks

> +
>         return ops->suspend(vdpa);
> }
>
> --
> 2.39.3
>
[RFC V1 07/13] vhost-vdpa: flush workers on suspend
To pass ownership of a live vdpa device to a new process, the user
suspends the device, calls VHOST_NEW_OWNER to change the mm, and calls
VHOST_IOTLB_REMAP to change the user virtual addresses to match the new
mm.  Flush workers in suspend to guarantee that no worker sees the new
mm and old VA in between.

Signed-off-by: Steve Sistare
---
 drivers/vhost/vdpa.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 8fe1562d24af..9673e8e20d11 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -591,10 +591,14 @@ static long vhost_vdpa_suspend(struct vhost_vdpa *v)
 {
        struct vdpa_device *vdpa = v->vdpa;
        const struct vdpa_config_ops *ops = vdpa->config;
+       struct vhost_dev *vdev = &v->vdev;

        if (!ops->suspend)
                return -EOPNOTSUPP;

+       if (vdev->use_worker)
+               vhost_dev_flush(vdev);
+
        return ops->suspend(vdpa);
 }

--
2.39.3