Re: [PATCH 0/3] trivial: Fix several compilation errors/warnings with GCC12

2022-05-15 Thread Davidlohr Bueso

Hello - What is the status of this? Currently gcc 12 (tumbleweed) is unable to
build Linus' latest because of splats in the scheduler headers...

Thanks,
Davidlohr

On Thu, 14 Apr 2022, Christophe de Dinechin wrote:


Compiling with GCC 12 using defconfig generates a number of build errors
due to new warnings, notably array-bounds checks. Some of these warnings appear
legitimate and relatively easy to fix.

Note that this series is not sufficient for a clean build yet. There are
in particular a number of warnings reported by the array-bounds check
that appear bogus, like:

| In function '__native_read_cr3',
| inlined from '__read_cr3'
| at ./arch/x86/include/asm/special_insns.h:169:9,
| inlined from 'read_cr3_pa'
| at ./arch/x86/include/asm/processor.h:252:9,
| inlined from 'relocate_restore_code'
| at arch/x86/power/hibernate.c:165:17:
| ./arch/x86/include/asm/special_insns.h:48:9: error:
| array subscript 0 is outside array bounds of 'unsigned int[0]'
| [-Werror=array-bounds]
| 48 | asm volatile("mov %%cr3,%0\n\t" : "=r" (val) : __FORCE_ORDER);
|    | ^~~
| cc1: all warnings being treated as errors

The error above is for an instruction that does not obviously address any
C array, in particular since the asm constraint is "=r" and not "=rm".
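
For reference, here is a minimal standalone sketch of the pattern the warning
fires on (modelled on the __native_read_cr3() snippet quoted above; the name
and the simplified asm are illustrative assumptions, not the exact kernel
code):

/*
 * The asm output constraint is "=r" (a register), so no C object is
 * dereferenced, yet GCC 12's -Warray-bounds can still report an
 * out-of-bounds access once the caller is inlined through several
 * layers, as in the splat above.
 */
static inline unsigned long read_cr3_sketch(void)
{
        unsigned long val;

        asm volatile("mov %%cr3,%0" : "=r" (val));
        return val;
}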

Consequently, the series here only addresses a few low-hanging fruits that
appear legitimate and relatively easy to fix.

Christophe de Dinechin (3):
 sched/headers: Fix compilation error with GCC 12
 nodemask.h: Fix compilation error with GCC12
 virtio-pci: Use cpumask_available to fix compilation error

drivers/virtio/virtio_pci_common.c |  2 +-
include/linux/nodemask.h   | 13 ++---
kernel/sched/sched.h   | 11 +--
3 files changed, 16 insertions(+), 10 deletions(-)

--
2.35.1





[PATCH V2 5/8] vhost_vsock: simplify vhost_vsock_flush()

2022-05-15 Thread Mike Christie
From: Andrey Ryabinin 

vhost_vsock_flush() calls vhost_work_dev_flush(vsock->vqs[i].poll.dev)
before vhost_work_dev_flush(&vsock->dev). This seems pointless,
as vsock->vqs[i].poll.dev is the same as &vsock->dev, and several flushes
in a row don't do anything useful; one is enough.
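
For context, a standalone sketch of why the two expressions name the same
device (hypothetical minimal types, not the real vhost structs; in the driver
the wiring is done by vhost_dev_init()/vhost_poll_init()):

#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the vhost structures. */
struct dev  { int id; };
struct poll { struct dev *dev; };
struct vq   { struct poll poll; };

struct vsock {
        struct dev dev;
        struct vq vqs[2];
};

/* Each vq's poll stores a pointer back to the one owning device,
 * mirroring what vhost_dev_init()/vhost_poll_init() do. */
static void vsock_init(struct vsock *vsock)
{
        for (size_t i = 0; i < 2; i++)
                vsock->vqs[i].poll.dev = &vsock->dev;
}

int main(void)
{
        struct vsock v;

        vsock_init(&v);
        /* Flushing vqs[i].poll.dev and flushing &v.dev hit the same device. */
        assert(v.vqs[0].poll.dev == &v.dev);
        assert(v.vqs[1].poll.dev == &v.dev);
        return 0;
}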

Signed-off-by: Andrey Ryabinin 
Reviewed-by: Stefano Garzarella 
Signed-off-by: Mike Christie 
---
 drivers/vhost/vsock.c | 5 -
 1 file changed, 5 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index a4c8ae92a0fb..96be63697117 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -705,11 +705,6 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
 
 static void vhost_vsock_flush(struct vhost_vsock *vsock)
 {
-   int i;
-
-   for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++)
-   if (vsock->vqs[i].handle_kick)
-   vhost_work_dev_flush(vsock->vqs[i].poll.dev);
vhost_work_dev_flush(&vsock->dev);
 }
 
-- 
2.25.1



[PATCH V2 7/8] vhost-test: drop flush after vhost_dev_cleanup

2022-05-15 Thread Mike Christie
The flush after vhost_dev_cleanup is not needed because:

1. It doesn't do anything. vhost_dev_cleanup will stop the worker thread,
so the flush call will just return since the device no longer has a worker.

2. It's not needed. The comment about jobs re-queueing themselves does
not look correct because handle_vq does not requeue work.
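
For reference, a simplified standalone model of point 1 (hypothetical names;
in the driver the equivalent check is the dev->worker test in
vhost_work_dev_flush(), which only queues work and waits when a worker
exists):

#include <stdbool.h>
#include <stdio.h>

/* Minimal stand-in for a vhost device: cleanup stops/clears the worker. */
struct dev { bool has_worker; };

static void dev_cleanup(struct dev *dev)
{
        dev->has_worker = false;        /* worker thread is gone */
}

static void dev_flush(struct dev *dev)
{
        if (!dev->has_worker)
                return;                 /* nothing to queue on, nothing to wait for */
        /* The real code queues a flush work and waits for its completion. */
}

int main(void)
{
        struct dev d = { .has_worker = true };

        dev_cleanup(&d);
        dev_flush(&d);                  /* returns immediately after cleanup */
        printf("flush after cleanup is a no-op\n");
        return 0;
}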

Signed-off-by: Mike Christie 
---
 drivers/vhost/test.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index 837148d0a6a8..a63a626a554e 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -158,9 +158,6 @@ static int vhost_test_release(struct inode *inode, struct file *f)
vhost_test_flush(n);
vhost_dev_stop(&n->dev);
vhost_dev_cleanup(&n->dev);
-   /* We do an extra flush before freeing memory,
-* since jobs can re-queue themselves. */
-   vhost_test_flush(n);
kfree(n->dev.vqs);
kfree(n);
return 0;
-- 
2.25.1



[PATCH V2 8/8] vhost: rename vhost_work_dev_flush

2022-05-15 Thread Mike Christie
This patch renames vhost_work_dev_flush to just vhost_dev_flush to
reflect that it flushes everything on the device and that drivers
don't know/care that polls are based on vhost_works. Drivers just
flush the entire device: polls, works for vhost-scsi management TMFs,
IO net virtqueues, etc. are all flushed.

Signed-off-by: Mike Christie 
---
 drivers/vhost/net.c   |  4 ++--
 drivers/vhost/scsi.c  |  2 +-
 drivers/vhost/test.c  |  2 +-
 drivers/vhost/vhost.c | 10 +-
 drivers/vhost/vhost.h |  2 +-
 drivers/vhost/vsock.c |  2 +-
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 047b7b05109a..0e4ff6a08f5f 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1376,7 +1376,7 @@ static void vhost_net_stop(struct vhost_net *n, struct socket **tx_sock,
 
 static void vhost_net_flush(struct vhost_net *n)
 {
-   vhost_work_dev_flush(&n->dev);
+   vhost_dev_flush(&n->dev);
if (n->vqs[VHOST_NET_VQ_TX].ubufs) {
mutex_lock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
n->tx_flush = true;
@@ -1566,7 +1566,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
}
 
if (oldsock) {
-   vhost_work_dev_flush(&n->dev);
+   vhost_dev_flush(&n->dev);
sockfd_put(oldsock);
}
 
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 94535c813ef7..ffd9e6c2ffc1 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1436,7 +1436,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
kref_put(&old_inflight[i]->kref, vhost_scsi_done_inflight);
 
/* Flush both the vhost poll and vhost work */
-   vhost_work_dev_flush(&vs->dev);
+   vhost_dev_flush(&vs->dev);
 
/* Wait for all reqs issued before the flush to be finished */
for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index a63a626a554e..87498c1109fe 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -146,7 +146,7 @@ static void vhost_test_stop(struct vhost_test *n, void **privatep)
 
 static void vhost_test_flush(struct vhost_test *n)
 {
-   vhost_work_dev_flush(&n->dev);
+   vhost_dev_flush(&n->dev);
 }
 
 static int vhost_test_release(struct inode *inode, struct file *f)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 9f8de04bb673..716a80c61fa9 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -231,7 +231,7 @@ void vhost_poll_stop(struct vhost_poll *poll)
 }
 EXPORT_SYMBOL_GPL(vhost_poll_stop);
 
-void vhost_work_dev_flush(struct vhost_dev *dev)
+void vhost_dev_flush(struct vhost_dev *dev)
 {
struct vhost_flush_struct flush;
 
@@ -243,7 +243,7 @@ void vhost_work_dev_flush(struct vhost_dev *dev)
wait_for_completion(&flush.wait_event);
}
 }
-EXPORT_SYMBOL_GPL(vhost_work_dev_flush);
+EXPORT_SYMBOL_GPL(vhost_dev_flush);
 
 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
 {
@@ -530,7 +530,7 @@ static int vhost_attach_cgroups(struct vhost_dev *dev)
attach.owner = current;
vhost_work_init(&attach.work, vhost_attach_cgroups_work);
vhost_work_queue(dev, &attach.work);
-   vhost_work_dev_flush(dev);
+   vhost_dev_flush(dev);
return attach.ret;
 }
 
@@ -657,7 +657,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
vhost_poll_stop(&dev->vqs[i]->poll);
}
 
-   vhost_work_dev_flush(dev);
+   vhost_dev_flush(dev);
 }
 EXPORT_SYMBOL_GPL(vhost_dev_stop);
 
@@ -1711,7 +1711,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
mutex_unlock(&vq->mutex);
 
if (pollstop && vq->handle_kick)
-   vhost_work_dev_flush(vq->poll.dev);
+   vhost_dev_flush(vq->poll.dev);
return r;
 }
 EXPORT_SYMBOL_GPL(vhost_vring_ioctl);
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index aeb8e1ad1496..d02adf1b2bf8 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -45,7 +45,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
 int vhost_poll_start(struct vhost_poll *poll, struct file *file);
 void vhost_poll_stop(struct vhost_poll *poll);
 void vhost_poll_queue(struct vhost_poll *poll);
-void vhost_work_dev_flush(struct vhost_dev *dev);
+void vhost_dev_flush(struct vhost_dev *dev);
 
 struct vhost_log {
u64 addr;
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 96be63697117..368330417bde 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -705,7 +705,7 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
 
 static void vhost_vsock_flush(struct vhost_vsock *vsock)
 {
-   vhost_work_dev_flush(&vsock->dev);
+   vhost_dev_flush(&vsock->dev);
 }
 
 static void vhost_vsock_reset_orphans(struct sock *sk)
-- 
2.25.1


[PATCH V2 6/8] vhost-scsi: drop flush after vhost_dev_cleanup

2022-05-15 Thread Mike Christie
The flush after vhost_dev_cleanup is not needed because:

1. It doesn't do anything. vhost_dev_cleanup will stop the worker thread,
so the flush call will just return since the device no longer has a worker.

2. It's not needed for the re-queue case. vhost_scsi_evt_handle_kick grabs
the mutex and, if the backend is NULL, returns without queueing a work.
vhost_scsi_clear_endpoint sets the backend to NULL under the vq->mutex,
then drops the mutex and does a flush. So we know that once
vhost_scsi_clear_endpoint has dropped the mutex after clearing the backend,
no evt-related work will be able to requeue itself. The flush then makes
sure any queued evts are run and return.
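
A rough standalone model of this ordering argument (hypothetical names,
pthreads instead of the vhost primitives; a sketch of the reasoning, not the
driver code):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t vq_mutex = PTHREAD_MUTEX_INITIALIZER;
static void *backend = (void *)1;       /* non-NULL: backend configured */
static bool evt_work_queued;

/* Kick handler: only queues evt work while a backend is set, and the
 * backend check happens under vq_mutex. */
static void evt_handle_kick(void)
{
        pthread_mutex_lock(&vq_mutex);
        if (backend)
                evt_work_queued = true;
        pthread_mutex_unlock(&vq_mutex);
}

/* Clear endpoint: backend goes to NULL under the same mutex, then a
 * single flush drains whatever was queued before the clear. */
static void clear_endpoint_and_flush(void)
{
        pthread_mutex_lock(&vq_mutex);
        backend = NULL;
        pthread_mutex_unlock(&vq_mutex);
        evt_work_queued = false;        /* "flush": run previously queued evt work */
}

int main(void)
{
        evt_handle_kick();              /* may queue work while backend is set */
        clear_endpoint_and_flush();
        evt_handle_kick();              /* sees NULL backend, queues nothing */
        printf("evt work queued after clear+flush: %s\n",
               evt_work_queued ? "yes" : "no");   /* prints "no" */
        return 0;
}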

Signed-off-by: Mike Christie 
Acked-by: Jason Wang 
---
 drivers/vhost/scsi.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 532e204f2b1b..94535c813ef7 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1827,8 +1827,6 @@ static int vhost_scsi_release(struct inode *inode, struct file *f)
vhost_scsi_clear_endpoint(vs, &t);
vhost_dev_stop(&vs->dev);
vhost_dev_cleanup(&vs->dev);
-   /* Jobs can re-queue themselves in evt kick handler. Do extra flush. */
-   vhost_scsi_flush(vs);
kfree(vs->dev.vqs);
kvfree(vs);
return 0;
-- 
2.25.1



[PATCH V2 3/8] vhost_net: get rid of vhost_net_flush_vq() and extra flush calls

2022-05-15 Thread Mike Christie
From: Andrey Ryabinin 

vhost_net_flush_vq() calls vhost_work_dev_flush() twice, passing
the vhost_dev pointer obtained via 'n->poll[index].dev' and
'n->vqs[index].vq.poll.dev'. This is actually the same pointer,
initialized in vhost_net_open()/vhost_dev_init()/vhost_poll_init().

Remove vhost_net_flush_vq() and call vhost_work_dev_flush() directly.
Do the flush only once instead of several flush calls in a row,
which seems rather useless.

Signed-off-by: Andrey Ryabinin 
[drop vhost_dev forward declaration in vhost.h]
Signed-off-by: Mike Christie 
---
 drivers/vhost/net.c | 11 ++-
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 4e55ad8c942a..047b7b05109a 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1374,16 +1374,9 @@ static void vhost_net_stop(struct vhost_net *n, struct socket **tx_sock,
*rx_sock = vhost_net_stop_vq(n, &n->vqs[VHOST_NET_VQ_RX].vq);
 }
 
-static void vhost_net_flush_vq(struct vhost_net *n, int index)
-{
-   vhost_work_dev_flush(n->poll[index].dev);
-   vhost_work_dev_flush(n->vqs[index].vq.poll.dev);
-}
-
 static void vhost_net_flush(struct vhost_net *n)
 {
-   vhost_net_flush_vq(n, VHOST_NET_VQ_TX);
-   vhost_net_flush_vq(n, VHOST_NET_VQ_RX);
+   vhost_work_dev_flush(&n->dev);
if (n->vqs[VHOST_NET_VQ_TX].ubufs) {
mutex_lock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
n->tx_flush = true;
@@ -1573,7 +1566,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
}
 
if (oldsock) {
-   vhost_net_flush_vq(n, index);
+   vhost_work_dev_flush(&n->dev);
sockfd_put(oldsock);
}
 
-- 
2.25.1



[PATCH V2 4/8] vhost_test: remove vhost_test_flush_vq()

2022-05-15 Thread Mike Christie
From: Andrey Ryabinin 

vhost_test_flush_vq() is just a simple wrapper around vhost_work_dev_flush()
which seems to have no value. It's easier to call vhost_work_dev_flush()
directly. Besides, there is no point in obtaining the vhost_dev pointer
via 'n->vqs[index].poll.dev' when we can just use &n->dev.
It's the same pointer, see vhost_test_open()/vhost_dev_init().

Signed-off-by: Andrey Ryabinin 
Signed-off-by: Mike Christie 
---
 drivers/vhost/test.c | 11 +++
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index f0ac9e35f5d6..837148d0a6a8 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -144,14 +144,9 @@ static void vhost_test_stop(struct vhost_test *n, void **privatep)
*privatep = vhost_test_stop_vq(n, n->vqs + VHOST_TEST_VQ);
 }
 
-static void vhost_test_flush_vq(struct vhost_test *n, int index)
-{
-   vhost_work_dev_flush(n->vqs[index].poll.dev);
-}
-
 static void vhost_test_flush(struct vhost_test *n)
 {
-   vhost_test_flush_vq(n, VHOST_TEST_VQ);
+   vhost_work_dev_flush(&n->dev);
 }
 
 static int vhost_test_release(struct inode *inode, struct file *f)
@@ -210,7 +205,7 @@ static long vhost_test_run(struct vhost_test *n, int test)
goto err;
 
if (oldpriv) {
-   vhost_test_flush_vq(n, index);
+   vhost_test_flush(n);
}
}
 
@@ -303,7 +298,7 @@ static long vhost_test_set_backend(struct vhost_test *n, unsigned index, int fd)
mutex_unlock(&vq->mutex);
 
if (enable) {
-   vhost_test_flush_vq(n, index);
+   vhost_test_flush(n);
}
 
mutex_unlock(&n->dev.mutex);
-- 
2.25.1



[PATCH V2 1/8] vhost: get rid of vhost_poll_flush() wrapper

2022-05-15 Thread Mike Christie
From: Andrey Ryabinin 

vhost_poll_flush() is a simple wrapper around vhost_work_dev_flush().
It gives the wrong impression that we are doing some work over vhost_poll,
while in fact it flushes vhost_poll->dev.
It only complicates understanding of the code and leads to mistakes
like flushing the same vhost_dev several times in a row.

Just remove vhost_poll_flush() and call vhost_work_dev_flush() directly.

Signed-off-by: Andrey Ryabinin 
[merge vhost_poll_flush removal from Stefano Garzarella]
Signed-off-by: Mike Christie 
---
 drivers/vhost/net.c   |  4 ++--
 drivers/vhost/test.c  |  2 +-
 drivers/vhost/vhost.c | 12 ++--
 drivers/vhost/vhost.h |  1 -
 drivers/vhost/vsock.c |  2 +-
 5 files changed, 6 insertions(+), 15 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 792ab5f23647..4e55ad8c942a 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1376,8 +1376,8 @@ static void vhost_net_stop(struct vhost_net *n, struct socket **tx_sock,
 
 static void vhost_net_flush_vq(struct vhost_net *n, int index)
 {
-   vhost_poll_flush(n->poll + index);
-   vhost_poll_flush(&n->vqs[index].vq.poll);
+   vhost_work_dev_flush(n->poll[index].dev);
+   vhost_work_dev_flush(n->vqs[index].vq.poll.dev);
 }
 
 static void vhost_net_flush(struct vhost_net *n)
diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index 05740cba1cd8..f0ac9e35f5d6 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -146,7 +146,7 @@ static void vhost_test_stop(struct vhost_test *n, void **privatep)
 
 static void vhost_test_flush_vq(struct vhost_test *n, int index)
 {
-   vhost_poll_flush(&n->vqs[index].poll);
+   vhost_work_dev_flush(n->vqs[index].poll.dev);
 }
 
 static void vhost_test_flush(struct vhost_test *n)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index d02173fb290c..1d84a2818c6f 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -245,14 +245,6 @@ void vhost_work_dev_flush(struct vhost_dev *dev)
 }
 EXPORT_SYMBOL_GPL(vhost_work_dev_flush);
 
-/* Flush any work that has been scheduled. When calling this, don't hold any
- * locks that are also used by the callback. */
-void vhost_poll_flush(struct vhost_poll *poll)
-{
-   vhost_work_dev_flush(poll->dev);
-}
-EXPORT_SYMBOL_GPL(vhost_poll_flush);
-
 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
 {
if (!dev->worker)
@@ -663,7 +655,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
for (i = 0; i < dev->nvqs; ++i) {
if (dev->vqs[i]->kick && dev->vqs[i]->handle_kick) {
vhost_poll_stop(&dev->vqs[i]->poll);
-   vhost_poll_flush(&dev->vqs[i]->poll);
+   vhost_work_dev_flush(dev->vqs[i]->poll.dev);
}
}
 }
@@ -1719,7 +1711,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
mutex_unlock(&vq->mutex);
 
if (pollstop && vq->handle_kick)
-   vhost_poll_flush(&vq->poll);
+   vhost_work_dev_flush(vq->poll.dev);
return r;
 }
 EXPORT_SYMBOL_GPL(vhost_vring_ioctl);
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 638bb640d6b4..aeb8e1ad1496 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -44,7 +44,6 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
 __poll_t mask, struct vhost_dev *dev);
 int vhost_poll_start(struct vhost_poll *poll, struct file *file);
 void vhost_poll_stop(struct vhost_poll *poll);
-void vhost_poll_flush(struct vhost_poll *poll);
 void vhost_poll_queue(struct vhost_poll *poll);
 void vhost_work_dev_flush(struct vhost_dev *dev);
 
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index e6c9d41db1de..a4c8ae92a0fb 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -709,7 +709,7 @@ static void vhost_vsock_flush(struct vhost_vsock *vsock)
 
for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++)
if (vsock->vqs[i].handle_kick)
-   vhost_poll_flush(&vsock->vqs[i].poll);
+   vhost_work_dev_flush(vsock->vqs[i].poll.dev);
vhost_work_dev_flush(&vsock->dev);
 }
 
-- 
2.25.1



[PATCH V2 0/8] vhost flush cleanups

2022-05-15 Thread Mike Christie
The following patches are Andrey Ryabinin's flush cleanups and some
from me. They reduce the number of flush calls and remove some bogus
ones where we don't even have a worker running anymore or they were
based on outdated or incorrect assumptions.

Jason, for the patches you gave me an explicit acked/reviewed tag, I have
added it. For the replies where I answered your review questions and
you only replied with an affirmative, I did not add a tag,
because I was not 100% sure what you wanted to do.

These patches will be used in the vhost threading patches, which I think
will make progress again now that I have Christian's email figured out
:) (he had moved to Microsoft but I've been using the Ubuntu address).
I think the patches are OK cleanups in general, so I thought they could
get merged separately if people agree.

V2:
- Added a patch to rename vhost_work_dev_flush to just vhost_dev_flush
to handle a review comment from Jason about the naming not being so
great.




[PATCH V2 2/8] vhost: flush dev once during vhost_dev_stop

2022-05-15 Thread Mike Christie
When vhost_work_dev_flush returns, all work queued at that time will have
completed. There is then no need to flush after every vhost_poll_stop
call, and we can move the flush call to after the loop that stops the
pollers.

Signed-off-by: Mike Christie 
---
 drivers/vhost/vhost.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 1d84a2818c6f..9f8de04bb673 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -653,11 +653,11 @@ void vhost_dev_stop(struct vhost_dev *dev)
int i;
 
for (i = 0; i < dev->nvqs; ++i) {
-   if (dev->vqs[i]->kick && dev->vqs[i]->handle_kick) {
+   if (dev->vqs[i]->kick && dev->vqs[i]->handle_kick)
vhost_poll_stop(&dev->vqs[i]->poll);
-   vhost_work_dev_flush(dev->vqs[i]->poll.dev);
-   }
}
+
+   vhost_work_dev_flush(dev);
 }
 EXPORT_SYMBOL_GPL(vhost_dev_stop);
 
-- 
2.25.1
