Re: [PATCH 3/4] iotests/mirror-top-perms: switch to AQMP

2022-02-04 Thread Vladimir Sementsov-Ogievskiy

03.02.2022 05:24, John Snow wrote:

We don't have to maintain compatibility with both QMP libraries anymore,
so we can just remove the old exception. While we're here, take
advantage of the extra fields present in the VMLaunchFailure exception
that machine.py now raises.

(Note: I'm leaving the logging suppression here unchanged. I had
suggested previously we use filters to scrub the PID out of the logging
information so it could just be diffed as part of the iotest output, but
that meant *always* scrubbing PID from logger output, which defeated the
point of even offering that information in the output to begin with.

Ultimately, I decided it's fine to just suppress the logger temporarily.)
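(As a minimal sketch of the "suppress temporarily" approach described above:
the logger name, the VMLaunchFailure attribute used, and the expected error
text are assumptions based on this series, not the exact iotest code.)

    import logging
    from qemu.machine import machine

    def launch_expect_failure(vm):
        # Temporarily silence the machine logger so the PID-bearing
        # launch-failure message does not leak into diffable test output.
        logger = logging.getLogger('qemu.machine')
        saved_level = logger.level
        logger.setLevel(logging.CRITICAL)
        try:
            vm.launch()
            raise AssertionError('VM launch unexpectedly succeeded')
        except machine.VMLaunchFailure as exc:
            assert 'Is another process using the image' in exc.output
        finally:
            logger.setLevel(saved_level)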

Signed-off-by: John Snow


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH 3/7] block/nbd: Assert there are no timers when closed

2022-02-04 Thread Vladimir Sementsov-Ogievskiy

03.02.2022 19:30, Hanna Reitz wrote:

Our two timers must not remain armed beyond nbd_clear_bdrvstate(), or
they will access freed data when they fire.

This patch is separate from the patches that actually fix the issue
(HEAD^^ and HEAD^) so that you can run the associated regression iotest
(281) on a configuration that reproducibly exposes the bug.

Signed-off-by: Hanna Reitz


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH 5/7] iotests/281: Test lingering timers

2022-02-04 Thread Vladimir Sementsov-Ogievskiy

03.02.2022 19:30, Hanna Reitz wrote:

Prior to "block/nbd: Delete reconnect delay timer when done" and
"block/nbd: Delete open timer when done", both of those timers would
remain scheduled even after successfully (re-)connecting to the server,
and they would not even be deleted when the BDS is deleted.

This test constructs exactly this situation:
(1) Configure an @open-timeout, so the open timer is armed, and
(2) Configure a @reconnect-delay and trigger a reconnect situation
 (which succeeds immediately), so the reconnect delay timer is armed.
Then we immediately delete the BDS, and sleep for longer than the
@open-timeout and @reconnect-delay.  Prior to said patches, this caused
one (or both) of the timer CBs to access already-freed data.

Accessing freed data may or may not crash, so this test can produce
false successes, but I do not know how to show the problem in a better
or more reliable way.  If you run this test on "block/nbd: Assert there
are no timers when closed" and without the fix patches mentioned above,
you should reliably see an assertion failure.
(But all other tests that use the reconnect delay timer (264 and 277)
will fail in that configuration, too; as will nbd-reconnect-on-open,
which uses the open timer.)

Remove this test from the quick group because of the two second sleep
this patch introduces.

(I decided to put this test case into 281, because the main bug this
series addresses is in the interaction of the NBD block driver and I/O
threads, which is precisely the scope of 281.  The test case for that
other bug will also be put into the test class added here.

Also, excuse the test class's name, I couldn't come up with anything
better.  The "yield" part will make sense two patches from now.)

Signed-off-by: Hanna Reitz 


Reviewed-by: Vladimir Sementsov-Ogievskiy 


--
Best regards,
Vladimir



Re: [PATCH 2/7] block/nbd: Delete open timer when done

2022-02-04 Thread Vladimir Sementsov-Ogievskiy

03.02.2022 19:30, Hanna Reitz wrote:

We start the open timer to cancel the connection attempt after a while.
Once nbd_do_establish_connection() has returned, the attempt is over,
and we no longer need the timer.

Delete it before returning from nbd_open(), so that it does not persist
for longer.  It has no use after nbd_open(), and just like the reconnect
delay timer, it might well be dangerous if it were to fire afterwards.

Signed-off-by: Hanna Reitz


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH 6/7] block/nbd: Move s->ioc on AioContext change

2022-02-04 Thread Vladimir Sementsov-Ogievskiy

03.02.2022 19:30, Hanna Reitz wrote:

s->ioc must always be attached to the NBD node's AioContext.  If that
context changes, s->ioc must be attached to the new context.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2033626
Signed-off-by: Hanna Reitz


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH 10/12] block.c: add subtree_drains where needed

2022-02-04 Thread Vladimir Sementsov-Ogievskiy

02.02.2022 18:37, Emanuele Giuseppe Esposito wrote:



On 01/02/2022 15:47, Vladimir Sementsov-Ogievskiy wrote:

18.01.2022 19:27, Emanuele Giuseppe Esposito wrote:

Protect bdrv_replace_child_noperm, as it modifies the
graph by adding/removing elements to .children and .parents
list of a bs. Use the newly introduced
bdrv_subtree_drained_{begin/end}_unlocked drains to achieve
that and be free from the aiocontext lock.

One important criterion to keep in mind is that if the caller of
bdrv_replace_child_noperm creates a transaction, we need to make sure
that the whole transaction is under the same drain block. This is
imperative, as having multiple drains also in the .abort() class of
functions causes discrepancies in the drained counters (as nodes are
put back into their original positions), making it really hard to
return everything to zero and leaving the code very buggy.
See https://patchew.org/QEMU/20211213104014.69858-1-eespo...@redhat.com/
for more explanations.

Unfortunately we still need to have bdrv_subtree_drained_begin/end
in bdrv_detach_child() releasing and then holding the AioContext
lock, since it later invokes bdrv_try_set_aio_context() that is
not safe yet. Once all is cleaned up, we can also remove the
acquire/release locks in job_unref, artificially added because of this.

Signed-off-by: Emanuele Giuseppe Esposito 
---
   block.c | 50 --
   1 file changed, 44 insertions(+), 6 deletions(-)

diff --git a/block.c b/block.c
index fcc44a49a0..6196c95aae 100644
--- a/block.c
+++ b/block.c
@@ -3114,8 +3114,22 @@ static void bdrv_detach_child(BdrvChild **childp)
   BlockDriverState *old_bs = (*childp)->bs;
     assert(qemu_in_main_thread());
+    if (old_bs) {
+    /*
+ * TODO: this is called by job_unref with lock held, because
+ * afterwards it calls bdrv_try_set_aio_context.
+ * Once all of this is fixed, take care of removing
+ * the aiocontext lock and make this function _unlocked.
+ */
+    bdrv_subtree_drained_begin(old_bs);
+    }
+
   bdrv_replace_child_noperm(childp, NULL, true);
   +    if (old_bs) {
+    bdrv_subtree_drained_end(old_bs);
+    }
+
   if (old_bs) {
   /*
    * Update permissions for old node. We're just taking a
parent away, so
@@ -3154,6 +3168,7 @@ BdrvChild
*bdrv_root_attach_child(BlockDriverState *child_bs,
   Transaction *tran = tran_new();
     assert(qemu_in_main_thread());
+    bdrv_subtree_drained_begin_unlocked(child_bs);
     ret = bdrv_attach_child_common(child_bs, child_name, child_class,
  child_role, perm, shared_perm,
opaque,
@@ -3168,6 +3183,7 @@ out:
   tran_finalize(tran, ret);
   /* child is unset on failure by bdrv_attach_child_common_abort() */
   assert((ret < 0) == !child);
+    bdrv_subtree_drained_end_unlocked(child_bs);
     bdrv_unref(child_bs);
   return child;
@@ -3197,6 +3213,9 @@ BdrvChild *bdrv_attach_child(BlockDriverState
*parent_bs,
     assert(qemu_in_main_thread());
   +    bdrv_subtree_drained_begin_unlocked(parent_bs);
+    bdrv_subtree_drained_begin_unlocked(child_bs);
+
   ret = bdrv_attach_child_noperm(parent_bs, child_bs, child_name,
child_class,
  child_role, &child, tran, errp);
   if (ret < 0) {
@@ -3211,6 +3230,9 @@ BdrvChild *bdrv_attach_child(BlockDriverState
*parent_bs,
   out:
   tran_finalize(tran, ret);
   /* child is unset on failure by bdrv_attach_child_common_abort() */
+    bdrv_subtree_drained_end_unlocked(child_bs);
+    bdrv_subtree_drained_end_unlocked(parent_bs);
+
   assert((ret < 0) == !child);
     bdrv_unref(child_bs);
@@ -3456,6 +3478,11 @@ int bdrv_set_backing_hd(BlockDriverState *bs,
BlockDriverState *backing_hd,
     assert(qemu_in_main_thread());
   +    bdrv_subtree_drained_begin_unlocked(bs);
+    if (backing_hd) {
+    bdrv_subtree_drained_begin_unlocked(backing_hd);
+    }
+
   ret = bdrv_set_backing_noperm(bs, backing_hd, tran, errp);
   if (ret < 0) {
   goto out;
@@ -3464,6 +3491,10 @@ int bdrv_set_backing_hd(BlockDriverState *bs,
BlockDriverState *backing_hd,
   ret = bdrv_refresh_perms(bs, errp);
   out:
   tran_finalize(tran, ret);
+    if (backing_hd) {
+    bdrv_subtree_drained_end_unlocked(backing_hd);
+    }
+    bdrv_subtree_drained_end_unlocked(bs);
     return ret;
   }
@@ -5266,7 +5297,8 @@ static int
bdrv_replace_node_common(BlockDriverState *from,
     assert(qemu_get_current_aio_context() == qemu_get_aio_context());
   assert(bdrv_get_aio_context(from) == bdrv_get_aio_context(to));
-    bdrv_drained_begin(from);
+    bdrv_subtree_drained_begin_unlocked(from);
+    bdrv_subtree_drained_begin_unlocked(to);
     /*
    * Do the replacement without permission update.
@@ -5298,7 +5330,8 @@ static int
bdrv_replace_node_common(BlockDriverState *from,
   out:
   tran_finalize(

Re: [PATCH 1/7] block/nbd: Delete reconnect delay timer when done

2022-02-04 Thread Vladimir Sementsov-Ogievskiy

03.02.2022 19:30, Hanna Reitz wrote:

We start the reconnect delay timer to cancel the reconnection attempt
after a while.  Once nbd_co_do_establish_connection() has returned, this
attempt is over, and we no longer need the timer.

Delete it before returning from nbd_reconnect_attempt(), so that it does
not persist beyond the I/O request that was paused for reconnecting; we
do not want it to fire in a drained section, because all sort of things
can happen in such a section (e.g. the AioContext might be changed, and
we do not want the timer to fire in the wrong context; or the BDS might
even be deleted, and so the timer CB would access already-freed data).

Signed-off-by: Hanna Reitz


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH 4/7] iotests.py: Add QemuStorageDaemon class

2022-02-04 Thread Vladimir Sementsov-Ogievskiy

03.02.2022 19:30, Hanna Reitz wrote:

This is a rather simple class that allows creating a QSD instance
running in the background and stopping it when no longer needed.

The __del__ handler is a safety net for when something goes so wrong in
a test that e.g. the tearDown() method is not called (e.g. setUp()
launches the QSD, but then launching a VM fails).  We do not want the
QSD to continue running after the test has failed, so __del__() will
take care to kill it.
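(A hypothetical usage sketch, not part of the patch: the option strings
below mirror the NBD export that iotest 281 sets up in patch 5 of this
series and are illustrative only.)

    import os
    from iotests import QemuStorageDaemon

    sock = os.path.join('/tmp', 'qsd-nbd.sock')  # assumed socket path
    qsd = QemuStorageDaemon(
        '--blockdev', 'null-co,node-name=null,read-zeroes=true',
        '--nbd-server', f'addr.type=unix,addr.path={sock}',
        '--export', 'nbd,id=exp,node-name=null,name=exp,writable=true')
    try:
        pass  # run the actual test against the export here
    finally:
        qsd.stop()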

Signed-off-by: Hanna Reitz 
---
  tests/qemu-iotests/iotests.py | 42 +++
  1 file changed, 42 insertions(+)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 8cdb381f2a..c75e402b87 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -73,6 +73,8 @@
  qemu_prog = os.environ.get('QEMU_PROG', 'qemu')
  qemu_opts = os.environ.get('QEMU_OPTIONS', '').strip().split(' ')
  
+qsd_prog = os.environ.get('QSD_PROG', 'qemu-storage-daemon')

+
  gdb_qemu_env = os.environ.get('GDB_OPTIONS')
  qemu_gdb = []
  if gdb_qemu_env:
@@ -345,6 +347,46 @@ def cmd(self, cmd):
  return self._read_output()
  
  
+class QemuStorageDaemon:

+def __init__(self, *args: str, instance_id: Optional[str] = None):
+if not instance_id:
+instance_id = 'a'


this is equivalent to simply
 
  instance_id: str = 'a'



+


I'd add

   assert '--pidfile' not in args

to prove following logic


+self.pidfile = os.path.join(test_dir, f'qsd-{instance_id}-pid')
+all_args = [qsd_prog] + list(args) + ['--pidfile', self.pidfile]
+
+# Cannot use with here, we want the subprocess to stay around
+# pylint: disable=consider-using-with
+self._p = subprocess.Popen(all_args)
+while not os.path.exists(self.pidfile):
+if self._p.poll() is not None:
+cmd = ' '.join(all_args)
+raise RuntimeError(
+'qemu-storage-daemon terminated with exit code ' +
+f'{self._p.returncode}: {cmd}')
+
+time.sleep(0.01)
+
+with open(self.pidfile, encoding='utf-8') as f:
+self._pid = int(f.read().strip())
+
+assert self._pid == self._p.pid
+
+def stop(self, kill_signal=15):
+self._p.send_signal(kill_signal)
+self._p.wait()
+self._p = None
+
+try:
+os.remove(self.pidfile)
+except OSError:
+pass
+
+def __del__(self):
+if self._p is not None:
+self.stop(kill_signal=9)
+
+
  def qemu_nbd(*args):
  '''Run qemu-nbd in daemon mode and return the parent's exit code'''
  return subprocess.call(qemu_nbd_args + ['--fork'] + list(args))



Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH 7/7] iotests/281: Let NBD connection yield in iothread

2022-02-04 Thread Vladimir Sementsov-Ogievskiy

03.02.2022 19:30, Hanna Reitz wrote:

Put an NBD block device into an I/O thread, and then read data from it,
hoping that the NBD connection will yield during that read.  When it
does, the coroutine must be reentered in the block device's I/O thread,
which will only happen if the NBD block driver attaches the connection's
QIOChannel to the new AioContext.  It did not do that after 4ddb5d2fde
("block/nbd: drop connection_co") and prior to "block/nbd: Move s->ioc
on AioContext change", which would cause an assertion failure.

To improve our chances of yielding, the NBD server is throttled to
reading 64 kB/s, and the NBD client reads 128 kB, so it should yield at
some point.

Signed-off-by: Hanna Reitz



Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH 4/7] iotests.py: Add QemuStorageDaemon class

2022-02-04 Thread Hanna Reitz

On 04.02.22 10:04, Vladimir Sementsov-Ogievskiy wrote:

03.02.2022 19:30, Hanna Reitz wrote:

This is a rather simple class that allows creating a QSD instance
running in the background and stopping it when no longer needed.

The __del__ handler is a safety net for when something goes so wrong in
a test that e.g. the tearDown() method is not called (e.g. setUp()
launches the QSD, but then launching a VM fails).  We do not want the
QSD to continue running after the test has failed, so __del__() will
take care to kill it.

Signed-off-by: Hanna Reitz 
---
  tests/qemu-iotests/iotests.py | 42 +++
  1 file changed, 42 insertions(+)

diff --git a/tests/qemu-iotests/iotests.py 
b/tests/qemu-iotests/iotests.py

index 8cdb381f2a..c75e402b87 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -73,6 +73,8 @@
  qemu_prog = os.environ.get('QEMU_PROG', 'qemu')
  qemu_opts = os.environ.get('QEMU_OPTIONS', '').strip().split(' ')
  +qsd_prog = os.environ.get('QSD_PROG', 'qemu-storage-daemon')
+
  gdb_qemu_env = os.environ.get('GDB_OPTIONS')
  qemu_gdb = []
  if gdb_qemu_env:
@@ -345,6 +347,46 @@ def cmd(self, cmd):
  return self._read_output()
    +class QemuStorageDaemon:
+    def __init__(self, *args: str, instance_id: Optional[str] = None):
+    if not instance_id:
+    instance_id = 'a'


this is equivalent to simply

  instance_id: str = 'a'


Oh.  Right. :)


+


I'd add

   assert '--pidfile' not in args

to prove following logic


Sounds good!


+    self.pidfile = os.path.join(test_dir, f'qsd-{instance_id}-pid')
+    all_args = [qsd_prog] + list(args) + ['--pidfile', 
self.pidfile]

+
+    # Cannot use with here, we want the subprocess to stay around
+    # pylint: disable=consider-using-with
+    self._p = subprocess.Popen(all_args)
+    while not os.path.exists(self.pidfile):
+    if self._p.poll() is not None:
+    cmd = ' '.join(all_args)
+    raise RuntimeError(
+    'qemu-storage-daemon terminated with exit code ' +
+    f'{self._p.returncode}: {cmd}')
+
+    time.sleep(0.01)
+
+    with open(self.pidfile, encoding='utf-8') as f:
+    self._pid = int(f.read().strip())
+
+    assert self._pid == self._p.pid
+
+    def stop(self, kill_signal=15):
+    self._p.send_signal(kill_signal)
+    self._p.wait()
+    self._p = None
+
+    try:
+    os.remove(self.pidfile)
+    except OSError:
+    pass
+
+    def __del__(self):
+    if self._p is not None:
+    self.stop(kill_signal=9)
+
+
  def qemu_nbd(*args):
  '''Run qemu-nbd in daemon mode and return the parent's exit 
code'''

  return subprocess.call(qemu_nbd_args + ['--fork'] + list(args))



Reviewed-by: Vladimir Sementsov-Ogievskiy 


Thanks a lot for reviewing!




[PATCH v2 0/7] block/nbd: Move s->ioc on AioContext change

2022-02-04 Thread Hanna Reitz
Hi,

The original RFC is here:
https://lists.nongnu.org/archive/html/qemu-block/2022-01/msg00765.html

And v1 (whose cover letter has some exposition) is here:
https://lists.nongnu.org/archive/html/qemu-block/2022-02/msg00124.html


In v2, I’ve fixed the QemuStorageDaemon class added in patch 4 as
suggested by Vladimir:
- Have `__init__()`’s `instance_id` parameter have its actual default
  value instead of some weird `if instance_id is None` construct
- Assert that the QSD class users won’t use --pidfile

(And added Vladimir’s R-b to all patches.)

git backport-diff against v1:

Key:
[----] : patches are identical
[####] : number of functional differences between upstream/downstream patch
[down] : patch is downstream-only
The flags [FC] indicate (F)unctional and (C)ontextual differences, respectively

001/7:[----] [--] 'block/nbd: Delete reconnect delay timer when done'
002/7:[----] [--] 'block/nbd: Delete open timer when done'
003/7:[----] [--] 'block/nbd: Assert there are no timers when closed'
004/7:[0006] [FC] 'iotests.py: Add QemuStorageDaemon class'
005/7:[----] [--] 'iotests/281: Test lingering timers'
006/7:[----] [--] 'block/nbd: Move s->ioc on AioContext change'
007/7:[----] [--] 'iotests/281: Let NBD connection yield in iothread'


Hanna Reitz (7):
  block/nbd: Delete reconnect delay timer when done
  block/nbd: Delete open timer when done
  block/nbd: Assert there are no timers when closed
  iotests.py: Add QemuStorageDaemon class
  iotests/281: Test lingering timers
  block/nbd: Move s->ioc on AioContext change
  iotests/281: Let NBD connection yield in iothread

 block/nbd.c   |  64 +
 tests/qemu-iotests/281| 101 +-
 tests/qemu-iotests/281.out|   4 +-
 tests/qemu-iotests/iotests.py |  40 ++
 4 files changed, 205 insertions(+), 4 deletions(-)

-- 
2.34.1




[PATCH v2 1/7] block/nbd: Delete reconnect delay timer when done

2022-02-04 Thread Hanna Reitz
We start the reconnect delay timer to cancel the reconnection attempt
after a while.  Once nbd_co_do_establish_connection() has returned, this
attempt is over, and we no longer need the timer.

Delete it before returning from nbd_reconnect_attempt(), so that it does
not persist beyond the I/O request that was paused for reconnecting; we
do not want it to fire in a drained section, because all sort of things
can happen in such a section (e.g. the AioContext might be changed, and
we do not want the timer to fire in the wrong context; or the BDS might
even be deleted, and so the timer CB would access already-freed data).

Reviewed-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Hanna Reitz 
---
 block/nbd.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/block/nbd.c b/block/nbd.c
index 63dbfa807d..16cd7fef77 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -381,6 +381,13 @@ static coroutine_fn void 
nbd_reconnect_attempt(BDRVNBDState *s)
 }
 
 nbd_co_do_establish_connection(s->bs, NULL);
+
+/*
+ * The reconnect attempt is done (maybe successfully, maybe not), so
+ * we no longer need this timer.  Delete it so it will not outlive
+ * this I/O request (so draining removes all timers).
+ */
+reconnect_delay_timer_del(s);
 }
 
 static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t handle)
-- 
2.34.1




[PATCH v2 2/7] block/nbd: Delete open timer when done

2022-02-04 Thread Hanna Reitz
We start the open timer to cancel the connection attempt after a while.
Once nbd_do_establish_connection() has returned, the attempt is over,
and we no longer need the timer.

Delete it before returning from nbd_open(), so that it does not persist
for longer.  It has no use after nbd_open(), and just like the reconnect
delay timer, it might well be dangerous if it were to fire afterwards.

Reviewed-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Hanna Reitz 
---
 block/nbd.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/block/nbd.c b/block/nbd.c
index 16cd7fef77..5ff8a57314 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -1885,11 +1885,19 @@ static int nbd_open(BlockDriverState *bs, QDict 
*options, int flags,
 goto fail;
 }
 
+/*
+ * The connect attempt is done, so we no longer need this timer.
+ * Delete it, because we do not want it to be around when this node
+ * is drained or closed.
+ */
+open_timer_del(s);
+
 nbd_client_connection_enable_retry(s->conn);
 
 return 0;
 
 fail:
+open_timer_del(s);
 nbd_clear_bdrvstate(bs);
 return ret;
 }
-- 
2.34.1




[PATCH v2 3/7] block/nbd: Assert there are no timers when closed

2022-02-04 Thread Hanna Reitz
Our two timers must not remain armed beyond nbd_clear_bdrvstate(), or
they will access freed data when they fire.

This patch is separate from the patches that actually fix the issue
(HEAD^^ and HEAD^) so that you can run the associated regression iotest
(281) on a configuration that reproducibly exposes the bug.

Reviewed-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Hanna Reitz 
---
 block/nbd.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/block/nbd.c b/block/nbd.c
index 5ff8a57314..dc6c3f3bbc 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -110,6 +110,10 @@ static void nbd_clear_bdrvstate(BlockDriverState *bs)
 
 yank_unregister_instance(BLOCKDEV_YANK_INSTANCE(bs->node_name));
 
+/* Must not leave timers behind that would access freed data */
+assert(!s->reconnect_delay_timer);
+assert(!s->open_timer);
+
 object_unref(OBJECT(s->tlscreds));
 qapi_free_SocketAddress(s->saddr);
 s->saddr = NULL;
-- 
2.34.1




[PATCH v2 7/7] iotests/281: Let NBD connection yield in iothread

2022-02-04 Thread Hanna Reitz
Put an NBD block device into an I/O thread, and then read data from it,
hoping that the NBD connection will yield during that read.  When it
does, the coroutine must be reentered in the block device's I/O thread,
which will only happen if the NBD block driver attaches the connection's
QIOChannel to the new AioContext.  It did not do that after 4ddb5d2fde
("block/nbd: drop connection_co") and prior to "block/nbd: Move s->ioc
on AioContext change", which would cause an assertion failure.

To improve our chances of yielding, the NBD server is throttled to
reading 64 kB/s, and the NBD client reads 128 kB, so it should yield at
some point.

Reviewed-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Hanna Reitz 
---
 tests/qemu-iotests/281 | 28 +---
 tests/qemu-iotests/281.out |  4 ++--
 2 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/tests/qemu-iotests/281 b/tests/qemu-iotests/281
index 4fb3cd30dd..5e1339bd75 100755
--- a/tests/qemu-iotests/281
+++ b/tests/qemu-iotests/281
@@ -253,8 +253,9 @@ class TestYieldingAndTimers(iotests.QMPTestCase):
 self.create_nbd_export()
 
 # Simple VM with an NBD block device connected to the NBD export
-# provided by the QSD
+# provided by the QSD, and an (initially unused) iothread
 self.vm = iotests.VM()
+self.vm.add_object('iothread,id=iothr')
 self.vm.add_blockdev('nbd,node-name=nbd,server.type=unix,' +
  f'server.path={self.sock},export=exp,' +
  'reconnect-delay=1,open-timeout=1')
@@ -299,19 +300,40 @@ class TestYieldingAndTimers(iotests.QMPTestCase):
 # thus not see the error, and so the test will pass.)
 time.sleep(2)
 
+def test_yield_in_iothread(self):
+# Move the NBD node to the I/O thread; the NBD block driver should
+# attach the connection's QIOChannel to that thread's AioContext, too
+result = self.vm.qmp('x-blockdev-set-iothread',
+ node_name='nbd', iothread='iothr')
+self.assert_qmp(result, 'return', {})
+
+# Do some I/O that will be throttled by the QSD, so that the network
+# connection hopefully will yield here.  When it is resumed, it must
+# then be resumed in the I/O thread's AioContext.
+result = self.vm.qmp('human-monitor-command',
+ command_line='qemu-io nbd "read 0 128K"')
+self.assert_qmp(result, 'return', '')
+
 def create_nbd_export(self):
 assert self.qsd is None
 
-# Simple NBD export of a null-co BDS
+# Export a throttled null-co BDS: Reads are throttled (max 64 kB/s),
+# writes are not.
 self.qsd = QemuStorageDaemon(
+'--object',
+'throttle-group,id=thrgr,x-bps-read=65536,x-bps-read-max=65536',
+
 '--blockdev',
 'null-co,node-name=null,read-zeroes=true',
 
+'--blockdev',
+'throttle,node-name=thr,file=null,throttle-group=thrgr',
+
 '--nbd-server',
 f'addr.type=unix,addr.path={self.sock}',
 
 '--export',
-'nbd,id=exp,node-name=null,name=exp,writable=true'
+'nbd,id=exp,node-name=thr,name=exp,writable=true'
 )
 
 def stop_nbd_export(self):
diff --git a/tests/qemu-iotests/281.out b/tests/qemu-iotests/281.out
index 914e3737bd..3f8a935a08 100644
--- a/tests/qemu-iotests/281.out
+++ b/tests/qemu-iotests/281.out
@@ -1,5 +1,5 @@
-.....
+......
 ----------------------------------------------------------------------
-Ran 5 tests
+Ran 6 tests
 
 OK
-- 
2.34.1




[PATCH v2 4/7] iotests.py: Add QemuStorageDaemon class

2022-02-04 Thread Hanna Reitz
This is a rather simple class that allows creating a QSD instance
running in the background and stopping it when no longer needed.

The __del__ handler is a safety net for when something goes so wrong in
a test that e.g. the tearDown() method is not called (e.g. setUp()
launches the QSD, but then launching a VM fails).  We do not want the
QSD to continue running after the test has failed, so __del__() will
take care to kill it.

Reviewed-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Hanna Reitz 
---
 tests/qemu-iotests/iotests.py | 40 +++
 1 file changed, 40 insertions(+)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 8cdb381f2a..6ba65eb1ff 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -73,6 +73,8 @@
 qemu_prog = os.environ.get('QEMU_PROG', 'qemu')
 qemu_opts = os.environ.get('QEMU_OPTIONS', '').strip().split(' ')
 
+qsd_prog = os.environ.get('QSD_PROG', 'qemu-storage-daemon')
+
 gdb_qemu_env = os.environ.get('GDB_OPTIONS')
 qemu_gdb = []
 if gdb_qemu_env:
@@ -345,6 +347,44 @@ def cmd(self, cmd):
 return self._read_output()
 
 
+class QemuStorageDaemon:
+def __init__(self, *args: str, instance_id: str = 'a'):
+assert '--pidfile' not in args
+self.pidfile = os.path.join(test_dir, f'qsd-{instance_id}-pid')
+all_args = [qsd_prog] + list(args) + ['--pidfile', self.pidfile]
+
+# Cannot use with here, we want the subprocess to stay around
+# pylint: disable=consider-using-with
+self._p = subprocess.Popen(all_args)
+while not os.path.exists(self.pidfile):
+if self._p.poll() is not None:
+cmd = ' '.join(all_args)
+raise RuntimeError(
+'qemu-storage-daemon terminated with exit code ' +
+f'{self._p.returncode}: {cmd}')
+
+time.sleep(0.01)
+
+with open(self.pidfile, encoding='utf-8') as f:
+self._pid = int(f.read().strip())
+
+assert self._pid == self._p.pid
+
+def stop(self, kill_signal=15):
+self._p.send_signal(kill_signal)
+self._p.wait()
+self._p = None
+
+try:
+os.remove(self.pidfile)
+except OSError:
+pass
+
+def __del__(self):
+if self._p is not None:
+self.stop(kill_signal=9)
+
+
 def qemu_nbd(*args):
 '''Run qemu-nbd in daemon mode and return the parent's exit code'''
 return subprocess.call(qemu_nbd_args + ['--fork'] + list(args))
-- 
2.34.1




[PATCH v2 6/7] block/nbd: Move s->ioc on AioContext change

2022-02-04 Thread Hanna Reitz
s->ioc must always be attached to the NBD node's AioContext.  If that
context changes, s->ioc must be attached to the new context.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2033626
Reviewed-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Hanna Reitz 
---
 block/nbd.c | 45 +
 1 file changed, 45 insertions(+)

diff --git a/block/nbd.c b/block/nbd.c
index dc6c3f3bbc..5853d85d60 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -2055,6 +2055,42 @@ static void nbd_cancel_in_flight(BlockDriverState *bs)
 nbd_co_establish_connection_cancel(s->conn);
 }
 
+static void nbd_attach_aio_context(BlockDriverState *bs,
+   AioContext *new_context)
+{
+BDRVNBDState *s = bs->opaque;
+
+/* The open_timer is used only during nbd_open() */
+assert(!s->open_timer);
+
+/*
+ * The reconnect_delay_timer is scheduled in I/O paths when the
+ * connection is lost, to cancel the reconnection attempt after a
+ * given time.  Once this attempt is done (successfully or not),
+ * nbd_reconnect_attempt() ensures the timer is deleted before the
+ * respective I/O request is resumed.
+ * Since the AioContext can only be changed when a node is drained,
+ * the reconnect_delay_timer cannot be active here.
+ */
+assert(!s->reconnect_delay_timer);
+
+if (s->ioc) {
+qio_channel_attach_aio_context(s->ioc, new_context);
+}
+}
+
+static void nbd_detach_aio_context(BlockDriverState *bs)
+{
+BDRVNBDState *s = bs->opaque;
+
+assert(!s->open_timer);
+assert(!s->reconnect_delay_timer);
+
+if (s->ioc) {
+qio_channel_detach_aio_context(s->ioc);
+}
+}
+
 static BlockDriver bdrv_nbd = {
 .format_name= "nbd",
 .protocol_name  = "nbd",
@@ -2078,6 +2114,9 @@ static BlockDriver bdrv_nbd = {
 .bdrv_dirname   = nbd_dirname,
 .strong_runtime_opts= nbd_strong_runtime_opts,
 .bdrv_cancel_in_flight  = nbd_cancel_in_flight,
+
+.bdrv_attach_aio_context= nbd_attach_aio_context,
+.bdrv_detach_aio_context= nbd_detach_aio_context,
 };
 
 static BlockDriver bdrv_nbd_tcp = {
@@ -2103,6 +2142,9 @@ static BlockDriver bdrv_nbd_tcp = {
 .bdrv_dirname   = nbd_dirname,
 .strong_runtime_opts= nbd_strong_runtime_opts,
 .bdrv_cancel_in_flight  = nbd_cancel_in_flight,
+
+.bdrv_attach_aio_context= nbd_attach_aio_context,
+.bdrv_detach_aio_context= nbd_detach_aio_context,
 };
 
 static BlockDriver bdrv_nbd_unix = {
@@ -2128,6 +2170,9 @@ static BlockDriver bdrv_nbd_unix = {
 .bdrv_dirname   = nbd_dirname,
 .strong_runtime_opts= nbd_strong_runtime_opts,
 .bdrv_cancel_in_flight  = nbd_cancel_in_flight,
+
+.bdrv_attach_aio_context= nbd_attach_aio_context,
+.bdrv_detach_aio_context= nbd_detach_aio_context,
 };
 
 static void bdrv_nbd_init(void)
-- 
2.34.1




[PATCH v2 5/7] iotests/281: Test lingering timers

2022-02-04 Thread Hanna Reitz
Prior to "block/nbd: Delete reconnect delay timer when done" and
"block/nbd: Delete open timer when done", both of those timers would
remain scheduled even after successfully (re-)connecting to the server,
and they would not even be deleted when the BDS is deleted.

This test constructs exactly this situation:
(1) Configure an @open-timeout, so the open timer is armed, and
(2) Configure a @reconnect-delay and trigger a reconnect situation
(which succeeds immediately), so the reconnect delay timer is armed.
Then we immediately delete the BDS, and sleep for longer than the
@open-timeout and @reconnect-delay.  Prior to said patches, this caused
one (or both) of the timer CBs to access already-freed data.

Accessing freed data may or may not crash, so this test can produce
false successes, but I do not know how to show the problem in a better
or more reliable way.  If you run this test on "block/nbd: Assert there
are no timers when closed" and without the fix patches mentioned above,
you should reliably see an assertion failure.
(But all other tests that use the reconnect delay timer (264 and 277)
will fail in that configuration, too; as will nbd-reconnect-on-open,
which uses the open timer.)

Remove this test from the quick group because of the two second sleep
this patch introduces.

(I decided to put this test case into 281, because the main bug this
series addresses is in the interaction of the NBD block driver and I/O
threads, which is precisely the scope of 281.  The test case for that
other bug will also be put into the test class added here.

Also, excuse the test class's name, I couldn't come up with anything
better.  The "yield" part will make sense two patches from now.)

Reviewed-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Hanna Reitz 
---
 tests/qemu-iotests/281 | 79 +-
 tests/qemu-iotests/281.out |  4 +-
 2 files changed, 79 insertions(+), 4 deletions(-)

diff --git a/tests/qemu-iotests/281 b/tests/qemu-iotests/281
index 318e333939..4fb3cd30dd 100755
--- a/tests/qemu-iotests/281
+++ b/tests/qemu-iotests/281
@@ -1,5 +1,5 @@
 #!/usr/bin/env python3
-# group: rw quick
+# group: rw
 #
 # Test cases for blockdev + IOThread interactions
 #
@@ -20,8 +20,9 @@
 #
 
 import os
+import time
 import iotests
-from iotests import qemu_img
+from iotests import qemu_img, QemuStorageDaemon
 
 image_len = 64 * 1024 * 1024
 
@@ -243,6 +244,80 @@ class TestBlockdevBackupAbort(iotests.QMPTestCase):
 # Hangs on failure, we expect this error.
 self.assert_qmp(result, 'error/class', 'GenericError')
 
+# Test for RHBZ#2033626
+class TestYieldingAndTimers(iotests.QMPTestCase):
+sock = os.path.join(iotests.sock_dir, 'nbd.sock')
+qsd = None
+
+def setUp(self):
+self.create_nbd_export()
+
+# Simple VM with an NBD block device connected to the NBD export
+# provided by the QSD
+self.vm = iotests.VM()
+self.vm.add_blockdev('nbd,node-name=nbd,server.type=unix,' +
+ f'server.path={self.sock},export=exp,' +
+ 'reconnect-delay=1,open-timeout=1')
+
+self.vm.launch()
+
+def tearDown(self):
+self.stop_nbd_export()
+self.vm.shutdown()
+
+def test_timers_with_blockdev_del(self):
+# The NBD BDS will have had an active open timer, because setUp() gave
+# a positive value for @open-timeout.  It should be gone once the BDS
+# has been opened.
+# (But there used to be a bug where it remained active, which will
+# become important below.)
+
+# Stop and restart the NBD server, and do some I/O on the client to
+# trigger a reconnect and start the reconnect delay timer
+self.stop_nbd_export()
+self.create_nbd_export()
+
+result = self.vm.qmp('human-monitor-command',
+ command_line='qemu-io nbd "write 0 512"')
+self.assert_qmp(result, 'return', '')
+
+# Reconnect is done, so the reconnect delay timer should be gone.
+# (This is similar to how the open timer should be gone after open,
+# and similarly there used to be a bug where it was not gone.)
+
+# Delete the BDS to see whether both timers are gone.  If they are not,
+# they will remain active, fire later, and then access freed data.
+# (Or, with "block/nbd: Assert there are no timers when closed"
+# applied, the assertions added in that patch will fail.)
+result = self.vm.qmp('blockdev-del', node_name='nbd')
+self.assert_qmp(result, 'return', {})
+
+# Give the timers some time to fire (both have a timeout of 1 s).
+# (Sleeping in an iotest may ring some alarm bells, but note that if
+# the timing is off here, the test will just always pass.  If we kill
+# the VM too early, then we just kill the timers before they can fire,
+# thus not see the error, and so the test will pass.)

Re: [PATCH v2 0/7] block/nbd: Move s->ioc on AioContext change

2022-02-04 Thread Vladimir Sementsov-Ogievskiy

04.02.2022 14:10, Hanna Reitz wrote:

Hi,

The original RFC is here:
https://lists.nongnu.org/archive/html/qemu-block/2022-01/msg00765.html

And v1 (whose cover letter has some exposition) is here:
https://lists.nongnu.org/archive/html/qemu-block/2022-02/msg00124.html


In v2, I’ve fixed the QemuStorageDaemon class added in patch 4 as
suggested by Vladimir:
- Have `__init__()`’s `instance_id` parameter have its actual default
   value instead of some weird `if instance_id is None` construct
- Assert that the QSD class users won’t use --pidfile

(And added Vladimir’s R-b to all patches.)

git backport-diff against v1:

Key:
[----] : patches are identical
[####] : number of functional differences between upstream/downstream patch
[down] : patch is downstream-only
The flags [FC] indicate (F)unctional and (C)ontextual differences, respectively

001/7:[----] [--] 'block/nbd: Delete reconnect delay timer when done'
002/7:[----] [--] 'block/nbd: Delete open timer when done'
003/7:[----] [--] 'block/nbd: Assert there are no timers when closed'
004/7:[0006] [FC] 'iotests.py: Add QemuStorageDaemon class'
005/7:[----] [--] 'iotests/281: Test lingering timers'
006/7:[----] [--] 'block/nbd: Move s->ioc on AioContext change'
007/7:[----] [--] 'iotests/281: Let NBD connection yield in iothread'


Hanna Reitz (7):
   block/nbd: Delete reconnect delay timer when done
   block/nbd: Delete open timer when done
   block/nbd: Assert there are no timers when closed
   iotests.py: Add QemuStorageDaemon class
   iotests/281: Test lingering timers
   block/nbd: Move s->ioc on AioContext change
   iotests/281: Let NBD connection yield in iothread

  block/nbd.c   |  64 +
  tests/qemu-iotests/281| 101 +-
  tests/qemu-iotests/281.out|   4 +-
  tests/qemu-iotests/iotests.py |  40 ++
  4 files changed, 205 insertions(+), 4 deletions(-)



Thanks a lot for fixing this.

Applied to my nbd branch. I'll wait for additional comments until Tuesday 
before preparing a pull req.

--
Best regards,
Vladimir



Re: [PATCH 01/12] introduce BDRV_POLL_WHILE_UNLOCKED

2022-02-04 Thread Paolo Bonzini

On 2/3/22 14:57, Emanuele Giuseppe Esposito wrote:

  * This function avoids taking the AioContext lock unnecessarily, so use
  * it only when sure that the lock is not taken already, otherwise it
  * might cause deadlocks.


"so use" should be "but use". :)

Paolo


  *
  * @cond must be thread-safe.
  */





Re: [PATCH 10/12] block.c: add subtree_drains where needed

2022-02-04 Thread Emanuele Giuseppe Esposito
>>
>>  From the other thread:
>>> So you mean that backing_hd is modified as its parent is changed, do
>>> I understand correctly?
>>
>> Yes
>>
>>>
>>> As we started to discuss in a thread started with my "[PATCH] block:
>>> bdrv_set_backing_hd(): use drained section", I think draining the whole
>>> subtree when we modify some parent-child relation is too much. In-flight
>>> requests in the subtree don't touch these relations, so why wait for/stop
>>> them? Imagine two disks A and B with the same backing file C. If we want to
>>> detach A from C, what is the reason to drain in-flight read requests of B
>>> in C?
>>
>> If I am not mistaken, bdrv_set_backing_hd adds a new node (yes removes
>> the old backing_hd, but let's not think about this for a moment).
>> So we have disk B with backing file C, and new disk A that wants to have
>> backing file C.
>>
>> I think I understand what you mean, so in theory the operation would be
>> - create new child
>> - add child to A->children list
>> - add child to C->parents list
>>
>> So in theory we need to
>> * drain A (without subtree), because it can't happen that child nodes of
>>    A have in-flight requests that look at A status (children list),
>> right?
>>    In other words, if A has another node X, can a request in X inspect
>>    A->children
> 
> It should not happen. In my understanding, IO requests never access
> the parents of a node.
> 
>> * drain C, as parents can inspect C status (like B). Same assumption
>>    here, C->children[x]->bs cannot have requests inspecting C->parents
>>    list?
> 
> No, I think we should not drain C. IO requests don't inspect parents
> list. And if some _other_ operations do inspect it, drain will not help,
> as drain is only about IO requests.

I am still not convinced about this, because of the draining.

While drain can only be called by either main loop or the "home
iothread" (the one running the AioContext), it still means there are 2
threads involved. So while the iothread runs set_backing_hd, main loop
could easily drain one of the children of the nodes. Or the other way
around.
I am not saying that this happens, but it *might* happen.

I am a little bit confused about this, it would be nice to hear opinions
from others as well.

Once we figure this, I will know where exactly to put the assertions,
and consequently what to drain and with which function.

Emanuele




Re: [PATCH 10/12] block.c: add subtree_drains where needed

2022-02-04 Thread Vladimir Sementsov-Ogievskiy

04.02.2022 16:30, Emanuele Giuseppe Esposito wrote:


  From the other thread:

So you mean that backing_hd is modified as its parent is changed, do
I understand correctly?


Yes



As we started to discuss in a thread started with my "[PATCH] block:
bdrv_set_backing_hd(): use drained section", I think draining the whole
subtree when we modify some parent-child relation is too much. In-flight
requests in the subtree don't touch these relations, so why wait for/stop
them? Imagine two disks A and B with the same backing file C. If we want to
detach A from C, what is the reason to drain in-flight read requests of B
in C?


If I am not mistaken, bdrv_set_backing_hd adds a new node (yes removes
the old backing_hd, but let's not think about this for a moment).
So we have disk B with backing file C, and new disk A that wants to have
backing file C.

I think I understand what you mean, so in theory the operation would be
- create new child
- add child to A->children list
- add child to C->parents list

So in theory we need to
* drain A (without subtree), because it can't happen that child nodes of
    A have in-flight requests that look at A status (children list),
right?
    In other words, if A has another node X, can a request in X inspect
    A->children


It should not happen. In my understanding, IO requests never access
the parents of a node.


* drain C, as parents can inspect C status (like B). Same assumption
    here, C->children[x]->bs cannot have requests inspecting C->parents
    list?


No, I think we should not drain C. IO requests don't inspect parents
list. And if some _other_ operations do inspect it, drain will not help,
as drain is only about IO requests.


I am still not convinced about this, because of the draining.

While drain can only be called by either main loop or the "home
iothread" (the one running the AioContext), it still means there are 2
threads involved. So while the iothread runs set_backing_hd, main loop
could easily drain one of the children of the nodes. Or the other way
around.
I am not saying that this happens, but it *might* happen.


I agree that it might happen. So, parallel drains are a problem. But a drain is
always part of a graph modification, and any graph modifications running in
parallel are already a problem; we should forbid that somehow.

When you drain a node, you say: please, no in-flight requests at this node now.
But IO requests don't do drains themselves, so why restrict them?

And anyway, I don't see how this helps. OK, assume you drain the additional node C
or even the whole subtree. What protects us from a parallel drain from another
thread?



I am a little bit confused about this, it would be nice to hear opinions
from others as well.

Once we figure this, I will know where exactly to put the assertions,
and consequently what to drain and with which function.

Emanuele




--
Best regards,
Vladimir



Re: [PULL 0/4] Python patches

2022-02-04 Thread Peter Maydell
On Thu, 3 Feb 2022 at 01:59, John Snow  wrote:
>
> The following changes since commit 47cc1a3655135b89fa75c2824fbddd29df874612:
>
>   Merge remote-tracking branch 'remotes/kwolf-gitlab/tags/for-upstream' into 
> staging (2022-02-01 19:48:15 +)
>
> are available in the Git repository at:
>
>   https://gitlab.com/jsnow/qemu.git tags/python-pull-request
>
> for you to fetch changes up to b0b662bb2b340d63529672b5bdae596a6243c4d0:
>
>   python/aqmp: add socket bind step to legacy.py (2022-02-02 14:12:22 -0500)
>
> 
> Python patches
>
> Peter: I expect this to address the iotest 040,041 failures you observed
> on NetBSD. If it doesn't, let me know.
>
> 



Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/7.0
for any user-visible changes.

-- PMM



[PATCH v3 08/26] drop libxml2 checks since libxml is not actually used (for parallels)

2022-02-04 Thread Alex Bennée
From: Michael Tokarev 

For a long time, we assumed that libxml2 is necessary for parallels
block format support (block/parallels*). However, this format actually
does not use libxml [*]. Since this is the only user of libxml2 in
whole QEMU tree, we can drop all libxml2 checks and dependencies too.

What is more: the --enable-parallels configure option was the only
option which was silently ignored when its (fake) dependency
(libxml2) was not installed.

Drop all mentions of libxml2.

[*] Actually, the basis for libxml use was introduced in commit
ed279a06c53 ("configure: add dependency"), but the implementation
was never merged:

https://lore.kernel.org/qemu-devel/70227bbd-a517-70e9-714f-e6e0ec431...@openvz.org/

Signed-off-by: Michael Tokarev 
Reviewed-by: Stefan Hajnoczi 
Message-Id: <20220119090423.149315-1-...@msgid.tls.msk.ru>
Tested-by: Philippe Mathieu-Daudé 
Reviewed-by: Philippe Mathieu-Daudé 
[PMD: Updated description and adapted to use lcitool]
Reviewed-by: Thomas Huth 
Signed-off-by: Philippe Mathieu-Daudé 
Signed-off-by: Alex Bennée 
Message-Id: <20220121154134.315047-5-f4...@amsat.org>
Reviewed-by: Vladimir Sementsov-Ogievskiy 
Message-Id: <20220124201608.604599-9-alex.ben...@linaro.org>
---
 meson.build | 6 --
 block/meson.build   | 3 +--
 meson_options.txt   | 2 --
 scripts/checkpatch.pl   | 1 -
 scripts/ci/org.centos/stream/8/x86_64/configure | 1 -
 scripts/coverity-scan/coverity-scan.docker  | 1 -
 scripts/coverity-scan/run-coverity-scan | 2 +-
 scripts/meson-buildoptions.sh   | 3 ---
 8 files changed, 2 insertions(+), 17 deletions(-)

diff --git a/meson.build b/meson.build
index 5f43355071..82db1e7e74 100644
--- a/meson.build
+++ b/meson.build
@@ -453,11 +453,6 @@ if not get_option('linux_io_uring').auto() or have_block
   required: get_option('linux_io_uring'),
   method: 'pkg-config', kwargs: static_kwargs)
 endif
-libxml2 = not_found
-if not get_option('libxml2').auto() or have_block
-  libxml2 = dependency('libxml-2.0', required: get_option('libxml2'),
-   method: 'pkg-config', kwargs: static_kwargs)
-endif
 libnfs = not_found
 if not get_option('libnfs').auto() or have_block
   libnfs = dependency('libnfs', version: '>=1.9.3',
@@ -3496,7 +3491,6 @@ summary_info += {'bzip2 support': libbzip2}
 summary_info += {'lzfse support': liblzfse}
 summary_info += {'zstd support':  zstd}
 summary_info += {'NUMA host support': config_host.has_key('CONFIG_NUMA')}
-summary_info += {'libxml2':   libxml2}
 summary_info += {'capstone':  capstone_opt == 'internal' ? 
capstone_opt : capstone}
 summary_info += {'libpmem support':   libpmem}
 summary_info += {'libdaxctl support': libdaxctl}
diff --git a/block/meson.build b/block/meson.build
index deb73ca389..90dc9983e5 100644
--- a/block/meson.build
+++ b/block/meson.build
@@ -58,8 +58,7 @@ block_ss.add(when: 'CONFIG_QED', if_true: files(
   'qed-table.c',
   'qed.c',
 ))
-block_ss.add(when: [libxml2, 'CONFIG_PARALLELS'],
- if_true: files('parallels.c', 'parallels-ext.c'))
+block_ss.add(when: 'CONFIG_PARALLELS', if_true: files('parallels.c', 
'parallels-ext.c'))
 block_ss.add(when: 'CONFIG_WIN32', if_true: files('file-win32.c', 
'win32-aio.c'))
 block_ss.add(when: 'CONFIG_POSIX', if_true: [files('file-posix.c'), coref, 
iokit])
 block_ss.add(when: libiscsi, if_true: files('iscsi-opts.c'))
diff --git a/meson_options.txt b/meson_options.txt
index 921967eddb..95d527f773 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -113,8 +113,6 @@ option('libudev', type : 'feature', value : 'auto',
description: 'Use libudev to enumerate host devices')
 option('libusb', type : 'feature', value : 'auto',
description: 'libusb support for USB passthrough')
-option('libxml2', type : 'feature', value : 'auto',
-   description: 'libxml2 support for Parallels image format')
 option('linux_aio', type : 'feature', value : 'auto',
description: 'Linux AIO support')
 option('linux_io_uring', type : 'feature', value : 'auto',
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 5caa739db4..5e50111060 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -307,7 +307,6 @@ our @typeList = (
qr{target_(?:u)?long},
qr{hwaddr},
 # external libraries
-   qr{xml${Ident}},
qr{xen\w+_handle},
# Glib definitions
qr{gchar},
diff --git a/scripts/ci/org.centos/stream/8/x86_64/configure 
b/scripts/ci/org.centos/stream/8/x86_64/configure
index e05f2fddcc..9850dd 100755
--- a/scripts/ci/org.centos/stream/8/x86_64/configure
+++ b/scripts/ci/org.centos/stream/8/x86_64/configure
@@ -81,7 +81,6 @@
 --disable-libssh \
 --disable-libudev \
 --disable-libusb \
---disable-libxml2 \
 --disable-linux-aio \
 --disable-linux-io-

Re: [PATCH v4 4/4] python/aqmp: add socket bind step to legacy.py

2022-02-04 Thread John Snow
On Thu, Feb 3, 2022 at 4:19 AM Kevin Wolf  wrote:
>
> Am 02.02.2022 um 20:08 hat John Snow geschrieben:
> > > I guess the relevant question in the context of this patch is whether
> > > sync.py will need a similar two-phase setup as legacy.py. Do you think
> > > you can do without it when you have to reintroduce this behaviour here
> > > to fix bugs?
> > >
> >
> > Hm, honestly, no. I think you'll still need the two-phase in the sync
> > version no matter what -- if you expect to be able to launch QMP and
> > QEMU from the same process, anyway. I suppose it's just a matter of
> > what you *call* it and what/where the arguments are.
> >
> > I could just call it bind(), and it does whatever it needs to (Either
> > creating a socket or, in 3.7+, instantiating more of the asyncio loop
> > machinery).
> > The signature for accept() would need to change to (perhaps)
> > optionally accepting no arguments; i.e. you can do either of the
> > following:
> >
> > (1) qmp.bind('/tmp/sock')
> > qmp.accept()
> >
> > (2) qmp.accept('/tmp/sock')
> >
> > The risk is that the interface is just a touch more complex, the docs
> > get a bit more cluttered, there's a slight loss of clarity, the
> > accept() method would need to check to make sure you didn't give it an
> > address argument if you've already given it one, etc. Necessary
> > trade-off for the flexibility, though.
>
> Hm, how about copying the create_server() interface instead? So it
> would become:
>
> (1) qmp.create_server('/tmp/sock', start_serving=False)
> qmp.start_serving()
>
> (2) qmp.create_server('/tmp/sock')
>
> Then you get the connection details only in a single place. (The names
> should probably be changed because we're still a QMP client even though
> we're technically the server on the socket level, but you get the idea.)

Good idea. I'll experiment.

(I'm worried that the way I initially factored the code to begin with
might make this ugly, but maybe I'm just dreading nothing. I'll have
to see.)

Thanks for taking a look!

--js
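(For reference, a minimal plain-asyncio sketch of the two-phase pattern being
discussed here, binding the socket first and accepting connections later,
assuming Python 3.7+. It does not use the actual qemu.aqmp API, whose method
names are still being decided above.)

    import asyncio

    async def serve_two_phase(path: str) -> None:
        async def handle(reader, writer):
            writer.close()  # placeholder connection handler

        # Phase 1: create and bind the listening socket, but do not accept yet
        server = await asyncio.start_unix_server(handle, path=path,
                                                 start_serving=False)
        # ... a QEMU process could now be launched and pointed at 'path' ...
        # Phase 2: begin accepting connections
        await server.start_serving()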




Re: [PATCH-for-6.2 0/2] hw/block/fdc: Fix CVE-2021-3507

2022-02-04 Thread John Snow
On Thu, Jan 27, 2022 at 3:11 PM Jon Maloy  wrote:
>
>
> On 11/18/21 06:57, Philippe Mathieu-Daudé wrote:
> > Trivial fix for CVE-2021-3507.
> >
> > Philippe Mathieu-Daudé (2):
> >hw/block/fdc: Prevent end-of-track overrun (CVE-2021-3507)
> >tests/qtest/fdc-test: Add a regression test for CVE-2021-3507
> >
> >   hw/block/fdc.c |  8 
> >   tests/qtest/fdc-test.c | 20 
> >   2 files changed, 28 insertions(+)
> >
> Series
> Acked-by: Jon Maloy 
>

I could have sworn that Philippe said that this patch was incomplete
and to not merge it for 6.2, but maybe I mistook that for a different
series.

I seem to recall that this series didn't apply correctly in
conjunction with the fix for CVE-2021-20196, but if there was a follow-up,
I missed it.

--js




[PULL 16/32] virtio: add vhost support for virtio devices

2022-02-04 Thread Michael S. Tsirkin
From: Jonah Palmer 

This patch adds a get_vhost() callback function for VirtIODevices that
returns the device's corresponding vhost_dev structure, if the vhost
device is running. This patch also adds a vhost_started flag for
VirtIODevices.

Previously, a VirtIODevice wouldn't be able to tell if its corresponding
vhost device was active or not.

Signed-off-by: Jonah Palmer 
Message-Id: <1642678168-20447-3-git-send-email-jonah.pal...@oracle.com>
Reviewed-by: Michael S. Tsirkin 
Signed-off-by: Michael S. Tsirkin 
---
 include/hw/virtio/virtio.h |  3 +++
 hw/block/vhost-user-blk.c  |  7 +++
 hw/display/vhost-user-gpu.c|  7 +++
 hw/input/vhost-user-input.c|  7 +++
 hw/net/virtio-net.c|  9 +
 hw/scsi/vhost-scsi.c   |  8 
 hw/virtio/vhost-user-fs.c  |  7 +++
 hw/virtio/vhost-user-rng.c |  7 +++
 hw/virtio/vhost-vsock-common.c |  7 +++
 hw/virtio/vhost.c  |  4 +++-
 hw/virtio/virtio-crypto.c  | 10 ++
 hw/virtio/virtio.c |  1 +
 12 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 2a0be70ec6..90e6080890 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -22,6 +22,7 @@
 #include "standard-headers/linux/virtio_config.h"
 #include "standard-headers/linux/virtio_ring.h"
 #include "qom/object.h"
+#include "hw/virtio/vhost.h"
 
 /* A guest should never accept this.  It implies negotiation is broken. */
 #define VIRTIO_F_BAD_FEATURE   30
@@ -102,6 +103,7 @@ struct VirtIODevice
 bool started;
 bool start_on_kick; /* when virtio 1.0 feature has not been negotiated */
 bool disable_legacy_check;
+bool vhost_started;
 VMChangeStateEntry *vmstate;
 char *bus_name;
 uint8_t device_endian;
@@ -160,6 +162,7 @@ struct VirtioDeviceClass {
 int (*post_load)(VirtIODevice *vdev);
 const VMStateDescription *vmsd;
 bool (*primary_unplug_pending)(void *opaque);
+struct vhost_dev *(*get_vhost)(VirtIODevice *vdev);
 };
 
 void virtio_instance_init_common(Object *proxy_obj, void *data,
diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
index e8cb170032..5dca4eab09 100644
--- a/hw/block/vhost-user-blk.c
+++ b/hw/block/vhost-user-blk.c
@@ -569,6 +569,12 @@ static void vhost_user_blk_instance_init(Object *obj)
   "/disk@0,0", DEVICE(obj));
 }
 
+static struct vhost_dev *vhost_user_blk_get_vhost(VirtIODevice *vdev)
+{
+VHostUserBlk *s = VHOST_USER_BLK(vdev);
+return &s->dev;
+}
+
 static const VMStateDescription vmstate_vhost_user_blk = {
 .name = "vhost-user-blk",
 .minimum_version_id = 1,
@@ -603,6 +609,7 @@ static void vhost_user_blk_class_init(ObjectClass *klass, 
void *data)
 vdc->get_features = vhost_user_blk_get_features;
 vdc->set_status = vhost_user_blk_set_status;
 vdc->reset = vhost_user_blk_reset;
+vdc->get_vhost = vhost_user_blk_get_vhost;
 }
 
 static const TypeInfo vhost_user_blk_info = {
diff --git a/hw/display/vhost-user-gpu.c b/hw/display/vhost-user-gpu.c
index 09818231bd..96e56c4467 100644
--- a/hw/display/vhost-user-gpu.c
+++ b/hw/display/vhost-user-gpu.c
@@ -565,6 +565,12 @@ vhost_user_gpu_device_realize(DeviceState *qdev, Error 
**errp)
 g->vhost_gpu_fd = -1;
 }
 
+static struct vhost_dev *vhost_user_gpu_get_vhost(VirtIODevice *vdev)
+{
+VhostUserGPU *g = VHOST_USER_GPU(vdev);
+return &g->vhost->dev;
+}
+
 static Property vhost_user_gpu_properties[] = {
 VIRTIO_GPU_BASE_PROPERTIES(VhostUserGPU, parent_obj.conf),
 DEFINE_PROP_END_OF_LIST(),
@@ -586,6 +592,7 @@ vhost_user_gpu_class_init(ObjectClass *klass, void *data)
 vdc->guest_notifier_pending = vhost_user_gpu_guest_notifier_pending;
 vdc->get_config = vhost_user_gpu_get_config;
 vdc->set_config = vhost_user_gpu_set_config;
+vdc->get_vhost = vhost_user_gpu_get_vhost;
 
 device_class_set_props(dc, vhost_user_gpu_properties);
 }
diff --git a/hw/input/vhost-user-input.c b/hw/input/vhost-user-input.c
index 273e96a7b1..43d2ff3816 100644
--- a/hw/input/vhost-user-input.c
+++ b/hw/input/vhost-user-input.c
@@ -79,6 +79,12 @@ static void vhost_input_set_config(VirtIODevice *vdev,
 virtio_notify_config(vdev);
 }
 
+static struct vhost_dev *vhost_input_get_vhost(VirtIODevice *vdev)
+{
+VHostUserInput *vhi = VHOST_USER_INPUT(vdev);
+return &vhi->vhost->dev;
+}
+
 static const VMStateDescription vmstate_vhost_input = {
 .name = "vhost-user-input",
 .unmigratable = 1,
@@ -93,6 +99,7 @@ static void vhost_input_class_init(ObjectClass *klass, void 
*data)
 dc->vmsd = &vmstate_vhost_input;
 vdc->get_config = vhost_input_get_config;
 vdc->set_config = vhost_input_set_config;
+vdc->get_vhost = vhost_input_get_vhost;
 vic->realize = vhost_input_realize;
 vic->change_active = vhost_input_change_active;
 }
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 2

[PULL 02/32] hw/i386: Add the possibility to disable the 'isapc' machine

2022-02-04 Thread Michael S. Tsirkin
From: Thomas Huth 

We already have a CONFIG_ISAPC switch - but we're not using it yet.
Add some "#ifdefs" to make it possible to disable this machine now.

Signed-off-by: Thomas Huth 
Message-Id: <20220107160713.235918-1-th...@redhat.com>
Acked-by: Paolo Bonzini 
Reviewed-by: Philippe Mathieu-Daudé 
Reviewed-by: Michael S. Tsirkin 
Signed-off-by: Michael S. Tsirkin 
---
 hw/i386/pc_piix.c| 5 -
 tests/qtest/cdrom-test.c | 2 +-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 7c7790a5ce..d9b344248d 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -357,10 +357,12 @@ static void pc_compat_1_4_fn(MachineState *machine)
 pc_compat_1_5_fn(machine);
 }
 
+#ifdef CONFIG_ISAPC
 static void pc_init_isa(MachineState *machine)
 {
 pc_init1(machine, TYPE_I440FX_PCI_HOST_BRIDGE, TYPE_I440FX_PCI_DEVICE);
 }
+#endif
 
 #ifdef CONFIG_XEN
 static void pc_xen_hvm_init_pci(MachineState *machine)
@@ -916,6 +918,7 @@ void igd_passthrough_isa_bridge_create(PCIBus *bus, 
uint16_t gpu_dev_id)
 pci_config_set_revision(bridge_dev->config, pch_rev_id);
 }
 
+#ifdef CONFIG_ISAPC
 static void isapc_machine_options(MachineClass *m)
 {
 PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
@@ -935,7 +938,7 @@ static void isapc_machine_options(MachineClass *m)
 
 DEFINE_PC_MACHINE(isapc, "isapc", pc_init_isa,
   isapc_machine_options);
-
+#endif
 
 #ifdef CONFIG_XEN
 static void xenfv_4_2_machine_options(MachineClass *m)
diff --git a/tests/qtest/cdrom-test.c b/tests/qtest/cdrom-test.c
index cfca24fa94..fdd889a487 100644
--- a/tests/qtest/cdrom-test.c
+++ b/tests/qtest/cdrom-test.c
@@ -138,7 +138,7 @@ static void add_x86_tests(void)
  * Unstable CI test under load
  * See https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg05509.html
  */
-if (g_test_slow()) {
+if (g_test_slow() && qtest_has_machine("isapc")) {
 qtest_add_data_func("cdrom/boot/isapc", "-M isapc "
 "-drive if=ide,media=cdrom,file=", test_cdboot);
 }
-- 
MST




[PULL 19/32] qmp: decode feature & status bits in virtio-status

2022-02-04 Thread Michael S. Tsirkin
From: Laurent Vivier 

Display feature names instead of bitmaps for the host, guest, and
backend features of VirtIODevices.

Display status names instead of bitmaps for VirtIODevices.

Display feature names instead of bitmaps for backend, protocol,
acked, and features (hdev->features) for vhost devices.

Decode features according to device ID. Decode statuses
according to configuration status bitmap (config_status_map).
Decode vhost user protocol features according to vhost user
protocol bitmap (vhost_user_protocol_map).

Transport features are on the first line. Undecoded bits (if
any) are stored in a separate field.
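
For illustration, a minimal sketch of the decoding idea: walk the feature
bitmap against a table of bit/name pairs (one table per device ID) and keep
whatever is left over. The table contents and helper names below are
hypothetical, not the ones introduced by this patch, and the real code fills
QAPI string lists rather than printing, but the walk is the same idea.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical map entry: one feature bit and its printable name. */
    typedef struct {
        uint64_t bit;
        const char *name;
    } ExampleFeatureMapEntry;

    /* Illustrative table only -- a real table would exist per device ID. */
    static const ExampleFeatureMapEntry example_map[] = {
        { 1ULL << 28, "INDIRECT_DESC" },   /* transport feature */
        { 1ULL << 29, "EVENT_IDX" },       /* transport feature */
        { 1ULL << 32, "VERSION_1" },       /* transport feature */
        { 0, NULL },
    };

    /* Print the names of the set bits; return whatever could not be decoded. */
    static uint64_t example_decode_features(uint64_t features)
    {
        const ExampleFeatureMapEntry *e;

        for (e = example_map; e->name != NULL; e++) {
            if (features & e->bit) {
                printf("%s ", e->name);
                features &= ~e->bit;
            }
        }
        return features;   /* leftover bits end up in the separate "unknown" field */
    }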

Signed-off-by: Jonah Palmer 
Message-Id: <1642678168-20447-6-git-send-email-jonah.pal...@oracle.com>
Reviewed-by: Michael S. Tsirkin 
Signed-off-by: Michael S. Tsirkin 
---
 qapi/virtio.json   | 156 ++---
 include/hw/virtio/vhost.h  |   3 +
 include/hw/virtio/virtio.h |  18 ++
 hw/block/virtio-blk.c  |  29 
 hw/char/virtio-serial-bus.c|  11 ++
 hw/display/virtio-gpu-base.c   |  18 +-
 hw/input/virtio-input.c|  10 ++
 hw/net/virtio-net.c|  47 ++
 hw/scsi/virtio-scsi.c  |  17 ++
 hw/virtio/vhost-user-fs.c  |  10 ++
 hw/virtio/vhost-vsock-common.c |  10 ++
 hw/virtio/virtio-balloon.c |  14 ++
 hw/virtio/virtio-crypto.c  |  10 ++
 hw/virtio/virtio-iommu.c   |  14 ++
 hw/virtio/virtio-mem.c |  11 ++
 hw/virtio/virtio.c | 297 -
 16 files changed, 646 insertions(+), 29 deletions(-)

diff --git a/qapi/virtio.json b/qapi/virtio.json
index ba61d83df7..474a8bd64e 100644
--- a/qapi/virtio.json
+++ b/qapi/virtio.json
@@ -106,10 +106,10 @@
 'n-tmp-sections': 'int',
 'nvqs': 'uint32',
 'vq-index': 'int',
-'features': 'uint64',
-'acked-features': 'uint64',
-'backend-features': 'uint64',
-'protocol-features': 'uint64',
+'features': 'VirtioDeviceFeatures',
+'acked-features': 'VirtioDeviceFeatures',
+'backend-features': 'VirtioDeviceFeatures',
+'protocol-features': 'VhostDeviceProtocols',
 'max-queues': 'uint64',
 'backend-cap': 'uint64',
 'log-enabled': 'bool',
@@ -176,11 +176,11 @@
 'device-id': 'uint16',
 'vhost-started': 'bool',
 'device-endian': 'str',
-'guest-features': 'uint64',
-'host-features': 'uint64',
-'backend-features': 'uint64',
+'guest-features': 'VirtioDeviceFeatures',
+'host-features': 'VirtioDeviceFeatures',
+'backend-features': 'VirtioDeviceFeatures',
 'num-vqs': 'int',
-'status': 'uint8',
+'status': 'VirtioDeviceStatus',
 'isr': 'uint8',
 'queue-sel': 'uint16',
 'vm-running': 'bool',
@@ -222,14 +222,28 @@
 #"name": "virtio-crypto",
 #"started": true,
 #"device-id": 20,
-#"backend-features": 0,
+#"backend-features": {
+#   "transports": [],
+#   "dev-features": []
+#},
 #"start-on-kick": false,
 #"isr": 1,
 #"broken": false,
-#"status": 15,
+#"status": {
+#   "statuses": ["ACKNOWLEDGE", "DRIVER", "FEATURES_OK",
+#"DRIVER_OK"]
+#},
 #"num-vqs": 2,
-#"guest-features": 5100273664,
-#"host-features": 6325010432,
+#"guest-features": {
+#   "transports": ["EVENT_IDX", "INDIRECT_DESC", "VERSION_1"],
+#   "dev-features": []
+#},
+#"host-features": {
+#   "transports": ["PROTOCOL_FEATURES", "EVENT_IDX",
+#  "INDIRECT_DESC", "VERSION_1", "ANY_LAYOUT",
+#  "NOTIFY_ON_EMPTY"],
+#   "dev-features": []
+#},
 #"use-guest-notifier-mask": true,
 #"vm-running": true,
 #"queue-sel": 1,
@@ -257,22 +271,65 @@
 #   "max-queues": 1,
 #   "backend-cap": 2,
 #   "log-size": 0,
-#   "backend-features": 0,
+#   "backend-features": {
+#  "transports": [],
+#  "dev-features": []
+#   },
 #   "nvqs": 2,
-#   "protocol-features": 0,
+#   "protocol-features": {
+#  "protocols": []
+#   },
 #   "vq-index": 0,
 #   "log-enabled": false,
-#   "acked-features": 5100306432,
-#   "features": 13908344832
+#   "acked-features": {
+#  "transports": ["EVENT_IDX", "INDIRECT_DESC", "VERSION_1",
+# "ANY_LAYOUT", "NOTIFY_ON_EMPTY"],
+#  "dev-features"

[PULL 15/32] virtio: drop name parameter for virtio_init()

2022-02-04 Thread Michael S. Tsirkin
From: Jonah Palmer 

This patch drops the name parameter from the virtio_init() function.

A pairing between the numeric device ID and the string device ID
(name) of a virtio device already exists, but not in a form that
lets us map from one to the other.

This patch adds such a mapping, which removes the need for the
name parameter in virtio_init().

[Jonah: added new virtio IDs to virtio device list from rebase].
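
As a rough illustration (with hypothetical names, not the ones added by this
patch), the kind of lookup that makes the name argument redundant is simply
an ID-indexed table, so callers only pass the numeric VIRTIO_ID_* constant:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical ID -> name table, indexed by the numeric device ID. */
    static const char *const example_virtio_names[] = {
        [1]  = "virtio-net",
        [2]  = "virtio-blk",
        [9]  = "virtio-9p",
        [16] = "virtio-gpu",
    };

    /* Resolve a numeric device ID to its canonical name (NULL if unknown). */
    static const char *example_virtio_id_to_name(uint16_t device_id)
    {
        size_t n = sizeof(example_virtio_names) / sizeof(example_virtio_names[0]);

        return device_id < n ? example_virtio_names[device_id] : NULL;
    }

With such a table in place, a call like
virtio_init(vdev, VIRTIO_ID_BLOCK, sizeof(struct virtio_blk_config))
no longer needs to repeat the "virtio-blk" string.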

Signed-off-by: Jonah Palmer 
Message-Id: <1642678168-20447-2-git-send-email-jonah.pal...@oracle.com>
Reviewed-by: Michael S. Tsirkin 
Signed-off-by: Michael S. Tsirkin 
---
 include/hw/virtio/vhost-vsock-common.h |  2 +-
 include/hw/virtio/virtio-gpu.h |  3 +-
 include/hw/virtio/virtio.h |  4 +-
 hw/9pfs/virtio-9p-device.c |  2 +-
 hw/block/vhost-user-blk.c  |  2 +-
 hw/block/virtio-blk.c  |  2 +-
 hw/char/virtio-serial-bus.c|  3 +-
 hw/display/virtio-gpu-base.c   |  2 +-
 hw/input/virtio-input.c|  3 +-
 hw/net/virtio-net.c|  2 +-
 hw/scsi/virtio-scsi.c  |  3 +-
 hw/virtio/vhost-user-fs.c  |  3 +-
 hw/virtio/vhost-user-i2c.c |  7 +---
 hw/virtio/vhost-user-rng.c |  2 +-
 hw/virtio/vhost-user-vsock.c   |  2 +-
 hw/virtio/vhost-vsock-common.c |  5 +--
 hw/virtio/vhost-vsock.c|  2 +-
 hw/virtio/virtio-balloon.c |  3 +-
 hw/virtio/virtio-crypto.c  |  2 +-
 hw/virtio/virtio-iommu.c   |  3 +-
 hw/virtio/virtio-mem.c |  3 +-
 hw/virtio/virtio-pmem.c|  3 +-
 hw/virtio/virtio-rng.c |  2 +-
 hw/virtio/virtio.c | 55 --
 24 files changed, 77 insertions(+), 43 deletions(-)

diff --git a/include/hw/virtio/vhost-vsock-common.h 
b/include/hw/virtio/vhost-vsock-common.h
index d8b565b4da..076b7ab779 100644
--- a/include/hw/virtio/vhost-vsock-common.h
+++ b/include/hw/virtio/vhost-vsock-common.h
@@ -44,7 +44,7 @@ int vhost_vsock_common_start(VirtIODevice *vdev);
 void vhost_vsock_common_stop(VirtIODevice *vdev);
 int vhost_vsock_common_pre_save(void *opaque);
 int vhost_vsock_common_post_load(void *opaque, int version_id);
-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name);
+void vhost_vsock_common_realize(VirtIODevice *vdev);
 void vhost_vsock_common_unrealize(VirtIODevice *vdev);
 uint64_t vhost_vsock_common_get_features(VirtIODevice *vdev, uint64_t features,
  Error **errp);
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 2179b75703..afff9e158e 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -22,6 +22,7 @@
 #include "sysemu/vhost-user-backend.h"
 
 #include "standard-headers/linux/virtio_gpu.h"
+#include "standard-headers/linux/virtio_ids.h"
 #include "qom/object.h"
 
 #define TYPE_VIRTIO_GPU_BASE "virtio-gpu-base"
@@ -37,8 +38,6 @@ OBJECT_DECLARE_SIMPLE_TYPE(VirtIOGPUGL, VIRTIO_GPU_GL)
 #define TYPE_VHOST_USER_GPU "vhost-user-gpu"
 OBJECT_DECLARE_SIMPLE_TYPE(VhostUserGPU, VHOST_USER_GPU)
 
-#define VIRTIO_ID_GPU 16
-
 struct virtio_gpu_simple_resource {
 uint32_t resource_id;
 uint32_t width;
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index f095637058..2a0be70ec6 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -165,8 +165,8 @@ struct VirtioDeviceClass {
 void virtio_instance_init_common(Object *proxy_obj, void *data,
  size_t vdev_size, const char *vdev_name);
 
-void virtio_init(VirtIODevice *vdev, const char *name,
- uint16_t device_id, size_t config_size);
+void virtio_init(VirtIODevice *vdev, uint16_t device_id, size_t config_size);
+
 void virtio_cleanup(VirtIODevice *vdev);
 
 void virtio_error(VirtIODevice *vdev, const char *fmt, ...) GCC_FMT_ATTR(2, 3);
diff --git a/hw/9pfs/virtio-9p-device.c b/hw/9pfs/virtio-9p-device.c
index 54ee93b71f..5f522e68e9 100644
--- a/hw/9pfs/virtio-9p-device.c
+++ b/hw/9pfs/virtio-9p-device.c
@@ -216,7 +216,7 @@ static void virtio_9p_device_realize(DeviceState *dev, 
Error **errp)
 }
 
 v->config_size = sizeof(struct virtio_9p_config) + strlen(s->fsconf.tag);
-virtio_init(vdev, "virtio-9p", VIRTIO_ID_9P, v->config_size);
+virtio_init(vdev, VIRTIO_ID_9P, v->config_size);
 v->vq = virtio_add_queue(vdev, MAX_REQ, handle_9p_output);
 }
 
diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
index 1a42ae9187..e8cb170032 100644
--- a/hw/block/vhost-user-blk.c
+++ b/hw/block/vhost-user-blk.c
@@ -491,7 +491,7 @@ static void vhost_user_blk_device_realize(DeviceState *dev, 
Error **errp)
 return;
 }
 
-virtio_init(vdev, "virtio-blk", VIRTIO_ID_BLOCK,
+virtio_init(vdev, VIRTIO_ID_BLOCK,
 sizeof(struct virtio_blk_config));
 
 s->virtqs = g_new(VirtQueue *,