[Bug 1849644] Update Released

2020-10-12 Thread Łukasz Zemczak
The verification of the Stable Release Update for qemu has completed
successfully and the package is now being released to -updates.
Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report.  In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1849644

Title:
  QEMU VNC websocket proxy requires non-standard 'binary' subprotocol

Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Focal:
  Fix Released

Bug description:
  [Impact]

   * The exact details of the protocol/subprotocol handling were slightly
 unclear between the projects. So qemu ended up insisting on "binary"
 being used, while newer noVNC clients no longer send it.

   * Qemu was fixed in 5.0 to be more tolerant and to accept an empty
 subprotocol as well. This SRU backports that fix to Focal.

  [Test Case]

   * Without the fix the following steps fail with "Failed to connect";
  with the fix the connection works.

  $ sudo apt install qemu-system-x86
  # will only boot into a failing bootloader, but that is enough
  $ qemu-system-x86_64 -vnc :0,websocket
  # We need version 1.2 or later, so use the snap
  $ snap install novnc
  $ novnc --vnc localhost:5700
  Connect browser to http://:6080/vnc.html
  Click "Connect"

   * Cross-check with an older noVNC (e.g. the one in Focal) that
 connectivity still works there as well.

 - Reminders when switching between the noVNC implementations
   - always force-refresh the browser (clearing the cache, ctrl+alt+f5)
   - start/stop the snapped one via snap.novnc.novncsvc.service

  [Regression Potential]

   * This is exclusive to the noVNC-facing functionality, so regressions
 would have to be expected there. The tests exercise the basics, but e.g.
 OpenStack would be a major product using it.

  [Other Info]
   
   * The noVNC in Focal itself does not yet have the offending change, but
 we want the qemu in Focal to be connectable from ~any type of client.


  ---


  
  When running a machine using "-vnc" with the "websocket" option, QEMU seems
to require the subprotocol called 'binary'. This subprotocol does not exist in
the WebSocket specification. In fact it has never existed in the spec: it was
briefly mentioned in one of the very early drafts of WebSockets, but it never
made it into a final version.

  When the WebSocket server requires a non-standard subprotocol any
  WebSocket client that works correctly won't be able to connect.

  One example of such a client is noVNC, which tells the server that it
  doesn't want to use any subprotocol. QEMU's WebSocket proxy doesn't
  let noVNC connect. If noVNC is modified to ask for 'binary' it will
  work; this is, however, incorrect behavior.

  Looking at the code in "io/channel-websock.c", it seems the subprotocol
  is hard-coded to 'binary':

  Look at lines 58 and 433 here:
  https://git.qemu.org/?p=qemu.git;a=blob;f=io/channel-websock.c

  This code has to be made more dynamic, and shouldn't require binary.
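
  As a rough illustration of the more tolerant negotiation described above
  (a minimal C sketch only, not QEMU's actual io/channel-websock.c code;
  example_choose_subprotocol and its parameters are made-up names):

  #include <stdio.h>
  #include <string.h>

  /* 'offered' is the client's Sec-WebSocket-Protocol header value, or NULL
   * when the header is absent, which is what current noVNC sends. */
  static int example_choose_subprotocol(const char *offered,
                                        char *reply, size_t replylen)
  {
      if (offered == NULL || *offered == '\0') {
          /* No subprotocol requested: accept the handshake and simply
           * omit the Sec-WebSocket-Protocol response header. */
          reply[0] = '\0';
          return 0;
      }
      if (strstr(offered, "binary") != NULL) {
          /* Older clients that still ask for "binary" keep working. */
          snprintf(reply, replylen, "Sec-WebSocket-Protocol: binary\r\n");
          return 0;
      }
      return -1;  /* none of the offered subprotocols is supported */
  }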

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1849644/+subscriptions



[Bug 1805256] Update Released

2020-06-18 Thread Łukasz Zemczak
The verification of the Stable Release Update for qemu has completed
successfully and the package is now being released to -updates.
Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report.  In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1805256

Title:
  qemu-img hangs on rcu_call_ready_event logic in Aarch64 when
  converting images

Status in kunpeng920:
  Fix Committed
Status in kunpeng920 ubuntu-18.04 series:
  Fix Committed
Status in kunpeng920 ubuntu-18.04-hwe series:
  Fix Committed
Status in kunpeng920 ubuntu-19.10 series:
  Fix Committed
Status in kunpeng920 ubuntu-20.04 series:
  Fix Committed
Status in kunpeng920 upstream-kernel series:
  Invalid
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Bionic:
  Fix Committed
Status in qemu source package in Eoan:
  Fix Committed
Status in qemu source package in Focal:
  Fix Released

Bug description:
  [Impact]

  * QEMU locking primitives might face a race condition in QEMU Async
  I/O bottom-half scheduling. This leads to a deadlock, causing either
  QEMU or one of its tools to hang indefinitely.

  [Test Case]

  * qemu-img convert -f qcow2 -O qcow2 ./disk01.qcow2 ./output.qcow2

  Hangs indefinitely in approximately 30% of the runs on aarch64.

  [Regression Potential]

  * This is a change to a core part of QEMU: the AIO scheduling. It
  works like a "kernel" scheduler: whereas the kernel schedules OS tasks,
  the QEMU AIO code is responsible for scheduling QEMU coroutines and
  event-listener callbacks (see the sketch after this list).

  * There was a long discussion upstream about locking primitives and
  aarch64. After quite some time Paolo released this patch, and it
  solves the issue. Based on his commit log, the tested platforms were
  amd64 and aarch64.

  * Christian suggests that this fix stay a little longer in -proposed to
  make sure it won't cause any regressions.

  * dannf suggests we also check for performance regressions; e.g. how
  long it takes to convert a cloud image on high-core systems.
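
  As a rough illustration of what "bottom half" scheduling means here, a
  minimal C sketch of an event loop draining a queue of deferred callbacks
  (names are made up; this is not QEMU's actual util/async.c code, and the
  real code additionally needs atomics and a wakeup of the loop thread):

  #include <stdbool.h>
  #include <stddef.h>

  typedef void ExampleBHFunc(void *opaque);

  typedef struct ExampleBH {
      ExampleBHFunc *cb;
      void *opaque;
      struct ExampleBH *next;
      bool scheduled;
  } ExampleBH;

  static ExampleBH *bh_list;            /* per event loop in real code */

  static void example_bh_schedule(ExampleBH *bh)
  {
      if (!bh->scheduled) {             /* queue each bottom half once */
          bh->scheduled = true;
          bh->next = bh_list;
          bh_list = bh;
      }
  }

  static void example_bh_poll(void)     /* run from the event loop */
  {
      while (bh_list) {
          ExampleBH *bh = bh_list;
          bh_list = bh->next;
          bh->scheduled = false;
          bh->cb(bh->opaque);           /* deferred coroutine/callback work */
      }
  }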

  [Other Info]

   * Original description below:

  Command:

  qemu-img convert -f qcow2 -O qcow2 ./disk01.qcow2 ./output.qcow2

  Hangs indefinitely in approximately 30% of the runs.

  

  Workaround:

  qemu-img convert -m 1 -f qcow2 -O qcow2 ./disk01.qcow2 ./output.qcow2

  Run "qemu-img convert" with "a single coroutine" to avoid this issue.

  

  (gdb) thread 1
  ...
  (gdb) bt
  #0 0xbf1ad81c in __GI_ppoll
  #1 0xaabcf73c in ppoll
  #2 qemu_poll_ns
  #3 0xaabd0764 in os_host_main_loop_wait
  #4 main_loop_wait
  ...

  (gdb) thread 2
  ...
  (gdb) bt
  #0 syscall ()
  #1 0xaabd41cc in qemu_futex_wait
  #2 qemu_event_wait (ev=ev@entry=0xaac86ce8 )
  #3 0xaabed05c in call_rcu_thread
  #4 0xaabd34c8 in qemu_thread_start
  #5 0xbf25c880 in start_thread
  #6 0xbf1b6b9c in thread_start ()

  (gdb) thread 3
  ...
  (gdb) bt
  #0 0xbf11aa20 in __GI___sigtimedwait
  #1 0xbf2671b4 in __sigwait
  #2 0xaabd1ddc in sigwait_compat
  #3 0xaabd34c8 in qemu_thread_start
  #4 0xbf25c880 in start_thread
  #5 0xbf1b6b9c in thread_start

  

  (gdb) run
  Starting program: /usr/bin/qemu-img convert -f qcow2 -O qcow2
  ./disk01.ext4.qcow2 ./output.qcow2

  [New Thread 0xbec5ad90 (LWP 72839)]
  [New Thread 0xbe459d90 (LWP 72840)]
  [New Thread 0xbdb57d90 (LWP 72841)]
  [New Thread 0xacac9d90 (LWP 72859)]
  [New Thread 0xa7ffed90 (LWP 72860)]
  [New Thread 0xa77fdd90 (LWP 72861)]
  [New Thread 0xa6ffcd90 (LWP 72862)]
  [New Thread 0xa67fbd90 (LWP 72863)]
  [New Thread 0xa5ffad90 (LWP 72864)]

  [Thread 0xa5ffad90 (LWP 72864) exited]
  [Thread 0xa6ffcd90 (LWP 72862) exited]
  [Thread 0xa77fdd90 (LWP 72861) exited]
  [Thread 0xbdb57d90 (LWP 72841) exited]
  [Thread 0xa67fbd90 (LWP 72863) exited]
  [Thread 0xacac9d90 (LWP 72859) exited]
  [Thread 0xa7ffed90 (LWP 72860) exited]

  
  """

  All the remaining tasks are blocked in a system call, so no task is left
  to call qemu_futex_wake() to unblock thread #2 (in futex()), which would
  in turn unblock thread #1 (doing poll() on a pipe shared with thread #2).
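
  For reference, a minimal C sketch of the futex-based event pattern this
  reasoning is about (illustrative only, not QEMU's qemu_event_*/qemu_futex_*
  implementation; the names are made up). It shows why a waiter stays blocked
  forever once no thread is left to issue the wake:

  #include <limits.h>
  #include <linux/futex.h>
  #include <stdatomic.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static atomic_int ev;                     /* 0 = not set, 1 = set */

  static void example_event_wait(void)
  {
      while (atomic_load(&ev) == 0) {
          /* Sleep in the kernel only while ev is still 0. */
          syscall(SYS_futex, &ev, FUTEX_WAIT, 0, NULL, NULL, 0);
      }
  }

  static void example_event_set(void)
  {
      atomic_store(&ev, 1);
      /* Without this wake, any thread inside example_event_wait() blocks
       * in futex() indefinitely - the situation of thread #2 above. */
      syscall(SYS_futex, &ev, FUTEX_WAKE, INT_MAX, NULL, NULL, 0);
  }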

  Those 7 threads exit before disk conversion is complete (sometimes in
  the beginning, sometimes at the end).

  

  On the HiSilicon D06 system - a 96 core NUMA arm64 box - qemu-img
  frequently hangs (~50% of the time) with this command:

  qemu-img convert -f qcow2 -O qcow2 /tmp/cloudimg /tmp/cloudimg2

  Where "cloudimg" is a standard qcow2 Ubuntu cloud image. This
  qcow2->qcow2 conversion happens to be somethi

[Bug 1848556] Update Released

2019-12-02 Thread Łukasz Zemczak
The verification of the Stable Release Update for qemu has completed
successfully and the package is now being released to -updates.
Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report.  In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1848556

Title:
  qemu-img check failing on remote image in Eoan

Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Eoan:
  Fix Released
Status in qemu source package in Focal:
  Fix Released

Bug description:
  Ubuntu SRU Template:

  [Impact]

   * There is fallout from changes in libcurl that affects qemu and might
 lead to a hang.

   * Fix by backporting the upstream fix

  [Test Case]

   * If you have network access, just run
 $ qemu-img check http://10.193.37.117/cloud/eoan-server-cloudimg-amd64.img

   * Without network access, install apache2 and put a complex qemu image
 (like a cloud image) onto the system. Then access the file via apache over
 http, but not via localhost (that would work).

  [Regression Potential]

   * The change is local to qemu's libcurl usage, so that is what could be
 affected. But that is also exactly what has been found not to work here,
 so I'd expect not too much trouble. If anything does regress, it would be
 in the curl usage (which means disks accessed over http).

  [Other Info]
   
   * n/a

  ---

  The "qemu-img check" function is failing on remote (HTTP-hosted)
  images, beginning with Ubuntu 19.10 (qemu-utils version 1:4.0+dfsg-
  0ubuntu9). With previous versions, through Ubuntu 19.04/qemu-utils
  version 1:3.1+dfsg-2ubuntu3.5, the following worked:

  $ /usr/bin/qemu-img check  
http://10.193.37.117/cloud/eoan-server-cloudimg-amd64.img
  No errors were found on the image.
  19778/36032 = 54.89% allocated, 90.34% fragmented, 89.90% compressed clusters
  Image end offset: 514064384

  The 10.193.37.117 server runs an Apache instance that hosts the cloud
  images on a LAN. Beginning with Ubuntu 19.10/qemu-utils 1:4.0+dfsg-
  0ubuntu9, the same command never returns. (I've left it for up to an
  hour with no change.) I'm able to wget the image from the same server
  and installation on which qemu-img check fails. I've tried several
  .img files on the server, ranging from Bionic to Eoan, with the same
  results with all of them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1848556/+subscriptions



[Qemu-devel] [Bug 1823458] Re: race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown crashes qemu

2019-05-13 Thread Łukasz Zemczak
This is even more than what I wanted, thanks!

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title:
  race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown
  crashes qemu

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Won't Fix
Status in qemu source package in Xenial:
  Fix Committed
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released

Bug description:
  [impact]

  on shutdown of a guest, there is a race condition that results in qemu
  crashing instead of normally shutting down.  The bt looks similar to
  this (depending on the specific version of qemu, of course; this is
  taken from 2.5 version of qemu):

  (gdb) bt
  #0  __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
  #1  0x5636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
  #2  0x5636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, 
buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
  #3  0x5636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, 
fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, 
dev=0x5636c1bf6b70)
  at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
  #4  0x5636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, 
ring=0x7ffe65c087e0) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
  #5  0x5636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, 
vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
  #6  0x5636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, 
vdev=vdev@entry=0x5636c2853338) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
  #7  0x5636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, 
dev=dev@entry=0x5636c2853338) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
  #8  0x5636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, 
ncs=0x5636c209d110, total_queues=total_queues@entry=1) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
  #9  0x5636c08d7745 in virtio_net_vhost_status (status=7 '\a', 
n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
  #10 virtio_net_set_status (vdev=, status=) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
  #11 0x5636c08ec42c in virtio_set_status (vdev=0x5636c2853338, 
val=) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
  #12 0x5636c098fed2 in vm_state_notify (running=running@entry=0, 
state=state@entry=RUN_STATE_SHUTDOWN) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
  #13 0x5636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
  #14 vm_stop (state=RUN_STATE_SHUTDOWN) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
  #15 0x5636c085d240 in main_loop_should_exit () at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
  #16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
  #17 main (argc=, argv=, envp=) 
at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683

  [test case]

  unfortunately since this is a race condition, it's very hard to
  arbitrarily reproduce; it depends very much on the overall
  configuration of the guest as well as how exactly it's shut down -
  specifically, its vhost user net must be closed from the host side at
  a specific time during qemu shutdown.

  I have someone with such a setup who has reported to me their setup is
  able to reproduce this reliably, but the config is too complex for me
  to reproduce so I have relied on their reproduction and testing to
  debug and craft the patch for this.

  [regression potential]

  the change adds a flag to prevent repeated calls to vhost_net_stop().
  This also prevents any calls to vhost_net_cleanup() from
  net_vhost_user_event().  Any regression would be seen when stopping
  and/or cleaning up a vhost net.  Regressions might include failure to
  hot-remove a vhost net from a guest, or failure to cleanup (i.e. mem
  leak), or crashes during cleanup or stopping a vhost net.
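
  a minimal C sketch of the guard-flag pattern described above (illustrative
  only, not the actual QEMU patch; ExampleVhostNet and the function name are
  made up):

  #include <stdbool.h>

  typedef struct ExampleVhostNet {
      bool started;
      /* ... queue and backend state ... */
  } ExampleVhostNet;

  static void example_vhost_net_stop(ExampleVhostNet *net)
  {
      if (!net->started) {
          /* Already stopped, e.g. by a racing CHR_EVENT_CLOSED handler:
           * make the repeated call a harmless no-op. */
          return;
      }
      net->started = false;
      /* ... stop the vrings and release the vhost-user backend ... */
  }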

  [other info]

  this was originally seen in the 2.5 version of qemu - specifically,
  the UCA version in trusty-mitaka (which uses the xenial qemu
  codebase).

  After discussion upstream, it appears this was fixed upstream by
  commit e7c83a885f8, which is included starting in version 2.9.
  However, this commit depends on at least commit 5345fdb4467, and
  likely more other previous co

[Qemu-devel] [Bug 1823458] Update Released

2019-05-13 Thread Łukasz Zemczak
The verification of the Stable Release Update for qemu has completed
successfully and the package has now been released to -updates.
Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report.  In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title:
  race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown
  crashes qemu

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Won't Fix
Status in qemu source package in Xenial:
  Fix Committed
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released

Bug description:
  [impact]

  on shutdown of a guest, there is a race condition that results in qemu
  crashing instead of normally shutting down.  The bt looks similar to
  this (depending on the specific version of qemu, of course; this is
  taken from 2.5 version of qemu):

  (gdb) bt
  #0  __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
  #1  0x5636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
  #2  0x5636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, 
buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
  #3  0x5636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, 
fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, 
dev=0x5636c1bf6b70)
  at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
  #4  0x5636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, 
ring=0x7ffe65c087e0) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
  #5  0x5636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, 
vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
  #6  0x5636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, 
vdev=vdev@entry=0x5636c2853338) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
  #7  0x5636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, 
dev=dev@entry=0x5636c2853338) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
  #8  0x5636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, 
ncs=0x5636c209d110, total_queues=total_queues@entry=1) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
  #9  0x5636c08d7745 in virtio_net_vhost_status (status=7 '\a', 
n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
  #10 virtio_net_set_status (vdev=, status=) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
  #11 0x5636c08ec42c in virtio_set_status (vdev=0x5636c2853338, 
val=) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
  #12 0x5636c098fed2 in vm_state_notify (running=running@entry=0, 
state=state@entry=RUN_STATE_SHUTDOWN) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
  #13 0x5636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
  #14 vm_stop (state=RUN_STATE_SHUTDOWN) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
  #15 0x5636c085d240 in main_loop_should_exit () at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
  #16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
  #17 main (argc=, argv=, envp=) 
at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683

  [test case]

  unfortunately since this is a race condition, it's very hard to
  arbitrarily reproduce; it depends very much on the overall
  configuration of the guest as well as how exactly it's shut down -
  specifically, its vhost user net must be closed from the host side at
  a specific time during qemu shutdown.

  I have someone with such a setup who has reported to me their setup is
  able to reproduce this reliably, but the config is too complex for me
  to reproduce so I have relied on their reproduction and testing to
  debug and craft the patch for this.

  [regression potential]

  the change adds a flag to prevent repeated calls to vhost_net_stop().
  This also prevents any calls to vhost_net_cleanup() from
  net_vhost_user_event().  Any regression would be seen when stopping
  and/or cleaning up a vhost net.  Regressions might include failure to
  hot-remove a vhost net from a guest, or failure to cleanup (i.e. mem
  leak), or crashes during cleanup or st

[Qemu-devel] [Bug 1823458] Re: race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown crashes qemu

2019-05-06 Thread Łukasz Zemczak
Since this SRU is rather hard to verify, and given Christian's
suggestion to let the SRU age a bit longer, I would still wait a few
days before releasing.

Dan (or anyone else involved) - could you perform some safety checks
with this package against the use cases mentioned in the regression
potential? That would make me feel much safer about releasing this,
knowing it was tested by more than just one person.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title:
  race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown
  crashes qemu

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Won't Fix
Status in qemu source package in Xenial:
  Fix Committed
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released

Bug description:
  [impact]

  on shutdown of a guest, there is a race condition that results in qemu
  crashing instead of normally shutting down.  The bt looks similar to
  this (depending on the specific version of qemu, of course; this is
  taken from 2.5 version of qemu):

  (gdb) bt
  #0  __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
  #1  0x5636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
  #2  0x5636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, 
buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
  #3  0x5636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, 
fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, 
dev=0x5636c1bf6b70)
  at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
  #4  0x5636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, 
ring=0x7ffe65c087e0) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
  #5  0x5636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, 
vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
  #6  0x5636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, 
vdev=vdev@entry=0x5636c2853338) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
  #7  0x5636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, 
dev=dev@entry=0x5636c2853338) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
  #8  0x5636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, 
ncs=0x5636c209d110, total_queues=total_queues@entry=1) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
  #9  0x5636c08d7745 in virtio_net_vhost_status (status=7 '\a', 
n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
  #10 virtio_net_set_status (vdev=, status=) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
  #11 0x5636c08ec42c in virtio_set_status (vdev=0x5636c2853338, 
val=) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
  #12 0x5636c098fed2 in vm_state_notify (running=running@entry=0, 
state=state@entry=RUN_STATE_SHUTDOWN) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
  #13 0x5636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
  #14 vm_stop (state=RUN_STATE_SHUTDOWN) at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
  #15 0x5636c085d240 in main_loop_should_exit () at 
/build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
  #16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
  #17 main (argc=, argv=, envp=) 
at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683

  [test case]

  unfortunately since this is a race condition, it's very hard to
  arbitrarily reproduce; it depends very much on the overall
  configuration of the guest as well as how exactly it's shut down -
  specifically, its vhost user net must be closed from the host side at
  a specific time during qemu shutdown.

  I have someone with such a setup who has reported to me their setup is
  able to reproduce this reliably, but the config is too complex for me
  to reproduce so I have relied on their reproduction and testing to
  debug and craft the patch for this.

  [regression potential]

  the change adds a flag to prevent repeated calls to vhost_net_stop().
  This also prevents any calls to vhost_net_cleanup() from
  net_vhost_user_event().  Any regression would be seen when stopping
  and/or cleaning up a vhost net.  Regressions might include failure to
  hot-remove a vhost net from a guest, or failure to cleanup (i.e. mem
  leak), or crashes during cleanup or stopping a vhost net.

  [other info]

  

[Qemu-devel] [Bug 1755912] Update Released

2018-09-06 Thread Łukasz Zemczak
The verification of the Stable Release Update for qemu has completed
successfully and the package has now been released to -updates.
Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report.  In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1755912

Title:
  qemu-system-x86_64 crashed with SIGABRT when using option -vga qxl

Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Bionic:
  Fix Released

Bug description:
  [Impact]

   * There are conditions where the vga/qxl driver can crash the qemu 
 process.

   * It is like a very complex case of an uninitialized variable - without
 the fix it might try to ask for rendering updates without having a valid
 primary surface.

   * Backport from upstream
  
https://git.qemu.org/?p=qemu.git;a=commit;h=5bd5c27c7d284d01477c5cc022ce22438c46bf9f
  to avoid the crash

  
  [Test Case]

   * Sometimes booting xubuntu was reported to be enough; at other times
  the resolution had to be changed a few times to trigger the crash.

# get xubuntu iso (actually other UI Isos should do as well)
$ qemu-system-x86_64 -vga qxl -enable-kvm -cpu host -smp cores=2,threads=2 
-m 2048 -cdrom xubuntu-18.04-desktop-amd64.iso
# If it boots successfully, change resolution until it crashes.
$ while true ; do xrandr --output Virtual-0 --mode 640x480 ; sleep 1 ; 
xrandr --output Virtual-0 --mode 1280x720 ; sleep 1 ; xrandr --output Virtual-0 
--mode 1920x1080 ; sleep 1 ; done

   * Without the fix that will trigger the qemu crash

  [Regression Potential]

   * The change "just" adds QXL_MODE_UNDEFINED as one more condition for
 leaving the rendering update early (see the sketch below). That sounds
 rather safe. Thinking hard about potential regressions, I could imagine
 theoretical setups that stay in undefined mode all the time (unlikely or
 impossible, I think) and would now no longer get any updates. I really
 don't think this is an issue, but since this section is meant for open
 thinking about "potential" regressions, that is what comes to mind.
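
  The shape of that check, as an illustrative C sketch (not the exact
  upstream diff; ExampleQXLState, the mode constants and the function name
  are made up):

  enum {
      EXAMPLE_QXL_MODE_UNDEFINED,
      EXAMPLE_QXL_MODE_VGA,
      EXAMPLE_QXL_MODE_NATIVE,
  };

  typedef struct ExampleQXLState {
      int mode;
      /* ... primary surface, dirty rectangles, ... */
  } ExampleQXLState;

  static void example_render_update(ExampleQXLState *qxl)
  {
      if (qxl->mode == EXAMPLE_QXL_MODE_UNDEFINED) {
          /* No valid primary surface yet: leave the rendering update
           * instead of asking for updates and crashing. */
          return;
      }
      /* ... refresh the displayed primary surface ... */
  }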

  [Other Info]
   
   * Thanks to Leonardo for most of the bisecting and discussion work!

  
  ---

  
  When using qemu-system-x86_64 with the option -vga qxl, it crashes. The
easiest way to crash it is to change the guest's resolution. However, the
system may also crash at random, not only when changing resolution. Here is
the terminal output of one of these random crashes:

  

  $ qemu-system-x86_64 -hda /dev/sdb -m 2048 -enable-kvm -cpu host -vga qxl 
-nodefaults -netdev user,id=hostnet0 -device 
virtio-net-pci,id=net0,netdev=hostnet0
  WARNING: Image format was not specified for '/dev/sdb' and probing guessed 
raw.
   Automatically detecting the format is dangerous for raw images, 
write operations on block 0 will be restricted.
   Specify the 'raw' format explicitly to remove the restrictions.

  (process:21313): Spice-WARNING **: 16:01:45.759: display-
  channel.c:2431:display_channel_validate_surface: canvas address is
  0x7f8eb948ab18 for 0 (and is NULL)

  (process:21313): Spice-WARNING **: 16:01:45.759: display-
  channel.c:2432:display_channel_validate_surface: failed on 0

  (process:21313): Spice-CRITICAL **: 16:01:45.759: 
display-channel.c:2035:display_channel_update: condition 
`display_channel_validate_surface(display, surface_id)' failed
  Aborted (core dumped)

  

  I was running QEMU as a normal user which is in the groups kvm and
  disk. Initially I supposed the problem was because I was running QEMU
  as root, but it happens as a normal user too.

  I have tested guests with different Ubuntu versions: 18.04, 17.10
  and 16.04. It happens with them all.

  ProblemType: Crash
  DistroRelease: Ubuntu 18.04
  Package: qemu-system-x86 1:2.11+dfsg-1ubuntu4
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  CurrentDesktop: XFCE
  Date: Wed Mar 14 17:13:52 2018
  ExecutablePath: /usr/bin/qemu-system-x86_64
  InstallationDate: Installed on 2017-06-13 (273 days ago)
  InstallationMedia: Xubuntu 17.04 "Zesty Zapus" - Release amd64 (20170412)
  KvmCmdLine: COMMAND STAT  EUID  RUID   PID  PPID %CPU COMMAND
  MachineType: LENOVO 80UG
  ProcCmdline: qemu-system-x86_64 -hda /dev/sdb -smp cpus=2 -m 512 -enable-kvm 
-cpu host -vga qxl -nodefaults -netdev user,id=hostnet0 -device 
virtio-net-pci,id=net0,netdev=hostnet0
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.15.0-10-generic.efi.signed 
root=UUID=6b4ae5c0-c78c-49a6-a1ba-02

[Qemu-devel] [Bug 1755912] Re: qemu-system-x86_64 crashed with SIGABRT when using option -vga qxl

2018-08-27 Thread Łukasz Zemczak
Hello Leonardo, or anyone else affected,

Accepted qemu into bionic-proposed. The package will build now and be
available at https://launchpad.net/ubuntu/+source/qemu/1:2.11+dfsg-
1ubuntu7.5 in a few hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed. Your feedback will aid us in getting this
update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested and change the tag from
verification-needed-bionic to verification-done-bionic. If it does not
fix the bug for you, please add a comment stating that, and change the
tag to verification-failed-bionic. In either case, without details of
your testing we will not be able to proceed.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance!

** Changed in: qemu (Ubuntu Bionic)
   Status: Triaged => Fix Committed

** Tags added: verification-needed verification-needed-bionic

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1755912

Title:
  qemu-system-x86_64 crashed with SIGABRT when using option -vga qxl

Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Bionic:
  Fix Committed

Bug description:
  [Impact]

   * There are conditions where the vga/qxl driver can crash the qemu 
 process.

   * It is like a very complex case of an uninitialized variable - without
 the fix it might try to ask for rendering updates without having a valid
 primary surface.

   * Backport from upstream
  
https://git.qemu.org/?p=qemu.git;a=commit;h=5bd5c27c7d284d01477c5cc022ce22438c46bf9f
  to avoid the crash

  
  [Test Case]

   * Sometimes booting xubuntu was reported to be enough; at other times
  the resolution had to be changed a few times to trigger the crash.

# get xubuntu iso (actually other UI Isos should do as well)
$ qemu-system-x86_64 -vga qxl -enable-kvm -cpu host -smp cores=2,threads=2 
-m 2048 -cdrom xubuntu-18.04-desktop-amd64.iso
# If it boots successfully, change resolution until it crashes.
$ while true ; do xrandr --output Virtual-0 --mode 640x480 ; sleep 1 ; 
xrandr --output Virtual-0 --mode 1280x720 ; sleep 1 ; xrandr --output Virtual-0 
--mode 1920x1080 ; sleep 1 ; done

   * Without the fix that will trigger the qemu crash

  [Regression Potential]

   * The change "just" adds QXL_MODE_UNDEFINED as one more condition for
 leaving the rendering update early. That sounds rather safe. Thinking
 hard about potential regressions, I could imagine theoretical setups
 that stay in undefined mode all the time (unlikely or impossible, I
 think) and would now no longer get any updates. I really don't think
 this is an issue, but since this section is meant for open thinking
 about "potential" regressions, that is what comes to mind.

  [Other Info]
   
   * Thanks to Leonardo for most of the bisecting and discussion work!

  
  ---

  
  When using qemu-system-x86_64 with the option -vga qxl, it crashes. The
easiest way to crash it is to change the guest's resolution. However, the
system may also crash at random, not only when changing resolution. Here is
the terminal output of one of these random crashes:

  

  $ qemu-system-x86_64 -hda /dev/sdb -m 2048 -enable-kvm -cpu host -vga qxl 
-nodefaults -netdev user,id=hostnet0 -device 
virtio-net-pci,id=net0,netdev=hostnet0
  WARNING: Image format was not specified for '/dev/sdb' and probing guessed 
raw.
   Automatically detecting the format is dangerous for raw images, 
write operations on block 0 will be restricted.
   Specify the 'raw' format explicitly to remove the restrictions.

  (process:21313): Spice-WARNING **: 16:01:45.759: display-
  channel.c:2431:display_channel_validate_surface: canvas address is
  0x7f8eb948ab18 for 0 (and is NULL)

  (process:21313): Spice-WARNING **: 16:01:45.759: display-
  channel.c:2432:display_channel_validate_surface: failed on 0

  (process:21313): Spice-CRITICAL **: 16:01:45.759: 
display-channel.c:2035:display_channel_update: condition 
`display_channel_validate_surface(display, surface_id)' failed
  Aborted (core dumped)

  

  I was running QEMU as a normal user which is in the groups kvm and
  disk. Initially I supposed the problem was because I was running QEMU
  as root, but it happens as a normal user too.

  I have tested guests with different Ubuntu versions: 18.04, 17.10
  and 16.04. It happens with them all.

  ProblemType: Crash
  DistroRelease: Ubuntu 18.04
  Package: qemu-system-x86 1:2.11+dfsg-1ubuntu4
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0

[Qemu-devel] [Bug 1587065] Update Released

2018-07-26 Thread Łukasz Zemczak
The verification of the Stable Release Update for qemu has completed
successfully and the package has now been released to -updates.
Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report.  In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1587065

Title:
  btrfs qemu-ga - multiple mounts block fsfreeze

Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Xenial:
  Fix Released

Bug description:
  [Impact]

   * backport the upstream fix to avoid double mounts (e.g. bind mounts)
     hanging the freeze action.

  [Test Case]

   * Prepare a guest
     $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 release=xenial label=daily
     $ uvt-kvm create --password ubuntu x-freeze arch=amd64 release=xenial label=daily

   * Shut it down and use virsh edit to add a channel for the agent. Add
     this somewhere under <devices>:

     <channel type='unix'>
        <target type='virtio' name='org.qemu.guest_agent.0'/>
     </channel>

    * Start the guest again and install qemu-guest-agent in it


    * Then, on the host, freeze and thaw the guest
  $ virsh domfsfreeze x-freeze
  $ virsh domfsthaw x-freeze

    # The above works; now trigger the issue - in the guest do:
  $ sudo mkdir /mnt/test
  $ sudo mount -o bind / /mnt/test/

    * Now the freeze/thaw will fail unless the fix is applied to the
  guest.

  [Regression Potential]

   * From looking at the code alone I could think of a different cause that
     makes it return EBUSY.
     The biggest risk I could see from that is the agent reporting a
     successful freeze, because it now accepts EBUSY, when the filesystem
     is not actually frozen. But we have had that behaviour in Ubuntu since
     Artful and upstream since January 2017, and there are no such reports.

  [Other Info]

   * n/a

  ---

  Having two mounts of the same device makes fsfreeze through qemu-ga 
impossible.
  root@CmsrvMTA:/# mount -l | grep /dev/vda2
  /dev/vda2 on / type btrfs (rw,relatime,space_cache,subvolid=257,subvol=/@)
  /dev/vda2 on /home type btrfs 
(rw,relatime,space_cache,subvolid=258,subvol=/@home)

  Having two mounts is rather common with btrfs, so the fsfreeze feature
  is unusable on these systems.

  Below is more information about how we encountered this issue...

  Message sent to qemu-disc...@nongnu.org:

  Message 1:
  --
  I use external snapshots to back up my guests. I use the 'quiesce' option
to flush and freeze the guest file system with the qemu guest agent.

  With the exception of one guest, this procedure works fine. On the 'unwilling' 
guest, I get this error message:
  "ERROR 2016-05-25 00:51:19 | T25-bakVMSCmsrvVH2 | fout: internal error: 
unable to execute QEMU agent command 'guest-fsfreeze-freeze': failed to freeze 
/: Device or resource busy"

  I don't think this is some sort of time-out error, because
  activation of the fsfreeze and the error message happen immediately
  after each other:

  $ grep qemu-ga syslog.1
  May 25 00:51:19 CmsrvMTA qemu-ga: info: guest-fsfreeze called

  This is the only entry of the qemu guest agent in syslog.

  $ sudo virsh version
  Compiled against library: libvirt 1.3.1
  Using library: libvirt 1.3.1
  Using API: QEMU 1.3.1
  Running hypervisor: QEMU 2.5.0

  $ virsh qemu-agent-command CmsrvMTA '{"execute": "guest-info"}'
  {"return":{"version":"2.5.0", ... 
,{"enabled":true,"name":"guest-fstrim","success-response":true},{"enabled":true,"name":"guest-fsfreeze-thaw","success-response":true},{"enabled":true,"name":"guest-fsfreeze-status","success-response":true},{"enabled":true,"name":"guest-fsfreeze-freeze-list","success-response":true},{"enabled":true,"name":"guest-fsfreeze-freeze","success-response":true},
 ... }

  For making an external snapshot, I use this command:
  $ virsh snapshot-create-as --domain CmsrvMTA sn1 --disk-only --atomic 
--quiesce --no-metadata --diskspec vda,file=/srv/poolVMS/CmsrvMTA.sn1

  Piece of reply 1:
  -
  I have encountered this before. Some operating systems may have bind
  mounts that let a device appear multiple times in the mount list.
  Unfortunately the guest agent is not smart enough to treat a device that
  has already been frozen as a success and keep going. This causes this
  specific error.
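
  A minimal C sketch of the more tolerant behaviour being described
  (illustrative only, not the actual qemu-ga code; example_freeze_mount is a
  made-up name). FIFREEZE returns EBUSY when the filesystem behind a mount
  point is already frozen, e.g. via a bind mount or another subvolume of the
  same device, and that case can simply be treated as success:

  #include <errno.h>
  #include <linux/fs.h>       /* FIFREEZE */
  #include <sys/ioctl.h>

  static int example_freeze_mount(int dirfd)
  {
      if (ioctl(dirfd, FIFREEZE, 0) == 0) {
          return 1;           /* we froze it */
      }
      if (errno == EBUSY) {
          return 0;           /* already frozen through another mount */
      }
      return -1;              /* real error: abort and thaw what was frozen */
  }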

  Piece of reply 2:
  -
  Ok, that seems to be it.

  I’ve got the ‘/’ and ‘/home’ on the same device formatted as btrfs on
  two separate sub volumes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1587065/+subscriptions



[Qemu-devel] [Bug 955379] Re: cmake hangs with qemu-arm-static

2018-06-29 Thread Łukasz Zemczak
What's the status of this bug? I see LP: #1764555 has been marked as invalid
because the test environment was tainted - does this imply the fix was correct
and everything is working as intended?
The bug is marked for 16.04.5. If it's still something we intend to get
released for the point release, we would need someone to prepare an SRU as
soon as possible.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/955379

Title:
  cmake hangs with qemu-arm-static

Status in QEMU:
  Fix Released
Status in Linaro QEMU:
  Confirmed
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu-linaro package in Ubuntu:
  Confirmed
Status in qemu source package in Xenial:
  Incomplete
Status in qemu-linaro source package in Xenial:
  New

Bug description:
  I'm using git commit 3e7ecd976b06f... configured with
  --target-list=arm-linux-user --static in a chroot environment to compile some
  things. I ran into this problem with both pcl and opencv-2.3.1. cmake
  consistently freezes at some point during its execution, though in a
  different spot each time, usually during a step when it's searching
  for some libraries. For instance, pcl most commonly stops after:

  [snip]
  -- Boost version: 1.46.1
  -- Found the following Boost libraries:
  --   system
  --   filesystem
  --   thread
  --   date_time
  -- checking for module 'eigen3'
  --   found eigen3, version 3.0.1

  which is perplexing because it freezes after finding what it wants,
  not during the search. When it does get past that point, it does so
  almost immediately but freezes somewhere else.

  I'm using 64-bit Ubuntu 11.10 with kernel release 3.0.0-16-generic
  with an Intel i5.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/955379/+subscriptions