[Qemu-devel] [Bug 1547012] Re: qemu instances crashes with certain spice clients
[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1547012

Title: qemu instances crashes with certain spice clients

Status in QEMU: Expired

Bug description:
It's possible to make qemu instances crash when using certain browsers connected as spice clients.

My environment:
- OpenStack Kilo installed from the ubuntu-cloud archive (qemu-system-x86 2.2+dfsg-5expubuntu9.6~cloud0)
- Using spice for web-console access

How to reproduce:
1. Start a VM on OpenStack.
2. Access the OpenStack dashboard using Iceweasel 43.0.4.
3. Open the spice console.
4. Leave the console open for a few minutes.
5. The VM will crash on the hypervisor.

The content of the qemu log file for this particular VM:

2016-02-18 07:25:23.655+: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-system-x86_64 -name instance-188f -S -machine pc-i440fx-utopic,accel=kvm,usb=off -cpu SandyBridge,+erms,+smep,+fsgsbase,+pdpe1gb,+rdrand,+f16c,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 4096 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid cb04ff25-056f-4f82-a2e8-1fbb762bc29e -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=2015.1.2,serial=----0cc47a45f5e8,uuid=cb04ff25-056f-4f82-a2e8-1fbb762bc29e -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-188f.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=rbd:libvirt/cb04ff25-056f-4f82-a2e8-1fbb762bc29e_disk:id=cinder:key=AQBYmdBUCDq7IBAA/7tLevRjdF3Bo7522xkFqA==:auth_supported=cephx\;none:mon_host=xxx.xxx.xxx.xxx\:6789\;xxx.xxx.xxx.xxx\:6789\;xxx.xxx.xxx.xxx\:6789\;xxx.xxx.xxx.xxx\:6789\;xxx.xxx.xxx.xxx\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=55,id=hostnet0,vhost=on,vhostfd=58 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:a4:74:3b,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/cb04ff25-056f-4f82-a2e8-1fbb762bc29e/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -chardev pty,id=charchannel0 -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -spice port=5929,addr=172.24.1.30,disable-ticketing,seamless-migration=on -k fr-ch -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
char device redirected to /dev/pts/64 (label charserial1)
char device redirected to /dev/pts/65 (label charchannel0)
main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 44.136000 ms, bitrate 157538461538 bps (150240.384615 Mbps)
inputs_connect: inputs channel client create
red_dispatcher_set_cursor_peer:
((null):18188): SpiceWorker-CRITICAL **: red_worker.c:1629:common_alloc_recv_buf: unexpected message size 214862 (max is 1024)
2016-02-18 07:30:47.008+: shutting down

Interestingly, this error only occurs with certain browser versions: in my case with Iceweasel 43.0.4 and 44.0, while it works well with Chrome 48.0.256482 and Firefox 44.0.2.

Marking this as a potential security issue, as it could possibly lead to a denial of service if a user sends crafted packets.
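The crash line in the log points at common_alloc_recv_buf() raising a CRITICAL error on an oversized client message. As a rough illustration only (this is not spice's actual code; the buffer handling and names are simplified assumptions), a hardened allocator would reject the oversized request so the server can drop the misbehaving client instead of taking down the whole VM:

```c
#include <stddef.h>
#include <stdint.h>

#define RECV_BUF_MAX 1024   /* mirrors the "max is 1024" in the log */

static uint8_t recv_buf[RECV_BUF_MAX];

/* Hypothetical hardened variant: on an oversized message, return NULL so
 * the caller can disconnect the client, rather than hitting a CRITICAL
 * assertion that terminates the qemu process. */
static void *alloc_recv_buf(uint32_t size)
{
    if (size > RECV_BUF_MAX) {
        return NULL;        /* e.g. the 214862-byte message from the log */
    }
    return recv_buf;
}
```

Under this sketch, the 214862-byte message from the log would simply be refused, matching the reporter's point that a crafted packet should at worst cost one client connection, not the guest.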
To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1547012/+subscriptions
[Qemu-devel] [Bug 1545024] Re: compiling on armv7 crashes compile qlx.o
[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1545024

Title: compiling on armv7 crashes compile qlx.o

Status in QEMU: Expired

Bug description:
If I try to compile qemu on an armv7 cpu I get this error:

  LINK  qemu-nbd
  CC    qemu-img.o
  LINK  qemu-img
  LINK  qemu-io
  LINK  qemu-bridge-helper
  CC    qmp-marshal.o
  CC    hw/display/qxl.o
{standard input}: Assembler messages:
{standard input}:1704: Error: bad instruction `lock'
{standard input}:1704: Error: bad instruction `addl $0,0(%rsp)'
{standard input}:1864: Error: bad instruction `lock'
{standard input}:1864: Error: bad instruction `addl $0,0(%rsp)'
{standard input}:5239: Error: bad instruction `lock'
{standard input}:5239: Error: bad instruction `addl $0,0(%rsp)'
{standard input}:5731: Error: bad instruction `lock'
{standard input}:5731: Error: bad instruction `addl $0,0(%rsp)'
{standard input}:11923: Error: bad instruction `lock'
{standard input}:11923: Error: bad instruction `addl $0,0(%rsp)'
{standard input}:13960: Error: bad instruction `lock'
{standard input}:13960: Error: bad instruction `addl $0,0(%rsp)'
{standard input}:14349: Error: bad instruction `lock'
{standard input}:14349: Error: bad instruction `addl $0,0(%rsp)'
/home/fleixi/git/qemu/rules.mak:57: recipe for target 'hw/display/qxl.o' failed
make: *** [hw/display/qxl.o] Error 1

Build options are: ./configure --target-list=i386-softmmu

Install prefix        /usr/local
BIOS directory        /usr/local/share/qemu
binary directory      /usr/local/bin
library directory     /usr/local/lib
module directory      /usr/local/lib/qemu
libexec directory     /usr/local/libexec
include directory     /usr/local/include
config directory      /usr/local/etc
local state directory /usr/local/var
Manual directory      /usr/local/share/man
ELF interp prefix     /usr/gnemul/qemu-%M
Source path           /home/fleixi/git/qemu
C compiler            cc
Host C compiler       cc
C++ compiler          c++
Objective-C compiler  cc
ARFLAGS               rv
CFLAGS                -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -pthread -I/usr/include/glib-2.0 -I/usr/lib/arm-linux-gnueabihf/glib-2.0/include -g -mcpu=cortex-a15.cortex-a7 -mfloat-abi=hard -mfpu=neon-vfpv4 -O2 -pipe -ffast-math -ftree-vectorize -mvectorize-with-neon-quad -fstack-protector --param=ssp-buffer-size=4
QEMU_CFLAGS           -I/usr/include/pixman-1 -Werror -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -Wendif-labels -Wmissing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-strong -I/usr/include/libpng12 -I/usr/local/include/spice-server -I/usr/local/include -I/usr/local/include/spice-1 -I/usr/include/glib-2.0 -I/usr/lib/arm-linux-gnueabihf/glib-2.0/include -I/usr/include/pixman-1
LDFLAGS               -Wl,--warn-common -g
make                  make
install               install
python                python -B
smbd                  /usr/sbin/smbd
module support        no
host CPU              arm
host big endian       no
target list           i386-softmmu
tcg debug enabled     no
gprof enabled         no
sparse enabled        no
strip binaries        yes
profiler              no
static build          no
pixman                system
SDL support           no
GTK support           yes
GTK GL support        no
GNUTLS support        no
GNUTLS hash           no
libgcrypt             no
nettle                no
libtasn1              no
VTE support           no
curses support        no
virgl support         no
curl support          yes
mingw32 support       no
Audio drivers         oss
Block whitelist (rw)
Block whitelist (ro)
VirtFS support        no
VNC support           yes
VNC SASL support      yes
VNC JPEG support      yes
VNC PNG support       yes
xen support           no
brlapi support        no
bluez support         yes
Documentation         no
PIE                   no
vde support           no
netmap support        no
Linux AIO support     no
ATTR/XATTR support    yes
Install blobs         yes
KVM support           yes
RDMA support          no
TCG interpreter       no
fdt support           no
preadv support        yes
fdatasync             yes
madvise               yes
posix_madvise         yes
sigev_thread_id       yes
uuid support          no
libcap-ng support     no
vhost-net support     yes
vhost-scsi support    yes
Trace backends        log
spice support         yes (0.12.10/0.12.6)
rbd support           no
xfsctl support        no
smartcard support     no
libusb                no
usb net redir         no
OpenGL support        no
libiscsi support      no
libnfs support        no
build guest agent     yes
QGA VSS support
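The "bad instruction" lines are the giveaway: `lock; addl $0,0(%rsp)` is an x86 memory-barrier idiom, so some header pulled into the qxl build is emitting x86 inline assembly that the ARM assembler rejects. A hedged sketch of the pattern involved (the macro name is invented here; the real fix belongs in whichever spice/qxl header hard-codes the x86 sequence):

```c
/* An x86-only barrier written as inline asm breaks any non-x86 assembler.
 * Guarding it by architecture, with a portable compiler builtin as the
 * fallback, keeps the same semantics on every host: */
#if defined(__x86_64__)
#define qxl_smp_mb() __asm__ __volatile__("lock; addl $0,0(%%rsp)" ::: "memory")
#else
#define qxl_smp_mb() __sync_synchronize()   /* GCC/Clang full-barrier builtin */
#endif

/* tiny demonstration that the guarded macro compiles and is callable */
static int barrier_demo(void)
{
    int x = 41;
    qxl_smp_mb();    /* full memory barrier between the two statements */
    return x + 1;
}
```

On the reporter's armv7 host the `#else` branch would be taken, and the assembler would never see the x86 `lock` prefix at all.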
[Qemu-devel] [Bug 1546680] Re: Incorrect display colors when running big endian guest on POWER8 little endian host
[Expired for QEMU because there has been no activity for 60 days.]

** Changed in: qemu
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1546680

Title: Incorrect display colors when running big endian guest on POWER8 little endian host

Status in QEMU: Expired

Bug description:
When running a big endian CentOS guest on a little endian host system, the display shows severe color issues, probably due to endianness not being properly detected / switched in the emulated display hardware. Little endian guests show no display issues on the same host hardware and software. See the attachment for an example of the problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1546680/+subscriptions
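As an illustration of the suspected failure mode (a sketch, not the actual qemu display code): a 32-bit XRGB pixel stored by a big-endian guest and read by little-endian display code without a byte swap comes out with its channels reversed, which would look exactly like "severe color issues" rather than a blank screen:

```c
#include <stdint.h>

/* Reverse the byte order of a 32-bit value - what the emulated display
 * path would have to do when guest and host endianness differ. */
static uint32_t bswap32(uint32_t x)
{
    return ((x & 0x000000ffu) << 24) |
           ((x & 0x0000ff00u) <<  8) |
           ((x & 0x00ff0000u) >>  8) |
           ((x & 0xff000000u) >> 24);
}
```

For example, a pure-red XRGB pixel 0x00FF0000 written big-endian but interpreted little-endian becomes 0x0000FF00, i.e. pure green: recognizable content with badly wrong colors, consistent with the report.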
Re: [Qemu-devel] Question: can we hot plug a PCIe switch on machine "virt"
Hi Eric,

My real interest is in the hotplug of PCIe switches: we wouldn't need to provide lots of PCIe root ports or PCIe downstream ports at the beginning, but could extend capacity by hot-adding PCIe switches, which would provide more hot-pluggable slots for endpoint devices.

The document docs/pcie.txt says "PCI Express Downstream Ports can't be hot-plugged into an existing PCI Express Upstream Port", which confuses me. Does it actually mean Downstream Ports can't be hot-plugged at all? If they can't be hot-plugged into an existing Upstream Port, as the doc says, then either they can't be hot-plugged into a non-existing Upstream Port either, or into some other place...

Thanks,
Heyi

On 2019/4/4 15:39, Auger Eric wrote:
> Hi Heyi,
> On 4/3/19 8:50 PM, Michael S. Tsirkin wrote:
>> On Wed, Apr 03, 2019 at 03:32:09PM +0800, Heyi Guo wrote:
>>> Hi folks,
>>> In the physical world, a PCIe switch including one upstream port and several downstream ports is a single physical device; however, we treat each port as a device in the qemu world. In qemu docs/pcie.txt, we have the statements below:
>>>
>>> Line 230: Be aware that PCI Express Downstream Ports can't be hot-plugged into
>>> Line 231: an existing PCI Express Upstream Port.
>>>
>>> To my understanding, this implies PCIe downstream ports *can* be hot-plugged into something which is not an existing upstream port. If that is true, how can we do it? AFAIK the monitor command device_add can only add one device at a time. Please help clarify.
>>> Thanks,
>>> Heyi
>> afaik they can only be plugged into upstream ports, with or without hotplug.
> Hotplug on an upstream port does not look supported, as mentioned in the doc:
>
> (QEMU) device_add driver=xio3130-downstream id=down0 bus=upstream_port1
> {"error": {"class": "GenericError", "desc": "Bus 'upstream_port1' does not support hotplugging"}}
>
> It looks like the standard way to use the downstream port is the one documented in 2.2.3:
>
> 2.2.3 Plugging a PCI Express device into a Switch:
> -device ioh3420,id=root_port1,chassis=x,slot=y[,bus=pcie.0][,addr=z]
> -device x3130-upstream,id=upstream_port1,bus=root_port1[,addr=x]
> -device xio3130-downstream,id=downstream_port1,bus=upstream_port1,chassis=x1,slot=y1[,addr=z1]
> -device ,bus=downstream_port1
>
> For my curiosity, why do you want to hotplug a downstream port somewhere other than an upstream port?
>
> Thanks
> Eric
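To make the conclusion of the thread concrete, here is a command-line sketch of the documented approach (IDs, chassis and slot numbers are illustrative): cold-plug the whole switch - root port, upstream port and enough downstream ports - at startup, then hot-add only endpoint devices into the downstream ports at runtime:

    -device ioh3420,id=root_port1,chassis=1,slot=1,bus=pcie.0
    -device x3130-upstream,id=upstream_port1,bus=root_port1
    -device xio3130-downstream,id=downstream_port1,bus=upstream_port1,chassis=2,slot=1

    # later, via the monitor - endpoints (not downstream ports) can be hot-added here:
    (QEMU) device_add driver=virtio-net-pci id=net1 bus=downstream_port1

The practical consequence for Heyi's use case is that anyone wanting more hot-pluggable slots must provision enough downstream ports up front; the switch itself cannot be grown by hotplug.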
[Qemu-devel] [Bug 1823458] Re: race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown crashes qemu
** Also affects: qemu (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: qemu (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: qemu (Ubuntu Trusty)
   Importance: Undecided => Medium

** Changed in: qemu (Ubuntu Trusty)
   Assignee: (unassigned) => Dan Streetman (ddstreet)

** Also affects: qemu
   Importance: Undecided
   Status: New

** Changed in: qemu
   Status: New => In Progress

** Changed in: qemu
   Assignee: (unassigned) => Dan Streetman (ddstreet)

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title: race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown crashes qemu

Status in QEMU: In Progress
Status in qemu package in Ubuntu: In Progress
Status in qemu source package in Trusty: In Progress
Status in qemu source package in Xenial: In Progress
Status in qemu source package in Bionic: In Progress
Status in qemu source package in Cosmic: In Progress
Status in qemu source package in Disco: In Progress

Bug description:
[impact]
on shutdown of a guest, there is a race condition that results in qemu crashing instead of normally shutting down.
The bt looks similar to this (depending on the specific version of qemu, of course; this is taken from the 2.5 version of qemu):

(gdb) bt
#0  __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
#1  0x5636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
#2  0x5636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
#3  0x5636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
#4  0x5636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
#5  0x5636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
#6  0x5636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
#7  0x5636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
#8  0x5636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
#9  0x5636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
#10 virtio_net_set_status (vdev=, status=) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
#11 0x5636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
#12 0x5636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
#13 0x5636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
#14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
#15 0x5636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
#16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
#17 main (argc=, argv=, envp=) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683

[test case]
Unfortunately, since this is a race condition, it's very hard to reproduce arbitrarily; it depends very much on the overall configuration of the guest as well as on how exactly it's shut down - specifically, its vhost-user net must be closed from the host side at a specific time during qemu shutdown.

I have someone with such a setup who has reported that they are able to reproduce this reliably, but the config is too complex for me to replicate, so I have relied on their reproduction and testing to debug and craft the patch for this.

[regression potential]
The change adds flags to prevent repeated calls to both vhost_net_stop() and vhost_net_cleanup() (really, it prevents repeated calls to vhost_dev_cleanup()). Any regression would be seen when stopping and/or cleaning up a vhost net. Regressions might include failure to hot-remove a vhost net from a guest, failure to clean up (i.e. a memory leak), or crashes while cleaning up or stopping a vhost net.

[other info]
this was originally seen in the 2.5 version of qemu - specifically, the
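The [regression potential] section describes the shape of the fix: guard flags that make the stop/cleanup paths idempotent. A minimal sketch of that pattern with invented names (the real patch guards state inside qemu's vhost_net_stop()/vhost_dev_cleanup(); this is only the idempotency idea):

```c
#include <stdbool.h>

/* Illustrative state only - not the real qemu structures. */
struct net_state {
    bool stopped;       /* guard: has the stop path already run? */
    int  stop_calls;    /* instrumentation for this example */
};

/* With the guard, a second call racing in (e.g. from the chardev CLOSED
 * event) becomes a harmless no-op instead of re-running teardown on
 * already-freed state. */
static void net_stop_once(struct net_state *s)
{
    if (s->stopped) {
        return;
    }
    s->stopped = true;
    s->stop_calls++;    /* real code would stop the vhost device here, once */
}
```

Note that the flag alone only illustrates the idempotency half; in qemu the two callers also run under the same global lock, without which the check-then-set itself would race.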
Re: [Qemu-devel] [PATCH] test qgraph.c: Fix segs due to out of scope default
On 05/04/2019 23.16, Paolo Bonzini wrote:
> On 05/04/19 20:40, Dr. David Alan Gilbert (git) wrote:
>> From: "Dr. David Alan Gilbert"
>>
>> The test uses the trick:
>>    if (!opts) {
>>        opts = &(QOSGraph...Options) { };
>>    }
>>
>> in a couple of places, however the temporary created
>> by the &() {} goes out of scope at the bottom of the if,
>> and results in a seg or assert when opts-> fields are
>> used (on fedora 30's gcc 9).
>>
>> Fixes: fc281c802022cb3a73a5
>> Signed-off-by: Dr. David Alan Gilbert
>
> Thomas, can you pick this up?

Sure, queued it to my qtest-next branch now:
https://gitlab.com/huth/qemu/tree/qtest-next

I'll send a PULL request on Monday.

 Thomas
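For readers unfamiliar with this bug class: a C compound literal has the lifetime of its enclosing block, so taking its address inside the `if` and using the pointer afterwards is a use-after-scope. A minimal sketch of the bug and the fix (the struct here is a stand-in, not the real QOSGraph options type):

```c
#include <stddef.h>

typedef struct {
    const char *name;   /* stand-in for the real options fields */
} Options;

/* Buggy pattern from the patch description (do NOT do this):
 *
 *     if (!opts) {
 *         opts = &(Options) { };   // literal's lifetime ends at the '}'
 *     }
 *     use(opts->name);             // dangling pointer: seg/assert on gcc 9
 *
 * Fixed pattern: give the default object function scope, so it outlives
 * every later use of 'opts'. */
static const char *pick_name(const Options *opts)
{
    Options def = { 0 };            /* lives until the function returns */
    if (!opts) {
        opts = &def;
    }
    return opts->name ? opts->name : "(default)";
}
```

The key design point is only where the default object is declared; moving it out of the `if` block changes nothing about the logic but makes the pointer valid for the rest of the function.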
[Qemu-devel] CFP: KVM Forum 2019
KVM Forum 2019: Call For Participation
October 30 - November 1, 2019 - Lyon Convention Center - Lyon, France

(All submissions must be received before June 15, 2019 at 23:59 PST)

=

KVM Forum is an annual event that presents a rare opportunity for developers and users to meet, discuss the state of Linux virtualization technology, and plan for the challenges ahead. We invite you to lead part of the discussion by submitting a speaking proposal for KVM Forum 2019.

At this highly technical conference, developers driving innovation in the KVM virtualization stack (Linux, KVM, QEMU, libvirt) can meet users who depend on KVM as part of their offerings, or to power their data centers and clouds.

KVM Forum will include sessions on the state of the KVM virtualization stack, planning for the future, and many opportunities for attendees to collaborate. As we celebrate ten years of KVM development in the Linux kernel, KVM continues to be a critical part of the FOSS cloud infrastructure.

This year, KVM Forum is joining Open Source Summit in Lyon, France. Selected talks from KVM Forum will be presented on Wednesday, October 30 to the full audience of the Open Source Summit. Also, attendees of KVM Forum will have access to all of the talks from Open Source Summit on Wednesday.

https://events.linuxfoundation.org/events/kvm-forum-2019/program/call-for-proposals/

Suggested topics:
* Scaling, latency optimizations, performance tuning, real-time guests
* Hardening and security
* New features
* Testing

KVM and the Linux kernel:
* Nested virtualization
* Resource management (CPU, I/O, memory) and scheduling
* VFIO: IOMMU, SR-IOV, virtual GPU, etc.
* Networking: Open vSwitch, XDP, etc.
* virtio and vhost
* Architecture ports and new processor features

QEMU:
* Management interfaces: QOM and QMP
* New devices, new boards, new architectures
* Graphics, desktop virtualization and virtual GPU
* New storage features
* High availability, live migration and fault tolerance
* Emulation and TCG
* Firmware: ACPI, UEFI, coreboot, U-Boot, etc.

Management and infrastructure:
* Managing KVM: Libvirt, OpenStack, oVirt, etc.
* Storage: Ceph, Gluster, SPDK, etc.
* Network Function Virtualization: DPDK, OPNFV, OVN, etc.
* Provisioning

SUBMITTING YOUR PROPOSAL

Abstracts due: June 15, 2019

Please submit a short abstract (~150 words) describing your presentation proposal. Slots vary in length up to 45 minutes. Submit your proposal here:
https://events.linuxfoundation.org/events/kvm-forum-2019/program/call-for-proposals/
Please only use the categories "presentation" and "panel discussion".

You will receive a notification of whether or not your presentation proposal was accepted by August 12, 2019.

Speakers will receive a complimentary pass for the event. If your submission has multiple presenters, only the primary speaker for a proposal will receive a complimentary event pass. For panel discussions, all panelists will receive a complimentary event pass.

TECHNICAL TALKS

A good technical talk should not just report on what has happened over the last year; it should present a concrete problem and how it impacts the user and/or developer community. Whenever applicable, focus on work that needs to be done, difficulties that haven't yet been solved, and decisions that other developers should be aware of. Summarizing recent developments is okay, but it should not be more than a small portion of the overall talk.

END-USER TALKS

One of the big challenges we face as developers is knowing what, where and how people actually use our software. We will reserve a few slots for end users talking about their deployment challenges and achievements.
If you are using KVM in production, you are encouraged to submit a speaking proposal. Simply mark it as an end-user talk. As an end user, this is a unique opportunity to get your input to developers.

HANDS-ON / BOF SESSIONS

We will reserve some time for people to get together and discuss strategic decisions, as well as other topics that are best solved within smaller groups. These sessions will be announced during the event. If you are interested in organizing such a session, please add it to the list at
http://www.linux-kvm.org/page/KVM_Forum_2019_BOF
Let people who you think might be interested know about your BOF, and encourage them to add their names to the wiki page as well. Please add your ideas to the list before KVM Forum starts.

PANEL DISCUSSIONS

If you are proposing a panel discussion, please make sure that you list all of your potential panelists in your abstract. We will request full biographies if a panel is accepted.

== HOTEL / TRAVEL ==

This year's event will take place at the Lyon Convention Center. For information on hotels close to the conference, please visit
Re: [Qemu-devel] [PATCH] configure: Automatically fall back to TCI on non-release architectures
On 4/5/19 2:56 PM, Helge Deller wrote:
> Looking just at some of the debian "ports" (non-release) architectures:
> alpha, hppa, ia64, m68k, powerpc, sh4, sparc64

FWIW: sparc64 does have a tcg backend. (Indeed, tcg/sparc does *not* support 32-bit cpus!)

powerpc does have a tcg backend, and it has both 32 and 64-bit users.

Both hppa and ia64 used to have tcg backends, and could be resurrected if there was a will. I have a tcg/alpha backend that I haven't bothered to merge, because I assumed there was little interest.

r~