On 16 September 2013 17:13, Laszlo Ersek <ler...@redhat.com> wrote:
> Hmmmm. Normally (as in, virtio-pci), when a VCPU thread (running KVM)
> executes guest code that sends data to the host via virtio, KVM kicks
> the "host notifier" eventfd.
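For context, the mechanism Laszlo describes works roughly like the sketch below: the guest's queue-notify write is turned into an eventfd signal, and a separate host I/O thread picks that signal up and services the virtqueue, instead of the vCPU thread doing the work synchronously. This is only an illustrative sketch under those assumptions, not QEMU's actual code; host_notifier_init, kick_host, io_thread_loop and the commented-out process_virtqueue() are hypothetical names.

    /* Minimal sketch of an eventfd-based "host notifier" (not QEMU code). */
    #include <poll.h>
    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    static int host_notifier = -1;

    void host_notifier_init(void)
    {
        /* one eventfd per virtqueue */
        host_notifier = eventfd(0, EFD_CLOEXEC);
    }

    /* vCPU side: called when the guest writes the queue-notify register.
     * With a KVM ioeventfd the kernel performs this signal itself without
     * exiting to userspace; the effect is the same. */
    void kick_host(void)
    {
        uint64_t one = 1;
        write(host_notifier, &one, sizeof(one));
    }

    /* I/O thread side: wait for kicks and service the queue. */
    void io_thread_loop(void)
    {
        struct pollfd pfd = { .fd = host_notifier, .events = POLLIN };

        for (;;) {
            poll(&pfd, 1, -1);
            uint64_t cnt;
            read(host_notifier, &cnt, sizeof(cnt));  /* clear the counter */
            /* process_virtqueue(); -- hypothetical: walk the avail ring,
             * consume buffers, push used entries, raise the guest IRQ */
        }
    }

A transport that does not use an eventfd instead handles the notify write directly in the vCPU thread, which is the case Peter asks about below.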
What happens in the virtio-pci without eventfd case? (eg virtio-pci on a
non-x86 host) Also, IIRC Alex said they'd had an annoying "data gets lost"
issue with the s390 virtio transports too...

-- PMM

--
https://bugs.launchpad.net/bugs/1224444

Title:
  virtio-serial loses writes when used over virtio-mmio

Status in QEMU:
  New

Bug description:

  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio. The scenario is this:

  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
      -global virtio-blk-device.scsi=off \
      -nodefconfig \
      -nodefaults \
      -nographic \
      -M vexpress-a15 \
      -machine accel=kvm:tcg \
      -m 500 \
      -no-reboot \
      -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
      -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
      -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
      -device virtio-scsi-device,id=scsi \
      -drive file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none \
      -device scsi-hd,drive=hd0 \
      -drive file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none \
      -device scsi-hd,drive=appliance \
      -device virtio-serial-device \
      -serial stdio \
      -chardev socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0 \
      -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
      -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color'

  After the guest starts up, a daemon writes 4 bytes to a virtio-serial
  socket. The host side reads these 4 bytes correctly and writes a 64 byte
  message. The guest never sees this message.

  I enabled virtio-mmio debugging, and this is what is printed
  (## = my comment):

  ## guest opens the socket:
  trying to open virtio-serial channel '/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle

  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG

  ## host reads 4 bytes successfully:
  main_loop
  libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.

  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK

  ## hangs here forever with guest in read() call never receiving any data

  I am using qemu from git today (2d1fe1873a984).

  Some notes:

  - It's not 100% reproducible. Sometimes everything works fine, although
    it fails "often" (eg > 2/3rds of the time).
  - KVM is being used.
  - We've long used virtio-serial on x86 and have never seen anything
    like this.
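To make the trace above easier to follow: the offsets it mentions correspond to the legacy virtio-mmio register layout (the constant values below are as defined in Linux's include/uapi/linux/virtio_mmio.h). The decode_write() helper is only an illustrative sketch for reading the log, not part of QEMU or libguestfs.

    /* Registers appearing in the trace, legacy virtio-mmio layout. */
    #include <stdint.h>
    #include <stdio.h>

    #define VIRTIO_MMIO_QUEUE_NOTIFY     0x050  /* guest kicks virtqueue <value> */
    #define VIRTIO_MMIO_INTERRUPT_STATUS 0x060  /* guest reads why the IRQ fired */
    #define VIRTIO_MMIO_INTERRUPT_ACK    0x064  /* guest acknowledges IRQ bits   */

    static void decode_write(uint32_t offset, uint32_t value)
    {
        switch (offset) {
        case VIRTIO_MMIO_QUEUE_NOTIFY:
            printf("guest kicked virtqueue %u\n", (unsigned)value);
            break;
        case VIRTIO_MMIO_INTERRUPT_ACK:
            printf("guest acked interrupt bits 0x%x\n", (unsigned)value);
            break;
        default:
            printf("write to offset 0x%x = 0x%x\n",
                   (unsigned)offset, (unsigned)value);
        }
    }

Read that way, the failing trace shows the guest kicking virtqueue 3 when the port is opened and virtqueue 5 for the 4-byte write, and then no further virtio_mmio activity at all after the host's 64-byte write.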
This is what the output looks like when it doesn't fail:

trying to open virtio-serial channel '/dev/virtio-ports/org.libguestfs.channel.0'
virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
opened the socket, sock = 3
udevadm settle
virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
virtio_mmio: virtio_mmio setting IRQ 1
virtio_mmio: virtio_mmio_read offset 0x60
virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
virtio_mmio: virtio_mmio setting IRQ 0
sent magic GUESTlibguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
libguestfs: [14710ms] appliance is up
Guest launched OK.
libguestfs: writing the data to the socket (size = 64)
FS_LAUNCH_FLAG
main_loop waiting for next request
libguestfs: data written OK
virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
virtio_mmio: virtio_mmio setting IRQ 1
virtio_mmio: virtio_mmio setting IRQ 1
virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
virtio_mmio: virtio_mmio_read offset 0x60
virtio_mmio: virtio_mmio setting IRQ 1
virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
virtio_mmio: virtio_mmio setting IRQ 0
virtio_mmio: virtio_mmio_read offset 0x60
virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
virtio_mmio: virtio_mmio setting IRQ 0
virtio_mmio: virtio_mmio_read offset 0x60
virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
[... more virtio-mmio lines omitted ...]
virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
virtio_mmio: virtio_mmio setting IRQ 1
virtio_mmio: virtio_mmio_read offset 0x60
virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
virtio_mmio: virtio_mmio setting IRQ 0
guestfsd: main_loop: new request, len 0x3c
virtio_mmio: virtio_mmio_write offset 0x50 value 0x4
0000: 20 00 f5 f5 00 00 00 04 00 00 00 d2 00 00 00 00 | ...............|
virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
virtio_mmio: virtio_mmio setting IRQ 1
virtio_mmio: virtio_mmio_read offset 0x60
virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
virtio_mmio: virtio_mmio setting IRQ 0
0010: 00 12 34 00 00 00 00 00 00 00 00 00 00 00 00 00 |..4.............|
0020: 00 00 00 00 00 00 00 00 00 00 00 08 2f 64 65 76 |............/dev|
0030: 2f 73 64 61 00 00 00 03 6d 62 72 00             |/sda....mbr.    |
virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
virtio_mmio: virtio_mmio setting IRQ 1
virtio_mmio: virtio_mmio_read offset 0x60
virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
virtio_mmio: virtio_mmio setting IRQ 0
virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
virtio_mmio: virtio_mmio setting IRQ 1
virtio_mmio: virtio_mmio_read offset 0x60
virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
virtio_mmio: virtio_mmio setting IRQ 0
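For interpreting the "offset 0x50 value N" (QueueNotify) lines in both traces, the sketch below maps queue indices to virtio-serial virtqueues. It assumes the multiport layout from the virtio console device specification (queues 0/1 for port 0, 2/3 for the control queues, then an rx/tx pair per additional port); whether that matches this particular guest exactly is an assumption, so treat the names as a reading aid, not a statement about the bug.

    /* Hedged helper: virtio-serial virtqueue layout with MULTIPORT. */
    const char *virtio_serial_queue_name(unsigned int vq)
    {
        switch (vq) {
        case 0: return "port0 rx";
        case 1: return "port0 tx";
        case 2: return "control rx";
        case 3: return "control tx";
        default:
            /* ports >= 1: rx on even, tx on odd queue indices */
            return (vq % 2 == 0) ? "portN rx" : "portN tx";
        }
    }

Read with that mapping, both runs start the same way: a control-queue kick when the channel is opened (value 0x3) and a kick of the port's tx queue for the 4-byte write (value 0x5). The visible difference is that the working run continues with interrupt/ack traffic and a kick of the port's rx queue (value 0x4) around the time guestfsd logs the 0x3c-byte request, while the failing run logs nothing after the host writes its 64 bytes.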