[Qemu-devel] [Bug 1359394] Re: virtio block device hangs after "virtio_blk virtio3: requests:id 0 is not a head!"
You mean a kernel stack trace when the message was printed? I don't have that, but I guess I could add a dump_stack() call in there.

-- 
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1359394

Title: virtio block device hangs after "virtio_blk virtio3: requests:id 0 is not a head!"
Status in QEMU: New

Bug description:
The virtual machine is running block layer workloads, interrupted by unclean reboots (echo b > /proc/sysrq-trigger). Kernel version is 3.14. Sometimes I get this message on boot:

  "virtio_blk virtio3: requests:id 0 is not a head!"

Then I/O to the virtio block devices just hangs. Unfortunately I don't have a test case and this is kind of hard to reproduce, but it seems related to having I/O in flight when the kernel is forced to reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1359394/+subscriptions
[Qemu-devel] [Bug 1359394] [NEW] virtio block device hangs after "virtio_blk virtio3: requests:id 0 is not a head!"
Public bug reported:

The virtual machine is running block layer workloads, interrupted by unclean reboots (echo b > /proc/sysrq-trigger). Kernel version is 3.14. Sometimes I get this message on boot:

  "virtio_blk virtio3: requests:id 0 is not a head!"

Then I/O to the virtio block devices just hangs. Unfortunately I don't have a test case and this is kind of hard to reproduce, but it seems related to having I/O in flight when the kernel is forced to reboot.

** Affects: qemu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1359394

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1359394/+subscriptions
[Qemu-devel] [Bug 1359383] [NEW] kernel panic at smpboot.c:134 when rebooting qemu with multiple cores
Public bug reported:

Hi all,

I can reproduce this with kernel 3.14 and 3.17rc1. I suspect it is a qemu issue, but I'm not sure. The test case is the following script:

qemu-system-x86_64 -machine accel=kvm -pidfile /tmp/pid$$ -m 512M -smp 8,sockets=8 -kernel vmlinuz -append "init=/sbin/reboot -f console=ttyS0,115200 kgdboc=ttyS2,115200 root=/dev/sda rw" -nographic -serial stdio -drive format=raw,snapshot=on,file=/var/lib/ktest/root

Note that we pass /sbin/reboot as the init program so it just reboots forever. After a dozen or so iterations, I hit this:

[0.00] Initializing cgroup subsys cpuset
[0.00] Initializing cgroup subsys cpu
[0.00] Initializing cgroup subsys cpuacct
[0.00] Linux version 3.17.0-rc1-0-2014.sp (sp@vodka) (gcc version 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC) ) #209 SMP Wed Aug 20 20:17:46 UTC 2014
[0.00] Command line: init=/sbin/reboot -f console=ttyS0,115200 kgdboc=ttyS2,115200 root=/dev/sda rw ktest.priority=9
[0.00] e820: BIOS-provided physical RAM map:
[0.00] BIOS-e820: [mem 0x-0x0009fbff] usable
[0.00] BIOS-e820: [mem 0x0009fc00-0x0009] reserved
[0.00] BIOS-e820: [mem 0x000f-0x000f] reserved
[0.00] BIOS-e820: [mem 0x0010-0x1fffcfff] usable
[0.00] BIOS-e820: [mem 0x1fffd000-0x1fff] reserved
[0.00] BIOS-e820: [mem 0xfeffc000-0xfeff] reserved
[0.00] BIOS-e820: [mem 0xfffc-0x] reserved
[0.00] process: using polling idle threads
[0.00] NX (Execute Disable) protection: active
[0.00] SMBIOS 2.4 present.
[0.00] Hypervisor detected: KVM
[0.00] e820: last_pfn = 0x1fffd max_arch_pfn = 0x4
[0.00] PAT not supported by CPU.
[0.00] init_memory_mapping: [mem 0x-0x000f]
[0.00] init_memory_mapping: [mem 0x1fc0-0x1fdf]
[0.00] init_memory_mapping: [mem 0x1c00-0x1fbf]
[0.00] init_memory_mapping: [mem 0x0010-0x1bff]
[0.00] init_memory_mapping: [mem 0x1fe0-0x1fffcfff]
[0.00] ACPI: Early table checksum verification disabled
[0.00] ACPI: RSDP 0x000F0A90 14 (v00 BOCHS )
[0.00] ACPI: RSDT 0x1C21 34 (v01 BOCHS BXPCRSDT 0001 BXPC 0001)
[0.00] ACPI: FACP 0x1FFFEF40 74 (v01 BOCHS BXPCFACP 0001 BXPC 0001)
[0.00] ACPI: DSDT 0x1FFFDDC0 001180 (v01 BOCHS BXPCDSDT 0001 BXPC 0001)
[0.00] ACPI: FACS 0x1FFFDD80 40
[0.00] ACPI: SSDT 0x1FFFEFB4 000B85 (v01 BOCHS BXPCSSDT 0001 BXPC 0001)
[0.00] ACPI: APIC 0x1B39 B0 (v01 BOCHS BXPCAPIC 0001 BXPC 0001)
[0.00] ACPI: HPET 0x1BE9 38 (v01 BOCHS BXPCHPET 0001 BXPC 0001)
[0.00] No NUMA configuration found
[0.00] Faking a node at [mem 0x-0x1fffcfff]
[0.00] Initmem setup node 0 [mem 0x-0x1fffcfff]
[0.00] NODE_DATA [mem 0x1fffa000-0x1fffcfff]
[0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
[0.00] kvm-clock: cpu 0, msr 0:1fff9001, primary cpu clock
[0.00] Zone ranges:
[0.00] DMA [mem 0x1000-0x00
Re: [Qemu-devel] [PATCH 2/2] qemu-iotests: add multiwrite test cases
Why are you guys merging requests in qemu at all? Just submit them to the kernel and let the kernel do it.

On Wed, Jul 30, 2014 at 1:11 AM, Stefan Hajnoczi wrote:
> On Tue, Jul 29, 2014 at 4:11 PM, Eric Blake wrote:
> > On 07/29/2014 06:41 AM, Stefan Hajnoczi wrote:
> >> This test case covers the basic bdrv_aio_multiwrite() scenarios:
> >> 1. Single request
> >> 2. Sequential requests
> >> 3. Overlapping requests
> >> 4. Disjoint requests
> >>
> >> Signed-off-by: Stefan Hajnoczi
> >> ---
> >
> >> +echo
> >> +echo "== Overlapping requests =="
> >> +_make_test_img $size
> >> +$QEMU_IO -c "multiwrite 0 4k ; 1k 2k" "$TEST_IMG" | _filter_qemu_io
> >> +
> >
> > This only tests superset overlap:
> >
> > AAAA
> >  BB
> >
> > Wouldn't it be good to also test head overlap, tail overlap, and subset
> > overlap, as in:
> >
> >  AAA
> > BB
> >
> > AAA
> >   BB
> >
> >  AA
> > BBBB
>
> Sure, I can add more test cases in v2.
>
> Stefan
[Qemu-devel] [Bug 1343827] [NEW] block.c: multiwrite_merge() truncates overlapping requests
Public bug reported:

If the list of requests passed to multiwrite_merge() contains two requests where the first is for a range of sectors that is a strict subset of the second's, the second request is truncated to end where the first starts, so the second half of the second request is lost.

This is easy to reproduce by running fio against a virtio-blk device running on qemu 2.1.0-rc1 with the below fio script. At least with fio 2.0.13, the randwrite pass will issue overlapping bios to the block driver, which the kernel is happy to pass along to qemu:

[global]
randrepeat=0
ioengine=libaio
iodepth=64
direct=1
size=1M
numjobs=1
verify_fatal=1
verify_dump=1
filename=$dev

[seqwrite]
blocksize_range=4k-1M
rw=write
verify=crc32c-intel

[randwrite]
stonewall
blocksize_range=4k-1M
rw=randwrite
verify=meta

Here is a naive fix for the problem that simply avoids merging problematic requests. I guess a better solution would be to redo qemu_iovec_concat() to do the right thing.

diff -ur old/qemu-2.1.0-rc2/block.c qemu-2.1.0-rc2/block.c
--- old/qemu-2.1.0-rc2/block.c	2014-07-15 14:49:14.0 -0700
+++ qemu-2.1.0-rc2/block.c	2014-07-17 23:03:14.224169741 -0700
@@ -4460,7 +4460,9 @@
         int64_t oldreq_last = reqs[outidx].sector + reqs[outidx].nb_sectors;
 
         // Handle exactly sequential writes and overlapping writes.
-        if (reqs[i].sector <= oldreq_last) {
+        // If this request ends before the previous one, don't merge.
+        if (reqs[i].sector <= oldreq_last &&
+            reqs[i].sector + reqs[i].nb_sectors >= oldreq_last) {
             merge = 1;
         }

** Affects: qemu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1343827

Title: block.c: multiwrite_merge() truncates overlapping requests
Status in QEMU: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1343827/+subscriptions