Package: seabios
Version: 1.7.3-1

My ZFS test VM boots without a problem if it has seven disks or fewer
(1 x 5GB boot/OS zvol, 6 x 200M files).
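
FWIW, the test disks are just plain files. Something like this creates
them (raw format is an assumption, and the paths are illustrative
rather than the exact ones I used):

    # create six 200M raw images to use as virtio test disks
    for i in 1 2 3 4 5 6; do
        qemu-img create -f raw /var/lib/libvirt/images/ztest$i.img 200M
    done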

It still works if I boot with seven disks and then use 'virsh
attach-disk' to hot-add another virtio disk (or five, or ten). The
added drives appear in the guest and I can use them without any
problem, including adding them to my test zpool.
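
The hot-add itself was just something like this (the domain name and
image path are illustrative):

    # live-attach another file-backed disk; a 'vd*' target name
    # tells libvirt to put it on the virtio bus
    virsh attach-disk ztest /var/lib/libvirt/images/ztest8.img vdh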

Rebooting the VM with more than seven disks attached causes it to lock
up at the BIOS screen, immediately after the "Booting from Hard
Disk..." message.

CPU utilisation of the qemu process at that point is about 90% (of one
core of a Phenom II 1090T), and it stays that way until I kill the VM.
I left one instance of the VM running overnight to see if it would
eventually finish booting; it never did.

The only information I can find with Google on block device limits
suggests that KVM has a limit of 4 IDE devices and 20 virtio block
devices, from an openSUSE page:

http://doc.opensuse.org/documentation/html/openSUSE/opensuse-kvm/cha.kvm.limits.html#sec.kvm.limits.hardware

The fact that 'virsh attach-disk' works suggests that it's not a
KVM/qemu limitation, anyway.

craig

PS: I'm not really sure whether this bug belongs to qemu or to seabios;
seabios seems the most likely.

PPS: This used to work in previous versions. Another ZFS testing VM
that I made early last year used to boot with nine virtio disks (vda
... vdi); I last booted it successfully a few months ago. It failed to
boot yesterday morning, and I assumed it was a problem with that VM, so
I created this new ztest VM, only to encounter the same problem when I
added the extra drives for the test pool.

