Hi,
thank you so much for this! Sorry for the late reply; I was busy with
other things.
On 8/1/25 2:44 PM, Stefan Fritsch wrote:
Hi,
thanks for the report.
On Tue, 25 Feb 2025, Jan wrote:
I tried this question on misc@ but did not get a response.
I hope it is okay to ask here after a few weeks.
I used a Qotom box with 1G interfaces, 8 GB RAM, and an 8th-generation
i5 for some years, but recently, for various reasons, I moved from that
hardware to a QEMU VM (Arch Linux host) on another host. The OpenBSD
box/VM also routes and runs pf between some internal VLANs.
Everything seems to work great and reasonably fast, as I'm used to from
OpenBSD.
But I think I hit a small bug or perhaps a known limit (I did not find
anything in the man pages, though):
If I add more than 9 vio interfaces to the QEMU VM config, OpenBSD
boots only to "fd0 at fdc0 drive 1: density unknown". I'm using -stable.
As a workaround, I added the last interface as an Intel NIC, and
everything works fine again.
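For illustration, the last device line might then look like this (a
hypothetical fragment; e1000 is QEMU's emulated Intel NIC, and the id,
bridge, and MAC values here are placeholders):

-netdev bridge,id=nic10,br=br51 -device e1000,netdev=nic10,mac=52:54:00:00:00:0a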
I can reproduce this on different machines with the same QEMU options
and version. The QEMU version is 9.2.0 (Arch Linux package version:
9.2.0-2).
I tried the same configuration with a Linux guest: no problems.
It is completely irrelevant whether I configure the interfaces in
OpenBSD or not (I think the boot stops long before the network scripts
run).
I can reproduce the issue. The problem seems to be that if there are too
many network devices, the interrupt slots on cpu0 run out, and later
devices, including the block device, get interrupt slots on other CPUs.
But at the time the root file system is mounted, the other CPUs are not
running yet, so the block device won't work, causing the hang.
A workaround is to set the PCI addresses of the network devices explicitly
on the QEMU command line with addr=X, starting at 5 (the X value must be
hex). QEMU will then place the block device before the network devices,
and the block device will get interrupt slots on cpu0. Another workaround
is to limit the MSI-X vectors for the network devices with vectors=2; the
interrupt slots on cpu0 will then run out later.
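For example, such a device line could look like this (an illustrative
fragment with a matching -netdev option; virtio-net-pci is the full
device name, and the id/MAC values are placeholders):

-netdev bridge,id=nic1,br=br51 -device virtio-net-pci,netdev=nic1,mac=52:54:00:00:00:01,vectors=2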
I don't know yet what a fix in OpenBSD could look like.
This mitigation worked like a charm.
I tried with 13 devices and had no more problems; I'm convinced more
would also work.
For the record, this is my adjusted startup config:
qemu-system-x86_64 -enable-kvm -m 8G -cpu host -serial pty -vnc :50 -smp 6 \
  -netdev bridge,id=nic1,br=br4 -device virtio-net,addr=5,netdev=nic1,mac=52:54:00:12:23:55 \
  -netdev bridge,id=nic2,br=br51 -device virtio-net,addr=6,netdev=nic2,mac=52:54:00:12:41:02 \
  -netdev bridge,id=nic3,br=br51 -device virtio-net,addr=7,netdev=nic3,mac=52:54:00:12:41:03 \
  -netdev bridge,id=nic4,br=br51 -device virtio-net,addr=8,netdev=nic4,mac=52:54:00:12:41:04 \
  -netdev bridge,id=nic5,br=br51 -device virtio-net,addr=9,netdev=nic5,mac=52:54:00:12:41:05 \
  -netdev bridge,id=nic6,br=br51 -device virtio-net,addr=A,netdev=nic6,mac=52:54:00:12:41:06 \
  -netdev bridge,id=nic7,br=br51 -device virtio-net,addr=B,netdev=nic7,mac=52:54:00:12:41:07 \
  -netdev bridge,id=nic8,br=br51 -device virtio-net,addr=C,netdev=nic8,mac=52:54:00:12:41:08 \
  -netdev bridge,id=nic9,br=br51 -device virtio-net,addr=D,netdev=nic9,mac=52:54:00:12:41:09 \
  -netdev bridge,id=nic10,br=br5 -device virtio-net,addr=E,netdev=nic10,mac=52:54:00:12:41:10 \
  -netdev bridge,id=nic11,br=br51 -device virtio-net,addr=F,netdev=nic11,mac=52:54:00:12:41:11 \
  -netdev bridge,id=nic12,br=br51 -device virtio-net,addr=10,netdev=nic12,mac=52:54:00:12:41:12 \
  -netdev bridge,id=nic13,br=br51 -device virtio-net,addr=11,netdev=nic13,mac=52:54:00:12:41:13 \
  -drive file=/vm2-sas1/sasvms/obsd-viotest.qcow2,if=virtio
It might help other folks who stumble over this problem.
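In case someone wants to script this, here is a rough sketch (assuming
bash, a single bridge, and a placeholder MAC scheme; the config above
actually mixes br4, br5, and br51) that generates the repetitive
-netdev/-device pairs with hex addr values:

#!/bin/bash
# Sketch: build 13 virtio NICs with explicit PCI addresses starting at 5.
# printf '%x' keeps the addr values hexadecimal, as qemu requires.
args=()
for i in $(seq 1 13); do
    addr=$(printf '%x' $((i + 4)))            # 5, 6, ..., f, 10, 11
    mac=$(printf '52:54:00:12:41:%02x' "$i")  # placeholder MAC scheme
    args+=(-netdev "bridge,id=nic$i,br=br51"
           -device "virtio-net,addr=$addr,netdev=nic$i,mac=$mac")
done
qemu-system-x86_64 -enable-kvm -m 8G -cpu host -serial pty -vnc :50 -smp 6 \
    "${args[@]}" \
    -drive file=/vm2-sas1/sasvms/obsd-viotest.qcow2,if=virtio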
I tried upgrading to the latest snapshot, but the bsd.upgrade kernel
was not finding the storage device, so I stopped trying that.
(Error: "vioblk0 at virtio9: Feature negotiation failed")
I think this issue was fixed in April.
Indeed, no more problems in this regard.
Cheers,
Jan