On Sun, Apr 14, 2024 at 8:52 PM Sahil <icegambi...@gmail.com> wrote:
>
> Hi,
>
> On Friday, April 5, 2024 12:36:02 AM IST Sahil wrote:
> > [...]
> > I'll set up this environment as well.
>
> I would like to post an update here. I spent the last week
> trying to set up the environment as described in the blog [1].
> I initially tried to get the L1 VM running on my host machine
> (Arch Linux). However, I was unable to use virt-sysprep or
> virt-customize to install packages in the qcow2 image. It wasn't
> able to resolve the hosts while downloading the packages.
>
> According to the logs, /etc/resolv.conf was a dangling symlink.
> I tried to use "virt-rescue" to configure DNS resolution. I tried
> following these sections [2], [3] in the Arch wiki, but that didn't
> work either. I tried using qemu-nbd as well, following this section
> [4] to access the image. While I managed to gain access to the
> image, I wasn't able to install packages after performing a
> chroot.
>
> One workaround was to set this environment up in a VM. I
> decided to set up the environment with a Fedora image in
> VirtualBox acting as L0. I have managed to set up an L1 VM
> in this environment and I can load it using qemu-kvm.
>

I'm not clear if the complaint about the dangling symlink comes from
the host or from the guest env, but I think it is ok to continue if
you've been able to build the env.

> I have one question though. One of the options (use case 1 in [1])
> given to the "qemu-kvm" command is:
>
> -device virtio-net-pci,netdev=vhost-vdpa0,bus=pcie.0,addr=0x7\
> ,disable-modern=off,page-per-vq=on
>
> This gives an error:
>
> Bus "pcie.0" not found
>
> Does pcie refer to PCI Express? Changing this to pci.0 works.

Yes, pcie refers to PCI Express, but you don't need to mess with pcie
stuff here, so this solution is totally valid. I think we need to
change that part in the tutorial.

> I read through the "device buses" section in QEMU's user
> documentation [5], but I have still not understood this.
>
> "ls /sys/bus/pci/devices/* | grep vdpa" does not give any results.
> Replacing pci with pci_express doesn't give any results either. How
> does one know which pci bus the vdpa device is connected to?
> I have gone through the "vDPA bus drivers" section of the "vDPA
> kernel framework" article [6], but I haven't managed to find an
> answer yet. Am I missing something here?
>

You cannot see the vDPA device from the guest. From the guest's POV it
is a regular virtio device on the PCI bus.

From the host, vdpa_sim is not a PCI device either, so you cannot see
it under /sys/bus/pci. Do you have a vdpa* entry under
/sys/bus/vdpa/devices/?
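
In case it helps, this is roughly how I would check it from the host
side (just a sketch; it assumes the vdpa_sim device was created as in
the blog and that the iproute2 "vdpa" tool is installed, and "vdpa0"
is simply the name the blog uses):

    # list the devices registered on the vdpa bus
    ls -l /sys/bus/vdpa/devices/
    # the same information through the iproute2 tool
    vdpa mgmtdev show
    vdpa dev show vdpa0

Inside the guest you should only see a normal virtio-net PCI device,
e.g. with "lspci | grep -i virtio".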
> There's one more thing. In "use case 1" of "Running traffic with
> vhost_vdpa in Guest" [1], running "modprobe pktgen" in the L1 VM
> gives an error:
>
> module pktgen couldn't be found in /lib/modules/6.5.6-300.fc39.x86_64.
>
> The kernel version is 6.5.6-300.fc39.x86_64. I haven't tried building
> pktgen manually in L1. I'll try that and will check if vdpa_sim works
> as expected after that.
>

Did you install kernel-modules-internal?

Thanks!

> [1] https://www.redhat.com/en/blog/hands-vdpa-what-do-you-do-when-you-aint-got-hardware-part-1
> [2] https://wiki.archlinux.org/title/QEMU#User-mode_networking
> [3] https://wiki.archlinux.org/title/Systemd-networkd#Required_services_and_setup
> [4] https://wiki.archlinux.org/title/QEMU#Mounting_a_partition_from_a_qcow2_image
> [5] https://qemu-project.gitlab.io/qemu/system/device-emulation.html
> [6] https://www.redhat.com/en/blog/vdpa-kernel-framework-part-1-vdpa-bus-abstracting-hardware
>
> Thanks,
> Sahil
>
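
PS: in case it helps, this is the kind of thing I had in mind for the
pktgen step, assuming the module does ship in kernel-modules-internal
on F39 (worth double-checking that the installed package version
matches the running 6.5.6-300.fc39 kernel):

    sudo dnf install kernel-modules-internal
    sudo modprobe pktgen
    lsmod | grep pktgen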