Hi,

On Monday, September 9, 2024 6:04:45 PM GMT+5:30 Eugenio Perez Martin wrote:
> On Sun, Sep 8, 2024 at 9:47 PM Sahil <icegambi...@gmail.com> wrote:
> > On Friday, August 30, 2024 4:18:31 PM GMT+5:30 Eugenio Perez Martin wrote:
> > > On Fri, Aug 30, 2024 at 12:20 PM Sahil <icegambi...@gmail.com> wrote:
> > > [...]
> > > vdpa_sim does not support packed vq at the moment. You need to build
> > > the use case #3 of the second part of that blog [1]. It's good that
> > > you build the vdpa_sim earlier as it is a simpler setup.
> > > 
> > > If you have problems with the vp_vdpa environment please let me know
> > > so we can find alternative setups.
> > 
> > Thank you for the clarification. I tried setting up the vp_vdpa
> > environment (scenario 3) but I ended up running into a problem
> > in the L1 VM.
> > 
> > I verified that nesting is enabled in KVM (L0):
> > 
> > $ grep -oE "(vmx|svm)" /proc/cpuinfo | sort | uniq
> > vmx
> > 
> > $ cat /sys/module/kvm_intel/parameters/nested
> > Y
> > 
> > There are no issues when booting L1. I start the VM by running:
> > 
> > $ sudo ./qemu/build/qemu-system-x86_64 \
> > -enable-kvm \
> > -drive file=//home/ig91/fedora_qemu_test_vm/L1.qcow2,media=disk,if=virtio \
> > -net nic,model=virtio \
> > -net user,hostfwd=tcp::2222-:22 \
> > -device intel-iommu,snoop-control=on \
> > -device virtio-net-pci,netdev=net0,disable-legacy=on,disable-modern=off,iommu_platform=on,event_idx=off,packed=on,bus=pcie.0,addr=0x4 \
> > -netdev tap,id=net0,script=no,downscript=no \
> > -nographic \
> > -m 2G \
> > -smp 2 \
> > -M q35 \
> > -cpu host \
> > 2>&1 | tee vm.log
> > 
> > Kernel version in L1:
> > 
> > # uname -a
> > Linux fedora 6.8.5-201.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Apr 11
> > 18:25:26 UTC 2024 x86_64 GNU/Linux
> Did you run the kernels with the arguments "iommu=pt intel_iommu=on"?
> You can print them with cat /proc/cmdline.

I missed this while setting up the environment. After setting the kernel
parameters I managed to move past this issue, but my environment in VirtualBox
was very unstable and kept crashing.
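
In case it is useful, this is one way to set and verify the options (the
grubby line assumes a Fedora-style grub2 setup such as the one in L1; any
bootloader works as long as both options end up in /proc/cmdline):

# grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"
# reboot
# cat /proc/cmdline
... intel_iommu=on iommu=pt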

I managed to get L1 to run on my host OS, so scenario 3 is now up and
running. However, the packed bit seems to be disabled in this scenario too.

L0 (host machine) specs:
- kernel version:
  6.6.46-1-lts

- QEMU version:
  9.0.50 (v8.2.0-5536-g16514611dc)

- vDPA version:
  iproute2-6.10.0

L1 specs:

- kernel version:
  6.8.5-201.fc39.x86_64

- QEMU version:
  9.0.91

- vDPA version:
  iproute2-6.10.0

L2 specs:
- kernel version:
  6.8.7-200.fc39.x86_64

I followed these steps to set up scenario 3:

==== In L0 ====

$ grep -oE "(vmx|svm)" /proc/cpuinfo | sort | uniq
vmx

$ cat /sys/module/kvm_intel/parameters/nested
Y

$ sudo ./qemu/build/qemu-system-x86_64 \
-enable-kvm \
-drive file=//home/valdaarhun/valdaarhun/qcow2_img/L1.qcow2,media=disk,if=virtio \
-net nic,model=virtio \
-net user,hostfwd=tcp::2222-:22 \
-device intel-iommu,snoop-control=on \
-device virtio-net-pci,netdev=net0,disable-legacy=on,disable-modern=off,iommu_platform=on,event_idx=off,packed=on,bus=pcie.0,addr=0x4 \
-netdev tap,id=net0,script=no,downscript=no \
-nographic \
-m 8G \
-smp 4 \
-M q35 \
-cpu host \
2>&1 | tee vm.log

==== In L1 ====

I verified that the following config variables are set as described in the
blog [1].

CONFIG_VIRTIO_VDPA=m
CONFIG_VDPA=m
CONFIG_VP_VDPA=m
CONFIG_VHOST_VDPA=m
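
A quick way to confirm these against the running kernel (assuming the config
is exported at /boot/config-$(uname -r), as it is on Fedora):

# grep -E "CONFIG_(VDPA|VP_VDPA|VHOST_VDPA|VIRTIO_VDPA)=" /boot/config-$(uname -r)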

# modprobe vdpa
# modprobe vhost_vdpa
# modprobe vp_vdpa

# lsmod | grep -i vdpa
vp_vdpa                 20480  0
vhost_vdpa              32768  0
vhost                   65536  1 vhost_vdpa
vhost_iotlb             16384  2 vhost_vdpa,vhost
vdpa                    36864  2 vp_vdpa,vhost_vdpa
irqbypass               12288  2 vhost_vdpa,kvm

# lspci | grep -i ethernet
00:04.0 Ethernet controller: Red Hat, Inc. Virtio 1.0 network device (rev 01)

# lspci -nn | grep 00:04.0
00:04.0 Ethernet controller [0200]: Red Hat, Inc. Virtio 1.0 network device [1af4:1041] (rev 01)

# echo 0000:00:04.0 > /sys/bus/pci/drivers/virtio-pci/unbind
# echo 1af4 1041 > /sys/bus/pci/drivers/vp-vdpa/new_id
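
To confirm the rebind took effect, the PCI function's driver symlink can be
checked; it should now point at vp-vdpa instead of virtio-pci:

# ls -l /sys/bus/pci/devices/0000:00:04.0/driver
... -> .../drivers/vp-vdpa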

# vdpa mgmtdev show
pci/0000:00:04.0: 
  supported_classes net < unknown class >
  max_supported_vqs 3
  dev_features  CSUM  GUEST_CSUM  CTRL_GUEST_OFFLOADS  MAC  GUEST_TSO4
  GUEST_TSO6  GUEST_ECN  GUEST_UFO  HOST_TSO4  HOST_TSO6  HOST_ECN
  HOST_UFO  MRG_RXBUF  STATUS  CTRL_VQ  CTRL_RX  CTRL_VLAN  CTRL_RX_EXTRA
  GUEST_ANNOUNCE  CTRL_MAC_ADDR  RING_INDIRECT_DESC  RING_EVENT_IDX
  VERSION_1  ACCESS_PLATFORM  bit_40  bit_54  bit_55  bit_56

# vdpa dev add name vdpa0 mgmtdev pci/0000:00:04.0
# vdpa dev show -jp
{
    "dev": {
        "vdpa0": {
            "type": "network",
            "mgmtdev": "pci/0000:00:04.0",
            "vendor_id": 6900,
            "max_vqs": 3,
            "max_vq_size": 256
        }
    }
}

# ls -l /sys/bus/vdpa/devices/vdpa0/driver
lrwxrwxrwx. 1 root root 0 Sep 11 17:07 /sys/bus/vdpa/devices/vdpa0/driver -> ../../../../bus/vdpa/drivers/vhost_vdpa

# ls -l /dev/ |grep vdpa
crw-------. 1 root root    239,   0 Sep 11 17:07 vhost-vdpa-0

# driverctl -b vdpa
vdpa0 vhost_vdpa [*]

Booting L2:

# ./qemu/build/qemu-system-x86_64 \
-nographic \
-m 4G \
-enable-kvm \
-M q35 \
-drive file=//root/L2.qcow2,media=disk,if=virtio \
-netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 \
-device virtio-net-pci,netdev=vhost-vdpa0,disable-legacy=on,disable-modern=off,event_idx=off,packed=on,bus=pcie.0,addr=0x7 \
-smp 4 \
-cpu host \
2>&1 | tee vm.log

==== In L2 ====

# cut -c 35 /sys/devices/pci0000:00/0000:00:07.0/virtio1/features
0
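
(For context, the features attribute is a string of '0'/'1' characters where
column N corresponds to feature bit N-1, so column 35 is VIRTIO_F_RING_PACKED,
i.e. bit 34. The same check written a bit more explicitly, with a shell
variable introduced only for readability:)

# FEATURES=$(cat /sys/devices/pci0000:00/0000:00:07.0/virtio1/features)
# echo "RING_PACKED=${FEATURES:34:1}"
RING_PACKED=0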

Based on what I have understood from the kernel source, both vhost_vdpa and
vp_vdpa support packed virtqueues, and v6.6.46 is more recent than the
minimum required kernel version. I am not sure where I am going wrong.
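
One thing I still want to cross-check in L1 is whether the parent device
offers the packed feature at all: if I am reading the iproute2 output
correctly, bit 34 is printed as RING_PACKED in the dev_features line of
"vdpa mgmtdev show", and it does not seem to appear in the output above.

# vdpa mgmtdev show | grep -o RING_PACKED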

> > However, I managed to set up scenario 4 successfully
> > and I see that packed vq is enabled in this case.
> > 
> > # cut -c 35 /sys/devices/pci0000:00/0000:00:04.0/virtio1/features
> > 1

So far scenario 4 is the only scenario where the virtio-net device has the
packed feature enabled. The difference between scenarios 3 and 4 in terms of
the kernel modules loaded is that scenario 4 uses virtio_vdpa while scenario
3 uses vdpa and vhost_vdpa.
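
For reference, this is how the binding can be compared between the two
scenarios:

Scenario 3 (what I have right now):

# ls -l /sys/bus/vdpa/devices/vdpa0/driver
... -> ../../../../bus/vdpa/drivers/vhost_vdpa

Scenario 4 (switching the binding; this assumes driverctl's set-override
works on the vdpa bus the same way it does on PCI, which I have not verified
myself):

# driverctl -b vdpa set-override vdpa0 virtio_vdpa
# ls -l /sys/bus/vdpa/devices/vdpa0/driver
... -> ../../../../bus/vdpa/drivers/virtio_vdpa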

Thanks,
Sahil

[1] https://www.redhat.com/en/blog/hands-vdpa-what-do-you-do-when-you-aint-got-hardware-part-2


