Hello Ben,

echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
sudo dpdk-devbind --bind=vfio-pci 0000:13:00.0

The above commands successfully bound the vfio-pci driver to the NIC.
However, as soon as I assigned the NIC to VPP and restarted the service,
the VM's CPU usage shot up and the VM crashed.
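
For completeness, these are the checks I used to confirm the state after
binding and before handing the NIC to VPP (commands as I ran them; the PCI
address is my VF's):

root@test:~# cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
root@test:~# dpdk-devbind --status | grep 0000:13:00.0
root@test:~# ls /sys/bus/pci/devices/0000:13:00.0/iommu_group/devices

The first should print 'Y', the second should show drv=vfio-pci, and the
third should list the device in its IOMMU group.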


Regarding the IOMMU: I have it enabled in the host's BIOS and via the ESXI
"Expose IOMMU to the guest OS" option, and I have set GRUB_CMDLINE_LINUX per
the wiki below:

https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning)

root@test:~# cat /etc/default/grub | grep GRUB_CMDLINE_LINUX
GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity"
GRUB_CMDLINE_LINUX="intel_iommu=on isolcpus=1-7 nohz_full=1-7
hugepagesz=1GB hugepages=16 default_hugepagesz=1GB"
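
After editing I ran update-grub and rebooted. In case it helps rule out a
boot-config issue, these are the sanity checks I used afterwards (a minimal
sketch; exact dmesg wording varies by kernel):

root@test:~# cat /proc/cmdline                 # should contain intel_iommu=on and the hugepage options
root@test:~# grep -i hugepages /proc/meminfo   # confirms the 1GB pages were reserved
root@test:~# dmesg | grep -i -e dmar -e iommu  # confirms the guest IOMMU initialized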

Full dmesg output can be found at: http://jcm.me/dmesg.txt
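
If it saves time, the lines most relevant to this problem can be filtered
with something like the below (the filter terms are my guess at what matters
here); the previous-boot kernel log should capture the crash itself, assuming
persistent journaling is enabled:

root@test:~# dmesg | grep -i -e vfio -e iommu -e dmar -e '0000:13:00.0'
root@test:~# journalctl -k -b -1 | tail -n 100   # kernel log from the boot that crashed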


On Tue, Sep 29, 2020 at 2:08 AM Benoit Ganne (bganne) <bga...@cisco.com> wrote:

> Hi Joshua,
>
> We'd need the whole dmesg. It looks like vfio-pci either failed to load or
> is not usable (probably an IOMMU issue) but it might be something else.
> Alternatively, you can try to disable vfio IOMMU support and bind to vfio
> again:
> ~# echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> ~# sudo dpdk-devbind --bind=vfio-pci 0000:13:00.0
>
> Best
> ben
>
> > -----Original Message-----
> > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Joshua Moore
> > Sent: Tuesday, 29 September 2020 00:26
> > To: Joshua Moore <j...@jcm.me>
> > Cc: Damjan Marion <dmar...@me.com>; Benoit Ganne (bganne) <bga...@cisco.com>; vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver
> >
> > Sorry, the previous dmesg was grepped for my VMXNET3 adapter, not the i40evf.
> > Correct dmesg:
> >
> > jmoore@test:~$ dmesg | grep 0000:13:00.0
> > [    0.259249] pci 0000:13:00.0: [8086:154c] type 00 class 0x020000
> > [    0.261432] pci 0000:13:00.0: reg 0x10: [mem 0xe7af0000-0xe7afffff 64bit pref]
> > [    0.266767] pci 0000:13:00.0: reg 0x1c: [mem 0xe7aec000-0xe7aeffff 64bit pref]
> > [    0.272845] pci 0000:13:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
> > [    1.179790] iommu: Adding device 0000:13:00.0 to group 8
> > [    2.196529] i40evf 0000:13:00.0: Multiqueue Enabled: Queue pair count = 4
> > [    2.196799] i40evf 0000:13:00.0: MAC address: 00:0c:29:58:7f:b5
> > [    2.196865] i40evf 0000:13:00.0: GRO is enabled
> > [    2.510262] i40evf 0000:13:00.0 ens224: renamed from eth2
> >
> >
> > On Mon, Sep 28, 2020 at 4:57 PM Joshua Moore via lists.fd.io <j=jcm...@lists.fd.io> wrote:
> >
> >
> >       Hi Damjan,
> >
> >       I am running Ubuntu 18.04 LTS with kernel 4.15.0-118-generic
> >
> >
> >       See below dmesg output.
> >
> >       jmoore@test:~$ dmesg | grep 0000:03:00.0
> >       [    0.223459] pci 0000:03:00.0: [15ad:07b0] type 00 class 0x020000
> >       [    0.225126] pci 0000:03:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
> >       [    0.227304] pci 0000:03:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
> >       [    0.229121] pci 0000:03:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
> >       [    0.231298] pci 0000:03:00.0: reg 0x1c: [io  0x4000-0x400f]
> >       [    0.237119] pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
> >       [    0.237550] pci 0000:03:00.0: supports D1 D2
> >       [    0.237551] pci 0000:03:00.0: PME# supported from D0 D1 D2 D3hot D3cold
> >       [    0.237774] pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
> >       [    0.353290] pci 0000:03:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
> >       [    1.179463] iommu: Adding device 0000:03:00.0 to group 6
> >       [    2.108455] vmxnet3 0000:03:00.0: # of Tx queues : 8, # of Rx queues : 8
> >       [    2.110321] vmxnet3 0000:03:00.0 eth0: NIC Link is Up 10000 Mbps
> >       [    2.471328] vmxnet3 0000:03:00.0 ens160: renamed from eth0
> >
> >
> >
> >
> >       On Mon, Sep 28, 2020 at 1:02 PM Damjan Marion <dmar...@me.com> wrote:
> >
> >
> >
> >               What message do you see in dmesg? What is the kernel version?
> >
> >
> >
> >                       On 28.09.2020., at 19:47, Joshua Moore <j...@jcm.me> wrote:
> >
> >                       Sorry, I'm still hitting an issue where I cannot create the interface in VPP:
> >
> >
> >                               vpp# create interface avf 0000:13:00.0
> >                               create interface avf: device not bound to 'vfio-pci' or 'uio_pci_generic' kernel module
> >
> >
> >
> >                       So I tried to bind the NIC to vfio-pci:
> >
> >
> >                               root@test:~# modprobe vfio-pci
> >                               root@test:~# /usr/local/vpp/vpp-config/scripts/dpdk-devbind.py -s
> >
> >                               Network devices using DPDK-compatible driver
> >                               ============================================
> >                               <none>
> >
> >                               Network devices using kernel driver
> >                               ===================================
> >                               0000:13:00.0 'Ethernet Virtual Function 700 Series' if=ens224 drv=i40evf unused=
> >
> >                               root@test:~# /usr/local/vpp/vpp-config/scripts/dpdk-devbind.py --bind vfio-pci 13:00.0
> >                               Error - no supported modules(DPDK driver) are loaded
> >
> >
> >
> >                       Thoughts?
> >
> >                       On Mon, Sep 28, 2020 at 11:43 AM Benoit Ganne (bganne) <bga...@cisco.com> wrote:
> >
> >
> >                               Hi Joshua,
> >
> >                               Your understanding is correct; however, you do not need to set up the VFs if they are already correctly set up by ESXI.
> >                               Just create the AVF interface directly by specifying the VF PCI address.
> >
> >                               ben
> >
> >                               > -----Original Message-----
> >                               > From: Joshua Moore <j...@jcm.me>
> >                               > Sent: Monday, 28 September 2020 17:48
> >                               > To: Benoit Ganne (bganne) <bga...@cisco.com>
> >                               > Cc: vpp-dev@lists.fd.io
> >                               > Subject: Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver
> >                               >
> >                               > Hello Benoit,
> >                               >
> >                               > Looking at the script for AVF, it states:
> >                               >
> >                               > # Setup one VF on PF 0000:3b:00.0 and assign MAC address
> >                               > setup 0000:3b:00.0 00:11:22:33:44:00
> >                               > # Setup one VF on PF 0000:3b:00.1 and assign MAC address
> >                               > setup 0000:3b:00.1 00:11:22:33:44:01
> >                               >
> >                               > This seems to assume the entire PF NIC is exposed to the VM and the VM is
> >                               > responsible for owning the configuration of the WHOLE PF to set up the VF.
> >                               > This also makes sense to me considering that the script is looking for the
> >                               > i40en driver (physical) and not the i40evf driver (virtual). My
> >                               > understanding is that this will not work with my ESXI setup, as ESXI owns
> >                               > the configuration of the PF (physical NIC), assigns the VFs on the NIC
> >                               > end, and exposes just the VF to the VM.
> >                               >
> >                               > Does this make sense or am I misunderstanding something?
> >                               >
> >                               > If so, then how can the AVF plugin/driver consume just the VF NIC already
> >                               > assigned to the VM and not try to set up a new VF?
> >                               >
> >                               >
> >                               > Thanks!
> >                               >
> >                               > -Josh
> >                               >
> >                               > On Mon, Sep 28, 2020 at 2:40 AM Benoit Ganne (bganne) <bga...@cisco.com> wrote:
> >                               >
> >                               >
> >                               >       Hi,
> >                               >
> >                               >       It should work with AVF as it is using VFs, not the PF, see
> >                               > https://docs.fd.io/vpp/21.01/d1/def/avf_plugin_doc.html
> >                               >       You should bind the VF with vfio-pci first though, so that it is
> >                               > usable by userspace drivers such as the VPP AVF plugin.
> >                               >       If your system crashes when doing so, it is a bug with your system.
> >                               >
> >                               >       Best
> >                               >       ben
> >                               >
> >                               >       > -----Original Message-----
> >                               >       > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of j...@jcm.me
> >                               >       > Sent: Monday, 28 September 2020 01:29
> >                               >       > To: vpp-dev@lists.fd.io
> >                               >       > Subject: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver
> >                               >       >
> >                               >       > Hello,
> >                               >       >
> >                               >       > Is there any support for VPP to talk directly to ESXI-assigned VFs
> >                               >       > via SR-IOV? I saw the AVF plugin, but I don't want VPP to control the
> >                               >       > whole PF (physical NIC); rather, I would like to have ESXI control the
> >                               >       > mapping of VFs (SR-IOV) and VPP (or DPDK) consume the VF natively in
> >                               >       > the VM, so that I can run multiple VMs on the same physical NIC while
> >                               >       > benefiting from bypassing the vSwitch in ESXI. Right now I'm running
> >                               >       > VPP on an Ubuntu 18.04 VM and I see the SR-IOV NIC with the i40evf
> >                               >       > driver.
> >                               >       >
> >                               >       > I tried binding the SR-IOV NIC to the vfio driver, but this causes the
> >                               >       > CPU of the VM to skyrocket and crash. I don't think using vfio is the
> >                               >       > right approach and feel like the solution here is really simple. Any
> >                               >       > suggestions?