Re: [vpp-dev] Endpoint-Independent Mapping on Deterministic NAT

2020-09-30 Thread Joshua Moore
Just to clarify, the filtering behavior I’m looking for is often known as “full 
cone” or “pure cone” NAT.

> On Sep 29, 2020, at 6:33 PM, Joshua Moore  wrote:
> 
> 
> Hello,
> 
> I have a need to relax the session lookup criteria on out2in packet 
> processing with NAT44 deterministic mode. The behavior I am looking for is 
> this: as long as there is an initial session for a given destination IP:port, 
> any return packet to the translated port should be allowed regardless of 
> the source IP. Essentially, if I open a session from 100.65.0.2 to 
> 2.2.2.2:3074 and VPP creates a translation entry, then the out2in processing 
> should accept packets from any source IP (n:3074) and not restrict the 
> translation to return packets from 2.2.2.2 only.
> 
> It looks like this may have been possible with the feature below, but it's not 
> available in deterministic mode:
> https://wiki.fd.io/view/VPP/NAT#Enable_or_disable_forwarding
> 
> Are there any thoughts on this? Any suggestions on where I could perhaps 
> compile my own version of VPP that allows endpoint-independent mapping?
> 
> 
> 
> Thanks!
> 
> 
> --Josh




[vpp-dev] Endpoint-Independent Mapping on Deterministic NAT

2020-09-29 Thread Joshua Moore
Hello,

I have a need to relax the session lookup criteria on out2in packet
processing with NAT44 deterministic mode. The behavior I am looking for is
this: as long as there is an initial session for a given destination IP:port,
any return packet to the translated port should be allowed regardless of the
source IP. Essentially, if I open a session from 100.65.0.2 to 2.2.2.2:3074
and VPP creates a translation entry, then the out2in processing should accept
packets from any source IP (n:3074) and not restrict the translation to
return packets from 2.2.2.2 only.

It looks like this may have been possible with the feature below, but it's
not available in deterministic mode:
https://wiki.fd.io/view/VPP/NAT#Enable_or_disable_forwarding
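
For reference, in the regular (non-deterministic) NAT44 plugin the feature on
that wiki page is toggled with a single CLI command (a sketch only; exact
syntax may differ by VPP release):

vpp# nat44 forwarding enable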

Are there any thoughts on this? Any suggestions on where I could perhaps
compile my own version of VPP that allows endpoint-independent mapping?
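
A rough sketch of building a patched tree for this experiment (standard VPP
build steps; the grep target is inferred from the "nat44-det-in2out" node name
seen in traces, so the out2in file/node name may differ by release):

git clone https://gerrit.fd.io/r/vpp && cd vpp
make install-dep
# locate the deterministic-NAT out2in session lookup to relax the match
grep -rn "nat44-det-out2in" src/plugins/nat/
make build-release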



Thanks!


--Josh




Re: [vpp-dev] VPP Deterministic NAT Same in/out Interface Not Matching Session

2020-09-29 Thread Joshua Moore
Yep, it definitely looks like this is unsupported. I moved to separate in/out
interfaces and packets started flowing as expected.
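
A minimal sketch of the separated configuration (the inside interface name
here is a placeholder; use whichever interface faces the 100.65.0.0/22 hosts):

set int nat44 in GigabitEthernet3/0/1 out GigabitEthernet3/0/0
nat44 deterministic add in 100.65.0.0/22 out 1.1.1.0/29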



On Tue, Sep 29, 2020 at 2:35 PM Joshua Moore via lists.fd.io  wrote:

> Hello,
>
> Do we know if the same in/out interface for NAT in deterministic mode is
> supported in VPP? I am seeing a strange behavior where return traffic is
> not matching the session. For example, see session below where a DNS
> request is initially captured outbound to 8.8.8.8:
> http://jcm.me/session.txt
>
> As you can see, this is recorded as 1.1.1.0:2325 for the outside
> translated IP/port:
>
> in 100.65.0.2:35573 out 1.1.1.0:2325 external host 8.8.8.8:53 state:
> udp-active expire: 869
>
> When reply comes back from 8.8.8.8 though to 1.1.1.0:2325 the packet is
> dropped. I captured this in the trace: http://jcm.me/trace.txt
>
> The only thing I can think of here that may be a little odd with my setup
> is that I am using the same interface for inside and outside. See my VPP
> config below:
>
> jmoore@test:~$ cat /etc/vpp/setup.gate
> set interface ip address loop0 1.1.1.1/29
> set interface state loop0 up
> set interface ip address GigabitEthernet3/0/0 172.16.30.250/24
> set int nat44 in GigabitEthernet3/0/0 out GigabitEthernet3/0/0
> nat44 deterministic add in 100.65.0.0/22 out 1.1.1.0/29
> set interface state GigabitEthernet3/0/0 up
> ip route add 0.0.0.0/0 via 172.16.30.1
>
> Any reason that the trace is showing the below?
>
> 00:09:23:047897: drop
>   nat44-det-in2out: No translation
>
> 
>
>




[vpp-dev] VPP Deterministic NAT Same in/out Interface Not Matching Session

2020-09-29 Thread Joshua Moore
Hello,

Do we know if the same in/out interface for NAT in deterministic mode is 
supported in VPP? I am seeing a strange behavior where return traffic is not 
matching the session. For example, see session below where a DNS request is 
initially captured outbound to 8.8.8.8: http://jcm.me/session.txt

As you can see, this is recorded as 1.1.1.0:2325 for the outside translated 
IP/port:

in 100.65.0.2:35573 out 1.1.1.0:2325 external host 8.8.8.8:53 state: udp-active 
expire: 869

When the reply comes back from 8.8.8.8 to 1.1.1.0:2325, though, the packet is
dropped. I captured this in the trace: http://jcm.me/trace.txt

The only thing I can think of here that may be a little odd with my setup is 
that I am using the same interface for inside and outside. See my VPP config 
below:

jmoore@test:~$ cat /etc/vpp/setup.gate
set interface ip address loop0 1.1.1.1/29
set interface state loop0 up
set interface ip address GigabitEthernet3/0/0 172.16.30.250/24
set int nat44 in GigabitEthernet3/0/0 out GigabitEthernet3/0/0
nat44 deterministic add in 100.65.0.0/22 out 1.1.1.0/29
set interface state GigabitEthernet3/0/0 up
ip route add 0.0.0.0/0 via 172.16.30.1

Any reason that the trace is showing the below?

00:09:23:047897: drop
nat44-det-in2out: No translation
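
For reference, a trace like the one above can be captured from the VPP CLI
(assuming packets arrive through the dpdk-input node):

vpp# trace add dpdk-input 100
vpp# show trace
vpp# clear trace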




Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver

2020-09-29 Thread Joshua Moore
The driver here is performance. I like the idea of dedicating CPU cores to 
DPDK and avoiding kernel interrupts. My understanding was that the IOMMU was 
needed for this, but I may be completely off with that thinking.

In retrospect of this discussion, it doesn't seem that the IOMMU offers any 
performance increase, so I should simply stick with SR-IOV + 
enable_unsafe_noiommu_mode.

Should I still expect 100% CPU with the IOMMU disabled? Will I still be able 
to dedicate CPU cores to VPP without the IOMMU?
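
For the core-dedication part, a minimal startup.conf sketch (core numbers are
illustrative and should match the isolcpus setting; this is independent of
whether the IOMMU is enabled):

cpu {
  main-core 1
  corelist-workers 2-7
}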


Thanks for working with me on this. 


—Josh

> On Sep 29, 2020, at 9:35 AM, Yichen Wang (yicwang)  wrote:
> 
> 
> I am not sure if it is worth trying to add “iommu=pt intel_iommu=on” to the 
> GRUB config? We normally do it on both the Linux host and guest, but in the 
> case of ESXi I am not sure if that will help with the vfio-pci IOMMU issue…
>  
> Regards,
> Yichen
>  
> From:  on behalf of "Damjan Marion via lists.fd.io" 
> 
> Reply-To: "dmar...@me.com" 
> Date: Tuesday, September 29, 2020 at 7:48 AM
> To: Joshua Moore 
> Cc: "Benoit Ganne (bganne)" , "vpp-dev@lists.fd.io" 
> 
> Subject: Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver
>  
> Joshua,
>  
> What is your motivation for using the IOMMU inside the VM?
> It is typically used by the hypervisor to protect against misbehaving VMs.
> So unless you want to do nested virtualisation, an IOMMU inside your VM 
> doesn’t bring a lot of value.
> Can you simply try turning off the IOMMU in the VM and turning on 
> “enable_unsafe_noiommu_mode”?
>  
> Another possible problem with AVF may be the version of the ESXi PF driver. 
> AVF communicates with the PF over the virtual channel, and the PF driver 
> needs to support it. In Linux that works well with recent kernels, but I’m 
> not sure what the state of the ESXi i40e PF driver is….
>  
> — 
> Damjan
>  
> On 29.09.2020., at 14:04, Joshua Moore  wrote:
>  
> Ben,
>  
> Of course there's a PCI switch. My main purpose of leveraging SR-IOV with VF 
> allocation is to allow the internal eswitch on the Intel NIC to handle 
> switching in hardware instead of the vswitch on the ESXI hypervisor. I don't 
> really care so much about isolation of the PCI devices nor the risk of bad 
> firmware on the NIC. I will control/trust all of the VMs with access to the 
> VFs as well as the device attached to the PF.
>  
> So just to confirm, I need to expect 100% CPU utilization with VPP/DPDK + 
> IOMMU? If so, what's the best way to monitor CPU-related performance impact 
> if I always see 100%? Also I want to confirm that enable_unsafe_noiommu_mode 
> still enables the performance benefits of SR-IOV and the only tradeoff is the 
> aforementioned isolation/security concern?
>  
>  
> Thanks for your help,
>  
> --Josh
>  
> On Tue, Sep 29, 2020 at 5:23 AM Benoit Ganne (bganne)  
> wrote:
> Hi Joshua,
> 
> Glad it solves the vfio issue. Looking at the dmesg output, I suspect the 
> issue is that the PCIe topology advertised is not fully supported by vfio + 
> IOMMU: it looks like your VF is behind a PCIe switch, so the CPU PCIe IOMMU 
> root-complex port cannot guarantee full isolation: all devices behind the 
> PCIe switch can talk peer-to-peer directly w/o going through the CPU PCIe 
> root-complex port.
> As the CPU IOMMU cannot fully isolate your device, vfio refuses to bind 
> unless you allow for unsafe IOMMU config - rightfully, as it seems to be your 
> case.
> Anyway, it still means you should benefit from the IOMMU to prevent the 
> device from reading/writing anywhere in host memory. You might not, however, 
> prevent malicious firmware running on the NIC from harming other devices 
> behind the same PCIe switch.
> 
> Regarding the VM crash, note that VPP is polling the interfaces so it will 
> always use 100% CPU.
> Does the VM also crash if you stress-test the CPU, e.g.
> ~# stress-ng --matrix 0 -t 1m
> 
> Best
> ben
> 
> > -Original Message-
> > From: Joshua Moore 
> > Sent: mardi 29 septembre 2020 12:07
> > To: Benoit Ganne (bganne) 
> > Cc: Damjan Marion ; vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver
> > 
> > Hello Ben,
> > 
> > echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> > sudo dpdk-devbind --bind=vfio-pci :13:00.0
> > 
> > 
> > The above commands successfully resulted in vfio-pci driver binding to the
> > NIC. However, as soon as I assigned the NIC to VPP and restarted the
> > service, my VM CPU shot up and the VM crashes.
> > 
> > 
> > Regarding IOMMU I do have it enabled in the host's BIOS, ESXI "Expose
> > IOMMU to the guest OS" option...

Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver

2020-09-29 Thread Joshua Moore
Ben,

Of course there's a PCI switch. My main purpose of leveraging SR-IOV with
VF allocation is to allow the internal eswitch on the Intel NIC to handle
switching in hardware instead of the vswitch on the ESXI hypervisor. I
don't really care so much about isolation of the PCI devices nor the risk
of bad firmware on the NIC. I will control/trust all of the VMs with access
to the VFs as well as the device attached to the PF.

So just to confirm: should I expect 100% CPU utilization with VPP/DPDK +
IOMMU? If so, what's the best way to monitor CPU-related performance impact
if I always see 100%? Also, I want to confirm that
enable_unsafe_noiommu_mode still provides the performance benefits of SR-IOV
and that the only tradeoff is the aforementioned isolation/security concern.
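
Since the polling workers always show 100% CPU to the OS, actual load is
better judged from VPP's own per-node counters, e.g.:

vpp# show runtime
vpp# clear runtime

As a worker approaches saturation, the vectors/call column climbs toward the
frame size of 256.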


Thanks for your help,

--Josh

On Tue, Sep 29, 2020 at 5:23 AM Benoit Ganne (bganne) 
wrote:

> Hi Joshua,
>
> Glad it solves the vfio issue. Looking at the dmesg output, I suspect the
> issue is that the PCIe topology advertised is not fully supported by vfio +
> IOMMU: it looks like your VF is behind a PCIe switch, so the CPU PCIe IOMMU
> root-complex port cannot guarantee full isolation: all devices behind the
> PCIe switch can talk peer-to-peer directly w/o going through the CPU PCIe
> root-complex port.
> As the CPU IOMMU cannot fully isolate your device, vfio refuses to bind
> unless you allow for unsafe IOMMU config - rightfully, as it seems to be
> your case.
> Anyway, it still means you should benefit from the IOMMU to prevent the
> device from reading/writing anywhere in host memory. You might not, however,
> prevent malicious firmware running on the NIC from harming other devices
> behind the same PCIe switch.
>
> Regarding the VM crash, note that VPP is polling the interfaces so it will
> always use 100% CPU.
> Does the VM also crash if you stress-test the CPU, e.g.
> ~# stress-ng --matrix 0 -t 1m
>
> Best
> ben
>
> > -Original Message-
> > From: Joshua Moore 
> > Sent: mardi 29 septembre 2020 12:07
> > To: Benoit Ganne (bganne) 
> > Cc: Damjan Marion ; vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough)
> Driver
> >
> > Hello Ben,
> >
> > echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> > sudo dpdk-devbind --bind=vfio-pci :13:00.0
> >
> >
> > The above commands successfully resulted in vfio-pci driver binding to
> the
> > NIC. However, as soon as I assigned the NIC to VPP and restarted the
> > service, my VM CPU shot up and the VM crashes.
> >
> >
> > Regarding IOMMU I do have it enabled in the host's BIOS, ESXI "Expose
> > IOMMU to the guest OS" option, and I have set the GRUB_CMDLINE_LINUX per
> > the below wiki:
> >
> > https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning)
> >
> > root@test:~# cat /etc/default/grub | grep GRUB_CMDLINE_LINUX
> > GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity"
> > GRUB_CMDLINE_LINUX="intel_iommu=on isolcpus=1-7 nohz_full=1-7
> > hugepagesz=1GB hugepages=16 default_hugepagesz=1GB"
> >
> > Full dmesg output can be found at: http://jcm.me/dmesg.txt
>




Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver

2020-09-29 Thread Joshua Moore
Hello Ben,

echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
sudo dpdk-devbind --bind=vfio-pci 0000:13:00.0

The above commands successfully resulted in the vfio-pci driver binding to the
NIC. However, as soon as I assigned the NIC to VPP and restarted the
service, my VM CPU shot up and the VM crashed.
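
For what it's worth, the usual places to look for clues after such a crash
(assuming VPP runs under systemd as vpp.service; -b -1 selects the previous
boot):

journalctl -u vpp -b -1 --no-pager | tail -n 50
sudo vppctl show pci
sudo vppctl show log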


Regarding IOMMU, I do have it enabled in the host's BIOS and via the ESXI
"Expose IOMMU to the guest OS" option, and I have set GRUB_CMDLINE_LINUX per
the wiki below:

https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning)

root@test:~# cat /etc/default/grub | grep GRUB_CMDLINE_LINUX
GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity"
GRUB_CMDLINE_LINUX="intel_iommu=on isolcpus=1-7 nohz_full=1-7
hugepagesz=1GB hugepages=16 default_hugepagesz=1GB"

Full dmesg output can be found at: http://jcm.me/dmesg.txt
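
To confirm those kernel parameters actually took effect after reboot, the
standard checks are:

cat /proc/cmdline
grep -i huge /proc/meminfo
ls /sys/kernel/iommu_groups/ | wc -l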


On Tue, Sep 29, 2020 at 2:08 AM Benoit Ganne (bganne) 
wrote:

> Hi Joshua,
>
> We'd need the whole dmesg. It looks like vfio-pci either failed to load or
> is not usable (probably an IOMMU issue) but it might be something else.
> Alternatively, you can try to disable vfio IOMMU support and bind to vfio
> again:
> ~# echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> ~# sudo dpdk-devbind --bind=vfio-pci :13:00.0
>
> Best
> ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Joshua
> Moore
> > Sent: mardi 29 septembre 2020 00:26
> > To: Joshua Moore 
> > Cc: Damjan Marion ; Benoit Ganne (bganne)
> > ; vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough)
> Driver
> >
> > Sorry, the previous dmesg was grepped for my VMXNET3 adapter, not the i40evf.
> > Correct dmesg:
> >
> > jmoore@test:~$ dmesg | grep :13:00.0
> > [0.259249] pci :13:00.0: [8086:154c] type 00 class 0x02
> > [0.261432] pci :13:00.0: reg 0x10: [mem 0xe7af-0xe7af
> > 64bit pref]
> > [0.266767] pci :13:00.0: reg 0x1c: [mem 0xe7aec000-0xe7ae
> > 64bit pref]
> > [0.272845] pci :13:00.0: disabling ASPM on pre-1.1 PCIe device.
> > You can enable it with 'pcie_aspm=force'
> > [1.179790] iommu: Adding device :13:00.0 to group 8
> > [2.196529] i40evf :13:00.0: Multiqueue Enabled: Queue pair count
> =
> > 4
> > [2.196799] i40evf 0000:13:00.0: MAC address: 00:0c:29:58:7f:b5
> > [2.196865] i40evf :13:00.0: GRO is enabled
> > [2.510262] i40evf :13:00.0 ens224: renamed from eth2
> >
> >
> > On Mon, Sep 28, 2020 at 4:57 PM Joshua Moore via lists.fd.io wrote:
> >
> >
> >   Hi Damjan,
> >
> >   I am running Ubuntu 18.04 LTS with kernel 4.15.0-118-generic
> >
> >
> >   See below dmesg output.
> >
> >   jmoore@test:~$ dmesg | grep :03:00.0
> >   [0.223459] pci :03:00.0: [15ad:07b0] type 00 class 0x02
> >   [0.225126] pci :03:00.0: reg 0x10: [mem 0xfd4fc000-
> > 0xfd4fcfff]
> >   [0.227304] pci :03:00.0: reg 0x14: [mem 0xfd4fd000-
> > 0xfd4fdfff]
> >   [0.229121] pci :03:00.0: reg 0x18: [mem 0xfd4fe000-
> > 0xfd4f]
> >   [0.231298] pci :03:00.0: reg 0x1c: [io  0x4000-0x400f]
> >   [0.237119] pci :03:00.0: reg 0x30: [mem 0x-
> > 0x pref]
> >   [0.237550] pci :03:00.0: supports D1 D2
> >   [0.237551] pci :03:00.0: PME# supported from D0 D1 D2 D3hot
> > D3cold
> >   [0.237774] pci :03:00.0: disabling ASPM on pre-1.1 PCIe
> > device.  You can enable it with 'pcie_aspm=force'
> >   [0.353290] pci :03:00.0: BAR 6: assigned [mem 0xfd40-
> > 0xfd40 pref]
> >   [1.179463] iommu: Adding device :03:00.0 to group 6
> >   [2.108455] vmxnet3 :03:00.0: # of Tx queues : 8, # of Rx
> > queues : 8
> >   [2.110321] vmxnet3 :03:00.0 eth0: NIC Link is Up 1 Mbps
> >   [2.471328] vmxnet3 :03:00.0 ens160: renamed from eth0
> >
> >
> >
> >
> >   On Mon, Sep 28, 2020 at 1:02 PM Damjan Marion <dmar...@me.com> wrote:
> >
> >
> >
> >   What message do you see in dmesg? What is the kernel
> version?
> >
> >
> >
> >   On 28.09.2020., at 19:47, Joshua Moore <j...@jcm.me> wrote:
> >
> >   Sorry, I'm still hitting an issue where I cannot create the
> >   interface in VPP...

Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver

2020-09-28 Thread Joshua Moore
Sorry, the previous dmesg was grepped for my VMXNET3 adapter, not the i40evf.
Correct dmesg:

jmoore@test:~$ dmesg | grep :13:00.0
[0.259249] pci :13:00.0: [8086:154c] type 00 class 0x02
[0.261432] pci :13:00.0: reg 0x10: [mem 0xe7af-0xe7af 64bit
pref]
[0.266767] pci :13:00.0: reg 0x1c: [mem 0xe7aec000-0xe7ae 64bit
pref]
[0.272845] pci :13:00.0: disabling ASPM on pre-1.1 PCIe device.
You can enable it with 'pcie_aspm=force'
[1.179790] iommu: Adding device :13:00.0 to group 8
[2.196529] i40evf :13:00.0: Multiqueue Enabled: Queue pair count = 4
[2.196799] i40evf :13:00.0: MAC address: 00:0c:29:58:7f:b5
[2.196865] i40evf :13:00.0: GRO is enabled
[2.510262] i40evf :13:00.0 ens224: renamed from eth2

On Mon, Sep 28, 2020 at 4:57 PM Joshua Moore via lists.fd.io  wrote:

> Hi Damjan,
>
> I am running Ubuntu 18.04 LTS with kernel 4.15.0-118-generic
>
>
> See below dmesg output.
>
> jmoore@test:~$ dmesg | grep :03:00.0
> [0.223459] pci :03:00.0: [15ad:07b0] type 00 class 0x02
> [0.225126] pci :03:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
> [0.227304] pci :03:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
> [0.229121] pci :03:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4f]
> [0.231298] pci :03:00.0: reg 0x1c: [io  0x4000-0x400f]
> [0.237119] pci :03:00.0: reg 0x30: [mem 0x-0x pref]
> [0.237550] pci :03:00.0: supports D1 D2
> [0.237551] pci :03:00.0: PME# supported from D0 D1 D2 D3hot D3cold
> [0.237774] pci :03:00.0: disabling ASPM on pre-1.1 PCIe device.
> You can enable it with 'pcie_aspm=force'
> [0.353290] pci :03:00.0: BAR 6: assigned [mem
> 0xfd40-0xfd40 pref]
> [1.179463] iommu: Adding device :03:00.0 to group 6
> [2.108455] vmxnet3 :03:00.0: # of Tx queues : 8, # of Rx queues : 8
> [2.110321] vmxnet3 :03:00.0 eth0: NIC Link is Up 1 Mbps
> [2.471328] vmxnet3 :03:00.0 ens160: renamed from eth0
>
>
>
> On Mon, Sep 28, 2020 at 1:02 PM Damjan Marion  wrote:
>
>>
>> What message do you see in dmesg? What is the kernel version?
>>
>> On 28.09.2020., at 19:47, Joshua Moore  wrote:
>>
>> Sorry, I'm still hitting an issue where I cannot create the interface in
>> VPP:
>>
>> vpp# create interface avf :13:00.0
>> create interface avf: device not bound to 'vfio-pci' or 'uio_pci_generic'
>> kernel module
>>
>>
>>
>> So I tried to bind the NIC to vfio-pci:
>>
>> root@test:~# modprobe vfio-pci
>> root@test:~# /usr/local/vpp/vpp-config/scripts/dpdk-devbind.py -s
>>
>> Network devices using DPDK-compatible driver
>> 
>> 
>>
>> Network devices using kernel driver
>> ===
>> :13:00.0 'Ethernet Virtual Function 700 Series' if=ens224 drv=i40evf
>> unused=
>>
>> root@test:~# /usr/local/vpp/vpp-config/scripts/dpdk-devbind.py --bind
>> vfio-pci 13:00.0
>> Error - no supported modules(DPDK driver) are loaded
>>
>>
>>
>> Thoughts?
>>
>> On Mon, Sep 28, 2020 at 11:43 AM Benoit Ganne (bganne) 
>> wrote:
>>
>>> Hi Johsua,
>>>
>>> Your understanding is correct, however you do not need to setup the VFs
>>> if it is already correctly setup by ESXI.
>>> Just create the AVF interface directly by specifying the VF PCI address.
>>>
>>> ben
>>>
>>> > -Original Message-
>>> > From: Joshua Moore 
>>> > Sent: lundi 28 septembre 2020 17:48
>>> > To: Benoit Ganne (bganne) 
>>> > Cc: vpp-dev@lists.fd.io
>>> > Subject: Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough)
>>> Driver
>>> >
>>> > Hello Benoit,
>>> >
>>> > Looking at the script for AVF, it states:
>>> >
>>> > # Setup one VF on PF :3b:00.0 and assign MAC address
>>> > setup :3b:00.0 00:11:22:33:44:00
>>> > # Setup one VF on PF :3b:00.1 and assign MAC address
>>> > setup :3b:00.1 00:11:22:33:44:01
>>> >
>>> > This seems to assume the entire PF NIC is exposed to the VM and the VM
>>> is
>>> > responsible for owning the configuration of the WHOLE PF to setup the
>>> VF.
>>> > This also makes sense to me considering that the script is looking for
>>> > i40en driver (physical) and not i40evf driver (virtual). My
>>> understanding
>>> > is that this will not work with my ESXI setup as ESXI owns the
>>> > configuration of the PF...

Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver

2020-09-28 Thread Joshua Moore
Hi Damjan,

I am running Ubuntu 18.04 LTS with kernel 4.15.0-118-generic


See below dmesg output.

jmoore@test:~$ dmesg | grep :03:00.0
[0.223459] pci :03:00.0: [15ad:07b0] type 00 class 0x02
[0.225126] pci :03:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
[0.227304] pci :03:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
[0.229121] pci :03:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4f]
[0.231298] pci :03:00.0: reg 0x1c: [io  0x4000-0x400f]
[0.237119] pci :03:00.0: reg 0x30: [mem 0x-0x pref]
[0.237550] pci :03:00.0: supports D1 D2
[0.237551] pci :03:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[0.237774] pci :03:00.0: disabling ASPM on pre-1.1 PCIe device.
You can enable it with 'pcie_aspm=force'
[0.353290] pci :03:00.0: BAR 6: assigned [mem 0xfd40-0xfd40
pref]
[1.179463] iommu: Adding device :03:00.0 to group 6
[2.108455] vmxnet3 :03:00.0: # of Tx queues : 8, # of Rx queues : 8
[2.110321] vmxnet3 :03:00.0 eth0: NIC Link is Up 1 Mbps
[2.471328] vmxnet3 :03:00.0 ens160: renamed from eth0



On Mon, Sep 28, 2020 at 1:02 PM Damjan Marion  wrote:

>
> What message do you see in dmesg? What is the kernel version?
>
> On 28.09.2020., at 19:47, Joshua Moore  wrote:
>
> Sorry, I'm still hitting an issue where I cannot create the interface in
> VPP:
>
> vpp# create interface avf :13:00.0
> create interface avf: device not bound to 'vfio-pci' or 'uio_pci_generic'
> kernel module
>
>
>
> So I tried to bind the NIC to vfio-pci:
>
> root@test:~# modprobe vfio-pci
> root@test:~# /usr/local/vpp/vpp-config/scripts/dpdk-devbind.py -s
>
> Network devices using DPDK-compatible driver
> 
> 
>
> Network devices using kernel driver
> ===
> :13:00.0 'Ethernet Virtual Function 700 Series' if=ens224 drv=i40evf
> unused=
>
> root@test:~# /usr/local/vpp/vpp-config/scripts/dpdk-devbind.py --bind
> vfio-pci 13:00.0
> Error - no supported modules(DPDK driver) are loaded
>
>
>
> Thoughts?
>
> On Mon, Sep 28, 2020 at 11:43 AM Benoit Ganne (bganne) 
> wrote:
>
>> Hi Johsua,
>>
>> Your understanding is correct, however you do not need to setup the VFs
>> if it is already correctly setup by ESXI.
>> Just create the AVF interface directly by specifying the VF PCI address.
>>
>> ben
>>
>> > -Original Message-
>> > From: Joshua Moore 
>> > Sent: lundi 28 septembre 2020 17:48
>> > To: Benoit Ganne (bganne) 
>> > Cc: vpp-dev@lists.fd.io
>> > Subject: Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough)
>> Driver
>> >
>> > Hello Benoit,
>> >
>> > Looking at the script for AVF, it states:
>> >
>> > # Setup one VF on PF :3b:00.0 and assign MAC address
>> > setup :3b:00.0 00:11:22:33:44:00
>> > # Setup one VF on PF :3b:00.1 and assign MAC address
>> > setup :3b:00.1 00:11:22:33:44:01
>> >
>> > This seems to assume the entire PF NIC is exposed to the VM and the VM
>> is
>> > responsible for owning the configuration of the WHOLE PF to setup the
>> VF.
>> > This also makes sense to me considering that the script is looking for
>> > i40en driver (physical) and not i40evf driver (virtual). My
>> understanding
>> > is that this will not work with my ESXI setup as ESXI owns the
>> > configuration of the PF (physical NIC) and is assigning the VFs from the
>> > NIC end is exposing just the VF to the VM.
>> >
>> > Does this make sense or am I misunderstanding something?
>> >
>> > If so, then how can the AVF plugin/driver consume just the VF NIC
>> already
>> > assigned to the VM and not try to setup a new VF?
>> >
>> >
>> > Thanks!
>> >
>> > -Josh
>> >
>> > On Mon, Sep 28, 2020 at 2:40 AM Benoit Ganne (bganne) <bga...@cisco.com> wrote:
>> >
>> >
>> >   Hi,
>> >
>> >   It should work with AVF as it is using VFs, not PF, see
>> > https://docs.fd.io/vpp/21.01/d1/def/avf_plugin_doc.html
>> >   You should bind the VF with vfio-pci 1st though, so that it is
>> > usable by userspace drivers such as VPP AVF plugin.
>> >   If your system crashes when doing so it is a bug with your system.
>> >
>> >   Best
>> >   ben
>> >
>> >   > -Original Message-
>> >   > From: vpp-dev@lists.fd.io ...

Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver

2020-09-28 Thread Joshua Moore
Sorry, I'm still hitting an issue where I cannot create the interface in
VPP:

vpp# create interface avf 0000:13:00.0
create interface avf: device not bound to 'vfio-pci' or 'uio_pci_generic'
kernel module



So I tried to bind the NIC to vfio-pci:

root@test:~# modprobe vfio-pci
root@test:~# /usr/local/vpp/vpp-config/scripts/dpdk-devbind.py -s

Network devices using DPDK-compatible driver



Network devices using kernel driver
===
0000:13:00.0 'Ethernet Virtual Function 700 Series' if=ens224 drv=i40evf
unused=

root@test:~# /usr/local/vpp/vpp-config/scripts/dpdk-devbind.py --bind
vfio-pci 13:00.0
Error - no supported modules(DPDK driver) are loaded



Thoughts?
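
As later messages in this thread show, the bind eventually succeeded after
enabling vfio's no-IOMMU mode:

echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
sudo dpdk-devbind --bind=vfio-pci 0000:13:00.0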

On Mon, Sep 28, 2020 at 11:43 AM Benoit Ganne (bganne) 
wrote:

> Hi Johsua,
>
> Your understanding is correct, however you do not need to setup the VFs if
> it is already correctly setup by ESXI.
> Just create the AVF interface directly by specifying the VF PCI address.
>
> ben
>
> > -Original Message-
> > From: Joshua Moore 
> > Sent: lundi 28 septembre 2020 17:48
> > To: Benoit Ganne (bganne) 
> > Cc: vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough)
> Driver
> >
> > Hello Benoit,
> >
> > Looking at the script for AVF, it states:
> >
> > # Setup one VF on PF :3b:00.0 and assign MAC address
> > setup :3b:00.0 00:11:22:33:44:00
> > # Setup one VF on PF :3b:00.1 and assign MAC address
> > setup :3b:00.1 00:11:22:33:44:01
> >
> > This seems to assume the entire PF NIC is exposed to the VM and the VM is
> > responsible for owning the configuration of the WHOLE PF to setup the VF.
> > This also makes sense to me considering that the script is looking for
> > i40en driver (physical) and not i40evf driver (virtual). My understanding
> > is that this will not work with my ESXI setup as ESXI owns the
> > configuration of the PF (physical NIC) and is assigning the VFs from the
> > NIC end is exposing just the VF to the VM.
> >
> > Does this make sense or am I misunderstanding something?
> >
> > If so, then how can the AVF plugin/driver consume just the VF NIC already
> > assigned to the VM and not try to setup a new VF?
> >
> >
> > Thanks!
> >
> > -Josh
> >
> > On Mon, Sep 28, 2020 at 2:40 AM Benoit Ganne (bganne) <bga...@cisco.com> wrote:
> >
> >
> >   Hi,
> >
> >   It should work with AVF as it is using VFs, not PF, see
> > https://docs.fd.io/vpp/21.01/d1/def/avf_plugin_doc.html
> >   You should bind the VF with vfio-pci 1st though, so that it is
> > usable by userspace drivers such as VPP AVF plugin.
> >   If your system crashes when doing so it is a bug with your system.
> >
> >   Best
> >   ben
> >
> >   > -Original Message-
> >   > From: vpp-dev@lists.fd.io  On Behalf Of j...@jcm.me
> >   > Sent: lundi 28 septembre 2020 01:29
> >   > To: vpp-dev@lists.fd.io
> >   > Subject: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough)
> > Driver
> >   >
> >   > Hello,
> >   >
> >   > Is there any support for VPP to talk directly to ESXI-assigned
> VFs
> > via SR-
> >   > IOV? I saw the AVF plugin but I don't want VPP to control the
> > whole PF
> >   > (physical NIC) but rather would like to have ESXI control the
> > mapping of
> >   > VFs (SR-IOV) and VPP (or DPDK) consume the VF natively in the VM
> > so that I
> >   > can run multiple VMs on the same physical NIC while benefiting
> > from
> >   > bypassing the vSwitch in ESXI. Right now I'm running VPP on a
> > Ubuntu 18.04
> >   > VM and I see the SR-IOV NIC as an i40evf driver.
> >   >
> >   > I tried binding the SR-IOV NIC to the vfio driver but this causes
> > the CPU
> >   > of the VM to skyrocket and crash. I don't think using vfio is the
> > right
> >   > approach and feel like the solution here is really simple. Any
> >   > suggestions?
> >
>
>




Re: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver

2020-09-28 Thread Joshua Moore
Hello Benoit,

Looking at the script for AVF, it states:

# Setup one VF on PF 0000:3b:00.0 and assign MAC address
setup 0000:3b:00.0 00:11:22:33:44:00
# Setup one VF on PF 0000:3b:00.1 and assign MAC address
setup 0000:3b:00.1 00:11:22:33:44:01

This seems to assume the entire PF NIC is exposed to the VM and the VM is
responsible for owning the configuration of the WHOLE PF in order to set up
the VF. This also makes sense to me considering that the script is looking
for the i40en driver (physical) and not the i40evf driver (virtual). My
understanding is that this will not work with my ESXI setup, as ESXI owns the
configuration of the PF (physical NIC), assigns the VFs on the NIC side, and
exposes just the VF to the VM.

Does this make sense or am I misunderstanding something?

If so, then how can the AVF plugin/driver consume just the VF NIC already
assigned to the VM and not try to set up a new VF?
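
For completeness, the approach Benoit describes (and that this thread
converges on) is roughly: bind the ESXI-assigned VF to vfio-pci and then point
the AVF plugin at its PCI address, without any PF-side setup in the guest:

sudo modprobe vfio-pci
echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode  # only if the IOMMU cannot be used
sudo dpdk-devbind --bind=vfio-pci 0000:13:00.0
vppctl create interface avf 0000:13:00.0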


Thanks!

-Josh

On Mon, Sep 28, 2020 at 2:40 AM Benoit Ganne (bganne) 
wrote:

> Hi,
>
> It should work with AVF as it is using VFs, not PF, see
> https://docs.fd.io/vpp/21.01/d1/def/avf_plugin_doc.html
> You should bind the VF with vfio-pci 1st though, so that it is usable by
> userspace drivers such as VPP AVF plugin.
> If your system crashes when doing so it is a bug with your system.
>
> Best
> ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of j...@jcm.me
> > Sent: lundi 28 septembre 2020 01:29
> > To: vpp-dev@lists.fd.io
> > Subject: [vpp-dev] VPP on ESXI with i40evf (SR-IOV Passthrough) Driver
> >
> > Hello,
> >
> > Is there any support for VPP to talk directly to ESXI-assigned VFs via
> SR-
> > IOV? I saw the AVF plugin but I don't want VPP to control the whole PF
> > (physical NIC) but rather would like to have ESXI control the mapping of
> > VFs (SR-IOV) and VPP (or DPDK) consume the VF natively in the VM so that
> I
> > can run multiple VMs on the same physical NIC while benefiting from
> > bypassing the vSwitch in ESXI. Right now I'm running VPP on a Ubuntu
> 18.04
> > VM and I see the SR-IOV NIC as an i40evf driver.
> >
> > I tried binding the SR-IOV NIC to the vfio driver but this causes the CPU
> > of the VM to skyrocket and crash. I don't think using vfio is the right
> > approach and feel like the solution here is really simple. Any
> > suggestions?
>
