Re: [vpp-dev] VPP with DPDK drops packet, but it sends ok

2018-04-11 Thread Moon-Sang Lee
I first bind the VF to igb_uio in /etc/rc.local of the VM, and I guess VPP
picks up that binding.
I had not edited startup.conf yesterday, but I edited it today as below.

cpu {
  workers 2
}

dpdk {
  dev 0000:00:08.0
  dev 0000:00:09.0
  uio-driver igb_uio
}
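For what it's worth, this is roughly how I double-check that the binding and
the whitelist line up before starting VPP (assuming dpdk-devbind.py is in the
VM's PATH):

# dpdk-devbind.py --status
# vppctl show pci
# vppctl show hardware

dpdk-devbind.py should list both VFs under the DPDK-compatible-driver section
with drv=igb_uio, and vppctl should show the matching
VirtualFunctionEthernet interfaces.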

Anyway, I finally managed to measure VPP/DPDK performance with pktgen.
Here are my guesses about what it takes to run VPP/DPDK against pktgen:
  - The VF's MTU should match that of the PF. (A 1500B PF versus a 9220B VF
was my issue, I guess; see the sketch below.)
  - On the host side, I ping from the PF to the pktgen port, and this lets
the VF on the VM side talk to pktgen.
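For reference, a minimal sketch of how I keep the two MTUs in line; the PF
name enp1s0f0 on the host is just an example, substitute your own:

# ip link set dev enp1s0f0 mtu 1500        (on the host)
# vppctl set interface mtu 1500 VirtualFunctionEthernet0/8/0   (in the VM)
# vppctl show hardware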

Thank you for your kind discussion, Marco.



On Wed, Apr 11, 2018 at 4:49 PM, Marco Varlese  wrote:

> If I remember correctly, you mentioned in an earlier email that you do
> _NOT_ modify the startup.conf file for VPP. Can you confirm that?
>
> By looking at the startup.conf file shipped with VPP v18.04, the default
> uio driver used is vfio-pci. That, as mentioned before (and found out by
> you too), won't work.
> Also, by default the DPDK section is completely commented out, so all the
> defaults are being used...
>
> Can you please take a look at your startup.conf file and make sure to use
> igb_uio by specifying "uio-driver igb_uio"?
>
> And, first of all, make sure the dpdk section is not commented out.
>
> I think something like the below should help you to get going...
>
> dpdk {
> uio-driver igb_uio
> }
>
>
> - Marco
>
> On Wed, 2018-04-11 at 16:12 +0900, Moon-Sang Lee wrote:
>
>
> I'm back with a little progress.
> Anyway, I found that the vpp interface has an MTU of 9220B,
> and I just tried to change it to 1500B as below.
> # vppctl set interface mtu 1500 VirtualFunctionEthernet0/8/0
>
> It works!
> My VM with VPP/DPDK can now ping the outside server via the VF of the
> Intel 82599 10G NIC.
> But I could not understand why the MTU configuration affects a simple
> ping test.
>
> In addition, I need to measure this VM's performance with pktgen-dpdk,
> which has no MTU option as far as I know.
> When I send packets from pktgen, the VM does not receive them, and ping
> stops working again.
> (i.e. pktgen and the VM are directly connected by a 10G link.)
>
> I appreciate any comment.
>
>
> On Tue, Apr 10, 2018 at 9:16 PM, Moon-Sang Lee  wrote:
>
>
> On Tue, Apr 10, 2018 at 8:24 PM, Marco Varlese  wrote:
>
> On Tue, 2018-04-10 at 19:33 +0900, Moon-Sang Lee wrote:
>
>
> Thanks for your interest, Marco.
>
> I followed the Intel guideline, "As an SR-IOV VF network adapter using a
> KVM virtual network pool of adapters"
> from https://software.intel.com/en-us/articles/configure-sr-
> iov-network-virtual-functions-in-linux-kvm.
>
> In summary, I modprobe ixgbe on the host side and create one VF per PF.
> When I start the VM using virsh, libvirt binds the VF to vfio-pci on the
> host side.
> After the VM finishes booting, I log in to the VM and bind the VF to
> igb_uio using the dpdk-dev command.
> (i.e. only igb_uio works; other drivers like uio_pci_generic and
> vfio-pci fail to bind the VF on the VM side.)
>
> Yes, that's expected.
> If you want to use vfio-pci in the VM you'll need to enable the "no-iommu":
> # echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
>
>
> well, I cannot find such a file in either the host or the VM.
>
>
>
>
> I don't edit startup.conf in the VM; I just bind the VF to the
> DPDK-compatible driver, igb_uio, inside the VM.
> With the above configuration, I can bind the VF to the guest kernel
> driver, ixgbevf, and also to igb_uio for DPDK.
> As a result, I can run VPP without DPDK using ixgbevf, and also DPDK
> applications using igb_uio.
> (i.e. I successfully run the l2fwd/l3fwd sample applications inside the
> VM, so I guess the VF binding has no problem.)
>
> However, I cannot run VPP with DPDK, and I suspect hugepages are related
> to this problem, as shown in my VPP log.
>
> So, what does the command "cat /proc/meminfo | grep HugePages_" show?
>
>
>
> well, 'cat /proc/meminfo | grep HugePages' seems OK, even though I cannot
> find any rtemap files in either /dev/hugepages or /run/vpp/hugepages.
> When I run a DPDK application, I can see rtemap files in /dev/hugepages.
>
> HugePages_Total:1024
> HugePages_Free:  972
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:   2048 kB
>
> FYI.
> Apr 10 05:12:35 ubuntu1604 /usr/bin/vpp[1720]: dpdk_config:1271: EAL init
> args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -b
> 0000:00:05.0 -b 0000:00:06.0 -b 0000:00:07.0 --master-lcore 0 --socket-mem
> 64
>
>
>
>
>
> Every packet sent from the VM traverses the VF to the opposite side, a
> pktgen server.
> pktgen replies to those packets, but the VM does not receive the replies.
> (i.e. I ping the pktgen port from the VM; the host server port is directly
> linked to the pktgen server port.)
>
> Here is my vpp-config script.
> I test ping after running this script.
>
> #!/bin/sh
>
> vppctl enable tap-inject
> vppctl create loopback interface
> vppctl set interface state loop0 up
>
> vppctl show interface

Re: [vpp-dev] VPP with DPDK drops packet, but it sends ok

2018-04-11 Thread Marco Varlese
If I remember correctly, you mentioned in an earlier email that you do _NOT_
modify the startup.conf file for VPP. Can you confirm that?
By looking at the startup.conf file shipped with VPP v18.04, the default uio
driver used is vfio-pci. That, as mentioned before (and found out by you too),
won't work. Also, by default the DPDK section is completely commented out, so
all the defaults are being used...
Can you please take a look at your startup.conf file and make sure to use
igb_uio by specifying "uio-driver igb_uio"?
And, first of all, make sure the dpdk section is not commented out.
I think something like the below should help you to get going...

dpdk {
  uio-driver igb_uio
}

- Marco
On Wed, 2018-04-11 at 16:12 +0900, Moon-Sang Lee wrote:
> I'm back with a little progress.
> Anyway, I found that the vpp interface has an MTU of 9220B,
> and I just tried to change it to 1500B as below.
> # vppctl set interface mtu 1500 VirtualFunctionEthernet0/8/0
> 
> It works!
> My VM with VPP/DPDK can now ping the outside server via the VF of the Intel
> 82599 10G NIC.
> But I could not understand why the MTU configuration affects a simple ping
> test.
> 
> In addition, I need to measure this VM's performance with pktgen-dpdk,
> which has no MTU option as far as I know.
> When I send packets from pktgen, the VM does not receive them, and ping
> stops working again.
> (i.e. pktgen and the VM are directly connected by a 10G link.)
> 
> I appreciate any comment. 
> 
> 
> 
> On Tue, Apr 10, 2018 at 9:16 PM, Moon-Sang Lee  wrote:
> > On Tue, Apr 10, 2018 at 8:24 PM, Marco Varlese  wrote:
> > > On Tue, 2018-04-10 at 19:33 +0900, Moon-Sang Lee wrote:
> > > > Thanks for your interest, Marco.
> > > > I followed the Intel guideline, "As an SR-IOV VF network adapter using a
> > > > KVM virtual network pool of adapters"
> > > > from https://software.intel.com/en-us/articles/configure-sr-iov-network-
> > > > virtual-functions-in-linux-kvm.
> > > > 
> > > > In summary, I modprobe ixgbe on the host side and create one VF per PF.
> > > > When I start the VM using virsh, libvirt binds the VF to vfio-pci on the
> > > > host side.
> > > > After the VM finishes booting, I log in to the VM and bind the VF to
> > > > igb_uio using the dpdk-dev command.
> > > > (i.e. only igb_uio works; other drivers like uio_pci_generic and
> > > > vfio-pci fail to bind the VF on the VM side.)
> > > Yes, that's expected. 
> > > If you want to use vfio-pci in the VM you'll need to enable the "no-
> > > iommu":
> > > # echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> > 
> > well, I cannot find such a file in either the host or the VM.
> >
> > > > I don't edit startup.conf in the VM; I just bind the VF to the
> > > > DPDK-compatible driver, igb_uio, inside the VM.
> > > > With the above configuration, I can bind the VF to the guest kernel
> > > > driver, ixgbevf, and also to igb_uio for DPDK.
> > > > As a result, I can run VPP without DPDK using ixgbevf, and also DPDK
> > > > applications using igb_uio.
> > > > (i.e. I successfully run the l2fwd/l3fwd sample applications inside the
> > > > VM, so I guess the VF binding has no problem.)
> > > > 
> > > > However, I cannot run VPP with DPDK, and I suspect hugepages are related
> > > > to this problem, as shown in my VPP log.
> > > So, what does the command "cat /proc/meminfo | grep HugePages_" show?
> > 
> > well, 'cat /proc/meminfo | grep HugePages' seems OK, even though I cannot
> > find any rtemap files in either /dev/hugepages or /run/vpp/hugepages.
> > When I run a DPDK application, I can see rtemap files in /dev/hugepages.
> > 
> > HugePages_Total:1024
> > HugePages_Free:  972
> > HugePages_Rsvd:0
> > HugePages_Surp:0
> > Hugepagesize:   2048 kB
> > 
> > FYI.
> > Apr 10 05:12:35 ubuntu1604 /usr/bin/vpp[1720]: dpdk_config:1271: EAL init
> > args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -b
> > 0000:00:05.0 -b 0000:00:06.0 -b 0000:00:07.0 --master-lcore 0
> > --socket-mem 64
> > 
> >
> > > > Every packet sent from the VM traverses the VF to the opposite side,
> > > > a pktgen server.
> > > > pktgen replies to those packets, but the VM does not receive the
> > > > replies.
> > > > (i.e. I ping the pktgen port from the VM; the host server port is
> > > > directly linked to the pktgen server port.)
> > > >
> > > > Here is my vpp-config script.
> > > > I test ping after running this script. 
> > > > 
> > > > #!/bin/sh
> > > > 
> > > > vppctl enable tap-inject
> > > > vppctl create loopback interface
> > > > vppctl set interface state loop0 up
> > > > 
> > > > vppctl show interface
> > > > vppctl show hardware
> > > > vppctl set interface state VirtualFunctionEthernet0/6/0 up
> > > > vppctl set interface state VirtualFunctionEthernet0/7/0 up
> > > > vppctl set interface ip address loop0 2.2.2.2/32
> > > > vppctl set interface ip address VirtualFunctionEthernet0/6/0
> > > > 192.168.0.1/24
> > > > vppctl set interface ip address VirtualFunctionEthernet0/7/0
> > > > 192.168.1.1/24
> > > > vppctl show interface address
> 

Re: [vpp-dev] VPP with DPDK drops packet, but it sends ok

2018-04-10 Thread Moon-Sang Lee
On Tue, Apr 10, 2018 at 8:24 PM, Marco Varlese  wrote:

> On Tue, 2018-04-10 at 19:33 +0900, Moon-Sang Lee wrote:
>
>
> Thanks for your interest, Marco.
>
> I followed the Intel guideline, "As an SR-IOV VF network adapter using a
> KVM virtual network pool of adapters"
> from https://software.intel.com/en-us/articles/configure-
> sr-iov-network-virtual-functions-in-linux-kvm.
>
> In summary, I modprobe ixgbe on the host side and create one VF per PF.
> When I start the VM using virsh, libvirt binds the VF to vfio-pci on the
> host side.
> After the VM finishes booting, I log in to the VM and bind the VF to
> igb_uio using the dpdk-dev command.
> (i.e. only igb_uio works; other drivers like uio_pci_generic and
> vfio-pci fail to bind the VF on the VM side.)
>
> Yes, that's expected.
> If you want to use vfio-pci in the VM you'll need to enable the "no-iommu":
> # echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
>

well, I cannot find such a file in either the host or the VM.


>
>
> I don't edit startup.conf in the VM; I just bind the VF to the
> DPDK-compatible driver, igb_uio, inside the VM.
> With the above configuration, I can bind the VF to the guest kernel
> driver, ixgbevf, and also to igb_uio for DPDK.
> As a result, I can run VPP without DPDK using ixgbevf, and also DPDK
> applications using igb_uio.
> (i.e. I successfully run the l2fwd/l3fwd sample applications inside the
> VM, so I guess the VF binding has no problem.)
>
> However, I cannot run VPP with DPDK, and I suspect hugepages are related
> to this problem, as shown in my VPP log.
>
> So, what does the command "cat /proc/meminfo | grep HugePages_" show?
>


well, 'cat /proc/meminfo | grep HugePages' seems OK, even though I cannot
find any rtemap files in either /dev/hugepages or /run/vpp/hugepages.
When I run a DPDK application, I can see rtemap files in /dev/hugepages.

HugePages_Total:1024
HugePages_Free:  972
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB

FYI.
Apr 10 05:12:35 ubuntu1604 /usr/bin/vpp[1720]: dpdk_config:1271: EAL init
args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -b
0000:00:05.0 -b 0000:00:06.0 -b 0000:00:07.0 --master-lcore 0 --socket-mem
64
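For completeness, this is roughly how I sanity-check the hugepage setup in
the VM; the page count 1024 is just what I happen to reserve, and normally
VPP creates and mounts /run/vpp/hugepages by itself, I believe:

# sysctl -w vm.nr_hugepages=1024
# grep Huge /proc/meminfo
# mount | grep hugetlbfs

If /run/vpp/hugepages is not mounted, something like
"mount -t hugetlbfs -o pagesize=2M none /run/vpp/hugepages" should provide it.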




>
> Every packet sent from the VM traverses the VF to the opposite side, a
> pktgen server.
> pktgen replies to those packets, but the VM does not receive the replies.
> (i.e. I ping the pktgen port from the VM; the host server port is directly
> linked to the pktgen server port.)
>
> Here is my vpp-config script.
> I test ping after running this script.
>
> #!/bin/sh
>
> vppctl enable tap-inject
> vppctl create loopback interface
> vppctl set interface state loop0 up
>
> vppctl show interface
> vppctl show hardware
> vppctl set interface state VirtualFunctionEthernet0/6/0 up
> vppctl set interface state VirtualFunctionEthernet0/7/0 up
> vppctl set interface ip address loop0 2.2.2.2/32
> vppctl set interface ip address VirtualFunctionEthernet0/6/0
> 192.168.0.1/24
> vppctl set interface ip address VirtualFunctionEthernet0/7/0
> 192.168.1.1/24
> vppctl show interface address
> vppctl show tap-inject
>
> ip addr add 2.2.2.2/32 dev vpp0
> ip addr add 192.168.0.1/24 dev vpp1
> ip addr add 192.168.1.1/24 dev vpp2
> ip link set dev vpp0 up
> ip link set dev vpp1 up
> ip link set dev vpp2 up
>
>
> On Tue, Apr 10, 2018 at 4:51 PM, Marco Varlese  wrote:
>
> On Mon, 2018-04-09 at 22:53 +0900, Moon-Sang Lee wrote:
>
>
> I've configured a VM with KVM, and the VM is intended to run VPP with DPDK.
> In particular, the VM is connected to one of the VFs (i.e. SR-IOV).
> I can run DPDK sample applications, including l2fwd and l3fwd, in the VM,
> so I guess the VM is successfully connected to the outside world (the
> pktgen server) via the VFs.
>
> However, I cannot receive a packet when I run VPP/DPDK.
> I can see the TX packets from the VM on the opposite side, the pktgen
> server, which reports RX/TX packet counts, but the VM does not receive any
> reply from it.
> (i.e. arping/ping from the VM arrives at pktgen, but the reply from pktgen
> is not received in the VM.)
> I found some strange log messages while launching vpp, shown below.
>
> I appreciate any comment.
> Thanks in advance...
>
> - Host NIC: Intel 82599 10G NIC (i.e. VF binding with vfio-pci)
> - VM: 1 socket 4 vCPU
> - VPP: 18.04
> - DPDK binding: igb_uio
>
> It isn't clear to me who manages the PF in the host and how you created
> the VFs (kernel module or via DPDK binding)?
>
> Second, what do you mean by DPDK binding in the last line above?
> Is that what you have configured in startup.conf in the VM for VPP to be
> used?
>
> If so, what is the difference between VF binding and DPDK binding in your
> short setup summary above? I'm confused by reading vfio-pci in one place
> and then igb_uio later on.
>
> Can you provide us with the startup.conf you have in the VM?
>
> Finally, if you are interested in using vfio-pci then you'll need to have
> the no-IOMMU enabled otherwise you can't use VFIO in the VM...
> 

Re: [vpp-dev] VPP with DPDK drops packet, but it sends ok

2018-04-10 Thread Marco Varlese
On Tue, 2018-04-10 at 19:33 +0900, Moon-Sang Lee wrote:
> Thanks for your interest, Marco.
> I followed the Intel guideline, "As an SR-IOV VF network adapter using a KVM
> virtual network pool of adapters"
> from https://software.intel.com/en-us/articles/configure-sr-iov-network-
> virtual-functions-in-linux-kvm.
> 
> In summary, I modprobe ixgbe on the host side and create one VF per PF.
> When I start the VM using virsh, libvirt binds the VF to vfio-pci on the
> host side.
> After the VM finishes booting, I log in to the VM and bind the VF to igb_uio
> using the dpdk-dev command.
> (i.e. only igb_uio works; other drivers like uio_pci_generic and vfio-pci
> fail to bind the VF on the VM side.)
Yes, that's expected. If you want to use vfio-pci in the VM you'll need to
enable the "no-iommu" mode:
# echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
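If that parameter file does not exist, the vfio module is most likely not
loaded yet (and the kernel has to be built with CONFIG_VFIO_NOIOMMU for the
parameter to appear at all); a rough sketch of what should work:

# modprobe vfio enable_unsafe_noiommu_mode=1
# modprobe vfio-pci
# cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode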
> I don't edit startup.conf in the VM; I just bind the VF to the
> DPDK-compatible driver, igb_uio, inside the VM.
> With the above configuration, I can bind the VF to the guest kernel driver,
> ixgbevf, and also to igb_uio for DPDK.
> As a result, I can run VPP without DPDK using ixgbevf, and also DPDK
> applications using igb_uio.
> (i.e. I successfully run the l2fwd/l3fwd sample applications inside the VM,
> so I guess the VF binding has no problem.)
> 
> However, I cannot run VPP with DPDK, and I suspect hugepages are related to
> this problem, as shown in my VPP log.
So, what does the command "cat /proc/meminfo | grep HugePages_" show?
> Every packet sent from the VM traverses the VF to the opposite side, a
> pktgen server.
> pktgen replies to those packets, but the VM does not receive the replies.
> (i.e. I ping the pktgen port from the VM; the host server port is directly
> linked to the pktgen server port.)
>  
> Here is my vpp-config script.
> I test ping after running this script. 
> 
> #!/bin/sh
> 
> vppctl enable tap-inject
> vppctl create loopback interface
> vppctl set interface state loop0 up
> 
> vppctl show interface
> vppctl show hardware
> vppctl set interface state VirtualFunctionEthernet0/6/0 up
> vppctl set interface state VirtualFunctionEthernet0/7/0 up
> vppctl set interface ip address loop0 2.2.2.2/32
> vppctl set interface ip address VirtualFunctionEthernet0/6/0 192.168.0.1/24
> vppctl set interface ip address VirtualFunctionEthernet0/7/0 192.168.1.1/24
> vppctl show interface address
> vppctl show tap-inject
> 
> ip addr add 2.2.2.2/32 dev vpp0
> ip addr add 192.168.0.1/24 dev vpp1
> ip addr add 192.168.1.1/24 dev vpp2
> ip link set dev vpp0 up
> ip link set dev vpp1 up
> ip link set dev vpp2 up
> 
> 
> 
> On Tue, Apr 10, 2018 at 4:51 PM, Marco Varlese  wrote:
> > On Mon, 2018-04-09 at 22:53 +0900, Moon-Sang Lee wrote:
> > > I've configured a VM with KVM, and the VM is intended to run VPP with
> > > DPDK.
> > > In particular, the VM is connected to one of the VFs (i.e. SR-IOV).
> > > I can run DPDK sample applications, including l2fwd and l3fwd, in the
> > > VM, so I guess the VM is successfully connected to the outside world
> > > (the pktgen server) via the VFs.
> > > 
> > > However, I cannot receive a packet when I run VPP/DPDK.
> > > I can see the TX packets from the VM on the opposite side, the pktgen
> > > server, which reports RX/TX packet counts, but the VM does not receive
> > > any reply from it.
> > > (i.e. arping/ping from the VM arrives at pktgen, but the reply from
> > > pktgen is not received in the VM.)
> > > I found some strange log messages when launching vpp, as shown below.
> > > 
> > > I appreciate any comment.
> > > Thanks in advance...
> > > 
> > > - Host NIC: Intel 82599 10G NIC (i.e. VF binding with vfio-pci)
> > > - VM: 1 socket 4 vCPU
> > > - VPP: 18.04
> > > - DPDK binding: igb_uio 
> > It isn't clear to me who manages the PF in the host and how you created the
> > VFs (kernel module or via DPDK binding)?
> > 
> > Second, what do you mean by DPDK binding in the last line above?
> > Is that what you have configured in startup.conf in the VM for VPP to be
> > used?
> > 
> > If so, what is the difference between VF binding and DPDK binding in your
> > short setup summary above? I'm confused by reading vfio-pci in one place
> > and then igb_uio later on.
> > 
> > Can you provide us with the startup.conf you have in the VM?
> > 
> > Finally, if you are interested in using vfio-pci then you'll need to have
> > the no-IOMMU enabled otherwise you can't use VFIO in the VM... 
> > probably, the easiest would be to use igb_uio everywhere...
> > 
> > > root@xenial-vpp-frr:~# vpp -c /etc/vpp/startup.conf
> > > vlib_plugin_early_init:359: plugin path /usr/lib/vpp_plugins
> > > load_one_plugin:187: Loaded plugin: acl_plugin.so (Access Control Lists)
> > > load_one_plugin:187: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual
> > > Function (AVF) Device Plugin)
> > > load_one_plugin:189: Loaded plugin: cdp_plugin.so
> > > load_one_plugin:187: Loaded plugin: dpdk_plugin.so (Data Plane Development
> > > Kit (DPDK))
> > > load_one_plugin:187: Loaded 

Re: [vpp-dev] VPP with DPDK drops packet, but it sends ok

2018-04-10 Thread Moon-Sang Lee
Thanks for your interest, Marco.

I followed the Intel guideline, "As an SR-IOV VF network adapter using a KVM
virtual network pool of adapters" from
https://software.intel.com/en-us/articles/configure-sr-iov-network-virtual-functions-in-linux-kvm.

In summary, I modprobe ixgbe on the host side and create one VF per PF.
When I start the VM using virsh, libvirt binds the VF to vfio-pci on the
host side.
After the VM finishes booting, I log in to the VM and bind the VF to igb_uio
using the dpdk-dev command.
(i.e. only igb_uio works; other drivers like uio_pci_generic and
vfio-pci fail to bind the VF on the VM side.)
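Concretely, the binding step inside the VM looks roughly like this, assuming
the dpdk-dev command here is dpdk-devbind.py and using the VF PCI address
0000:00:06.0 from my setup (igb_uio is the module built from the DPDK tree):

# modprobe uio
# insmod ./igb_uio.ko
# dpdk-devbind.py --bind=igb_uio 0000:00:06.0
# dpdk-devbind.py --status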

I don't edit startup.conf in the VM; I just bind the VF to the
DPDK-compatible driver, igb_uio, inside the VM.
With the above configuration, I can bind the VF to the guest kernel driver,
ixgbevf, and also to igb_uio for DPDK.
As a result, I can run VPP without DPDK using ixgbevf, and also DPDK
applications using igb_uio.
(i.e. I successfully run the l2fwd/l3fwd sample applications inside the VM,
so I guess the VF binding has no problem.)

However, I cannot run VPP with DPDK, and I suspect hugepages are related to
this problem, as shown in my VPP log.
Every packet sent from the VM traverses the VF to the opposite side, a
pktgen server.
pktgen replies to those packets, but the VM does not receive the replies.
(i.e. I ping the pktgen port from the VM; the host server port is directly
linked to the pktgen server port.)

Here is my vpp-config script.
I test ping after running this script.

#!/bin/sh

vppctl enable tap-inject
vppctl create loopback interface
vppctl set interface state loop0 up

vppctl show interface
vppctl show hardware
vppctl set interface state VirtualFunctionEthernet0/6/0 up
vppctl set interface state VirtualFunctionEthernet0/7/0 up
vppctl set interface ip address loop0 2.2.2.2/32
vppctl set interface ip address VirtualFunctionEthernet0/6/0 192.168.0.1/24
vppctl set interface ip address VirtualFunctionEthernet0/7/0 192.168.1.1/24
vppctl show interface address
vppctl show tap-inject

ip addr add 2.2.2.2/32 dev vpp0
ip addr add 192.168.0.1/24 dev vpp1
ip addr add 192.168.1.1/24 dev vpp2
ip link set dev vpp0 up
ip link set dev vpp1 up
ip link set dev vpp2 up
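After the script runs, the ping test and the way I look for the missing
replies is roughly the following; 192.168.0.2 is just the address I give the
pktgen port on that link:

# ping -c 3 192.168.0.2
# vppctl show interface
# vppctl trace add dpdk-input 10
# vppctl show trace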


On Tue, Apr 10, 2018 at 4:51 PM, Marco Varlese  wrote:

> On Mon, 2018-04-09 at 22:53 +0900, Moon-Sang Lee wrote:
>
>
> I've configured a VM with KVM, and the VM is intended to run VPP with DPDK.
> In particular, the VM is connected to one of the VFs (i.e. SR-IOV).
> I can run DPDK sample applications, including l2fwd and l3fwd, in the VM,
> so I guess the VM is successfully connected to the outside world (the
> pktgen server) via the VFs.
>
> However, I cannot receive a packet when I run VPP/DPDK.
> I can see the TX packets from the VM on the opposite side, the pktgen
> server, which reports RX/TX packet counts, but the VM does not receive any
> reply from it.
> (i.e. arping/ping from the VM arrives at pktgen, but the reply from pktgen
> is not received in the VM.)
> I found some strange log messages when launching vpp, as shown below.
>
> I appreciate any comment.
> Thanks in advance...
>
> - Host NIC: Intel 82599 10G NIC (i.e. VF binding with vfio-pci)
> - VM: 1 socket 4 vCPU
> - VPP: 18.04
> - DPDK binding: igb_uio
>
> It isn't clear to me who manages the PF in the host and how you created
> the VFs (kernel module or via DPDK binding)?
>
> Second, what do you mean by DPDK binding in the last line above?
> Is that what you have configured in startup.conf in the VM for VPP to be
> used?
>
> If so, what is the difference between VF binding and DPDK binding in your
> short setup summary above? I'm confused by reading vfio-pci in one place
> and then igb_uio later on.
>
> Can you provide us with the startup.conf you have in the VM?
>
> Finally, if you are interested in using vfio-pci then you'll need to have
> the no-IOMMU enabled otherwise you can't use VFIO in the VM...
> probably, the easiest would be to use igb_uio everywhere...
>
>
> root@xenial-vpp-frr:~# vpp -c /etc/vpp/startup.conf
> vlib_plugin_early_init:359: plugin path /usr/lib/vpp_plugins
> load_one_plugin:187: Loaded plugin: acl_plugin.so (Access Control Lists)
> load_one_plugin:187: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual
> Function (AVF) Device Plugin)
> load_one_plugin:189: Loaded plugin: cdp_plugin.so
> load_one_plugin:187: Loaded plugin: dpdk_plugin.so (Data Plane Development
> Kit (DPDK))
> load_one_plugin:187: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
> load_one_plugin:187: Loaded plugin: gbp_plugin.so (Group Based Policy)
> load_one_plugin:187: Loaded plugin: gtpu_plugin.so (GTPv1-U)
> load_one_plugin:187: Loaded plugin: igmp_plugin.so (IGMP messaging)
> load_one_plugin:187: Loaded plugin: ila_plugin.so (Identifier-locator
> addressing for IPv6)
> load_one_plugin:187: Loaded plugin: ioam_plugin.so (Inbound OAM)
> load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:187: Loaded plugin: kubeproxy_plugin.so (kube-proxy data
> plane)
> load_one_plugin:187: Loaded 

Re: [vpp-dev] VPP with DPDK drops packet, but it sends ok

2018-04-10 Thread Marco Varlese
On Mon, 2018-04-09 at 22:53 +0900, Moon-Sang Lee wrote:
> I've configured a VM with KVM, and the VM is intended to run VPP with DPDK.
> In particular, the VM is connected to one of the VFs (i.e. SR-IOV).
> I can run DPDK sample applications, including l2fwd and l3fwd, in the VM,
> so I guess the VM is successfully connected to the outside world (the
> pktgen server) via the VFs.
> 
> However, I cannot receive a packet when I run VPP/DPDK.
> I can see the TX packets from the VM on the opposite side, the pktgen
> server, which reports RX/TX packet counts, but the VM does not receive any
> reply from it.
> (i.e. arping/ping from the VM arrives at pktgen, but the reply from pktgen
> is not received in the VM.)
> I found some strange log messages when launching vpp, as shown below.
> 
> I appreciate any comment.
> Thanks in advance...
> 
> - Host NIC: Intel 82599 10G NIC (i.e. VF binding with vfio-pci)
> - VM: 1 socket 4 vCPU
> - VPP: 18.04
> - DPDK binding: igb_uio 
It isn't clear to me who manages the PF in the host and how you created the
VFs (kernel module or via DPDK binding)?
Second, what do you mean by DPDK binding in the last line above? Is that what
you have configured in startup.conf in the VM for VPP to be used?
If so, what is the difference between VF binding and DPDK binding in your
short setup summary above? I'm confused by reading vfio-pci in one place and
then igb_uio later on.
Can you provide us with the startup.conf you have in the VM?
Finally, if you are interested in using vfio-pci then you'll need to have
no-IOMMU mode enabled, otherwise you can't use VFIO in the VM... probably,
the easiest would be to use igb_uio everywhere...
> root@xenial-vpp-frr:~# vpp -c /etc/vpp/startup.conf
> vlib_plugin_early_init:359: plugin path /usr/lib/vpp_plugins
> load_one_plugin:187: Loaded plugin: acl_plugin.so (Access Control Lists)
> load_one_plugin:187: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual
> Function (AVF) Device Plugin)
> load_one_plugin:189: Loaded plugin: cdp_plugin.so
> load_one_plugin:187: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit
> (DPDK))
> load_one_plugin:187: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
> load_one_plugin:187: Loaded plugin: gbp_plugin.so (Group Based Policy)
> load_one_plugin:187: Loaded plugin: gtpu_plugin.so (GTPv1-U)
> load_one_plugin:187: Loaded plugin: igmp_plugin.so (IGMP messaging)
> load_one_plugin:187: Loaded plugin: ila_plugin.so (Identifier-locator
> addressing for IPv6)
> load_one_plugin:187: Loaded plugin: ioam_plugin.so (Inbound OAM)
> load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:187: Loaded plugin: kubeproxy_plugin.so (kube-proxy data
> plane)
> load_one_plugin:187: Loaded plugin: l2e_plugin.so (L2 Emulation)
> load_one_plugin:187: Loaded plugin: lacp_plugin.so (Link Aggregation Control
> Protocol)
> load_one_plugin:187: Loaded plugin: lb_plugin.so (Load Balancer)
> load_one_plugin:187: Loaded plugin: memif_plugin.so (Packet Memory Interface
> (experimetal))
> load_one_plugin:187: Loaded plugin: nat_plugin.so (Network Address
> Translation)
> load_one_plugin:187: Loaded plugin: pppoe_plugin.so (PPPoE)
> load_one_plugin:187: Loaded plugin: router.so (router)
> load_one_plugin:187: Loaded plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
> load_one_plugin:187: Loaded plugin: srv6am_plugin.so (Masquerading SRv6 proxy)
> load_one_plugin:187: Loaded plugin: srv6as_plugin.so (Static SRv6 proxy)
> load_one_plugin:187: Loaded plugin: stn_plugin.so (VPP Steals the NIC for
> Container integration)
> load_one_plugin:187: Loaded plugin: tlsmbedtls_plugin.so (mbedtls based TLS
> Engine)
> load_one_plugin:187: Loaded plugin: tlsopenssl_plugin.so (openssl based TLS
> Engine)
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/cdp_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/stn_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/lacp_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> 

[vpp-dev] VPP with DPDK drops packet, but it sends ok

2018-04-09 Thread Moon-Sang Lee
I've configured a VM with KVM, and the VM is intended to run VPP with DPDK.
In particular, the VM is connected to one of the VFs (i.e. SR-IOV).
I can run DPDK sample applications, including l2fwd and l3fwd, in the VM,
so I guess the VM is successfully connected to the outside world (the pktgen
server) via the VFs.

However, I cannot receive a packet when I run VPP/DPDK.
I can see the TX packets from the VM on the opposite side, the pktgen
server, which reports RX/TX packet counts, but the VM does not receive any
reply from it.
(i.e. arping/ping from the VM arrives at pktgen, but the reply from pktgen
is not received in the VM.)
I found some strange log messages while launching vpp, shown below.

I appreciate any comment.
Thanks in advance...

- Host NIC: Intel 82599 10G NIC (i.e. VF binding with vfio-pci)
- VM: 1 socket 4 vCPU
- VPP: 18.04
- DPDK binding: igb_uio
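For reference, a rough sanity check I run inside the VM before starting VPP
(assuming dpdk-devbind.py is available):

# lspci -nn | grep -i ethernet
# dpdk-devbind.py --status

The VF should show up as an 82599 Virtual Function and be listed under the
DPDK-compatible driver section with drv=igb_uio.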

root@xenial-vpp-frr:~# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:359: plugin path /usr/lib/vpp_plugins
load_one_plugin:187: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:187: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual
Function (AVF) Device Plugin)
load_one_plugin:189: Loaded plugin: cdp_plugin.so
load_one_plugin:187: Loaded plugin: dpdk_plugin.so (Data Plane Development
Kit (DPDK))
load_one_plugin:187: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:187: Loaded plugin: gbp_plugin.so (Group Based Policy)
load_one_plugin:187: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:187: Loaded plugin: igmp_plugin.so (IGMP messaging)
load_one_plugin:187: Loaded plugin: ila_plugin.so (Identifier-locator
addressing for IPv6)
load_one_plugin:187: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
load_one_plugin:187: Loaded plugin: kubeproxy_plugin.so (kube-proxy data
plane)
load_one_plugin:187: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:187: Loaded plugin: lacp_plugin.so (Link Aggregation
Control Protocol)
load_one_plugin:187: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:187: Loaded plugin: memif_plugin.so (Packet Memory
Interface (experimetal))
load_one_plugin:187: Loaded plugin: nat_plugin.so (Network Address
Translation)
load_one_plugin:187: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:187: Loaded plugin: router.so (router)
load_one_plugin:187: Loaded plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
load_one_plugin:187: Loaded plugin: srv6am_plugin.so (Masquerading SRv6
proxy)
load_one_plugin:187: Loaded plugin: srv6as_plugin.so (Static SRv6 proxy)
load_one_plugin:187: Loaded plugin: stn_plugin.so (VPP Steals the NIC for
Container integration)
load_one_plugin:187: Loaded plugin: tlsmbedtls_plugin.so (mbedtls based TLS
Engine)
load_one_plugin:187: Loaded plugin: tlsopenssl_plugin.so (openssl based TLS
Engine)
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/cdp_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/stn_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/acl_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/lb_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/lacp_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/memif_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
load_one_plugin:67: Loaded plugin:
/usr/lib/vpp_api_test_plugins/nat_test_plugin.so
dpdk_config:1271: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages
--file-prefix vpp -w 0000:00:06.0 -w 0000:00:07.0 --master-lcore 0
--socket-mem 512
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
EAL:   Invalid NUMA socket, default to 0
EAL:   Invalid NUMA socket, default to 0
DPDK physical memory layout:
Segment 0: IOVA:0x3340, len:25165824, virt:0x7ff739c0, socket_id:0,
hugepage_sz:2097152, nchannel:0,