Re: [vpp-dev] Does VPP support flow control (pause frame)?

2018-02-07 Thread Damjan Marion (damarion)

> On 7 Feb 2018, at 19:12, Li, Charlie  wrote:
> 
> Hi All,
> 
> Does VPP support flow control?
> 
> If yes, how to turn on/off flow control?

Not supported today, but it shouldn't be too hard to add.
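
For anyone who wants to prototype it, here is a hedged sketch (not existing VPP
code) of how a patch could drive pause frames through DPDK's generic flow-control
API, assuming the underlying PMD supports it:

/* Hedged sketch, not existing VPP code: enable RX/TX pause frames on a
 * DPDK port via the generic flow-control API.  The remaining work in a
 * real patch would be exposing this through startup.conf or the CLI. */
#include <rte_ethdev.h>

static int
enable_pause_frames (uint16_t port_id)
{
  struct rte_eth_fc_conf fc;
  int rv;

  /* Start from the current settings so only the mode changes. */
  rv = rte_eth_dev_flow_ctrl_get (port_id, &fc);
  if (rv != 0)
    return rv;

  fc.mode = RTE_FC_FULL;   /* send and honor pause frames */
  fc.autoneg = 1;          /* follow link autonegotiation where supported */

  return rte_eth_dev_flow_ctrl_set (port_id, &fc);
}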


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Error when trying to add interface to vpp on ARM server.

2018-02-01 Thread Damjan Marion (damarion)
You also added socket-mem, which is a pretty bad idea; try without it.
If that doesn't help, then you will need to run VPP from the console and possibly
use gdb to collect more details.
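
Concretely, something like the following (a hedged example using the packaged
paths seen elsewhere in this thread):

sudo service vpp stop
sudo gdb --args /usr/bin/vpp -c /etc/vpp/startup.conf
(gdb) run
# reproduce the failure, then:
(gdb) backtrace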

Which ARM board is that?


On 1 Feb 2018, at 14:15, adarsh m 
<addi.ada...@yahoo.in<mailto:addi.ada...@yahoo.in>> wrote:

Hi,

This is on an ARM board, and yes, I have modified startup.conf to add the
specific PCIe address:

 dpdk {
   socket-mem 1024
   dev 0002:f9:00.0

}
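
For clarity, the change suggested above amounts to dropping the socket-mem line,
e.g. (hedged example, same device):

 dpdk {
   dev 0002:f9:00.0
 }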


On Thursday 1 February 2018, 5:20:59 PM IST, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:


Please keep mailing list in CC.

Those lines don't show that anything is wrong...

Is this a 4-socket computer? Have you modified startup.conf?


On 1 Feb 2018, at 12:40, adarsh m 
<addi.ada...@yahoo.in<mailto:addi.ada...@yahoo.in>> wrote:

Hi,

Very sorry, please check the complete one.

Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/acl_test_plugin.so
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
Feb 01 19:37:16 vasily vpp[43454]: /usr/bin/vpp[43454]: dpdk_config:1240: EAL 
init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 
0002:f9:00.0 --master-lcore 0 --socket-mem 1024,
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: dpdk_config:1240: EAL init args: -c 
1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0002:f9:00.0 
--master-lcore 0 --socket-mem 1024,0,0,0
Feb 01 19:37:16 vasily vpp[43454]: EAL: VFIO support initialized
Feb 01 19:37:16 vasily vnet[43454]: EAL: VFIO support initialized




On Thursday 1 February 2018, 4:48:35 PM IST, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:



Unfortunately, the log you provided is incomplete and truncated, so I cannot help
much.

On 1 Feb 2018, at 11:59, adarsh m 
<addi.ada...@yahoo.in<mailto:addi.ada...@yahoo.in>> wrote:

Hi,

I checked the hugepages and the count was 0, so I freed them up and increased the count to 5120:

ubuntu@vasily:~$ sudo -i
root@vasily:~# echo 5120 > /proc/sys/vm/nr_hugepages
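
(As an aside, a hedged example of making that reservation persistent instead of
echoing into /proc on every boot; 80-vpp.conf is the sysctl file VPP's packaging
installs for exactly this:)

# /etc/sysctl.d/80-vpp.conf
vm.nr_hugepages=5120

# apply immediately without rebooting
sudo sysctl -p /etc/sysctl.d/80-vpp.conf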


Now the previous error does not occur when I start, but VPP is not stable;
it becomes dead a few seconds after starting.

Logs :
ubuntu@vasily:~$ grep HugePages_Free /proc/meminfo
HugePages_Free: 5120
ubuntu@vasily:~$
ubuntu@vasily:~$
ubuntu@vasily:~$
ubuntu@vasily:~$ sudo service vpp status
● vpp.service - vector packet processing engine
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: 
enabled)
   Active: inactive (dead) since Thu 2018-02-01 18:50:46 CST; 5min ago
  Process: 42736 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
  Process: 42731 ExecStart=/usr/bin/vpp -c /etc/vpp/startup.conf (code=exited, 
status=0/SUCCESS)
  Process: 42728 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited, 
status=0/SUCCESS)
  Process: 42726 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
 Main PID: 42731 (code=exited, status=0/SUCCESS)

Feb 01 18:50:46 vasily systemd[1]: vpp.service: Service hold-off time over, 
scheduling restart.
Feb 01 18:50:46 vasily systemd[1]: Stopped vector packet processing engine.
Feb 01 18:50:46 vasily systemd[1]: vpp.service: Start request repeated too 
quickly.
Feb 01 18:50:46 vasily systemd[1]: Failed to start vector packet processing 
engine.
Feb 01 18:56:12 vasily systemd[1]: Stopped vector packet processing engine.
ubuntu@vasily:~$
ubuntu@vasily:~$
ubuntu@vasily:~$
ubuntu@vasily:~$ sudo service vpp start
ubuntu@vasily:~$
ubuntu@vasily:~$
ubuntu@vasily:~$ sudo service vpp status
● vpp.service - vector packet processing engine
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: 
enabled)
   Active: active (running) since Thu 2018-02-01 18:56:49 CST; 298ms ago
  Process: 42857 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
  Process: 42863 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited, 
status=0/SUCCESS)
  Process: 42860 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
 Main PID: 42866 (vpp)
   CGroup: /system.slice/vpp.service
   └─42866 /usr/bin/vpp -c /etc/vpp/startup.conf

Feb 01 18:56:4

Re: [vpp-dev] Error when trying to add interface to vpp on ARM server.

2018-02-01 Thread Damjan Marion (damarion)
Please keep mailing list in CC.

Those lines don't show that anything is wrong...

Is this a 4-socket computer? Have you modified startup.conf?


On 1 Feb 2018, at 12:40, adarsh m 
<addi.ada...@yahoo.in<mailto:addi.ada...@yahoo.in>> wrote:

Hi,

Very sorry, please check the complete one.

Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/acl_test_plugin.so
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
Feb 01 19:37:16 vasily vpp[43454]: /usr/bin/vpp[43454]: dpdk_config:1240: EAL 
init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 
0002:f9:00.0 --master-lcore 0 --socket-mem 1024,
Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: dpdk_config:1240: EAL init args: -c 
1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0002:f9:00.0 
--master-lcore 0 --socket-mem 1024,0,0,0
Feb 01 19:37:16 vasily vpp[43454]: EAL: VFIO support initialized
Feb 01 19:37:16 vasily vnet[43454]: EAL: VFIO support initialized




On Thursday 1 February 2018, 4:48:35 PM IST, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:



Unfortunately, the log you provided is incomplete and truncated, so I cannot help
much.

On 1 Feb 2018, at 11:59, adarsh m 
<addi.ada...@yahoo.in<mailto:addi.ada...@yahoo.in>> wrote:

Hi,

I checked the hugepages and the count was 0, so I freed them up and increased the count to 5120:

ubuntu@vasily:~$ sudo -i
root@vasily:~# echo 5120 > /proc/sys/vm/nr_hugepages


Now the previous error does not occur when I start, but VPP is not stable;
it becomes dead a few seconds after starting.

Logs :
ubuntu@vasily:~$ grep HugePages_Free /proc/meminfo
HugePages_Free: 5120
ubuntu@vasily:~$
ubuntu@vasily:~$
ubuntu@vasily:~$
ubuntu@vasily:~$ sudo service vpp status
● vpp.service - vector packet processing engine
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: 
enabled)
   Active: inactive (dead) since Thu 2018-02-01 18:50:46 CST; 5min ago
  Process: 42736 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
  Process: 42731 ExecStart=/usr/bin/vpp -c /etc/vpp/startup.conf (code=exited, 
status=0/SUCCESS)
  Process: 42728 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited, 
status=0/SUCCESS)
  Process: 42726 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
 Main PID: 42731 (code=exited, status=0/SUCCESS)

Feb 01 18:50:46 vasily systemd[1]: vpp.service: Service hold-off time over, 
scheduling restart.
Feb 01 18:50:46 vasily systemd[1]: Stopped vector packet processing engine.
Feb 01 18:50:46 vasily systemd[1]: vpp.service: Start request repeated too 
quickly.
Feb 01 18:50:46 vasily systemd[1]: Failed to start vector packet processing 
engine.
Feb 01 18:56:12 vasily systemd[1]: Stopped vector packet processing engine.
ubuntu@vasily:~$
ubuntu@vasily:~$
ubuntu@vasily:~$
ubuntu@vasily:~$ sudo service vpp start
ubuntu@vasily:~$
ubuntu@vasily:~$
ubuntu@vasily:~$ sudo service vpp status
● vpp.service - vector packet processing engine
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: 
enabled)
   Active: active (running) since Thu 2018-02-01 18:56:49 CST; 298ms ago
  Process: 42857 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
  Process: 42863 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited, 
status=0/SUCCESS)
  Process: 42860 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
 Main PID: 42866 (vpp)
   CGroup: /system.slice/vpp.service
   └─42866 /usr/bin/vpp -c /etc/vpp/startup.conf

Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/acl_test_plugin.so
Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.s
Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plu
Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_p

Re: [vpp-dev] Error when trying to add interface to vpp on ARM server.

2018-02-01 Thread Damjan Marion (damarion)
Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
Feb 01 18:56:49 vasily vpp[42866]: /usr/bin/vpp[42866]: dpdk_config:1240: EAL 
init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --f
Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: dpdk_config:1240: EAL init args: -c 
1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix v
Feb 01 18:56:49 vasily vpp[42866]: EAL: VFIO support initialized
Feb 01 18:56:49 vasily vnet[42866]: EAL: VFIO support initialized

ubuntu@vasily:~$ grep HugePages_Free /proc/meminfo
HugePages_Free: 2335
ubuntu@vasily:~$ sudo service vpp status
● vpp.service - vector packet processing engine
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: 
enabled)
   Active: inactive (dead) since Thu 2018-02-01 18:56:56 CST; 63ms ago
  Process: 42917 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
  Process: 42914 ExecStart=/usr/bin/vpp -c /etc/vpp/startup.conf (code=exited, 
status=0/SUCCESS)
  Process: 42911 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited, 
status=0/SUCCESS)
  Process: 42908 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
 Main PID: 42914 (code=exited, status=0/SUCCESS)

Feb 01 18:56:56 vasily systemd[1]: vpp.service: Service hold-off time over, 
scheduling restart.
Feb 01 18:56:56 vasily systemd[1]: Stopped vector packet processing engine.
Feb 01 18:56:56 vasily systemd[1]: vpp.service: Start request repeated too 
quickly.
Feb 01 18:56:56 vasily systemd[1]: Failed to start vector packet processing 
engine.
ubuntu@vasily:~$ grep HugePages_Free /proc/meminfo
HugePages_Free: 5120
ubuntu@vasily:~$



On Wednesday 31 January 2018, 9:30:19 PM IST, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:



On 31 Jan 2018, at 10:34, adarsh m via vpp-dev 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>> wrote:

Hi,

Please check: I am trying to bring up VPP with an interface on an ARM server but
am facing an issue while doing so.

Please let me know if there is any existing issue or a method to correct this.


ubuntu@vasily:~$ sudo service vpp status
[sudo] password for ubuntu:
● vpp.service - vector packet processing engine
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: enab
   Active: active (running) since Mon 2018-01-29 22:07:02 CST; 19h ago
  Process: 2461 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited, status
  Process: 2453 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/
 Main PID: 2472 (vpp_main)
   CGroup: /system.slice/vpp.service
   └─2472 /usr/bin/vpp -c /etc/vpp/startup.conf

Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create dpdk_mbuf_
Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING: Failed
Jan 29 22:07:05 vasily vnet[2472]: unix_physmem_region_iommu_register: ioctl (VF
Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create dpdk_mbuf_
Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING: Failed
Jan 29 22:07:05 vasily vnet[2472]: unix_physmem_region_iommu_register: ioctl (VF
Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create dpdk_mbuf_
Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING: Failed
Jan 29 22:07:05 vasily vnet[2472]: dpdk_ipsec_process:1011: not enough DPDK cryp
Jan 29 22:07:05 vasily vnet[2472]: dpdk_lib_init:221: DPDK drivers found no port



Looks like a hugepages issue. Can you show the full log? What you pasted above is
truncated...

___
vpp-dev mailing list
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Error when trying to add interface to vpp on ARM server.

2018-01-31 Thread Damjan Marion (damarion)

On 31 Jan 2018, at 10:34, adarsh m via vpp-dev 
> wrote:

Hi,

Please check: I am trying to bring up VPP with an interface on an ARM server but
am facing an issue while doing so.

Please let me know if there is any existing issue or a method to correct this.


ubuntu@vasily:~$ sudo service vpp status
[sudo] password for ubuntu:
● vpp.service - vector packet processing engine
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: enab
   Active: active (running) since Mon 2018-01-29 22:07:02 CST; 19h ago
  Process: 2461 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited, status
  Process: 2453 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/
 Main PID: 2472 (vpp_main)
   CGroup: /system.slice/vpp.service
   └─2472 /usr/bin/vpp -c /etc/vpp/startup.conf

Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create dpdk_mbuf_
Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING: Failed
Jan 29 22:07:05 vasily vnet[2472]: unix_physmem_region_iommu_register: ioctl (VF
Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create dpdk_mbuf_
Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING: Failed
Jan 29 22:07:05 vasily vnet[2472]: unix_physmem_region_iommu_register: ioctl (VF
Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create dpdk_mbuf_
Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING: Failed
Jan 29 22:07:05 vasily vnet[2472]: dpdk_ipsec_process:1011: not enough DPDK cryp
Jan 29 22:07:05 vasily vnet[2472]: dpdk_lib_init:221: DPDK drivers found no port


Looks like a hugepages issue. Can you show the full log? What you pasted above is
truncated...

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Port mirroring support in vpp

2018-01-17 Thread Damjan Marion (damarion)
Have you tried with SPAN?
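
(For readers of the archive, a hedged sketch of the SPAN CLI being referred to;
the interface names are placeholders, and whether it can be applied to vhost-user
interfaces is exactly the open question below:)

vpp# set interface span VirtualEthernet0/0/0 destination VirtualEthernet0/0/1
vpp# set interface span VirtualEthernet0/0/0 disable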

On 17 Jan 2018, at 10:07, Juraj Linkeš 
> wrote:

Hi VPP devs,

I’m trying to figure out whether it’s possible to set up port mirroring on a
vhost-user port in VPP. The case I’m trying to make work is simple: I have
traffic between two VMs (using vhost-user ports) and I want to listen to that
traffic, replicate it, and send it somewhere else (to an interface, but
preferably to an IP).

I’ve looked into what’s available in VPP and there is some support for SPAN,
but it doesn’t seem to work with vhost-user interfaces (I wasn’t able to configure
it). In fact, it only seems to be configurable on physical interfaces. Is this
accurate?

Then there are CLIs for lawful intercept (set li), but the configuration
doesn’t seem to do anything. Is this supported?

Is there some other way to achieve port mirroring on vhost-user interfaces in 
case the two above are not supported? It can be any unwieldy/hacky way (maybe 
setting something up with multicast?).

Thanks,
Juraj
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Some memif API and Naming Questions

2018-01-15 Thread Damjan Marion (damarion)


Sent from my iPhone

On 15 Jan 2018, at 16:46, Jon Loeliger 
<j...@netgate.com<mailto:j...@netgate.com>> wrote:


Hi Damjan,

Let's try again.   I've spoken with a colleague here and I think
I may have misunderstood a few aspects of your proposal.
Reviewing it with him, I think we can make it work!

Let me review, and see if I understand (better) what you are
saying and proposing.

You said:

On Sun, Jan 14, 2018 at 12:10 PM, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:

If "create memif socket file  id  [master|slave|"
works for you i would suggest that we go that way.

So scenario above will look like:

master 1:
create memif socket file /tmp/memif1.sock id 11 server
create memif id 33 master socket-id 11

Interface name: memif-11/33

master 2:
create memif socket file /tmp/memif2.sock id 22 server
create memif id 33 master socket-id 22

Interface name: memif-22/33

(OK, so my first point of misunderstanding here was that
you had two different instances of VPP, one for each master,
in this example.   But that doesn't matter WRT the proposal.)

Second, I didn't realize that these were two separate API calls now.
Specifically, I now think you are saying that this command:

create memif socket file /tmp/memif1.sock id 11 server

Is a new memif API call that places the socket file name in
a table with the user-assigned id 11 associated with it.
Later, a "create memif" API call can reference "socket id 11"
as its socket, along with its memif id, (here 33), so that it
would yield the SW IF name "memif11/33".

correct


If so, this is perfect and satisfies the naming problem that
I was describing.

good :)


slave:
create memif socket file /tmp/memif1.sock id 11 client
create memif socket file /tmp/memif2.sock id 22 client
create memif id 33 slave socket-id 11
create memif id 33 slave socket-id 22

Interface names: memif-11/33, memif-22/33

Right.

So.  Bottom line:  I think this proposal is a viable solution!
Would you like me to write a first patch effort?

please do.

It should be a basic hash of pool_index by file_id. Please keep 0 as the
preconfigured default, as the majority of use cases will just need the default
socket file.
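
To make the shape of that concrete, a hedged sketch (names are illustrative, not
the final code) of a pool-index-by-socket-id hash built with the vppinfra helpers:

/* Hedged sketch only: a user-chosen socket-id maps, via a plain vppinfra
 * hash, to the pool index of the corresponding socket file entry. */
#include <vppinfra/hash.h>
#include <vppinfra/pool.h>

typedef struct
{
  u8 *filename;
} memif_socket_file_t;

typedef struct
{
  memif_socket_file_t *socket_files;     /* pool of known socket files */
  uword *socket_file_index_by_sock_id;   /* hash: sock_id -> pool index */
} memif_main_example_t;

static void
example_add_socket_file (memif_main_example_t * mm, u32 sock_id, u8 * filename)
{
  memif_socket_file_t *sf;

  if (mm->socket_file_index_by_sock_id == 0)
    mm->socket_file_index_by_sock_id = hash_create (0, sizeof (uword));

  pool_get (mm->socket_files, sf);
  sf->filename = filename;
  hash_set (mm->socket_file_index_by_sock_id, sock_id, sf - mm->socket_files);
}

static memif_socket_file_t *
example_lookup_socket_file (memif_main_example_t * mm, u32 sock_id)
{
  uword *p;

  if (mm->socket_file_index_by_sock_id == 0)
    return 0;
  p = hash_get (mm->socket_file_index_by_sock_id, sock_id);
  return p ? pool_elt_at_index (mm->socket_files, p[0]) : 0;
}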

Thanks
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Some memif API and Naming Questions

2018-01-14 Thread Damjan Marion (damarion)
>  
> On other side it is perfectly fine to have one slave connected to 2 different
> masters where both connections have same interface ID. That's why 
> interface name is constructed out of 2 numbers. Having only interface_id
> in interface name will simply not work in this case.
> 
> This I need help understanding still.
> 
> What are the CLI commands to set this up?
> 
> create memif id 13 master socket /tmp/master-13.sock
> create memif id 23 master socket /tmp/master-23.sock
> 
> create memif id 100 slave 
> ???
> 
> I don't know how the connections are set up.

master 1:
create memif id 33 master socket /tmp/memif1.sock

Interface name: memif-0/33

master 2:
create memif id 33 master socket /tmp/memif2.sock

Interface name: memif-0/33


slave:
create memif id 33 slave socket /tmp/memif1.sock
create memif id 33 slave socket /tmp/memif2.sock

Interface names: memif-0/33, memif-1/33

> 
> I _think_ you don't understand my issue here.  I am in no way
> questioning the underlying implementation, nor am I questioning
> having the index-mappings that you use.
> 
> My issue is in the names that get exposed to the user and when
> they become knowable.

> 
> In order to make your mechanisms work, it relies on having to reveal
> or expose the names mid-setup and then proceeding with the rest
> of the setup commands.

> 
>  
> Possible area for improvement is adding explicit "create memif listener 
>  " 
> cli and API so
> you can better control assignment mapping of file_id to actual AF_UNIX socket.
> 
> OK, that is equivalent to my third suggestion, I think.  Pre-allocating the 
> sockets
> in a known order so that they have known index numbers.  In advance.
> If you don't allow for the creation of the (socket-name to socket-index) 
> mapping in advance
> of creating the memif itself, then the name of the memif effectively becomes 
> something
> unwieldly like "memif /path/to/socket/foo.sock id 23" everywhere.
>  
> Today file_id is simply index to the pool of memif files.
> 
> I know.  And that  is the problem.  You have exposed an unreliable allocation
> result to be the only authoritative and yet unknowable user identifier.  And 
> here
> by "user" I mean any other "API user".  In no way can the user say "Make an
> item for me and call it ."
> 
> Beside that, i don't see what else we can do to make your life easier
> 
> I think the problem stems from the fact that you expose the socket index
> to the user visible names, and there is no way of knowing what those
> values will be.
> 
> I think the whole problem can be avoid by simply exposing a memif name
> with just the one id that is used during creation of the memif.  From an
> implementation standpoint, I think it is pretty easy to add to the current
> implementation.
> - Add a hash in memif_main mapping user memif id to socket_index(*)
> - The uniqueness test during creation needs to be modified to check
>for the id within the global mapping,
> - When the memif is made, add the id to socket_index to the new hash
> - When the memif is removed, remove the id from the new hash
> 
> Then only expose to the user the global memif id, and not the socket_index 
> too.
> The user supplied the id in the first place during creation, so we satisfy the
> need to be able to say "Make a  for me and call it ."  So now
> the user knows that the corresponding SW IF will be named "memif-"
> without having to guess or rely on API call ordering.


I don't like this proposal; it adds another ID to the game, or limits us from
using the same
id on 2 different socket files.

> 
> We had this same "naming problem" with loopback interfaces last year too,
> and we had to fix aspects of that as well.  Similarly, the loopback interfaces
> were just created sequentially, but the user had no idea what interface name
> was actually created as a result.  In order to determine that, the user must
> have run some form of "inspect the results" command before further 
> configuration
> could take place on the corresponding interface.
> 
> Just like with loopback interfaces then, we really need to have predictable
> naming creation mechanisms in every aspect of the API presented by VPP.

If "create memif socket file  id  [master|slave|"
works for you i would suggest that we go that way.

So scenario above will look like:

master 1:
create memif socket file /tmp/memif1.sock id 11 server
create memif id 33 master socket-id 11

Interface name: memif-11/33

master 2:
create memif socket file /tmp/memif2.sock id 22 server
create memif id 33 master socket-id 22

Interface name: memif-22/33


slave:
create memif socket file /tmp/memif1.sock id 11 client
create memif socket file /tmp/memif2.sock id 22 client
create memif id 33 slave socket-id 11
create memif id 33 slave socket-id 22

Interface names: memif-11/33, memif-22/33




___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Some memif API and Naming Questions

2018-01-14 Thread Damjan Marion (damarion)
Jon,

Each memif connection between master and slave is uniquely identified
by AF_UNIX socket and ID pair. This is first law of memif :)

On the unix side, an AF_UNIX socket is identified by a filename in the filesystem,
but that is too long, so in VPP each AF_UNIX socket has an assigned file_id, and
that index is used in
the memif interface name.

So as you described, the memif interface name is formatted as follows:

memif-/

You cannot have 2 VPP instances listening on the same AF_UNIX socket, so
only one VPP instance can be the listener. Because of that, you cannot configure
both master and slave to use the same AF_UNIX socket.
This is simply how the system works.

On the other side, it is perfectly fine to have one slave connected to 2 different
masters where both connections have the same interface ID. That's why the
interface name is constructed out of 2 numbers. Having only the interface_id
in the interface name will simply not work in this case.

A possible area for improvement is adding an explicit "create memif listener
 " CLI and API so
you can better control the mapping of file_id to the actual AF_UNIX socket.
Today, file_id is simply an index into the pool of memif files.

Besides that, I don't see what else we can do to make your life easier.

Thanks,

Damjan

On 13 Jan 2018, at 15:33, Jon Loeliger 
> wrote:

Hi VPPeople,

I am working on adding memif support to our system and need
to design some User Interface pieces for it.  I am having a bit of
a hard time with one naming aspect of the memif components.

Each memif entry has a unique u32 id assigned to it.
Each memif entry requires a socket as part of its setup.
Each memif entry can be either a slave or a master role.
Each memif entry has a corresponding SW IF as well.
The SW IF name is created using the template "memif%d/%d"
where the first %d is the pool index of the socket's filename,
and the second %d is the unique id.

There is a default socket filename that is used when no socket
name is provided in a "memif_create" API call.  I am able to
create either "master" or "slave" entries with unique ids, but not
both using the default socket.  Changing the socket name
allows both master and slave entries to be created.  Eventually,
some experimentation lead me to believe that there must be
an underlying problem with using the same socket for master
and for slave entries.

In fact, reading some of the code, I found src/plugins/memif/memif.c, line 585:

  /* existing socket file can be either master or slave but cannot be both 
*/
OK.

So if both master and slave entries might be present in the system,
specifying the socket filename is required at some point.

In the User Interface that I have, I allow a user to declare that they want
to make a memif and state attributes about it.  Specifically, I allow them
to make a role "slave" or "master", and set some queue sizes and lengths.
The User shouldn't ever care about some underlying socket file name here.
(It is purely an implementation detail they don't care about.)

Later, the User should be able to specify attributes about the corresponding
SW IF for the memif.  However, there is no way for the user to know (predict
with certainty) the name of that SW interface.  They can get close, but there
is no way to know for sure.  It will be something like "memif.../" where
the "" part is a unique number they provided in the memif declaration
and setup earlier.  However, the User cannot predict the socket number.

And lest you say something like "but creating the memif instance returns
the sw_if_index, and you can use that to lookup the SW IF's name", understand
that this is NOT necessarily an interactive system.  We have to be able to
set up batch configuration without screen scraping the results.  The user has
to be able to reliably predict the corresponding SW IF name.

So let's ask some questions and posit some solutions.

I'm willing to grant that different sockets are needed for the master and slave
roles.  Are there ever more than two sockets needed for all of the memif
instances?  Is it good enough to have one slave, and one master socket?
If so, then we can just use the role to map an instance to a socket and have
the corresponding SW IF name be, say, memif-slave/ or memif-master/.

Right now, as implemented, the "unique id" is not "for all memifs".  It is for
each "{master,slave}" group, though.  That is, one can have "unique id 23"
as both master and slave.  More accurately, the id need only be unique within
a given socket pool index.  (And we know there will be different sockets for
masters and for slaves.)  So we could totally change this looser uniqueness
requirement (per socket) to "for all memif instances".  With this approach,
we could use the unique id to, well, uniquely identify the SW IF using a name
like "memif-".  We'd have to keep a mapping to socket within the memif
entry (but that is already done!)  We just wouldn't place the socket pool index
in part of the SW IF name.

Another solution would 

Re: [vpp-dev] Please install missing RPMs: \npackage python34 is not installed

2017-12-15 Thread Damjan Marion (damarion)

Have you tried to do "make install-dep" in the top-level directory?

> On 15 Dec 2017, at 04:05, 重新开始 <15803846...@qq.com> wrote:
> 
> Hi, everyone
> I built vpp on CentOS 7.3 and had executed make install-dep; that was OK.
> But when I build vpp, it prints "Please install missing RPMs: \npackage
> python34 is not installed". Then I installed python 3.4, and after installing
> it, python is OK, but building vpp still prints "Please install missing RPMs:
> \npackage python34 is not installed". Why?
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] openSUSE build fails

2017-12-15 Thread Damjan Marion (damarion)


On 15 Dec 2017, at 08:52, Marco Varlese 
<mvarl...@suse.de<mailto:mvarl...@suse.de>> wrote:

Damjan,

On Thu, 2017-12-14 at 16:04 +0000, Damjan Marion (damarion) wrote:
Folks,

I'm hearing from multiple people that OpenSUSE verify job is failing (again).
I haven't heard (or read) anything on the mailing list, otherwise I would have
looked into it.
Also, if you hear anything like that you can always ping me directly and I will
look into it...

yes, people pinging me...
See
https://gerrit.fd.io/r/#/c/9440/

also:

https://gerrit.fd.io/r/#/c/9813/ - abandoned but it shows that something was 
wrong



So generally speaking, I would like to question having verify jobs for multiple
distros.
Is there really value in compiling the same code on different distros? Yes, I
know the gcc version can be different,
but that can be addressed in a simpler way, if it needs to be addressed at all.

More distros means more moving parts and a bigger chance that something will
fail.
Well, I am not sure how to interpret this, but (in theory) a build should be
reproducible in the first place and I should not have to worry about problems
with build outcomes. It doesn't only affect openSUSE, and I raised it many times
on the mailing list: you need to run "recheck" multiple times to have a build
succeed. IMHO the issue should be addressed, not swept under
the carpet...

We all know that we have an extremely fragile system, as obviously we have not
been able to
fix that in almost 2 years, so as long as the system stays as it is, increasing
complexity doesn't help
and just causes frustration.

Also it costs resources.
That is a different matter, and if that's the case then it should be discussed
seriously; raising this argument now, after having had people investing their
time in getting stuff up and running, isn't really a cool thing...

Marco, the decision to have verify jobs on 2 distros was made well before you
joined the project,
and I don't remember a serious decision on that topic; it might be that at that
time
we were simply inexperienced, or maybe we didn't expect the infra to be so fragile.

The fact is that we now have a ridiculous situation: 2 verify jobs say a patch is
OK, and the 3rd one says
it is not. Which one to trust?

So please don't take this personally; I know you invested time to get the SUSE
build working, but still
I think it is a valid question to ask: do we really need 3 verify jobs? Should
we have 4 tomorrow
if somebody invests their time to do a verify job on Arch Linux, for example?

Thanks,

Damjan



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] openSUSE build fails

2017-12-14 Thread Damjan Marion (damarion)

Folks,

I'm hearing from multiple people that OpenSUSE verify job is failing (again).

So generally speaking, I would like to question having verify jobs for multiple
distros.
Is there really value in compiling the same code on different distros? Yes, I know
the gcc version can be different,
but that can be addressed in a simpler way, if it needs to be addressed at all.

More distros means more moving parts and a bigger chance that something will fail.
Also it costs resources.

Thoughts?

Damjan



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] DPDK 17.11

2017-11-20 Thread Damjan Marion (damarion)

DPDK 17.11 support was merged this morning, but it is still not the default.

Before making it the default, it would be good if people can give it a try and
report issues.

It is as simple as:

make dpdk-install-dev DPDK_VERSION=17.11

Thanks,

— 
Damjan

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] 50GE interface support on VPP

2017-11-01 Thread Damjan Marion (damarion)

Currently it is just cosmetic…..

Does it work with testpmd?

—
Damjan

On 1 Nov 2017, at 13:14, Saxena, Nitin 
<nitin.sax...@cavium.com<mailto:nitin.sax...@cavium.com>> wrote:

OK, thanks, I will debug where the problem lies.

However, is this just a display issue, or does the problem lie with the data path
as well, given that I am able to receive packets via this NIC into VPP from the
outside world? Any concern here?

-Nitin
____
From: Damjan Marion (damarion) <damar...@cisco.com<mailto:damar...@cisco.com>>
Sent: Wednesday, November 1, 2017 5:39:24 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] 50GE interface support on VPP


The mlx5 dpdk driver is telling us that speed_capa = 0, so not much love here.

You should get at least the ETH_LINK_SPEED_50G bit set by the dpdk driver.
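
(For context, a hedged sketch of the kind of check VPP performs on dev_info:
pick the highest advertised speed_capa bit; with speed_capa == 0 nothing
matches, hence the "Unknown" interface name.  The returned strings are
illustrative, not VPP's exact naming code.)

#include <rte_ethdev.h>

static const char *
example_port_name_from_speed_capa (uint32_t speed_capa)
{
  if (speed_capa & ETH_LINK_SPEED_100G)
    return "HundredGigabitEthernet";
  if (speed_capa & ETH_LINK_SPEED_50G)
    return "FiftyGigabitEthernet";
  if (speed_capa & ETH_LINK_SPEED_40G)
    return "FortyGigabitEthernet";
  if (speed_capa & ETH_LINK_SPEED_25G)
    return "TwentyFiveGigabitEthernet";
  if (speed_capa & ETH_LINK_SPEED_10G)
    return "TenGigabitEthernet";
  return "UnknownEthernet";
}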

—
Damjan

On 1 Nov 2017, at 12:55, Saxena, Nitin 
<nitin.sax...@cavium.com<mailto:nitin.sax...@cavium.com>> wrote:

Here is the detail


(gdb) p *dev_info
$1 = {pci_dev = 0x51c4e0, driver_name = 0x76eb38a8 "net_mlx5", if_index = 
8, min_rx_bufsize = 32, max_rx_pktlen = 65536, max_rx_queues = 65535,
  max_tx_queues = 65535, max_mac_addrs = 128, max_hash_mac_addrs = 0, max_vfs = 
0, max_vmdq_pools = 0, rx_offload_capa = 15, tx_offload_capa = 1679, reta_size 
= 512,
  hash_key_size = 40 '(', flow_type_rss_offloads = 0, default_rxconf = 
{rx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'}, 
rx_free_thresh = 0,
rx_drop_en = 0 '\000', rx_deferred_start = 0 '\000'}, default_txconf = 
{tx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'},
tx_rs_thresh = 0, tx_free_thresh = 0, txq_flags = 0, tx_deferred_start = 0 
'\000'}, vmdq_queue_base = 0, vmdq_queue_num = 0, vmdq_pool_base = 0, 
rx_desc_lim = {
nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, nb_mtu_seg_max = 
0}, tx_desc_lim = {nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, 
nb_mtu_seg_max = 0}, speed_capa = 0, nb_rx_queues = 0, nb_tx_queues = 0}

Thanks,
Nitin


____
From: Damjan Marion (damarion) <damar...@cisco.com<mailto:damar...@cisco.com>>
Sent: Wednesday, November 1, 2017 5:17 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you put a breakpoint on port_type_from_speed_capa and capture dev_info?

I.e:

$ make build debug
(gdb) b port_type_from_speed_capa
(gdb) r

(gdb) p * dev_info

—
Damjan

On 1 Nov 2017, at 12:34, Saxena, Nitin 
<nitin.sax...@cavium.com<mailto:nitin.sax...@cavium.com>> wrote:

Please find show pci output


DBGvpp# show pci
Address        Sock  VID:PID    Link Speed    Driver      Product Name                       Vital Product Data
0000:0b:00.0   0     14e4:16a1  8.0 GT/s x8   bnx2x       OCP 10GbE Dual Port SFP+ Adapter
0000:32:00.1   0     15b3:1013  8.0 GT/s x16  mlx5_core   CX416A - ConnectX-4 QSFP28
0000:13:00.1   0     8086:10c9  2.5 GT/s x4   igb
0000:0b:00.1   0     14e4:16a1  8.0 GT/s x8   bnx2x       OCP 10GbE Dual Port SFP+ Adapter
0000:32:00.0   0     15b3:1013  8.0 GT/s x16  mlx5_core   CX416A - ConnectX-4 QSFP28
0000:13:00.0   0     8086:10c9  2.5 GT/s x4   igb


Just Fyi I am running VPP on aarch64.

Thanks,
Nitin


From: Damjan Marion (damarion) <damar...@cisco.com<mailto:damar...@cisco.com>>
Sent: Wednesday, November 1, 2017 3:09 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you share “show pci” output from VPP?

—
Damjan

On 30 Oct 2017, at 14:22, Saxena, Nitin 
<nitin.sax...@cavium.com<mailto:nitin.sax...@cavium.com>> wrote:

Hi Damjan,

I am still seeing the UnknownEthernet32/0/0/0 interface with the Mellanox
ConnectX-4 NIC. I am using the vpp v17.10 tag. I think the gerrit patch specified
in the following mail is part of the v17.10 release.

Attached logs.

Thanks,
Nitin



From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
<vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io>> on behalf of 
Damjan Marion (damarion) <damar...@cisco.com<mailto:damar...@cisco.com>>
Sent: Wednesday, July 5, 2017 5:38 AM
To: Bernier, Daniel
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] 50GE interface support on VPP

Hi Daniel,

Can you try with this patch?

https://gerrit.fd.io/r/#/c/7418/

Regards,

Damjan

On 4 Jul 2017, at 22:14, Bernier, Daniel 
<daniel.bern...@bell.ca<mailto:daniel.bern...@bell.ca>> wrote:

Hi,

I have ConnectX-4 50GE interfaces running on VPP and for some reason, they 
appear as “Unknown” even when running as 40GE.

localadmin@sm981:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT

Re: [vpp-dev] 50GE interface support on VPP

2017-11-01 Thread Damjan Marion (damarion)

mlx5 dpdk driver is telling us that speed_capa = 0, so no much love here.

You should get at least ETH_LINK_SPEED_50G bit set by dpdk driver.

—
Damjan

On 1 Nov 2017, at 12:55, Saxena, Nitin 
<nitin.sax...@cavium.com<mailto:nitin.sax...@cavium.com>> wrote:

Here is the detail


(gdb) p *dev_info
$1 = {pci_dev = 0x51c4e0, driver_name = 0x76eb38a8 "net_mlx5", if_index = 
8, min_rx_bufsize = 32, max_rx_pktlen = 65536, max_rx_queues = 65535,
  max_tx_queues = 65535, max_mac_addrs = 128, max_hash_mac_addrs = 0, max_vfs = 
0, max_vmdq_pools = 0, rx_offload_capa = 15, tx_offload_capa = 1679, reta_size 
= 512,
  hash_key_size = 40 '(', flow_type_rss_offloads = 0, default_rxconf = 
{rx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'}, 
rx_free_thresh = 0,
rx_drop_en = 0 '\000', rx_deferred_start = 0 '\000'}, default_txconf = 
{tx_thresh = {pthresh = 0 '\000', hthresh = 0 '\000', wthresh = 0 '\000'},
tx_rs_thresh = 0, tx_free_thresh = 0, txq_flags = 0, tx_deferred_start = 0 
'\000'}, vmdq_queue_base = 0, vmdq_queue_num = 0, vmdq_pool_base = 0, 
rx_desc_lim = {
nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, nb_mtu_seg_max = 
0}, tx_desc_lim = {nb_max = 65535, nb_min = 0, nb_align = 1, nb_seg_max = 0, 
nb_mtu_seg_max = 0}, speed_capa = 0, nb_rx_queues = 0, nb_tx_queues = 0}

Thanks,
Nitin


________
From: Damjan Marion (damarion) <damar...@cisco.com<mailto:damar...@cisco.com>>
Sent: Wednesday, November 1, 2017 5:17 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you put a breakpoint on port_type_from_speed_capa and capture dev_info?

I.e:

$ make build debug
(gdb) b port_type_from_speed_capa
(gdb) r

(gdb) p * dev_info

—
Damjan

On 1 Nov 2017, at 12:34, Saxena, Nitin 
<nitin.sax...@cavium.com<mailto:nitin.sax...@cavium.com>> wrote:

Please find show pci output


DBGvpp# show pci
Address        Sock  VID:PID    Link Speed    Driver      Product Name                       Vital Product Data
0000:0b:00.0   0     14e4:16a1  8.0 GT/s x8   bnx2x       OCP 10GbE Dual Port SFP+ Adapter
0000:32:00.1   0     15b3:1013  8.0 GT/s x16  mlx5_core   CX416A - ConnectX-4 QSFP28
0000:13:00.1   0     8086:10c9  2.5 GT/s x4   igb
0000:0b:00.1   0     14e4:16a1  8.0 GT/s x8   bnx2x       OCP 10GbE Dual Port SFP+ Adapter
0000:32:00.0   0     15b3:1013  8.0 GT/s x16  mlx5_core   CX416A - ConnectX-4 QSFP28
0000:13:00.0   0     8086:10c9  2.5 GT/s x4   igb


Just Fyi I am running VPP on aarch64.

Thanks,
Nitin

____
From: Damjan Marion (damarion) <damar...@cisco.com<mailto:damar...@cisco.com>>
Sent: Wednesday, November 1, 2017 3:09 PM
To: Saxena, Nitin
Cc: Bernier, Daniel; vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] 50GE interface support on VPP


Can you share “show pci” output from VPP?

—
Damjan

On 30 Oct 2017, at 14:22, Saxena, Nitin 
<nitin.sax...@cavium.com<mailto:nitin.sax...@cavium.com>> wrote:

Hi Damjan,

I am still seeing the UnknownEthernet32/0/0/0 interface with the Mellanox
ConnectX-4 NIC. I am using the vpp v17.10 tag. I think the gerrit patch specified
in the following mail is part of the v17.10 release.

Attached logs.

Thanks,
Nitin



From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
<vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io>> on behalf of 
Damjan Marion (damarion) <damar...@cisco.com<mailto:damar...@cisco.com>>
Sent: Wednesday, July 5, 2017 5:38 AM
To: Bernier, Daniel
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] 50GE interface support on VPP

Hi Daniel,

Can you try with this patch?

https://gerrit.fd.io/r/#/c/7418/

Regards,

Damjan

On 4 Jul 2017, at 22:14, Bernier, Daniel 
<daniel.bern...@bell.ca<mailto:daniel.bern...@bell.ca>> wrote:

Hi,

I have ConnectX-4 50GE interfaces running on VPP and for some reason, they 
appear as “Unknown” even when running as 40GE.

localadmin@sm981:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
81:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]

localadmin@sm981:~$ ethtool ens1f0
Settings for ens1f0:
Supported ports: [ FIBRE Backplane ]
Supported link modes:   1000baseKX/Full
10000baseKR/Full
40000baseKR4/Full
40000baseCR4/Full
40000baseSR4/Full
40000baseLR4/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Advertised link modes:  

[vpp-dev] MACCHIATObin and VPP

2017-11-01 Thread Damjan Marion (damarion)

If people are interested, there is ongoing work[1] to bring VPP up
on the Marvell MACCHIATObin[2] board, an interesting ARM64 community board with
SFP+ ports.

[1] https://github.com/MarvellEmbeddedProcessors/vpp-marvell
[2] http://macchiatobin.net

— 
Damjan

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] 50GE interface support on VPP

2017-11-01 Thread Damjan Marion (damarion)

Can you share “show pci” output from VPP?

—
Damjan

On 30 Oct 2017, at 14:22, Saxena, Nitin 
<nitin.sax...@cavium.com<mailto:nitin.sax...@cavium.com>> wrote:

Hi Damjan,

I am still seeing the UnknownEthernet32/0/0/0 interface with the Mellanox
ConnectX-4 NIC. I am using the vpp v17.10 tag. I think the gerrit patch specified
in the following mail is part of the v17.10 release.

Attached logs.

Thanks,
Nitin



From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
<vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io>> on behalf of 
Damjan Marion (damarion) <damar...@cisco.com<mailto:damar...@cisco.com>>
Sent: Wednesday, July 5, 2017 5:38 AM
To: Bernier, Daniel
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] 50GE interface support on VPP

Hi Daniel,

Can you try with this patch?

https://gerrit.fd.io/r/#/c/7418/

Regards,

Damjan

On 4 Jul 2017, at 22:14, Bernier, Daniel 
<daniel.bern...@bell.ca<mailto:daniel.bern...@bell.ca>> wrote:

Hi,

I have ConnectX-4 50GE interfaces running on VPP and for some reason, they 
appear as “Unknown” even when running as 40GE.

localadmin@sm981:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
81:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]

localadmin@sm981:~$ ethtool ens1f0
Settings for ens1f0:
Supported ports: [ FIBRE Backplane ]
Supported link modes:   1000baseKX/Full
10000baseKR/Full
40000baseKR4/Full
40000baseCR4/Full
40000baseSR4/Full
40000baseLR4/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Advertised link modes:  1000baseKX/Full
10000baseKR/Full
40000baseKR4/Full
40000baseCR4/Full
40000baseSR4/Full
40000baseLR4/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 40000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Cannot get wake-on-lan settings: Operation not permitted
Current message level: 0x00000004 (4)
   link
Link detected: yes

localadmin@sm981:~$ sudo vppctl show interface
          Name            Idx   State   Counter      Count
UnknownEthernet81/0/0      1     up     rx packets   723257
                                        rx bytes     68599505
                                        tx packets   39495
                                        tx bytes     2093235
                                        drops        723257
                                        ip4          48504
UnknownEthernet81/0/1      2     up     rx packets   723194
                                        rx bytes     68592678
                                        tx packets   39495
                                        tx bytes     2093235
                                        drops        723194
                                        ip4          48504
local0                     0    down


Any ideas where this could be fixed?

Thanks,

Daniel Bernier | Bell Canada

___
vpp-dev mailing list
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vpp_configure_args_vpp = --disable-japi compilation issue

2017-10-03 Thread Damjan Marion (damarion)



On 3 Oct 2017, at 11:47, Avinash Dhar Dubey 
> wrote:

Hello,

I am trying to compile vpp with the flag vpp_configure_args_vpp = --disable-japi
by modifying the file datapath/vpp/build-data/platforms/vpp.mk.
It's resulting in broken deb packages.

Any help on how to disable japi? I want to compile vpp with minimal
dependencies.


deb packaging doesn’t support custom configurations. If you go that way, you
will need to take care of packaging yourself…


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Poor L3/L4 Performance

2017-09-25 Thread Damjan Marion (damarion)

Dear Alessio,

It is hard to guess where the problem is from your description,
but I would not be surprised if your implementation of those graph nodes is
not properly performance tuned.
One missing prefetch can hurt performance really badly.
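
To illustrate what a "missing prefetch" refers to, a hedged sketch of the
standard VPP dual-loop pattern (a generic fragment, not code from this thread):

#include <vlib/vlib.h>

static_always_inline void
example_process_with_prefetch (vlib_main_t * vm, u32 * from, u32 n_left)
{
  while (n_left >= 4)
    {
      vlib_buffer_t *b0, *b1, *p2, *p3;

      /* Prefetch header and data of the buffers for the *next* iteration
         so they are in cache by the time we touch them. */
      p2 = vlib_get_buffer (vm, from[2]);
      p3 = vlib_get_buffer (vm, from[3]);
      vlib_prefetch_buffer_header (p2, LOAD);
      vlib_prefetch_buffer_header (p3, LOAD);
      CLIB_PREFETCH (p2->data, CLIB_CACHE_LINE_BYTES, LOAD);
      CLIB_PREFETCH (p3->data, CLIB_CACHE_LINE_BYTES, LOAD);

      b0 = vlib_get_buffer (vm, from[0]);
      b1 = vlib_get_buffer (vm, from[1]);

      /* ... per-packet work on b0 and b1 goes here ... */
      (void) b0;
      (void) b1;

      from += 2;
      n_left -= 2;
    }
}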

If you are able to share your code I can take a quick look...

Thanks,

Damjan


On 25 Sep 2017, at 07:12, Alessio Silvestro 
> wrote:

Dear all,

I am performing some experiments on VPP in order to get some performance 
metrics for specific applications.

I am working on vpp v17.04.2-2.

In order to have a baseline of my system, I run L2 XConnect (XC) as in 
[https://perso.telecom-paristech.fr/~drossi/paper/vpp-bench-techrep.pdf].

In this case, I can achieve, similarly to the paper, ~13Mpps -- which somewhat
confirms that the
current setup is correct.

I implemented 2 further experiments:

1) L3-Xconnect

I implemented a new node that listens for traffic with specific ether_type with 
the following api:

ethernet_register_input_type(vm, ETHERNET_TYPE_X, my_node.index)

Once the traffic is received, the node sends the traffic directly to l2_output 
without any further processing.

The achieved packet rate is less than 5 Mpps.

2) L4-Xconnect

I implemented another node that listens for UDP traffic on  a specific port 
with the following api:


udp_register_dst_port (vm, UDP_DST_PORT_vxlan, vxlan_input_node.index, 1 /* 
is_ip4 */);

Once the traffic is received, the node sends the traffic directly to l2_output 
without any further processing.

The achieved packet rate is less than 4 Mpps.


The testbed is composed of 2 servers. The first server is running VPP whereas 
the second server runs the traffic generator (packetgen). The servers are 
equipped with Intel NICs capable of dual-port 10 Gbps full-duplex link. 
Generated packets have the size of 64kb.

VPP is configured to run with one main thread and one worker thread. Therefore, 
the previous values are meant for a single CPU-core.

In my opinion those values are a bit too low compared to other state-of-the-art 
approaches.

Do you have any idea why this is happening and, if this is my fault, how I
can fix it?

Thanks,
Alessio

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] net/mlx5: install libmlx5 & libibverbs if no OFED

2017-09-19 Thread Damjan Marion (damarion)
We don’t want official binaries to be linked against libibverbs, forcing
all vpp consumers to install
a subset of OFED packages to get VPP running.

If you are able to statically link all dependencies into dpdk_plugin, then the
mlx4/mlx5 PMDs can be enabled in the default build.
We do a similar thing with the IPsec MB libs...

On 19 Sep 2017, at 09:12, Shachar Beiser 
<shacha...@mellanox.com<mailto:shacha...@mellanox.com>> wrote:
Hi Damjan,

  Can you please explain why dynamic linkage prevents enabling the “mlx4/5 PMDs
as default”?

-Shachar Beiser.

From: Damjan Marion (damarion) [mailto:damar...@cisco.com]
Sent: Tuesday, September 19, 2017 3:23 PM
To: Shachar Beiser <shacha...@mellanox.com<mailto:shacha...@mellanox.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>; Shahaf Shuler 
<shah...@mellanox.com<mailto:shah...@mellanox.com>>
Subject: Re: net/mlx5: install libmlx5 & libibverbs if no OFED



I need to take a deeper look into it. I’m currently on business travel, so it
will take a bit more time.

If I get it right, this still uses dynamically linked libraries, so we cannot
enable the mlx4/5 PMDs as default.
Is that correct?

Thanks.,

Damjan

On 19 Sep 2017, at 06:29, Shachar Beiser 
<shacha...@mellanox.com<mailto:shacha...@mellanox.com>> wrote:

Hi ,

 I have sent a second patch for review. I am waiting for comments.

  -Shachar Beiser.

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] net/mlx5: install libmlx5 & libibverbs if no OFED

2017-09-19 Thread Damjan Marion (damarion)


I need to take a deeper look into it. I’m currently on business travel, so it
will take a bit more time.

If I get it right, this still uses dynamically linked libraries, so we cannot
enable the mlx4/5 PMDs as default.
Is that correct?

Thanks.,

Damjan

On 19 Sep 2017, at 06:29, Shachar Beiser 
> wrote:

Hi ,

 I have sent a second patch for review. I am waiting for comments.

  -Shachar Beiser.

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] physmem rework patch

2017-09-07 Thread Damjan Marion (damarion)

On 7 Sep 2017, at 14:46, Billy McFall 
<bmcf...@redhat.com<mailto:bmcf...@redhat.com>> wrote:

To test, do we need to change anything else with our setup, like remove 
80-vpp.conf?

yes, it should work even without that file.


If I have HugePages_Total set to 8192 via grub, and 80-vpp.conf is set to the
default of 1024, my system should stay at 8192 (provided there are enough free
hugepages), correct?

Yes, VPP will pre-allocate more only if there are no free pages….


Thanks,
Billy

On Thu, Sep 7, 2017 at 6:30 AM, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:

Dear vpp-devers,

As I mentioned on the last community call, there is a patch which significantly
changes
the way VPP allocates wired memory, including the dpdk hugepages.

The patch is available here and it is passing verify jobs:

https://gerrit.fd.io/r/#/c/7701/

With this change, VPP is able to dynamically pre-allocate hugepages if they are
not already available.

This change affects DPDK buffer mempools, as now they are allocated by
VPP, and not directly by DPDK.
I use the dpdk rte_mempool_xmem_create (...) call to pass the allocated wired
memory region to dpdk.

The result is a smaller memory footprint, mainly thanks to the better control of
memory allocation we have.
DPDK still allocates 64M/socket for its internal data structures and VPP
allocates 40MB/socket for
our default number of buffers (16K). In case people want more buffers, it is
enough to increase the num_mbufs parameter
and VPP will increase the size of the mempool automatically, which is a
significant improvement, as currently people need to play
with the socket-mem parameter.

In total, the footprint is reduced from 256M/socket to 104M/socket.

At the moment, the code only deals with 2M pages; support for 1G pages for
extreme VPP consumers will be added in a separate patch.

I would really appreciate it if people could try this patch and report
success/failure. It is important that we test it in different
configurations before it is merged.

Thanks,

Damjan





___
vpp-dev mailing list
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
https://lists.fd.io/mailman/listinfo/vpp-dev



--
Billy McFall
SDN Group
Office of Technology
Red Hat

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] physmem rework patch

2017-09-07 Thread Damjan Marion (damarion)

Dear vpp-devers,

As I mentioned on the last community call, there is a patch which significantly
changes
the way VPP allocates wired memory, including the dpdk hugepages.

The patch is available here and it is passing verify jobs:

https://gerrit.fd.io/r/#/c/7701/

With this change, VPP is able to dynamically pre-allocate hugepages if they are
not already available.

This change affects DPDK buffer mempools, as now they are allocated by
VPP, and not directly by DPDK.
I use the dpdk rte_mempool_xmem_create (...) call to pass the allocated wired
memory region to dpdk.

The result is a smaller memory footprint, mainly thanks to the better control of
memory allocation we have.
DPDK still allocates 64M/socket for its internal data structures and VPP
allocates 40MB/socket for
our default number of buffers (16K). In case people want more buffers, it is
enough to increase the num_mbufs parameter
and VPP will increase the size of the mempool automatically, which is a
significant improvement, as currently people need to play
with the socket-mem parameter.
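
For illustration, a hedged startup.conf fragment showing the knob referred to
above (the value is only an example):

dpdk {
  # ask for more packet buffers; with this patch the mempool and its
  # hugepage backing are sized from this number, no socket-mem tuning needed
  num-mbufs 65536
}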

In total, the footprint is reduced from 256M/socket to 104M/socket.

At the moment, the code only deals with 2M pages; support for 1G pages for
extreme VPP consumers will be added in a separate patch.

I would really appreciate it if people could try this patch and report
success/failure. It is important that we test it in different
configurations before it is merged.

Thanks,

Damjan





___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Hugepage/Memory Allocation Rework

2017-09-06 Thread Damjan Marion (damarion)
Hi Billy,

On 6 Sep 2017, at 16:55, Billy McFall 
> wrote:

Damjan,

On the VPP call yesterday, you described the patch you are working on to rework 
how VPP allocates and uses hugepages. Per request from Jerome Tollet, I wrote 
VPP-958 to document some issues they were 
seeing. I believe your patch will address this issue. I added a comment to the 
JIRA. Is my comment in the JIRA accurate?

Save you from having to follow the link:

Damjan Marion is working on a patch that reworks how VPP uses memory. With the 
patch, VPP will not need to allocate memory using 80-vpp.conf. Instead, when 
VPP is started, it will check to insure there are enough free hugespages for it 
to function. If so, it will not touch the current huge page allocation. If not, 
it will attempt to allocate what it needs.

yes, it will pre-allocate delta.

This patch also reduces the default amount of memory VPP requires. This is a 
fairly big change so it will probably not be merged until after 17.10. I 
believe this patch will address the concerns of this JIRA. I will update this 
JIRA as progress is made.

yes

This may not be the final patch, but here is the current work in progress: 
https://gerrit.fd.io/r/#/c/7701/

yes

Thanks,

Damjan

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] query on hugepages usage in VPP

2017-09-06 Thread Damjan Marion (damarion)

On 6 Sep 2017, at 16:49, Balaji Kn 
<balaji.s...@gmail.com<mailto:balaji.s...@gmail.com>> wrote:

Hi Damjan,

I was trying to create 4k sub-interfaces for an interface and associate each 
sub-interface with vrf and observed a limitation in VPP 17.07 that was 
supporting only 874 VRFs and shared memory was unlinked for 875th VRF.

What do you mean by “shared memory was unlinked” ?
Which shared memory?


I felt this might be because of shortage of heap memory used in VPP and might 
be solved with  increase of huge page memory.

VPP heap is not using hugepages.


Regards,
Balaji

On Wed, Sep 6, 2017 at 7:10 PM, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:

why do you need so much memory? Currently, for default number of buffers (16K 
per socket) VPP needs
around 40MB of hugepage memory so allocating 1G will be huge waste of memory….

Thanks,

Damjan

On 5 Sep 2017, at 11:15, Balaji Kn 
<balaji.s...@gmail.com<mailto:balaji.s...@gmail.com>> wrote:

Hello,

Can you help me on below query related to 1G huge pages usage in VPP.

Regards,
Balaji


On Thu, Aug 31, 2017 at 5:19 PM, Balaji Kn 
<balaji.s...@gmail.com<mailto:balaji.s...@gmail.com>> wrote:
Hello,

I am using v17.07. I am trying to configure huge page size as 1GB and reserve 
16 huge pages for VPP.
I went through /etc/sysctl.d/80-vpp.conf file and found options only for huge 
page of size 2M.

output of vpp-conf file.
.# Number of 2MB hugepages desired
vm.nr_hugepages=1024

# Must be greater than or equal to (2 * vm.nr_hugepages).
vm.max_map_count=3096

# All groups allowed to access hugepages
vm.hugetlb_shm_group=0

# Shared Memory Max must be greator or equal to the total size of hugepages.
# For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
# If the existing kernel.shmmax setting  (cat /sys/proc/kernel/shmmax)
# is greater than the calculated TotalHugepageSize then set this parameter
# to current shmmax value.
kernel.shmmax=2147483648

Please can you let me know configurations i need to do so that VPP runs with 
1GB huge pages.

Host OS is supporting 1GB huge pages.

Regards,
Balaji


___
vpp-dev mailing list
vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
https://lists.fd.io/mailman/listinfo/vpp-dev



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] query on hugepages usage in VPP

2017-09-06 Thread Damjan Marion (damarion)

why do you need so much memory? Currently, for default number of buffers (16K 
per socket) VPP needs
around 40MB of hugepage memory so allocating 1G will be huge waste of memory….

Thanks,

Damjan

On 5 Sep 2017, at 11:15, Balaji Kn 
> wrote:

Hello,

Can you help me on below query related to 1G huge pages usage in VPP.

Regards,
Balaji


On Thu, Aug 31, 2017 at 5:19 PM, Balaji Kn 
> wrote:
Hello,

I am using v17.07. I am trying to configure huge page size as 1GB and reserve 
16 huge pages for VPP.
I went through /etc/sysctl.d/80-vpp.conf file and found options only for huge 
page of size 2M.

output of vpp-conf file.
.# Number of 2MB hugepages desired
vm.nr_hugepages=1024

# Must be greater than or equal to (2 * vm.nr_hugepages).
vm.max_map_count=3096

# All groups allowed to access hugepages
vm.hugetlb_shm_group=0

# Shared Memory Max must be greator or equal to the total size of hugepages.
# For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
# If the existing kernel.shmmax setting  (cat /sys/proc/kernel/shmmax)
# is greater than the calculated TotalHugepageSize then set this parameter
# to current shmmax value.
kernel.shmmax=2147483648

Please can you let me know configurations i need to do so that VPP runs with 
1GB huge pages.

Host OS is supporting 1GB huge pages.

Regards,
Balaji
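
For reference, 1 GB hugepages are normally reserved on the kernel command line
rather than via the 2 MB sysctl settings above. A sketch (values are illustrative,
and whether the dpdk plugin then actually uses the 1 GB pages still depends on its
socket-mem/startup configuration):

  default_hugepagesz=1G hugepagesz=1G hugepages=16

added to the kernel boot parameters (e.g. GRUB_CMDLINE_LINUX), followed by a reboot.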


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Unable to view the code

2017-09-01 Thread Damjan Marion (damarion)

If you send your email in English, someone may be able to help you….


> On 1 Sep 2017, at 15:11, Алексей Болдырев  
> wrote:
> 
> When trying to open: https://gerrit.fd.io/r/
> It shows Working...
> Then:
> Code Review - Error
> Server Unavailable
> 504 Gateway Time-out
> 
> What is the cause?
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Dynamically change number of cores used by VPP?

2017-08-30 Thread Damjan Marion (damarion)

On 30 Aug 2017, at 13:59, Tobias Sundqvist 
> wrote:

Hi, I guess the silence tells that there is no way to dynamically scale the 
number of cores used by VPP once VPP has already started.
If anyone has any idea whether it is possible to dynamically change the way the 
cores are used, just send me an email. Perhaps it could be possible to assign a 
certain core to not poll the interface or something similar; the core 
would still be in use by VPP but the CPU load would go down.

BR /Tobias

On 24 August 2017 at 10:36, Tobias Sundqvist 
> wrote:
Hi, we are building an application that uses vpp and I have some questions 
that concern multi-core usage.

In the run that we are doing now we are using 4 cores and we use some DPDK 
polling of an interface which makes the cpu load go 100% on all cores. We are 
interested in energy consumption in the application we are creating and would 
like to be able to scale up and down the cpu usage during runtime.

Is it possible to dynamically change the number of cores used by VPP or specify 
somehow which cores that are used for polling of an certain interface during 
runtime?

During low and high traffic we would like to scale up and down the amount of 
cores that is used.

If the polling is causing the cpu to go to 100% would it help using Turboboost 
and Speedstep, would it actually lower the frequency when the traffic is low?

BR /Tobias


I did some work to enable interrupt mode with DPDK devices, but it is a bit 
hacky (it is digging some data from dpdk internal structures)
so it is not published. Together with adaptive interrupt/polling mode which VPP 
supports it might be solution to your problem.

Please note that you can define today which worker is polling which interface, 
so you can effectively remove all interfaces from a specific core and that core 
should go to sleep.
This works at runtime without the need to switch the interface off, so it can also 
be a way to address your problem.

See “set interface rx-placement / show interface rx-placement” commands for 
details…
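
For illustration, something like this (interface name and worker index are made up,
and the exact argument syntax may differ between releases):

vpp# show interface rx-placement
vpp# set interface rx-placement TenGigabitEthernet6/0/0 queue 0 worker 1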

Thanks,

Damjan


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] About the order of  VLIB_INIT_FUNCTION called between different plugins

2017-08-30 Thread Damjan Marion (damarion)
Yes, also please note that you can at any time use the vlib_get_plugin_symbol(..) 
function to get a pointer to a symbol in another plugin. If you get NULL then 
the other plugin is not loaded.

So something like this should work, assuming that you want to go that way...

static clib_error_t *
bar_init (vlib_main_t * vm)
{
  clib_error_t *error = 0;

  if (vlib_get_plugin_symbol ("foo_plugin.so", "foo_init") == 0)
    {
      clib_warning ("foo plugin not loaded. bar disabled");
      bar_main.disabled = 1;
      return 0;
    }

  if ((error = vlib_call_init_function (vm, foo_init)))
    return error;

  /* continue with bar init... */
}



> On 30 Aug 2017, at 12:37, Dave Barach (dbarach)  wrote:
> 
> Explicit dependencies between plugins is probably not a good idea. There is 
> little to guarantee that both A and B will be loaded.
>  
> Please describe the use-case in more detail.  
>  
> Thanks… Dave
>  
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
> Behalf Of wang.hu...@zte.com.cn
> Sent: Wednesday, August 30, 2017 4:01 AM
> To: vpp-dev@lists.fd.io
> Cc: zhao.qingl...@zte.com.cn; wu.bi...@zte.com.cn; gu.ji...@zte.com.cn; 
> dong.ju...@zte.com.cn
> Subject: [vpp-dev] About the order of  VLIB_INIT_FUNCTION called between 
> different plugins
>  
> Hi all:
> 
> How to control the order of  VLIB_INIT_FUNCTION (user xxx_init function) 
> called between Different plugins?
> 
> It depends on plugin name?or the sequence of loading plugin ?
> 
>  or is there any other way to adjust the order?
> 
>  
> 
> Thanks~
> 
>  
> 
>  
> 
>  
> 
> 王辉 wanghui
> 
>  
> 
> IT开发工程师 IT Development Engineer
> 虚拟化南京四部/无线研究院/无线产品经营部 NIV Nanjing Dept. IV/Wireless Product R&D 
> Institute/Wireless Product Operation Division
> 
>  
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Duplicate Prefetching of 128 bytes memory.

2017-08-28 Thread Damjan Marion (damarion)


> On 27 Aug 2017, at 12:04, mrityunjay.kum...@wipro.com wrote:
> 
> Dear Team
> I would like to bring to your kind notice the below code of the vpp-1707 dpdk 
> plugin.
> 
> static_always_inline void dpdk_prefetch_buffer_by_index (vlib_main_t * vm, 
> u32 bi)
> {
>   vlib_buffer_t *b;
>   struct rte_mbuf *mb;
>   b = vlib_get_buffer (vm, bi);
>   mb = rte_mbuf_from_vlib_buffer (b);
>   CLIB_PREFETCH (mb, CLIB_CACHE_LINE_BYTES, LOAD);
>   CLIB_PREFETCH (b, CLIB_CACHE_LINE_BYTES, LOAD);
> }
> 
> #define CLIB_PREFETCH(addr,size,type)   \
> do {\
>   void * _addr = (addr);  \
> \
>   ASSERT ((size) <= 4*CLIB_CACHE_LINE_BYTES); \
>   _CLIB_PREFETCH (0, size, type);   \
>   _CLIB_PREFETCH (1, size, type);   \
>   _CLIB_PREFETCH (2, size, type);   \
>   _CLIB_PREFETCH (3, size, type);   \
> } while (0)
> 
> 
> 
> 
> Here , Sizeof(rte_mbuf) = 128 and sizeof(vlib_buffer_t) = 128 + 
> HEAD_ROOM(128)= 256. 
> 
> In the above code, vlib_buffer starts 128 bytes after the start of the rte_mbuf 
> structure. As I understood it, one CLIB_PREFETCH will load 256 bytes from 
> memory, hence the total prefetch is 512 bytes. As per the above code, the first 
> CLIB_PREFETCH will load 256 bytes, which includes the 128 of rte_mbuf + 128 of 
> vlib_buffer as well. The 2nd CLIB_PREFETCH will also load the vlib_buffer, which 
> has already been loaded. 
> 
> I must say there is duplication in prefetching the memory. Please correct me if I am 
> wrong. 

Hi MJ,

Yes, you are wrong. Each invocation of CLIB_PREFETCH in the inline function you 
listed above will prefetch one cacheline, so 64 bytes, not 256.
Please look at _CLIB_PREFETCH macro for details…
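
For reference, the per-cacheline helper in vppinfra/cache.h looks roughly like this
(paraphrased from the source of that era, so treat it as a sketch): only invocations
whose index n fits inside the requested size emit a prefetch, so with
size == CLIB_CACHE_LINE_BYTES only the n == 0 case does anything.

#define _CLIB_PREFETCH(n,size,type)                            \
  if ((size) > (n)*CLIB_CACHE_LINE_BYTES)                      \
    __builtin_prefetch (_addr + (n)*CLIB_CACHE_LINE_BYTES,     \
                        CLIB_PREFETCH_##type,                  \
                        /* locality */ 3);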




___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [discuss] Question about VPP support for ARM 64

2017-08-23 Thread Damjan Marion (damarion)

On 23 Aug 2017, at 06:30, Brian Brooks 
<brian.bro...@arm.com<mailto:brian.bro...@arm.com>> wrote:

Hi Damjan, George,

I just pulled lastest source and tried native build (platforms/vpp.mk) on ARMv8:

 cat: '/sys/bus/pci/devices/0000:00:01.0/uevent': No such file or directory

From dpdk/Makefile,

 ##
 # Intel x86
 ##
 ifeq ($(MACHINE),$(filter $(MACHINE),x86_64 i686))
 DPDK_TARGET   ?= $(MACHINE)-native-linuxapp-$(DPDK_CC)
 DPDK_MACHINE  ?= nhm
 DPDK_TUNE ?= core-avx2
 ##
 # Cavium ThunderX
 ##
  else ifneq (,$(findstring thunder,$(shell cat 
/sys/bus/pci/devices/0000:00:01.0/uevent | grep cavium)))
 export CROSS=""
 DPDK_TARGET   ?= arm64-thunderx-linuxapp-$(DPDK_CC)
 DPDK_MACHINE  ?= thunderx
 DPDK_TUNE ?= generic

So, I am thinking we need to modify this to support MACHINE=aarch64 and possibly
rework thunder detection to not fail hard on non-thunder machines.

Yes, unfortunately I don’t have a non-thunder system to take care of this, but it 
should be easy.
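
A rough sketch of what that could look like in dpdk/Makefile (untested assumption,
reusing the variable names from the excerpt above and falling back to DPDK's generic
armv8a target when the Cavium uevent match fails):

else ifeq ($(MACHINE),aarch64)
ifneq (,$(findstring cavium,$(shell cat /sys/bus/pci/devices/0000:00:01.0/uevent 2>/dev/null)))
DPDK_TARGET   ?= arm64-thunderx-linuxapp-$(DPDK_CC)
DPDK_MACHINE  ?= thunderx
else
DPDK_TARGET   ?= arm64-armv8a-linuxapp-$(DPDK_CC)
DPDK_MACHINE  ?= armv8a
endif
DPDK_TUNE     ?= generic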

Another thing which needs attention is proper cacheline size detection during the 
vpp build. ThunderX has a 128-byte cacheline
and others are 64 if I get it right. Last time I looked there was no way 
to find it out from sysfs, but maybe new kernels
expose that info.
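
On kernels that populate the cacheinfo nodes this can be read with something like
(assumption: the platform actually exposes cacheinfo in sysfs):

  cat /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size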


Regards,
Brian

On 08/22 17:55:20, George Zhao wrote:
Thanks Demjan,

Confirmed that your patches worked on our system as well.

George

From: Damjan Marion (damarion) [mailto:damar...@cisco.com]
Sent: Tuesday, August 22, 2017 5:03 AM
To: George Zhao
Cc: Dave Barach (dbarach); discuss; csit-dev; vpp-dev
Subject: Re: [vpp-dev] [discuss] Question about VPP support for ARM 64

Dear George,

I tried on my Cavium ThunderX system with the latest Ubuntu and after fixing a few 
minor issues (all patches submitted to master) I got VPP running.
I use the latest Ubuntu devel (17.10, mainly as I upgraded to a new kernel in my 
attempts to get the system working).

For me it is hard to help you with your particular system, as I don’t have 
access to a similar one, but my guess is that it shouldn’t be too hard to get it 
working.

Thanks,

Damjan

On 20 Aug 2017, at 23:12, George Zhao 
<george.y.z...@huawei.com> wrote:

Hi Damian,

IT is Applied Micro overdrive 1000, here are the uname -a output:

$>> uname -a
Linux OD1K 4.4.0-92-generic #115-Ubuntu SMP Thu Aug 10 09:10:33 UTC 2017 
aarch64 aarch64 aarch64 GNU/Linux

thanks
George
From: Damjan Marion (damarion)
To: George Zhao
Cc: dbarach, discuss, csit-dev, vpp-dev
Date: 2017-08-20 10:03:27
Subject: Re: [vpp-dev] [discuss] Question about VPP support for ARM 64



George, are you using ThunderX platform?

I spent few hours today trying to install latest ubuntu on my ThunderX system 
but no luck, kernel hangs at some point, both ubuntu provided and manually 
compiled.

Can you share about more details about your system?

Thanks,

Damjan



On 19 Aug 2017, at 22:48, George Zhao 
<george.y.z...@huawei.com> wrote:

If a bug is filed, may I have the bug number, I would be love to trace this 
patch.

BTW, how do I file a bug for VPP, I did a quick wiki search with no luck.

Thanks,
George

-Original Message-
From: Dave Barach (dbarach) [mailto:dbar...@cisco.com]
Sent: Saturday, August 19, 2017 7:42 AM
To: George Zhao <george.y.z...@huawei.com>
Cc: vpp-dev@lists.fd.io; disc...@lists.fd.io; csit-...@lists.fd.io; 
Damjan Marion (damarion) <damar...@cisco.com>
Subject: RE: [discuss] Question about VPP support for ARM 64

+1, pls add the typedef...

Thanks… Dave

-Original Message-
From: Damjan Marion (damarion)
Sent: Saturday, August 19, 2017 9:09 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: George Zhao <george.y.z...@huawei.com>; vpp-dev@lists.fd.io; 
disc...@lists.fd.io; csit-...@lists.fd.io
Subject:

Re: [vpp-dev] [discuss] Question about VPP support for ARM 64

2017-08-22 Thread Damjan Marion (damarion)
Dear George,

I tried on my Cavium ThunderX system with the latest Ubuntu and after fixing a few 
minor issues (all patches submitted to master) I got VPP running.
I use the latest Ubuntu devel (17.10, mainly as I upgraded to a new kernel in my 
attempts to get the system working).

For me it is hard to help you with your particular system, as I don’t have 
access to a similar one, but my guess is that it shouldn’t be too hard to get it 
working.

Thanks,

Damjan

On 20 Aug 2017, at 23:12, George Zhao 
<george.y.z...@huawei.com<mailto:george.y.z...@huawei.com>> wrote:

Hi Damian,

IT is Applied Micro overdrive 1000, here are the uname -a output:

$>> uname -a
Linux OD1K 4.4.0-92-generic #115-Ubuntu SMP Thu Aug 10 09:10:33 UTC 2017 
aarch64 aarch64 aarch64 GNU/Linux

thanks
George
From: Damjan Marion (damarion)
To: George Zhao
Cc: dbarach, discuss, csit-dev, vpp-dev
Date: 2017-08-20 10:03:27
Subject: Re: [vpp-dev] [discuss] Question about VPP support for ARM 64



George, are you using ThunderX platform?

I spent few hours today trying to install latest ubuntu on my ThunderX system 
but no luck, kernel hangs at some point, both ubuntu provided and manually 
compiled.

Can you share about more details about your system?

Thanks,

Damjan



> On 19 Aug 2017, at 22:48, George Zhao 
> <george.y.z...@huawei.com<mailto:george.y.z...@huawei.com>> wrote:
>
> If a bug is filed, may I have the bug number, I would be love to trace this 
> patch.
>
> BTW, how do I file a bug for VPP, I did a quick wiki search with no luck.
>
> Thanks,
> George
>
> -Original Message-
> From: Dave Barach (dbarach) [mailto:dbar...@cisco.com]
> Sent: Saturday, August 19, 2017 7:42 AM
> To: George Zhao <george.y.z...@huawei.com>
> Cc: vpp-dev@lists.fd.io; disc...@lists.fd.io; csit-...@lists.fd.io; 
> Damjan Marion (damarion) <damar...@cisco.com>
> Subject: RE: [discuss] Question about VPP support for ARM 64
>
> +1, pls add the typedef...
>
> Thanks… Dave
>
> -Original Message-
> From: Damjan Marion (damarion)
> Sent: Saturday, August 19, 2017 9:09 AM
> To: Dave Barach (dbarach) <dbar...@cisco.com>
> Cc: George Zhao <george.y.z...@huawei.com>; vpp-dev@lists.fd.io; 
> disc...@lists.fd.io; csit-...@lists.fd.io
> Subject: Re: [discuss] Question about VPP support for ARM 64
>
>
> GCC is able to compile ARM64 code with 256-bit vectors even if target 
> platform have only 128-bit registers.
>
> I.e. for the u8x32 version of that function it generates:
>
> ARM64:
> dpdk_buffer_init_from_template(void*, void*, void*, void*, void*):
>ld1 {v0.16b - v1.16b}, [x4], 32
>st1 {v0.16b - v1.16b}, [x3], 32
>st1 {v0.16b - v1.16b}, [x2], 32
>st1 {v0.16b - v1.16b}, [x1], 32
>st1 {v0.16b - v1.16b}, [x0], 32
>ld1 {v0.16b - v1.16b}, [x4]
>st1 {v0.16b - v1.16b}, [x3]
>st1 {v0.16b - v1.16b}, [x2]
>st1 {v0.16b - v1.16b}, [x1]
>st1 {v0.16b - v1.16b}, [x0]
>ret
>
> intel x86-64 without AVX2:
>
> dpdk_buffer_init_from_template(void*, void*, void*, void*, void*):
> push   %rbp
> mov%rsp,%rbp
> and$0xffe0,%rsp
> lea0x10(%rsp),%rsp
> movdqa (%r8),%xmm1
> movdqa 0x10(%r8),%xmm0
> movdqa %xmm0,0x10(%rcx)
> movdqa %xmm1,(%rcx)
> movdqa %xmm1,(%rdx)
> movdqa %xmm0,0x10(%rdx)
> movdqa %xmm1,(%rsi)
> movdqa %xmm0,0x10(%rsi)
> movdqa %xmm1,(%rdi)
> movdqa %xmm0,0x10(%rdi)
> movdqa 0x20(%r8),%xmm1
> movdqa 0x30(%r8),%xmm0
> movdqa %xmm0,0x30(%rcx)
> movdqa %xmm1,0x20(%rcx)
> movdqa %xmm1,0x20(%rdx)
> movdqa %xmm0,0x30(%rdx)
> movdqa %xmm1,0x20(%rsi)
> movdqa %xmm0,0x30(%rsi)
> movdqa %xmm1,0x20(%rdi)
> movdqa %xmm0,0x30(%rdi)
> leaveq
> retq
>
>
> So i think here it is only about missing typedef….
>
>
>> On 19 Aug 2017, at 14:51, Dave Barach (dbarach) 
>> <dbar...@cisco.com<mailto:dbar...@cisco.com>> wrote:
>>
>> Dear George,
>>
>> This specific issue isn’t anywhere near as bad as you might think. As given, 
>> the code confuses 128-bit vectors with 256-bit vectors, and 64-bit vectors 
>> with 128-bit vectors.
>>
>> Question: does the hardware involved support 256-bit vectors? Probably 
>> not... It almost certainly does support 128-bit vectors.
>>
>> To make progress, use th

Re: [vpp-dev] [discuss] Question about VPP support for ARM 64

2017-08-20 Thread Damjan Marion (damarion)


George, are you using ThunderX platform?

I spent few hours today trying to install latest ubuntu on my ThunderX system 
but no luck, kernel hangs at some point, both ubuntu provided and manually 
compiled.

Can you share about more details about your system?

Thanks,

Damjan



> On 19 Aug 2017, at 22:48, George Zhao <george.y.z...@huawei.com> wrote:
> 
> If a bug is filed, may I have the bug number, I would be love to trace this 
> patch.
> 
> BTW, how do I file a bug for VPP, I did a quick wiki search with no luck.
> 
> Thanks,
> George
> 
> -Original Message-
> From: Dave Barach (dbarach) [mailto:dbar...@cisco.com] 
> Sent: Saturday, August 19, 2017 7:42 AM
> To: George Zhao <george.y.z...@huawei.com>
> Cc: vpp-dev@lists.fd.io; disc...@lists.fd.io; csit-...@lists.fd.io; Damjan 
> Marion (damarion) <damar...@cisco.com>
> Subject: RE: [discuss] Question about VPP support for ARM 64
> 
> +1, pls add the typedef...
> 
> Thanks… Dave
> 
> -Original Message-
> From: Damjan Marion (damarion) 
> Sent: Saturday, August 19, 2017 9:09 AM
> To: Dave Barach (dbarach) <dbar...@cisco.com>
> Cc: George Zhao <george.y.z...@huawei.com>; vpp-dev@lists.fd.io; 
> disc...@lists.fd.io; csit-...@lists.fd.io
> Subject: Re: [discuss] Question about VPP support for ARM 64
> 
> 
> GCC is able to compile ARM64 code with 256-bit vectors even if target 
> platform have only 128-bit registers.
> 
> I.e. for the u8x32 version of that function it generates:
> 
> ARM64:
> dpdk_buffer_init_from_template(void*, void*, void*, void*, void*):
>ld1 {v0.16b - v1.16b}, [x4], 32
>st1 {v0.16b - v1.16b}, [x3], 32
>st1 {v0.16b - v1.16b}, [x2], 32
>st1 {v0.16b - v1.16b}, [x1], 32
>st1 {v0.16b - v1.16b}, [x0], 32
>ld1 {v0.16b - v1.16b}, [x4]
>st1 {v0.16b - v1.16b}, [x3]
>st1 {v0.16b - v1.16b}, [x2]
>st1 {v0.16b - v1.16b}, [x1]
>st1 {v0.16b - v1.16b}, [x0]
>ret
> 
> intel x86-64 without AVX2:
> 
> dpdk_buffer_init_from_template(void*, void*, void*, void*, void*):
> push   %rbp
> mov%rsp,%rbp
> and$0xffe0,%rsp
> lea0x10(%rsp),%rsp
> movdqa (%r8),%xmm1
> movdqa 0x10(%r8),%xmm0
> movdqa %xmm0,0x10(%rcx)
> movdqa %xmm1,(%rcx)
> movdqa %xmm1,(%rdx)
> movdqa %xmm0,0x10(%rdx)
> movdqa %xmm1,(%rsi)
> movdqa %xmm0,0x10(%rsi)
> movdqa %xmm1,(%rdi)
> movdqa %xmm0,0x10(%rdi)
> movdqa 0x20(%r8),%xmm1
> movdqa 0x30(%r8),%xmm0
> movdqa %xmm0,0x30(%rcx)
> movdqa %xmm1,0x20(%rcx)
> movdqa %xmm1,0x20(%rdx)
> movdqa %xmm0,0x30(%rdx)
> movdqa %xmm1,0x20(%rsi)
> movdqa %xmm0,0x30(%rsi)
> movdqa %xmm1,0x20(%rdi)
> movdqa %xmm0,0x30(%rdi)
> leaveq 
> retq   
> 
> 
> So i think here it is only about missing typedef….
> 
> 
>> On 19 Aug 2017, at 14:51, Dave Barach (dbarach) <dbar...@cisco.com> wrote:
>> 
>> Dear George,
>> 
>> This specific issue isn’t anywhere near as bad as you might think. As given, 
>> the code confuses 128-bit vectors with 256-bit vectors, and 64-bit vectors 
>> with 128-bit vectors.
>> 
>> Question: does the hardware involved support 256-bit vectors? Probably 
>> not... It almost certainly does support 128-bit vectors.
>> 
>> To make progress, use the known-good u8x16 / 128-bit vector code:   
>> 
>> static_always_inline void
>> dpdk_buffer_init_from_template (void *d0, void *d1, void *d2, void *d3,
>>  void *s)
>> {
>> #if defined(CLIB_HAVE_VEC128)
>>  int i;
>>  for (i = 0; i < 4; i++)
>>{
>>  *(u8x16 *) (((u8 *) d0) + i * 16) =
>> *(u8x16 *) (((u8 *) d1) + i * 16) =
>> *(u8x16 *) (((u8 *) d2) + i * 16) =
>> *(u8x16 *) (((u8 *) d3) + i * 16) = *(u8x16 *) (((u8 *) s) + i * 16);
>>}
>> #else
>> #error "CLIB_HAVE_VEC128 has to be defined"
>> #endif
>> }
>> 
>> Responsible parties - they know who they are - will be back from PTO 
>> shortly. We need to clean up / create CLIB_HAVE_VEC_256 and move the 256-bit 
>> vector engine code...
>> 
>> You could also try adding “typedef u8 u8x32 _vector_size(32)” but I somehow 
>> doubt that will produce anything other than a compiler error.
>> 
>> HTH… Dave
>> 
>> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
>> Behalf Of George Zhao
>> Sent: Friday, August 18, 2017 7:32 PM
>> To: 'vpp-dev@lists.fd.io' <vpp-dev@lists.fd.io>; 'disc...@lists.fd.io' 
>> <disc...@lists.fd.io>; 'csit-...@lists.fd.io' <csit-

Re: [vpp-dev] [discuss] Question about VPP support for ARM 64

2017-08-19 Thread Damjan Marion (damarion)

GCC is able to compile ARM64 code with 256-bit vectors even if the target platform 
has only 128-bit registers.

I.e. for the u8x32 version of that function it generates:

ARM64:
dpdk_buffer_init_from_template(void*, void*, void*, void*, void*):
ld1 {v0.16b - v1.16b}, [x4], 32
st1 {v0.16b - v1.16b}, [x3], 32
st1 {v0.16b - v1.16b}, [x2], 32
st1 {v0.16b - v1.16b}, [x1], 32
st1 {v0.16b - v1.16b}, [x0], 32
ld1 {v0.16b - v1.16b}, [x4]
st1 {v0.16b - v1.16b}, [x3]
st1 {v0.16b - v1.16b}, [x2]
st1 {v0.16b - v1.16b}, [x1]
st1 {v0.16b - v1.16b}, [x0]
ret

intel x86-64 without AVX2:

dpdk_buffer_init_from_template(void*, void*, void*, void*, void*):
 push   %rbp
 mov%rsp,%rbp
 and$0xffe0,%rsp
 lea0x10(%rsp),%rsp
 movdqa (%r8),%xmm1
 movdqa 0x10(%r8),%xmm0
 movdqa %xmm0,0x10(%rcx)
 movdqa %xmm1,(%rcx)
 movdqa %xmm1,(%rdx)
 movdqa %xmm0,0x10(%rdx)
 movdqa %xmm1,(%rsi)
 movdqa %xmm0,0x10(%rsi)
 movdqa %xmm1,(%rdi)
 movdqa %xmm0,0x10(%rdi)
 movdqa 0x20(%r8),%xmm1
 movdqa 0x30(%r8),%xmm0
 movdqa %xmm0,0x30(%rcx)
 movdqa %xmm1,0x20(%rcx)
 movdqa %xmm1,0x20(%rdx)
 movdqa %xmm0,0x30(%rdx)
 movdqa %xmm1,0x20(%rsi)
 movdqa %xmm0,0x30(%rsi)
 movdqa %xmm1,0x20(%rdi)
 movdqa %xmm0,0x30(%rdi)
 leaveq 
 retq   


So I think here it is only about a missing typedef….
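
A minimal sketch of what such typedefs could look like next to the existing aarch64
block in vppinfra/vector.h (hypothetical addition, mirroring the x86 256-bit names):

typedef u8 u8x32 _vector_size (32);
typedef u16 u16x16 _vector_size (32);
typedef u32 u32x8 _vector_size (32);
typedef u64 u64x4 _vector_size (32);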


> On 19 Aug 2017, at 14:51, Dave Barach (dbarach)  wrote:
> 
> Dear George,
>  
> This specific issue isn’t anywhere near as bad as you might think. As given, 
> the code confuses 128-bit vectors with 256-bit vectors, and 64-bit vectors 
> with 128-bit vectors.
>  
> Question: does the hardware involved support 256-bit vectors? Probably not... 
> It almost certainly does support 128-bit vectors.
>  
> To make progress, use the known-good u8x16 / 128-bit vector code:   
>  
> static_always_inline void
> dpdk_buffer_init_from_template (void *d0, void *d1, void *d2, void *d3,
>   void *s)
> {
> #if defined(CLIB_HAVE_VEC128)
>   int i;
>   for (i = 0; i < 4; i++)
> {
>   *(u8x16 *) (((u8 *) d0) + i * 16) =
>  *(u8x16 *) (((u8 *) d1) + i * 16) =
>  *(u8x16 *) (((u8 *) d2) + i * 16) =
>  *(u8x16 *) (((u8 *) d3) + i * 16) = *(u8x16 *) (((u8 *) s) + i * 16);
> }
> #else
> #error "CLIB_HAVE_VEC128 has to be defined"
> #endif
> }
>  
> Responsible parties - they know who they are - will be back from PTO shortly. 
> We need to clean up / create CLIB_HAVE_VEC_256 and move the 256-bit vector 
> engine code...
>  
> You could also try adding “typedef u8 u8x32 _vector_size(32)” but I somehow 
> doubt that will produce anything other than a compiler error.
>  
> HTH… Dave
>  
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
> Behalf Of George Zhao
> Sent: Friday, August 18, 2017 7:32 PM
> To: 'vpp-dev@lists.fd.io' ; 'disc...@lists.fd.io' 
> ; 'csit-...@lists.fd.io' 
> Subject: [vpp-dev] Question about VPP support for ARM 64
>  
> We encounter following issues while trying to build VPP over ARM 64. It seems 
> right now only ARM32 are supported in the code. I list the steps we tried and 
> hope VPP folks can help us work around this issue.
>  
> Steps: 
> 1. install Ubuntu 16.04 on OD1K  
> $>> uname -a
> Linux OD1K 4.4.0-92-generic #115-Ubuntu SMP Thu Aug 10 09:10:33 UTC 2017 
> aarch64 aarch64 aarch64 GNU/Linux
>  
> 2. git clone VPP 17.04 and build VPP
> ## Error:
> make[2]: Entering directory '/home/huawei/GIT/vpp.1704/dpdk'
> cat: '/sys/bus/pci/devices/0000:00:01.0/uevent': No such file or directory
>  
> **Work around to bypass MakeFile:
> ##
> # Cavium ThunderX
> ##
> #else ifneq (,$(findstring thunder,$(shell cat 
> /sys/bus/pci/devices/0000:00:01.0/uevent | grep cavium)))
> else
> export CROSS=""
> DPDK_TARGET   ?= arm64-thunderx-linuxapp-$(DPDK_CC)
> DPDK_MACHINE  ?= thunderx
> DPDK_TUNE ?= generic
>  
> 3. Then,  make build and failed following:
> /home/huawei/GIT/vpp.1704/build-data/../src/plugins/dpdk/device/node.c:276:9: 
> error: `u8x32' undeclared (first use in this function)
>*(u8x32 *) (((u8 *) d0) + i * 32) =
>  
> ** Check vppinfra/vppinfra/vector.h   and don’t find u8x32 with “aarch64”
> #if defined (__aarch64__) || defined (__arm__)
> typedef unsigned int u32x4 _vector_size (16);
> typedef u8 u8x16 _vector_size (16);
> typedef u16 u16x8 _vector_size (16);
> typedef u32 u32x4 _vector_size (16);
> typedef u64 u64x2 _vector_size (16);
> #endif
>  
> 4. According  https://wiki.fd.io/view/VPP/Alternative_builds
> The VPP seems to support arm32 only .
> export PLATFORM=arm32
>  
>  
> *Questions:
> Did I miss some steps or should include other header files that defines u8x32?
>  
>  
> 

Re: [vpp-dev] 50GE interface support on VPP

2017-07-04 Thread Damjan Marion (damarion)
Hi Daniel,

Can you try with this patch?

https://gerrit.fd.io/r/#/c/7418/

Regards,

Damjan

On 4 Jul 2017, at 22:14, Bernier, Daniel 
> wrote:

Hi,

I have ConnectX-4 50GE interfaces running on VPP and for some reason, they 
appear as “Unknown” even when running as 40GE.

localadmin@sm981:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
81:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]

localadmin@sm981:~$ ethtool ens1f0
Settings for ens1f0:
Supported ports: [ FIBRE Backplane ]
Supported link modes:   1000baseKX/Full
                        10000baseKR/Full
                        40000baseKR4/Full
                        40000baseCR4/Full
                        40000baseSR4/Full
                        40000baseLR4/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Advertised link modes:  1000baseKX/Full
                        10000baseKR/Full
                        40000baseKR4/Full
                        40000baseCR4/Full
                        40000baseSR4/Full
                        40000baseLR4/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 40000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Cannot get wake-on-lan settings: Operation not permitted
Current message level: 0x0004 (4)
   link
Link detected: yes

localadmin@sm981:~$ sudo vppctl show interface
              Name               Idx       State          Counter          Count
UnknownEthernet81/0/0              1         up       rx packets            723257
                                                      rx bytes            68599505
                                                      tx packets             39495
                                                      tx bytes             2093235
                                                      drops                 723257
                                                      ip4                    48504
UnknownEthernet81/0/1              2         up       rx packets            723194
                                                      rx bytes            68592678
                                                      tx packets             39495
                                                      tx bytes             2093235
                                                      drops                 723194
                                                      ip4                    48504
local0                             0        down


Any ideas where this could be fixed?

Thanks,

Daniel Bernier | Bell Canada

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] DPDK PMD

2017-06-29 Thread Damjan Marion (damarion)

> On 27 Jun 2017, at 20:07, Burt Silverman  wrote:
> 
> I came across the idea of running DPDK in non poll mode for low power/albeit 
> lower performance, but I don't remember where. I am just wondering if anyone 
> in VPP has done that, and if you have an easy way to configure that when 
> running VPP. Thanks.

I have some preliminary code which allows DPDK drivers to work in interrupt 
mode. 
Unfortunately I hit some issues with interrupt support in DPDK, so I’m not sure 
if that stuff is ready for prime time.
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] fatal error: rte_config.h: No such file or directory

2017-06-29 Thread Damjan Marion (damarion)

> On 28 Jun 2017, at 11:00, Samuel S  wrote:
> 
> I need to include dpdk.h from plugins/dpdk/device/
> but when I include this header the compiler gives this error:
> fatal error: rte_config.h: No such file or directory
> #include 
>  
> How can I fix this problem?

Can you provide whole build sequence you are using?

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] debug own plugin

2017-06-29 Thread Damjan Marion (damarion)

> On 29 Jun 2017, at 11:25, Tobias Sundqvist  wrote:
> 
> Hi, I am developing a crypto node using vpp (version 17.02) on Ubuntu. I first 
> set up the nodes that I am going to use and it works fine, just forwarding the 
> packets as it should.
> But now I have implemented some crypto functions inside the nodes and also 
> added a new node of type process that should initialize some crypto parts 
> before the nodes can be used.
> 
> My problem is now that vpp crashes during startup when it loads my plugin and 
> I cannot see why in the log. I only see:
> 
> load_one_plugin:184: Loaded plugin: gtpu.so (Encapsulates packets with a GTPU 
> header.)
> vpp[11278]: received signal SIGSEGV, PC 0x7f6b09e1785a, faulting address 
> 0x7f6ac9471ff0
> 
> and vpp has not been started (with the latest changes in my plugin vpp starts 
> normally)
> 
> If I start vpp with gdb then I can step from the main program but I can never 
> reach my plugin code and I cannot set breakpoints inside the process 
> function, it will never reach that code.
> 
> Is there some way to debug my own plugin or get more information in the vpp 
> log.

Have you set your plugin path properly when you start vpp from gdb? i.e.

gdb> r unix interactive plugin_path /path/to/plugin/dir
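
A slightly fuller sketch of such a session (paths and the breakpoint symbol are
hypothetical):

gdb --args /usr/bin/vpp unix interactive plugin_path /path/to/plugin/dir
(gdb) break my_crypto_process_init   # answer 'y' to make the breakpoint pending
(gdb) run
(gdb) bt                             # after the SIGSEGV, inspect the backtrace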

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Performance of VPP bridge with Mellanox 40G NIC

2017-06-21 Thread Damjan Marion (damarion)
Hi Vladimir,

> On 21 Jun 2017, at 15:41, Vladimir Torgovitsky  
> wrote:
> 
> Hi,
> I'm testing performance of different NICs and have an issue with Mellanox 40G 
> ConnectX-4 device, where VPP performance seem to be similar to Linux and 
> doesn't improve.
> I am testing throughput of UDP traffic, packets of size 64B, 128B and 1518B, 
> on single core.
> Two questions:
>1. Is there any performance report of 40G NICs (any vendor)?

I can share my numbers with the intel XL710. It is around 18Mpps on a single physical 
core on a broadwell 3.2GHz cpu without TurboBoost.
This is with the standard IPv4 forwarding path (driver rx, mandatory IP checks, fib 
lookup, l2 rewrite, tx).

Also look at CSIT wiki for more perf reports...


>2. Anyone familiar with a way of compiling/configuring VPP with Mellanox 
> drivers? Any special flags to enable?

Due to OFED dependency we do not build VPP packages with MLX support. To build 
VPP by yourself you need to:

1. install OFED packages (libmlx*, libibverbs*, kernel dkms)
2. build DPDK development package with MLX support
   make dpdk-install-dev DPDK_MLX5_PMD=y

3. uncomment following line in build-data/platforms/vpp.mk
# vpp_uses_dpdk_mlx5_pmd = yes

4. build and install vpp 

> 
> Thanks,
> Vladimir
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Dreaded MLX5

2017-06-08 Thread Damjan Marion (damarion)


> On 7 Jun 2017, at 17:18, Bernier, Daniel  wrote:
> 
> Hi,
>  
> Maybe someone has seen this before. Trying to compile stable branch on in 
> order to support ConnectX-5 interfaces.
>  
> -  Installed MLNX_OFED on the host
> -  Created a container with all the required packages to compile 
> locally (not in vagrant VM).
> -  Running the container in “privileged mode”
> -  Cloned stable/1704 branch
>  
> But I get the following error messages https://pastebin.com/eA2nLt5F
>  
> In the hope that someone has caught this before

Do you have OFED packages installed inside container? Looks like you are 
missing libmlx5-dev….
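
A quick way to check from inside the container, assuming a Debian/Ubuntu base image
with the OFED userspace packages available to apt (both assumptions):

dpkg -l | grep -E 'libibverbs|libmlx5'
apt-get install libibverbs-dev libmlx5-dev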

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [FD.io Helpdesk #41302] Missing jvpp 1704 artifacts in fd.io.snapshot repository

2017-06-07 Thread Damjan Marion (damarion)

Personally I think that jar filenames should not contain the dot release, as dot 
releases are bugfix-only releases and should be drop-in replacements.

If you still want to go that way, please use version script.


On 6 Jun 2017, at 18:09, Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at 
Cisco) > wrote:

Hi,

could we bump vpp version in stable/1704 branch
to 17.04.2 in order to match next release version?

Here is patch:
https://gerrit.fd.io/r/#/c/7031/

The reason is to have correct version of jvpp artifacts in nexus (more details 
below).

Regards,
Marek

-Original Message-
From: Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Sent: 6 czerwca 2017 17:21
To: 
'fdio-helpd...@rt.linuxfoundation.org'
 
>
Cc: Ed Warnicke (eaw) >
Subject: RE: [FD.io Helpdesk #41302] Missing jvpp 1704 artifacts 
in fd.io.snapshot repository

Thanks for information! We will ask vpp and nsh projects to update version of 
jvpp artifacts to 17.04.1-SNAPSHOT

Marek

-Original Message-
From: Andrew Grimberg via RT [mailto:fdio-helpd...@rt.linuxfoundation.org]
Sent: 2 czerwca 2017 16:56
To: Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco) 
>
Cc: Ed Warnicke (eaw) >
Subject: Re: [FD.io Helpdesk #41302] Missing jvpp 1704 artifacts 
in fd.io.snapshot repository

Greetings folks,

Nexus automatically cleans up _any_ snapshot for that exists in the snapshot 
repositories that has the _same_ version as a released artifact. This is part 
of the design of nexus.

Since io.fd.nsh_sfc:nsh-sfc:jar:17.04 is in the release repository it will 
clean up any of the artifacts in the snapshot repository on a daily basis that 
match io.fd.nsh_sfc:nsh-sfc:jar:17.04-SNAPSHOT

This goes for any other artifact that you've done a release of. Your only 
proper way forward is to either start depending on the released version _or_ 
the new SNAPSHOT version.

-Andy-

On 06/01/2017 10:22 PM, mgrad...@cisco.com via RT 
wrote:

Hi,

It looks like the vpp snapshots have been removed again.

Any idea why it happened?

Marek


-Original Message-
From: Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Sent: 31 maja 2017 16:56
To: 
'fdio-helpd...@rt.linuxfoundation.org'
>
Cc: Ed Warnicke (eaw) >
Subject: RE: [FD.io Helpdesk #41302] Missing jvpp 1704 artifacts 
in
fd.io.snapshot repository

Thanks. It looks like we have the same issue with nsh:

https://nexus.fd.io/content/repositories/fd.io.snapshot/io/fd/nsh_sfc/
nsh-sfc/

recheck of

https://gerrit.fd.io/r/#/c/6659/2

failed

https://jenkins.fd.io/job/hc2vpp-verify-1704-centos7/52/console

because of missing io.fd.nsh_sfc:nsh-sfc:jar:17.04-SNAPSHOT

Regards,
Marek

-Original Message-
From: Vanessa Valderrama via RT
[mailto:fdio-helpd...@rt.linuxfoundation.org]
Sent: 31 maja 2017 16:37
To: Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)

Cc: Ed Warnicke (eaw) 
Subject: [FD.io Helpdesk #41302] Missing jvpp 1704 artifacts in
fd.io.snapshot repository

Marek,

I issued a remerge and the artifacts are in the repo.  I an still investigating 
why they were removed.


On Wed May 31 04:38:22 2017, mgrad...@cisco.com wrote:
Hi,

I've noticed that vpp 17.04 artifacts were removed from

https://nexus.fd.io/content/repositories/fd.io.snapshot/io/fd/vpp/**/

This causes hc2vpp stable/1704 build failures.
Do you know why that happened and how we can prevent it from
happening in the future?

Ed: could you please remerge https://gerrit.fd.io/r/#/c/6880/ ?

Regards,
Marek





--
Andrew J Grimberg
Lead, IT Release Engineering
The Linux Foundation


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP/How To Build The Sample Plugin

2017-05-31 Thread Damjan Marion (damarion)

Something like:

https://gerrit.fd.io/r/#/c/6962/


> On 31 May 2017, at 18:50, Damjan Marion (damarion) <damar...@cisco.com> wrote:
> 
> What about:
> 
> export VPP_WITH_SAMPLE_PLUGIN=yes
> make build
> make run
> 
> Does this work for you?
> 
>> On 31 May 2017, at 18:45, Kinsella, Ray <ray.kinse...@intel.com> wrote:
>> 
>> I think the idea that a user needs to go all the effort to install VPP and 
>> then Sample plugin, in order to run trivial sample code as way too much work.
>> 
>> This is something we expect, they are going to copy and base a new plugin 
>> off. So why we would bother making them go to all the effort to install it 
>> on their system just to play with it.
>> 
>> Either way, I don't see this being reconciled, so I consigned the patch to 
>> /dev/null.
>> 
>> Ray K
>> 
>> On 31/05/2017 17:31, Damjan Marion (damarion) wrote:
>>>> 
>>>> On 31 May 2017, at 18:18, Kinsella, Ray <ray.kinse...@intel.com> wrote:
>>>> 
>>>> 
>>>> Ok - but that doesn't get us any closer to helping newbies use the sample 
>>>> plugin with 'make build' and 'make run', right? They still need to install 
>>>> vpp, then the sample-plugin - lots of hoops.
>>> 
>>> make run is not built for running out-of-tree plugins but this should work:
>>> 
>>> make bootstrap
>>> make pkg-deb
>>> dpkg -i build-root/*.deb
>>> cd src/examples/sample-plugin
>>> autoreconf -fis
>>> ./configure
>>> make
>>> sudo make install
>>> 
>>>> 
>>>> I disagree with a documentation heavy approach in principle, the wiki 
>>>> suggests, that it similarly goes 'out of sync' quiet quickly.
>>>> 
>>>> BTW - I wasn't advocating PLUGIN_DISABLED, I provided build-data configs 
>>>> in the same way we do enabling/disabling dpdk features.
>>> 
>>> ok
>>> 
>>>> 
>>>> The updated patch provides the separation between example/sample plugins 
>>>> and plugins that was asked for. It re-uses all the same autotools configs 
>>>> as src/plugins, so shouldn't go out of sync.
>>> 
>>> i still disagree, sample-plugin should be stand-alone autotools project, 
>>> you are removing configure.ac so for me it is no-go.
>>> 
>>>> 
>>>> Ray K
>>>> 
>>>> 
>>>> On 31/05/2017 17:05, Damjan Marion (damarion) wrote:
>>>>> 
>>>>> I do not agree with that proposal, I think we need to have one sample of 
>>>>> out-of-tree plugin as it is today.
>>>>> 
>>>>> Still, I agree that we need to help newbies and my proposal is that we 
>>>>> just document build process for out-of-tree plugins with simple README.md 
>>>>> inside src/examples/sample-plugin.
>>>>> 
>>>>> btw I consider use of PLUGIN_DISABLED (as default choice) as evil, as it 
>>>>> mens that plugin will go out of sync sooner or later.
>>>>> 
>>>>> 
>>>>>> On 31 May 2017, at 17:37, Kinsella, Ray <ray.kinse...@intel.com> wrote:
>>>>>> 
>>>>>> 
>>>>>> Ok, typically example/sample code is intended to be used by the newest 
>>>>>> of the new, newbies. So the sample plugin should work with 'make build' 
>>>>>> and 'make run' with the minimum of hoops to enable. Asking these users 
>>>>>> to install and configure VPP, then do the same for the sample plugin is 
>>>>>> too much. I think that this thread exists, is testament that the UX 
>>>>>> could be better - too many hoops.
>>>>>> 
>>>>>> So here I what I suggest to fix.
>>>>>> 
>>>>>> We create src/examples/plugins, put the sample plugin in here.
>>>>>> 
>>>>>> The examples plugins (src/examples/plugins) are in-tree plugins and 
>>>>>> build in exactly the same way as src/plugins from a build PoV 
>>>>>> (PLUGIN_ENABLED etc), with the exception that the examples plugins are 
>>>>>> disabled by default. They also live in the sample directory with no 
>>>>>> symlinks etc to src/plugin. We then provide a way to explicitly enable 
>>>>>> them with a build-data config.
>>>>>> 
>>>>>> I reworked the patch along these lines, do

Re: [vpp-dev] VPP/How To Build The Sample Plugin

2017-05-31 Thread Damjan Marion (damarion)
> 
> On 31 May 2017, at 18:18, Kinsella, Ray <ray.kinse...@intel.com> wrote:
> 
> 
> Ok - but that doesn't get us any closer to helping newbies use the sample 
> plugin with 'make build' and 'make run', right? They still need to install 
> vpp, then the sample-plugin - lots of hoops.

make run is not built for running out-of-tree plugins but this should work:

make bootstrap
make pkg-deb
dpkg -i build-root/*.deb
cd src/examples/sample-plugin
autoreconf -fis
./configure
make
sudo make install

> 
> I disagree with a documentation heavy approach in principle, the wiki 
> suggests, that it similarly goes 'out of sync' quiet quickly.
> 
> BTW - I wasn't advocating PLUGIN_DISABLED, I provided build-data configs in 
> the same way we do enabling/disabling dpdk features.

ok

> 
> The updated patch provides the separation between example/sample plugins and 
> plugins that was asked for. It re-uses all the same autotools configs as 
> src/plugins, so shouldn't go out of sync.

i still disagree, sample-plugin should be stand-alone autotools project, you 
are removing configure.ac so for me it is no-go.

> 
> Ray K
> 
> 
> On 31/05/2017 17:05, Damjan Marion (damarion) wrote:
>> 
>> I do not agree with that proposal, I think we need to have one sample of 
>> out-of-tree plugin as it is today.
>> 
>> Still, I agree that we need to help newbies and my proposal is that we just 
>> document build process for out-of-tree plugins with simple README.md inside 
>> src/examples/sample-plugin.
>> 
>> btw I consider use of PLUGIN_DISABLED (as default choice) as evil, as it 
>> mens that plugin will go out of sync sooner or later.
>> 
>> 
>>> On 31 May 2017, at 17:37, Kinsella, Ray <ray.kinse...@intel.com> wrote:
>>> 
>>> 
>>> Ok, typically example/sample code is intended to be used by the newest of 
>>> the new, newbies. So the sample plugin should work with 'make build' and 
>>> 'make run' with the minimum of hoops to enable. Asking these users to 
>>> install and configure VPP, then do the same for the sample plugin is too 
>>> much. I think that this thread exists, is testament that the UX could be 
>>> better - too many hoops.
>>> 
>>> So here I what I suggest to fix.
>>> 
>>> We create src/examples/plugins, put the sample plugin in here.
>>> 
>>> The examples plugins (src/examples/plugins) are in-tree plugins and build 
>>> in exactly the same way as src/plugins from a build PoV (PLUGIN_ENABLED 
>>> etc), with the exception that the examples plugins are disabled by default. 
>>> They also live in the sample directory with no symlinks etc to src/plugin. 
>>> We then provide a way to explicitly enable them with a build-data config.
>>> 
>>> I reworked the patch along these lines, does it make sense?
>>> 
>>> Ray K
>>> 
>>> On 31/05/2017 10:15, Damjan Marion (damarion) wrote:
>>>> 
>>>> The idea of sample plugin is to show people how to build out-of-tree 
>>>> plugin. As that plugin was broken several times due to changes we made I 
>>>> created special ebuild package which builds sample plugin as part of 
>>>> verify job to ensure that plugin will not be broken again due to changes 
>>>> in vpp.
>>>> 
>>>> Saying that, I strongly disagree that we move sample plugin into 
>>>> src/plugins, as that is place for in-tree plugins which actually do 
>>>> something useful.
>>>> If people want to create additional in-tree plugin, there is many samples 
>>>> already in src/plugins so I don't see an need for additional one.
>>>> 
>>>> So to continue discussion on this particular change, what do you think 
>>>> that it is broken?
>>>> 
>>>> For me sequence:
>>>> 
>>>> autoreconf -fis
>>>> ./configure
>>>> make
>>>> make install
>>>> 
>>>> Works perfectly fine. Off-course you need to have install vpp-dev package 
>>>> on your system...
>>>> 
>>>> 
>>>>> On 30 May 2017, at 13:30, Kinsella, Ray <ray.kinse...@intel.com> wrote:
>>>>> 
>>>>> The UX for the sample plugin is broken. Especially when you consider that 
>>>>> the people most likely to try it and use it, are those least familiar 
>>>>> with VPP.
>>>>> 
>>>>> I tried the use it a few months ago in training and found the UX similar 
>>>>> then. So I put together

Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 master

2017-05-24 Thread Damjan Marion (damarion)

I think i fixed the issue. New version is in gerrit. If you still see the crash 
please try to capture backtrace.

Thanks,

Damajn

> On 24 May 2017, at 16:29, Damjan Marion (damarion) <damar...@cisco.com> wrote:
> 
> 
> Any chance you can capture backtrace?
> 
> just "gdb --args vpp unix interactive"
> 
> Thanks,
> 
> Damjan
> 
>> On 24 May 2017, at 13:10, Michal Cmarada -X (mcmarada - PANTHEON 
>> TECHNOLOGIES at Cisco) <mcmar...@cisco.com> wrote:
>> 
>> Hi,
>> 
>> I tried your patch, I built rpms from it and then reinstalled vpp with those 
>> rpms. I ensured that the interface was not bound to kernel or dpdk. But I 
>> got Segmentation fault. See output:
>> 
>> [root@overcloud-novacompute-1 ~]# dpdk-devbind --status
>> 
>> Network devices using DPDK-compatible driver
>> 
>> 
>> 
>> Network devices using kernel driver
>> ===
>> :06:00.0 'VIC Ethernet NIC' if=enp6s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :09:00.0 'VIC Ethernet NIC' if=enp9s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :0a:00.0 'VIC Ethernet NIC' if=enp10s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :0f:00.0 'VIC Ethernet NIC' if=enp15s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :10:00.0 'VIC Ethernet NIC' if=enp16s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :11:00.0 'VIC Ethernet NIC' if=enp17s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> :12:00.0 'VIC Ethernet NIC' if=enp18s0 drv=enic 
>> unused=vfio-pci,uio_pci_generic
>> 
>> Other network devices
>> =
>> :07:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
>> :08:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
>> 
>> Crypto devices using DPDK-compatible driver
>> ===
>> 
>> 
>> Crypto devices using kernel driver
>> ==
>> 
>> 
>> Other crypto devices
>> 
>> 
>> 
>> 
>> [root@overcloud-novacompute-1 ~]# vpp unix interactive
>> vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
>> load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
>> load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development 
>> Kit (DPDK))
>> load_one_plugin:184: Loaded plugin: flowperpkt_plugin.so (Flow per Packet)
>> load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
>> load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
>> addressing for IPv6)
>> load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
>> load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
>> load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
>> load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid 
>> Deployment on IPv4 Infrastructure (RFC5969))
>> load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface 
>> (experimetal))
>> load_one_plugin:184: Loaded plugin: snat_plugin.so (Network Address 
>> Translation)
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/flowperpkt_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/snat_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
>> load_one_plugin:63: Loaded plugin: 
>> /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
>> vlib_pci_bind_to_uio: Skipping PCI device 0000:06:00.0 as host interface 
>> enp6s0 is up
>> Segmentation fault
>> 
>> Michal
>> 
>> -Original Message

Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 master

2017-05-24 Thread Damjan Marion (damarion)

Any chance you can capture backtrace?

 just "gdb --args vpp unix interactive"
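
 For example (a minimal sketch; the backtrace is taken once the crash drops you
 back at the gdb prompt):

 gdb --args vpp unix interactive
 (gdb) run
 (gdb) bt full    # after the SIGSEGV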

Thanks,

Damjan

> On 24 May 2017, at 13:10, Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES 
> at Cisco) <mcmar...@cisco.com> wrote:
> 
> Hi,
> 
> I tried your patch, I built rpms from it and then reinstalled vpp with those 
> rpms. I ensured that the interface was not bound to kernel or dpdk. But I got 
> Segmentation fault. See output:
> 
> [root@overcloud-novacompute-1 ~]# dpdk-devbind --status
> 
> Network devices using DPDK-compatible driver
> 
> 
> 
> Network devices using kernel driver
> ===
> :06:00.0 'VIC Ethernet NIC' if=enp6s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :09:00.0 'VIC Ethernet NIC' if=enp9s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :0a:00.0 'VIC Ethernet NIC' if=enp10s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :0f:00.0 'VIC Ethernet NIC' if=enp15s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :10:00.0 'VIC Ethernet NIC' if=enp16s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :11:00.0 'VIC Ethernet NIC' if=enp17s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :12:00.0 'VIC Ethernet NIC' if=enp18s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> 
> Other network devices
> =
> :07:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
> :08:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
> 
> Crypto devices using DPDK-compatible driver
> ===
> 
> 
> Crypto devices using kernel driver
> ==
> 
> 
> Other crypto devices
> 
> 
> 
> 
> [root@overcloud-novacompute-1 ~]# vpp unix interactive
> vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
> load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
> load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development 
> Kit (DPDK))
> load_one_plugin:184: Loaded plugin: flowperpkt_plugin.so (Flow per Packet)
> load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
> load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
> addressing for IPv6)
> load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
> load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
> load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
> on IPv4 Infrastructure (RFC5969))
> load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface 
> (experimetal))
> load_one_plugin:184: Loaded plugin: snat_plugin.so (Network Address 
> Translation)
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/flowperpkt_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/snat_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
> vlib_pci_bind_to_uio: Skipping PCI device 0000:06:00.0 as host interface 
> enp6s0 is up
> Segmentation fault
> 
> Michal
> 
> -Original Message-
> From: Damjan Marion (damarion) 
> Sent: Tuesday, May 23, 2017 6:39 PM
> To: Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco) 
> <mcmar...@cisco.com>
> Cc: Dave Barach (dbarach) <dbar...@cisco.com>; Marco Varlese 
> <marco.varl...@suse.com>; Kinsella, Ray <ray.kinse...@intel.com>; 
> vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 
> master
> 
> 
> Can you try following patch without manual bind:
> 
> https://gerrit.fd.io/r/#/c/6846
> 
> Thanks,
> 
> Damjan
> 
> 
>> On 23 May 2017, at 15:53, Michal Cm

Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 master

2017-05-23 Thread Damjan Marion (damarion)

Can you try following patch without manual bind:

https://gerrit.fd.io/r/#/c/6846

Thanks,

Damjan


> On 23 May 2017, at 15:53, Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES 
> at Cisco)  wrote:
> 
> Hi Dave,
> 
> The manual binding helped. I used uio_pci_generic and now VPP finally sees 
> them. Thanks.
> 
> Michal
> 
> -Original Message-
> From: Dave Barach (dbarach) 
> Sent: Tuesday, May 23, 2017 3:39 PM
> To: Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco) 
> ; Marco Varlese ; Kinsella, Ray 
> ; vpp-dev@lists.fd.io
> Subject: RE: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 
> master
> 
> Please attempt to bind the VIC device(s) manually - to uio_pci_generic - 
> using dpdk-devbind. 
> 
> Until / unless that works, there isn't a chance that vpp will drive the 
> devices. You may have better luck with the igb_uio kernel module, or not... 
> 
> Thanks… Dave
> 
> -Original Message-
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
> Behalf Of Michal Cmarada -X (mcmarada - PANTHEON TECHNOLOGIES at Cisco)
> Sent: Tuesday, May 23, 2017 9:13 AM
> To: Marco Varlese ; Kinsella, Ray 
> ; vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP has no interfaces after update from 1704 to 1707 
> master
> 
> Hi,
> 
> I meant that they are in DOWN state in "ip link list":
> [root@overcloud-novacompute-1 ~]# ip link list
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode 
> DEFAULT qlen 1
>link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: enp6s0:  mtu 1500 qdisc mq state UP mode 
> DEFAULT qlen 1000
>link/ether 00:25:b5:00:01:50 brd ff:ff:ff:ff:ff:ff
> 3: enp7s0:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
> qlen 1000
>link/ether 00:25:b5:00:01:4f brd ff:ff:ff:ff:ff:ff
> 4: enp8s0:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
> qlen 1000
>link/ether 00:25:b5:00:01:4e brd ff:ff:ff:ff:ff:ff
> 
> I also tried to unbind them like you suggested. then the status of 
> dpdk-nicbind is:
> [root@overcloud-novacompute-1 tools]# dpdk_nic_bind --status
> 
> Network devices using DPDK-compatible driver 
> 
> 
> 
> Network devices using kernel driver
> ===
> :06:00.0 'VIC Ethernet NIC' if=enp6s0 drv=enic 
> unused=vfio-pci,uio_pci_generic *Active*
> :09:00.0 'VIC Ethernet NIC' if=enp9s0 drv=enic 
> unused=vfio-pci,uio_pci_generic *Active*
> :0a:00.0 'VIC Ethernet NIC' if=enp10s0 drv=enic 
> unused=vfio-pci,uio_pci_generic *Active*
> :0f:00.0 'VIC Ethernet NIC' if=enp15s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :10:00.0 'VIC Ethernet NIC' if=enp16s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :11:00.0 'VIC Ethernet NIC' if=enp17s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> :12:00.0 'VIC Ethernet NIC' if=enp18s0 drv=enic 
> unused=vfio-pci,uio_pci_generic
> 
> Other network devices
> =
> :07:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
> :08:00.0 'VIC Ethernet NIC' unused=enic,vfio-pci,uio_pci_generic
> 
> [root@overcloud-novacompute-1 tools]# vpp unix interactive
> vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
> load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
> load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development 
> Kit (DPDK))
> load_one_plugin:184: Loaded plugin: flowperpkt_plugin.so (Flow per Packet)
> load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
> load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
> addressing for IPv6)
> load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
> load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
> load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
> on IPv4 Infrastructure (RFC5969))
> load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface 
> (experimetal))
> load_one_plugin:184: Loaded plugin: snat_plugin.so (Network Address 
> Translation)
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/flowperpkt_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
> load_one_plugin:63: Loaded plugin: 
> 
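
For reference, the manual bind that Dave suggested and that worked for Michal above looks roughly like the sketch below. It is only an illustration: the PCI address and the uio_pci_generic driver are taken from this thread, the bind tool name (dpdk_nic_bind vs. dpdk-devbind.py) depends on the DPDK release, and the host interface must be down before VPP/DPDK will touch the device.

# load the generic UIO driver and release the device from the kernel driver
modprobe uio_pci_generic
ip link set enp6s0 down                    # devices whose host interface is up are skipped
dpdk_nic_bind --bind=uio_pci_generic 0000:06:00.0
dpdk_nic_bind --status                     # device should now be listed under the DPDK-compatible driver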

Re: [vpp-dev] make in debian 8

2017-05-23 Thread Damjan Marion (damarion)

> On 23 May 2017, at 16:25, emma sdi  wrote:
> 
> Dear VPP folks,
> 
> I built vpp on debian 8, with a few changes in the makefile.
> Do you want this kind of commit?!

sure, submit to gerrit for review….

Thanks
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [csit-dev] CI Tests Failing

2017-05-22 Thread Damjan Marion (damarion)

I just disabled “make test” on Centos. We cannot continue like this. We can
easily put it back after the problem is fixed.



On 22 May 2017, at 14:28, Ed Warnicke 
> wrote:

Do you have any insight on this?

Ed

On Mon, May 22, 2017 at 1:29 AM, Klement Sekera -X (ksekera - PANTHEON 
TECHNOLOGIES at Cisco) > wrote:

Hi,

the centos python crash is known, but we're unsure about the root cause.
Building newer python from source on centos vm makes the crashes go away
so we're assuming that the (older) python itself might be the culprit,
since we haven't seen these on ubuntu at all.
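
For anyone who wants to try the same experiment, a rough sketch of building a newer python from source on a CentOS 7 VM is below; the 2.7.13 version and the /opt prefix are only illustrative, since the thread does not say which version was used.

sudo yum install -y gcc make openssl-devel zlib-devel
curl -O https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz
tar xf Python-2.7.13.tgz && cd Python-2.7.13
./configure --prefix=/opt/python27
make -j"$(nproc)"
sudo make altinstall     # installs python2.7 without replacing the system python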

Regarding the second crash - I'm not sure whether this is 'make test'
fault or not.

If somebody could translate from java/hudson/... to english and/or
provide logs, then I could take a look..

Thanks,
Klement

Quoting Kinsella, Ray (2017-05-22 10:21:21)
> Hi folks,
>
> Not sure if it's just me, but some CI tests have suddenly start failing
> for me. Is it just me or a wider problem?
>
> Ray K
>
>
> CENTOS
> https://jenkins.fd.io/job/vpp-verify-master-centos7/5568/
>
> 19:52:10 IP Multicast Signabash: line 1: 21723 Segmentation fault
> (core dumped) python run_tests.py -d
> /w/workspace/vpp-verify-master-centos7/test
> 19:54:03 make[2]: *** [test] Error 139
> 19:54:03 make[2]: Leaving directory
> `/w/workspace/vpp-verify-master-centos7/test'
> 19:54:03 make[1]: *** [test] Error 2
> 19:54:03 make[1]: Leaving directory `/w/workspace/vpp-verify-master-centos7'
> 19:54:03 make: *** [verify] Error 2
> 19:54:04 Build step 'Execute shell' marked build as failure
>
>
> UBUNTU
> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/5573/console
>
> 19:37:57 IP NULL route
>   OK
> 19:37:57
> FATAL:
> command execution failed
> 19:48:45 java.io.EOFException
> 19:48:45at
> java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2638)
> 19:48:45at
> java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3113)
> 19:48:45at
> java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:853)
> 19:48:45at 
> java.io.ObjectInputStream.(ObjectInputStream.java:349)
> 19:48:45at
> hudson.remoting.ObjectInputStreamEx.(ObjectInputStreamEx.java:48)
> 19:48:45at
> hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
> 19:48:45at
> hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:59)
> 19:48:45 Caused: java.io.IOException: Unexpected termination of the channel
> 19:48:45at
> hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:73)
> 19:48:45 Caused: java.io.IOException: Backing channel
> 'ubuntu1604-basebuild-4c-4g-5113' is disconnected.
> 19:48:45at
> hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:192)
> 19:48:45at
> hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:257)
> 19:48:45at com.sun.proxy.$Proxy87.isAlive(Unknown Source)
> 19:48:45at
> hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1043)
> 19:48:45at
> hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1035)
> 19:48:45at
> hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
> 19:48:45at
> hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
> 19:48:45at
> hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
> 19:48:45at
> hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
> 19:48:45at
> hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779)
> 19:48:45at hudson.model.Build$BuildExecution.build(Build.java:206)
> 19:48:45at hudson.model.Build$BuildExecution.doRun(Build.java:163)
> 19:48:45at
> hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
> 19:48:45at hudson.model.Run.execute(Run.java:1728)
> 19:48:45at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
> 19:48:45at
> hudson.model.ResourceController.execute(ResourceController.java:98)
> 19:48:45at hudson.model.Executor.run(Executor.java:405)
> 19:48:45 Build step 'Execute shell' marked build as failure
> 19:48:45 FATAL: channel is already closed
> 19:48:45 java.io.EOFException
> 19:48:45at
> java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2638)
> 19:48:45at
> java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3113)
> 19:48:45at
> java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:853)
> 19:48:45at 
> java.io.ObjectInputStream.(ObjectInputStream.java:349)
> 19:48:45at
> 

Re: [vpp-dev] CSIT borked on master

2017-05-15 Thread Damjan Marion (damarion)

“recheck" will not be enough. All patches must be rebased so they pick up my 
fix...
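
A rough sketch of that rebase, assuming the usual fd.io gerrit workflow with git-review installed (the local branch name is a placeholder):

git fetch origin
git checkout my-change          # the branch holding your open change
git rebase origin/master
git review                      # pushes the rebased patch set back to gerrit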

On 15 May 2017, at 13:38, Neale Ranns (nranns) 
<nra...@cisco.com<mailto:nra...@cisco.com>> wrote:


Hi Marco,

I’ll restart the jobs once we’ve got them passing again.

For your reference, you can do it manually by typing ‘recheck’ as a code review 
comment in gerrit.

regards,
neale

From: Marco Varlese <marco.varl...@suse.com<mailto:marco.varl...@suse.com>>
Date: Monday, 15 May 2017 at 12:17
To: "Damjan Marion (damarion)" <damar...@cisco.com<mailto:damar...@cisco.com>>, 
"Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: "csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>" 
<csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>>, vpp-dev 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] CSIT borked on master

Hi Damjan,

Once you're patch is merged, is it possible to kick off the builds which 
currently are all marked as Verified-1 so to have a clean state on them?

If I could do that manually I would do it at least for mine.


Thanks,
Marco

On Mon, 2017-05-15 at 10:54 +, Damjan Marion (damarion) wrote:

This issue is caused by a bug in DPDK 17.05, introduced by the following commit:

http://dpdk.org/browse/dpdk/commit/?id=ee1843b

It happens only with old QEMU emulation (I repro it with “pc-1.0”) which VIRL 
uses.
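
For anyone trying to reproduce outside VIRL, a rough qemu invocation with the old machine model looks like the sketch below; the image name and tap wiring are placeholders.

qemu-system-x86_64 -machine pc-1.0 -m 2048 -nographic \
  -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
  -device virtio-net-pci,netdev=net0 \
  guest-disk.img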

Fix (revert) is in gerrit:

https://gerrit.fd.io/r/#/c/6690/

Regards,

Damjan


On 13 May 2017, at 20:34, Neale Ranns (nranns) 
<nra...@cisco.com<mailto:nra...@cisco.com>> wrote:


Hi Chris,

Yes, every CSIT job on master is borked.
I think I’ve narrowed this down to all VAT sw_interface_dump returning 
bogus/garbage MAC addresses. No Idea why, can’t repro yet. I’ve a speculative 
DPDK 17.05 bump backout job in the queue, for purposes of elimination.

Regards,
/neale



From: "Luke, Chris" <chris_l...@comcast.com<mailto:chris_l...@comcast.com>>
Date: Saturday, 13 May 2017 at 19:04
To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>, 
"yug...@telincn.com<mailto:yug...@telincn.com>" 
<yug...@telincn.com<mailto:yug...@telincn.com>>, vpp-dev 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: RE: [vpp-dev] Segmentation fault in recursivly lookuping fib entry.

CSIT seems to be barfing on every job at the moment :(

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Neale Ranns (nranns)
Sent: Saturday, May 13, 2017 11:20
To: yug...@telincn.com<mailto:yug...@telincn.com>; vpp-dev 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] Segmentation fault in recursivly lookuping fib entry.


https://gerrit.fd.io/r/#/c/6674/

/neale

From: "yug...@telincn.com<mailto:yug...@telincn.com>" 
<yug...@telincn.com<mailto:yug...@telincn.com>>
Date: Saturday, 13 May 2017 at 14:24
To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>, vpp-dev 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: Re: [vpp-dev] Segmentation fault in recursivly lookuping fib entry.

Hi neale,
Could you leave me a msg then?

Thanks,
Ewan


yug...@telincn.com<mailto:yug...@telincn.com>

From: Neale Ranns (nranns)<mailto:nra...@cisco.com>
Date: 2017-05-13 20:33
To: yug...@telincn.com<mailto:yug...@telincn.com>; 
vpp-dev<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Segmentation fault in recursivly lookuping fib entry.
Hi Ewan,

That’s a bug. I’ll fix it ASAP.

Thanks,
neale

From: <vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io>> on 
behalf of "yug...@telincn.com<mailto:yug...@telincn.com>" 
<yug...@telincn.com<mailto:yug...@telincn.com>>
Date: Saturday, 13 May 2017 at 03:24
To: vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: [vpp-dev] Segmentation fault in recursivly lookuping fib entry.

Hi, all
Below are my main configs, others are default.
When I enter this command, "vppctl ip route 0.0.0.0/0 via 10.10.40.1", to add
a default route, vpp crashes; it looks like the function
fib_entry_get_resolving_interface calls itself recursively until vpp crashes.
Is there something wrong?




config  info
root@ubuntu:/usr/src/1704/VBRASV100R001/vpp1704/build-root# vppctl show int addr
GigabitEthernet2/6/0 (up):
  192.168.60.1/24
GigabitEthernet2/7/0 (up):
  10.10.55.51/24
host-vGE2_6_0 (up):
host-vGE2_7_0 (up):
local0 (dn):




root@ubuntu:/usr/src/1704/VBRASV100R001/vpp1704/build-root# vppctl show ip fib
ipv4-VRF:0, fib_index 0, flow hash: src dst sport dport proto
0.0.0.0/0
  unic

Re: [vpp-dev] CSIT borked on master

2017-05-15 Thread Damjan Marion (damarion)

This issue is caused by a bug in DPDK 17.05, introduced by the following commit:

http://dpdk.org/browse/dpdk/commit/?id=ee1843b

It happens only with old QEMU emulation (I repro it with “pc-1.0”) which VIRL 
uses.

Fix (revert) is in gerrit:

https://gerrit.fd.io/r/#/c/6690/

Regards,

Damjan


On 13 May 2017, at 20:34, Neale Ranns (nranns) 
> wrote:


Hi Chris,

Yes, every CSIT job on master is borked.
I think I’ve narrowed this down to all VAT sw_interface_dump returning 
bogus/garbage MAC addresses. No Idea why, can’t repro yet. I’ve a speculative 
DPDK 17.05 bump backout job in the queue, for purposes of elimination.

Regards,
/neale



From: "Luke, Chris" >
Date: Saturday, 13 May 2017 at 19:04
To: "Neale Ranns (nranns)" >, 
"yug...@telincn.com" 
>, vpp-dev 
>
Subject: RE: [vpp-dev] Segmentation fault in recursivly lookuping fib entry.

CSIT seems to be barfing on every job at the moment :(

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Neale Ranns (nranns)
Sent: Saturday, May 13, 2017 11:20
To: yug...@telincn.com; vpp-dev 
>
Subject: Re: [vpp-dev] Segmentation fault in recursivly lookuping fib entry.


https://gerrit.fd.io/r/#/c/6674/

/neale

From: "yug...@telincn.com" 
>
Date: Saturday, 13 May 2017 at 14:24
To: "Neale Ranns (nranns)" >, vpp-dev 
>
Subject: Re: Re: [vpp-dev] Segmentation fault in recursivly lookuping fib entry.

Hi neale,
Could you leave me a msg then?

Thanks,
Ewan


yug...@telincn.com

From: Neale Ranns (nranns)
Date: 2017-05-13 20:33
To: yug...@telincn.com; 
vpp-dev
Subject: Re: [vpp-dev] Segmentation fault in recursivly lookuping fib entry.
Hi Ewan,

That’s a bug. I’ll fix it ASAP.

Thanks,
neale

From: > on 
behalf of "yug...@telincn.com" 
>
Date: Saturday, 13 May 2017 at 03:24
To: vpp-dev >
Subject: [vpp-dev] Segmentation fault in recursivly lookuping fib entry.

Hi, all
Below are my main configs, others are default.
When I enter this command, "vppctl ip route 0.0.0.0/0 via 10.10.40.1", to add
a default route, vpp crashes; it looks like the function
fib_entry_get_resolving_interface calls itself recursively until vpp crashes.
Is there something wrong?



config  info
root@ubuntu:/usr/src/1704/VBRASV100R001/vpp1704/build-root# vppctl show int addr
GigabitEthernet2/6/0 (up):
  192.168.60.1/24
GigabitEthernet2/7/0 (up):
  10.10.55.51/24
host-vGE2_6_0 (up):
host-vGE2_7_0 (up):
local0 (dn):



root@ubuntu:/usr/src/1704/VBRASV100R001/vpp1704/build-root# vppctl show ip fib
ipv4-VRF:0, fib_index 0, flow hash: src dst sport dport proto
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:0 buckets:1 uRPF:0 to:[142:12002]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:1 buckets:1 uRPF:1 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.10.55.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:10 buckets:1 uRPF:9 to:[0:0]]
[0] [@4]: ipv4-glean: GigabitEthernet2/7/0
10.10.55.51/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:11 buckets:1 uRPF:10 to:[0:0]]
[0] [@2]: dpo-receive: 10.10.55.51 on GigabitEthernet2/7/0
192.168.60.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:8 buckets:1 uRPF:7 to:[0:0]]
[0] [@4]: ipv4-glean: GigabitEthernet2/6/0
192.168.60.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:9 buckets:1 uRPF:8 to:[60:3600]]
[0] [@2]: dpo-receive: 192.168.60.1 on GigabitEthernet2/6/0
192.168.60.30/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:12 buckets:1 uRPF:11 to:[60:3600]]
[0] [@5]: ipv4 via 192.168.60.30 GigabitEthernet2/6/0: 
f44d3016eac1000c2904f74e0800
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:3 buckets:1 uRPF:3 to:[0:0]]
[0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:2 buckets:1 uRPF:2 to:[0:0]]
[0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:4 buckets:1 uRPF:4 to:[0:0]]
[0] [@0]: dpo-drop ip4

root@ubuntu:/usr/src/1704/VBRASV100R001/vpp1704/build-root#






Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.

Re: [vpp-dev] VPP kni related query

2017-05-15 Thread Damjan Marion (damarion)
Please avoid unicast emails. Adding vpp-dev@lists.fd.io
> On 15 May 2017, at 10:33, bindiya Kurle  wrote:
> 
> 
> Hi,
> 
> I was going through the KNI code in VPP. As part of the change set below, KNI
> support was removed from VPP. Any specific reason to remove it?

Code was incomplete and outdated. Nobody was maintaining it.

> In our program, applications listen on standard sockets. Hence we need this so
> that the application code will not change. Was this use case considered, or is
> there an alternative approach to this?

Have you considered using the Linux packet interface (af_packet)? The debug CLI
command is “create host-interface name <ifname>”.
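
A minimal sketch of that approach, assuming a veth pair (vpp0/vpp1 are placeholder names) with the kernel application on one end and VPP on the other; the address is illustrative:

ip link add name vpp0 type veth peer name vpp1
ip link set dev vpp0 up
ip link set dev vpp1 up
vppctl create host-interface name vpp1
vppctl set interface state host-vpp1 up
vppctl set interface ip address host-vpp1 10.10.1.1/24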

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP and SR-IOV(?): No packets reaching VPP interfaces

2017-05-15 Thread Damjan Marion (damarion)
Yes, they are. That’s why it starts working.

On 15 May 2017, at 08:58, Tomas Brännström 
<tomas.a.brannst...@tieto.com<mailto:tomas.a.brannst...@tieto.com>> wrote:

Would the MTU settings be related to the max_rx_pktlen? Since as I mentioned in 
the other mail when I lowered it to 1500 as per Avinash's advice the packets 
are no longer dropped.

/T

On 12 May 2017 at 20:05, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:



On 12 May 2017, at 17:34, Tomas Brännström 
<tomas.a.brannst...@tieto.com<mailto:tomas.a.brannst...@tieto.com>> wrote:

Hm, OK.

I did a search for the HW CRC message earlier and it sounded like the message 
was just for information and had no real functional impact, but maybe it does.. 
(http://dpdk.org/dev/patchwork/patch/12080/)

Looks like on ixgbevf it is informational but on i40evf config fails.



Would there be a workaround outside  VPP for 2. somehow?

I’m still trying to understand what’s wrong. I have a simple dpdk application
which just dumps rx packets, and simply increasing .max_rx_pkt_len makes it stop
working.
At the same time rte_eth_dev_info_get() says:

(gdb) p dev_info
$1 = {
 pci_dev = 0x55873740,
 driver_name = 0x7fffb4c51b67 "net_ixgbe_vf",
 if_index = 0,
 min_rx_bufsize = 1024,
 max_rx_pktlen = 9728,

So max_rx_pktlen is even bigger than what I set in .max_rx_pkt_len….


/Tomas

On 12 May 2017 at 16:30, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:

There are 2 problems:

1. HW CRC strip needs to be enabled for VFs; that’s why DPDK is failing to init
the device.
2. VFs are dropping packets when .max_rx_pkt_len is set to 9216.

Problem 1 is easily fixable by changing .hw_strip_crc to 1 in
src/plugins/dpdk/device/init.c.

Problem 2 seems to be outside of VPP’s control, but it can be worked around by
setting .max_rx_pkt_len to 1518. The consequence of doing this is that we will
(likely) lose jumbo frame support on VFs.

I’m going to submit a patch which fixes both issues soon (it actually works
around 2.); I need to play a bit more with 2. first...



On 12 May 2017, at 12:08, Tomas Brännström 
<tomas.a.brannst...@tieto.com<mailto:tomas.a.brannst...@tieto.com>> wrote:

Unfortunately my MTU seems to be at 1500 already.

I did an upgrade to release 1704 and now none of the interfaces are discovered 
anymore. But it seems suspicious since there are basically no log printouts at 
startup either, below are 1701 vs 1704 for comparison. This isn't exclusive
for this "SR-IOV" machine either I think, when running later VPP in for example 
virtual box I get the same problems, so I guess there's something additional 
that must be done that's maybe not documented on the wiki yet.

17.01:
--
vlib_plugin_early_init:213: plugin path /usr/lib/vpp_plugins
vpp[5066]: vlib_pci_bind_to_uio: Skipping PCI device :00:03.0 as host 
interface eth0 is up
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable 
clock cycles !
EAL: PCI device :00:03.0 on NUMA socket -1
EAL:   Device is blacklisted, not initializing
EAL: PCI device :00:06.0 on NUMA socket -1
EAL:   probe driver: 8086:10ed net_ixgbe_vf
EAL: PCI device :00:07.0 on NUMA socket -1
EAL:   probe driver: 8086:10ed net_ixgbe_vf
DPDK physical memory layout:
Segment 0: phys:0x5cc0, len:2097152, virt:0x7f7e0b80, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x5d00, len:266338304, virt:0x7f7db160, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0
PMD: ixgbevf_dev_configure(): VF can't disable HW CRC Strip
PMD: ixgbevf_dev_configure(): VF can't disable HW CRC Strip

17.04:
--
vlib_plugin_early_init:360: plugin path /usr/lib/vpp_plugins

/Tomas

On 12 May 2017 at 11:49, Gonsalves, Avinash (Nokia - IN/Bangalore) 
<avinash.gonsal...@nokia.com<mailto:avinash.gonsal...@nokia.com>> wrote:

I faced a similar issue with SR-IOV, and for some reason setting the MTU size 
to 1500 on the interface helped with ARP resolution.



Thanks,
Avinash







Thanks. I can try to use a later VPP version. A thing to note is that when

we did try to use the master release before, VPP failed to discover

interfaces, even when they were whitelisted. Not sure if something has

changed in the way VPP discover interfaces in later versions. I will try

with 1704 though.



Ah OK sorry, should have realized what PF was in this context. Not sure of

all the info that might be needed but here's what I could think of:



root@node-4:~# lspci -t -v

[...]

 +-03.0-[0b-0c]--+-00.0  Intel Corporation 82599ES 10-Gigabit

SFI/SFP+ Network Connection

 

Re: [vpp-dev] VPP and SR-IOV(?): No packets reaching VPP interfaces

2017-05-12 Thread Damjan Marion (damarion)



On 12 May 2017, at 17:34, Tomas Brännström 
<tomas.a.brannst...@tieto.com<mailto:tomas.a.brannst...@tieto.com>> wrote:

Hm, OK.

I did a search for the HW CRC message earlier and it sounded like the message 
was just for information and had no real functional impact, but maybe it does.. 
(http://dpdk.org/dev/patchwork/patch/12080/)

Looks like on ixgbevf it is informational but on i40evf config fails.



Would there be a workaround outside  VPP for 2. somehow?

I’m still trying to understand what’s wrong. I have a simple dpdk application
which just dumps rx packets, and simply increasing .max_rx_pkt_len makes it stop
working.
At the same time rte_eth_dev_info_get() says:

(gdb) p dev_info
$1 = {
 pci_dev = 0x55873740,
 driver_name = 0x7fffb4c51b67 "net_ixgbe_vf",
 if_index = 0,
 min_rx_bufsize = 1024,
 max_rx_pktlen = 9728,

So max_rx_pktlen is even bigger than what I set in .max_rx_pkt_len….


/Tomas

On 12 May 2017 at 16:30, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:

There are 2 problems:

1. HW CRC strip needs to be enabled for VFs; that’s why DPDK is failing to init
the device.
2. VFs are dropping packets when .max_rx_pkt_len is set to 9216.

Problem 1 is easily fixable by changing .hw_strip_crc to 1 in
src/plugins/dpdk/device/init.c.

Problem 2 seems to be outside of VPP’s control, but it can be worked around by
setting .max_rx_pkt_len to 1518. The consequence of doing this is that we will
(likely) lose jumbo frame support on VFs.

I’m going to submit a patch which fixes both issues soon (it actually works
around 2.); I need to play a bit more with 2. first...



On 12 May 2017, at 12:08, Tomas Brännström 
<tomas.a.brannst...@tieto.com<mailto:tomas.a.brannst...@tieto.com>> wrote:

Unfortunately my MTU seems to be at 1500 already.

I did an upgrade to release 1704 and now none of the interfaces are discovered 
anymore. But it seems suspicious since there are basically no log printouts at 
startup either, below are 1701 vs 1704 for comparison. This isn't exclusive
for this "SR-IOV" machine either I think, when running later VPP in for example 
virtual box I get the same problems, so I guess there's something additional 
that must be done that's maybe not documented on the wiki yet.

17.01:
--
vlib_plugin_early_init:213: plugin path /usr/lib/vpp_plugins
vpp[5066]: vlib_pci_bind_to_uio: Skipping PCI device :00:03.0 as host 
interface eth0 is up
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable 
clock cycles !
EAL: PCI device :00:03.0 on NUMA socket -1
EAL:   Device is blacklisted, not initializing
EAL: PCI device :00:06.0 on NUMA socket -1
EAL:   probe driver: 8086:10ed net_ixgbe_vf
EAL: PCI device :00:07.0 on NUMA socket -1
EAL:   probe driver: 8086:10ed net_ixgbe_vf
DPDK physical memory layout:
Segment 0: phys:0x5cc0, len:2097152, virt:0x7f7e0b80, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x5d00, len:266338304, virt:0x7f7db160, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0
PMD: ixgbevf_dev_configure(): VF can't disable HW CRC Strip
PMD: ixgbevf_dev_configure(): VF can't disable HW CRC Strip

17.04:
--
vlib_plugin_early_init:360: plugin path /usr/lib/vpp_plugins

/Tomas

On 12 May 2017 at 11:49, Gonsalves, Avinash (Nokia - IN/Bangalore) 
<avinash.gonsal...@nokia.com<mailto:avinash.gonsal...@nokia.com>> wrote:

I faced a similar issue with SR-IOV, and for some reason setting the MTU size 
to 1500 on the interface helped with ARP resolution.



Thanks,
Avinash







Thanks. I can try to use a later VPP version. A thing to note is that when

we did try to use the master release before, VPP failed to discover

interfaces, even when they were whitelisted. Not sure if something has

changed in the way VPP discover interfaces in later versions. I will try

with 1704 though.



Ah OK sorry, should have realized what PF was in this context. Not sure of

all the info that might be needed but here's what I could think of:



root@node-4:~# lspci -t -v

[...]

 +-03.0-[0b-0c]--+-00.0  Intel Corporation 82599ES 10-Gigabit

SFI/SFP+ Network Connection

 |   +-00.1  Intel Corporation 82599ES 10-Gigabit

SFI/SFP+ Network Connection

 |   +-10.0  Intel Corporation 82599 Ethernet

Controller Virtual Function

 |   +-10.1  Intel Corporation 82599 Ethernet

Controller Virtual Function

 [...]



root@node-4:~# lshw -class 
network -businfo

Bus info  Device  Class  Description

==

Re: [vpp-dev] VPP and SR-IOV(?): No packets reaching VPP interfaces

2017-05-12 Thread Damjan Marion (damarion)

See my another post sent few mins ago to this list…

On 12 May 2017, at 11:10, Tomas Brännström 
<tomas.a.brannst...@tieto.com<mailto:tomas.a.brannst...@tieto.com>> wrote:

Thanks. I can try to use a later VPP version. A thing to note is that when we 
did try to use the master release before, VPP failed to discover interfaces, 
even when they were whitelisted. Not sure if something has changed in the way 
VPP discover interfaces in later versions. I will try with 1704 though.

Ah OK sorry, should have realized what PF was in this context. Not sure of all 
the info that might be needed but here's what I could think of:

root@node-4:~# lspci -t -v
[...]
 +-03.0-[0b-0c]--+-00.0  Intel Corporation 82599ES 10-Gigabit 
SFI/SFP+ Network Connection
 |   +-00.1  Intel Corporation 82599ES 10-Gigabit 
SFI/SFP+ Network Connection
 |   +-10.0  Intel Corporation 82599 Ethernet 
Controller Virtual Function
 |   +-10.1  Intel Corporation 82599 Ethernet 
Controller Virtual Function
 [...]

root@node-4:~# lshw -class network -businfo
Bus info  Device  Class  Description

pci@:0b:00.0  enp11s0f0   network    82599ES 10-Gigabit SFI/SFP+ Network Connection
pci@:0b:00.1  enp11s0f1   network    82599ES 10-Gigabit SFI/SFP+ Network Connection

root@node-4:~# cat /sys/class/net/enp11s0f0/device/sriov_totalvfs
63

root@node-4:~# cat /sys/class/net/enp11s0f0/device/sriov_numvfs
16

root@node-4:~# ethtool -i enp11s0f0
driver: ixgbe
version: 3.15.1-k
firmware-version: 0x61c10001
bus-info: :0b:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no

/Tomas
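
For completeness, a sketch of how the 16 VFs shown above are typically (re)created on the PF; the interface name comes from this thread and the commands need root:

echo 0  > /sys/class/net/enp11s0f0/device/sriov_numvfs   # must drop to 0 before changing the count
echo 16 > /sys/class/net/enp11s0f0/device/sriov_numvfs
lspci | grep -i "Virtual Function"                       # the new VFs should show up here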

On 12 May 2017 at 10:44, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:

On 12 May 2017, at 08:01, Tomas Brännström 
<tomas.a.brannst...@tieto.com<mailto:tomas.a.brannst...@tieto.com>> wrote:

(I forgot to mention before, this is running with VPP installed from binaries 
with release .stable.1701)

I strongly suggest that you use 17.04 release at least.


With PF do you mean packet filter? I don't think we have any such 
configuration. If there is anything else I should provide then please tell :)

PF = SR-IOV Physical Function


I decided to try to attach to the VPP process with gdb and I actually get a 
crash when trying to do "ip probe":

vpp# ip probe 10.0.1.1 TenGigabitEthernet0/6/0
exec error: Misc

Program received signal SIGSEGV, Segmentation fault.
ip4_probe_neighbor (vm=vm@entry=0x7f681533e720 , 
dst=dst@entry=0x7f67d345cc50, sw_if_index=sw_if_index@entry=1)
at 
/w/workspace/vpp-merge-1701-ubuntu1404/build-data/../vnet/vnet/ip/ip4_forward.c:2223
2223
/w/workspace/vpp-merge-1701-ubuntu1404/build-data/../vnet/vnet/ip/ip4_forward.c:
 No such file or directory.
(gdb) bt
#0  ip4_probe_neighbor (vm=vm@entry=0x7f681533e720 , 
dst=dst@entry=0x7f67d345cc50, sw_if_index=sw_if_index@entry=1)
at 
/w/workspace/vpp-merge-1701-ubuntu1404/build-data/../vnet/vnet/ip/ip4_forward.c:2223

Whether this is related or not I'm not sure because yesterday I could do the 
probe but got "Resolution failed". I've attached the stack trace at any rate.

/Tomas
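
A sketch of attaching to the running process to collect per-thread backtraces (assuming the vpp-dbg package is installed so symbols resolve):

gdb -p "$(pidof vpp)" \
    -ex 'set pagination off' \
    -ex 'thread apply all bt' \
    -ex detach -ex quit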

On 11 May 2017 at 20:25, Damjan Marion (damarion) 
<damar...@cisco.com<mailto:damar...@cisco.com>> wrote:
Dear Tomas,

Can you please share your PF configuration so I can try to reproduce?

Thanks,

Damjan

On 11 May 2017, at 17:07, Tomas Brännström 
<tomas.a.brannst...@tieto.com<mailto:tomas.a.brannst...@tieto.com>> wrote:

Hello
Since the last mail I sent I've managed to get our test client working and VPP 
running in a KVM VM.

We are still facing some problems though. We have two servers, one where the
virtual machines are running and one we use as the openstack controller. They 
are connected to each other with a 10G NIC. We have SR-IOV configured for the 
10G NIC.

So VPP is installed in a VM, and all interfaces work OK; they can be reached
from outside the VM etc. Following the basic examples on the wiki, we configure 
VPP to take over the interfaces:

vpp# set int ip address TenGigabitEthernet0/6/0 10.0.1.101/24
vpp# set int ip address TenGigabitEthernet0/7/0 10.0.2.101/24
vpp# set int state TenGigabitEthernet0/6/0 up
vpp# set int state TenGigabitEthernet0/7/0 up

But when trying to ping for example the physical NIC on the other server, we 
get no reply:

vpp# ip probe 10.0.1.1 TenGigabitEthernet0/6/0
ip probe-neighbor: Resolution failed for 10.0.1.1

If I do a tcpdump on the physical interface when trying to ping, I see ARP 
packets being sent so -something- is happening, but it seems that packets are 
not correctly arriving to VPP... I can't ping from the 

Re: [vpp-dev] VPP and SR-IOV(?): No packets reaching VPP interfaces

2017-05-12 Thread Damjan Marion (damarion)

There are 2 problems:

1. HW CRC strip needs to be enabled for VFs; that’s why DPDK is failing to init
the device.
2. VFs are dropping packets when .max_rx_pkt_len is set to 9216.

Problem 1 is easily fixable by changing .hw_strip_crc to 1 in
src/plugins/dpdk/device/init.c.

Problem 2 seems to be outside of VPP’s control, but it can be worked around by
setting .max_rx_pkt_len to 1518. The consequence of doing this is that we will
(likely) lose jumbo frame support on VFs.

I’m going to submit a patch which fixes both issues soon (it actually works
around 2.); I need to play a bit more with 2. first...
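
A sketch only, for anyone who wants to try the workaround before the patch lands; it assumes a 17.04-era tree and that the field values quoted above appear literally in the source, so check with grep first.

grep -n 'hw_strip_crc\|max_rx_pkt_len' src/plugins/dpdk/device/init.c
sed -i -e 's/\.hw_strip_crc *= *0/.hw_strip_crc = 1/' \
       -e 's/\.max_rx_pkt_len *= *9216/.max_rx_pkt_len = 1518/' \
       src/plugins/dpdk/device/init.c
make build          # or make pkg-deb to rebuild the packages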



On 12 May 2017, at 12:08, Tomas Brännström 
<tomas.a.brannst...@tieto.com<mailto:tomas.a.brannst...@tieto.com>> wrote:

Unfortunately my MTU seems to be at 1500 already.

I did an upgrade to release 1704 and now none of the interfaces are discovered 
anymore. But it seems suspicious since there are basically no log printouts at 
startup either, below are 1701 vs 1704 for comparision. This isn't exclusive 
for this "SR-IOV" machine either I think, when running later VPP in for example 
virtual box I get the same problems, so I guess there's something additional 
that must be done that's maybe not documented on the wiki yet.

17.01:
--
vlib_plugin_early_init:213: plugin path /usr/lib/vpp_plugins
vpp[5066]: vlib_pci_bind_to_uio: Skipping PCI device :00:03.0 as host 
interface eth0 is up
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable 
clock cycles !
EAL: PCI device :00:03.0 on NUMA socket -1
EAL:   Device is blacklisted, not initializing
EAL: PCI device :00:06.0 on NUMA socket -1
EAL:   probe driver: 8086:10ed net_ixgbe_vf
EAL: PCI device :00:07.0 on NUMA socket -1
EAL:   probe driver: 8086:10ed net_ixgbe_vf
DPDK physical memory layout:
Segment 0: phys:0x5cc0, len:2097152, virt:0x7f7e0b80, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x5d00, len:266338304, virt:0x7f7db160, socket_id:0, 
hugepage_sz:2097152, nchannel:0, nrank:0
PMD: ixgbevf_dev_configure(): VF can't disable HW CRC Strip
PMD: ixgbevf_dev_configure(): VF can't disable HW CRC Strip

17.04:
--
vlib_plugin_early_init:360: plugin path /usr/lib/vpp_plugins

/Tomas

On 12 May 2017 at 11:49, Gonsalves, Avinash (Nokia - IN/Bangalore) 
<avinash.gonsal...@nokia.com<mailto:avinash.gonsal...@nokia.com>> wrote:

I faced a similar issue with SR-IOV, and for some reason setting the MTU size 
to 1500 on the interface helped with ARP resolution.



Thanks,
Avinash







Thanks. I can try to use a later VPP version. A thing to note is that when

we did try to use the master release before, VPP failed to discover

interfaces, even when they were whitelisted. Not sure if something has

changed in the way VPP discover interfaces in later versions. I will try

with 1704 though.



Ah OK sorry, should have realized what PF was in this context. Not sure of

all the info that might be needed but here's what I could think of:



root@node-4:~# lspci -t -v

[...]

 +-03.0-[0b-0c]--+-00.0  Intel Corporation 82599ES 10-Gigabit

SFI/SFP+ Network Connection

 |   +-00.1  Intel Corporation 82599ES 10-Gigabit

SFI/SFP+ Network Connection

 |   +-10.0  Intel Corporation 82599 Ethernet

Controller Virtual Function

 |   +-10.1  Intel Corporation 82599 Ethernet

Controller Virtual Function

 [...]



root@node-4:~# lshw -class 
network -businfo

Bus info  Device  Class  Description



pci@:0b:00.0  enp11s0f0   network    82599ES 10-Gigabit SFI/SFP+ Network Connection

pci@:0b:00.1  enp11s0f1   network    82599ES 10-Gigabit SFI/SFP+ Network Connection



root@node-4:~# cat 
/sys/class/net/enp11s0f0/device/sriov_totalvfs

63



root@node-4:~# cat 
/sys/class/net/enp11s0f0/device/sriov_numvfs

16



root@node-4:~# ethtool -i 
enp11s0f0

driver: ixgbe

version: 3.15.1-k

firmware-version: 0x61c10001

bus-info: :0b:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: no



/Tomas



On 12 May 2017 at 10:44, Damjan Marion (damarion) wrote:




Re: [vpp-dev] VPP and SR-IOV(?): No packets reaching VPP interfaces

2017-05-11 Thread Damjan Marion (damarion)
Dear Tomas,

Can you please share your PF configuration so I can try to reproduce?

Thanks,

Damjan

On 11 May 2017, at 17:07, Tomas Brännström 
> wrote:

Hello
Since the last mail I sent I've managed to get our test client working and VPP 
running in a KVM VM.

We are still facing some problems though. We have two servers, one where the
virtual machines are running and one we use as the openstack controller. They 
are connected to each other with a 10G NIC. We have SR-IOV configured for the 
10G NIC.

So VPP is installed in a VM, and all interfaces work OK; they can be reached
from outside the VM etc. Following the basic examples on the wiki, we configure 
VPP to take over the interfaces:

vpp# set int ip address TenGigabitEthernet0/6/0 
10.0.1.101/24
vpp# set int ip address TenGigabitEthernet0/7/0 
10.0.2.101/24
vpp# set int state TenGigabitEthernet0/6/0 up
vpp# set int state TenGigabitEthernet0/7/0 up

But when trying to ping for example the physical NIC on the other server, we 
get no reply:

vpp# ip probe 10.0.1.1 TenGigabitEthernet0/6/0
ip probe-neighbor: Resolution failed for 10.0.1.1

If I do a tcpdump on the physical interface when trying to ping, I see ARP 
packets being sent so -something- is happening, but it seems that packets are 
not correctly arriving to VPP... I can't ping from the physical host either, 
but the ARP cache is updated on the host when trying to ping from VPP.

I've tried dumping counters etc. but I can't really see anything. The trace 
does not show anything either. This is the output from "show hardware":

vpp# show hardware
  NameIdx   Link  Hardware
TenGigabitEthernet0/6/01 up   TenGigabitEthernet0/6/0
  Ethernet address fa:16:3e:04:42:d1
  Intel 82599 VF
carrier up full duplex speed 1 mtu 9216
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024

tx frames ok   3
tx bytes ok  126
extended stats:
  tx good packets  3
  tx good bytes  126
TenGigabitEthernet0/7/02 up   TenGigabitEthernet0/7/0
  Ethernet address fa:16:3e:f2:15:a5
  Intel 82599 VF
carrier up full duplex speed 1 mtu 9216
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024

I've tried a similar setup between two virtual box VM's and that worked OK, so 
I'm thinking it might have something to do with SR-IOV for some reason. I'm 
having a hard time troubleshooting this since I'm not sure how to check where 
the packets actually get lost...

/Tomas
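
One way to see whether the ARP replies ever make it into the graph is a packet trace on the DPDK input node; a sketch, reusing the probe from this thread (the packet count is arbitrary):

vppctl trace add dpdk-input 50
vppctl ip probe 10.0.1.1 TenGigabitEthernet0/6/0
vppctl show trace
vppctl clear trace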

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Fwd: [dpdk-dev] [dpdk-announce] DPDK 17.05 released

2017-05-11 Thread Damjan Marion (damarion)

And we have DPDK 17.05 as default in VPP (merged 15 min ago).

Thanks,

Damjan


Begin forwarded message:

From: Thomas Monjalon >
Subject: [dpdk-dev] [dpdk-announce] DPDK 17.05 released
Date: 11 May 2017 at 04:39:53 GMT+2
To: annou...@dpdk.org

A new major release is available:
http://fast.dpdk.org/rel/dpdk-17.05.tar.xz

It is the biggest release ever!
1263 patches from 128 authors
1218 files changed, 121233 insertions(+), 32610 deletions(-)

There are 60 new contributors
(including authors, reviewers and testers):
Thanks to Alex Marginean, Alexey Kardashevskiy, Allain Legacy, Ami Sabo,
Andriy Berestovskyy, Billy McFall, Charles Myers, Chris Metcalf,
Cristian Sovaiala, David Riddoch, David Su, Derek Chickles, Ed Czeck,
Fangfang Wei, Gage Eads, Gang Jiang, Gang Yang, Geoff Thorpe,
Gregory Etelson, Guduri Prathyusha, Henry Cai, Herakliusz Lipiec,
Hiroki Shirokura, Horia Geanta Neag, Huanle Han, Ivan Nardi,
Jens Freimann, Jin Heo, Johan Samuelsson, John Jacques, John Miller,
Joseph Richard, Julien Castets, Laura Stroe, Laurent Hardy, Lijuan A Tu,
Mallesham Jatharakonda, Marcin Wilk, Marcin Wojtas, Mark Asselstine,
Mark Bloch, Matt Peters, Michal Krawczyk, Nirmoy Das, Pankaj Gupta,
Pawel Rutkowski, Roman Korynkevych, Roman Zhukov, Roy Pledge,
Sagar Abhang, Shepard Siegel, Shijith Thotton, Shrikrishna Khare,
Shyam Kumar Shrivastav, Srisivasubramanian S, Sunil Kulkarni,
Timothy Redaelli, Venkat Koppula, Vipin Varghese, Wei Wang.

These new contributors are associated with these domain names:
6wind.com, atomicrules.com, brocade.com, caviumnetworks.com,
ericsson.com, huawei.com, intel.com, ustc.edu.cn, mellanox.com,
nxp.com, oktetlabs.ru, oneconvergence.com, ozlabs.ru, radware.com,
redhat.com, scaleway.com, semihalf.com, solarflare.com, spirent.com,
suse.de, vmware.com, weka.io, windriver.com.

Some highlights:
- PCI and VDEV bus rework in progress
- mbuf rework
- event driven programming model
- software eventdev driver
- Cavium OCTEON TX eventdev driver
- Cavium LiquidIO driver
- NXP DPAA2 drivers
- Atomic Rules Arkville driver
- Wind River AVP driver
- DOCSIS BPI+ crypto

More details in the release notes:
http://dpdk.org/doc/guides/rel_notes/release_17_05.html

The new features for the 17.08 cycle must be submitted before the end
of May, in order to be reviewed and integrated during June.
The next release is expected to happen at the very beginning of August.

There were a lot of bugs discovered in the last days of 17.05.
It is probably a sign that DPDK is more and more tested.
If you want to dedicate a machine for DPDK testing
and automatically send/publish the reports,
please join the CI team on c...@dpdk.org:
http://dpdk.org/ml/listinfo/ci

Thanks everyone
DPDK: where collaboration meets networking

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VIRL jobs failing

2017-05-09 Thread Damjan Marion (damarion)

Looks like many VIRL jobs are failing with the following error:

17:15:55 VIRL simulation start failed on 10.30.51.29

One sample run:

https://jenkins.fd.io/job/vpp-csit-verify-virl-master/5281/console

Can somebody take a look?

Thanks,

Damjan



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] issues with running VPP on a Fortville NIC

2017-05-08 Thread Damjan Marion (damarion)

Can you try this one:

https://gerrit.fd.io/r/#/c/6614/


It should fix the PF case…
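
A sketch of pulling that change in for a local test build; the patch-set number (/1 here) is a guess, so check the change page for the latest one:

git fetch https://gerrit.fd.io/r/vpp refs/changes/14/6614/1
git checkout FETCH_HEAD
make build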

On 8 May 2017, at 17:01, Mircea Orban 
<mior...@hotmail.com<mailto:mior...@hotmail.com>> wrote:

It would be the same output because it’s the same server:

-  :0b:00.0-3 are the four 10G PFs
-  :0b:02.0 to 4 - 5 VFs for  :0b:00.0
-  And :0b:06.0 to 4 - 5VFs for :0b:00.1.

Thanks,
Mircea


From: Damjan Marion (damarion) [mailto:damar...@cisco.com]
Sent: Monday, May 08, 2017 10:51 AM
To: Mircea Orban <mior...@hotmail.com<mailto:mior...@hotmail.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] issues with running VPP on a Fortville NIC


Thanks,

what about PF case? Can you also grab output for PF case?

On 8 May 2017, at 16:47, Mircea Orban 
<mior...@hotmail.com<mailto:mior...@hotmail.com>> wrote:

Here it is.

Thanks,
Mircea

vpp#
vpp# show pci
Address  Sock VID:PID Link Speed   Driver  Product Name 
   Vital Product Data
:09:00.0   0  15b3:1007   8.0 GT/s x8  mlx4_core
:0b:00.0   0  8086:1584   8.0 GT/s x8  i40eXL710 40GbE 
Controller  RV: 0x 86
:0b:00.1   0  8086:1584   8.0 GT/s x8  i40eXL710 40GbE 
Controller  RV: 0x 86
:0b:00.2   0  8086:1584   8.0 GT/s x8  i40eXL710 40GbE 
Controller  RV: 0x 86
:0b:00.3   0  8086:1584   8.0 GT/s x8  i40eXL710 40GbE 
Controller  RV: 0x 86
:0b:02.0   0  8086:154c   unknown  vfio-pci
:0b:02.1   0  8086:154c   unknown  vfio-pci
:0b:02.2   0  8086:154c   unknown  vfio-pci
:0b:02.3   0  8086:154c   unknown  vfio-pci
:0b:02.4   0  8086:154c   unknown  vfio-pci
:0b:06.0   0  8086:154c   unknown  vfio-pci
:0b:06.1   0  8086:154c   unknown  vfio-pci
:0b:06.2   0  8086:154c   unknown  vfio-pci
:0b:06.3   0  8086:154c   unknown  vfio-pci
:0b:06.4   0  8086:154c   unknown      vfio-pci
vpp#
vpp#

From: Damjan Marion (damarion) [mailto:damar...@cisco.com]
Sent: Monday, May 08, 2017 10:18 AM
To: Mircea Orban <mior...@hotmail.com<mailto:mior...@hotmail.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] issues with running VPP on a Fortville NIC




On 5 May 2017, at 22:37, Mircea Orban 
<mior...@hotmail.com<mailto:mior...@hotmail.com>> wrote:


I have a Fortville NIC (XL710-QDA1) with one QSFP+ port that supports two 
modes: 1X40g and 4X10g.

While in 1X40g mode everything seems to be fine, when I run VPP in 4X10g mode
some issues seem to occur:

-  When I use PFs, it’s all good except that the link speed is not
detected properly (VPP thinks these are 40G links)
-  Additionally, with VFs, VPP seems to be confused by the VF Id
numbering scheme (I think). Only one of the whitelisted VFs is picked up (out
of 6 configured), and when I try to bring it up VPP crashes (see attached log).

Please let me know if it can get fixed.

I think problem here is very simple, nobody added support for 4x10G mode :)

Can you send output of “show pci” ?

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] issues with running VPP on a Fortville NIC

2017-05-08 Thread Damjan Marion (damarion)

Thanks,

what about PF case? Can you also grab output for PF case?

On 8 May 2017, at 16:47, Mircea Orban 
<mior...@hotmail.com<mailto:mior...@hotmail.com>> wrote:

Here it is.

Thanks,
Mircea

vpp#
vpp# show pci
Address  Sock VID:PID Link Speed   Driver  Product Name 
   Vital Product Data
:09:00.0   0  15b3:1007   8.0 GT/s x8  mlx4_core
:0b:00.0   0  8086:1584   8.0 GT/s x8  i40eXL710 40GbE 
Controller  RV: 0x 86
:0b:00.1   0  8086:1584   8.0 GT/s x8  i40eXL710 40GbE 
Controller  RV: 0x 86
:0b:00.2   0  8086:1584   8.0 GT/s x8  i40eXL710 40GbE 
Controller  RV: 0x 86
:0b:00.3   0  8086:1584   8.0 GT/s x8  i40eXL710 40GbE 
Controller  RV: 0x 86
:0b:02.0   0  8086:154c   unknown  vfio-pci
:0b:02.1   0  8086:154c   unknown  vfio-pci
:0b:02.2   0  8086:154c   unknown  vfio-pci
:0b:02.3   0  8086:154c   unknown  vfio-pci
:0b:02.4   0  8086:154c   unknown  vfio-pci
:0b:06.0   0  8086:154c   unknown  vfio-pci
:0b:06.1   0  8086:154c   unknown  vfio-pci
:0b:06.2   0  8086:154c   unknown  vfio-pci
:0b:06.3   0  8086:154c   unknown  vfio-pci
:0b:06.4   0  8086:154c   unknown  vfio-pci
vpp#
vpp#

From: Damjan Marion (damarion) [mailto:damar...@cisco.com]
Sent: Monday, May 08, 2017 10:18 AM
To: Mircea Orban <mior...@hotmail.com<mailto:mior...@hotmail.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] issues with running VPP on a Fortville NIC




On 5 May 2017, at 22:37, Mircea Orban 
<mior...@hotmail.com<mailto:mior...@hotmail.com>> wrote:


I have a Fortville NIC (XL710-QDA1) with one QSFP+ port that supports two 
modes: 1X40g and 4X10g.

While in 1X40g mode everything seems to be fine, when I run VPP in 4X10g mode
some issues seem to occur:

-  When I use PFs, it’s all good except that the link speed is not
detected properly (VPP thinks these are 40G links)
-  Additionally, with VFs, VPP seems to be confused by the VF Id
numbering scheme (I think). Only one of the whitelisted VFs is picked up (out
of 6 configured), and when I try to bring it up VPP crashes (see attached log).

Please let me know if it can get fixed.

I think problem here is very simple, nobody added support for 4x10G mode :)

Can you send output of “show pci” ?

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] issues with running VPP on a Fortville NIC

2017-05-08 Thread Damjan Marion (damarion)



On 5 May 2017, at 22:37, Mircea Orban 
> wrote:


I have a Fortville NIC (XL710-QDA1) with one QSFP+ port that supports two 
modes: 1X40g and 4X10g.

While in 1X40g mode everything seems to be fine, when I run VPP in 4X10g mode
some issues seem to occur:

-  When I use PFs, it’s all good except that the link speed is not
detected properly (VPP thinks these are 40G links)
-  Additionally, with VFs, VPP seems to be confused by the VF Id
numbering scheme (I think). Only one of the whitelisted VFs is picked up (out
of 6 configured), and when I try to bring it up VPP crashes (see attached log).

Please let me know if it can get fixed.

I think problem here is very simple, nobody added support for 4x10G mode :)

Can you send output of “show pci” ?
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Question: make realclean

2017-05-08 Thread Damjan Marion (damarion)

On 3 May 2017, at 17:20, Jon Loeliger 
> wrote:

Hey VPP Builders,

Do you ever use "cd build-root; make distclean"?
Does it look sort of like this:


jdl $ cd build-root/
jdl $ make distclean
rm -rf /home/jdl/workspace/vpp/build-root/build-*/
rm -rf /home/jdl/workspace/vpp/build-root/build-tool-*
rm -rf /home/jdl/workspace/vpp/build-root/install-*
rm -rf /home/jdl/workspace/vpp/build-root/images-*
rm -rf /home/jdl/workspace/vpp/build-root/tools
rm -rf /home/jdl/workspace/vpp/build-root/*.deb
rm -rf /home/jdl/workspace/vpp/build-root/*.rpm
rm -rf /home/jdl/workspace/vpp/build-root/*.changes
rm -rf /home/jdl/workspace/vpp/build-root/python
if [ -e /usr/bin/dh ];then (cd 
/home/jdl/workspace/vpp/build-root/deb/;debian/rules clean); fi
rm -f /home/jdl/workspace/vpp/build-root/deb/debian/*.install
rm -f /home/jdl/workspace/vpp/build-root/deb/debian/changelog

Remember back in

commit c06eeb0e3c9c1a9fa8f913e2d785b03220bfdabd
Author: Damjan Marion >
Date:   Tue Apr 18 15:26:39 2017 +0200

Fix "make dist" to include version number, docouple it from rpm packaging

Change-Id: If2f9976d668089026c97b897cf449bff09050631
Signed-off-by: Damjan Marion >

when we moved the RPM building pieces out of build-root/rpm and
placed them under extras/rpm instead?

Should we have also modified the distclean make target to rm the rpms
out of extras/rpm too?  Or was that an intentional change as well?

Makes sense to do that from the top-level makefile, i.e. run “git clean -fdX”, but
I would prefer that build-root/Makefile only takes care of build-root/.
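
For reference, a sketch; -X limits the clean to git-ignored files (i.e. build products, including anything left under extras/rpm), and -n previews what would be removed:

git clean -ndX      # dry run first
git clean -fdX
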
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] dpdk_device_input - not checking for vlan header...

2017-05-08 Thread Damjan Marion (damarion)
> 
> On 3 May 2017, at 13:28, Nagaprabhanjan Bellaru  wrote:
> 
> Hi,
> 
> It looks like dpdk_device_input() is not checking whether there is a vlan header
> in the packet, and always sets buffer->current_data to 14 (smac+dmac+ethtype).
> Because of that, ip4_input is not able to recognize a correct IP packet.
> 
> For example, I have a subinterface created with vlan100 - which is trying to 
> send IP packets. The receiving side is setting l3offset to 14 and feeding the 
> packet to ip4-input-no-checksum and is getting dropped there.
> 
> Am I missing something? "show interface" shows the main and the sub 
> interface. "show trace" for dpdk_input shows the vlan tag as 100. But 
> ip4_input_inline gets a buffer with current_data as 14 instead of 18 
> (accounting for vlan header)

We do respect VLAN ethertypes, is your ethertype set correctly?
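
For reference, a sketch of a dot1q 100 sub-interface on a placeholder parent (GigabitEthernet0/8/0 is hypothetical); traffic must arrive tagged with VLAN 100 and the standard 0x8100 ethertype for this sub-interface to match:

vppctl create sub-interfaces GigabitEthernet0/8/0 100
vppctl set interface state GigabitEthernet0/8/0.100 up
vppctl set interface ip address GigabitEthernet0/8/0.100 10.1.1.1/24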

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?

2017-04-21 Thread Damjan Marion (damarion)


Sent from my iPhone

> On 21 Apr 2017, at 20:46, Ernst, Eric <eric.er...@intel.com> wrote:
> 
> On Fri, Apr 21, 2017 at 06:04:34PM +, Damjan Marion (damarion) wrote:
>>> On 21 Apr 2017, at 20:02, Ernst, Eric <eric.er...@intel.com> wrote:
>>> 
>>> Ugh.  Definitely not.  I was just trying to run following the directions I 
>>> could find after installing VPP binary package.
>>> 
>>> What needs to change from the default? 
>> 
>> Then you likely need to install vpp-plugins package...
>> 
> 
> Thanks Damjan.  I installed the vpp-plugins package and can use multiple 
> worker threads
> without crashing.  Since I'll only be using DPDK, I shouldn't hit that crash 
> again,
> but seems like a bug/issue anyway.
> 
> --Eric

The internal buffer manager (which is used when the dpdk plugin is not loaded) is
not thread safe. It was built in the days when vpp was a single-threaded app.

It is on the todo list...


> 
> 
>>> 
>>> Thanks,
>>> Eric
>>> 
>>> -Original Message-
>>> From: Damjan Marion (damarion) [mailto:damar...@cisco.com] 
>>> Sent: Friday, April 21, 2017 10:59 AM
>>> To: Ernst, Eric <eric.er...@intel.com>
>>> Cc: Steven Luong (sluong) <slu...@cisco.com>; Billy McFall 
>>> <bmcf...@redhat.com>; vpp-dev@lists.fd.io
>>> Subject: Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?
>>> 
>>>> 
>>>> On 21 Apr 2017, at 18:28, Ernst, Eric <eric.er...@intel.com> wrote:
>>>> 
>>>> Backtrace and startup.conf found below:
>>>> 
>>>> ===>Backtrace::
>>>> (gdb) run -c /etc/vpp/startup.conf
>>>> Starting program: /usr/bin/vpp -c /etc/vpp/startup.conf [Thread 
>>>> debugging using libthread_db enabled] Using host libthread_db library 
>>>> "/lib/x86_64-linux-gnu/libthread_db.so.1".
>>>> vlib_plugin_early_init:360: plugin path /usr/lib/vpp_plugins [New 
>>>> Thread 0x7fff97041700 (LWP 68889)] [New Thread 0x7fff96840700 (LWP 
>>>> 68890)] [New Thread 0x7fff9603f700 (LWP 68891)]
>>>> /usr/bin/vpp[68885]: unknown input `
>>>> unix_physmem_init: use huge pages
>>>> unix_physmem_init: use huge pages
>>>> 
>>>> Thread 2 "vpp_wk_0" received signal SIGSEGV, Segmentation fault.
>>>> [Switching to Thread 0x7fff97041700 (LWP 68889)]
>>>> 0x77149a62 in ?? () from 
>>>> /usr/lib/x86_64-linux-gnu/libvnet.so.0
>>>> (gdb) bt
>>>> #0  0x77149a62 in ?? () from 
>>>> /usr/lib/x86_64-linux-gnu/libvnet.so.0
>>>> #1  0x77757f89 in dispatch_node () from 
>>>> /usr/lib/x86_64-linux-gnu/libvlib.so.0
>>>> #2  0x7775827d in dispatch_pending_node () from 
>>>> /usr/lib/x86_64-linux-gnu/libvlib.so.0
>>>> #3  0x77758537 in vlib_worker_loop () from 
>>>> /usr/lib/x86_64-linux-gnu/libvlib.so.0
>>>> #4  0x769e9c60 in clib_calljmp () from 
>>>> /usr/lib/x86_64-linux-gnu/libvppinfra.so.0
>>>> #5  0x7fff97040f20 in ?? ()
>>>> #6  0x767a96ca in start_thread (arg=0x0) at 
>>>> pthread_create.c:333
>>>> #7  0x in ?? ()
>>>> 
>>>> ---
>>>> 
>>>> ===>Startup.conf:
>>>> vhost-user {
>>>> coalesce-frames 0
>>>> }
>>>> 
>>>> unix {
>>>> nodaemon
>>>> log /tmp/vpp.log
>>>> full-coredump
>>>> }
>>>> 
>>>> api-trace {
>>>> on
>>>> }
>>>> 
>>>> api-segment {
>>>> gid vpp
>>>> }
>>>> 
>>>> cpu {
>>>>  skip-cores 4
>>>>  workers 2
>>>> }
>>>> 
>>> 
>>> You are trying to run VPP without dpdk plugin loaded. Is this intentional?
>>> 
>>> 
>>> ___
>>> vpp-dev mailing list
>>> vpp-dev@lists.fd.io
>>> https://lists.fd.io/mailman/listinfo/vpp-dev
>> 
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?

2017-04-21 Thread Damjan Marion (damarion)
> On 21 Apr 2017, at 20:02, Ernst, Eric <eric.er...@intel.com> wrote:
> 
> Ugh.  Definitely not.  I was just trying to run following the directions I 
> could find after installing VPP binary package.
> 
> What needs to change from the default? 

Then you likely need to install vpp-plugins package...
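
For example, on Ubuntu with the fd.io package repository already set up, something like:

  sudo apt-get install vpp-plugins

should pull in the plugin set (dpdk plugin included); the exact package source depends on
how vpp itself was installed.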

> 
> Thanks,
> Eric
> 
> -----Original Message-
> From: Damjan Marion (damarion) [mailto:damar...@cisco.com] 
> Sent: Friday, April 21, 2017 10:59 AM
> To: Ernst, Eric <eric.er...@intel.com>
> Cc: Steven Luong (sluong) <slu...@cisco.com>; Billy McFall 
> <bmcf...@redhat.com>; vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?
> 
>> 
>> On 21 Apr 2017, at 18:28, Ernst, Eric <eric.er...@intel.com> wrote:
>> 
>> Backtrace and startup.conf found below:
>> 
>> ===>Backtrace::
>> (gdb) run -c /etc/vpp/startup.conf
>> Starting program: /usr/bin/vpp -c /etc/vpp/startup.conf [Thread 
>> debugging using libthread_db enabled] Using host libthread_db library 
>> "/lib/x86_64-linux-gnu/libthread_db.so.1".
>> vlib_plugin_early_init:360: plugin path /usr/lib/vpp_plugins [New 
>> Thread 0x7fff97041700 (LWP 68889)] [New Thread 0x7fff96840700 (LWP 
>> 68890)] [New Thread 0x7fff9603f700 (LWP 68891)]
>> /usr/bin/vpp[68885]: unknown input `
>> unix_physmem_init: use huge pages
>> unix_physmem_init: use huge pages
>> 
>> Thread 2 "vpp_wk_0" received signal SIGSEGV, Segmentation fault.
>> [Switching to Thread 0x7fff97041700 (LWP 68889)]
>> 0x77149a62 in ?? () from 
>> /usr/lib/x86_64-linux-gnu/libvnet.so.0
>> (gdb) bt
>> #0  0x77149a62 in ?? () from 
>> /usr/lib/x86_64-linux-gnu/libvnet.so.0
>> #1  0x77757f89 in dispatch_node () from 
>> /usr/lib/x86_64-linux-gnu/libvlib.so.0
>> #2  0x7775827d in dispatch_pending_node () from 
>> /usr/lib/x86_64-linux-gnu/libvlib.so.0
>> #3  0x77758537 in vlib_worker_loop () from 
>> /usr/lib/x86_64-linux-gnu/libvlib.so.0
>> #4  0x769e9c60 in clib_calljmp () from 
>> /usr/lib/x86_64-linux-gnu/libvppinfra.so.0
>> #5  0x7fff97040f20 in ?? ()
>> #6  0x767a96ca in start_thread (arg=0x0) at 
>> pthread_create.c:333
>> #7  0x in ?? ()
>> 
>> ---
>> 
>> ===>Startup.conf:
>> vhost-user {
>> coalesce-frames 0
>> }
>> 
>> unix {
>> nodaemon
>> log /tmp/vpp.log
>> full-coredump
>> }
>> 
>> api-trace {
>> on
>> }
>> 
>> api-segment {
>> gid vpp
>> }
>> 
>> cpu {
>>   skip-cores 4
>>   workers 2
>> }
>> 
> 
> You are trying to run VPP without dpdk plugin loaded. Is this intentional?
> 
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?

2017-04-21 Thread Damjan Marion (damarion)
> 
> On 21 Apr 2017, at 18:28, Ernst, Eric  wrote:
> 
> Backtrace and startup.conf found below:
> 
> ===>Backtrace::
> (gdb) run -c /etc/vpp/startup.conf
> Starting program: /usr/bin/vpp -c /etc/vpp/startup.conf
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> vlib_plugin_early_init:360: plugin path /usr/lib/vpp_plugins
> [New Thread 0x7fff97041700 (LWP 68889)]
> [New Thread 0x7fff96840700 (LWP 68890)]
> [New Thread 0x7fff9603f700 (LWP 68891)]
> /usr/bin/vpp[68885]: unknown input `
> unix_physmem_init: use huge pages
> unix_physmem_init: use huge pages
> 
> Thread 2 "vpp_wk_0" received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fff97041700 (LWP 68889)]
> 0x77149a62 in ?? () from /usr/lib/x86_64-linux-gnu/libvnet.so.0
> (gdb) bt
> #0  0x77149a62 in ?? () from /usr/lib/x86_64-linux-gnu/libvnet.so.0
> #1  0x77757f89 in dispatch_node () from 
> /usr/lib/x86_64-linux-gnu/libvlib.so.0
> #2  0x7775827d in dispatch_pending_node () from 
> /usr/lib/x86_64-linux-gnu/libvlib.so.0
> #3  0x77758537 in vlib_worker_loop () from 
> /usr/lib/x86_64-linux-gnu/libvlib.so.0
> #4  0x769e9c60 in clib_calljmp () from 
> /usr/lib/x86_64-linux-gnu/libvppinfra.so.0
> #5  0x7fff97040f20 in ?? ()
> #6  0x767a96ca in start_thread (arg=0x0) at pthread_create.c:333
> #7  0x in ?? ()
> 
> ---
> 
> ===>Startup.conf:
> vhost-user {
>  coalesce-frames 0
> }
> 
> unix {
>  nodaemon
>  log /tmp/vpp.log
>  full-coredump
> }
> 
> api-trace {
>  on
> }
> 
> api-segment {
>  gid vpp
> }
> 
> cpu {
>skip-cores 4
>workers 2
> }
> 

You are trying to run VPP without dpdk plugin loaded. Is this intentional?


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Fwd: VPP

2017-04-21 Thread Damjan Marion (damarion)

> On 20 Apr 2017, at 23:18, Mahdi Eshaghi  wrote:
> 
> 
> 
> Hi
> can use dpdk ring in vpp?

It is doable. You will need to extend the dpdk plugin code to deal with that stuff.

> can en-queue packet in vpp and dequeue packet in another process?

We are going to add a shared library to talk with VPP over the memif shared memory 
packet interface….
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] should socket be deleted after vhost-user rm?

2017-04-21 Thread Damjan Marion (damarion)

> On 21 Apr 2017, at 00:02, Ernst, Eric  wrote:
> 
> Is it expected that the socket be kept on filesystem after the vhost-user 
> interface
> is removed from the system?  This surprised me.

Looks like we are missing a single unlink(…) call in that code… We need a volunteer to 
submit a patch….
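
For whoever picks this up, a rough sketch of the missing cleanup (the function and
argument names here are assumptions for illustration, not the actual vhost-user code):

  /* sketch: on "delete vhost-user", remove the socket file created in server mode */
  #include <unistd.h>

  static void
  vhost_user_remove_socket_file (const char *sock_filename, int is_server)
  {
    /* only the server side creates the socket file, so only it should unlink it */
    if (is_server && sock_filename)
      unlink (sock_filename);
  }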

> 
> To recreate:
> create vhost socket /tmp/sock2.sock server
> delete vhost  
> vpp# create vhost socket /tmp/sock2.sock server
> VirtualEthernet0/0/1
> vpp# delete vhost-user VirtualEthernet0/0/1
> 
> $ ls /tmp | grep sock
> sock2.sock
> 
> 
> Thanks,
> Eric
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?

2017-04-21 Thread Damjan Marion (damarion)


> On 21 Apr 2017, at 04:10, Steven Luong (sluong) <slu...@cisco.com> wrote:
> 
> Eric,
> 
> How do you configure the startup.conf with multiple worker threads? Did you 
> change both corelist-workers and workers? For example, this is how I 
> configure 2 worker threads using core 2 and 14.
> 
>   corelist-workers 2,14
>   workers 2
> 
> Any chance you can start vpp with gdb to get the backtrace to see where it 
> went belly up?

Those 2 options are mutually exclusive; either corelist-workers or workers 
should be used…
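
That is, the cpu stanza in startup.conf should contain one form or the other, for example:

  cpu {
    corelist-workers 2,14
  }

or

  cpu {
    workers 2
  }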


> 
> Steven
> 
> On 4/20/17, 5:32 PM, "Ernst, Eric" <eric.er...@intel.com> wrote:
> 
>Makes sense, thanks Steven.
> 
>One more round of questions -- I expected the numbers I got between the 
> two VMs (~2gpbs) given that I had just a single core running for VPP.  I went 
> ahead and amended my startup.conf in order to make use of 2 and then again as 
> 4 worker threads, all within the same socket.
> 
>After booting the VMs and testing basic connectivity (ping!), I begin to 
> either run ab and nginx, or just iperf between the VMs.  In either case, in 
> short time VPP crashes.  Does this ring a bell?  I am still ramping on VPP 
> and understand I likely am making some assumptions that are wrong.
> Guidance?
> 
>With two workers:
>Apr 20 17:17:03 eernstworkstation systemd[1]: 
> dev-disk-by\x2duuid-def55f66\x2d6b20\x2d47c6\x2da02f\x2dbdaf324ed3b7.device: 
> Job 
> dev-disk-by\x2duuid-def55f66\x2d6b20\x2d47c6\x2da02f\x2dbdaf324ed3b7.device/start
>  timed out.
>Apr 20 17:17:03 eernstworkstation systemd[1]: Timed out waiting for device 
> dev-disk-by\x2duuid-def55f66\x2d6b20\x2d47c6\x2da02f\x2dbdaf324ed3b7.device.
>Apr 20 17:17:03 eernstworkstation systemd[1]: Dependency failed for 
> /dev/disk/by-uuid/def55f66-6b20-47c6-a02f-bdaf324ed3b7.
>Apr 20 17:17:03 eernstworkstation systemd[1]: 
> dev-disk-by\x2duuid-def55f66\x2d6b20\x2d47c6\x2da02f\x2dbdaf324ed3b7.swap: 
> Job 
> dev-disk-by\x2duuid-def55f66\x2d6b20\x2d47c6\x2da02f\x2dbdaf324ed3b7.swap/start
>  failed with result 'dependenc
>Apr 20 17:17:03 eernstworkstation systemd[1]: 
> dev-disk-by\x2duuid-def55f66\x2d6b20\x2d47c6\x2da02f\x2dbdaf324ed3b7.device: 
> Job 
> dev-disk-by\x2duuid-def55f66\x2d6b20\x2d47c6\x2da02f\x2dbdaf324ed3b7.device/start
>  failed with result 'timeo
>Apr 20 17:17:06 eernstworkstation vpp[38637]: /usr/bin/vpp[38637]: 
> received signal SIGSEGV, PC 0x7f0d02b5b49c, faulting address 0x7f1cc12f5770
>Apr 20 17:17:06 eernstworkstation /usr/bin/vpp[38637]: received signal 
> SIGSEGV, PC 0x7f0d02b5b49c, faulting address 0x7f1cc12f5770
>Apr 20 17:17:06 eernstworkstation systemd[1]: vpp.service: Main process 
> exited, code=killed, status=6/ABRT
>Apr 20 17:17:06 eernstworkstation systemd[1]: vpp.service: Unit entered 
> failed state.
>Apr 20 17:17:06 eernstworkstation systemd[1]: vpp.service: Failed with 
> result 'signal'.
>Apr 20 17:17:06 eernstworkstation systemd[1]: vpp.service: Service 
> hold-off time over, scheduling restart.
> 
>Apr 20 17:17:06 eernstworkstation systemd[1]: Stopped vector packet 
> processing engine.
> 
> 
> 
>-----Original Message-
>From: Steven Luong (sluong) [mailto:slu...@cisco.com] 
>Sent: Thursday, April 20, 2017 4:33 PM
>To: Ernst, Eric <eric.er...@intel.com>; Billy McFall <bmcf...@redhat.com>
>Cc: Damjan Marion (damarion) <damar...@cisco.com>; vpp-dev@lists.fd.io
>Subject: Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?
> 
>Eric,
> 
>In my testing, I notice my number is 2 to 3X better when coalesce is 
> disabled. I am using Ivy Bridge. So it looks like the mileage varies a lot 
> with Sandy Bridge, 40X better.
> 
>What is coalesce?
>When the driver places descriptors into the vring, it may request 
> interrupt or no interrupt after the device is done processing with the 
> descriptors. If the driver wants interrupt, the device may send it 
> immediately if coalesce is not enabled. If it is enabled, the device will 
> delay posting the interrupt until more descriptors are received to meet the 
> coalesce number. This is an attempt to reduce the number of interrupts 
> generated to the driver. My guess is when coalesce is enabled, the 
> application, iperf3 in this case, is not shooting packets as fast as it can 
> until it receives the interrupt for the packets sent. Thus the total 
> bandwidth number looks bad. By disabling coalesce, the application is 
> shooting a lot more packets in the interval at the expense of more interrupts 
> are generated in the VM.
> 
>I don’t know why coalesce is enabled by default. This 

Re: [vpp-dev] Connectivity issue when using vhost-user on 17.04?

2017-04-20 Thread Damjan Marion (damarion)

Eric,

A long time ago (I think 3+ years), when I wrote the original vhost-user driver in vpp,
I added a feature-mask knob to the CLI which messes with the feature bitmap, purely
for debugging reasons.

And I regret many times…

Somebody dug it out and documented it somewhere, for reasons unknown to me.
Now it spreads like a virus and I cannot stop it :)

So please don’t use it, it is evil….

Thanks,

Damjan

> On 20 Apr 2017, at 20:49, Ernst, Eric  wrote:
> 
> All,
> 
> After updating the startup.conf to not reference DPDK, per direction in 
> release
> notification thread, I was able to startup vpp and create interfaces.
> 
> Now that I'm testing, I noticed that I can no longer ping between VM hosts 
> which
> make use of vhost-user interfaces and are connected via l2 bridge domain
> (nor l2 xconnect).  I double checked, then reverted back to 17.01, where I 
> could
> again verify connectivity between the guests.
> 
> Any else seeing this, or was there a change in how this should be set up?  For
> reference, I have my (simple) setup described @ a gist at [1].
> 
> Thanks,
> eric
> 
> 
> [1] - https://gist.github.com/egernst/5982ae6f0590cd83330faafacc3fd545
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Build failure with latest VPP

2017-04-18 Thread Damjan Marion (damarion)

> On 18 Apr 2017, at 12:52, Marco Varlese <marco.varl...@suse.com> wrote:
> 
> On Fri, 2017-04-14 at 11:19 +, Damjan Marion (damarion) wrote:
>> Marco,
>> 
>> If you want to do downstream packaging and link against shared dpdk, you can 
>> do it by compiling directly from autotools project. Basically:
>> 
>> cd src/
>> autoreconf -fis
>> export CFLAGS=….
>> ./configure —flags
>> make
>> make install
>> 
>> Please note that we are intentionally linking against static DPDK libs as 
>> want
>> to have flexibility
>> of adding additional patches to dpdk build. Currently we have bunch of 
>> patcher
>> related to Mellanox ConnectX-5 
>> which are not available in latest dpdk release.
> I understand why you use the internal DPDK version. 
> However, I am hopeful (for the future) that cross-collaboration with the DPDK
> project can avoid the need of keep doing this (for all the good reason of 
> having
> a common up-stream project with all the goodies in it rather than some sort of
> "fork" in a consumer project).

+1

> 
>> 
>> May I ask what are your distro guidance when it comes to optimization of the
>> code for specific 
>> microarchitectures? Do you need to support all x86_64 systems or just few
>> latest generations?
> We support DPDK since version 2.2 (roughly mid-2015) so I would say any
> processor which is not older than 2 years...
> 
>> 
>> How do you compile DPDK?
> I think the easiest for me here is to post here the link to our .spec file 
> used
> to generate the package we ship...
> https://build.opensuse.org/package/view_file/network/dpdk/dpdk.spec?expand=1
> You should be able to access it; in case you can't, please, let me know so I 
> can
> send it to you…

OK, so it looks like your DPDK distro is compiled with -march=core2, which is an
11-year-old instruction set. That means that SSE4 and AVX/AVX2 instructions are
disabled.

If I get it right, that also means that vector PMDs are disabled (at least i40e), as it
requires SSE4.1 instructions.

All this means that the performance of VPP linked against your dpdk libraries will be
significantly slower.
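
As a quick check (not from the original thread, just a generic sanity test), you can compare
what the host CPU offers against what a -march=core2 build enables:

  # instruction set extensions reported by the host CPU
  grep -o -w -e sse4_1 -e sse4_2 -e avx -e avx2 /proc/cpuinfo | sort -u

  # macros a -march=core2 compile defines; __SSE4_1__ and __AVX2__ will be absent
  gcc -march=core2 -dM -E - </dev/null | grep -E '__(SSE4_1|AVX2)__'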


> 
> I'm going to try your suggested steps and will let you know how it goes.
> 
>> 
>> Thanks,
>> 
>> Damjan
> Thanks,
> Marco
> 
> 
>> 
>> 
>>> 
>>> On 12 Apr 2017, at 11:33, Marco Varlese <marco.varl...@suse.com> wrote:
>>> 
>>> BTW, in case you're wondering which commands I am using to build:
>>> 
>>>> 
>>>> make bootstrap
>>>> make build (using build-release produces the same issue)
>>> 
>>> 
>>> Regards,
>>> Marco
>>> 
>>> On Tue, 2017-04-11 at 09:27 +0200, Marco Varlese wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> I am facing a build issue with the latest VPP and not sure if others have
>>>> seen
>>>> the same? (I'm copying/pasting the errors below)
>>>> 
>>>> It appears to be broken for both "shared dpdk" and using the "in-repo"
>>>> dpdk
>>>> source code. Both compilation mode worked just fine for me using VPP 17.01
>>>> so
>>>> not sure if I have to change anything in the .mk files or build the code
>>>> differently...
>>>> 
>>>> I have to say that since I am very interested in consuming the VPP code
>>>> downstream the "shared mode" compilation option is much more valuable to
>>>> me...
>>>> 
>>>> Any help would be much appreciated.
>>>> 
>>>> 
>>>> When building in shared mode for dpdk I get the following error:
>>>> 
>>>> t -f 'vpp/app/version.c' || echo '/home/abuild/rpmbuild/BUILD/vpp/build-
>>>> data/../src/'`vpp/app/version.c
>>>> [  415s] /home/abuild/rpmbuild/BUILD/vpp/build-
>>>> data/../src/vpp/vnet/main.c:21:29: fatal error: vpp/app/version.h: No such
>>>> file
>>>> or directory
>>>> [  415s]  #include 
>>>> [  415s]  ^
>>>> [  415s] compilation terminated.
>>>> [  415s] make[4]: *** [Makefile:5872: vpp/vnet/bin_vpp-main.o] Error 1
>>>> [  415s] make[4]: *** Waiting for unfinished jobs
>>>> [  415s] /home/abuild/rpmbuild/BUILD/vpp/build-
>>>> data/../src/vpp/app/version.c:17:29: fatal error: vpp/app/version.h: No
>>>> such
>>>> file or directory
>>>> [  415s]  #include 
>>>> [  415s]

[vpp-dev] extras/ in repo

2017-04-18 Thread Damjan Marion (damarion)

The current situation in the repo is quite messy when it comes to the different 
“extras” we have, so I would like to propose the following change:

I would like to add extras/ top level dir with different non-core stuff. For 
example

extras
├── deb
├── docker
├── emacs
├── rpm
├── suse
├── vagrant
└── vim

and then give some relief to the build-root directory and allow it to be what it 
is supposed to be.

Let me know what you think about this proposal…

Thanks,

Damjan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] is MLX4 PMD supported ?

2017-04-18 Thread Damjan Marion (damarion)
Hi Mircea,

MLX4 devices are not supported today mainly because we don’t have any device to 
test with.

To make it work, following changes are needed:

1. dpdk/Makefile
Here you will need to add 2 lines similar to MLX5. Just search for MLX5 and 
duplicate the matching lines so they refer to MLX4 (see the sketch after step 3 below).
- After this is done you will be able to compile dpdk development packages with 
“make dpdk-install-dev DPDK_MLX4_PMD=y”

2. when compiling VPP you need to set "vpp_uses_dpdk_mlx5_pmd=yes”, i.e. “make 
vpp_uses_dpdk_mlx5_pmd=yes pkg-deb”
Both mlx4 and mlx5 use the same libs, so using the existing mlx5 knob should be 
fine; if that bugs you, you can modify build-data/packages/vpp.mk, 
src/configure.ac and src/plugins/dpdk.am (either remove “5” or duplicate lines 
so you have both “4” and “5”)

3. Finally, the most demanding part is to modify the dpdk init code so it recognizes 
those NICs properly. You will likely need to search for MLX5 in the following 
files and create similar code for mlx4. Files are:
src/plugins/dpdk/device/dpdk.h 
src/plugins/dpdk/device/format.c
src/plugins/dpdk/device/init.c
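
Going back to step 1, a minimal version of the two dpdk/Makefile additions (mirroring the
existing MLX5 lines; the exact variable names should be double-checked against the current
Makefile) would look roughly like:

  DPDK_MLX4_PMD ?= n

  $(call set,RTE_LIBRTE_MLX4_PMD,$(DPDK_MLX4_PMD))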

I can help with this if somebody provides access to ubuntu computer with mlx4 
card installed.

Hope this helps,

Damjan



> On 17 Apr 2017, at 18:23, Mircea Orban  wrote:
> 
> Hello,
> 
> I am trying to build VPP for an environment with Mellanox ConnectX-3 Pro 
> NICs, and I encounter issues when I try to run VPP (see dpdk plugin undefined 
> symbol error):
> 
> vlib_plugin_early_init:360: plugin path /usr/lib/vpp_plugins
> load_one_plugin:188: Loaded plugin: acl_plugin.so (Access Control Lists)
> load_one_plugin:147: /usr/lib/vpp_plugins/dpdk_plugin.so: undefined symbol: 
> ibv_fork_init
> load_one_plugin:188: Loaded plugin: flowperpkt_plugin.so (Flow per Packet)
> load_one_plugin:188: Loaded plugin: ila_plugin.so (Identifier-locator 
> addressing for IPv6)
> load_one_plugin:188: Loaded plugin: ioam_plugin.so (Inbound OAM)
> load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:188: Loaded plugin: lb_plugin.so (Load Balancer)
> load_one_plugin:188: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
> on IPv4 Infrastructure (RFC5969))
> load_one_plugin:188: Loaded plugin: memif_plugin.so (Packet Memory Interface 
> (experimetal))
> load_one_plugin:188: Loaded plugin: snat_plugin.so (Network Address 
> Translation)
> Segmentation fault (core dumped)
> 
> I added these lines to the DPDK Makefile (and it builds with no errors):
> 
> DPDK_MLX4_PMD ?= y
> DPDK_MLX4_DEBUG   ?= n
> DPDK_MLX4_SGE_WR_N?= 1
> DPDK_MLX4_MAX_INLINE  ?= 0
> DPDK_MLX4_TX_MP_CACHE  ?= 8
> DPDK_MLX4_SOFT_COUNTERS  ?= 1
> 
>   $(call set,RTE_LIBRTE_MLX4_PMD,$(DPDK_MLX4_PMD))
>   $(call set,RTE_LIBRTE_MLX4_DEBUG,$(DPDK_MLX4_DEBUG))
>   $(call set,RTE_LIBRTE_MLX4_SGE_WR_N,$(DPDK_MLX4_SGE_WR_N))
>   $(call set,RTE_LIBRTE_MLX4_MAX_INLINE,$(DPDK_MLX4_MAX_INLINE))
>   $(call set,RTE_LIBRTE_MLX4_TX_MP_CACHE,$(DPDK_MLX4_TX_MP_CACHE))
>   $(call set,RTE_LIBRTE_MLX4_SOFT_COUNTERS,$(DPDK_MLX4_SOFT_COUNTERS))
> 
> 
> 
> Thanks,
> Mircea
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] whitelist pci pass-through and host interface

2017-04-14 Thread Damjan Marion (damarion)



> On 12 Apr 2017, at 03:57, Yewei Tang via vpp-dev  wrote:
> 
> Hi Friends,
> I have some question regarding pci pass-though for bare metal host interface.
> All following questions are for running vpp on bare metal ubuntu.
> 
> 1. host interface won't showing up.
> As i understand, when i start vpp, it can see all the host interface that are 
> in down state and not in host routing table. This is true if i run vpp in a 
> vagrant box ("show int" shows the down interface of vbox host). But this was 
> not happening when i ran vpp on bare metal ubuntu host. is there any 
> additional setting for bare metal linux?
> 
> 2. Add host interface to vpp by using "create host-interface "
> The "create host-interface" command works fine with veth pair or tap 
> interface. For host physical nic interface, such as eth2, i can add it to 
> vpp. It show up as host-eth2. But it seems cannot passing traffic. Is it true 
> the "create host-interface " command does not suppose to work with physical 
> interface?
> 
> 3. pci passthrough.
> With kernel 4.4.1, if i add pci number to dpdk white list in 
> /etc/vpp/startup.conf and bring up vpp, vpp can see the interface and pass 
> packets. But after upgrade kernel to 4.8, (ubuntu 16.4.02) even i added the 
> interface pci number to white list, vpp can not see the pass through 
> interface anymore. 
> Does vpp dpdk support kernel newer than 4.4.1?
> 

Can you share output of "show pci" command?

> 
> Thanks a lot!
> 
> Yewei
> 
> Some related post on vpp-dev. but i cannot find solution to my questions.
> https://lists.fd.io/pipermail/vpp-dev/2016-March/000235.html
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] whitelist pci pass-through and host interface

2017-04-14 Thread Damjan Marion (damarion)


Sent from my iPhone

> On 12 Apr 2017, at 03:57, Yewei Tang via vpp-dev  wrote:
> 
> Hi Friends,
> I have some question regarding pci pass-though for bare metal host interface.
> All following questions are for running vpp on bare metal ubuntu.
> 
> 1. host interface won't showing up.
> As i understand, when i start vpp, it can see all the host interface that are 
> in down state and not in host routing table. This is true if i run vpp in a 
> vagrant box ("show int" shows the down interface of vbox host). But this was 
> not happening when i ran vpp on bare metal ubuntu host. is there any 
> additional setting for bare metal linux?
> 
> 2. Add host interface to vpp by using "create host-interface "
> The "create host-interface" command works fine with veth pair or tap 
> interface. For host physical nic interface, such as eth2, i can add it to 
> vpp. It show up as host-eth2. But it seems cannot passing traffic. Is it true 
> the "create host-interface " command does not suppose to work with physical 
> interface?
> 
> 3. pci passthrough.
> With kernel 4.4.1, if i add pci number to dpdk white list in 
> /etc/vpp/startup.conf and bring up vpp, vpp can see the interface and pass 
> packets. But after upgrade kernel to 4.8, (ubuntu 16.4.02) even i added the 
> interface pci number to white list, vpp can not see the pass through 
> interface anymore. 
> Does vpp dpdk support kernel newer than 4.4.1?
> 
> 
> Thanks a lot!
> 
> Yewei
> 
> Some related post on vpp-dev. but i cannot find solution to my questions.
> https://lists.fd.io/pipermail/vpp-dev/2016-March/000235.html
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Build failure with latest VPP

2017-04-14 Thread Damjan Marion (damarion)

Marco,

If you want to do downstream packaging and link against shared dpdk, you can 
do it by compiling directly from autotools project. Basically:

cd src/
autoreconf -fis
export CFLAGS=….
./configure —flags
make
make install

Please note that we are intentionally linking against static DPDK libs as want 
to have flexibility
of adding additional patches to dpdk build. Currently we have bunch of patcher 
related to Mellanox ConnectX-5 
which are not available in latest dpdk release.

May I ask what are your distro guidance when it comes to optimization of the 
code for specific 
microarchitectures? Do you need to support all x86_64 systems or just few 
latest generations?

How do you compile DPDK?

Thanks,

Damjan


> On 12 Apr 2017, at 11:33, Marco Varlese  wrote:
> 
> BTW, in case you're wondering which commands I am using to build:
> 
>> make bootstrap
>> make build (using build-release produces the same issue)
> 
> 
> Regards,
> Marco
> 
> On Tue, 2017-04-11 at 09:27 +0200, Marco Varlese wrote:
>> Hi,
>> 
>> I am facing a build issue with the latest VPP and not sure if others have 
>> seen
>> the same? (I'm copying/pasting the errors below)
>> 
>> It appears to be broken for both "shared dpdk" and using the "in-repo" dpdk
>> source code. Both compilation mode worked just fine for me using VPP 17.01 so
>> not sure if I have to change anything in the .mk files or build the code
>> differently...
>> 
>> I have to say that since I am very interested in consuming the VPP code
>> downstream the "shared mode" compilation option is much more valuable to 
>> me...
>> 
>> Any help would be much appreciated.
>> 
>> 
>> When building in shared mode for dpdk I get the following error:
>> 
>> t -f 'vpp/app/version.c' || echo '/home/abuild/rpmbuild/BUILD/vpp/build-
>> data/../src/'`vpp/app/version.c
>> [  415s] /home/abuild/rpmbuild/BUILD/vpp/build-
>> data/../src/vpp/vnet/main.c:21:29: fatal error: vpp/app/version.h: No such
>> file
>> or directory
>> [  415s]  #include 
>> [  415s]  ^
>> [  415s] compilation terminated.
>> [  415s] make[4]: *** [Makefile:5872: vpp/vnet/bin_vpp-main.o] Error 1
>> [  415s] make[4]: *** Waiting for unfinished jobs
>> [  415s] /home/abuild/rpmbuild/BUILD/vpp/build-
>> data/../src/vpp/app/version.c:17:29: fatal error: vpp/app/version.h: No such
>> file or directory
>> [  415s]  #include 
>> [  415s]  ^
>> [  415s] compilation terminated.
>> [  415s] make[4]: *** [Makefile:5900: vpp/app/bin_vpp-version.o] Error 1
>> [  415s] mv -f vpp/app/.deps/bin_vpp-vpe_cli.Tpo vpp/app/.deps/bin_vpp-
>> vpe_cli.Po
>> [  416s] mv -f vpp-api/pneum/.deps/libpneum_la-pneum.Tpo vpp-
>> api/pneum/.deps/libpneum_la-pneum.Plo
>> [  425s] make[4]: Leaving directory '/home/abuild/rpmbuild/BUILD/vpp/build-
>> root/build-vpp-native/vpp'
>> [  425s] make[3]: *** [Makefile:6764: all-recursive] Error 1
>> [  425s] make[3]: Leaving directory '/home/abuild/rpmbuild/BUILD/vpp/build-
>> root/build-vpp-native/vpp'
>> [  425s] make[2]: *** [Makefile:3426: all] Error 2
>> [  425s] make[2]: Leaving directory '/home/abuild/rpmbuild/BUILD/vpp/build-
>> root/build-vpp-native/vpp'
>> [  425s] make[1]: *** [Makefile:699: vpp-build] Error 2
>> [  425s] make[1]: Leaving directory '/home/abuild/rpmbuild/BUILD/vpp/build-
>> root'
>> [  425s] make: *** [Makefile:213: build-release] Error 2
>> [  425s] error: Bad exit status from /var/tmp/rpm-tmp.t3xVux (%build)
>> [  425s] 
>> [  425s] 
>> [  425s] RPM build errors:
>> [  425s] Bad exit status from /var/tmp/rpm-tmp.t3xVux (%build)
>> [  425s] 
>> [  425s] linux-yk3w.suse failed "build vpp.spec" at Tue Apr 11 07:19:21 UTC
>> 2017.
>> [  425s] 
>> 
>> 
>> On the other hand, when building the code using the in-repo dpdk source code 
>> I
>> get the following one:
>> 
>>   CC test.o
>> /usr/lib64/gcc/x86_64-suse-linux/6/../../../../x86_64-suse-linux/bin/ld:
>> /usr/lib64/libmvec_nonshared.a(svml_finite_alias.oS): relocation 
>> R_X86_64_PC32
>> against undefined symbol `_ZGVbN2v_log@@GLIBC_2.22' can not be used when
>> making
>> a shared object; recompile with -fPIC
>> /usr/lib64/gcc/x86_64-suse-linux/6/../../../../x86_64-suse-linux/bin/ld: 
>> final
>> link failed: Bad value
>> collect2: error: ld returned 1 exit status
>> /home/mvarlese/repos/vpp/build-root/build-vpp-native/dpdk/dpdk-
>> 17.02/mk/rte.app.mk:235: recipe for target 'cmdline_test' failed
>> make[9]: *** [cmdline_test] Error 1
>> /home/mvarlese/repos/vpp/build-root/build-vpp-native/dpdk/dpdk-
>> 17.02/mk/rte.subdir.mk:61: recipe for target 'cmdline_test' failed
>> make[8]: *** [cmdline_test] Error 2
>> make[8]: *** Waiting for unfinished jobs
>>   CC resource.o
>> 
>> 
>> Thanks and regards,
>> Marco
>> 
>> ___
>> vpp-dev mailing list
>> vpp-dev@lists.fd.io
>> https://lists.fd.io/mailman/listinfo/vpp-dev
> ___
> vpp-dev mailing list
> 

Re: [vpp-dev] Tap IF Names

2017-03-28 Thread Damjan Marion (damarion)

> On 28 Mar 2017, at 12:15, Kinsella, Ray  wrote:
> 
> +1 to Jon's comments.
> 
> 
> On 24/03/2017 14:07, Pierre Pfister (ppfister) wrote:
>> Hello Jon,
>> 
>> No strong opinion on my side, but I'd just like to notice that there might 
>> be cases where multiple interfaces, in linux, have the same name, if they 
>> are in different network namespaces.
>> VPP could literally control the back-end of thousands of containers' 
>> interfaces, all called eth0.
> 
> Well your backend and your frontend device names are typically named 
> different. You are correct in that frontend device in the container is always 
> eth0, the backend device for each container's eth0 is uniquely named in the 
> default network namespace.

Can somebody come up with the patch proposal?

Thanks!
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] 5798: Simple patch to add checking for deps for RPMs

2017-03-27 Thread Damjan Marion (damarion)
Traveling last week, still processing backlog.

Merged… Thanks….

On 27 Mar 2017, at 16:29, Thomas F Herbert wrote:

What is the outlook for this patch:
https://gerrit.fd.io/r/#/c/5798/
Patch set 2 was submitted on 3/21 to address Damjan's comments.

--Tom

--
Thomas F Herbert
SDN Group
Office of Technology
Red Hat

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VPP project committer nomination: Sergio Gonzales Monroy

2017-03-27 Thread Damjan Marion (damarion)

Hello VPP committers,

I would like to nominate Sergio Gonzales Monroy as a VPP project committer.

History of Sergio’s merged contributions to the VPP project: 

https://gerrit.fd.io/r/#/q/owner:sergio.gonzalez.monroy%2540intel.com+status:merged

shows significant amount of work done on VPP crypto/IPSec code including the 
integration with DPDK CryptoDev.

Current VPP committers please vote (+1, 0, -1) by replying to this email 
(reply-all), no later than 04.04.2017 6.00 AM PST.

Thanks,

Damjan


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vpp-plugins RPM dependency

2017-03-24 Thread Damjan Marion (damarion)

On 23 Mar 2017, at 14:26, Thomas F Herbert wrote:



On 03/22/2017 04:36 PM, Feng Pan wrote:
So this would suggest that VPP by default (that is, by doing 'yum install vpp') 
will not have dpdk support, and vpp-plugins must also be installed to add it, I 
would think dpdk plugin should be either packaged or installed together with 
VPP by default, and can be disabled if desired.

In any case, will deployment model stay this way? I'll need to make changes to 
puppet module to include other packages if that's how it will be going forward. 
Also, dpdk section should be commented out in the config file so we can start 
VPP service using the default config.
The dpdk plugin build process  precipitated major headaches working for 
downstream packaging for Centos as well. I had a patch ready to submit for 
building from a dist tarball as a step toward building from a source rpm but 
that can't be used now.  The main problem is vpp can no longer build from a 
isolated tarball without git.

The out-of-tree build problem is very easily fixable, as I suggested in the review 
comments. It is unlikely to be more than 10 lines of shell script.

Installing the development rpm as part of downstream installation  is not an 
option either.

Installation of the devel rpm is not a mandatory step; you can build vpp without the 
development package installed.

The build process does not support dependency on an external rpm yet such as 
that which is built in rpm_dpdk project. I am thinking out load here but the 
best option for achieving packaging for 17.04 is to build dpdk from the 
tarball, bypass the dpdk rpm and include the upstream dpdk tarball in the srpm. 
Hopefully we can get a better solution figured out by next release over 17.0x 
that will work with the dpdk plugin concept.

I’m not getting what is wrong with the current setup. vpp is a simple autotools 
project and packaging is free to invoke “cd src; ./configure; make; make 
install”, as is done with hundreds of different projects.
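
As an illustration only (not an official spec file), a downstream %build/%install section
could then be as simple as:

  %build
  cd src
  autoreconf -fis
  ./configure
  make %{?_smp_mflags}

  %install
  cd src
  make install DESTDIR=%{buildroot}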








___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vpp-plugins RPM dependency

2017-03-24 Thread Damjan Marion (damarion)

Dear Feng,

We have VPP consumers which would like to use VPP in the containerized 
environment without DPDK so dpdk needs to be separate package.
Regarding your particular problem, it should be fixed with 
https://gerrit.fd.io/r/#/c/5837/ as Dave suggested.

Thanks,

Damjan

On 22 Mar 2017, at 21:36, Feng Pan wrote:

So this would suggest that VPP by default (that is, by doing 'yum install vpp') 
will not have dpdk support, and vpp-plugins must also be installed to add it, I 
would think dpdk plugin should be either packaged or installed together with 
VPP by default, and can be disabled if desired.

In any case, will deployment model stay this way? I'll need to make changes to 
puppet module to include other packages if that's how it will be going forward. 
Also, dpdk section should be commented out in the config file so we can start 
VPP service using the default config.

Thanks
Feng

On Wed, Mar 22, 2017 at 2:46 PM, Ed Warnicke wrote:
Commenting out the dpdk stanza is a great workaround but we may want to look at 
bit more closely at the issue... as installing the vpp project should result in 
an out of the box runnable vpp.

Ed

On Wed, Mar 22, 2017 at 11:44 AM, Dave Barach (dbarach) wrote:
Simply remove the “dpdk” stanza from /etc/vpp/startup.conf if you want to run 
vpp without the dpdk plugin.
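
i.e. delete or comment out the block that looks like this (the PCI address here is just an
example; yours will differ):

  dpdk {
    dev 0000:02:00.0
  }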

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Feng Pan
Sent: Wednesday, March 22, 2017 1:47 PM
To: vpp-dev
Subject: [vpp-dev] vpp-plugins RPM dependency

Hi All,

With latest master builds of VPP (on Centos with rpm repo), it seems like it's 
necessary to install vpp-plugins package for vpp to start, without it, I get 
the following error when running vpp (with default config file):

vlib_plugin_early_init:360: plugin path /usr/lib/vpp_plugins
vlib_call_all_config_functions: unknown input `dpdk  '

Looking at the spec file, vpp package depends on vpp-lib only, so it appears 
that we need to add vpp-plugins to the dependency list too. However, it also 
looks like vpp-plugins depends on vpp package, so I'm trying to figure out what 
the right dependency relationship is :)

Thanks
Feng

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VXLAN multithreading issue

2017-03-23 Thread Damjan Marion (damarion)

I think I know where the problem might be.
Let me come back to you….

On 23 Mar 2017, at 13:08, Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at 
Cisco) wrote:

Hello vpp-dev,

With latest VPP build packages I am observing the issue with VXLAN and LISPGPE 
tunnels during my testing (CSIT and internal Cisco lab - manual testing). I am 
getting the following errors:

vpp# sh err
   Count            Node                  Reason
      10    ip4-udp-lookup      no listener for dst port
      10    ip4-icmp-error      destination unreachable response sent
      10    vxlan4-encap        good packets encapsulated
      10    l2-output           L2 output packets
      10    l2-learn            L2 learn packets
      10    l2-learn            L2 learn hits
      10    l2-input            L2 input packets
      10    l2-flood            L2 flood packets


Encapsulation of traffic into VXLAN is working properly but decapsulation is 
throwing "no listener for dst port"

vpp# sh trace

--- Start of thread 0 vpp_main ---
No packets in trace buffer
--- Start of thread 1 vpp_wk_0 ---
Packet 1

00:04:49:119217: dpdk-input
  FortyGigabitEtherneta/0/0 rx queue 0
  buffer 0x1d288a28: current data 14, length 94, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 1, pkt_len 108
buf_len 2176, data_len 108, ol_flags 0x180, data_off 128, phys_addr 
0x55264900
packet_type 0x291
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without 
extension headers
  RTE_PTYPE_L4_UDP (0x0200) UDP packet
  IP4: 00:00:71:51:52:d7 -> 3c:fd:fe:9d:1a:a0
  UDP: 11.0.0.2 -> 11.0.0.1
tos 0x00, ttl 64, length 94, checksum 0x648d
fragment id 0x
  UDP: 4789 -> 4789
length 74, checksum 0x9c98
00:04:49:119239: ip4-input-no-checksum
  UDP: 11.0.0.2 -> 11.0.0.1
tos 0x00, ttl 64, length 94, checksum 0x648d
fragment id 0x
  UDP: 4789 -> 4789
length 74, checksum 0x9c98
00:04:49:119249: ip4-lookup
  fib 0 dpo-idx 5 flow hash: 0x
  UDP: 11.0.0.2 -> 11.0.0.1
tos 0x00, ttl 64, length 94, checksum 0x648d
fragment id 0x
  UDP: 4789 -> 4789
length 74, checksum 0x9c98
00:04:49:119253: ip4-local
UDP: 11.0.0.2 -> 11.0.0.1
  tos 0x00, ttl 64, length 94, checksum 0x648d
  fragment id 0x
UDP: 4789 -> 4789
  length 74, checksum 0x9c98
00:04:49:119256: ip4-udp-lookup
  UDP: src-port 4789 dst-port 4789 (no listener)
00:04:49:119261: ip4-icmp-error
  UDP: 11.0.0.2 -> 11.0.0.1
tos 0x00, ttl 64, length 94, checksum 0x648d
fragment id 0x
  UDP: 4789 -> 4789
length 74, checksum 0x9c98
00:04:49:119265: ip4-lookup
  fib 0 dpo-idx 2 flow hash: 0x
  ICMP: 11.0.0.1 -> 11.0.0.2
tos 0x00, ttl 255, length 122, checksum 0xa580
fragment id 0x
  ICMP destination_unreachable port_unreachable checksum 0x135b
00:04:49:119265: ip4-rewrite
  tx_sw_if_index 1 dpo-idx 2 : ipv4 via 11.0.0.2 FortyGigabitEtherneta/0/0: 
IP4: 3c:fd:fe:9d:1a:a0 -> 00:00:71:51:52:d7 flow hash: 0x
  IP4: 3c:fd:fe:9d:1a:a0 -> 00:00:71:51:52:d7
  ICMP: 11.0.0.1 -> 11.0.0.2
tos 0x00, ttl 254, length 122, checksum 0xa680
fragment id 0x
  ICMP destination_unreachable port_unreachable checksum 0x135b
00:04:49:119266: FortyGigabitEtherneta/0/0-output
  FortyGigabitEtherneta/0/0
  IP4: 3c:fd:fe:9d:1a:a0 -> 00:00:71:51:52:d7
  ICMP: 11.0.0.1 -> 11.0.0.2
tos 0x00, ttl 254, length 122, checksum 0xa680
fragment id 0x
  ICMP destination_unreachable port_unreachable checksum 0x135b
00:04:49:119268: FortyGigabitEtherneta/0/0-tx
  FortyGigabitEtherneta/0/0 tx queue 1
  buffer 0x1d288a28: current data -28, length 136, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
  IP4: 3c:fd:fe:9d:1a:a0 -> 00:00:71:51:52:d7
  ICMP: 11.0.0.1 -> 11.0.0.2
tos 0x00, ttl 254, length 122, checksum 0xa680
fragment id 0x
  ICMP destination_unreachable port_unreachable checksum 0x135b

--- Start of thread 2 vpp_wk_1 ---
Packet 1

00:04:49:119215: dpdk-input
  FortyGigabitEtherneta/0/1 rx queue 0
  buffer 0x1d292712: current data 0, length 60, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
  PKT MBUF: port 1, nb_segs 1, pkt_len 60
buf_len 2176, data_len 60, ol_flags 0x180, data_off 128, phys_addr 
0x554d8380
packet_type 0x691
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER 

Re: [vpp-dev] R Signal events between graph nodes within different threads

2017-03-09 Thread Damjan Marion (damarion)

I’m trying to understand what the benefit of sending packets to the main core would be.

Even if you send the packet to the main core, which is possible, you will not get much 
benefit from that action as you cannot send the packet to the process node.

Why can't you build the control-plane feature as a separate application which 
programs VPP via the API, instead of trying to insert the control plane into the dataplane?


On 9 Mar 2017, at 09:43, wang.hu...@zte.com.cn wrote:


thanks a lot to xiyun!


which kind of scenario you want to pass the pkt from work thread to main 
thread. Although the pkt is already processed by work thread and still need to 
put it  to main thread for further processing?

What’s your real case☺

//In fact, our use case is that, we wanna  do some control-plane(slow-path) 
things in main thread instead of worker thread, such as dynamic routing 
protocol OSPF/BGP. But there is not a high performance method to do this.

The handoff mechanism which queuing between two different work threads maybe a 
good way, we will deep learning it first.


And  one more question, vlib_process_signal_event can be called in worker 
thread ? No concurrency problem? Is it safe when there are Multiple worker 
thread ?


王辉 wanghui


IT开发工程师 IT Development Engineer
虚拟化南京四部/无线研究院/无线产品经营部 NIV Nanjing Dept. IV/Wireless Product R&D 
Institute/Wireless Product Operation Division

Original mail
From: <xiyun...@intel.com>;
To: <dbar...@cisco.com>; 王辉10067165; <hongjun...@intel.com>; <damar...@cisco.com>;
Cc: <vpp-dev@lists.fd.io>; 顾剑10036178; 赵志刚10017628; 潘凤艳00024606;
Date: 2017-03-09 14:38
Subject: RE: [vpp-dev] R Signal events between graph nodes within different threads

There may be a little confusion. AFAIK, in the original question from ZTE, the key message 
should be: we want to transfer some packets to the main thread to continue 
processing

Is this right, @ZTE wanghui

In which kind of scenario do you want to pass the pkt from the worker thread to the main 
thread? Even though the pkt is already processed by the worker thread, do you still need 
to put it on the main thread for further processing?
What’s your real case☺

Did you consider the handoff mechanism before? But it seems to pkt queuing btw 
two different work threads

Thanks.
Regards

BTW:
In my understanding,  Hongjun and ZTE guys mentions following two ways

(1)vlib_process_signal_event



it seems to just pass the event/message (event type + opaque data) to process 
nodes which belong to the main thread, instead of passing the pkt from the worker thread 
to the main thread.
It is an asynchronous way.


(2)vl_api_rpc_call_main_thread



it is a synchronous method to call an RPC function in the main thread 
context; it also does not seem to be pkt passing between two threads.
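
For reference, the two calls being discussed look roughly like this in C (the node index,
event type and handler names are made-up placeholders, not code from this thread):

  /* illustrative only; node index, event type and handler are placeholders */
  static void
  notify_main_thread (vlib_main_t * vm, uword my_process_node_index,
                      void (*my_handler_fn) (void *arg), u8 * arg, u32 arg_len)
  {
    /* (1) asynchronous: hand an event (type + opaque data) to a process node,
       which runs on the main thread */
    vlib_process_signal_event (vm, my_process_node_index,
                               1 /* event type */ , 0 /* event data */ );

    /* (2) ask the main thread to run a function on the caller's behalf */
    vl_api_rpc_call_main_thread ((void *) my_handler_fn, arg, arg_len);
  }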

Let Dave and Damjan to correct me☺.  Thanks!

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Dave Barach (dbarach)
Sent: Wednesday, March 8, 2017 9:47 PM
To: wang.hu...@zte.com.cn; Ni, Hongjun <hongjun...@intel.com>; Damjan Marion (damarion) <damar...@cisco.com>
Cc: vpp-dev@lists.fd.io; gu.ji...@zte.com.cn; zhao.zhig...@zte.com.cn; pan.feng...@zte.com.cn
Subject: Re: [vpp-dev] R Signal events between graph nodes within different threads

Guys,

Oh, you want the main thread to process packets? Why didn’t you simply ask how 
to do that in the first place?

At worst, a few lines’ worth of code - to enable e.g. dpdk-input in thread-0 - 
might be required. Copying Damjan.

Thanks… Dave

From: wang.hu...@zte.com.cn [mailto:wang.hu...@zte.com.cn]
Sent: Wednesday, March 8, 2017 2:50 AM
To: hongjun...@intel.com
Cc: alaga...@gmail.com; Dave Barach (dbarach) <dbar...@cisco.com>; zhao.zhig...@zte.com.cn; gu.ji...@zte.com.cn; pan.feng...@zte.com.cn; vpp-dev@lists.fd.io
Subject: Re: RE: R[vpp-dev] Signal events between graph nodes within different threads


Yes, I realize that; it could work. Thanks to hongjun.

But vl_api_rpc_call_main_thread will degrade packet-handling performance in the 
worker thread. I don't know if there is a common and high-performance way to do 
this?

And also, I guess vlib_process_signal_event can only be used in the main thread? 
Just my opinion, since I have not gone deep into the signal event code.





王辉 wanghui



IT开发工程师 IT Developme

[vpp-dev] VPP 17.01.1 Release

2017-03-06 Thread Damjan Marion (damarion)


The VPP 17.01.1 release was published on Friday. Artifacts are on the Nexus server.

Thanks,

Damjan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] TCP stack in master

2017-03-02 Thread Damjan Marion (damarion)

In case people didn’t notice, since yesterday we have a full TCP stack 
in master. It is “just” 17 KLOCs.

Thanks to Dave, Florin and all the other folks who participated in the development of 
this great addition to VPP.

Damjan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vpp w/ dpdk-as-plugin

2017-03-02 Thread Damjan Marion (damarion)

This is the startup.conf config which disables the dpdk plugin and effectively makes 
the normal vpp binary act as vpp_lite.

plugins {
  plugin dpdk_plugin.so { disable }
}

I still need to do a few smaller changes before fully deprecating vpp_lite….


> On 2 Mar 2017, at 01:07, Luke, Chris <chris_l...@comcast.com> wrote:
> 
> Ooh, nice
>  
>  
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
> Behalf Of Dave Barach (dbarach)
> Sent: Wednesday, March 1, 2017 18:09
> To: Damjan Marion (damarion) <damar...@cisco.com>
> Cc: vpp-dev@lists.fd.io
> Subject: [vpp-dev] vpp w/ dpdk-as-plugin
>  
> “If you’re reading this message, vpp w/ dpdk-as-a-plugin is now running my 
> home gateway.”
>  
> No issues noted... Nice job, Damjan!
>  
> dbarach@vppgate:~$ telnet 0 5002
> Trying 0.0.0.0...
> Connected to 0.
> Escape character is '^]'.
> _____   _  ___
>  __/ __/ _ \  (_)__| | / / _ \/ _ \
> _/ _// // / / / _ \   | |/ / ___/ ___/
> /_/ /(_)_/\___/   |___/_/  /_/   
>  
> vpp# sh plug
> Plugin path is: /usr/lib/vpp_plugins
>  
>  Plugin   Version
>   1. ioam_plugin.so   17.04-rc0~329-gc3a814b
>   2. ila_plugin.so17.04-rc0~329-gc3a814b
>   3. dpdk_plugin.so   17.04-rc0~329-gc3a814b
>   4. acl_plugin.so17.04-rc0~329-gc3a814b
>   5. flowperpkt_plugin.so 17.04-rc0~329-gc3a814b
>   6. snat_plugin.so   17.04-rc0~329-gc3a814b
>   7. lb_plugin.so 17.04-rc0~329-gc3a814b
>   8. libsixrd_plugin.so   17.04-rc0~329-gc3a814b
> vpp#
>  
> Thanks… Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] MAINTAINERS file

2017-02-28 Thread Damjan Marion (damarion)

I submitted a MAINTAINERS file proposal to Gerrit:

https://gerrit.fd.io/r/#/c/5547/

It is not complete, more additions are expected when people self-nominate.

All people on the list were selected based on their contributions and they have 
accepted to take on this role.

Let me know if there are any issues before we merge…

Thanks,

Damjan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] memif - packet memory interface

2017-02-16 Thread Damjan Marion (damarion)

Looks like I was too optimistic when it comes to the syscalls I was planning to use.
I was not able to get more than 3 Mpps, so I switched to standard shared memory.

After a bit of tuning, I’m getting following results:

broadwell 3.2GHz, TurboBoost disabled:

IXIA - XL710-40G - VPP1 - MEMIF - VPP2 - XL710-40G - IXIA

Both VPP instances are running single-core.
So it is symetrical setup where each VPP is forwarding between physical NIC and 
MEMIF.

With 64B packets, I’m getting 13.6 Mpps aggregate throughput.
With 1500B packets, I’m getting around 29Gbps.

The good thing with this new setup is that both VPPs can run inside unprivileged 
containers.

New code is in gerrit...


> On 14 Feb 2017, at 14:21, Damjan Marion (damarion) <damar...@cisco.com> wrote:
> 
> 
> I got first pings running over new shared memory interface driver.
> Code [1] is still very fragile, but basic packet forwarding works ...
> 
> This interface defines master/slave relationship.
> 
> Some characteristics:
> - slave can run inside un-privileged containers
> - master can run inside container, but it requires global PID namespace and 
> PTRACE capability
> - initial connection is done over the unix socket, so for container 
> networking socket file needs to be mapped into container
> - slave allocates shared memory for descriptor rings and passes FD to master
> - slave is ring producer for both tx and rx, it fills rings with either full 
> or empty buffers
> - master is ring consumer, it reads descriptors and executes memcpy from/to 
> buffer
> - process_vm_readv, process_vm_writev linux system calls are used for copy of 
> data directly between master and slave VM (it avoids 2nd memcpy)
> - process_vm_* system calls are executed once per vector of packets
> - from security perspective, slave doesn’t have access to master memory
> - currently polling-only
> - reconnection should just work - slave runs reconnect process in case when 
> master disappears
> 
> TODO:
> - multi-queue
> - interrupt mode (likely simple byte read/write to file descriptor)
> - lightweight library to be used for non-VPP clients
> - L3 mode ???
> - perf tuning
> - user-mode memcpy - master maps slave buffer memory directly…
> - docs / specification
> 
> At this point I would really like to hear feedback from people,
> specially from the usability side.
> 
> config is basically:
> 
> create memif socket /path/to/unix_socket.file [master|slave]
> set int state memif0 up
> 
> DBGvpp# show interfaces
>  Name   Idx   State  Counter  
> Count
> local00down
> memif01 up
> DBGvpp# show interfaces address
> local0 (dn):
> memif0 (up):
>  172.16.0.2/24
> DBGvpp# ping 172.16.0.1
> 64 bytes from 172.16.0.1: icmp_seq=1 ttl=64 time=18.4961 ms
> 64 bytes from 172.16.0.1: icmp_seq=2 ttl=64 time=18.4282 ms
> 64 bytes from 172.16.0.1: icmp_seq=3 ttl=64 time=26.4333 ms
> 64 bytes from 172.16.0.1: icmp_seq=4 ttl=64 time=18.4255 ms
> 64 bytes from 172.16.0.1: icmp_seq=5 ttl=64 time=14.4133 ms
> 
> Statistics: 5 sent, 5 received, 0% packet loss
> DBGvpp# show interfaces
>  Name   Idx   State  Counter  
> Count
> local00down
> memif01 up   rx packets   
>   5
> rx bytes  
>490
> tx packets
>  5
> tx bytes  
>490
> drops 
>  5
> ip4   
>  5
> 
> 
> 
> 
> [1] https://gerrit.fd.io/r/#/c/5004/
> 
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] memif - packet memory interface

2017-02-14 Thread Damjan Marion (damarion)

I got first pings running over new shared memory interface driver.
Code [1] is still very fragile, but basic packet forwarding works ...

This interface defines master/slave relationship.

Some characteristics:
 - slave can run inside un-privileged containers
 - master can run inside container, but it requires global PID namespace and 
PTRACE capability
 - initial connection is done over the unix socket, so for container networking 
socket file needs to be mapped into container
 - slave allocates shared memory for descriptor rings and passes FD to master
 - slave is ring producer for both tx and rx, it fills rings with either full 
or empty buffers
 - master is ring consumer, it reads descriptors and executes memcpy from/to 
buffer
 - process_vm_readv, process_vm_writev Linux system calls are used to copy data 
directly between the master and slave VM (it avoids a 2nd memcpy); see the sketch 
after this list
 - process_vm_* system calls are executed once per vector of packets
 - from security perspective, slave doesn’t have access to master memory
 - currently polling-only
 - reconnection should just work - slave runs reconnect process in case when 
master disappears
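
A minimal sketch of the copy step mentioned above (illustrative only, not the actual memif
code; the iovec setup for the buffer descriptors is assumed to exist already):

  #define _GNU_SOURCE
  #include <sys/uio.h>

  /* master side: copy a whole vector of packet buffers out of the slave
     process address space with a single system call */
  static ssize_t
  copy_vector_from_slave (pid_t slave_pid, struct iovec *local_iov,
                          struct iovec *remote_iov, unsigned long n_bufs)
  {
    return process_vm_readv (slave_pid, local_iov, n_bufs, remote_iov, n_bufs, 0);
  }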

TODO:
 - multi-queue
 - interrupt mode (likely simple byte read/write to file descriptor)
 - lightweight library to be used for non-VPP clients
 - L3 mode ???
 - perf tuning
 - user-mode memcpy - master maps slave buffer memory directly…
 - docs / specification
 
At this point I would really like to hear feedback from people,
specially from the usability side.

config is basically:

create memif socket /path/to/unix_socket.file [master|slave]
set int state memif0 up

DBGvpp# show interfaces
  Name   Idx   State  Counter  Count
local00down
memif01 up
DBGvpp# show interfaces address
local0 (dn):
memif0 (up):
  172.16.0.2/24
DBGvpp# ping 172.16.0.1
64 bytes from 172.16.0.1: icmp_seq=1 ttl=64 time=18.4961 ms
64 bytes from 172.16.0.1: icmp_seq=2 ttl=64 time=18.4282 ms
64 bytes from 172.16.0.1: icmp_seq=3 ttl=64 time=26.4333 ms
64 bytes from 172.16.0.1: icmp_seq=4 ttl=64 time=18.4255 ms
64 bytes from 172.16.0.1: icmp_seq=5 ttl=64 time=14.4133 ms

Statistics: 5 sent, 5 received, 0% packet loss
DBGvpp# show interfaces
  Name   Idx   State  Counter  Count
local00down
memif01 up   rx packets 
5
 rx bytes   
  490
 tx packets 
5
 tx bytes   
  490
 drops  
5
 ip4
5




[1] https://gerrit.fd.io/r/#/c/5004/


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] libpneum compilation flags

2017-02-13 Thread Damjan Marion (damarion)

On 13 Feb 2017, at 17:11, Gabriel Ganne wrote:

Hi Burt,

Thank you for your input.
I pushed a new version of my commit (https://gerrit.fd.io/r/#/c/4576/) where I 
tried to do things more clearly.

I had a look here 
https://github.com/torvalds/linux/blob/master/arch/arm64/kernel/cacheinfo.c and 
it seems that on arm64, recent kernels should be able to return a correct value, 
which means that some day, they will.
Old ones will fall back to 64 bytes.

Maybe someone who has a Thunder platform can try it in order to see what 
getconf returns.
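
(For reference, the value in question can be queried on the target with:

  getconf LEVEL1_DCACHE_LINESIZE

which prints the detected L1 data cache line size in bytes, or 0 when the kernel does not
expose it.)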

I tried on my ThunderX system, and it returns 0, but the kernel which I’m running 
is old (the one from the SDK).

I am not able to run the standard Ubuntu kernel for arm64 on that system; it just 
freezes very early in the boot process.
As ThunderX is listed as certified [1], I guess I’m doing something wrong….
[1] https://certification.ubuntu.com/hardware/201609-25111/
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Minnowboard Turbot dual-e

2017-01-26 Thread Damjan Marion (damarion)

With regards to the home gateway discussion on the last call, here is the 
dual-GigE version of the Minnowboard Turbot.

http://www.adiengineering.com/products/minnowboard-turbot-duale/

It says availability Q3 2016 but I cannot find any place where it can be 
ordered.
Maybe netgate folks have more details….


Another interesting device:
http://www.up-board.org/upsquared/specifications-up2/

Unfortunately they decided to use Realtek for both GigE ports…. :(
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Error reading from file descriptor 9: Input/output error

2017-01-25 Thread Damjan Marion (damarion)

EAL throws this message to stderr and systemd collects it. There is not much of vpp 
involved here.


On 25 Jan 2017, at 16:00, Kinsella, Ray wrote:

Hi Damjan,

I don't get how the following message is being blurbed to the Syslog

"PCI INTX mask not supported\n"

Is causing the VPP log to run out of space and resulting in ... ?

"file descriptor 9: Input/output error".

On the patch ...

http://dpdk.org/dev/patchwork/patch/11622/

Steve Hemminger's advice looks like the right path to resolve.

Ray K

On 25/01/2017 14:08, Damjan Marion wrote:

This is a long-standing issue in the DPDK e1000 driver, likely caused by bad
e1000 emulation.
Still, bad emulation is not a good excuse for the lack of error message
throttling.

There is even patch submitted to fix this issue:

http://dpdk.org/dev/patchwork/patch/11622/

But it never went upstream…

As Dave suggested, switching to vmxnet3 will solve this issue…

Thanks,

Damjan

On 25 Jan 2017, at 14:54, Andrew Li (zhaoxili) wrote:

Thank you, Dave, and Damjan, that’s clear enough for this
question…We’ll plan to switch to Vmxnet3 then.

- Andrew
*From: *"Dave Barach (dbarach)" 
>
*Date: *Wednesday, 25 January 2017 at 9:37 PM
*To: *"Andrew Li (zhaoxili)" 
>, Damjan Marion 

>
*Cc: *"vpp-dev@lists.fd.io 
"
 >
*Subject: *RE: [vpp-dev] Error reading from file descriptor 9:
Input/output error

Vmxnet3 interfaces are strongly preferred: for performance, as well as
to make this specific issue disappear.

We have a patch floating around which turns off the disk-filling
syslog message, but if you simply switch to vmxnet3 interfaces the
pain will go away without rebuilding images, etc. etc.

HTH... D

*From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On Behalf Of* Andrew Li (zhaoxili)
*Sent:* Wednesday, January 25, 2017 6:51 AM
*To:* Damjan Marion
*Cc:* vpp-dev@lists.fd.io
*Subject:* Re: [vpp-dev] Error reading from file descriptor 9: Input/output error

Hi Damjan,

Yes…and will it hurt anything other than printing this error message?
If so we could move to VMXNET3.

Thanks,
Andrew
*From:* Damjan Marion
*Date:* Wednesday, 25 January 2017 at 6:27 PM
*To:* "Andrew Li (zhaoxili)"
*Cc:* vpp-dev@lists.fd.io
*Subject:* Re: [vpp-dev] Error reading from file descriptor 9: Input/output error



   On 25 Jan 2017, at 05:43, Andrew Li (zhaoxili) wrote:

   Hi vpp-dev,

   I’m encountering this strange issue: VPP keeps generating this
   error message into /var/log/syslog:

   Jan 24 23:28:19 localhost vpp[4749]: EAL: Error reading from file
   descriptor 9: Input/output error
   Jan 24 23:28:19 localhost vpp[4749]: /usr/bin/vpp[4749]: EAL:
   Error reading from file descriptor 9: Input/output error

   And my disk space is being eaten quickly…Haven’t encountered this
   before.

   VPP version(latest master):
   stack@devstack-vpp2:~/src/vpp$ git log
   commit f69ecfe09db52c672ccbe47e714bc9c9a70d5539
   Author: Andrew Yourtchenko
   Date:   Tue Jan 24 15:47:27 2017 +0100

   Does anyone know what this error means? Or how to solve this?
   Looks like a dpdk issue.


Are you using an ESXi VM with e1000 interfaces?






___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] igb_uio -> uio_pci_generic

2017-01-25 Thread Damjan Marion (damarion)

> On 25 Jan 2017, at 06:00, Stephen Hemminger wrote:
> 
> On Tue, 24 Jan 2017 23:05:55 -0500
> Burt Silverman wrote:
> 
>> Hi Damjan,
>> 
>> My understanding is that CONFIG_VFIO_NOIOMMU will never be set in a stock
>> kernel, and you will need to build a custom kernel for that. I understand
>> that with this option, the kernel cannot guarantee that applications are
>> prevented from creating bugs that normally the kernel can guarantee will
>> not occur (outside of a kernel bug.) It therefore violates the fundamental
>> Linux system design. That being said, you may wish to accept the risk for
>> performance reasons and build a custom kernel. The other strange thing
>> would be that MSI or MSI-X style interrupts are not needed for performance.
>> The people who developed them have made a lot of noise about how they came
>> about for performance reasons. I have no direct experience, but to learn
>> that they are not important is a shock.
>> 
>> It seems to me that the Ubuntu 14.04 issue is really a separate one from
>> all of this, although I would imagine that the conclusion to stop
>> supporting it does not change.
>> 
>> Burt
>> 
> 
> The reality is any userspace I/O without IOMMU is insecure and can introduce
> bugs. It was only when changes to UIO were proposed that the UIO maintainer
> realized the problem and would not accept changes.  The VFIO maintainer
> was more enlightened "if you want to hang yourself, and you sign the 
> disclaimer,
> here is a prettier rope”.

Another reality is that on Ubuntu systems we cannot use VFIO without an IOMMU.
So the real choice is between igb_uio and uio_pci_generic. Maybe we can
convince the Ubuntu folks to enable vfio-noiommu in the next LTS, and then we
can reconsider this decision.

Please note that this is purely a packaging problem; we are perfectly fine using
vfio-pci in VPP, folks just need to enable the IOMMU and change one line in the
VPP config file.
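
For reference, that one line is the uio driver selection in the dpdk section of
/etc/vpp/startup.conf; a sketch (the IOMMU also has to be enabled in the
kernel, e.g. with intel_iommu=on on Intel systems):

dpdk {
  uio-driver vfio-pci
}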

> 
> MSI-X allows DPDK applications to be built with a hybrid polling model which
> is better for power and CPU consumption. Unfortunately this is supportable 
> only
> on some drivers, and configurations; plus from my terse reading of the FD.IO 
> code
> it is not possible now. Pure polling is great only if you don't have to ever
> pay for power or CPU cycles. It sucks in virtual environments.

Power consumption is a valid point, and we have had support for interrupt mode
in the codebase for many years. If you look at src/vnet/devices/pci/ixge.c you
will see our old native Niantic driver, which supports interrupt mode and is
able to dynamically switch between polling and interrupt mode based on load.
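
The general idea is to count consecutive empty polls and fall back to interrupt
mode once the queue has been idle long enough. A simplified, self-contained C
sketch (names, thresholds and the simulated traffic are made up for
illustration; this is not the actual ixge.c code):

#include <stdio.h>

#define IDLE_THRESHOLD 4

typedef enum { MODE_POLLING, MODE_INTERRUPT } rx_mode_t;

int
main (void)
{
  /* Simulated per-poll packet counts: a burst of traffic, then idle. */
  int rx_counts[] = { 3, 5, 0, 0, 0, 0, 0, 2 };
  int n_polls = sizeof (rx_counts) / sizeof (rx_counts[0]);
  rx_mode_t mode = MODE_POLLING;
  int idle_polls = 0;

  for (int i = 0; i < n_polls; i++)
    {
      int n_rx = rx_counts[i];
      if (n_rx > 0)
        {
          /* Traffic seen: reset the idle counter and keep polling. */
          idle_polls = 0;
          mode = MODE_POLLING;
        }
      else if (++idle_polls >= IDLE_THRESHOLD)
        {
          /* Idle long enough: arm the interrupt and stop burning CPU. */
          mode = MODE_INTERRUPT;
        }
      printf ("poll %d: rx=%d mode=%s\n", i, n_rx,
              mode == MODE_POLLING ? "polling" : "interrupt");
    }
  return 0;
}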

At the moment when DPDK was integrated into VPP, we had to disable that for
DPDK interfaces, as the PMDs were lacking interrupt mode. Now we can re-enable
it, but the question is what to do when we have a mix of PMDs and only some of
them support interrupt mode.

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] igb_uio -> uio_pci_generic

2017-01-24 Thread Damjan Marion (damarion)

> On 24 Jan 2017, at 18:40, Damjan Marion (damarion) <damar...@cisco.com> wrote:
> 
>> 
>> On 24 Jan 2017, at 18:26, Stephen Hemminger <step...@networkplumber.org> 
>> wrote:
>> 
>> On Tue, 24 Jan 2017 17:14:42 +0000
>> "Damjan Marion (damarion)" <damar...@cisco.com> wrote:
>> 
>>> Is anybody aware of any valid reason why we cannot switch to uio_pci_generic
>>> as default PCI uio driver in ubuntu packages?
>>> 
>>> I think generally people don’t like out-of-tree modules, so as long as we 
>>> are getting
>>> the same service from uio_pci_generic we should use it…
>>> 
>>> Thanks,
>>> 
>>> Damjan
> 
>> uio_pci_generic does not support MSI or MSI-X interrupts, only legacy INTX.
> 
> I know but do we really care?
> 
>> 
>> The preference should always be to use VFIO. Even on systems without IOMMU.
> 
> What is the perf impact?
> 
> Also, I just tried with kernel 4.8 on a Rangeley Atom, and I got:
> 
> [536030.250072] vfio-pci: probe of :00:14.0 failed with error -22
> [536030.253271] vfio-pci: probe of :00:14.0 failed with error -22
> 
> I guess I’m doing something wrong….

This explains:

grep VFIO_NOIO /boot/config-4.8.0-34-generic
# CONFIG_VFIO_NOIOMMU is not set

So vfio is out of the game as the default choice; people can still switch
simply with a one-line change in /etc/vpp/startup.conf.


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] igb_uio -> uio_pci_generic

2017-01-24 Thread Damjan Marion (damarion)

> On 24 Jan 2017, at 18:26, Stephen Hemminger <step...@networkplumber.org> 
> wrote:
> 
> On Tue, 24 Jan 2017 17:14:42 +0000
> "Damjan Marion (damarion)" <damar...@cisco.com> wrote:
> 
>> Is anybody aware of any valid reason why we cannot switch to uio_pci_generic
>> as default PCI uio driver in ubuntu packages?
>> 
>> I think generally people don’t like out-of-tree modules, so as long as we 
>> are getting
>> the same service from uio_pci_generic we should use it…
>> 
>> Thanks,
>> 
>> Damjan

> uio_pci_generic does not support MSI or MSI-X interrupts, only legacy INTX.

I know but do we really care?

> 
> The preference should always be to use VFIO. Even on systems without IOMMU.

What is the perf impact?

Also, I just tried with kernel 4.8 on a Rangeley Atom, and I got:

[536030.250072] vfio-pci: probe of :00:14.0 failed with error -22
[536030.253271] vfio-pci: probe of :00:14.0 failed with error -22

I guess I’m doing something wrong….

Thanks,

Damjan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] igb_uio -> uio_pci_generic

2017-01-24 Thread Damjan Marion (damarion)

Is anybody aware of any valid reason why we cannot switch to uio_pci_generic
as the default PCI uio driver in the Ubuntu packages?

I think generally people don't like out-of-tree modules, so as long as we are
getting the same service from uio_pci_generic, we should use it…

Thanks,

Damjan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] plugin infrastructure changes

2017-01-24 Thread Damjan Marion (damarion)

As discussed on the call, let's continue the discussion on the mailer.

My RFC patch is at: https://gerrit.fd.io/r/#/c/4824/

copy/paste from commit log:

==
This patch replaces the requirement for a vlib_plugin_register function
in the plugin .so file and introduces a new macro:

VLIB_PLUGIN_REGISTER () = {
  .version = "version string",
  .version_required = "required version",
  .default_disabled = 1,
  .early_init = "early_init_function_name",
};

The plugin will not be loaded if .default_disabled is set to 1,
unless it is explicitly enabled in startup.conf.

If .version_required is set, the plugin will not be loaded if there
is a version mismatch between the plugin and vpp. This can be bypassed
by setting "skip-version-check" for the specific plugin.

If the .early_init string is present, the plugin loader will try to resolve
this specific symbol in the plugin namespace and make a function call.

The following startup.conf configuration is added:

plugins {
  path /path/to/plugin/directory
  plugin ila_plugin.so { enable skip-version-check }
  plugin acl_plugin.so { disable }
}

===
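
For illustration, a minimal plugin source file using the proposed macro could
look like the sketch below (the header paths, version strings and the
early-init signature are assumptions based on the description above, not taken
from the patch):

#include <vlib/vlib.h>
#include <vlib/unix/plugin.h>

VLIB_PLUGIN_REGISTER () = {
  .version = "1.0",
  .version_required = "17.04",
  .default_disabled = 1,       /* must be explicitly enabled in startup.conf */
  .early_init = "sample_early_init",
};

/* Resolved by name via .early_init and called by the plugin loader. */
clib_error_t *
sample_early_init (vlib_main_t * vm)
{
  clib_warning ("sample plugin: early init called");
  return 0;
}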


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VPP 17.01 Released

2017-01-20 Thread Damjan Marion (damarion)

The VPP 17.01 release is up. Many thanks to all contributors and testers,
and especially Ed, for helping to roll this release over the finish line.

.rpm and .deb packages are uploaded to the nexus server.

New features in the VPP 17.01:

- Integrated November 2016 DPDK release
- Complete rework of Forwarding Information Base (FIB)
- Performance Improvements
  - Improvements in DPDK input and output nodes
  - Improvements in L2 path
  - Improvements in IPv4 lookup node
- Feature Arcs Improvements
  - Consolidation of the code
  - New feature arcs
- device-input
- interface-output
- DPDK Cryptodev Support
  - Software and Hardware Crypto Support
- DPDK HQoS support
- Simple Port Analyzer (SPAN)
- Bidirectional Forwarding Detection
  - Basic implementation
- IPFIX Improvements
- L2 GRE over IPSec tunnels
- Link Layer Discovery Protocol (LLDP)
- Vhost-user Improvements
  - Performance Improvements
  - Multi-queue
  - Reconnect
- LISP Enhancements
  - Source/Dest control plane support
  - L2 over LISP and GRE
  - Map-Register/Map-Notify/RLOC-probing support
  - L2 API improvements, overall code hardening
- Plugins:
  - New: ACL
  - New: Flow per Packet
  - Improved: SNAT
- Multi-threading
- Flow export
- Doxygen Enhancements
- Luajit API bindings
- API Refactoring
  - file split
  - message signatures
- Python and Scapy based unit testing infrastructure
  - Infrastructure
  - Various tests
- Packet Generator improvements
- TUN/TAP jumbo frames support
- Other various bug fixes and improvements

Thanks,

Damjan
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Plugin for mpls over gre

2017-01-18 Thread Damjan Marion (damarion)

> On 18 Jan 2017, at 22:39, Ed Warnicke wrote:
> 
> Calvin,
> 
> We've had some consumers express interest in MPLS over UDP: 
> https://tools.ietf.org/html/rfc7510
> 
> Would you be interested in working on that?

Or maybe, https://tools.ietf.org/html/rfc2549 . :)

Calvin let us know if you need any help….

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Plugin for mpls over gre

2017-01-18 Thread Damjan Marion (damarion)



> On 18 Jan 2017, at 16:59, Calvin Ference wrote:
> 
> Hey VPP community,
> 
> I was wondering if anyone had coded a plugin to do MPLS over GRE before? I'm
> looking at getting my hands dirty writing a plugin and I was thinking this
> might be a good start, but if someone has already done the work I'll find
> something else.

Glad to see that you're looking to implement some feature.
The bad news is that we already have an MPLS over GRE implementation in the
codebase.

Maybe something else?

Thanks,

Damjan

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
