Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

2020-01-06 Thread Yichen Wang via Lists.Fd.Io
If you want to use vfio-pci, you might want to check:
# dmesg | grep Virtualization
[5.208330] DMAR: Intel(R) Virtualization Technology for Directed I/O
If you don’t see the above, vfio-pci will not work, and the fix is to enable Intel
VT-d in the BIOS.
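
A couple of quick, generic checks (standard paths on recent kernels) to confirm the
IOMMU is actually active before binding to vfio-pci:

# IOMMU/DMAR messages at boot
dmesg | grep -i -e DMAR -e IOMMU
# if this directory is empty, there are no IOMMU groups and vfio-pci cannot claim the NIC
ls /sys/kernel/iommu_groups/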

Also, uio_pci_generic won’t work with i40e; if you want to use UIO you have
to use igb_uio (built in on Ubuntu, and compiled as a kernel module for RHEL/CentOS).
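
For example, a minimal binding sketch using DPDK's dpdk-devbind.py (the PCI address
below is the one from the "show pci" output later in this thread; adjust to your system):

modprobe vfio-pci            # or load igb_uio instead if you go the UIO route
dpdk-devbind.py --status
dpdk-devbind.py -b vfio-pci 0000:3b:00.2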

Hope that helps.

Regards,
Yichen

From:  on behalf of "steven luong via Lists.Fd.Io" 

Reply-To: "Steven Luong (sluong)" 
Date: Monday, January 6, 2020 at 8:32 PM
To: Gencli Liu <18600640...@163.com>, "vpp-dev@lists.fd.io" 

Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

It is likely a resource problem – when VPP requests more descriptors and/or 
TX/RX queues for the NIC than the firmware has, DPDK fails to initialize the 
interface. There are a few ways to figure out what the problem is.

  1.  Bypass VPP and run testpmd with debug options turned on, something like
this:

--log-level=lib.eal,debug --log-level=pmd,debug

  2.  Reduce your RX/TX queues and descriptors to the minimum for the
interface. What do you have in the dpdk section for the NIC, anyway?
  3.  Run VPP with a bare minimum config:

unix { interactive }

I would start with (3) since it is the easiest. I hope DPDK will discover the
NIC in "show hardware" if the interface is already bound to DPDK. If that is the
case, you can proceed to check whether your startup.conf oversubscribes the
descriptors and/or TX/RX queues. If (3) still fails, try (1). It is a bit more
work, but I am sure you’ll figure out how to compile the testpmd app and run it.
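
For what it is worth, a minimal sketch of the kind of dpdk section meant in (2); the
PCI address is the one from the "show pci" output quoted later in this thread, and
the queue/descriptor values are only illustrative:

dpdk {
  dev 0000:3b:00.2 {
    num-rx-queues 1
    num-tx-queues 1
    num-rx-desc 512
    num-tx-desc 512
  }
}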

Steven

From:  on behalf of Gencli Liu <18600640...@163.com>
Date: Monday, January 6, 2020 at 7:22 PM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

Hi Ezpeer:
Thank you for your advice.
I did a test to update the X710's driver (i40e) and the X710's firmware:
i40e's new version: 2.10.19.30
Firmware's new version: 6.80  (Intel deleted the NVM 7.0 and 7.1 version
files because they introduced some serious errors).
(I will try again when Intel republishes NVM 7.1.)
The UIO driver in use is vfio-pci.
Even so, the NIC still shows no driver in "vppctl show pci".
I also switched the UIO driver to uio_pci_generic by modifying vpp.service and
startup.conf; the result is a little different but still not OK.

This is my environment:
[root@localhost i40e]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

[root@localhost i40e]# uname -a
Linux localhost.localdomain 3.10.0-1062.4.1.el7.x86_64 #1 SMP Fri Oct 18 
17:15:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost i40e]# uname -r
3.10.0-1062.4.1.el7.x86_64

[root@localhost i40e]# modinfo i40e
filename:   
/lib/modules/3.10.0-1062.4.1.el7.x86_64/updates/drivers/net/ethernet/intel/i40e/i40e.ko
version:2.10.19.30
license:GPL
description:Intel(R) 40-10 Gigabit Ethernet Connection Network Driver
author: Intel Corporation, 
retpoline:  Y
rhelversion:7.7
srcversion: 9EB781BDF574D047F098566
alias:  pci:v8086d158Bsv*sd*bc*sc*i*
alias:  pci:v8086d158Asv*sd*bc*sc*i*
alias:  pci:v8086d37D3sv*sd*bc*sc*i*
alias:  pci:v8086d37D2sv*sd*bc*sc*i*
alias:  pci:v8086d37D1sv*sd*bc*sc*i*
alias:  pci:v8086d37D0sv*sd*bc*sc*i*
alias:  pci:v8086d37CFsv*sd*bc*sc*i*
alias:  pci:v8086d37CEsv*sd*bc*sc*i*
alias:  pci:v8086d0D58sv*sd*bc*sc*i*
alias:  pci:v8086d0CF8sv*sd*bc*sc*i*
alias:  pci:v8086d1588sv*sd*bc*sc*i*
alias:  pci:v8086d1587sv*sd*bc*sc*i*
alias:  pci:v8086d104Fsv*sd*bc*sc*i*
alias:  pci:v8086d104Esv*sd*bc*sc*i*
alias:  pci:v8086d15FFsv*sd*bc*sc*i*
alias:  pci:v8086d1589sv*sd*bc*sc*i*
alias:  pci:v8086d1586sv*sd*bc*sc*i*
alias:  pci:v8086d1585sv*sd*bc*sc*i*
alias:  pci:v8086d1584sv*sd*bc*sc*i*
alias:  pci:v8086d1583sv*sd*bc*sc*i*
alias:  pci:v8086d1581sv*sd*bc*sc*i*
alias:  pci:v8086d1580sv*sd*bc*sc*i*
alias:  pci:v8086d1574sv*sd*bc*sc*i*
alias:  pci:v8086d1572sv*sd*bc*sc*i*
depends:ptp
vermagic:   3.10.0-1062.4.1.el7.x86_64 SMP mod_unload modversions
parm:   debug:Debug level (0=none,...,16=all) (int)

[root@localhost i40e]# ethtool -i p1p3
driver: i40e
version: 2.10.19.30
firmware-version: 6.80 0x80003c64 1.2007.0
expansion-rom-version:
bus-info: :3b:00.2
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

This is vfio-pci error:
[root@localhost ~]# lsmod | grep vfio
vfio_pci   41412  0
vfio_iommu_type1   22440  0
vfio   32657  3 vfio_iommu_type1,vfio_pci

Re: [vpp-dev] Is VppCom suitable for this scenario

2020-01-06 Thread Satya Murthy
Hi Florin,

Thank you very much for the quick inputs. I have gone through your YouTube video
from KubeCon and it cleared up a lot of my doubts.
You presented it in a very clear manner.

As you rightly pointed out, VppCom would be an overhead for our use case.
All we need is shared-memory communication to send and receive bigger
messages.
Memif was not a candidate for this, since it imposes a message size
restriction of 64K.

In this case, what framework can we use to send/receive messages to and from VPP workers
across shared memory?
Can we use SVM queues directly to get the message into our custom VPP plugin
and process it
(in the case of VPP receiving a message from the control-plane app)?

Is there any example code that already does this? If so, can you please point us
to it.
--
Thanks & Regards,
Murthy


[vpp-dev] vpp assert error when nginx starts with ldp

2020-01-06 Thread jiangxiaoming
VPP crashes when nginx is started with LDP. The VPP code is master
78565f38e8436dae9cd3a891b5e5d929209c87f9.
The crash stack is below. Does anyone have a solution?
> 
> DBGvpp# 0: vl_api_memclnt_delete_t_handler:277: Stale clnt delete index
> 16777215 old epoch 255 cur epoch 0 0:
> /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:320
> (session_get_from_handle) assertion `! pool_is_free
> (smm->wrk[thread_index].sessions, _e)' fails Program received signal
> SIGABRT, Aborted. 0x74a7 in __GI_raise (sig=sig@entry=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:55 55 return INLINE_SYSCALL
> (tgkill, 3, pid, selftid, sig); (gdb) bt #0  0x74a7 in
> __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> #1  0x74a34a28 in __GI_abort () at abort.c:90 #2 
> 0x00407458 in os_panic () at
> /home/dev/code/net-base/build/vpp/src/vpp/vnet/main.c:355 #3 
> 0x7587ad1f in debugger () at
> /home/dev/code/net-base/build/vpp/src/vppinfra/error.c:84 #4 
> 0x7587b0ee in _clib_error (how_to_die=2, function_name=0x0,
> line_number=0, fmt=0x7772b0c8 "%s:%d (%s) assertion `%s' fails") at
> /home/dev/code/net-base/build/vpp/src/vppinfra/error.c:143 #5 
> 0x773da25f in session_get_from_handle (handle=2) at
> /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:320 #6 
> 0x773da330 in listen_session_get_from_handle (handle=2) at
> /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:548 #7 
> 0x773dac6b in app_listener_lookup (app=0x7fffd72f2188,
> sep_ext=0x7fffdc84fc80) at
> /home/dev/code/net-base/build/vpp/src/vnet/session/application.c:122 #8 
> 0x773de10d in vnet_listen (a=0x7fffdc84fc80) at
> /home/dev/code/net-base/build/vpp/src/vnet/session/application.c:979 #9 
> 0x773c33a9 in session_mq_listen_handler (data=0x13007fb89) at
> /home/dev/code/net-base/build/vpp/src/vnet/session/session_node.c:62 #10
> 0x77bb4f8a in vl_api_rpc_call_t_handler (mp=0x13007fb70) at
> /home/dev/code/net-base/build/vpp/src/vlibmemory/vlib_api.c:519 #11
> 0x77bc8dfc in vl_msg_api_handler_with_vm_node (am=0x77dd9e40
> , vlib_rp=0x130021000, the_msg=0x13007fb70,
> vm=0x766c0640 , node=0x7fffdc847000, is_private=0
> '\000') at /home/dev/code/net-base/build/vpp/src/vlibapi/api_shared.c:603 #12
> 0x77b9815c in vl_mem_api_handle_rpc (vm=0x766c0640
> , node=0x7fffdc847000) at
> /home/dev/code/net-base/build/vpp/src/vlibmemory/memory_api.c:748 #13
> 0x77bb3e05 in vl_api_clnt_process (vm=0x766c0640
> , node=0x7fffdc847000, f=0x0) at
> /home/dev/code/net-base/build/vpp/src/vlibmemory/vlib_api.c:326 #14
> 0x7641f1f5 in vlib_process_bootstrap (_a=140736887348176) at
> /home/dev/code/net-base/build/vpp/src/vlib/main.c:1475 #15
> 0x7589aef4 in clib_calljmp () at
> /home/dev/code/net-base/build/vpp/src/vppinfra/longjmp.S:123 #16
> 0x7fffdc2d5ba0 in ?? () #17 0x7641f2fd in vlib_process_startup
> (vm=0x7641fca0 , p=0x7fffdc2d5ca0,
> f=0x) at
> /home/dev/code/net-base/build/vpp/src/vlib/main.c:1497 Backtrace stopped:
> previous frame inner to this frame (corrupt stack?) (gdb) up 5 #5 
> 0x773da25f in session_get_from_handle (handle=2) at
> /home/dev/code/net-base/build/vpp/src/vnet/session/session.h:320 320 return
> pool_elt_at_index (smm->wrk[thread_index].sessions, session_index); (gdb)
> print thread_index $1 = 0 (gdb) info thread Id   Target Id         Frame 3 
> Thread 0x7fffb4e51700 (LWP 101019) "vpp_wk_0" 0x764188e6 in
> vlib_worker_thread_barrier_check () at
> /home/dev/code/net-base/build/vpp/src/vlib/threads.h:425 2    Thread
> 0x7fffb5652700 (LWP 101018) "eal-intr-thread" 0x74afbe63 in
> epoll_wait () at ../sysdeps/unix/syscall-template.S:81 * 1    Thread
> 0x77fd87c0 (LWP 101001) "vpp_main" 0x74a7 in __GI_raise
> (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:55 (gdb) print
> session_index $2 = 2


Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

2020-01-06 Thread steven luong via Lists.Fd.Io
It is likely a resource problem – when VPP requests more descriptors and/or 
TX/RX queues for the NIC than the firmware has, DPDK fails to initialize the 
interface. There are a few ways to figure out what the problem is.

  1.  Bypass VPP and run testpmd with debug options turned on, something like
this:

--log-level=lib.eal,debug --log-level=pmd,debug

  2.  Reduce your RX/TX queues and descriptors to the minimum for the
interface. What do you have in the dpdk section for the NIC, anyway?
  3.  Run VPP with a bare minimum config:

unix { interactive }

I would start with (3) since it is the easiest. I hope DPDK will discover the
NIC in "show hardware" if the interface is already bound to DPDK. If that is the
case, you can proceed to check whether your startup.conf oversubscribes the
descriptors and/or TX/RX queues. If (3) still fails, try (1). It is a bit more
work, but I am sure you’ll figure out how to compile the testpmd app and run it.
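
For reference, a sketch of the kind of testpmd run meant in (1); the core list, PCI
address and queue/descriptor sizes are only illustrative, and some option names vary
between DPDK releases:

testpmd -l 1-2 -n 4 -w 0000:3b:00.2 \
  --log-level=lib.eal,debug --log-level=pmd,debug \
  -- -i --rxq=1 --txq=1 --rxd=512 --txd=512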

Steven

From:  on behalf of Gencli Liu <18600640...@163.com>
Date: Monday, January 6, 2020 at 7:22 PM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

Hi Ezpeer:
Thank you for your advice.
I did a test to update the X710's driver (i40e) and the X710's firmware:
i40e's new version: 2.10.19.30
Firmware's new version: 6.80  (Intel deleted the NVM 7.0 and 7.1 version
files because they introduced some serious errors).
(I will try again when Intel republishes NVM 7.1.)
The UIO driver in use is vfio-pci.
Even so, the NIC still shows no driver in "vppctl show pci".
I also switched the UIO driver to uio_pci_generic by modifying vpp.service and
startup.conf; the result is a little different but still not OK.

This is my environment:
[root@localhost i40e]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

[root@localhost i40e]# uname -a
Linux localhost.localdomain 3.10.0-1062.4.1.el7.x86_64 #1 SMP Fri Oct 18 
17:15:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost i40e]# uname -r
3.10.0-1062.4.1.el7.x86_64

[root@localhost i40e]# modinfo i40e
filename:   
/lib/modules/3.10.0-1062.4.1.el7.x86_64/updates/drivers/net/ethernet/intel/i40e/i40e.ko
version:2.10.19.30
license:GPL
description:Intel(R) 40-10 Gigabit Ethernet Connection Network Driver
author: Intel Corporation, 
retpoline:  Y
rhelversion:7.7
srcversion: 9EB781BDF574D047F098566
alias:  pci:v8086d158Bsv*sd*bc*sc*i*
alias:  pci:v8086d158Asv*sd*bc*sc*i*
alias:  pci:v8086d37D3sv*sd*bc*sc*i*
alias:  pci:v8086d37D2sv*sd*bc*sc*i*
alias:  pci:v8086d37D1sv*sd*bc*sc*i*
alias:  pci:v8086d37D0sv*sd*bc*sc*i*
alias:  pci:v8086d37CFsv*sd*bc*sc*i*
alias:  pci:v8086d37CEsv*sd*bc*sc*i*
alias:  pci:v8086d0D58sv*sd*bc*sc*i*
alias:  pci:v8086d0CF8sv*sd*bc*sc*i*
alias:  pci:v8086d1588sv*sd*bc*sc*i*
alias:  pci:v8086d1587sv*sd*bc*sc*i*
alias:  pci:v8086d104Fsv*sd*bc*sc*i*
alias:  pci:v8086d104Esv*sd*bc*sc*i*
alias:  pci:v8086d15FFsv*sd*bc*sc*i*
alias:  pci:v8086d1589sv*sd*bc*sc*i*
alias:  pci:v8086d1586sv*sd*bc*sc*i*
alias:  pci:v8086d1585sv*sd*bc*sc*i*
alias:  pci:v8086d1584sv*sd*bc*sc*i*
alias:  pci:v8086d1583sv*sd*bc*sc*i*
alias:  pci:v8086d1581sv*sd*bc*sc*i*
alias:  pci:v8086d1580sv*sd*bc*sc*i*
alias:  pci:v8086d1574sv*sd*bc*sc*i*
alias:  pci:v8086d1572sv*sd*bc*sc*i*
depends:ptp
vermagic:   3.10.0-1062.4.1.el7.x86_64 SMP mod_unload modversions
parm:   debug:Debug level (0=none,...,16=all) (int)

[root@localhost i40e]# ethtool -i p1p3
driver: i40e
version: 2.10.19.30
firmware-version: 6.80 0x80003c64 1.2007.0
expansion-rom-version:
bus-info: :3b:00.2
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

This is vfio-pci error:
[root@localhost ~]# lsmod | grep vfio
vfio_pci   41412  0
vfio_iommu_type1   22440  0
vfio   32657  3 vfio_iommu_type1,vfio_pci
irqbypass  13503  2 kvm,vfio_pci
[root@localhost ~]#
[root@localhost ~]# dmesg
[   41.670075] VFIO - User Level meta-driver version: 0.3
[   43.380387] i40e :3b:00.0: removed PHC from p1p1
[   43.583958] vfio-pci: probe of :3b:00.0 failed with error -22
[   43.595876] i40e :3b:00.1: removed PHC from p1p2
[   43.811364] vfio-pci: probe of :3b:00.1 failed with error -22

[root@localhost ~]# cat /usr/lib/systemd/system/vpp.service
[Unit]
Description=Vector Packet Processing Process
After=syslog.target network.target auditd.service

[Service]
ExecStartPre=-/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/vpe-api
#ExecStartPre=-/sbin/modprobe uio_pci_generic

[vpp-dev] confirm

2020-01-06 Thread 自由幻想
hello


Re: [vpp-dev] "vppctl show int" no NIC (just local0) #vpp #vnet

2020-01-06 Thread Gencli Liu
Hi Ezpeer:
Thank you for your advice.
I did a test to update the X710's driver (i40e) and the X710's firmware:
i40e's new version: 2.10.19.30
Firmware's new version: 6.80  (Intel deleted the NVM 7.0 and 7.1 version
files because they introduced some serious errors).
(I will try again when Intel republishes NVM 7.1.)
The UIO driver in use is vfio-pci.
Even so, the NIC still shows no driver in "vppctl show pci".
I also switched the UIO driver to uio_pci_generic by modifying vpp.service and
startup.conf; the result is a little different but still not OK.

This is my environment:
[root@localhost i40e]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

[root@localhost i40e]# uname -a
Linux localhost.localdomain 3.10.0-1062.4.1.el7.x86_64 #1 SMP Fri Oct 18 
17:15:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost i40e]# uname -r
3.10.0-1062.4.1.el7.x86_64

[root@localhost i40e]# modinfo i40e
filename:       
/lib/modules/3.10.0-1062.4.1.el7.x86_64/updates/drivers/net/ethernet/intel/i40e/i40e.ko
version:        2.10.19.30
license:        GPL
description:    Intel(R) 40-10 Gigabit Ethernet Connection Network Driver
author:         Intel Corporation, 
retpoline:      Y
rhelversion:    7.7
srcversion:     9EB781BDF574D047F098566
alias:          pci:v8086d158Bsv*sd*bc*sc*i*
alias:          pci:v8086d158Asv*sd*bc*sc*i*
alias:          pci:v8086d37D3sv*sd*bc*sc*i*
alias:          pci:v8086d37D2sv*sd*bc*sc*i*
alias:          pci:v8086d37D1sv*sd*bc*sc*i*
alias:          pci:v8086d37D0sv*sd*bc*sc*i*
alias:          pci:v8086d37CFsv*sd*bc*sc*i*
alias:          pci:v8086d37CEsv*sd*bc*sc*i*
alias:          pci:v8086d0D58sv*sd*bc*sc*i*
alias:          pci:v8086d0CF8sv*sd*bc*sc*i*
alias:          pci:v8086d1588sv*sd*bc*sc*i*
alias:          pci:v8086d1587sv*sd*bc*sc*i*
alias:          pci:v8086d104Fsv*sd*bc*sc*i*
alias:          pci:v8086d104Esv*sd*bc*sc*i*
alias:          pci:v8086d15FFsv*sd*bc*sc*i*
alias:          pci:v8086d1589sv*sd*bc*sc*i*
alias:          pci:v8086d1586sv*sd*bc*sc*i*
alias:          pci:v8086d1585sv*sd*bc*sc*i*
alias:          pci:v8086d1584sv*sd*bc*sc*i*
alias:          pci:v8086d1583sv*sd*bc*sc*i*
alias:          pci:v8086d1581sv*sd*bc*sc*i*
alias:          pci:v8086d1580sv*sd*bc*sc*i*
alias:          pci:v8086d1574sv*sd*bc*sc*i*
alias:          pci:v8086d1572sv*sd*bc*sc*i*
depends:        ptp
vermagic:       3.10.0-1062.4.1.el7.x86_64 SMP mod_unload modversions
parm:           debug:Debug level (0=none,...,16=all) (int)

[root@localhost i40e]# ethtool -i p1p3
driver: i40e
version: 2.10.19.30
firmware-version: 6.80 0x80003c64 1.2007.0
expansion-rom-version:
bus-info: :3b:00.2
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

This is vfio-pci error:
[root@localhost ~]# lsmod | grep vfio
vfio_pci               41412  0
vfio_iommu_type1       22440  0
vfio                   32657  3 vfio_iommu_type1,vfio_pci
irqbypass              13503  2 kvm,vfio_pci
[root@localhost ~]#
[root@localhost ~]# dmesg
[   41.670075] VFIO - User Level meta-driver version: 0.3
[   43.380387] i40e :3b:00.0: removed PHC from p1p1
[   43.583958] vfio-pci: probe of :3b:00.0 failed with error -22
[   43.595876] i40e :3b:00.1: removed PHC from p1p2
[   43.811364] vfio-pci: probe of :3b:00.1 failed with error -22

[root@localhost ~]# cat /usr/lib/systemd/system/vpp.service
[Unit]
Description=Vector Packet Processing Process
After=syslog.target network.target auditd.service

[Service]
ExecStartPre=-/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/vpe-api
#ExecStartPre=-/sbin/modprobe uio_pci_generic
ExecStartPre=-/sbin/modprobe vfio-pci
ExecStartPre=-/sbin/ifconfig p1p1 down
ExecStartPre=-/sbin/ifconfig p1p2 down
ExecStart=/usr/bin/numactl --cpubind=0 --membind=0 /usr/bin/vpp -c 
/etc/vpp/startup.conf
# ExecStart=/usr/bin/vpp -c /etc/vpp/startup.conf
Type=simple
Restart=on-failure
RestartSec=5s
# Uncomment the following line to enable VPP coredumps on crash
# You still need to configure the rest of the system to collect them, see
# 
https://fdio-vpp.readthedocs.io/en/latest/troubleshooting/reportingissues/reportingissues.html#core-files
# for details
#LimitCORE=infinity

[Install]
WantedBy=multi-user.target

[root@localhost ~]# cat /etc/vpp/startup.conf
...
uio-driver vfio-pci
...

[root@localhost ~]# vppctl show pci | grep XL710
:3b:00.0   0  8086:1572   8.0 GT/s x8                  XL710 40GbE 
Controller          RV: 0x 86
:3b:00.1   0  8086:1572   8.0 GT/s x8                  XL710 40GbE 
Controller          RV: 0x 86
:3b:00.2   0  8086:1572   8.0 GT/s x8  i40e            XL710 40GbE 
Controller          RV: 0x 86
:3b:00.3   0  8086:1572   8.0 GT/s x8  i40e            XL710 40GbE 
Controller          RV: 0x 86

Re: [vpp-dev] vpp19.08 ipsec issue

2020-01-06 Thread Neale Ranns via Lists.Fd.Io


From: Terry 
Date: Tuesday 7 January 2020 at 13:12
To: "Neale Ranns (nranns)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re:Re: [vpp-dev] vpp19.08 ipsec issue

Hi Neale,

My understanding is that the interface GigabitEthernet2/1/0 should only protect 
traffic between 100.0.0.0/24 and 172.168.1.0/24 and let other traffic get 
through.

# ipsec policy add spd 1 inbound priority 10 action protect sa 20 
local-ip-range 100.0.0.1 - 100.0.0.3 remote-ip-range 172.168.1.1 - 172.168.1.3
# ipsec policy add spd 1 outbound priority 10 action protect sa 10 
local-ip-range 100.0.0.1 - 100.0.0.3 remote-ip-range 172.168.1.1 - 172.168.1.3
These two lines define the rules to protect traffic between 100.0.0.0/24 and 
172.168.1.0/24 with SA 10 and SA 20.

# ipsec policy add spd 1 inbound priority 100 protocol 50 action bypass 
local-ip-range 0.0.0.0 - 255.255.255.255 remote-ip-range 0.0.0.0 - 
255.255.255.255
# ipsec policy add spd 1 outbound priority 100 protocol 50 action bypass 
local-ip-range 0.0.0.0 - 255.255.255.255 remote-ip-range 0.0.0.0 - 
255.255.255.255
These two lines define the rules to let all ESP traffic get through.

The packet trace information shows that there are no rules for other traffic 
to get through.
There are four actions for an IPsec policy: bypass, discard, resolve, protect.
In this scenario, if I want the traffic to access the public network, I think the 
action should be BYPASS, and the rule should look like the following:
# ipsec policy add spd 1 outbound priority 90 protocol 0 action bypass 
local-ip-range 0.0.0.0 - 255.255.255.255 remote-ip-range 0.0.0.0 - 
255.255.255.255

That’s correct. The ‘default’ action, in the absence of a hit in the SPD, is to 
drop.

When I add the rule on VPP1 and VPP2, user1 and user2 can ping each other.
But the tunnel is still not working. The VPP1 trace information is as 
follows (user1 pings user2):

vpp# show trace
--- Start of thread 0 vpp_main ---
No packets in trace buffer
--- Start of thread 1 vpp_wk_0 ---
Packet 1

14:10:17:029929: dpdk-input
  GigabitEthernet2/0/0 rx queue 0
  buffer 0x9829a: current data 0, length 98, buffer-pool 0, ref-count 1, 
totlen-nifb 0, trace handle 0x100
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct
  PKT MBUF: port 0, nb_segs 1, pkt_len 98
buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x2f40a700
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 00:0c:29:70:bd:60 -> 00:0c:29:34:7e:8f
  ICMP: 100.0.0.3 -> 172.168.1.3
tos 0x00, ttl 64, length 84, checksum 0x685b
fragment id 0xc09f, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xe0b8
14:10:17:029953: ethernet-input
  frame: flags 0x3, hw-if-index 1, sw-if-index 1
  IP4: 00:0c:29:70:bd:60 -> 00:0c:29:34:7e:8f
14:10:17:029958: ip4-input-no-checksum
  ICMP: 100.0.0.3 -> 172.168.1.3
tos 0x00, ttl 64, length 84, checksum 0x685b
fragment id 0xc09f, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xe0b8
14:10:17:029961: nat44-in2out-worker-handoff
  NAT44_IN2OUT_WORKER_HANDOFF : next-worker 2 trace index 0

Packet 2

14:10:18:055696: dpdk-input
  GigabitEthernet2/0/0 rx queue 0
  buffer 0x982e8: current data 0, length 98, buffer-pool 0, ref-count 1, 
totlen-nifb 0, trace han
dle 0x101
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct
  PKT MBUF: port 0, nb_segs 1, pkt_len 98
buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x2f40ba80
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 00:0c:29:70:bd:60 -> 00:0c:29:34:7e:8f
  ICMP: 100.0.0.3 -> 172.168.1.3
tos 0x00, ttl 64, length 84, checksum 0x684e
fragment id 0xc0ac, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xe650
14:10:18:055719: ethernet-input
  frame: flags 0x3, hw-if-index 1, sw-if-index 1
  IP4: 00:0c:29:70:bd:60 -> 00:0c:29:34:7e:8f
14:10:18:055723: ip4-input-no-checksum
  ICMP: 100.0.0.3 -> 172.168.1.3
tos 0x00, ttl 64, length 84, checksum 0x684e
fragment id 0xc0ac, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xe650
14:10:18:055726: nat44-in2out-worker-handoff
  NAT44_IN2OUT_WORKER_HANDOFF : next-worker 2 trace index 1

--- Start of thread 2 vpp_wk_1 ---
Packet 1

14:10:17:029967: handoff_trace
  HANDED-OFF: from thread 1 trace index 0
14:10:17:029967: nat44-in2out
  NAT44_IN2OUT_FAST_PATH: sw_if_index 1, next index 3, session -1
14:10:17:029971: nat44-in2out-slowpath
  NAT44_IN2OUT_SLOW_PATH: sw_if_index 1, next index 0, session 12
14:10:17:029976: ip4-lookup
  fib 0 dpo-idx 5 flow hash: 0x
  ICMP: 192.168.1.1 -> 172.168.1.3
tos 0x00, ttl 64, length 84, checksum 0x0ab5
fragment id 0xc09f, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x2123
14:10:17:029979: ip4-arp
ICMP: 192.168.1.1 -> 172.168.1.3
  tos 0x00, ttl 64, length 84, checksum 0x0ab5

Re: [vpp-dev] vpp19.08 ipsec issue

2020-01-06 Thread Neale Ranns via Lists.Fd.Io


From: Terry 
Date: Monday 6 January 2020 at 23:51
To: "Neale Ranns (nranns)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re:Re:Re: [vpp-dev] vpp19.08 ipsec issue

[trim]

And when I ping 192.168.1.2 from 100.0.0.3 (user1), the trace information 
is as follows:
Packet 1

00:38:45:983763: handoff_trace
  HANDED-OFF: from thread 1 trace index 0
00:38:45:983763: nat44-in2out
  NAT44_IN2OUT_FAST_PATH: sw_if_index 1, next index 3, session -1
00:38:45:983767: nat44-in2out-slowpath
  NAT44_IN2OUT_SLOW_PATH: sw_if_index 1, next index 0, session 6
00:38:45:983772: ip4-lookup
  fib 0 dpo-idx 3 flow hash: 0x
  ICMP: 192.168.1.1 -> 192.168.1.2

which SPD policy does/should this packet match ?

/neale

tos 0x00, ttl 64, length 84, checksum 0x080c
fragment id 0xaf49, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x8943
00:38:45:983775: ip4-rewrite
  tx_sw_if_index 2 dpo-idx 3 : ipv4 via 192.168.1.2 GigabitEthernet2/1/0: 
mtu:9000 000c29f77626000c29347e990800 flow hash: 0x
  : 000c29f77626000c29347e9908004554af4940003f01090cc0a80101c0a8
  0020: 010208008943ad4e00095427135e8f0c0c001011
00:38:45:983778: ipsec4-output-feature
  spd 1 policy -1
00:38:45:983780: error-drop
  rx:GigabitEthernet2/0/0
00:38:45:983783: drop
  dpdk-input: no error

Packet 2

00:38:47:007175: handoff_trace
  HANDED-OFF: from thread 1 trace index 1
00:38:47:007175: nat44-in2out
  NAT44_IN2OUT_FAST_PATH: sw_if_index 1, next index 3, session -1
00:38:47:007184: nat44-in2out-slowpath
  NAT44_IN2OUT_SLOW_PATH: sw_if_index 1, next index 0, session 6
00:38:47:007193: ip4-lookup
  fib 0 dpo-idx 3 flow hash: 0x
  ICMP: 192.168.1.1 -> 192.168.1.2
tos 0x00, ttl 64, length 84, checksum 0x07f5
fragment id 0xaf60, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xc1e4
00:38:47:007197: ip4-rewrite
  tx_sw_if_index 2 dpo-idx 3 : ipv4 via 192.168.1.2 GigabitEthernet2/1/0: 
mtu:9000 000c29f77626000c29347e990800 flow hash: 0x
  : 000c29f77626000c29347e9908004554af6040003f0108f5c0a80101c0a8
  0020: 01020800c1e4ad4e000a5527135e556a0c001011
00:38:47:007202: ipsec4-output-feature
  spd 1 policy -1
00:38:47:007206: error-drop
  rx:GigabitEthernet2/0/0
00:38:47:007209: drop
  dpdk-input: no error

It looks like there are no rules for the traffic to get through.
When I configure this command:
# set interface ipsec spd GigabitEthernet2/1/0 1
no packets can get through the GigabitEthernet2/1/0 interface.
How can I configure the IPsec policy to protect only the IPsec traffic and leave 
other traffic to normal forwarding?
In general, user1 should be able to reach user2 through the IPsec tunnel and also 
access the public network via NAT on VPP1.






Re: [vpp-dev] FD.io Jenkins/OpenStack Outage

2020-01-06 Thread Vanessa Valderrama
A fix has been implemented. Services have been restored. If you
experience any further issues, please open a ticket at
support.linuxfoundation.org.

Thank you,

Vanessa

On 1/6/20 1:03 PM, Vanessa Valderrama wrote:
> Our OpenStack cloud provider is having issues with a network controller,
> which affects our CI OpenStack infrastructure across all projects.
> We're working with the provider to fix the issue as quickly as possible.
>
> Please feel free to check the status page for additional updates.
>
> https://status.linuxfoundation.org/incidents/g22zdrl0vrfd
>
> Thank you,
> Vanessa
>
>
>


[vpp-dev] FD.io Jenkins/OpenStack Outage

2020-01-06 Thread Vanessa Valderrama
Our OpenStack cloud provider is having issues with a network controller,
which affects our CI OpenStack infrastructure across all projects.
We're working with the provider to fix the issue as quickly as possible.

Please feel free to check the status page for additional updates.

https://status.linuxfoundation.org/incidents/g22zdrl0vrfd

Thank you,
Vanessa





Re: [vpp-dev] FD.io CLang debug build weirdness

2020-01-06 Thread Ray Kinsella
Interesting - Clang interprets the forward declaration as almost a statement of 
intent that the implementation is coming ... 

Ray K

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Benoit
> Ganne (bganne) via Lists.Fd.Io
> Sent: Monday 6 January 2020 12:49
> To: Kinsella, Ray 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] FD.io CLang debug build weirdness
> 
> Hi Ray,
> 
> > Anyone else come across this weirdness in the FD.io VPP Clang Debug
> Build
> 
> Yes, it is fixed in master and 19.08 with
> https://gerrit.fd.io/r/c/vpp/+/22649
> 
> Best
> ben


[vpp-dev] Coverity run FAILED as of 2020-01-06 14:00:24 UTC

2020-01-06 Thread Noreply Jenkins
Coverity run failed today.

The current number of outstanding issues is 2.
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects


Re: [vpp-dev] Contiv VPP - Run time debug log is not printing in vpp prompt

2020-01-06 Thread nidhyanandhan . a
Hi,
Referring to https://wiki.fd.io/view/VPP/Software_Architecture#Format : in 
production images, clib_warnings result in syslog entries.
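
For example, a trivial sketch from a hypothetical plugin (function and variable names
are made up); in a debug image this prints on the console, in a production image it
ends up in syslog:

#include <vppinfra/error.h>

static void
pod_config_failed (unsigned pod_id, int rv)
{
  /* clib_warning: console output in debug images, syslog in production images */
  clib_warning ("contiv: pod %u config failed, rv %d", pod_id, rv);
}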


Re: [vpp-dev] FD.io CLang debug build weirdness

2020-01-06 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hi Ray,

> Anyone else come across this weirdness in the FD.io VPP Clang Debug Build

Yes, it is fixed in master and 19.08 with https://gerrit.fd.io/r/c/vpp/+/22649

Best
ben


[vpp-dev] FD.io CLang debug build weirdness

2020-01-06 Thread Ray Kinsella
Anyone else come across this weirdness in the FD.io VPP Clang Debug Build

: && ccache /usr/lib64/ccache/clang --target=x86_64-linux-gnu 
-Wno-address-of-packed-member -march=corei7 -mtune=corei7-avx -g -O0 
-DCLIB_DEBUG -DFORTIFY_SOURCE=2 -fstack-protector-all -fPIC -Werror 
-flax-vector-conversions -Wno-sometimes-uninitialized   
vcl/CMakeFiles/sock_test_server.dir/sock_test_server.c.o  -o 
bin/sock_test_server  
-Wl,-rpath,/root/src/vpp/build-root/build-vpp_debug-native/vpp/lib 
lib/libvppcom.so.19.04.3 -lpthread lib/libvlibmemoryclient.so.19.04.3 
lib/libsvm.so.19.04.3 lib/libvppinfra.so.19.04.3 -lm -lrt -lpthread && :
/usr/bin/ld: lib/libvppcom.so.19.04.3: undefined reference to 
`vnet_incremental_checksum_fp'
clang-9: error: linker command failed with exit code 1 (use -v to see 
invocation)

Basically what happens is that for anything that includes ip_packet.h but is 
not linked to vnet, CLang tries unsuccessfully to resolve symbol 
vnet_incremental_checksum_fp. It doesn't happen with the release build, I am 
assuming (always dangerous) because some optimization figures out that 
vnet_incremental_checksum_fp is never called and removes the dependency.

Ray K


Re: [vpp-dev] stats and errors

2020-01-06 Thread Ole Troan
Christian,

>>> There are some error counters that aren't errors, then there are statistic 
>>> counters.
>>> 
>>> I'm curious did one come before the other? I ask b/c some nodes count 
>>> non-error statistics using errors (e.g., arp, ipsec, ...) and I'm wondering 
>>> why.
>> 
>> The node errors as implemented in src/vlib/error.[ch] came first.
>> The stats segment is just a representation of these per-node counters. The 
>> stats segment is essentially a KV store,
>> where the node counters are given a key of the form
>> "/err/<node-name>/<counter-name>".
>> 
>> You are absolutely right that some nodes do use this counter infrastructure 
>> for non-error situations,
>> e.g. we have "no error", "encapsulated", "decapsulated" counters in some nodes 
>> and so on.
>> I at some point was thinking of placing these under a different path. Then I 
>> distracted myself by thinking of how all the counters could be represented 
>> in a data model instead (specifically a YANG tree).
> 
> Yeah, I noticed they all get collected under /err. In any case before I 
> noticed that, I had started using them (errors) for a bunch of stats I needed 
> to collect, and then realized this was probably not the right way to do this 
> (I started in the ipsec code which is where I got the idea :). They are a 
> convenient way to do this, which maybe is why they are being used that way in 
> some places.
> 
> I wonder if this could be cleaned up so that "show error" really only shows 
> errors (and is empty if there are no errors). Maybe there could be a simple 
> way to setup positive counters like there is for errors (i.e., the error 
> array in the node registration structure, and counters in the node).

What constitutes an error is somewhat subjective. I see the YANG models 
typically group all counters together as "statistics", and it's up to the user 
to interpret the counters.
E.g. https://tools.ietf.org/html/rfc8343#section-3

That said I wouldn't object to giving the "no error" counters that some nodes 
use a different path.
Perhaps make a generic counter per-node that is used to count the number of 
success packets per-node.
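
For reference, a minimal sketch of how a node wires such counters up today through the
error array (node and counter names are made up; this illustrates the existing
mechanism, not a new path):

#include <vlib/vlib.h>

#define foreach_my_node_counter            \
  _(PROCESSED, "packets processed")        \
  _(ENCAPSULATED, "packets encapsulated")

typedef enum
{
#define _(sym, str) MY_NODE_COUNTER_##sym,
  foreach_my_node_counter
#undef _
    MY_NODE_N_COUNTERS,
} my_node_counter_t;

static char *my_node_counter_strings[] = {
#define _(sym, str) str,
  foreach_my_node_counter
#undef _
};

static uword
my_node_fn (vlib_main_t * vm, vlib_node_runtime_t * rt, vlib_frame_t * f)
{
  /* per-packet work elided; bump the "success" counter once for the frame */
  vlib_node_increment_counter (vm, rt->node_index,
                               MY_NODE_COUNTER_PROCESSED, f->n_vectors);
  return f->n_vectors;
}

VLIB_REGISTER_NODE (my_node) = {
  .function = my_node_fn,
  .name = "my-node",
  .vector_size = sizeof (u32),
  .type = VLIB_NODE_TYPE_INTERNAL,
  .n_errors = MY_NODE_N_COUNTERS,
  .error_strings = my_node_counter_strings,
};

Everything declared this way currently lands under /err/<node-name>/... in the stats
segment, whether or not it is really an error.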

Cheers,
Ole


Re: [vpp-dev] why VAT can not link libvnet.so library ?

2020-01-06 Thread Ole Troan
> Please don’t link vpp_api_test against libvnet.so. Your test plugin – and 
> vpp_api_test by implication – have no use for data plane node 
> implementations, and so on, and so forth.
>  
> Instead, create a client library which contains ip_types_api.c and similar 
> xxx_api.c files, and link the test plugin against it. That’s 100% 
> CMakeLists.txt hacking, won’t break anything, etc.
>  
> Copying Ole for a second opinion.

Yes, agree.
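
For illustration only, the kind of CMakeLists.txt change Dave describes might look
roughly like this (library name and source list are purely illustrative and untested):

add_vpp_library(apitypesclient
  SOURCES
  vnet/ip/ip_types_api.c

  LINK_LIBRARIES vppinfra
)

and then list apitypesclient in the LINK_LIBRARIES of the test plugin (and/or
vpp_api_test) instead of pulling in all of libvnet.so.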

Best regards,
Ole

>  
> D.
>  
> From: Pei, Yulong  
> Sent: Friday, January 3, 2020 11:56 PM
> To: Dave Barach (dbarach) 
> Cc: vpp-dev@lists.fd.io
> Subject: RE: [vpp-dev] why VAT can not link libvnet.so library ?
>  
> Hello Dave,
>  
> I noticed that you are the maintainer of VPP API Test (VAT); could you help me 
> with this issue?
>  
> Best Regards
> Yulong Pei
>  
> From: vpp-dev@lists.fd.io  On Behalf Of Pei, Yulong
> Sent: Tuesday, December 31, 2019 5:47 PM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] why VAT can not link libvnet.so library ?
>  
> Dear VPP-dev,
>  
> I bumped into the issue below after calling the ip_address_encode() 
> function in src/plugins/lb/lb_test.c:
>  
> ./vpp_api_test: symbol lookup error: 
> /root/vpp/build-root/build-vpp-native/vpp/lib/vpp_api_test_plugins/lb_test_plugin.so:
>  undefined symbol: ip_address_encode
>  
> But actually   ip_address_encode() was compiled in libvnet.so,
>  
> vpp/build-root/install-vpp-native/vpp/lib# nm libvnet.so |grep 
> ip_address_encode
> 00623d30 T ip_address_encode
>  
> So I made the changes below in order to let VAT link with libvnet.so, but they 
> did not take effect.
>  
> diff --git a/src/vat/CMakeLists.txt b/src/vat/CMakeLists.txt
> index d512d9c17..81b99bd3d 100644
> --- a/src/vat/CMakeLists.txt
> +++ b/src/vat/CMakeLists.txt
> @@ -16,7 +16,7 @@
> ##
> add_vpp_library(vatplugin
>SOURCES plugin_api.c
> -  LINK_LIBRARIES vppinfra
> +  LINK_LIBRARIES vnet vppinfra
> )
>  
> ##
> @@ -33,6 +33,7 @@ add_vpp_executable(vpp_api_test ENABLE_EXPORTS
>DEPENDS api_headers
>  
>LINK_LIBRARIES
> +  vnet
>vlibmemoryclient
>svm
>vatplugin
>  
>  
> Could you guys help me with this issue?
>  
>  
> Best Regards
> Yulong Pei



[vpp-dev] Is VppCom suitable for this scenario

2020-01-06 Thread Satya Murthy
Hi,

I have a basic doubt about the applicability of the VppCom library for a use case 
that we have, described below.

Use case with the following requirements:
1. The control-plane app needs to communicate with different VPP worker threads.
2. The control-plane app may need to send messages to VPP workers with message sizes 
of up to 1 MB.
3. The control-plane app needs a separate VppCom channel with each worker.

For the above scenario, is VppCom a suitable infrastructure?
Memif imposes a maximum message size of 64 KB, hence we are thinking about 
alternatives.

Please share your inputs on this.
(Also, is there any documentation on the VppCom library?)

--
Thanks & Regards,
Murthy