[vpp-dev] Query on Inner packet Fragmentation and Reassembly

2020-07-01 Thread Satya Murthy
Hi,

We have a use case where we receive packets in a tunnel, and the inner packets
may be fragmented.
If we want to reassemble the inner fragments into one single packet, does VPP
already have a framework that provides this functionality?
If it's already there, we can make use of it.

I looked at the MAP plugin, but I could not find where it reassembles IPv4
fragments and outputs one single packet.

Any inputs/examples please.

--
Thanks & Regards,
Murthy


Re: [vpp-dev] Query on Inner packet Fragmentation and Reassembly

2020-07-01 Thread Klement Sekera via lists.fd.io
Hi Murthy,

Yes, it does. The code is in ip4_sv_reass.c and ip4_full_reass.c. The first one is
shallow virtual reassembly (as in, it recovers the 5-tuple for all fragments without
having to actually reassemble them); the second one is full reassembly.
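
For a plugin that wants to turn this on for a given interface (i.e. whichever
interface sees the decapsulated fragments), a minimal sketch follows; the feature
node names are an assumption - check the VNET_FEATURE_INIT registrations in
ip4_full_reass.c / ip4_sv_reass.c for the exact strings:

#include <vnet/vnet.h>
#include <vnet/feature/feature.h>

/* Enable IPv4 reassembly on the ip4-unicast arc for one interface.
 * is_full != 0 selects full reassembly (rebuilds one packet);
 * otherwise shallow virtual reassembly (5-tuple only, no rebuild). */
static int
enable_ip4_reassembly (u32 sw_if_index, int is_full)
{
  const char *node = is_full ? "ip4-full-reassembly-feature"
                             : "ip4-sv-reassembly-feature";
  return vnet_feature_enable_disable ("ip4-unicast", node, sw_if_index,
                                      1 /* enable */, 0, 0);
}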

Regards,
Klement

> On 1 Jul 2020, at 14:41, Satya Murthy  wrote:
> 
> Hi ,
> 
> We have a use case, where we receive packets in a tunnel, and the inner 
> packet may be fragments.
> If we want to reassemble the inner fragments and get one single packet, does 
> VPP already have a framework that has this functionality. 
> If it's already there, we can make use of it.
> 
> I saw MAP plugin, but I am not able to see the place where it reassembles 
> ipv4 fragments and outputs one single packet.
> 
> Any inputs/examples please.
> 
> -- 
> Thanks & Regards,
> Murthy 



[vpp-dev] failed to init shmem: tcp_echo server not working

2020-07-01 Thread sadhanakesavan
Hi Team,
I am trying to explore TCP in user space in my VM using the VPP host stack.
I installed and brought up VPP with the attached startup1.conf and vpp1.conf.
1. Executed VPP with the startup config:

sudo 
/auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp 
-c /etc/vpp/startup.conf

2. Displayed the interfaces:

-bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vppctl
 show int

Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count

local0 0 down 0/0/0/0

3. -bash-4.2$ sudo
/auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp_echo
 server uri tcp://10.0.0.1/24

vl_api_sock_init_shm_reply_t_handler:363: failed to init shmem

Segmentation fault

Is there anything obvious I am missing?


startup1.conf
Description: Binary data


[vpp-dev] failed to init shmem: tcp_echo server not working

2020-07-01 Thread sadhanakesavan
[Edited Message Follows]

Hi Team,
I am trying to explore TCP in user space in my VM using the VPP host stack.
I installed and brought up VPP with the attached startup1.conf and vpp1.conf.
1. Executed VPP with the startup config:

sudo 
/auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp 
-c /etc/vpp/startup.conf

-bash-4.2$ cat /etc/vpp/vpp1.conf

session enable

create host-interface name vpp1

set int ip address host-vpp1 10.0.0.1/24

set int state host-vpp1 up

2. Displayed the interfaces:

-bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vppctl
 show int

Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count

local0 0 down 0/0/0/0

3. -bash-4.2$ sudo
/auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp_echo
 server uri tcp://10.0.0.1/24

vl_api_sock_init_shm_reply_t_handler:363: failed to init shmem

Segmentation fault

Is there anything obvious I am missing?


startup1.conf
Description: Binary data


Re: [vpp-dev] Query on Inner packet Fragmentation and Reassembly

2020-07-01 Thread Satya Murthy
Thanks a lot, Klement, for this quick info.
This will serve our purpose.

--
Thanks & Regards,
Murthy


[vpp-dev] Vectors/node and packet size

2020-07-01 Thread Jeremy Brown via lists.fd.io
Greetings,

This is my first post to the forum, so if this is not the right place for this 
post please let me know.

I have a question on VPP performance. We are running two test cases, limited to a
single thread and one core in order to reduce as many variables as we can. Between
the two test cases, the only thing that changes is the size of the incoming packet
to VPP.

Using a 64-byte packet, we see vectors/node of ~80. Simply changing the packet size
to 1400 bytes, we see vectors/node fall to ~2.

This is regardless of pps; there seems to be a non-linear decrease in vectors/node
with increasing packet size. I was wondering if anyone has noticed similar behavior.
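
(For reading the tables below: Vectors/Call is simply the Vectors column divided by
the Calls column; e.g. for VirtualFuncEthernet88/10/4-out, 6249981 / 90915 ≈ 68.75
in the 64-byte run versus 5838981 / 2815250 ≈ 2.07 in the 1400-byte run.)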


64-byte packets

Thread 1 vpp_wk_0 (lcore 2)
Time 98.9, average vectors/node 80.35, last 128 main loops 0.00 per node 0.00
  vector rates in 1.2643e5, out 1.2643e5, drop 0.e0, punt 2.0228e-2
             Name                  State        Calls     Vectors  Suspends   Clocks  Vectors/Call
VirtualFuncEthernet88/10/4-out     active       90915     6249981         0   1.06e1         68.75
VirtualFuncEthernet88/10/4-tx      active       90915     6249981         0   4.06e1         68.75
VirtualFuncEthernet88/11/5-out     active       73270     6249981         0   9.27e0         85.30
VirtualFuncEthernet88/11/5-tx      active       73270     6249981         0   4.05e1         85.30
arp-input                          active           2           2         0   3.51e4          1.00
dpdk-input                         polling 1166129337    12499964         0   1.38e4           .01
error-punt                         active           2           2         0   5.56e3          1.00
ethernet-input                     active           2           2         0   1.47e4          1.00
gtpu4-encap                        active       90914     6249980         0   1.01e2         68.75
gtpu4-input                        active       73270     6249981         0   7.29e1         85.30
interface-output                   active           2           2         0   2.20e3          1.00
ip4-input-no-checksum              active      145570    12499962         0   2.22e1         85.87
ip4-load-balance                   active       90914     6249980         0   1.77e1         68.75
ip4-local                          active       73272     6249983         0   2.45e1         85.29
ip4-lookup                         active      218840    18749943         0   3.79e1         85.68
ip4-punt                           active           2           2         0   1.27e3          1.00
ip4-rewrite                        active      236482    18749940         0   2.75e1         79.29
ip4-udp-lookup                     active       73270     6249981         0   2.44e1         85.30

1400-byte packets

Thread 1 vpp_wk_0 (lcore 2)
Time 102.1, average vectors/node 2.37, last 128 main loops 0.00 per node 0.00
  vector rates in 1.1841e5, out 1.1438e5, drop 4.0334e3, punt 1.9588e-2
             Name                  State        Calls     Vectors  Suspends   Clocks  Vectors/Call
VirtualFuncEthernet88/10/4-out     active     2815250     5838981         0   8.18e1          2.07
VirtualFuncEthernet88/10/4-tx      active     2815250     5838981         0   1.25e2          2.07
VirtualFuncEthernet88/11/5-out     active     2765634     5839804         0   8.42e1          2.11
VirtualFuncEthernet88/11/5-tx      active     2765634     5839804         0   2.32e2          2.11
arp-input                          active           9         825         0   2.25e3         91.67
dpdk-input                         polling 1136982388    12089787         0   1.44e4           .01
error-drop                         active      397116      411823         0   1.37e2          1.04
error-punt                         active           2           2         0   5.58e3          1.00
ethernet-input                     active           9         825         0   7.42e1         91.67
gtpu4-encap                        active     2815249     5838980         0   2.21e2          2.07
gtpu4-input                        active     3161920     6249981         0   2.10e2          1.98
interface-output                   active           2

Re: [vpp-dev] Vectors/node and packet size

2020-07-01 Thread Dave Barach via lists.fd.io
In order for the statistics to be accurate, please be sure to do the following:

Start traffic... “clear run”... wait a while to accumulate data... “show run”

Otherwise, the statistics will probably include a huge amount of dead airtime, 
data from previous runs, etc.
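
A minimal example of that sequence from the shell ("clear run" / "show run" are the
usual short forms of "clear runtime" / "show runtime"):

vppctl clear run
# ... let traffic run for e.g. 30 seconds to accumulate data ...
vppctl show run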

HTH... Dave

From: vpp-dev@lists.fd.io  On Behalf Of Jeremy Brown via 
lists.fd.io
Sent: Monday, June 29, 2020 12:22 PM
To: vpp-dev@lists.fd.io; Dany Gregoire 
Subject: [vpp-dev] Vectors/node and packet size


Re: [vpp-dev] failed to init shmem: tcp_echo server not working

2020-07-01 Thread Nathan Skrzypczak
Hi,

You're seeing this issue because vpp_echo is trying to use the old (shared-memory)
connection mechanism. The configuration I typically run with is the following [1],
launching vpp_echo with:
./vpp_echo socket-name /var/run/vpp.sock server uri tcp://10.0.1.1/1234
./vpp_echo socket-name /var/run/vpp.sock client uri tcp://10.0.1.1/1234

Hope this helps

Cheers
-Nathan

[1] /etc/vpp/startup.conf
unix {
  interactive
  log /var/log/vpp/vpp.log
  cli-listen /var/run/vppcli.sock
  exec /etc/vpp/vpp1.conf
}
api-segment { prefix vpp1 }
cpu { workers 0 }
socksvr { socket-name /var/run/vpp.sock }
session {
  evt_qs_memfd_seg
  enable
}
plugins {
  plugin dpdk_plugin.so { disable }
}



Le mer. 1 juil. 2020 à 16:29,  a écrit :

> [Edited Message Follows]
> Hi Team,
> I am trying to explore tcp user space in my vm using vpp hoststack.
> I install and brought up vpp with attached startup1 conf and vpp1 conf.
> 1. executed the vpp with start config :
>
> sudo
> /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp
> -c /etc/vpp/startup.conf
>
> -bash-4.2$ cat /etc/vpp/vpp1.conf
>
> session enable
>
> create host-interface name vpp1
>
> set int ip address host-vpp1 10.0.0.1/24
>
> set int state host-vpp1 up
>
>
> 2.displayed interfaces
>
> -bash-4.2$ sudo
> /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vppctl
> show int
>
>   Name   IdxState  MTU (L3/IP4/IP6/MPLS)
> Counter  Count
>
> local00 down  0/0/0/0
>
>
> 2.-bash-4.2$ sudo
> /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp_echo
> server uri tcp://10.0.0.1/24
>
> vl_api_sock_init_shm_reply_t_handler:363: failed to init shmem
>
> Segmentation fault
>
> Is there anything I am missing obvious?
>
> 
>


Re: [vpp-dev] failed to init shmem: tcp_echo server not working

2020-07-01 Thread sadhanakesavan
Thank you very much. It looks like a version issue - what version are you using? I
built using the master branch.
I tried that, but I am getting this error:

-bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp 
-c /etc/vpp/startup1.conf

vnet_feature_arc_init:280: feature node 'ip6-mfib-forward-lookup' not found 
(after 'vrrp6-accept-owner-input', arc 'ip4-multicast')

tls_init_ca_chain:609: Could not initialize TLS CA certificates

tls_mbedtls_init:644: failed to initialize TLS CA chain

tls_init_ca_chain:874: Could not initialize TLS CA certificates

tls_openssl_init:948: failed to initialize TLS CA chain

clib_mem_create_fd: memfd_create: Invalid argument

session_vpp_event_queues_allocate:1501: failed to initialize queue segment

    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/

DBGvpp# Aborted


Re: [vpp-dev] failed to init shmem: tcp_echo server not working

2020-07-01 Thread Florin Coras
Hi, 

It seems that clib_mem_create_fd is not working in your vm. What kernel are you 
running?

Regards, 
Florin

> On Jul 1, 2020, at 8:33 AM, sadhanakesa...@gmail.com wrote:
> 
> thank you very much, looks like a version issue - what version are you using? 
> i built using the master branch 
> i tried that but i am getting this error :
> -bash-4.2$ sudo 
> /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp 
> -c /etc/vpp/startup1.conf 
> vnet_feature_arc_init:280: feature node 'ip6-mfib-forward-lookup' not found 
> (after 'vrrp6-accept-owner-input', arc 'ip4-multicast')
> tls_init_ca_chain:609: Could not initialize TLS CA certificates
> tls_mbedtls_init:644: failed to initialize TLS CA chain
> tls_init_ca_chain:874: Could not initialize TLS CA certificates
> tls_openssl_init:948: failed to initialize TLS CA chain
> clib_mem_create_fd: memfd_create: Invalid argument
> session_vpp_event_queues_allocate:1501: failed to initialize queue segment
>     _______    _        _   _____  ___
>  __/ __/ _ \  (_)__    | | / / _ \/ _ \
>  _/ _// // / / / _ \   | |/ / ___/ ___/
>  /_/ /____(_)_/\___/   |___/_/  /_/
>
> DBGvpp# Aborted
> 



Re: [vpp-dev] failed to init shmem: tcp_echo server not working

2020-07-01 Thread sadhanakesavan
Also, I tried using use-svm-api on the existing kernel as a workaround to establish
the TCP connection. Any suggestions for a fix?

-bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp_echo
 server uri tcp://10.0.0.1/24 use-svm-api

vl_map_shmem:582: region init fail

connect_to_vlib_internal:413: vl_client_api map rv -2

Timing qconnect:lastbyte

Missing Start Timing Event (qconnect)!

Missing End Timing Event (lastbyte)!

 TX 

0 bytes (0 mbytes, 0 gbytes) in 0.00 seconds

 RX 

0 bytes (0 mbytes, 0 gbytes) in 0.00 seconds



Received close on 0 streams (and 0 Quic conn)

Received reset on 0 streams (and 0 Quic conn)

Sent close on 0 streams (and 0 Quic conn)

Discarded 0 streams (and 0 Quic conn)



Got accept on 0 streams (and 0 Quic conn)

Got connected on 0 streams (and 0 Quic conn)

Failure Return Status: 42

ECHO_FAIL_SHMEM_CONNECT (5): shmem connect failed | ECHO_FAIL_CONNECT_TO_VPP
(16): Couldn't connect to vpp | ECHO_FAIL_MISSING_START_EVENT (41): Expected
event qconnect to happen, but it did not! | ECHO_FAIL_MISSING_END_EVENT (42):
Expected event lastbyte to happen, but it did not!

-bash-4.2$ sudo /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp_echo server uri tcp://10.0.0.1/24 use-svm-a


Re: [vpp-dev] failed to init shmem: tcp_echo server not working

2020-07-01 Thread Florin Coras
As replied on the private thread, vpp_echo needs to know about the API prefix
you've configured (vpp1). Try adding "chroot prefix vpp1" to the list of
options; a possible invocation is shown below.
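
For example, keeping the binary path and URI from the original report, one possible
invocation (option spelling as suggested above) would be:

sudo /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp_echo chroot prefix vpp1 use-svm-api server uri tcp://10.0.0.1/24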

Regards,
Florin

> On Jul 1, 2020, at 10:46 AM, sadhanakesa...@gmail.com wrote:
> 
> also i tried to get use-svm-api in existing kernel asa workaround to 
> establish the tcp ,any fix suggestions?
> 
>  -bash-4.2$ sudo 
> /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp_echo
>  server uri tcp://10.0.0.1/24 use-svm-api
> vl_map_shmem:582: region init fail
> connect_to_vlib_internal:413: vl_client_api map rv -2
> Timing qconnect:lastbyte
> Missing Start Timing Event (qconnect)!
> Missing End Timing Event (lastbyte)!
>  TX 
> 0 bytes (0 mbytes, 0 gbytes) in 0.00 seconds
>  RX 
> 0 bytes (0 mbytes, 0 gbytes) in 0.00 seconds
> 
> Received close on 0 streams (and 0 Quic conn)
> Received reset on 0 streams (and 0 Quic conn)
> Sent close on 0 streams (and 0 Quic conn)
> Discarded 0 streams (and 0 Quic conn)
> 
> Got accept on 0 streams (and 0 Quic conn)
> Got connected on  0 streams (and 0 Quic conn)
>
> Failure Return Status: 42
> ECHO_FAIL_SHMEM_CONNECT (5): shmem connect failed | ECHO_FAIL_CONNECT_TO_VPP 
> (16): Couldn't connect to vpp | ECHO_FAIL_MISSING_START_EVENT (41): Expected 
> event qconnect to happen, but it did not! | ECHO_FAIL_MISSING_END_EVENT (42): 
> Expected event lastbyte to happen, but it did not!-bash-4.2$ sudo 
> /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp_echo
>  server uri tcp://10.0.0.1/24 use-svm-a
> 
> 
> 
> 
> 



[vpp-dev] OpenGrok Question

2020-07-01 Thread Vanessa Valderrama
LF is reviewing our INFRA inventory. We'd like to know whether the
community actively uses OpenGrok.

Thank you,
Vanessa


[vpp-dev] Can one memif have multiple sockets

2020-07-01 Thread RaviKiran Veldanda
Hi Team,
I realized that a memif interface and its corresponding socket are tightly coupled.
For example: memif1 --> /run/vpp/memif1.sock --> application.
We cannot have multiple sockets for memif1 - is my understanding correct?
If memif1 --> /run/vpp/memif1.sock is already mapped, then we cannot also create
memif1 --> /run/vpp/memif2.sock.

Please let me know if my understanding is wrong and we can in fact create multiple
sockets for a single interface.
//Ravi
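
For reference, a minimal sketch of how that binding is usually expressed in the
memif CLI (the socket id, path and interface id below are purely illustrative):

create memif socket id 1 filename /run/vpp/memif1.sock
create interface memif id 0 socket-id 1 master

Each memif interface references exactly one socket id at creation time, which is
the coupling described above.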


[vpp-dev] How to active tx udp checksum offload #dpdk #mellanox

2020-07-01 Thread Raj Kumar
Hi,
I am using VPP stable/2005 code. I want to enable UDP checksum offload for TX, so I
changed the VPP startup config file:

## Disables UDP / TCP TX checksum offload. Typically needed for use
## faster vector PMDs (together with no-multi-seg)
# no-tx-checksum-offload

## Enable UDP / TCP TX checksum offload
## This is the reversed option of 'no-tx-checksum-offload'
enable-tcp-udp-checksum
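
(For context, this option is read from the dpdk stanza of the startup config; a
minimal sketch of that placement:)

dpdk {
  ## existing dpdk parameters (dev ..., etc.) stay as they are
  enable-tcp-udp-checksum
}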

But it is still not activated, even though all of these TX offloads are available.

With VPP 19.05, I was able to activate it by adding the following piece of code in
./plugins/dpdk/device/init.c:

case VNET_DPDK_PMD_CXGBE:
case VNET_DPDK_PMD_MLX4:
case VNET_DPDK_PMD_MLX5:
case VNET_DPDK_PMD_QEDE:
case VNET_DPDK_PMD_BNXT:
  xd->port_type = port_type_from_speed_capa (&dev_info);

  if (dm->conf->no_tx_checksum_offload == 0)
    {
      xd->port_conf.txmode.offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
      xd->port_conf.txmode.offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
      xd->flags |=
        DPDK_DEVICE_FLAG_TX_OFFLOAD |
        DPDK_DEVICE_FLAG_INTEL_PHDR_CKSUM;
    }
  break;

But the above code does not work with VPP 20.05. I think there is a newer
version of DPDK in this release.

Please let me know if I am doing something wrong.

HundredGigabitEthernet12/0/0       2     up   HundredGigabitEthernet12/0/0
Link speed: 100 Gbps
Ethernet address b8:83:03:9e:68:f0
Mellanox ConnectX-4 Family
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg subif rx-ip4-cksum
rx: queues 2 (max 1024), desc 1024 (min 0 max 65535 align 1)
tx: queues 4 (max 1024), desc 1024 (min 0 max 65535 align 1)
pci: device 15b3:1013 subsystem 1590:00c8 address :12:00.00 numa 0
switch info: name :12:00.0 domain id 0 port id 65535
max rx packet len: 65536
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-filter
jumbo-frame scatter timestamp keep-crc rss-hash
rx offload active: ipv4-cksum udp-cksum tcp-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
                   outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso geneve-tnl-tso
                   multi-segs udp-tnl-tso ip-tnl-tso
tx offload active: multi-segs
rss avail:         ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
ipv6-ex ipv6 l4-dst-only l4-src-only l3-dst-only l3-src-only
rss active:        ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
ipv6-ex ipv6
tx burst mode: No MPW + MULTI + TSO + INLINE + METADATA
rx burst mode: Scalar

tx frames ok                                    14311830
tx bytes ok                                 128602546562
rx frames ok                                        1877
rx bytes ok                                       228452

thanks,
-Raj


[vpp-dev] vpp crashes when running "ip punt redirect rx via "

2020-07-01 Thread Chuan Han via lists.fd.io
It seems this commit causes the issue.

https://gerrit.fd.io/r/c/vpp/+/27675


Re: [vpp-dev] failed to init shmem: tcp_echo server not working

2020-07-01 Thread sadhanakesavan
Hi,
I am unable to test whether my server is up. Could I be missing any configuration in
the interfaces / VPP host stack setup?
I configured the vpp1 server with the following commands:

1. /auto/home.nas04/skesavan/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp -c /etc/vpp/startup.conf
2. -bash-4.2$ sudo ./configurevpp1.sh

host-vpp1

host-vpp2

3. cat ./configurevpp1.sh

-bash-4.2$ cat configurevpp1.sh

#!/bin/bash
PATH=$PATH:./build-root/install-vpp_debug-native/vpp/bin/
if [ $USER != "root" ] ; then
    echo "Restarting script with sudo..."
    sudo $0 ${*}
    exit
fi

# delete previous incarnations if they exist
ip link del dev vpp1
ip link del dev vpp2
ip netns del vpp1
ip netns del vpp2

# create namespaces
ip netns add vpp1
ip netns add vpp2

# create and configure 1st veth pair
ip link add name veth_vpp1 type veth peer name vpp1
ip link set dev vpp1 up
ip link set dev veth_vpp1 up netns vpp1
ip netns exec vpp1 \
  bash -c "
    ip link set dev lo up
    ip addr add 172.16.1.2/24 dev veth_vpp1
    ip route add 172.16.2.0/24 via 172.16.1.1
  "

# create and configure 2nd veth pair
ip link add name veth_vpp2 type veth peer name vpp2
ip link set dev vpp2 up
ip link set dev veth_vpp2 up netns vpp2
ip netns exec vpp2 \
  bash -c "
    ip link set dev lo up
    ip addr add 172.16.2.2/24 dev veth_vpp2
    ip route add 172.16.1.0/24 via 172.16.2.1
  "

vppctl create host-interface name vpp1
vppctl create host-interface name vpp2
vppctl set int state host-vpp1 up
vppctl set int state host-vpp2 up
vppctl set int ip address host-vpp1 172.16.1.1/24
vppctl set int ip address host-vpp2 172.16.2.1/24
vppctl ip route add 172.16.1.0/24 via 172.16.1.1 host-vpp1
vppctl ip route add 172.16.2.0/24 via 172.16.2.1 host-vpp2

4.
DBGvpp# sh int host-vpp1

Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count

host-vpp1 1 up 9000/0/0/0

DBGvpp# sh int addr

host-vpp1 (up):

L3 122.16.1.1/24

L3 172.16.1.1/24

local0 (dn):

DBGvpp#

On the vpp client:

1. -bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp2/build-root/install-vpp_debug-native/vpp/bin/vppctl
 -s /run/vpp/cli-vpp2.sock show int addr

host-vpp1 (up):

L3 192.16.1.1/24

host-vpp2 (up):

L3 192.16.2.1/24

local0 (dn):

-bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp2/build-root/install-vpp_debug-native/vpp/bin/vppctl
 -s /run/vpp/cli-vpp2.sock session enable

-bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp2/build-root/install-vpp_debug-native/vpp/bin/vppctl
 -s /run/vpp/cli-vpp2.sock create host-interface name vpp2

host-vpp2

-bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp2/build-root/install-vpp_debug-native/vpp/bin/vppctl
 -s /run/vpp/cli-vpp2.sock set int ip address host-vpp2 10.0.0.2/24

-bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp2/build-root/install-vpp_debug-native/vpp/bin/vppctl
 -s /run/vpp/cli-vpp2.sock set int ip address host-vpp2 172.16.2.1/24

-bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp2/build-root/install-vpp_debug-native/vpp/bin/vppctl
 -s /run/vpp/cli-vpp2.sock set int state host-vpp2 up

-bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp2/build-root/install-vpp_debug-native/vpp/bin/vppctl
 -s /run/vpp/cli-vpp2.sock show int addr

host-vpp1 (up):

L3 192.16.1.1/24

host-vpp2 (up):

L3 192.16.2.1/24

L3 10.0.0.2/24

L3 172.16.2.1/24

2. -bash-4.2$ sudo 
/auto/home.nas04/skesavan/vpp2/build-root/install-vpp_debug-native/vpp/bin/vppctl
 -s /run/vpp/cli-vpp2.sock test echo client uri tcp://122.16.1.1/24

test failed

test echo clients: connect returned: -7

What could be missing?
vpp1 and vpp2 are two CentOS 7 VMs; I have attached the strace output for the vpp2
connect call.


strace-connect-vpp2
Description: Binary data
#!/bin/bash
PATH=$PATH:./vpp2/build-root/install-vpp_debug-native/vpp/bin/
if [ $USER != "root" ] ; then
    echo "Restarting script with sudo..."
    sudo $0 ${*}
    exit
fi

# delete previous incarnations if they exist
ip link del dev vpp1
ip link del dev vpp2
ip netns del vpp1
ip netns del vpp2

# create namespaces
ip netns add vpp1
ip netns add vpp2

# create and configure 1st veth pair
ip link add name veth_vpp1 type veth peer name vpp1
ip link set dev vpp1 up
ip link set dev veth_vpp1 up netns vpp1
ip netns exec vpp1 \
  bash -c "
    ip link set dev lo up
    ip addr add 192.16.1.2/24 dev veth_vpp1
    ip route add 192.16.2.0/24 via 192.16.1.1
  "

# create and configure 2nd veth pair
ip link add name veth_vpp2 type veth peer name vpp2
ip link set dev vpp2 up
ip link set dev veth_vpp2 up netns vpp2
ip netns exec vpp2 \
  bash -c "
    ip link set dev lo up
    ip addr add 192.16.2.2/24 dev veth_vpp2
    ip route add 192.16.1.0/24 via 192.16.2.1
  "

vppctl -s /run/vpp/cli-vpp2.sock create host-interface name vpp1
vppctl -s /run/vpp/cli-vpp2.sock create host-interface name vpp2
vppctl -s /run/vpp/cli-vpp2.sock set int state host-vpp1 up