Re: [vpp-dev] Based on the VPP to Nginx testing #ngnix #vpp

2019-12-06 Thread Florin Coras
Hi Lin, 

I don’t see anything obviously wrong. 

What is your vcl.conf? Also, could you check the status of your nginx workers in 
vpp by doing “show app” and then “show app <app index>”, where the index is the one 
associated with your nginx app (if no other app is attached, it should be 1). 

Here’s some example output: 

DBGvpp# sh app
Index  Name                 Namespace
0      tls                  default
1      ldp-83053-app[shm]   default
DBGvpp# sh app 1
app-name ldp-83053-app[shm] app-index 1 ns-index 0 seg-size 38.15m
rx-fifo-size 97.66k tx-fifo-size 97.66k workers:
  wrk-index 1 app-index 1 map-index 0 api-client-index 0
  wrk-index 2 app-index 1 map-index 1 api-client-index 256
  wrk-index 3 app-index 1 map-index 2 api-client-index 512
  wrk-index 4 app-index 1 map-index 3 api-client-index 768
  wrk-index 5 app-index 1 map-index 4 api-client-index 1024 

Here nginx is registered into the session layer by ldp as “ldp-83053-app” (83053 
is nginx’s pid) and it has 4 workers. 
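For reference, nginx typically ends up attached this way by launching it under 
VPP's LD_PRELOAD shim, roughly like this (the library and config paths below are 
illustrative, not taken from your setup):

```
# Sketch: run nginx via VPP's LDP shim so it registers with the session layer.
# Paths are examples only; adjust to your install.
VCL_CONFIG=/etc/vpp/vcl.conf \
LD_PRELOAD=/usr/lib/libvcl_ldpreload.so \
nginx -g 'daemon off;'
```

If the attach worked, the app should then show up in “show app” as above.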

Regards, 
Florin

> On Dec 6, 2019, at 12:47 AM, lin.yan...@zte.com.cn wrote:
> 
> Hi Florin,
> I have modified some configuration items of startup.conf and nginx.conf, but 
> the results are still the same.
> The nginx logs are attached: <捕获1.PNG>
> 
> The configuration files are in the attachment.
> I don't know what went wrong. Can you help me analyze it?
> Thanks,
> Yang.L
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14821): https://lists.fd.io/g/vpp-dev/message/14821
> Mute This Topic: https://lists.fd.io/mt/64501057/675152
> Mute #vpp: https://lists.fd.io/mk?hashtag=vpp&subid=1480544
> Mute #ngnix: https://lists.fd.io/mk?hashtag=ngnix&subid=1480544
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] How to configure network between different namespaces using hoststack

2019-12-06 Thread Florin Coras
Hi Hanlin, 

Inline. 

> On Dec 5, 2019, at 7:00 PM, wanghanlin  wrote:
> 
> Hi Florin,
> Okay, regarding the first question, the following is the detailed use case:
> I have one 82599 nic in my Linux host. I allocate two VF interfaces 
> through SR-IOV: one VF is placed into a Linux namespace N1 and assigned IP 
> address 192.168.1.2/24, and the other VF is placed into VPP.  
> I have three applications (call them APP1, APP2, APP3) communicating with 
> each other, and each application must see the real source IP address (not 
> 0.0.0.0) after accepting a connection request.
> APP1 runs in Linux namespace N1 and uses IP address 192.168.1.2/24. APP2 runs in 
> Linux namespace N2 and uses IP address 192.168.1.3/24. APP3 runs in Linux 
> namespace N3 and uses IP address 192.168.1.4/24.  
> And finally, APP2 and APP3 need to run based on LDP.
> 
> Let's summarize:
> APP1, N1, 192.168.1.2/24, outside VPP
> APP2, N2, 192.168.1.3/24, inside VPP
> APP3, N3, 192.168.1.4/24, inside VPP


FC: I assume N2 and N3 are mapped to app namespaces from VPP perspective. 
Additionally, those two prefixes, i.e., 192.168.1.3/24 and 192.168.1.4/24, do 
not need to be configured on interfaces part of N2 and N3 respectively. 

Then, from vpp's perspective, APP2 and APP3 are “locally attached” and APP1 is 
“remote”. So, from my perspective, there are at least two different networks; 
APP2 and APP3 could be in the same network or in different ones.

For instance, you could assign 192.168.1.2/25 to N1 and then leave 
192.168.1.128/25 to vpp for N2 and N3. Within vpp you have two options:
- add two interfaces, say intN2 and intN3, with IPs 192.168.1.129/32 and 
192.168.1.130/32, and associate the N2 and N3 app namespaces to those interfaces 
(not the fibs). Whenever initiating connections, APP2 and APP3 will pick up the 
IPs of the interfaces associated with their respective app namespaces. 
- add one interface intN with IP 192.168.1.129/25 and associate both namespaces 
to it. If you need APP2 to use 192.168.1.129 and APP3 192.168.1.130, then 
you’ll need your apps to call bind before connecting (I haven’t tested this but I 
think it should work). 

The above assumes APP2 and APP3 map to different app namespaces. If you want to 
use the same app namespace, to be able to use cut-through connections, then 
only option 2 works. Additionally, you need the two apps to attach with both 
local and global scope set. 
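A rough CLI sketch of the first option (the interface names, namespace ids, 
secrets and sw_if_index values below are invented for illustration; untested):

```
# Option 1 sketch: one interface per app namespace
set interface ip address intN2 192.168.1.129/32
set interface ip address intN3 192.168.1.130/32
set interface state intN2 up
set interface state intN3 up
app ns add id ns-n2 secret 1234 sw_if_index 1
app ns add id ns-n3 secret 5678 sw_if_index 2
```

APP2 and APP3 would then select their namespaces via the matching 
namespace-id/secret pair in their vcl.conf files.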

Hope this helps!

Regards, 
Florin

> 
> Then, my question is how to configure 192.168.1.3/24 and 192.168.1.4/24 in 
> VPP?
> 
> Thanks & Regards,
> Hanlin
> 
> On 12/6/2019 03:56, Florin Coras wrote: 
> Hi Hanlin, 
> 
> Inline.
> 
>> On Dec 4, 2019, at 1:59 AM, wanghanlin wrote:
>> 
>> Hi Florin,
>> 
>> Thanks for your patient reply.  Still I have some doubt inline.
>> 
>> On 11/30/2019 02:47, Florin Coras wrote: 
>> Hi Hanlin, 
>> 
>> Inline. 
>> 
>>> On Nov 29, 2019, at 7:12 AM, wanghanlin wrote:
>>> Hi Florin,
>>> Thanks for your reply.
>>> I'm just considering a very simple use case: some apps in different containers 
>>> communicate through VPP, in a plain L2 bridge domain.  
>>> Without hoststack, we can add some host-interfaces to one bridge domain 
>>> and assign IP addresses to the veth interfaces in the containers. In addition, 
>>> a physical nic is also added to the same bridge domain to communicate with 
>>> other hosts.
>>> But with hoststack, things seem complicated because we have to assign IP 
>>> addresses inside VPP.  
>> 
>> FC: Yes, with host stack transport protocols are terminated in vpp, 
>> therefore the interfaces must have IPs. Do you need network access to the 
>> container’s linux stack for other applications, i.e., do you need IPs in the 
>> container as well? Also, can’t you give the interfaces /32 IPs?
>> 
>> Hanlin: I don't need access to the container's linux stack now; I think I can 
>> create another host-interface with another IP if needed.  Also, if I give the 
>> interfac

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-06 Thread Florin Coras
Hi Dom, 

Great to see progress! More inline. 

> On Dec 6, 2019, at 10:21 AM, dch...@akouto.com wrote:
> 
> Hi Florin,
> 
> Some progress, at least with the built-in echo app, thank you for all the 
> suggestions so far! By adjusting the fifo-size and testing in half-duplex I 
> was able to get close to 5 Gbps between the two openstack instances using the 
> built-in test echo app:
> 
> vpp# test echo clients gbytes 1 no-return fifo-size 100 uri 
> tcp://10.0.0.156/

FC: The cli for the echo apps is a bit confusing. Whatever you pass above is 
left-shifted by 10 (multiplied by 1024), which is why I suggested using 4096 
(~4MB). You can also use larger values, but above you are asking for ~1GB :-)
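To make the scaling concrete, here is a trivial sketch of the conversion 
(assuming the multiply-by-1024 behavior described above):

```python
# The echo-app CLI multiplies the given fifo-size by 1024 (left shift by 10).
def fifo_bytes(cli_value):
    return cli_value << 10

print(fifo_bytes(4096))     # 4194304 bytes, i.e. ~4 MB
print(fifo_bytes(1000000))  # 1024000000 bytes, i.e. ~1 GB
```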

> 1 three-way handshakes in .26 seconds 3.86/s
> Test started at 745.163085
> Test finished at 746.937343
> 1073741824 bytes (1024 mbytes, 1 gbytes) in 1.77 seconds
> 605177784.33 bytes/second half-duplex
> 4.8414 gbit/second half-duplex
> 
> I need to get closer to 10 Gbps but at least there is good proof that the 
> issue is related to configuration / tuning. So, I switched back to iperf 
> testing with VCL, and I'm back to 600 Mbps, even though I can confirm that 
> the fifo sizes match what is configured in vcl.conf (note that in this test 
> run I changed that to 8 Mb each for rx and tx from the previous 16, but 
> results are the same when I use 16 Mb). I'm obviously missing something in 
> the configuration but I can't imagine what that might be. Below is my exact 
> startup.conf, vcl.conf and output from show session from this iperf run to 
> give the full picture, hopefully something jumps out as missing in my 
> configuration. Thank you for your patience and support with this, much 
> appreciated!

FC: Not entirely sure what the issue is, but some things can be improved. More 
below. 

> 
> [root@vpp-test-1 centos]# cat vcl.conf
> vcl {
>   rx-fifo-size 800
>   tx-fifo-size 800
>   app-scope-local
>   app-scope-global
>   api-socket-name /tmp/vpp-api.sock
> }

FC: This looks okay.

> 
> [root@vpp-test-1 centos]# cat /etc/vpp/startup.conf
> unix {
>   nodaemon
>   log /var/log/vpp/vpp.log
>   full-coredump
>   cli-listen /run/vpp/cli.sock
>   gid vpp
>   interactive
> }
> dpdk {
>   dev :00:03.0{
>   num-rx-desc 65535
>   num-tx-desc 65535

FC: Not sure about this. I don’t have any experience with vhost interfaces, but 
for XL710s I typically use 256 descriptors. It might be too low if you start 
noticing lots of rx/tx drops with “show int”. 

>   }
> }
> session { evt_qs_memfd_seg }
> socksvr { socket-name /tmp/vpp-api.sock }
> api-trace {
>   on
> }
> api-segment {
>   gid vpp
> }
> cpu {
> main-core 7
> corelist-workers 4-6
> workers 3

FC: For starters, could you try this out with only 1 worker, since you’re 
testing with 1 connection. 

Also, did you try pinning iperf with taskset to a core on the same numa node as 
your vpp workers, in case you have multiple numas? Check your cpu-to-numa 
distribution with lscpu.  

You may want to pin iperf even if you have only one numa, just to be sure it 
won’t be scheduled by mistake on the cores vpp is using. 
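For example (the core number and iperf flags below are illustrative; adjust to 
your topology):

```
# Check the core-to-numa layout first
lscpu | grep -i numa
# Pin the iperf3 client to a free core on the same numa node as the vpp workers
taskset -c 5 iperf3 -c 10.0.0.156 -p 5201
```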

> }
> buffers {
> ## Increase number of buffers allocated, needed only in scenarios with
> ## large number of interfaces and worker threads. Value is per numa 
> node.
> ## Default is 16384 (8192 if running unpriviledged)
> buffers-per-numa 128000

FC: For simple testing I only use 16k, but this value actually depends on the 
number of rx/tx descriptors you have configured. 

>  
> ## Size of buffer data area
> ## Default is 2048
> default data-size 8192

FC: Are you trying to use jumbo buffers? You need to add to the tcp stanza, 
i.e., tcp { mtu  }. But for starters don’t modify the 
buffer size, just to get an idea of where performance is without this. 

Afterwards, as Jerome suggested, you may want to try tso by enabling it for 
tcp, i.e., tcp { tso } in startup.conf and enabling tso for the nic by adding 
“tso on” to the nic’s dpdk stanza (if the nic actually supports it). You don’t 
need to change the buffer size for that. 
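Putting that together, the startup.conf additions might look like this (the 
device address is a placeholder; only applicable if the NIC actually supports 
TSO):

```
tcp { tso }
dpdk {
  dev 0000:00:03.0 {
    tso on
  }
}
```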

> }
> 
> vpp# sh session verbose 2
> Thread 0: no sessions
> [1:0][T] 10.0.0.152:41737->10.0.0.156:5201ESTABLISHED
>  index: 0 flags:  timers:
>  snd_una 124 snd_nxt 124 snd_una_max 124 rcv_nxt 5 rcv_las 5
>  snd_wnd 7999488 rcv_wnd 7999488 rcv_wscale 10 snd_wl1 4 snd_wl2 124
>  flight size 0 out space 4413 rcv_wnd_av 7999488 tsval_recent 12893009
>  tsecr 10757431 tsecr_last_ack 10757431 tsval_recent_age 1995 snd_mss 1428
>  rto 200 rto_boff 0 srtt 3 us 3.887 rttvar 2 rtt_ts 0. rtt_seq 124
>  cong:   none algo newreno cwnd 4413 ssthresh 4194304 bytes_acked 0
>  cc space 4413 prev_cwnd 0 prev_ssthresh 0 rtx_bytes 0
>  snd_congestion 1736877166 dupack 0 limited_transmit 1736877166
>  sboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
>  last_bytes_delivered 0 high_sacked 

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-06 Thread dchons
Hi Florin,

Some progress, at least with the built-in echo app, thank you for all the 
suggestions so far! By adjusting the fifo-size and testing in half-duplex I was 
able to get close to 5 Gbps between the two openstack instances using the 
built-in test echo app:

vpp# test echo clients gbytes 1 no-return fifo-size 100 uri 
tcp://10.0.0.156/
1 three-way handshakes in .26 seconds 3.86/s
Test started at 745.163085
Test finished at 746.937343
1073741824 bytes (1024 mbytes, 1 gbytes) in 1.77 seconds
605177784.33 bytes/second half-duplex
4.8414 gbit/second half-duplex

I need to get closer to 10 Gbps but at least there is good proof that the issue 
is related to configuration / tuning. So, I switched back to iperf testing with 
VCL, and I'm back to 600 Mbps, even though I can confirm that the fifo sizes 
match what is configured in vcl.conf (note that in this test run I changed that 
to 8 Mb each for rx and tx from the previous 16, but results are the same when 
I use 16 Mb). I'm obviously missing something in the configuration but I can't 
imagine what that might be. Below is my exact startup.conf, vcl.conf and output 
from show session from this iperf run to give the full picture, hopefully 
something jumps out as missing in my configuration. Thank you for your patience 
and support with this, much appreciated!

*[root@vpp-test-1 centos]# cat vcl.conf*
vcl {
rx-fifo-size 800
tx-fifo-size 800
app-scope-local
app-scope-global
api-socket-name /tmp/vpp-api.sock
}

*[root@vpp-test-1 centos]# cat /etc/vpp/startup.conf*
unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli.sock
gid vpp
interactive
}
dpdk {
dev :00:03.0{
num-rx-desc 65535
num-tx-desc 65535
}
}
session { evt_qs_memfd_seg }
socksvr { socket-name /tmp/vpp-api.sock }
api-trace {
on
}
api-segment {
gid vpp
}
cpu {
main-core 7
corelist-workers 4-6
workers 3
}
buffers {
## Increase number of buffers allocated, needed only in scenarios with
## large number of interfaces and worker threads. Value is per numa node.
## Default is 16384 (8192 if running unpriviledged)
buffers-per-numa 128000

## Size of buffer data area
## Default is 2048
default data-size 8192
}

*vpp# sh session verbose 2*
Thread 0: no sessions
[1:0][T] 10.0.0.152:41737->10.0.0.156:5201        ESTABLISHED
index: 0 flags:  timers:
snd_una 124 snd_nxt 124 snd_una_max 124 rcv_nxt 5 rcv_las 5
snd_wnd 7999488 rcv_wnd 7999488 rcv_wscale 10 snd_wl1 4 snd_wl2 124
flight size 0 out space 4413 rcv_wnd_av 7999488 tsval_recent 12893009
tsecr 10757431 tsecr_last_ack 10757431 tsval_recent_age 1995 snd_mss 1428
rto 200 rto_boff 0 srtt 3 us 3.887 rttvar 2 rtt_ts 0. rtt_seq 124
cong:   none algo newreno cwnd 4413 ssthresh 4194304 bytes_acked 0
cc space 4413 prev_cwnd 0 prev_ssthresh 0 rtx_bytes 0
snd_congestion 1736877166 dupack 0 limited_transmit 1736877166
sboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
last_bytes_delivered 0 high_sacked 1736877166 snd_una_adv 0
cur_rxt_hole 4294967295 high_rxt 1736877166 rescue_rxt 1736877166
stats: in segs 7 dsegs 4 bytes 4 dupacks 0
out segs 7 dsegs 2 bytes 123 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 2.484
err wnd data below 0 above 0 ack below 0 above 0
pacer: bucket 42459 tokens/period .685 last_update 61908201
Rx fifo: cursize 0 nitems 799 has_event 0
head 4 tail 4 segment manager 3
vpp session 0 thread 1 app session 0 thread 0
ooo pool 0 active elts newest 4294967295
Tx fifo: cursize 0 nitems 799 has_event 0
head 123 tail 123 segment manager 3
vpp session 0 thread 1 app session 0 thread 0
ooo pool 0 active elts newest 4294967295
[1:1][T] 10.0.0.152:53460->10.0.0.156:5201        ESTABLISHED
index: 1 flags: PSH pending timers: RETRANSMIT
snd_una 160482962 snd_nxt 160735718 snd_una_max 160735718 rcv_nxt 1 rcv_las 1
snd_wnd 7999488 rcv_wnd 7999488 rcv_wscale 10 snd_wl1 1 snd_wl2 160482962
flight size 252756 out space 714 rcv_wnd_av 7999488 tsval_recent 12895476
tsecr 10759907 tsecr_last_ack 10759907 tsval_recent_age 4294966825 snd_mss 1428
rto 200 rto_boff 0 srtt 1 us 3.418 rttvar 2 rtt_ts 42.0588 rtt_seq 160485818
cong:   none algo newreno cwnd 253470 ssthresh 187782 bytes_acked 2856
cc space 714 prev_cwnd 382704 prev_ssthresh 187068 rtx_bytes 0
snd_congestion 150237062 dupack 0 limited_transmit 817908495
sboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
last_bytes_delivered 0 high_sacked 150242774 snd_una_adv 0
cur_rxt_hole 4294967295 high_rxt 150235634 rescue_rxt 149855785
stats: in segs 84958 dsegs 0 bytes 0 dupacks 1237
out segs 112747 dsegs 112746 bytes 160999897 dupacks 0
fr 5 tr 0 rxt segs 185 bytes 264180 duration 2.473
err wnd data below 0 above 0 ack below 0 above 0
pacer: bucket 22180207 tokens/period 117.979 last_update 61e173e5
Rx fifo: cursize 0 nitems 799 has_event 0
head 0 tail 0 segment manager 3
vpp session 1 thread 1 app session 1 thread 0
ooo pool 0 active elts newest 0
Tx fifo: cursize 799 nitems 799 has_event 1
head 482961 tail 482960 segment manager 3
vpp 

Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-06 Thread Damjan Marion via Lists.Fd.Io


> On 6 Dec 2019, at 07:16, Prashant Upadhyaya  wrote:
> 
> Hi,
> 
> I use VPP with DPDK driver for I/O with NIC.
> For high speed switching of packets to and from kernel, I use DPDK KNI
> (kernel module and user space API's provided by DPDK)
> This works well because the vlib buffer is backed by the DPDK mbuf
> (KNI uses DPDK mbuf's)
> 
> Now, if I choose to use a native driver of VPP for I/O with NIC, is
> there a native equivalent in VPP to replace KNI as well ? The native
> equivalent should not lose out on performance as compared to KNI so I
> believe the tap interface can be ruled out here.
> 
> If I keep using DPDK KNI and VPP native non-dpdk driver, then I fear I
> would have to do a data copy between the vlib buffer and an mbuf  in
> addition to doing all the DPDK pool maintenance etc. The copies would
> be destructive for performance surely.
> 
> So I believe, the question is -- in presence of native drivers in VPP,
> what is the high speed equivalent of DPDK KNI.

You can use dpdk and native drivers at the same time.
How does KNI performance compare to a tap with a vhost-net backend?
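Roughly, a vhost-net-backed tap is created in VPP like this (the id, names and 
addresses below are illustrative, untested):

```
create tap id 0 host-if-name lstack0 host-ip4-addr 192.168.10.1/24
set interface state tap0 up
set interface ip address tap0 192.168.10.2/24
```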


-- 
Damjan



Re: [vpp-dev] CSIT - performance tests failing on Taishan

2019-12-06 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io
>> the attached patch

Converted to Gerrit: [1].

Vratko.

[1] https://gerrit.fd.io/r/c/vpp/+/23849

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Juraj Linkeš
Sent: Thursday, December 5, 2019 11:11 AM
To: Lijian Zhang (Arm Technology China) ; Peter Mikus -X 
(pmikus - PANTHEON TECH SRO at Cisco) ; Benoit Ganne (bganne) 
; Maciek Konstantynowicz (mkonstan) ; 
vpp-dev ; csit-...@lists.fd.io
Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco) ; 
Honnappa Nagarahalli 
Subject: Re: [vpp-dev] CSIT - performance tests failing on Taishan

Hi Lijian,

The patch helped, I can't reproduce the issue now.

Thanks,
Juraj

-Original Message-
From: Lijian Zhang (Arm Technology China) 
Sent: Thursday, December 5, 2019 7:16 AM
To: Juraj Linkeš ; Peter Mikus -X (pmikus - 
PANTHEON TECH SRO at Cisco) ; Benoit Ganne (bganne) 
; Maciek Konstantynowicz (mkonstan) ; 
vpp-dev ; csit-...@lists.fd.io
Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco) ; 
Honnappa Nagarahalli 
Subject: RE: CSIT - performance tests failing on Taishan

Hi Juraj,
Could you please try the attached patch?
Thanks.
-Original Message-
From: Juraj Linkeš 
Sent: 2019年12月4日 18:12
To: Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco) ; 
Benoit Ganne (bganne) ; Maciek Konstantynowicz (mkonstan) 
; vpp-dev ; csit-...@lists.fd.io
Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco) ; 
Lijian Zhang (Arm Technology China) ; Honnappa 
Nagarahalli 
Subject: RE: CSIT - performance tests failing on Taishan

Hi Ben, Lijian, Honnappa,

The issue is reproducible after the second invocation of show pci:
DBGvpp# show pci
Address  Sock VID:PID Link Speed   Driver  Product Name 
   Vital Product Data
:11:00.0   2  8086:10fb   5.0 GT/s x8  ixgbe
:11:00.1   2  8086:10fb   5.0 GT/s x8  ixgbe
0002:f9:00.0   0  15b3:1015   8.0 GT/s x8  mlx5_core   CX4121A - ConnectX-4 
LX SFP28   PN: MCX4121A-ACAT_C12

   EC: A1

   SN: MT1745K13032

   V0: 0x 50 43 49 65 47 65 6e 33 ...

   RV: 0x ba
0002:f9:00.1   0  15b3:1015   8.0 GT/s x8  mlx5_core   CX4121A - ConnectX-4 
LX SFP28   PN: MCX4121A-ACAT_C12

   EC: A1

   SN: MT1745K13032

   V0: 0x 50 43 49 65 47 65 6e 33 ...

   RV: 0x ba DBGvpp# show pci
Address  Sock VID:PID Link Speed   Driver  Product Name 
   Vital Product Data
:11:00.0   2  8086:10fb   5.0 GT/s x8  ixgbe
:11:00.1   2  8086:10fb   5.0 GT/s x8  ixgbe
Aborted
Makefile:546: recipe for target 'run' failed
make: *** [run] Error 134

I've tried to do some debugging with a debug build:
(gdb) bt
...
#5  0xbe775000 in format_vlib_pci_vpd (s=0x7efa9e80 "0002:f9:00.0   
0  15b3:1015   8.0 GT/s x8  mlx5_core   CX4121A - ConnectX-4 LX SFP28", 
args=0x7ef729b0) at /home/testuser/vpp/src/vlib/pci/pci.c:230
...
(gdb) frame 5
#5  0xbe775000 in format_vlib_pci_vpd (s=0x7efa9e80 "0002:f9:00.0   
0  15b3:1015   8.0 GT/s x8  mlx5_core   CX4121A - ConnectX-4 LX SFP28", 
args=0x7ef729b0) at /home/testuser/vpp/src/vlib/pci/pci.c:230
230   else if (*(u16 *) & data[p] == *(u16 *) id)
(gdb) info locals
data = 0x7efa9cd0 "PN\025MCX4121A-ACAT_C12EC\002A1SN\030MT1745K13032", 
' ' , "V0\023PCIeGen3 x8RV\001\272"
id = 0xaaa8  indent = 91 string_types = {0xbe7b7950 "PN", 
0xbe7b7958 "EC", 0xbe7b7960 "SN", 0xbe7b7968 "MN", 0x0} p = 0 
first_line = 1

Looks like something went wrong with the 'id' variable. More is attached.

As a temporary workaround (until we fix this), we're going to replace show pci 
with something else in CSIT: https://gerrit.fd.io/r/c/csit/+/23785

Juraj

-Original Message-
From: Peter Mikus -X (pmikus - PANTHEON TECH SRO at Cisco) 
Sent: Tuesday, December 3, 2019 3:58 PM
To: Juraj Linkeš ; Benoit Ganne (bganne) 
; Maciek Konstantynowicz (mkonstan) ; 
vpp-dev ; csit-...@lists.fd.io
Cc: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco) ; 
lijian.zh...@arm.com; Honnappa Nagarahalli 
Subject: RE: CSIT - performance tests failing on Taishan

Latest update is that Benoit has no access over VPN, so he tried to replicate 
in the local lab (assuming x86).
I will do quick fix in CSIT. I will disable MLX driver on Taishan.

Peter Mikus
Engineer - Software
Cisc

[vpp-dev] Regarding high speed I/O with kernel

2019-12-06 Thread Prashant Upadhyaya
Hi,

I use VPP with DPDK driver for I/O with NIC.
For high speed switching of packets to and from kernel, I use DPDK KNI
(kernel module and user space API's provided by DPDK)
This works well because the vlib buffer is backed by the DPDK mbuf
(KNI uses DPDK mbuf's)

Now, if I choose to use a native driver of VPP for I/O with NIC, is
there a native equivalent in VPP to replace KNI as well ? The native
equivalent should not lose out on performance as compared to KNI so I
believe the tap interface can be ruled out here.

If I keep using DPDK KNI and VPP native non-dpdk driver, then I fear I
would have to do a data copy between the vlib buffer and an mbuf  in
addition to doing all the DPDK pool maintenance etc. The copies would
be destructive for performance surely.

So I believe, the question is -- in presence of native drivers in VPP,
what is the high speed equivalent of DPDK KNI.

Regards
-Prashant


[vpp-dev] Coverity run FAILED as of 2019-12-06 14:02:36 UTC

2019-12-06 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues are 2
Newly detected: 0
Eliminated: 1
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects


[vpp-dev] Should bond's inactive slave be blocked when bond is in mode "active-backup"? #vnet #vapi #vpp

2019-12-06 Thread gencli Liu
Hi everyone:
I met a problem with bonding (mode: active-backup) when using VPP.
I don't know exactly how bonding (mode: active-backup) works in linux,
so I just did a test for this problem.

Topology:
linux-A                                     |    linux-B
vxlan1(192.192.1.100/24)                    |    vxlan1(192.192.1.99/24)
bond0(10.10.1.100/24) ---- em3 ------------ | -- em3(10.10.1.99/24)
                      \--- em4 ------------ | -- em4(172.16.152.60) -- Wireshark

When *bond0* 's active slave in *linux-A* is *em3*, I ping *192.192.1.100* 
from *linux-B* 's terminal, and the ping is *ok*!
I bring *em3* in *linux-A* down and up it again later on, just to change 
*bond0* 's active slave in *linux-A* to *em4*. I use the cmd "*cat 
/proc/net/bonding/bond0*" to make sure it worked.
So now *bond0* 's active slave in *linux-A* is *em4*; I ping *192.192.1.100* from 
*linux-B* 's terminal, and the ping is *not ok* (that makes sense).
But the Wireshark software in *linux-B* can't capture any packets related to the 
ping on *em4* in *linux-B*. At the same time, *bond0* and *em4* in *linux-A* do 
not send any packets, and *em3* and *bond0* in *linux-A* drop packets at the 
same frequency as the ping.

So I think linux's bond (mode: active-backup) drops packets coming in from the 
inactive slave!

First, I create a bond on linux-A (CentOS Linux release 7.7.1908 (Core)); the 
result looks like this:
*[root@localhost ~]# cat /proc/net/bonding/bond0*
*Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)*
*Bonding Mode: fault-tolerance (active-backup)*
*Primary Slave: None*
*Currently Active Slave: em3*
*MII Status: up*
*MII Polling Interval (ms): 100*
*Up Delay (ms): 0*
*Down Delay (ms): 0*
*Slave Interface: em3*
*MII Status: up*
*Speed: 100 Mbps*
*Duplex: full*
*Link Failure Count: 2*
*Permanent HW addr: 20:04:0f:f5:8a:3a*
*Slave queue ID: 0*
*Slave Interface: em4*
*MII Status: up*
*Speed: 100 Mbps*
*Duplex: full*
*Link Failure Count: 3*
*Permanent HW addr: 20:04:0f:f5:8a:3b*
*Slave queue ID: 0*
(We can see that bond0 has two slave interfaces: em3 is the active slave and em4 
is the inactive slave.)
Second, I create a vxlan interface on linux-A; the CMD and result look like this:
*[root@localhost ~]#* *ip link add vxlan1 type vxlan id 139265 remote 
10.10.1.99 dstport 4789 dev bond0*
*[root@localhost ~]#* *ip link set vxlan1 up*
*[root@localhost ~]#* *ip addr add 192.192.1.100/24 dev vxlan1*
*[root@localhost ~]# ifconfig*
*bond0: flags=5187  mtu 1500*
*inet 10.10.1.100  netmask 255.255.255.0  broadcast 10.10.1.255*
*inet6 fe80::2204:fff:fef5:8a3a  prefixlen 64  scopeid 0x20*
*ether 20:04:0f:f5:8a:3a  txqueuelen 0  (Ethernet)*
*RX packets 9441  bytes 981899 (958.8 KiB)*
*RX errors 0  dropped 2320  overruns 0  frame 0*
*TX packets 343  bytes 42390 (41.3 KiB)*
*TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0*

*em3: flags=6211  mtu 1500*
*ether 20:04:0f:f5:8a:3a  txqueuelen 1000  (Ethernet)*
*RX packets 7064  bytes 729818 (712.7 KiB)*
*RX errors 0  dropped 1067  overruns 0  frame 0*
*TX packets 269  bytes 36586 (35.7 KiB)*
*TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0*
*device interrupt 72*
*em4: flags=6211  mtu 1500*
*ether 20:04:0f:f5:8a:3a  txqueuelen 1000  (Ethernet)*
*RX packets 2377  bytes 252081 (246.1 KiB)*
*RX errors 0  dropped 20893  overruns 0  frame 0*
*TX packets 74  bytes 5804 (5.6 KiB)*
*TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0*
*device interrupt 73*
*vxlan1: flags=4163  mtu 1450*
*inet 192.192.1.100  netmask 255.255.255.0  broadcast 0.0.0.0*
*inet6 fe80::f077:8cff:fe45:89b9  prefixlen 64  scopeid 0x20*
*ether f2:77:8c:45:89:b9  txqueuelen 0  (Ethernet)*
*RX packets 230  bytes 18256 (17.8 KiB)*
*RX errors 0  dropped 0  overruns 0  frame 0*
*TX packets 305  bytes 30322 (29.6 KiB)*
*TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0*

Third, I create a vxlan interface on *linux-B*; the CMD and result look like 
this:
*[root@localhost ~]# ip link add vxlan1 type vxlan id 139265 remote 10.10.1.100 
dstport 4789 dev em3*
*[root@localhost ~]# ip link set vxlan1 up*
*[root@localhost ~]# ip addr add 192.192.1.99/24 dev vxlan1*
*[root@localhost ~]# ifconfig*
*em3: flags=4099  mtu 1500*
*inet 10.10.1.99  netmask 255.255.255.0  broadcast 10.10.1.255*
*ether 20:04:0f:f2:79:0f  txqueuelen 1000  (Ethernet)*
*RX packets 0  bytes 0 (0.0 B)*
*RX errors 0  dropped 0  overruns 0  frame 0*
*TX packets 0  bytes 0 (0.0 B)*
*TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0*
*device interrupt 73*

*vxlan1: flags=4163  mtu 1450*
*inet 192.192.1.99  netmask 255.255.255.0  broadcast 0.0.0.0*
*inet6 fe80::f077:8cff:fe45:89b9  prefixlen 64  scopeid 0x20*
*ether f2:77:8c:45:89:b9  txqueuelen 0  (Ethernet)*
*RX packets 230  bytes 18256 (17.8 KiB)*
*RX errors 0  dropped 0  overruns 0  frame 0*
*TX packets 305  bytes 30322 (29.6 KiB)*
*TX errors 0  dropped 0 overruns 0  carrier 0  collision