Re: [vpp-dev] VPP Socket API how to use from the application #socket-api #vpp #sock-api

2021-09-22 Thread Ole Troan
Ravi,

> Thanks for your response. I have a question on another language:
> so we don't have any socket API support for C/C++?
> The socket API support is only possible with Go/Python, right?

Yes, contributions to VAPI for socket transport are of course welcome!

Best regards,
Ole





Re: [vpp-dev] Enable DPDK tx offload flag mbuf-fast-free on VPP vector mode

2021-09-22 Thread Jieqiang Wang
Thanks for your explanation, Damjan. Based on that, it seems 
inappropriate to apply mbuf-fast-free in VPP, even on SMP systems …

From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion via 
lists.fd.io
Sent: Thursday, September 23, 2021 12:18 AM
To: Jieqiang Wang 
Cc: Benoit Ganne (bganne) ; vpp-dev ; 
Lijian Zhang ; Honnappa Nagarahalli 
; Govindarajan Mohandoss 
; Ruifeng Wang ; Tianyu 
Li ; Feifei Wang ; nd 
Subject: Re: [vpp-dev] Enable DPDK tx offload flag mbuf-fast-free on VPP vector 
mode


—
Damjan




On 22.09.2021, at 11:50, Jieqiang Wang <jieqiang.w...@arm.com> wrote:

Hi Ben,

Thanks for your quick feedback. A few comments inline.

Best Regards,
Jieqiang Wang

-Original Message-
From: Benoit Ganne (bganne) <bga...@cisco.com>
Sent: Friday, September 17, 2021 3:34 PM
To: Jieqiang Wang <jieqiang.w...@arm.com>; vpp-dev <vpp-dev@lists.fd.io>
Cc: Lijian Zhang <lijian.zh...@arm.com>; Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>; Govindarajan Mohandoss <govindarajan.mohand...@arm.com>; Ruifeng Wang <ruifeng.w...@arm.com>; Tianyu Li <tianyu...@arm.com>; Feifei Wang <feifei.wa...@arm.com>; nd <n...@arm.com>
Subject: RE: Enable DPDK tx offload flag mbuf-fast-free on VPP vector mode

Hi Jieqiang,

This looks like an interesting optimization but you need to check that the 
'mbufs to be freed should be coming from the same mempool' rule holds true. 
This won't be the case on NUMA systems (VPP creates 1 buffer pool per NUMA).
This should be easy to check with eg. 'vec_len (vm->buffer_main->buffer_pools) 
== 1'.

Jieqiang: That's a really good point. As you said, it holds true on SMP systems, 
and we can check it by testing whether the number of buffer pools equals 1. But 
I am wondering whether this check is too strict: if the worker CPUs and the NICs 
reside in the same NUMA node, I think the mbufs come from the same mempool and 
we still meet the requirement. What do you think?
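
(Editor's note: a minimal sketch of the combined guard being discussed -- vector mode plus a single buffer pool -- written against the fields referenced in this thread, with `vm` assumed to come from vlib_get_main(); illustrative only, not the final patch.)

  /* Inside dpdk_lib_init (): enable MBUF_FAST_FREE only when it is safe:
     - vector mode (no multi-segment buffers, no tx checksum offload)
     - a single VPP buffer pool, i.e. one (fake) mempool feeding tx */
  vlib_main_t *vm = vlib_get_main ();
  if (dm->conf->no_multi_seg && dm->conf->no_tx_checksum_offload
      && vec_len (vm->buffer_main->buffer_pools) == 1)
    xd->port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;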

Please note that VPP does not use DPDK mempools; we fake them by registering our 
own mempool handlers.
There is a special trick for how refcnt > 1 is handled: all packets whose VPP 
reference count is > 1 are handed to the DPDK code as members of another fake 
mempool which has its cache turned off.
In practice that means DPDK has two fake mempools per NUMA node, and all packets 
going to the DPDK code always have refcnt set to 1.



For the rest, I think we do not use DPDK mbuf refcounting at all as we maintain 
our own anyway, but someone more knowledgeable than me should confirm.

Jieqiang: This matches the experiments (IPv4 multicast and L2 flood) I have done: 
in both test cases the mbufs are copied instead of reference counted. But as you 
mentioned, this also needs a double-check from VPP experts.

see above….



I'd be curious to see if we can measure a real performance difference in CSIT.

Jieqiang: Let me trigger some performance test cases in CSIT and come back to 
you with related performance figures.

—
Damjan




Re: [vpp-dev] patch: fix cxq_vring field in virtio (affects taps)

2021-09-22 Thread Mohsin Kazmi via lists.fd.io
Hi Ivan,

Thank you so much for reporting the issue. The cxq_vring field is specific to 
virtio PCI; it should not be accessed or set in the tap driver.
Please find the proper fix here:

  1.  https://gerrit.fd.io/r/c/vpp/+/33798/1
  2.  https://gerrit.fd.io/r/c/vpp/+/33796/3

-br
Mohsin

From:  on behalf of Ivan Shvedunov 
Date: Wednesday, September 22, 2021 at 2:10 PM
To: vpp-dev 
Subject: [vpp-dev] patch: fix cxq_vring field in virtio (affects taps)

  Hi,
  I've noticed that VPP returns bad host IPv6 addresses from the 
sw_interface_tap_v2_dump() API call. As it turned out, the problem is due to 
this line:
https://github.com/FDio/vpp/blob/ef356f57b54b948d990b293514f062aebf86da72/src/vnet/devices/tap/tap.c#L743
The cxq_vring field belongs to the part of a union inside virtio_if_t that is 
only used for virtio_pci, but it overlaps host_ip6_addr in the other part of 
the union. As there are more code paths that use this field without checking 
the virtio type, for example in virtio_show(), I thought it's probably 
safer to move it out of the union:
https://gerrit.fd.io/r/c/vpp/+/33791
This patch fixes the issue with TAP details.

--
Ivan Shvedunov <ivan...@gmail.com>
;; My GPG fingerprint is: 2E61 0748 8E12 BB1A 5AB9  F7D0 613E C0F8 0BC5 2807




Re: [vpp-dev] VPP Socket API how to use from the application #socket-api #vpp #sock-api

2021-09-22 Thread RaviKiran Veldanda
Hi Ole,
Thanks for your response. I have a question on another language:
so we don't have any socket API support for C/C++?
The socket API support is only possible with Go/Python, right?
//Ravi




Re: [vpp-dev] VPP Socket API how to use from the application #socket-api #vpp #sock-api

2021-09-22 Thread Ole Troan
Ravi,

The VPP binary API supports two transports: shared memory and Unix domain 
sockets.
VAPI is the C-language binding for the binary API; I think it only supports 
shared memory at the moment.

The other language bindings support both, or only sockets.
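
(Editor's note: for reference, the socket transport of the binary API is enabled 
via the socksvr section of startup.conf; a minimal sketch, assuming the default 
socket path:)

  socksvr {
    socket-name /run/vpp/api.sock
  }

Go (govpp) and Python (vpp_papi) clients should then be able to connect over that 
socket instead of shared memory.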

Cheers,
Ole

> On 22 Sep 2021, at 20:31, RaviKiran Veldanda  wrote:
> 
> Hi Experts,
> I was trying to find out how to use the socket API instead of the "VAPI"-based API.
> Can you please explain how to use the socket API? Any pointer will be a great help.
> //Ravi 
> 
> 




[vpp-dev] VPP Socket API how to use from the application #socket-api #vpp #sock-api

2021-09-22 Thread RaviKiran Veldanda
Hi Experts,
I was trying to find out how to use the socket API instead of the "VAPI"-based API.
Can you please explain how to use the socket API? Any pointer will be a great help.
//Ravi




Re: [vpp-dev] config 1G hugepage

2021-09-22 Thread Damjan Marion via lists.fd.io

With running VPP you can do:

$ grep huge /proc/$(pgrep vpp)/numa_maps
10 default file=/memfd:buffers-numa-0\040(deleted) huge dirty=19 N0=19 
kernelpagesize_kB=2048
100260 default file=/memfd:buffers-numa-1\040(deleted) huge dirty=19 N1=19 
kernelpagesize_kB=2048
1004c0 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N1=1 
kernelpagesize_kB=2048

1st line - 19 2048K memfd-backed hugepages on numa 0
2nd line - 19 2048K memfd-backed hugepages on numa 1
3rd line - one 2048K anonymous hugepage on numa 1

The first two are buffer pool memory; the 3rd one is likely some physmem used by 
a native driver.


If you add to startup.conf:

memory {
  main-heap-page-size 1G
}


$grep huge /proc/$(pgrep vpp)/numa_maps
10 default file=/memfd:buffers-numa-0\040(deleted) huge dirty=19 N0=19 
kernelpagesize_kB=2048
100260 default file=/memfd:buffers-numa-1\040(deleted) huge dirty=19 N1=19 
kernelpagesize_kB=2048
1004c0 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N1=1 
kernelpagesize_kB=2048
7fbc default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N1=1 
kernelpagesize_kB=1048576

The last line is the main heap, allocated as a single anonymous 1G hugepage.

VPP does not use filesystem-backed hugepages, so you will not find anything in 
/var/run/huge….

— 
Damjan
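
(Editor's note: for completeness, a quick way to check or reserve 1G hugepages on 
the host before starting VPP -- a sketch assuming a standard Linux sysfs layout:)

  $ grep -i huge /proc/meminfo
  $ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
  # reserve four 1G pages at runtime (may fail if memory is already fragmented):
  $ echo 4 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
  # or reserve them at boot, on the kernel command line:
  #   default_hugepagesz=1G hugepagesz=1G hugepages=4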



> On 21.09.2021., at 20:11, Mohsen Meamarian  wrote:
> 
> Hi,
> Thanks. Is there a way to check how many hugepages are ready for VPP to use? 
> Immediately after starting VPP, I open the "/run/vpp/hugepages" file but it is 
> empty. Does VPP occupy hugepages only when needed, or does it reserve them for 
> itself from the beginning? 
> 
> 





Re: [vpp-dev] Enable DPDK tx offload flag mbuf-fast-free on VPP vector mode

2021-09-22 Thread Damjan Marion via lists.fd.io

— 
Damjan



> On 22.09.2021., at 11:50, Jieqiang Wang  wrote:
> 
> Hi Ben,
> 
> Thanks for your quick feedback. A few comments inline.
> 
> Best Regards,
> Jieqiang Wang
> 
> -Original Message-
> From: Benoit Ganne (bganne) 
> Sent: Friday, September 17, 2021 3:34 PM
> To: Jieqiang Wang ; vpp-dev 
> Cc: Lijian Zhang ; Honnappa Nagarahalli 
> ; Govindarajan Mohandoss 
> ; Ruifeng Wang ; Tianyu 
> Li ; Feifei Wang ; nd 
> Subject: RE: Enable DPDK tx offload flag mbuf-fast-free on VPP vector mode
> 
> Hi Jieqiang,
> 
> This looks like an interesting optimization but you need to check that the 
> 'mbufs to be freed should be coming from the same mempool' rule holds true. 
> This won't be the case on NUMA systems (VPP creates 1 buffer pool per NUMA).
> This should be easy to check with eg. 'vec_len 
> (vm->buffer_main->buffer_pools) == 1'.
 Jieqiang: That's a really good point. As you said, it holds true on SMP 
 systems, and we can check it by testing whether the number of buffer pools 
 equals 1. But I am wondering whether this check is too strict: if the worker 
 CPUs and the NICs reside in the same NUMA node, I think the mbufs come from 
 the same mempool and we still meet the requirement. What do you think?

Please note that VPP does not use DPDK mempools; we fake them by registering our 
own mempool handlers.
There is a special trick for how refcnt > 1 is handled: all packets whose VPP 
reference count is > 1 are handed to the DPDK code as members of another fake 
mempool which has its cache turned off.
In practice that means DPDK has two fake mempools per NUMA node, and all packets 
going to the DPDK code always have refcnt set to 1.

> 
> For the rest, I think we do not use DPDK mbuf refcounting at all as we 
> maintain our own anyway, but someone more knowledgeable than me should 
> confirm.
 Jieqiang: This matches the experiments (IPv4 multicast and L2 flood) I have 
 done: in both test cases the mbufs are copied instead of reference counted. 
 But as you mentioned, this also needs a double-check from VPP experts.

see above….

> 
> I'd be curious to see if we can measure a real performance difference in CSIT.
 Jieqiang: Let me trigger some performance test cases in CSIT and come back 
 to you with related performance figures.

— 
Damjan



[vpp-dev] getting ip6-glean:address overflow drops

2021-09-22 Thread mark antony
Hi all,
I get about 5 million packet drops due to ip6-glean: address overflow
drops. I attached my "vppctl sh error" command result. I am sure about my
configuration. What could be the possible reasons for this error?

thanks,
mark
vpp# sh errors
     Count                 Node                          Reason
       336         ip6-icmp-input                router advertisements sent
         1         GigabitEthernet88/0/3-output  interface is down
         7         GigabitEthernetaf/0/0-output  interface is down
         7         GigabitEthernetaf/0/1-output  interface is down
         7         GigabitEthernetaf/0/2-output  interface is down
         7         GigabitEthernetaf/0/3-output  interface is down
         7         GigabitEthernetb1/0/0-output  interface is down
         7         GigabitEthernetb1/0/1-output  interface is down
         7         GigabitEthernetb1/0/2-output  interface is down
         7         GigabitEthernetb1/0/3-output  interface is down
        50         arp-input                     ARP request IP4 source address learned
   5059470         ip6-glean                     address overflow drops
     39813         ip6-glean                     neighbor solicitations sent
     57237         ip4-glean                     address overflow drops
        79         ip4-glean                     ARP requests sent
     25837         ip6-icmp-input                neighbor solicitations for unknown targets
        50         arp-input                     ARP request IP4 source address learned
   5059412         ip6-glean                     address overflow drops
     39801         ip6-glean                     neighbor solicitations sent
     61581         ip4-glean                     address overflow drops
        84         ip4-glean                     ARP requests sent
     26474         ip6-icmp-input                neighbor solicitations for unknown targets
        35         arp-input                     ARP request IP4 source address learned
   2529688         ip6-glean                     address overflow drops
     19893         ip6-glean                     neighbor solicitations sent
     98141         ip4-glean                     address overflow drops
        58         ip4-glean                     ARP requests sent
     13540         ip6-icmp-input                neighbor solicitations for unknown targets
       434         GigabitEthernet18/0/0-tx      Tx packet drops (dpdk tx failure)
        25         arp-input                     ARP request IP4 source address learned
   2529722         ip6-glean                     address overflow drops
     19893         ip6-glean                     neighbor solicitations sent
    190983         ip4-glean                     address overflow drops
        95         ip4-glean                     ARP requests sent
     13206         ip6-icmp-input                neighbor solicitations for unknown targets
       469         GigabitEthernet18/0/1-tx      Tx packet drops (dpdk tx failure)
        38         arp-input                     ARP request IP4 source address learned
   2529801         ip6-glean                     address overflow drops




[vpp-dev] Is there a way to debug slow plugin activation after VPP startup/reload?

2021-09-22 Thread Юрий Иванов
Hi,
I've got a very simple config for NAT44:
create interface rdma host-if enp1s0f0 name rdma-0

create sub-interfaces rdma-0 934
create sub-interfaces rdma-0 935

set interface ip address rdma-0.934 10.0.100.10/31
set interface ip address rdma-0.935 18.31.0.1/25

set interface state rdma-0 up
set interface state rdma-0.934 up
set interface state rdma-0.935 up

ip route add 10.0.0.0/8 via 10.0.100.11 rdma-0.934
ip route add 0.0.0.0/0 via 18.31.0.126 rdma-0.935

nat44 enable sessions 90

set int nat44 in rdma-0.934 out rdma-0.935
nat44 add address 18.31.0.2-18.31.0.124
After a server reboot or a VPP process failure, NAT does not work for a 
relatively long period.
Moreover, some servers start working while others do not yet.
After 10-20 minutes it starts working as expected.
Ping to the private servers from the VPP process works.
VPP is built from source (vpp v21.10-rc0~361-g5aa06abf2), but the delay is the 
same with the repo version.

VPP starts creating some sessions, but it simply does not work from the PC side.
Below are some outputs:
vpp# show nat44 sessions
NAT44 ED sessions:
 thread 0 vpp_main: 33 sessions 
i2o 10.0.101.15 proto icmp port 45327 fib 0
o2i 18.31.0.19 proto icmp port 45327 fib 0
   external host 91.233.219.251:45327
   i2o flow: match: saddr 10.0.101.15 sport 45327 daddr 91.233.219.251 
dport 45327 proto ICMP fib_idx 0 rewrite: saddr 18.31.0.19 daddr 91.233.219.251 
icmp-id 45327 txfib 0
   o2i flow: match: saddr 91.233.219.251 sport 45327 daddr 18.31.0.19 dport 
45327 proto ICMP fib_idx 0 rewrite: daddr 10.0.101.15 icmp-id 45327 txfib 0
   index 0
   last heard 11.39
   total pkts 9, total bytes 396
   dynamic translation

i2o 10.0.101.15 proto icmp port 45329 fib 0
o2i 18.31.0.19 proto icmp port 45329 fib 0
   external host 92.223.5.15:45329
   i2o flow: match: saddr 10.0.101.15 sport 45329 daddr 92.223.5.15 dport 
45329 proto ICMP fib_idx 0 rewrite: saddr 18.31.0.19 daddr 92.223.5.15 icmp-id 
45329 txfib 0
   o2i flow: match: saddr 92.223.5.15 sport 45329 daddr 18.31.0.19 dport 
45329 proto ICMP fib_idx 0 rewrite: daddr 10.0.101.15 icmp-id 45329 txfib 0
   index 1
   last heard 11.39
   total pkts 9, total bytes 396
   dynamic translation

i2o 10.0.101.15 proto icmp port 45330 fib 0
o2i 18.31.0.19 proto icmp port 45330 fib 0
   external host 92.223.6.32:45330
   i2o flow: match: saddr 10.0.101.15 sport 45330 daddr 92.223.6.32 dport 
45330 proto ICMP fib_idx 0 rewrite: saddr 18.31.0.19 daddr 92.223.6.32 icmp-id 
45330 txfib 0
   o2i flow: match: saddr 92.223.6.32 sport 45330 daddr 18.31.0.19 dport 
45330 proto ICMP fib_idx 0 rewrite: daddr 10.0.101.15 icmp-id 45330 txfib 0
   index 2
   last heard 11.39
   total pkts 9, total bytes 396
   dynamic translation
...

PC
suser@influxdb-1:~$ ip add
2: ens18:  mtu 1500 qdisc fq_codel state UP 
group default qlen 1000
link/ether e6:74:6e:8a:29:ac brd ff:ff:ff:ff:ff:ff
inet 10.0.101.15/25 brd 10.0.101.127 scope global ens18
   valid_lft forever preferred_lft forever
inet6 fe80::e474:6eff:fe8a:29ac/64 scope link
   valid_lft forever preferred_lft forever
suser@influxdb-1:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4092ms

suser@influxdb-1:~$




[vpp-dev] patch: fix cxq_vring field in virtio (affects taps)

2021-09-22 Thread Ivan Shvedunov
  Hi,
  I've noticed that VPP returns bad host IPv6 addresses from the
sw_interface_tap_v2_dump() API call. As it turned out, the problem is due
to this line:
https://github.com/FDio/vpp/blob/ef356f57b54b948d990b293514f062aebf86da72/src/vnet/devices/tap/tap.c#L743
The cxq_vring field belongs to the part of a union inside virtio_if_t
that is only used for virtio_pci, but it overlaps host_ip6_addr in
the other part of the union. As there are more code paths that use this
field without checking the virtio type, for example in virtio_show(), I thought
it's probably safer to move it out of the union:
https://gerrit.fd.io/r/c/vpp/+/33791
This patch fixes the issue with TAP details.

-- 
Ivan Shvedunov 
;; My GPG fingerprint is: 2E61 0748 8E12 BB1A 5AB9  F7D0 613E C0F8 0BC5 2807
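
(Editor's note: to illustrate the hazard with a hypothetical, simplified layout -- 
this is not the actual virtio_if_t definition -- writing the PCI-only member 
clobbers the tap-only one:)

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical stand-in for the union described above. */
  typedef union
  {
    unsigned int cxq_vring;           /* meaningful only for virtio_pci */
    unsigned char host_ip6_addr[16];  /* meaningful only for tap */
  } shared_fields_t;

  int
  main (void)
  {
    shared_fields_t u;
    memset (u.host_ip6_addr, 0xfd, sizeof (u.host_ip6_addr)); /* tap path sets the IPv6 address */
    u.cxq_vring = 0;  /* an unconditional write meant for virtio_pci... */
    /* ...silently corrupts the first bytes of the stored address: */
    printf ("host_ip6_addr[0] is now 0x%02x\n", u.host_ip6_addr[0]);
    return 0;
  }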




[vpp-dev] memif_tx_burst() is failing with INVALID_ARG after migrating to fdio-21.06 version

2021-09-22 Thread Satya Murthy
Hi,

Our software was based on the VPP 20.05 version.
Recently we moved to the fdio-2106 version and could compile it successfully.

However, the memif_tx_burst() function is failing to send messages to VPP with 
the error code "INVALID_ARGUMENT".
This was working in the 20.05 version without any issues.

I see that Jakub Grajciar has changed quite a bit in this area.
One of the changes is:

commit 1421748e3cd98d7355b1a1db283803a571569927
Author: Jakub Grajciar 
Date:   Thu Jan 14 13:23:48 2021 +0100

libmemif: set data offset for memif buffer

Update descriptor offset based on data pointer
in memif_buffer_t.
Slave only, master will not modify the descriptor.

Type: feature

Are these changes not backward compatible?
What changes do we need to make to get this working with the 21.06 version?

Jakub Grajciar,
I would appreciate your input on this.

--
Thanks & Regards,
Murthy
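
(Editor's note: not a confirmed root cause, but worth checking -- since that 
commit the slave side derives the descriptor offset from the data pointer in 
memif_buffer_t, so the data pointer passed to memif_tx_burst() must point inside 
the buffer returned by memif_buffer_alloc(). A minimal, hypothetical tx sketch 
under that assumption; conn, qid and the payload are placeholders:)

  #include <string.h>
  #include <libmemif.h>

  /* Hypothetical helper: copy a payload into a freshly allocated memif buffer
     and transmit it, writing into buf.data rather than repointing it. */
  static int
  send_one (memif_conn_handle_t conn, uint16_t qid,
            const void *payload, uint16_t len)
  {
    memif_buffer_t buf;
    uint16_t n_alloc = 0, n_tx = 0;
    int err;

    err = memif_buffer_alloc (conn, qid, &buf, 1, &n_alloc, len);
    if (err != MEMIF_ERR_SUCCESS || n_alloc != 1)
      return err;

    memcpy (buf.data, payload, len);  /* fill the buffer the library handed us */
    buf.len = len;                    /* do not move buf.data outside the region */

    return memif_tx_burst (conn, qid, &buf, 1, &n_tx);
  }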




Re: [vpp-dev] Enable DPDK tx offload flag mbuf-fast-free on VPP vector mode

2021-09-22 Thread Jieqiang Wang
Hi Ben,

Thanks for your quick feedback. A few comments inline.

Best Regards,
Jieqiang Wang

-Original Message-
From: Benoit Ganne (bganne) 
Sent: Friday, September 17, 2021 3:34 PM
To: Jieqiang Wang ; vpp-dev 
Cc: Lijian Zhang ; Honnappa Nagarahalli 
; Govindarajan Mohandoss 
; Ruifeng Wang ; Tianyu 
Li ; Feifei Wang ; nd 
Subject: RE: Enable DPDK tx offload flag mbuf-fast-free on VPP vector mode

Hi Jieqiang,

This looks like an interesting optimization but you need to check that the 
'mbufs to be freed should be coming from the same mempool' rule holds true. 
This won't be the case on NUMA systems (VPP creates 1 buffer pool per NUMA).
This should be easy to check with eg. 'vec_len (vm->buffer_main->buffer_pools) 
== 1'.
>>> Jieqiang: That's a really good point. As you said, it holds true on SMP 
>>> systems, and we can check it by testing whether the number of buffer pools 
>>> equals 1. But I am wondering whether this check is too strict: if the worker 
>>> CPUs and the NICs reside in the same NUMA node, I think the mbufs come from 
>>> the same mempool and we still meet the requirement. What do you think?

For the rest, I think we do not use DPDK mbuf refcounting at all as we maintain 
our own anyway, but someone more knowledgeable than me should confirm.
>>> Jieqiang: This matches the experiments (IPv4 multicast and L2 flood) I have 
>>> done: in both test cases the mbufs are copied instead of reference counted. 
>>> But as you mentioned, this also needs a double-check from VPP experts.

I'd be curious to see if we can measure a real performance difference in CSIT.
>>> Jieqiang: Let me trigger some performance test cases in CSIT and come back 
>>> to you with related performance figures.

Best
ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Jieqiang
> Wang
> Sent: vendredi 17 septembre 2021 06:07
> To: vpp-dev 
> Cc: Lijian Zhang ; Honnappa Nagarahalli
> ; Govindarajan Mohandoss
> ; Ruifeng Wang ;
> Tianyu Li ; Feifei Wang ; nd
> 
> Subject: [vpp-dev] Enable DPDK tx offload flag mbuf-fast-free on VPP
> vector mode
>
> Hi VPP maintainers,
>
>
>
> Recently VPP upgraded its DPDK version to DPDK 21.08, which includes two
> optimization patches [1][2] from the Arm DPDK team. With the mbuf-fast-free
> flag, the two patches add a code segment to accelerate mbuf freeing in the
> PMD TX path for the i40e driver, which shows a clear performance improvement
> in the DPDK L3FWD benchmarking results.
>
>
>
> I tried to verify the benefits that those optimization patches can bring to
> VPP, but found out that the mbuf-fast-free flag is not enabled in VPP+DPDK
> by default.
>
> Applying DPDK mbuf-fast-free has some constraints, e.g.:
>
> * mbufs to be freed should come from the same mempool
> * ref_cnt == 1 always in the mbuf meta-data when user apps call DPDK
>   rte_eth_tx_burst()
> * no TX checksum offload
> * no jumbo frames
>
> But VPP vector mode (set by adding the 'no-tx-checksum-offload' and
> 'no-multi-seg' parameters in the dpdk section of startup.conf) seems to
> satisfy all the requirements. So I made a few code changes, shown below, to
> set the mbuf-fast-free flag by default in VPP vector mode and did some
> benchmarking of IPv4 routing test cases with 1 flow/10k flows. The
> benchmarking results show both a throughput improvement and CPU cycles saved
> in the DPDK transmit function.
>
>
>
> So, any thoughts on enabling the mbuf-fast-free tx offload flag in VPP
> vector mode? Any feedback is welcome :)
>
>
>
> Code Changes:
>
>
>
> diff --git a/src/plugins/dpdk/device/init.c b/src/plugins/dpdk/device/init.c
> index f7c1cc106..0fbdd2317 100644
> --- a/src/plugins/dpdk/device/init.c
> +++ b/src/plugins/dpdk/device/init.c
> @@ -398,6 +398,8 @@ dpdk_lib_init (dpdk_main_t * dm)
>           xd->port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
>           xd->flags |= DPDK_DEVICE_FLAG_MAYBE_MULTISEG;
>         }
> +  if (dm->conf->no_multi_seg && dm->conf->no_tx_checksum_offload)
> +    xd->port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
>
>    xd->tx_q_used = clib_min (dev_info.max_tx_queues, tm->n_vlib_mains);
>
>
>
> Benchmark Results:
>
>
>
> 1 flow, bidirectional
>
> Throughput (Mpps):
>
>                 Original   Patched   Ratio
>   N1SDP           11.62     12.44    +7.06%
>   ThunderX2        9.52     10.16    +6.30%
>   Dell 8268       17.82     18.20    +2.13%
>
> CPU cycles overhead for DPDK transmit function (recorded by Perf tools):
>
>                 Original   Patched   Ratio
>   N1SDP           13.08%     5.53%   -7.55%
>   ThunderX2       11.01%     6.68%   -4.33%
>   Dell 8268       10.78%     7.35%   -3.43%
>
> 10k flows, bidirectional
>
> Throughput (Mpps):
>
>                 Original   Patched   Ratio
>   N1SDP            8.48      9.0     +6.13%
>   ThunderX2        8.84      9.26    +4.75%
>   Dell 8268       15.04     15.40    +2.39%
>
> CPU cycles overhead for DPDK transmit function (recorded by Perf tools):
>
> Origina

Re: [vpp-dev] VPP vnet event for route/FIB table update?

2021-09-22 Thread Neale Ranns

Hi,

No. The expectation is that the entity that added the route informs the other 
entities in the system; VPP is not the system message bus.

/neale


From: vpp-dev@lists.fd.io  on behalf of PRANAB DAS via 
lists.fd.io 
Date: Tuesday, 21 September 2021 at 13:29
To: vpp-dev@lists.fd.io 
Subject: [vpp-dev] VPP vnet event for route/FIB table update?

Hi

We have an application that needs to be notified when a FIB table is updated.
Is there any VPP event API (an equivalent of NETLINK_ROUTE) to monitor FIB 
table updates?

Thank you very much

- Pranab Das




[vpp-dev] VPP 21.10 RC1 is TOMORROW 23 sep 2021 12:00 UTC

2021-09-22 Thread Andrew Yourtchenko
Hi all,

You might have seen the email from Dave yesterday about the outage. Thanks a 
lot to everyone involved in fixing it. As per our discussion with Dave, the 
RC1 milestone will happen 24 hours later than planned, to compensate for the 
loss of access to the infra.

--a /* your friendly 21.10 release manager */