Re: [vpp-dev] Query regarding bonding in Vpp 19.08

2020-04-20 Thread chetan bhasin
Thanks Steven for the response.

As per VPP 18.01, only the bonded interface state is shown in the "show
interface" CLI.

Thanks,
Chetan



On Mon, Apr 20, 2020 at 8:49 PM Steven Luong (sluong) 
wrote:

> First, your question has nothing to do with bonding. Whatever you are
> seeing is true regardless of whether bonding is configured or not.
>
> Show interfaces displays the admin state of the interface. Whenever you
> set the admin state to up, it is displayed as up regardless of whether the
> physical carrier is up or down. While the admin state may be up, the
> physical carrier may be down.
>
> Show hardware displays the physical state of the interface, carrier up or
> down. The admin state must be set to up before the hardware carrier state
> will show as up.
>
> Steven
>
> From:  on behalf of chetan bhasin <chetan.bhasin...@gmail.com>
> Date: Sunday, April 19, 2020 at 11:40 PM
> To: vpp-dev
> Subject: [vpp-dev] Query regarding bonding in Vpp 19.08
>
> Hi,
>
> I am using VPP 19.08. When I use a bonding configuration, I see the below
> output from the "show int" CLI.
>
> Query: Is it OK for the "show interface" CLI to show the status of a slave
> interface as up while, as per "show hardware-interfaces", it is down?
>
> vpp# show int
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> BondEthernet0                      3     up    9000/0/0/0                rx packets          12
> BondEthernet0.811                  4     up    0/0/0/0                   rx packets           6
> BondEthernet0.812                  5     up    0/0/0/0                   rx packets           6
> device_5d/0/0                      1     up    9000/0/0/0                rx packets          12
> device_5d/0/1                      2     up    9000/0/0/0                rx packets          17
>                                                                          rx bytes          1100
>                                                                          drops               14
> local0                             0     down  0/0/0/0
>
> Thanks,
>
> Chetan
>


Re: [vpp-dev] [csit-report] Regressions as of 2020-04-03 14:00:14 UTC #email

2020-04-20 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via lists.fd.io
> 3n-hsw

Around the 2nd of April a regression happened
in the eth-l2bdscale1mmaclrn test, visible here [0].
Trending needed multiple runs to identify that it was there,
and alerting is configured not to report "old" regressions
(so it did not report this one).

Anyway, a closer look shows the test previously had
a larger stdev, and an even closer look shows there was some small
duration stretching going on.
The regression then happened when [1] fixed that by adding TRex workers.
Unintentionally, as [1] was primarily fixing Mellanox tests.
See [2] for more details on that.

Vratko.

[0] https://docs.fd.io/csit/master/trending/trending/l2-3n-hsw-xl710.html#t1c
[1] https://gerrit.fd.io/r/c/csit/+/26262
[2] https://lists.fd.io/g/csit-report/message/2406

-Original Message-
From: csit-rep...@lists.fd.io  On Behalf Of 
nore...@jenkins.fd.io
Sent: Friday, 2020-April-03 18:07
To: Fdio+Csit-Report via Email Integration 
Subject: [csit-report] Regressions as of 2020-04-03 14:00:14 UTC #email

Following regressions occurred in the last trending job runs, listed per testbed type.



2n-skx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-2n-skx/881, 
VPP version: 20.05-rc0~433-g0c7aa7ab5~b930

No regressions

3n-skx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-3n-skx/862, 
VPP version: 20.05-rc0~433-g0c7aa7ab5~b930

tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6ip6-ip6base-srv6enc1sid-mrr.78b-2t1c-avf-ethip6ip6-ip6base-srv6enc1sid-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr.78b-2t1c-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr.78b-2t1c-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-dyn-mrr.78b-2t1c-avf-ethip6srhip6-ip6base-srv6proxy-dyn-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr.78b-2t1c-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr.78b-2t1c-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6ip6-ip6base-srv6enc1sid-mrr.78b-4t2c-avf-ethip6ip6-ip6base-srv6enc1sid-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr.78b-4t2c-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr.78b-4t2c-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-dyn-mrr.78b-4t2c-avf-ethip6srhip6-ip6base-srv6proxy-dyn-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr.78b-4t2c-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr.78b-4t2c-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6ip6-ip6base-srv6enc1sid-mrr.78b-8t4c-avf-ethip6ip6-ip6base-srv6enc1sid-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr.78b-8t4c-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr.78b-8t4c-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr.78b-8t4c-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr.78b-8t4c-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr
tests.vpp.perf.vm vhost.25ge2p1xxv710-avf-dot1q-l2xcbase-eth-2vhostvr1024-1vm-mrr.64b-8t4c-avf-dot1q-l2xcbase-eth-2vhostvr1024-1vm-mrr


2n-clx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-2n-clx/282, 
VPP version: 20.05-rc0~456-g57a5a2df5~b953

tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-avf-eth-l2xcbase-eth-2memif-1dcr-mrr.64b-2t1c-avf-eth-l2xcbase-eth-2memif-1dcr-mrr
tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-avf-eth-l2xcbase-eth-2memif-1dcr-mrr.64b-4t2c-avf-eth-l2xcbase-eth-2memif-1dcr-mrr
tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-avf-eth-l2xcbase-eth-2memif-1dcr-mrr.64b-8t4c-avf-eth-l2xcbase-eth-2memif-1dcr-mrr
tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-avf-ethip4-ip4base-eth-2memif-1dcr-mrr.64b-8t4c-avf-ethip4-ip4base-eth-2memif-1dcr-mrr
tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-8t4c-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr
tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-eth-l2xcbase-eth-2memif-1dcr-mrr.64b-8t4c-eth-l2xcbase-eth-2memif-1dcr-mrr
tests.vpp.perf.container memif.2n1l-100ge2p1cx556a-rdma-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-2t1c-rdma-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr
tests.vpp.perf.container memif.2n1l-100ge2p1cx556a-rdma-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-2t1c-rdma-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr

Re: [vpp-dev] Query regarding bonding in Vpp 19.08

2020-04-20 Thread steven luong via lists.fd.io
First, your question has nothing to do with bonding. Whatever you are seeing is
true regardless of whether bonding is configured or not.

Show interfaces displays the admin state of the interface. Whenever you set the
admin state to up, it is displayed as up regardless of whether the physical
carrier is up or down. While the admin state may be up, the physical carrier
may be down.

Show hardware displays the physical state of the interface, carrier up or down.
The admin state must be set to up before the hardware carrier state will show
as up.
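
As a quick illustration (the interface name below is just a placeholder):

  vpp# set interface state GigabitEthernet0/8/0 up
  vpp# show interface GigabitEthernet0/8/0             <- reports the admin state (up)
  vpp# show hardware-interfaces GigabitEthernet0/8/0   <- reports the carrier/link state (may still be down)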

Steven

From:  on behalf of chetan bhasin 

Date: Sunday, April 19, 2020 at 11:40 PM
To: vpp-dev 
Subject: [vpp-dev] Query regarding bonding in Vpp 19.08

Hi,

I am using VPP 19.08. When I use a bonding configuration, I see the below
output from the "show int" CLI.
Query: Is it OK for the "show interface" CLI to show the status of a slave
interface as up while, as per "show hardware-interfaces", it is down?

vpp# show int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
BondEthernet0                      3     up    9000/0/0/0                rx packets          12
BondEthernet0.811                  4     up    0/0/0/0                   rx packets           6
BondEthernet0.812                  5     up    0/0/0/0                   rx packets           6
device_5d/0/0                      1     up    9000/0/0/0                rx packets          12
device_5d/0/1                      2     up    9000/0/0/0                rx packets          17
                                                                         rx bytes          1100
                                                                         drops               14
local0                             0     down  0/0/0/0

Thanks,
Chetan


Re: [vpp-dev] [csit-report] Regressions as of 2020-04-18 14:00:15 UTC #email

2020-04-20 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via lists.fd.io
> -rnd-mrr

This is a consequence of [0],
which fixes an old bug in the CSIT code.
Previously, the traffic was not random enough.

Vratko.

[0] https://gerrit.fd.io/r/c/csit/+/26456

-Original Message-
From: csit-rep...@lists.fd.io  On Behalf Of 
nore...@jenkins.fd.io
Sent: Saturday, 2020-April-18 17:51
To: Fdio+Csit-Report via Email Integration 
Subject: [csit-report] Regressions as of 2020-04-18 14:00:15 UTC #email

Following regressions occurred in the last trending job runs, listed per testbed type.



2n-skx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-2n-skx/886, 
VPP version: 20.05-rc0~545-g8daeea9a5~b1042

tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale2m-rnd-mrr.64b-2t1c-avf-ethip4-ip4scale2m-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale2m-rnd-mrr.64b-4t2c-avf-ethip4-ip4scale2m-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale2m-rnd-mrr.64b-2t1c-ethip4-ip4scale2m-rnd-mrr
tests.vpp.perf.ip6.2n1l-10ge2p1x710-avf-dot1q-ip6base-mrr.78b-2t1c-avf-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-10ge2p1x710-dot1q-ip6base-mrr.78b-2t1c-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-10ge2p1x710-avf-dot1q-ip6base-mrr.78b-4t2c-avf-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-10ge2p1x710-dot1q-ip6base-mrr.78b-4t2c-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-10ge2p1x710-avf-dot1q-ip6base-mrr.78b-8t4c-avf-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-10ge2p1x710-dot1q-ip6base-mrr.78b-8t4c-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-25ge2p1xxv710-avf-dot1q-ip6base-mrr.78b-2t1c-avf-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-25ge2p1xxv710-avf-dot1q-ip6base-mrr.78b-4t2c-avf-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-25ge2p1xxv710-avf-dot1q-ip6base-mrr.78b-8t4c-avf-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-25ge2p1xxv710-dot1q-ip6base-mrr.78b-2t1c-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-25ge2p1xxv710-dot1q-ip6base-mrr.78b-8t4c-dot1q-ip6base-mrr
tests.vpp.perf.l2.2n1l-10ge2p1x710-avf-eth-l2xcbase-mrr.64b-2t1c-avf-eth-l2xcbase-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-avf-dot1q-l2xcbase-mrr.64b-2t1c-avf-dot1q-l2xcbase-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-dot1q-l2xcbase-mrr.64b-2t1c-dot1q-l2xcbase-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-dot1q-l2xcbase-mrr.64b-4t2c-dot1q-l2xcbase-mrr
tests.vpp.perf.vm vhost.2n1l-25ge2p1xxv710-avf-dot1q-l2bdbasemaclrn-eth-2vhostvr1024-1vm-vppl2xc-mrr.64b-8t4c-avf-dot1q-l2bdbasemaclrn-eth-2vhostvr1024-1vm-vppl2xc-mrr


2n-clx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-2n-clx/297, 
VPP version: 20.05-rc0~554-gce815deb7~b1051

tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale2m-rnd-mrr.64b-2t1c-avf-ethip4-ip4scale2m-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale200k-rnd-mrr.64b-2t1c-avf-ethip4-ip4scale200k-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale2m-rnd-mrr.64b-4t2c-avf-ethip4-ip4scale2m-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale200k-rnd-mrr.64b-4t2c-avf-ethip4-ip4scale200k-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale20k-rnd-mrr.64b-2t1c-ethip4-ip4scale20k-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale200k-rnd-mrr.64b-2t1c-ethip4-ip4scale200k-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale2m-rnd-mrr.64b-2t1c-ethip4-ip4scale2m-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale20k-rnd-mrr.64b-4t2c-ethip4-ip4scale20k-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale200k-rnd-mrr.64b-4t2c-ethip4-ip4scale200k-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale2m-rnd-mrr.64b-4t2c-ethip4-ip4scale2m-rnd-mrr
tests.vpp.perf.ip6.2n1l-100ge2p1cx556a-rdma-dot1q-ip6base-mrr.78b-2t1c-rdma-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-100ge2p1cx556a-rdma-dot1q-ip6base-mrr.78b-4t2c-rdma-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-100ge2p1cx556a-rdma-dot1q-ip6base-mrr.78b-8t4c-rdma-dot1q-ip6base-mrr
tests.vpp.perf.vm vhost.2n1l-25ge2p1xxv710-avf-dot1q-l2xcbase-eth-2vhostvr1024-1vm-mrr.64b-4t2c-avf-dot1q-l2xcbase-eth-2vhostvr1024-1vm-mrr
tests.vpp.perf.vm vhost.2n1l-25ge2p1xxv710-avf-eth-l2bdbasemaclrn-eth-2vhostvr1024-1vm-vppl2xc-mrr.64b-4t2c-avf-eth-l2bdbasemaclrn-eth-2vhostvr1024-1vm-vppl2xc-mrr


3n-hsw, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master/1148, VPP 
version: 20.05-rc0~554-gce815deb7~b1051

No regressions

3n-tsh, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-3n-tsh/182, 
VPP version: 20.05-rc0~554-gce815deb7~b3316

No regressions

3n-dnv, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-3n-dnv/389, 
VPP version: 20.05-rc0~554-gce815deb7~b1051

No regressions




Re: [vpp-dev] perftest stuck: "do_not_use_dut2_ssd_failure"

2020-04-20 Thread Dave Barach via lists.fd.io
Ack, thanks for the info.

From: Jan Gelety -X (jgelety - PANTHEON TECH SRO at Cisco) 
Sent: Monday, April 20, 2020 4:40 AM
To: Dave Barach (dbarach) ; csit-...@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: RE: perftest stuck: "do_not_use_dut2_ssd_failure"

Hello Dave,

3n-skx perf job has been aborted.

I guess you can use a 2n-skx testbed to test your changes, so please use the
trigger perftest-2n-skx.

The ETA for availability of the 3n-skx perf testbeds is unknown at the moment,
as we are waiting for new/repaired SSDs.

Regards,
Jan

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Dave Barach via lists.fd.io
Sent: Saturday, April 18, 2020 2:06 PM
To: csit-...@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] perftest stuck: "do_not_use_dut2_ssd_failure"

Folks,

I kicked off a "perftest-3n-skx" run for https://gerrit.fd.io/r/c/vpp/+/26549. 
24 hours later, the job is still stuck:

07:58:00 +++ python3 
/w/workspace/vpp-csit-verify-perf-master-3n-skx/csit/resources/tools/scripts/topo_reservation.py
 -t 
/w/workspace/vpp-csit-verify-perf-master-3n-skx/csit/topologies/available/lf_3n_skx_testbed31.yaml
 -r jenkins-vpp-csit-verify-perf-master-3n-skx-28
07:58:01 Diagnostic commands:
07:58:01 + ls --full-time -cd '/tmp/reservation_dir'/*
07:58:01 -rw-rw-r-- 1 testuser testuser 0 2020-04-14 00:54:01.698249847 -0700 
/tmp/reservation_dir/do_not_use_dut2_ssd_failure
07:58:01
07:58:01 Attempting testbed reservation.
07:58:01 Testbed already reserved by:
07:58:01 /tmp/reservation_dir/do_not_use_dut2_ssd_failure

Someone with the appropriate credentials might as well kill the job.

Is there an ETA for making Skylake per-patch performance testing available 
again?

Thanks... Dave


Re: [vpp-dev] [csit-report] Progressions as of 2020-04-19 02:00:13 UTC #email

2020-04-20 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via lists.fd.io
> 3n-hsw

The progressions show that the fixes described in [4]
do restore the previous performance.

Vratko.

[4] https://lists.fd.io/g/csit-report/message/2488

-Original Message-
From: csit-rep...@lists.fd.io  On Behalf Of 
nore...@jenkins.fd.io
Sent: Sunday, 2020-April-19 06:07
To: Fdio+Csit-Report via Email Integration 
Subject: [csit-report] Progressions as of 2020-04-19 02:00:13 UTC #email

Following progressions occurred in the last trending job runs, listed per testbed type.



2n-skx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-2n-skx/886, 
VPP version: 20.05-rc0~545-g8daeea9a5~b1042

tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-avf-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-2t1c-avf-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr
tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-avf-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-8t4c-avf-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr
tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-2t1c-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr
tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-ethip4-ip4base-eth-2memif-1dcr-mrr.64b-2t1c-ethip4-ip4base-eth-2memif-1dcr-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-dot1q-ip4base-mrr.64b-2t1c-avf-dot1q-ip4base-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-dot1q-ip4base-mrr.64b-8t4c-avf-dot1q-ip4base-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4base-mrr.64b-8t4c-avf-ethip4-ip4base-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale20k-mrr.64b-8t4c-avf-ethip4-ip4scale20k-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale200k-mrr.64b-8t4c-avf-ethip4-ip4scale200k-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale2m-mrr.64b-8t4c-avf-ethip4-ip4scale2m-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale20k-rnd-mrr.64b-8t4c-avf-ethip4-ip4scale20k-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale200k-rnd-mrr.64b-8t4c-avf-ethip4-ip4scale200k-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4-ip4scale2m-rnd-mrr.64b-8t4c-avf-ethip4-ip4scale2m-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-dot1q-ip4base-mrr.64b-2t1c-dot1q-ip4base-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4base-mrr.64b-8t4c-ethip4-ip4base-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale20k-mrr.64b-8t4c-ethip4-ip4scale20k-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale200k-mrr.64b-8t4c-ethip4-ip4scale200k-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale2m-mrr.64b-8t4c-ethip4-ip4scale2m-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale20k-rnd-mrr.64b-8t4c-ethip4-ip4scale20k-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-ethip4-ip4scale200k-rnd-mrr.64b-8t4c-ethip4-ip4scale200k-rnd-mrr
tests.vpp.perf.ip4.2n1l-25ge2p1xxv710-avf-ethip4udp-ip4base-iacl50sf-10kflows-mrr.64b-8t4c-avf-ethip4udp-ip4base-iacl50sf-10kflows-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-avf-eth-l2patch-mrr.64b-2t1c-avf-eth-l2patch-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-avf-eth-l2patch-mrr.64b-4t2c-avf-eth-l2patch-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-avf-eth-l2xcbase-mrr.64b-4t2c-avf-eth-l2xcbase-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-avf-eth-l2xcbase-mrr.64b-8t4c-avf-eth-l2xcbase-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-avf-eth-l2bdbasemaclrn-mrr.64b-8t4c-avf-eth-l2bdbasemaclrn-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-dot1q-l2bdbasemaclrn-mrr.64b-2t1c-dot1q-l2bdbasemaclrn-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-eth-l2xcbase-mrr.64b-2t1c-eth-l2xcbase-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-dot1q-l2bdbasemaclrn-mrr.64b-4t2c-dot1q-l2bdbasemaclrn-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-eth-l2patch-mrr.64b-4t2c-eth-l2patch-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-eth-l2xcbase-mrr.64b-4t2c-eth-l2xcbase-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-dot1q-l2xcbase-mrr.64b-8t4c-dot1q-l2xcbase-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-eth-l2patch-mrr.64b-8t4c-eth-l2patch-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-eth-l2xcbase-mrr.64b-8t4c-eth-l2xcbase-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-eth-l2bdbasemaclrn-mrr.64b-8t4c-eth-l2bdbasemaclrn-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-avf-eth-l2bdscale10kmaclrn-mrr.64b-8t4c-avf-eth-l2bdscale10kmaclrn-mrr
tests.vpp.perf.l2.2n1l-25ge2p1xxv710-eth-l2bdscale10kmaclrn-mrr.64b-2t1c-eth-l2bdscale10kmaclrn-mrr
tests.vpp.perf.vm vhost.2n1l-25ge2p1xxv710-avf-eth-l2xcbase-eth-2vhostvr1024-1vm-mrr.64b-2t1c-avf-eth-l2xcbase-eth-2vhostvr1024-1vm-mrr
tests.vpp.perf.vm vhost.2n1l-25ge2p1xxv710-avf-dot1q-l2xcbase-eth-2vhostvr1024-1vm-mrr.64b-4t2c-avf-dot1q-l2xcbase-eth-2vhostvr1024-1vm-mrr
tests.vpp.perf.vm vhost.2n1l-25ge2p1xxv710-avf-dot1q-l2bdbasemaclrn-eth-2vhostvr1024-1vm-mrr.64b-4t2c-avf-dot1q-l2bdbasemaclrn-eth-2vhostvr1024-1vm-mrr
tests.vpp.perf.vm vhost.2n1l-25ge2p1xxv710-avf-eth-l2xcbase-eth-2vhostvr1024-1vm-mrr.64b-4t2c-avf-eth-l2xcbase-eth-2vhostvr1024-1vm-mrr
tests.vpp.perf.vm 

Re: [vpp-dev] [csit-report] Regressions as of 2020-04-15 02:00:17 UTC #email

2020-04-20 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via lists.fd.io
> 3n-hsw

This turned out to be a bug on the CSIT side,
breaking NUMA detection and affecting every NIC on NUMA 1.
The bug was introduced in [0] and fixed in [1] and [2].

Vratko.

[0] https://gerrit.fd.io/r/c/csit/+/25363
[1] https://gerrit.fd.io/r/c/csit/+/26569
[2] https://gerrit.fd.io/r/c/csit/+/26572

-Original Message-
From: csit-rep...@lists.fd.io  On Behalf Of 
nore...@jenkins.fd.io
Sent: Wednesday, 2020-April-15 06:15
To: Fdio+Csit-Report via Email Integration 
Subject: [csit-report] Regressions as of 2020-04-15 02:00:17 UTC #email

Following regressions occurred in the last trending job runs, listed per testbed type.



2n-skx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-2n-skx/885, 
VPP version: 20.05-rc0~528-g7357043d2~b1025

tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-avf-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-8t4c-avf-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr
tests.vpp.perf.ip6.2n1l-10ge2p1x710-ethip6-ip6base-mrr.78b-2t1c-ethip6-ip6base-mrr
tests.vpp.perf.ip6.2n1l-25ge2p1xxv710-avf-ethip6-ip6base-mrr.78b-4t2c-avf-ethip6-ip6base-mrr
tests.vpp.perf.ip6.2n1l-25ge2p1xxv710-ethip6-ip6base-mrr.78b-2t1c-ethip6-ip6base-mrr
tests.vpp.perf.ip6.2n1l-25ge2p1xxv710-dot1q-ip6base-mrr.78b-4t2c-dot1q-ip6base-mrr
tests.vpp.perf.ip6.2n1l-25ge2p1xxv710-ethip6-ip6base-mrr.78b-4t2c-ethip6-ip6base-mrr


3n-skx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-3n-skx/864, 
VPP version: 20.05-rc0~509-g9cbfb4c51~b1006

tests.vpp.perf.l2.25ge2p1xxv710-dot1q-l2xcbase-mrr.64b-4t2c-dot1q-l2xcbase-mrr


2n-clx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-2n-clx/293, 
VPP version: 20.05-rc0~532-g4fde4ae03~b1029

tests.vpp.perf.container memif.2n1l-25ge2p1xxv710-avf-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-8t4c-avf-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr


3n-hsw, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master/1145, VPP 
version: 20.05-rc0~534-gd724e4f43~b1031

tests.vpp.perf.ip4.40ge2p1xl710-dot1q-ip4base-mrr.64b-1t1c-dot1q-ip4base-mrr
tests.vpp.perf.ip4.40ge2p1xl710-dot1q-ip4base-mrr.64b-2t2c-dot1q-ip4base-mrr
tests.vpp.perf.ip4 tunnels.40ge2p1xl710-ethip4vxlan-l2xcbase-mrr.64b-1t1c-ethip4vxlan-l2xcbase-mrr
tests.vpp.perf.ip4 tunnels.40ge2p1xl710-ethip4vxlan-l2bdbasemaclrn-mrr.64b-1t1c-ethip4vxlan-l2bdbasemaclrn-mrr
tests.vpp.perf.ip4 tunnels.40ge2p1xl710-ethip4vxlan-l2xcbase-mrr.64b-2t2c-ethip4vxlan-l2xcbase-mrr
tests.vpp.perf.ip4 tunnels.40ge2p1xl710-ethip4vxlan-l2bdbasemaclrn-mrr.64b-2t2c-ethip4vxlan-l2bdbasemaclrn-mrr
tests.vpp.perf.ip4 tunnels.40ge2p1xl710-ethip4vxlan-l2xcbase-mrr.64b-4t4c-ethip4vxlan-l2xcbase-mrr
tests.vpp.perf.ip4 tunnels.40ge2p1xl710-ethip4vxlan-l2bdbasemaclrn-mrr.64b-4t4c-ethip4vxlan-l2bdbasemaclrn-mrr
tests.vpp.perf.ip6.40ge2p1xl710-dot1q-ip6base-mrr.78b-1t1c-dot1q-ip6base-mrr
tests.vpp.perf.ip6.40ge2p1xl710-ethip6-ip6base-mrr.78b-1t1c-ethip6-ip6base-mrr
tests.vpp.perf.ip6.40ge2p1xl710-dot1q-ip6base-mrr.78b-2t2c-dot1q-ip6base-mrr
tests.vpp.perf.ip6.40ge2p1xl710-ethip6-ip6base-mrr.78b-2t2c-ethip6-ip6base-mrr
tests.vpp.perf.ip6.40ge2p1xl710-dot1q-ip6base-mrr.78b-4t4c-dot1q-ip6base-mrr
tests.vpp.perf.ip6.40ge2p1xl710-ethip6-ip6base-mrr.78b-4t4c-ethip6-ip6base-mrr
tests.vpp.perf.crypto.40ge2p1xl710-ethip4ipsec4tnlsw-ip4base-int-aes256gcm-mrr.imix-1t1c-ethip4ipsec4tnlsw-ip4base-int-aes256gcm-mrr
tests.vpp.perf.crypto.40ge2p1xl710-ethip4ipsec4tnlsw-ip4base-int-aes128cbc-hmac512sha-mrr.imix-1t1c-ethip4ipsec4tnlsw-ip4base-int-aes128cbc-hmac512sha-mrr
tests.vpp.perf.crypto.40ge2p1xl710-ethip4ipsec1000tnlsw-ip4base-int-aes256gcm-mrr.imix-1t1c-ethip4ipsec1000tnlsw-ip4base-int-aes256gcm-mrr
tests.vpp.perf.crypto.40ge2p1xl710-ethip4ipsec1000tnlsw-ip4base-int-aes128cbc-hmac512sha-mrr.imix-1t1c-ethip4ipsec1000tnlsw-ip4base-int-aes128cbc-hmac512sha-mrr
tests.vpp.perf.crypto.40ge2p1xl710-ethip4ipsec1tnlsw-ip4base-int-aes256gcm-mrr.imix-1t1c-ethip4ipsec1tnlsw-ip4base-int-aes256gcm-mrr
tests.vpp.perf.crypto.40ge2p1xl710-ethip4ipsec1tnlsw-ip4base-int-aes128cbc-hmac512sha-mrr.imix-1t1c-ethip4ipsec1tnlsw-ip4base-int-aes128cbc-hmac512sha-mrr
tests.vpp.perf.crypto.40ge2p1xl710-ethip4ipsec4tnlsw-ip4base-int-aes256gcm-mrr.imix-2t2c-ethip4ipsec4tnlsw-ip4base-int-aes256gcm-mrr
tests.vpp.perf.crypto.40ge2p1xl710-ethip4ipsec4tnlsw-ip4base-int-aes128cbc-hmac512sha-mrr.imix-2t2c-ethip4ipsec4tnlsw-ip4base-int-aes128cbc-hmac512sha-mrr
tests.vpp.perf.crypto.40ge2p1xl710-ethip4ipsec1000tnlsw-ip4base-int-aes256gcm-mrr.imix-2t2c-ethip4ipsec1000tnlsw-ip4base-int-aes256gcm-mrr
tests.vpp.perf.crypto.40ge2p1xl710-ethip4ipsec1000tnlsw-ip4base-int-aes128cbc-hmac512sha-mrr.imix-2t2c-ethip4ipsec1000tnlsw-ip4base-int-aes128cbc-hmac512sha-mrr
tests.vpp.perf.crypto.40ge2p1xl710-ethip4ipsec1tnlsw-ip4base-int-aes256gcm-mrr.imix-2t2c-ethip4ipsec1tnlsw-ip4base-int-aes256gcm-mrr

Re: [vpp-dev] perftest stuck: "do_not_use_dut2_ssd_failure"

2020-04-20 Thread Jan Gelety via lists.fd.io
Hello Dave,

3n-skx perf job has been aborted.

I guess you can use a 2n-skx testbed to test your changes, so please use the
trigger perftest-2n-skx.

The ETA for availability of the 3n-skx perf testbeds is unknown at the moment,
as we are waiting for new/repaired SSDs.

Regards,
Jan

From: vpp-dev@lists.fd.io  On Behalf Of Dave Barach via 
lists.fd.io
Sent: Saturday, April 18, 2020 2:06 PM
To: csit-...@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] perftest stuck: "do_not_use_dut2_ssd_failure"

Folks,

I kicked off a "perftest-3n-skx" run for https://gerrit.fd.io/r/c/vpp/+/26549. 
24 hours later, the job is still stuck:

07:58:00 +++ python3 
/w/workspace/vpp-csit-verify-perf-master-3n-skx/csit/resources/tools/scripts/topo_reservation.py
 -t 
/w/workspace/vpp-csit-verify-perf-master-3n-skx/csit/topologies/available/lf_3n_skx_testbed31.yaml
 -r jenkins-vpp-csit-verify-perf-master-3n-skx-28
07:58:01 Diagnostic commands:
07:58:01 + ls --full-time -cd '/tmp/reservation_dir'/*
07:58:01 -rw-rw-r-- 1 testuser testuser 0 2020-04-14 00:54:01.698249847 -0700 
/tmp/reservation_dir/do_not_use_dut2_ssd_failure
07:58:01
07:58:01 Attempting testbed reservation.
07:58:01 Testbed already reserved by:
07:58:01 /tmp/reservation_dir/do_not_use_dut2_ssd_failure

Someone with the appropriate credentials might as well kill the job.

Is there an ETA for making Skylake per-patch performance testing available 
again?

Thanks... Dave


Re: [vpp-dev] worker barrier state

2020-04-20 Thread Neale Ranns via lists.fd.io

Hi Chris,

Comments inline...

On 15/04/2020 15:14, "Christian Hopps"  wrote:

Hi Neale,

I agree that something like 4 is probably the correct approach. I had a 
side-meeting with some of the ARM folks (Govind and Honnappa), and we thought 
using a generation number for the state rather than just waiting "long-enough" 
to recycle it could work. The generation number would be the atomic value 
associated with the state. So consider this API:

 - MP-safe pools store generation numbers alongside each object.
 - When you allocate a new object from the pool you get an index and 
generation number.
 - When storing the object index you also save the generation number.
 - When getting a pointer to the object you pass the API the index and 
generation number, and it will return NULL if the generation number does not 
match the one stored with the object in the pool.
 - When you delete a pool object its generation number is incremented (with 
barrier).

The size of the generation number needs to be large enough to guarantee 
there is no wrap with objects still in the system that have stored the 
generation number. Technically this is a "long-enough" aspect of the scheme. :) 
One could imagine using less than 64 bits for the combination of index and 
generation, if that was important.
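
A minimal C sketch of such a handle, with hypothetical names (this is not an
existing VPP pool API, just an illustration of the scheme):

#include <stdint.h>
#include <stddef.h>

typedef struct { uint32_t spi; /* ... whatever the object holds ... */ } my_object_t;

typedef struct {
  uint32_t index;       /* slot in the pool's object vector */
  uint32_t generation;  /* generation captured when the handle was stored */
} object_handle_t;

typedef struct {
  my_object_t *objects;   /* backing vector of objects */
  uint32_t *generations;  /* one generation counter per slot */
} my_pool_t;

/* Dereference a stored handle: returns NULL if the slot was freed
 * (its generation bumped) since the handle was taken. */
static inline my_object_t *
handle_get (my_pool_t *p, object_handle_t h)
{
  uint32_t g = __atomic_load_n (&p->generations[h.index], __ATOMIC_ACQUIRE);
  return (g == h.generation) ? &p->objects[h.index] : NULL;
}

/* Delete: bump the generation so every outstanding handle goes stale;
 * the slot itself can then be recycled immediately. */
static inline void
handle_put (my_pool_t *p, object_handle_t h)
{
  __atomic_add_fetch (&p->generations[h.index], 1, __ATOMIC_RELEASE);
  /* ... return h.index to the pool's free list ... */
}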

It's a good scheme, I like it.
I assume the pool indices would be 64 bit and the separation between vector 
index and generation would be hidden from the user. Maybe a 32 bit value would 
suffice in most cases, but why skimp...

The advantage over just waiting N seconds to recycle the index is that the 
system scales better, i.e., if you just wait N seconds to reuse, and are 
creating and deleting objects at a significant rate, your pool can blow up 
within those N seconds. With the generation number this is not a problem as you 
can re-use the object immediately. Another advantage is that you don't have to 
have the timer logic (looping per pool or processing all pools) to free up old 
indices.

Yes, for my time-based scheme the size of the pool becomes dependent on some 
integration over a rate of change, which is not deterministic and hence not 
great, but I don't suppose all APIs are subject to large churn.
With the generation scheme the pool always requires more memory, since you're 
storing a generation value for each index, but since that is a deterministic 
size (even though probably bigger), I'd probably take that.
I wouldn't use timer logic in my scheme. I'd make the pool's free-list a FIFO 
(as opposed to the stack it is today), and each entry in the list has the index 
and the time it was added. If t_now - t_head > t_wrap I can pull from the 
free-list, else the pool needs to grow.
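
A rough C sketch of that free-list, assuming t_wrap is the minimum time an
index must sit on the list before it may be handed out again (all names are
hypothetical):

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct free_entry {
  uint32_t index;            /* pool slot being recycled */
  double   freed_at;         /* time the slot was put on the free list */
  struct free_entry *next;
} free_entry_t;

typedef struct {
  free_entry_t *head;        /* oldest freed entry (FIFO order) */
  free_entry_t *tail;
  double t_wrap;             /* quarantine interval, in seconds */
} free_fifo_t;

/* Try to reuse an index; returns false if even the oldest entry is still
 * within the quarantine interval, in which case the pool must grow instead. */
static bool
free_fifo_pull (free_fifo_t *f, double t_now, uint32_t *index_out)
{
  free_entry_t *e = f->head;
  if (e == NULL || (t_now - e->freed_at) < f->t_wrap)
    return false;
  *index_out = e->index;
  f->head = e->next;
  if (f->head == NULL)
    f->tail = NULL;
  /* the caller recycles or frees 'e' itself */
  return true;
}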

The generation number scheme will still need the thread barrier to 
increment the generation number to make sure no-one is using the object in 
parallel. But this is a common problem with deleting non-reference-counted 
shared state I believe.

I don't think you strictly need the barrier; you can still use a 
make-before-break update. One downside of the generation approach is that nodes 
that try to fetch the state using the index will get NULL, so the only option 
is to drop, as opposed to what the make-before-break change determined. Mind 
you, this is probably fine for most practical purposes. Again, if we're talking 
SAs, then at this point the SA is decoupled from the graph (i.e. it's no longer 
protecting the tunnel or it's not linked to a policy in the SPD), so drop is 
all we can do anyway.

When you mentioned packet counters, that's really a reference count I guess. 
The trade-off here seems to me to be 2 cache-line invalidates per packet (once 
on ingress, once on egress) for the counter vs a barrier hit (all packet 
processing stops) per delete of the state. For the setup in which you measured 
the packet-counter solution, how long does it spend from the barrier sync 
request to its release (i.e., how long is the system not processing packets)?

As an example, in the basic test setup I had that measured the increase in 
clock cycles for adj counters, here's the time taken for the CLI to execute the 
addition of two IPsec tunnels:
   3.786220665: cli-cmd: create ipsec tunnel
   3.786540648: cli-cmd: create ipsec tunnel OK
   3.786544389: cli-cmd: create ipsec tunnel
   3.786577392: cli-cmd: create ipsec tunnel OK

(collected with 'elog trace cli' and 'sh event-logger')

I see it as a trade-off between a cost for every packet forwarded versus how 
many packets may be dropped during API calls. I wouldn't want the scheme 
employed to ensure safe delete to affect the overall packet throughput - most 
of the time I'm not changing the state...

Now that we have a few potential schemes in mind, IIRC your focus was on the 
deletion of SAs. Can you remind me again what additional state you had 
associated with the SA that you needed to deal with?


/neale


Thanks,
Chris.

> On Apr 15, 2020, at 

[vpp-dev] Query regarding bonding in Vpp 19.08

2020-04-20 Thread chetan bhasin
Hi,

I am using VPP 19.08. When I use a bonding configuration, I see the below
output from the "show int" CLI.
Query: Is it OK for the "show interface" CLI to show the status of a slave
interface as up while, as per "show hardware-interfaces", it is down?

vpp# show int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
BondEthernet0                      3     up    9000/0/0/0                rx packets          12
BondEthernet0.811                  4     up    0/0/0/0                   rx packets           6
BondEthernet0.812                  5     up    0/0/0/0                   rx packets           6
device_5d/0/0                      1     up    9000/0/0/0                rx packets          12
device_5d/0/1                      2     up    9000/0/0/0                rx packets          17
                                                                         rx bytes          1100
                                                                         drops               14
local0                             0     down  0/0/0/0

Thanks,
Chetan