Re: [vpp-dev] CI Job failures: Java Connection Closed Exception

2021-07-19 Thread Dave Wallace

Folks,

Vanessa performed a Jenkins reset at my request to see if that would 
resolve this problem.  Unfortunately, the Jenkins reset did not resolve 
the connection resets.  A recheck of the gerrit change after the Jenkins 
restart failed with multiple job failures due to TCP connection resets:


https://gerrit.fd.io/r/c/vpp/+/32858/6#message-c77806c2fd58c3c00935e1b5589a402e4b670f9f

There has also been no correlation with Ping Monitor events, Nomad 
cluster events, or any particular Nomad host, subnet, or docker image.


Investigation continues on the data path between the Jenkins OpenStack 
instance and the Nomad cluster.


Thanks again for your patience.
-daw-

On 7/19/2021 11:29 AM, Dave Wallace via lists.fd.io wrote:

Folks,

There have been large numbers of CI job failures due to 'Java Connection 
Closed Exception' which appear to have started on July 17.


I have opened a ticket with Vexxhost and am actively diagnosing the 
problem with them.


Thank you for your patience while the issue is being resolved.
-daw-









[vpp-dev] CI Job failures: Java Connection Closed Exception

2021-07-19 Thread Dave Wallace

Folks,

There have been large numbers of CI job failures due to 'Java Connection 
Closed Exception' which appear to have started on July 17.


I have opened a ticket with Vexxhost and am actively diagnosing the 
problem with them.


Thank you for your patience while the issue is being resolved.
-daw-




[vpp-dev] VPP 21.06 - RDMA - Cannot allocate memory

2021-07-19 Thread Sergio Tur
Hi. We're having some trouble making the native rdma driver work in VPP
21.06 on CentOS 7.9, installed inside an Azure VM with accelerated NICs.
We followed the instructions in the VPP wiki, but when trying to create an
interface it returns the error: "Cannot allocate memory".

Has anyone managed to make VPP + rdma (no DPDK) work on CentOS 7.9 inside a
VM on Azure?
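
For reference, a minimal sketch of the interface creation we are attempting,
assuming the standard rdma plugin CLI syntax and using eth3 (one of the
accelerated VFs shown in the lshw output further down) as an example:

vpp# create interface rdma host-if eth3 name rdma-0

It is this create step that comes back with "Cannot allocate memory".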

Below is some platform info and some of the steps we followed.

--- We upgraded the kernel (just in case), though it fails with the same
error on the original 3.x kernel that ships by default with CentOS 7.9

[root@testvm ~]# uname -r
5.13.2-1.el7.elrepo.x86_64

--- We made sure the memlock ulimit is set to unlimited

[root@testvm ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63951
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63951
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

--- We disabled selinux (we want to make it work first)

[root@testvm ~]# getenforce
Disabled

--- We loaded ib_uverbs

[root@testvm ~]# modprobe ib_uverbs
[root@testvm ~]# lsmod | grep ib_uverbs
ib_uverbs             147456  1 mlx5_ib
ib_core               356352  2 ib_uverbs,mlx5_ib
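
Side note: to make the module load persist across reboots, the usual systemd
mechanism on CentOS 7 is a modules-load.d entry, for example:

[root@testvm ~]# echo ib_uverbs > /etc/modules-load.d/ib_uverbs.conf

This is unrelated to the error itself, just housekeeping.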

--- We checked that the accelerated network cards were actually there and had
a compatible Mellanox driver (network 0 is the management interface and is not
accelerated)

[root@testvm ~]# lshw -c network
  *-network:0
       description: Ethernet interface
       product: MT27710 Family [ConnectX-4 Lx Virtual Function]
       vendor: Mellanox Technologies
       physical id: 1
       bus info: pci@731a:00:02.0
       logical name: eth4
       version: 80
       serial: 00:0d:3a:be:ab:34
       width: 64 bits
       clock: 33MHz
       capabilities: pciexpress msix bus_master cap_list ethernet physical autonegotiation
       configuration: autonegotiation=off broadcast=yes driver=mlx5_core driverversion=5.13.2-1.el7.elrepo.x86_64 firmware=14.25.8368 (MSF0010110035) latency=0 link=yes multicast=yes slave=yes
       resources: iomemory:f0-ef irq:0 memory:fe010-fe01f
  *-network:1
       description: Ethernet interface
       product: MT27710 Family [ConnectX-4 Lx Virtual Function]
       vendor: Mellanox Technologies
       physical id: 2
       bus info: pci@ebc3:00:02.0
       logical name: eth3
       version: 80
       serial: 00:0d:3a:be:aa:e5
       width: 64 bits
       clock: 33MHz
       capabilities: pciexpress msix bus_master cap_list ethernet physical autonegotiation
       configuration: autonegotiation=off broadcast=yes driver=mlx5_core driverversion=5.13.2-1.el7.elrepo.x86_64 firmware=14.25.8368 (MSF0010110035) latency=0 link=yes multicast=yes slave=yes
       resources: iomemory:f0-ef irq:0 memory:fe000-fe00f
  *-network:0
       description: Ethernet interface
       physical id: 1
       logical name: eth0
       serial: 00:0d:3a:29:31:01
       capabilities: ethernet physical
       configuration: autonegotiation=off broadcast=yes driver=hv_netvsc driverversion=5.13.2-1.el7.elrepo.x86_64 duplex=full firmware=N/A ip=10.99.4.4 link=yes multicast=yes
  *-network:1
       description: Ethernet interface
       physical id: 2
       logical name: eth1
       serial: 00:0d:3a:be:aa:e5
       capabilities: ethernet physical autonegotiation
       configuration: autonegotiation=off broadcast=yes driver=hv_netvsc driverversion=5.13.2-1.el7.elrepo.x86_64 firmware=N/A ip=10.99.6.4 link=yes multicast=yes
  *-network:2
       description: Ethernet interface
       physical id: 3
       logical name: eth2
       serial: 00:0d:3a:be:ab:34
       capabilities: ethernet physical autonegotiation
       configuration: autonegotiation=off broadcast=yes driver=hv_netvsc driverversion=5.13.2-1.el7.elrepo.x86_64 firmware=N/A ip=10.99.9.4 link=yes multicast=yes

--- We checked there's plenty of free memory

[root@testvm ~]# cat /proc/meminfo | grep MemTotal
MemTotal:       16396568 kB

--- This is our very simple vpp test configuration

[root@testvm ~]# nano /etc/vpp/startup.conf

unix {
  interactive
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

socksvr {
  default
}

cpu {
  workers 2
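
Worth noting: the ulimit output above only applies when vpp is launched from
that shell (as with the interactive setting in this config). If vpp were
started as a systemd service instead, the locked-memory limit would come from
the unit file, and rdma/verbs can fail with "Cannot allocate memory" when it
is too low. A sketch of a drop-in that lifts it (file path and values are just
an example):

# /etc/systemd/system/vpp.service.d/memlock.conf
[Service]
LimitMEMLOCK=infinity

followed by "systemctl daemon-reload" and a restart of the service.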

Re: [vpp-dev] vpp main thread crashed at mspace_put

2021-07-19 Thread Satya Murthy
Thanks Sudhir for the quick inputs.
We will check if we have any leaks.

Also, as a side question: how are you finding leaks in VPP?
Are you using AddressSanitizer or something else?

--
Thanks & Regards,
Murthy
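
For anyone with the same question: the AddressSanitizer route is supported in
the VPP build system itself. The cmake option name below is written from
memory of the VPP sanitizer docs, so treat it as a sketch and verify against
your tree:

make build VPP_EXTRA_CMAKE_ARGS=-DVPP_ENABLE_SANITIZE_ADDR=ON

i.e. a debug build with ASan instrumentation, which reports leaks and
use-after-free with the offending allocation call stack.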




Re: [vpp-dev] vpp main thread crashed at mspace_put

2021-07-19 Thread Sudhir CR via lists.fd.io
Hi Murthy,
We observed this issue when memory was exhausted in our system (due to a
memory leak in our application).
After fixing that leak, we have not observed the crash again.
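
A quick way to watch main-heap usage while chasing something like this is the
debug CLI; the memory-trace syntax below is from memory and may vary between
releases:

vppctl show memory main-heap verbose
vppctl memory-trace on main-heap
# ... reproduce the workload for a while ...
vppctl show memory main-heap verbose

With the trace enabled, the second "show memory" also lists outstanding
allocations by call stack, which helps narrow down a leak.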

Regards,
Sudhir

On Mon, Jul 19, 2021 at 4:46 PM Satya Murthy wrote:

> Hi Sudhir,
>
> Were you able to find a solution to this problem.
> We are also facing similar issue.
>
> Any inputs would be helpful.
>
> --
> Thanks & Regards,
> Murthy
> 
>
>





Re: [vpp-dev] VPP getting hanged after consecutive VAPI requests

2021-07-19 Thread Satya Murthy
Hi Chinmaya,

We are also facing a similar issue and want to check with you whether you were
able to find a fix for this.
Any inputs regarding the same would be appreciated.

--
Thanks & Regards,
Murthy




Re: [vpp-dev] vpp main thread crashed at mspace_put

2021-07-19 Thread Satya Murthy
Hi Sudhir,

Were you able to find a solution to this problem?
We are also facing a similar issue.

Any inputs would be helpful.

--
Thanks & Regards,
Murthy




Re: [vpp-dev] Reason for removing SUSE packaging support

2021-07-19 Thread Damjan Marion via lists.fd.io

Simply because nobody volunteered to maintain it.

— 
Damjan

> 
> On 19.07.2021., at 12:04, Laszlo Király  wrote:
> 
> 
> Hello,
> 
> Could somebody explain why the build support for SUSE was removed?
> Which was the last release that supported building on openSUSE? I found only
> this commit mentioning the removal:
> 
> commit bc35f469c89daf0126937580b6972516b5007d3a
> Author: Dave Wallace 
> Date:   Fri Sep 18 15:35:01 2020 +
> 
> build: remove opensuse build infra
>
> - VPP on opensuse has not been supported
>   for several releases.
>
> Type: fix
>
> Signed-off-by: Dave Wallace 
> Change-Id: I2b5316ad5c20a843b8936f4ceb473f932a5338d9
> 
> 
> Is it planned to add it back soon? Or later?
> 
> --
> Laszlo Kiraly
> Ericsson Software Technology
> laszlo.kir...@est.tech
> 
> 
> 




[vpp-dev] Reason for removing SUSE packaging support

2021-07-19 Thread Laszlo Király
Hello,

Could somebody explain why the build support for SUSE was removed?
Which was the last release that supported building on openSUSE? I found only
this commit mentioning the removal:

commit bc35f469c89daf0126937580b6972516b5007d3a
Author: Dave Wallace 
Date:   Fri Sep 18 15:35:01 2020 +

build: remove opensuse build infra

- VPP on opensuse has not been supported
  for several releases.

Type: fix

Signed-off-by: Dave Wallace 
Change-Id: I2b5316ad5c20a843b8936f4ceb473f932a5338d9


Is it planned to add it back soon? Or later?

--
Laszlo Kiraly
Ericsson Software Technology
laszlo.kir...@est.tech




Re: [vpp-dev] Buffer chains and pre-data area

2021-07-19 Thread Benoit Ganne (bganne) via lists.fd.io
> I guess I should also modify the value of DPDK's RTE_PKTMBUF_HEADROOM, but
> I don't know how I can do it? Indeed, I can't find where this variable is
> defined.

You should be able to change it here: 
https://git.fd.io/vpp/tree/build/external/packages/dpdk.mk#n14

Best
ben
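
For completeness, a sketch of what that change typically looks like; the
variable name is what I recall being defined around that line of dpdk.mk, so
please double-check it in your tree, and the value is only an example:

# build/external/packages/dpdk.mk
DPDK_PKTMBUF_HEADROOM        ?= 256

# then rebuild the DPDK external deps and VPP so the new headroom is picked up
make install-ext-deps
make build-release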
