Hi Akshaya,
Glad you were able to solve the issue. We’re slowly moving away from shm fifo
segments, i.e., /dev/shm segments, in favor of memfd segments.
Regards,
Florin
> On Nov 4, 2019, at 3:11 AM, Akshaya Nadahalli wrote:
>
> Hi Florin,
>
> This crash was due to setting a hard limit of 100 MB on the /dev/shm
> partition. After increasing that, I am able to scale to more connections.
>
> For fifo allocation, I see that we can use either shm or memfd. Is there
> any recommendation/preference on which one to use? Does memfd also
> internally
> *From:* vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] *On Behalf Of
> *Akshaya Nadahalli
> *Sent:* Thursday, October 24, 2019 4:38 PM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] VPP crash in TCP FIFO allocation
>
> Hi,
>
> While testing VPP hoststack with a large number of TCP connections, I see a
> VPP crash in fifo allocation. The crash is always seen between 11k and 12k
> TCP connections. Changing vcl config -
> segment-size/add-segment-size/rx-fifo-size
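[Editor's note] For reference, the VCL parameters named in the original report are set in a `vcl.conf` section. A minimal sketch follows; the numeric values are placeholders for illustration, not recommendations, and sizes are in bytes:

```
vcl {
  segment-size 4000000000       # initial fifo segment size
  add-segment-size 128000000    # size of segments added when the first fills
  rx-fifo-size 16384            # per-session receive fifo
  tx-fifo-size 16384            # per-session transmit fifo
}
```

With many thousands of sessions, total fifo memory is roughly sessions × (rx + tx fifo size), which is why segment sizing and the /dev/shm limit interact at the ~11k-connection scale reported above.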