Thanks Dave for the tip on core compression.

I was able to solve the issue of the huge VSZ resulting in huge cores
after all -- the culprit is DPDK.
DPDK has a parameter called CONFIG_RTE_MAX_MEM_MB which can be set to a
value lower than the default to cap how much memory DPDK reserves.
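
A sketch of one way to lower it (this assumes a legacy make-based DPDK build
where the option lives in config/common_base; the 16384 value is just an
example, pick whatever fits your deployment):

  # Lowering CONFIG_RTE_MAX_MEM_MB shrinks the virtual address space DPDK
  # reserves up front, and with it the process VSZ and the resulting core size.
  sed -i 's/^CONFIG_RTE_MAX_MEM_MB=.*/CONFIG_RTE_MAX_MEM_MB=16384/' config/common_base
  grep CONFIG_RTE_MAX_MEM_MB config/common_base   # verify the new value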

Regards
-Prashant

On Tue, Feb 4, 2020 at 5:22 PM Dave Barach (dbarach) <dbar...@cisco.com> wrote:
>
> As Ben wrote, please check out: 
> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html
>
> Note the section(s) on core file handling; in particular, how to set up 
> on-the-fly core file compression...:
>
> Depending on operational requirements, it’s possible to compress corefiles as 
> they are generated. Please note that it takes several seconds’ worth of 
> wall-clock time to compress a vpp core file on the fly, during which all 
> packet processing activities are suspended.
>
> To create compressed core files on the fly, create the following script, e.g. 
> in /usr/local/bin/compressed_corefiles, owned by root, executable:
>
> #!/bin/sh
> exec /bin/gzip -f - >"/tmp/dumps/core-$1.$2.gz"
>
> Adjust the kernel core file pattern as shown:
>
> sysctl -w kernel.core_pattern="|/usr/local/bin/compressed_corefiles %e %t"
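>
> A quick sanity check of the setup might look like this (same paths as above;
> just a sketch, and it assumes core dumps are enabled for the vpp process,
> e.g. with ulimit -c unlimited):
>
> mkdir -p /tmp/dumps                           # directory the script writes into
> chmod +x /usr/local/bin/compressed_corefiles  # script must be executable
> sysctl kernel.core_pattern                    # confirm the pipe pattern took effect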
>
> HTH... Dave
>
> -----Original Message-----
> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Prashant 
> Upadhyaya
> Sent: Tuesday, February 4, 2020 4:38 AM
> To: Benoit Ganne (bganne) <bga...@cisco.com>
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
>
> Thanks Benoit.
> I don't have the core files at the moment (still taming the huge cores that
> are generated, so core dumps were disabled on this setup). Backtraces, with
> the corresponding values of the parameter, are at https://pastebin.com/1YS3ZWeb.
> It is a dual-NUMA setup.
>
> Regards
> -Prashant
>
>
> On Tue, Feb 4, 2020 at 1:55 PM Benoit Ganne (bganne) <bga...@cisco.com> wrote:
> >
> > Hi Prashant,
> >
> > Can you share your configuration and at least a backtrace of the
> > crash? Or even better a corefile:
> > https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html
> >
> > Best
> > ben
> >
> > > -----Original Message-----
> > > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of
> > > Prashant Upadhyaya
> > > Sent: mardi 4 février 2020 09:15
> > > To: vpp-dev@lists.fd.io
> > > Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
> > >
> > > Whoops, my mistake. I think I multiplied by an extra factor of 1024.
> > > Mbufs are 2 KB, not 2 MB (2 MB is the huge page size).
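> > >
> > > (Rough arithmetic with the 250000 figure from my original mail below:
> > > 250000 mbufs x 2 KB comes to about 0.5 GB, whereas 250000 x 2 MB would
> > > have been roughly 500 GB.)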
> > >
> > > But the fact remains that my use case is unstable with a higher number of
> > > configured buffers, yet stable at lower values like 100000 (this may well
> > > be an issue specific to my use case or code).
> > >
> > > If anybody else is facing issues with a higher number of configured
> > > buffers, please do share.
> > >
> > > Regards
> > > -Prashant
> > >
> > >
> > > On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
> > > <praupadhy...@gmail.com> wrote:
> > > >
> > > > Hi,
> > > >
> > > > I am using the DPDK plugin with VPP 19.08.
> > > > When I set the buffers-per-numa parameter to a high value, say
> > > > 250000, I am seeing crashes in the system.
> > > >
> > > > (The corresponding parameter controlling the number of mbufs in
> > > > VPP 18.01, num-mbufs in the dpdk config section, used to work well;
> > > > see the startup.conf sketch below.)
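> > > >
> > > > For reference, a rough startup.conf sketch of the two settings, using
> > > > the value discussed here (not a complete config):
> > > >
> > > > # VPP 19.08
> > > > buffers {
> > > >   buffers-per-numa 250000
> > > > }
> > > >
> > > > # VPP 18.01
> > > > dpdk {
> > > >   num-mbufs 250000
> > > > }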
> > > >
> > > > I quickly checked in VPP 19.08 that vlib_buffer_main_t uses uword
> > > > fields for the buffer memory bookkeeping:
> > > >
> > > >   uword buffer_mem_start;
> > > >   uword buffer_mem_size;
> > > >
> > > > Could it be a memory-size overflow when the buffers-per-numa
> > > > parameter is set to a high value?
> > > > I do need a high number of DPDK mbufs for my use case.
> > > >
> > > > Regards
> > > > -Prashant