Re: [vpp-dev] vpp 19.08 stuck at internal_mallinfo with 8 workers configuration
Hi Shiva,

After back-porting the fix, VPP is coming up fine even with 16 workers. Thanks a lot!

Regards,
Chetan Bhasin
Re: [vpp-dev] vpp 19.08 stuck at internal_mallinfo with 8 workers configuration
Thanks, Shiva, for the reply!

We are using VPP 19.08.1. Let me try with the fix provided.

Thanks,
Chetan Bhasin
Re: [vpp-dev] vpp 19.08 stuck at internal_mallinfo with 8 workers configuration
Hi Chetan,

Are you using the latest master code? If not, can you verify your issue with the commit below?
https://gerrit.fd.io/r/#/c/vpp/+/22527/
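For context on why a change in this area can help: the hang reported below is inside the stats collector's periodic heap scan (do_stat_segment_updates -> mheap_usage -> mspace_mallinfo), and that scan's cost grows with heap size. One common shape for this kind of fix is to make the expensive whole-heap scan cheaper or less frequent. The sketch below shows the rate-limiting variant purely as an illustration of the general pattern; it is not taken from change 22527, and the names (update_cheap_counters, scan_heap_usage, HEAP_SCAN_EVERY_N_TICKS) are hypothetical.

/* Hypothetical sketch: collect cheap counters every tick, but run the
   expensive heap scan only every Nth tick.  Not VPP code and not the
   contents of gerrit change 22527. */
#include <stdio.h>
#include <unistd.h>

#define HEAP_SCAN_EVERY_N_TICKS 10

static void
update_cheap_counters (unsigned tick)
{
  /* Stand-in for the inexpensive per-interval counter updates. */
  printf ("tick %u: updated counters\n", tick);
}

static void
scan_heap_usage (unsigned tick)
{
  /* Stand-in for the expensive mallinfo-style whole-heap walk. */
  printf ("tick %u: scanned heap usage\n", tick);
}

int
main (void)
{
  unsigned tick;
  for (tick = 0; tick < 30; tick++)
    {
      update_cheap_counters (tick);
      if (tick % HEAP_SCAN_EVERY_N_TICKS == 0)
        scan_heap_usage (tick);
      sleep (1); /* stand-in for vlib_process_suspend () in a process node */
    }
  return 0;
}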
[vpp-dev] vpp 19.08 stuck at internal_mallinfo with 8 workers configuration
Hello Everyone,

Merry Christmas everybody!

We were using VPP 18.01 earlier, which worked fine with 16 workers. We have now moved to VPP 19.08 and are facing the problem below.

A direction from you is much appreciated.

Thread 1 (Thread 0x2b027f144ec0 (LWP 68074)):
#0  0x2b028125d448 in internal_mallinfo (m=0x2b050cacb010) at third-party/vpp/vpp_1908/src/vppinfra/dlmalloc.c:2094
#1  mspace_mallinfo (msp=0x2b050cacb010) at third-party/vpp/vpp_1908/src/vppinfra/dlmalloc.c:4797
#2  0x2b028125fcb6 in mheap_usage (heap=<optimized out>, usage=usage@entry=0x2b0284d04f40) at third-party/vpp/vpp_1908/src/vppinfra/mem_dlmalloc.c:389
#3  0x0040a49e in do_stat_segment_updates (sm=0x6cca00 <stat_segment_main>) at third-party/vpp/vpp_1908/src/vpp/stats/stat_segment.c:620
#4  stat_segment_collector_process (vm=0x2b028096dc40 <vlib_global_main>, rt=<optimized out>, f=<optimized out>) at third-party/vpp/vpp_1908/src/vpp/stats/stat_segment.c:717
#5  0x2b02807022d6 in vlib_process_bootstrap (_a=<optimized out>) at third-party/vpp/vpp_1908/src/vlib/main.c:2754
#6  0x2b02811fff54 in clib_calljmp ()

Thanks,
Chetan Bhasin
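A note on the backtrace: internal_mallinfo () in src/vppinfra/dlmalloc.c computes heap usage by walking every chunk of every segment in the mspace while holding the heap lock, so a single scan costs time proportional to the number of chunks. With more workers, and a correspondingly larger main heap, each scan by the stats collector takes longer, which can make startup appear stuck rather than crashed. The toy program below imitates that per-chunk traversal to show the linear cost; it is only a sketch, the names (toy_chunk, toy_mallinfo) are hypothetical, and it is not VPP code.

/* Toy model of a mallinfo-style heap walk: cost is O(number of chunks). */
#include <stdio.h>
#include <stdlib.h>

typedef struct toy_chunk
{
  size_t size;            /* chunk size in bytes */
  int in_use;             /* 1 if allocated, 0 if free */
  struct toy_chunk *next; /* next chunk in the segment */
} toy_chunk_t;

typedef struct
{
  size_t total_bytes;     /* sum of all chunk sizes */
  size_t free_bytes;      /* sum of free chunk sizes */
  size_t n_free_chunks;   /* count of free chunks */
} toy_mallinfo_t;

/* Linear walk over all chunks -- the analogue of internal_mallinfo (). */
static toy_mallinfo_t
toy_mallinfo (toy_chunk_t *first)
{
  toy_mallinfo_t mi = { 0, 0, 0 };
  toy_chunk_t *q;
  for (q = first; q != NULL; q = q->next)
    {
      mi.total_bytes += q->size;
      if (!q->in_use)
        {
          mi.free_bytes += q->size;
          mi.n_free_chunks++;
        }
    }
  return mi;
}

int
main (void)
{
  /* Build one segment with many small chunks; a bigger heap means a
     proportionally longer scan. */
  enum { N = 1000000 };
  toy_chunk_t *chunks = calloc (N, sizeof (toy_chunk_t));
  int i;
  for (i = 0; i < N; i++)
    {
      chunks[i].size = 64;
      chunks[i].in_use = (i % 3) != 0;
      chunks[i].next = (i + 1 < N) ? &chunks[i + 1] : NULL;
    }
  toy_mallinfo_t mi = toy_mallinfo (&chunks[0]);
  printf ("total=%zu free=%zu nfree=%zu\n",
          mi.total_bytes, mi.free_bytes, mi.n_free_chunks);
  free (chunks);
  return 0;
}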