Hi list,

Recently I ran into a dead loop in VPP (based on 21.10) running in 1 master + 1 worker
mode (it also happens in 1 master + 0 worker mode) while applying some configuration
through govpp.

The thread state in gdb is:
(gdb) info threads
  Id   Target Id                                            Frame
* 1    Thread 0x7f735c3287c0 (LWP 274213) "vpp_main"        internal_mallinfo (m=0x7f731c227040) at /home/fortitude/glx/vpp/src/vppinfra/dlmalloc.c:2100
  2    Thread 0x7f73127d2700 (LWP 274214) "eal-intr-thread" 0x00007f735c5f449e in epoll_wait (epfd=16, events=0x7f73127d1d30, maxevents=7, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
  3    Thread 0x7f7311fd1700 (LWP 274215) "vpp_wk_0"        0x00007f735c5d774b in sched_yield () at ../sysdeps/unix/syscall-template.S:78

From what I observed, it is the inner loop (the segment_holds while loop) that becomes a
dead loop, and I still have not found the reason (see the code at the bottom of this mail).

There is an old mail on this list, https://lists.fd.io/mt/89947053/675661, reporting the
same issue, but it got no answer and I can't reply to that thread, so I'm creating this one.

I hope someone can give me a clue on this.

static struct dlmallinfo internal_mallinfo(mstate m) {
  struct dlmallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
  ensure_initialization();
  if (!PREACTION(m)) {
    check_malloc_state(m);
    if (is_initialized(m)) {
      size_t nfree = SIZE_T_ONE; /* top always free */
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
      size_t sum = mfree;
      msegmentptr s = &m->seg;
      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          size_t sz = chunksize(q);
          sum += sz;
          if (!is_inuse(q)) {
            mfree += sz;
            ++nfree;
          }
          q = next_chunk(q);
        }
        s = s->next;
      }