On Sun, Dec 14, 2025 at 04:23:02PM +0000, PR Bot wrote:
> Subject: [PR] Clarify backtrace instructions in Bug.yml
> Dear list!
> 
> Author: IFV <[email protected]>
> Number of patches: 1
> 
> This is an automated relay of the Github pull request:
>    Clarify backtrace instructions in Bug.yml
> 
> Patch title(s): 
>    Clarify backtrace instructions in Bug.yml
> 
> Link:
>    https://github.com/haproxy/haproxy/pull/3219
> 
> Edit locally:
>    wget https://github.com/haproxy/haproxy/pull/3219.patch && vi 3219.patch
> 
> Apply locally:
>    curl https://github.com/haproxy/haproxy/pull/3219.patch | git am -
> 
> Description:
>    Added clarification for obtaining a backtrace from a coredump in the
>    bug report template. --Generated by Copy-a-lot
> 
> Instructions:
>    This github pull request will be closed automatically; patch should be
>    reviewed on the haproxy mailing list ([email protected]). Everyone is
>    invited to comment, even the patch's author. Please keep the author and
>    list CCed in replies. Please note that in absence of any response this
>    pull request will be lost.

I don't quite understand the purpose of your patch; there's no need to disable
the master-worker mode in order to get a backtrace. Most people get their
coredumps from the haproxy instance started by the systemd unit file.
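
Note that for a core to be produced at all, the service may need its core
size limit raised, and haproxy may need to mark itself dumpable again after
it drops privileges. A minimal sketch, assuming a systemd setup (the unit
name and paths may differ on your distribution):

# e.g. via "systemctl edit haproxy.service":
[Service]
LimitCORE=infinity

# and in the global section of haproxy.cfg:
global
    set-dumpable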

You just have to grab the corefile generated by your crash and open it with
gdb afterwards. I tend to use the coredumpctl utility for that, since it
makes it easy to start gdb on the latest coredump.
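
For example, assuming systemd-coredump is in use on the machine (the match
argument here is simply the process name):

% coredumpctl list haproxy
% coredumpctl gdb haproxy
(gdb) t a a bt full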

But if you are trying to obtain a backtrace by starting haproxy directly from
gdb (which I don't recommend most of the time), you just have to follow the
forks. For example, once the process below is running, trigger a panic from a
second terminal:

% echo "@1; expert-mode on; debug dev panic" | socat /tmp/master.sock -

then the gdb session itself, with gdb told to follow into the worker:
% gdb --args ./haproxy -W -f haproxy.cfg -S /tmp/master.sock
GNU gdb (Ubuntu 16.2-8ubuntu1) 16.2
Copyright (C) 2024 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./haproxy...
(gdb) set follow-fork-mode child
(gdb) r
Starting program: /home/wla/Projects/haproxy/haproxy -W -f haproxy.cfg -S /tmp/master.sock

This GDB supports auto-downloading debuginfo from the following URLs:
  <https://debuginfod.ubuntu.com>
Enable debuginfod for this session? (y or [n]) n
Debuginfod has been disabled.
To make this setting permanent, add 'set debuginfod enabled off' to .gdbinit.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff7bff6c0 (LWP 18867)]
[Thread 0x7ffff7bff6c0 (LWP 18867) exited]
[Attaching after Thread 0x7ffff7f46b00 (LWP 18864) fork to child process 18868]
[New inferior 2 (process 18868)]
[Detaching after fork from parent process 18864]
[Inferior 1 (process 18864) detached]
[NOTICE]   (18864) : Initializing new worker (18868)
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[NOTICE]   (18868) : haproxy version is 3.4-dev0-6eedd0-69
[NOTICE]   (18868) : path to executable is /home/wla/Projects/haproxy/haproxy
[WARNING]  (18868) : missing timeouts for proxy 'tcp'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[NOTICE]   (18868) : Automatically setting global.maxconn to 524253.
[New Thread 0x7ffff7bff6c0 (LWP 18869)]
[New Thread 0x7ffff51fa6c0 (LWP 18870)]
[New Thread 0x7fffebffe6c0 (LWP 18871)]
[New Thread 0x7fffeb7fd6c0 (LWP 18872)]
[New Thread 0x7fffeabfb6c0 (LWP 18873)]
[New Thread 0x7fffe9ff96c0 (LWP 18874)]
[New Thread 0x7fffe93f76c0 (LWP 18875)]
[New Thread 0x7fffd3fff6c0 (LWP 18876)]
[New Thread 0x7fffd37fe6c0 (LWP 18877)]
[New Thread 0x7fffd2bfc6c0 (LWP 18878)]
[New Thread 0x7fffd1ffa6c0 (LWP 18879)]
[New Thread 0x7fffd13f86c0 (LWP 18880)]
[New Thread 0x7fffbbfff6c0 (LWP 18881)]
[New Thread 0x7fffbb7fe6c0 (LWP 18882)]
[New Thread 0x7fffbabfc6c0 (LWP 18883)]
[NOTICE]   (18864) : Loading success.

PANIC! Thread 1 is about to kill the process (pid 18868).

HAProxy info:
  version: 3.4-dev0-6eedd0-69
  features: -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H 
-DEVICEATLAS +DL -ECH -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE +KTLS 
-LIBATOMIC +LIBCRYPT +LINUX_CAP +LINUX_SPLICE +LINUX_TPROXY -LUA -MATH 
-MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER -OPENSSL -OPENSSL_AWSLC 
-OPENSSL_WOLFSSL -OT -PCRE -PCRE2 -PCRE2_JIT -PCRE_JIT +POLL +PRCTL -PROCCTL 
-PROMEX -PTHREAD_EMULATION -QUIC -QUIC_OPENSSL_COMPAT +RT +SHM_OPEN +SLZ -SSL 
-STATIC_PCRE -STATIC_PCRE2 +TFO +THREAD +THREAD_DUMP +TPROXY -WURFL -ZLIB

Operating system info:
  virtual machine: no
  container: no
  kernel: Linux 6.14.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Sat Oct 11 02:18:29 UTC 2025 x86_64
  userland: Ubuntu 25.04

* Thread 1 : id=0x7ffff7f46b00 act=1 glob=0 wq=1 rq=0 tl=0 tlsz=0 rqsz=0
      1/1    loops=1 ctxsw=7 stuck=0 prof=0 harmless=0 isolated=0 locks=0
             cpu_ns: poll=26044830 now=26256478 diff=211648
             curr_task=0x555556707940 (task) calls=2 last=0
               fct=0x555555732f10(task_process_applet) ctx=0x555556707710
             lock_hist: S:PROTO W:LISTENER U:LISTENER U:PROTO S:DNS U:DNS R:LISTENER U:LISTENER
             call trace(15):
             | 0x555555711e12 <00 00 00 e8 2e 52 ea ff]: ha_dump_backtrace+0x82/0x43d > main-0xf20
             | 0x555555713f6d <48 89 fb e8 93 fa ff ff]: ha_thread_dump_fill+0xad/0xaf > ha_thread_dump_one
             | 0x555555714227 <00 89 de e8 99 fc ff ff]: ha_dump_backtrace+0x2497 > ha_thread_dump_fill
             | 0x5555557143a8 <09 72 e6 e8 58 fd ff ff]: ha_dump_backtrace+0x2618 > ha_dump_backtrace+0x2370
             | 0x5555556d083d <89 df 4c 89 f2 41 ff d1]: main+0x1188dd
             | 0x5555556d18b8 <00 00 00 e8 98 eb ff ff]: cli_io_handler+0x388/0xad8 > main+0x1184f0
             | 0x555555733067 <40 48 8b 43 58 ff 50 18]: task_process_applet+0x157/0xa75
  Thread 2 : id=0x7ffff7bff6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/2    loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=68132 now=344315 diff=276183
             curr_task=0
  Thread 3 : id=0x7ffff51fa6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/3    loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=48791 now=239071 diff=190280
             curr_task=0
  Thread 4 : id=0x7fffebffe6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/4    loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=48694 now=229304 diff=180610
             curr_task=0
  Thread 5 : id=0x7fffeb7fd6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/5    loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=12624 now=226514 diff=213890
             curr_task=0
  Thread 6 : id=0x7fffeabfb6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/6    loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=13153 now=206556 diff=193403
             curr_task=0
  Thread 7 : id=0x7fffe9ff96c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/7    loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=18228 now=212694 diff=194466
             curr_task=0
  Thread 8 : id=0x7fffe93f76c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/8    loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=8891 now=176496 diff=167605
             curr_task=0
  Thread 9 : id=0x7fffd3fff6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/9    loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=8303 now=162169 diff=153866
             curr_task=0
  Thread 10: id=0x7fffd37fe6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/10   loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=8469 now=167525 diff=159056
             curr_task=0
  Thread 11: id=0x7fffd2bfc6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/11   loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=7123 now=163129 diff=156006
             curr_task=0
  Thread 12: id=0x7fffd1ffa6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/12   loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=9638 now=160897 diff=151259
             curr_task=0
  Thread 13: id=0x7fffd13f86c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/13   loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=7454 now=141288 diff=133834
             curr_task=0
  Thread 14: id=0x7fffbbfff6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/14   loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=7454 now=132937 diff=125483
             curr_task=0
  Thread 15: id=0x7fffbb7fe6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/15   loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=10759 now=94784 diff=84025
             curr_task=0
  Thread 16: id=0x7fffbabfc6c0 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
      1/16   loops=0 ctxsw=0 stuck=0 prof=0 harmless=1 isolated=0 locks=0
             cpu_ns: poll=8171 now=74473 diff=66302
             curr_task=0

Hint: when reporting this bug to developers, please check if a core file was
      produced, open it with 'gdb', issue 't a a bt full', check that the
      output does not contain sensitive data, then join it with the bug report.
      For more info, please see https://github.com/haproxy/haproxy/issues/2374

Thread 2.1 "haproxy" received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffff7f46b00 (LWP 18868)]
__pthread_kill_implementation (threadid=<optimized out>, signo=6, no_tid=0) at ./nptl/pthread_kill.c:44
warning: 44     ./nptl/pthread_kill.c: No such file or directory
(gdb) bt
#0  __pthread_kill_implementation (threadid=<optimized out>, signo=6, no_tid=0) at ./nptl/pthread_kill.c:44
#1  __pthread_kill_internal (threadid=<optimized out>, signo=6) at ./nptl/pthread_kill.c:89
#2  __GI___pthread_kill (threadid=<optimized out>, signo=signo@entry=6) at ./nptl/pthread_kill.c:100
#3  0x00007ffff7c4579e in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4  0x00007ffff7c288cd in __GI_abort () at ./stdlib/abort.c:73
#5  0x00005555557142a8 in ha_panic () at src/debug.c:870
#6  0x00005555557143a8 in ha_panic () at src/debug.c:787
#7  debug_parse_cli_panic (args=<optimized out>, payload=<optimized out>, appctx=<optimized out>, private=<optimized out>) at src/debug.c:1150
#8  debug_parse_cli_panic (args=<optimized out>, payload=<optimized out>, appctx=<optimized out>, private=<optimized out>) at src/debug.c:1144
#9  0x00005555556d083d in cli_process_cmdline (appctx=appctx@entry=0x555556707710) at src/cli.c:846
#10 0x00005555556d18b8 in cli_io_handler (appctx=0x555556707710) at src/cli.c:1139
#11 0x0000555555733067 in task_process_applet (t=0x555556707940, context=0x555556707710, state=<optimized out>) at src/applet.c:948
#12 0x00005555557e0f01 in run_tasks_from_lists (budgets=<optimized out>) at src/task.c:662
#13 0x00005555557e1919 in process_runnable_tasks () at src/task.c:902
#14 0x00005555557286e7 in run_poll_loop () at src/haproxy.c:2892
#15 0x0000555555728e41 in run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:3112
#16 0x00005555555b97da in main (argc=<optimized out>, argv=<optimized out>) at src/haproxy.c:3740
(gdb)
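
From there, "thread apply all bt full" (the "t a a bt full" mentioned in the
hint above) gives the complete per-thread dump to review for sensitive data
and attach to the bug report.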

Regards,

-- 
William Lallemand



