Thanks Rüdiger. I hadn't expected it to have been fixed in trunk long ago. I see now that there are more useful pool debug fixes sitting in trunk that could be backported. I will look at them soon.

Regards,

Rainer

On 17.07.2019 at 12:09, Ruediger Pluem wrote:


On 07/17/2019 11:43 AM, Rainer Jung wrote:
On 17.07.2019 at 10:03, Ruediger Pluem wrote:


On 07/16/2019 11:28 PM, Rainer Jung wrote:
cross-posted to APR+HTTPD

Crash happens in:

#2  0x00007faf4c154945 in raise () from /lib64/libc.so.6
#3  0x00007faf4c155f21 in abort () from /lib64/libc.so.6
#4  0x00007faf4c14d810 in __assert_fail () from /lib64/libc.so.6
#5  0x00007faf4c694219 in __pthread_tpp_change_priority () from /lib64/libpthread.so.0
#6  0x00007faf4c68cd76 in __pthread_mutex_lock_full () from /lib64/libpthread.so.0
#7  0x00007faf4cd07c29 in apr_thread_mutex_lock (mutex=0x2261fe0) at locks/unix/thread_mutex.c:108
#8  0x00007faf4cd08603 in apr_pool_walk_tree (pool=0x225a710, fn=0x7faf4cd07fc0 <pool_num_bytes>, data=0x7faf45777c90) at memory/unix/apr_pools.c:1515
#9  0x00007faf4cd08630 in apr_pool_walk_tree (pool=0x6a3ce0, fn=0x7faf4cd07fc0 <pool_num_bytes>, data=0x7faf45777c90) at memory/unix/apr_pools.c:1521
#10 0x00007faf4cd08630 in apr_pool_walk_tree (pool=0x6a3770, fn=0x7faf4cd07fc0 <pool_num_bytes>, data=0x7faf45777c90) at memory/unix/apr_pools.c:1521
#11 0x00007faf4cd08630 in apr_pool_walk_tree (pool=0x6a3110, fn=0x7faf4cd07fc0 <pool_num_bytes>, data=0x7faf45777c90) at memory/unix/apr_pools.c:1521
#12 0x00007faf4cd086df in apr_pool_num_bytes (pool=0x6d81, recurse=<value optimized out>) at memory/unix/apr_pools.c:2304
#13 0x00007faf4cd0898f in apr_pool_log_event (pool=0x225a710, event=0x7faf4cd16e74 "PCALLOC", file_line=0x7faf4cd16d78 "locks/unix/thread_mutex.c:50", deref=-1) at memory/unix/apr_pools.c:1543
#14 0x00007faf4cd098b8 in apr_pcalloc_debug (pool=0x225a710, size=64, file_line=0x7faf4cd16d78 "locks/unix/thread_mutex.c:50") at memory/unix/apr_pools.c:1814
#15 0x00007faf4cd07ce5 in apr_thread_mutex_create (mutex=0x225a798, flags=1, pool=0x225a710) at locks/unix/thread_mutex.c:50
#16 0x00007faf4cd0a164 in apr_pool_clear_debug (pool=0x225a710, file_line=0x488f09 "mpm_fdqueue.c:236") at memory/unix/apr_pools.c:1911
#17 0x000000000046c455 in ap_queue_info_push_pool (queue_info=0x22648b0, pool_to_recycle=0x225a710) at mpm_fdqueue.c:236
#18 0x00007faf4bf18821 in process_lingering_close (cs=0x78d670) at event.c:1457
#19 0x00007faf4bf196a8 in worker_thread (thd=0x6cae80, dummy=<value optimized out>) at event.c:2083
#20 0x00007faf4c68b5f0 in start_thread () from /lib64/libpthread.so.0
#21 0x00007faf4c1f684d in clone () from /lib64/libc.so.6

So it seems a mutex gets created, which allocates memory, which in turn triggers debug logging, which walks the pool tree and finally tries to lock the not-yet-initialized lock.
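To make the suspected re-entrancy concrete, here is a minimal self-contained sketch of the pattern, using plain pthreads and hypothetical names (struct pool, walk_pools, debug_alloc, clear_pool are all stand-ins, not APR source): a debug allocator that logs by walking all pools and locking each one, combined with a clear operation that tears down the pool's own mutex and then recreates it through that same allocator.

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical stand-in for the debug pool bookkeeping; NOT APR source. */
struct pool {
    pthread_mutex_t *mutex;    /* NULL or stale while being recreated */
    struct pool     *sibling;  /* simplified "pool tree"              */
};

static struct pool *all_pools; /* global list, like APR's pool tree */

/* Mirrors frames #12..#7: verbose logging sizes every pool, locking
 * each pool's mutex along the way. */
static void walk_pools(void)
{
    for (struct pool *p = all_pools; p; p = p->sibling) {
        pthread_mutex_lock(p->mutex);   /* crashes here when it reaches
                                         * the pool whose mutex is gone
                                         * (cf. frame #7)              */
        /* ... accumulate pool sizes ... */
        pthread_mutex_unlock(p->mutex);
    }
}

/* Mirrors frames #14/#13: in debug mode every allocation may log,
 * and logging walks all pools. */
static void *debug_alloc(size_t size)
{
    walk_pools();
    return calloc(1, size);
}

/* Mirrors frames #16/#15: clearing destroys the pool's mutex, then
 * recreates it -- and the recreation allocates, which re-enters the
 * walker while the pool is still on the global list without a valid
 * mutex. */
static void clear_pool(struct pool *p)
{
    pthread_mutex_destroy(p->mutex);
    free(p->mutex);
    p->mutex = NULL;
    pthread_mutex_t *m = debug_alloc(sizeof(*m)); /* walks pools, incl. p */
    pthread_mutex_init(m, NULL);
    p->mutex = m;
}

static struct pool p1;

int main(void)
{
    pthread_mutex_t *m = calloc(1, sizeof(*m));
    pthread_mutex_init(m, NULL);
    p1.mutex = m;
    all_pools = &p1;
    clear_pool(&p1);  /* re-enters walk_pools() with p1.mutex == NULL */
    return 0;
}

Built with cc -pthread, this typically crashes inside clear_pool(), mirroring the shape (though not necessarily the exact failure mode) of the abort in frame #7.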

Anyone aware of that? Any ideas how to fix?

This is strange. Before apr_thread_mutex_create is called by apr_pool_clear_debug, pool->mutex is set to NULL. So IMHO the mutex in frame #7 should be NULL.
Which version of APR are you using?
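For reference, a hedged paraphrase of why the non-NULL mutex in frame #7 is surprising, assuming apr_pool_walk_tree only takes the lock when pool->mutex is non-NULL (apr_pool_t is opaque outside apr_pools.c, so struct pool_sketch below is a hypothetical mirror of the relevant fields, not the real struct):

#include <apr_thread_mutex.h>

/* Hypothetical mirror of the private pool fields this argument needs. */
struct pool_sketch {
    apr_thread_mutex_t *mutex;
    struct pool_sketch *child, *sibling;
};

/* Paraphrased shape of apr_pool_walk_tree (assumed, not verbatim):
 * if the walker skips NULL mutexes, then aborting inside
 * apr_thread_mutex_lock() in frame #7 means pool->mutex still held a
 * stale, non-NULL pointer to an already-destroyed mutex, rather than
 * the NULL that apr_pool_clear_debug should have left there. */
static void walk_tree_sketch(struct pool_sketch *pool)
{
    if (pool->mutex)
        apr_thread_mutex_lock(pool->mutex);

    for (struct pool_sketch *c = pool->child; c; c = c->sibling)
        walk_tree_sketch(c);

    if (pool->mutex)
        apr_thread_mutex_unlock(pool->mutex);
}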

1.7, with a few debug patches that should really not make a difference here (but might offset the line numbers a bit). 1.7.0, 1.7.x, 1.6.5 and 1.6.x do not differ in apr_pools.c.

I was looking at apr trunk. Maybe r1481186 fixes your issue.

Regards

Rüdiger
