Hi.
I work on the Wazo project (Wazo Platform · GitHub), and I am trying to debug a deadlock scenario we have seen occur repeatedly at some client sites with our software stack, which is based on Asterisk 22.2.0.
We have Debian packaging infrastructure that takes care of building Asterisk for our usual purposes (GitHub - wazo-platform/asterisk: ☎ Asterisk Debian packaging and patches for Wazo).
For the Wazo version on which the issue was reproduced, this is the corresponding state of our packaging repository: GitHub - wazo-platform/asterisk at wazo-25.07.
Since our version of Asterisk includes a number of patches, we want to get as far as possible in understanding the deadlock before trying to reproduce it on vanilla Asterisk (which could involve non-trivial work replicating the scenario's workload without the Wazo stack) and filing an upstream report.
From a coredump generated during the deadlock, we have found a mutex cycle between three threads (by following the ownership of the locks involved).
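For reference, this is roughly how we followed lock ownership (a sketch, assuming glibc debug symbols are loaded; the address is the contended mutex from the first backtrace below):
(gdb) # the owner field of a glibc pthread mutex holds the TID of the current holder
(gdb) print ((pthread_mutex_t *) 0x5576066955b8)->__data.__owner
(gdb) # match the printed TID against the LWP numbers to find the owning thread
(gdb) info threads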
One obstacle is that we are missing information (symbols) needed to properly analyse some of the stacks of the three threads involved.
Here are the backtraces we are looking at (full backtraces attached for reference):
Thread 102 (Thread 0x7f8d0b0c1700 (LWP 17630)):
#0 __lll_lock_wait (futex=futex@entry=0x5576066955b8, private=0) at lowlevellock.c:52
__ret =
#1 0x00007f8d159698d1 in __GI___pthread_mutex_lock (mutex=0x5576066955b8) at ../nptl/pthread_mutex_lock.c:115
__futex = 0x5576066955b8
id =
type =
__PRETTY_FUNCTION__ = "__pthread_mutex_lock"
id =
#2 0x00007f8d15dfd795 in ?? ()
No symbol table info available.
#3 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 31 (Thread 0x7f8d0c312700 (LWP 16748)):
#0 __lll_lock_wait (futex=futex@entry=0x5576058cb800, private=0) at lowlevellock.c:52
#1 0x00007f8d159698d1 in __GI___pthread_mutex_lock (mutex=0x5576058cb800) at ../nptl/pthread_mutex_lock.c:115
#2 0x00005575c831ebb8 in __ast_pthread_mutex_lock (filename=, lineno=lineno@entry=1388, func=, mutex_name=mutex_name@entry=0x7f8d10f28047 "instance", t=) at lock.c:354
#3 0x00005575c8285849 in __ao2_lock (user_data=user_data@entry=0x5576058cb840, lock_how=lock_how@entry=AO2_LOCK_REQ_MUTEX, file=file@entry=0x7f8d10f28034 "res_rtp_asterisk.c", func=func@entry=0x7f8d10f2d4f0 <__PRETTY_FUNCTION__.121> "ast_rtp_on_turn_rx_rtp_data", line=line@entry=1388, var=var@entry=0x7f8d10f28047 "instance") at astobj2.c:241
#4 0x00007f8d10f0e687 in ast_rtp_on_turn_rx_rtp_data (turn_sock=, pkt=0x557605a01558, pkt_len=124, peer_addr=0x557605a015e0, addr_len=16) at res_rtp_asterisk.c:1388
#5 0x00007f8d15d98152 in ?? ()
#6 0x00000000000000c4 in ?? ()
#7 0x00005576057ef028 in ?? ()
#8 0x00000000000000c4 in ?? ()
#9 0x0000557605383028 in ?? ()
#10 0x000000000000000a in ?? ()
#11 0x00007f8d15d952f4 in ?? ()
#12 0x0000000000000010 in ?? ()
#13 0x00007f8d15d950d7 in ?? ()
#14 0x0000000000000030 in ?? ()
#15 0x0000557605a01400 in ?? ()
#16 0x0000557605a014a8 in ?? ()
#17 0x0000000005042c28 in ?? ()
#18 0x0000000000000010 in ?? ()
#19 0x0000557605a014a8 in ?? ()
#20 0x0000000000000000 in ?? ()
Given the asynchronous nature of some of the code involved (e.g. ast_rtp_on_turn_rx_rtp_data and ARI callbacks), it is hard for me to guess which modules lie in those missing stack frames.
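For what it is worth, here is the kind of lookup we can do on the unresolved return addresses (a sketch, using one address from thread 31 above): if the address lands inside a mapped object, gdb names it, and the mapping table shows which file covers that range:
(gdb) # ask gdb which loaded object, if any, contains this address
(gdb) info symbol 0x7f8d15d98152
(gdb) # requires a core with an NT_FILE note; compare the address against the listed ranges
(gdb) info proc mappings
(gdb) # the From/To columns here tell the same story for shared libraries
(gdb) info sharedlibrary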
Note that symbols are missing only for some frames in a few threads.
My specific question is: how can we make sure we obtain those missing symbols?
We have a debug symbol package for our Asterisk build, autogenerated by the Debian packaging process, and I have tried installing all the relevant debug symbol packages available for the Debian dependencies.
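Concretely, here is the kind of check I mean, as a sketch (assuming the elfutils and debian-goodies packages are installed; the core path is a placeholder): eu-unstrip lists the build ID of every module mapped in the core and marks modules with no matching debug file, and find-dbgsym-packages then resolves the core's modules to the Debian -dbgsym packages that provide their symbols:
$ # list each mapped module with its build ID; '-' in the debug-file column means no debuginfo was found
$ eu-unstrip -n --core=./core -e /usr/sbin/asterisk
$ # from debian-goodies: map the core's build IDs to the -dbgsym packages to install
$ find-dbgsym-packages ./core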
Is it obvious which debug symbol sources might still be missing for those backtraces?
I have also attached the output of gdb's info sources and info sharedlibrary, and the Dockerfile I am using as a debugging environment (which provides all the debugging symbol packages and utilities).
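For completeness, one fallback we can still try is letting gdb fetch Debian debug info on demand through debuginfod (a sketch, assuming gdb >= 10.1 built with debuginfod support and network access from the debug container):
$ export DEBUGINFOD_URLS="https://debuginfod.debian.net"
$ gdb /usr/sbin/asterisk ./core
(gdb) set debuginfod enabled on
(gdb) # re-run the backtraces to see whether the previously unresolved frames fill in
(gdb) thread apply all bt full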
Thanks for any pointers!
Dockerfile.txt (1.9 KB)
gdb-sources.txt (159.0 KB)
gdb-sharedlibraries.txt (40.0 KB)
core-asterisk-2025-06-10T15-06-13Z-full.txt (440.1 KB)