Asterisk uses 100% CPU

Hi guys, I have a problem with my Asterisk installation: it suddenly uses 100% CPU, which causes all our phones (analog and IP) to stop responding. I've already tried running Asterisk on a dedicated appliance and on Hyper-V with 2 and 4 cores, and on Asterisk versions 16.19 and 16.20, but I still get the same result. I have realtime enabled, and I also tried hosting the database on a separate server; still the same problem. Currently, the only way to resolve the issue is to reboot the Asterisk host machine. When I run debug I get the same messages each time this issue happens.
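
For context, the realtime wiring is roughly like the sketch below; the DSN, credentials and exact file contents are illustrative assumptions, not my actual configuration (the class name 'asterisk' and the sippeers table do match what shows up in the debug log):

; extconfig.conf - map the sippeers family to the 'asterisk' ODBC class
[settings]
sippeers => odbc,asterisk,sippeers

; res_odbc.conf - the 'asterisk' connection class seen in the debug log
[asterisk]
enabled => yes
dsn => asterisk-connector
username => asterisk
password => secret
pre-connect => yes

Anyway, here is the debug output I mentioned: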

[Sep 2 08:35:58] DEBUG[1845] res_odbc.c: Reusing ODBC handle 0x7fb2f0005e10 from class ‘asterisk’
[Sep 2 08:35:58] DEBUG[1845] res_config_odbc.c: Skip: 0; SQL: UPDATE sippeers SET ipaddr=?, fullcontact=?, lastms=?, useragent=?, defaultuser=?, regseconds=?, port=? WHERE name=?
[Sep 2 08:35:58] DEBUG[1845] res_config_odbc.c: Parameter 1 (‘ipaddr’) = ‘192.168.3.5’
[Sep 2 08:35:58] DEBUG[1845] res_config_odbc.c: Parameter 2 (‘fullcontact’) = ‘sip:201@192.168.3.5:5060’
[Sep 2 08:35:58] DEBUG[1845] res_config_odbc.c: Parameter 3 (‘lastms’) = ‘-1’
[Sep 2 08:35:58] DEBUG[1845] res_config_odbc.c: Parameter 4 (‘useragent’) = ‘Grandstream GXP1615 1.0.4.67’
[Sep 2 08:35:58] DEBUG[1845] res_config_odbc.c: Parameter 5 (‘defaultuser’) = ‘201’
[Sep 2 08:35:58] DEBUG[1845] res_config_odbc.c: Parameter 6 (‘regseconds’) = ‘1630546558’
[Sep 2 08:35:58] DEBUG[1845] res_config_odbc.c: Parameter 7 (‘port’) = ‘5060’
[Sep 2 08:35:58] DEBUG[1845] res_config_odbc.c: Parameter 8 = ‘201’
[Sep 2 08:35:58] DEBUG[1845] res_odbc.c: Releasing ODBC handle 0x7fb2f0005e10 into pool
[Sep 2 08:35:58] DEBUG[1845] chan_sip.c: Trying to put ‘SIP/2.0 200’ onto UDP socket destined for 192.168.3.5:5060
[Sep 2 08:35:58] DEBUG[1845] res_odbc.c: Reusing ODBC handle 0x7fb2f0005e10 from class ‘asterisk’
[Sep 2 08:35:58] DEBUG[1845] res_odbc.c: Releasing ODBC handle 0x7fb2f0005e10 into pool
[Sep 2 08:35:58] DEBUG[1845] chan_sip.c: Allocating new SIP dialog for 588d468f79c44888287eb89e30c79f52@127.0.1.1:5060 - NOTIFY (No RTP)
[Sep 2 08:35:58] DEBUG[1040] devicestate.c: No provider found, checking channel drivers for SIP - 201
[Sep 2 08:35:58] DEBUG[1040] chan_sip.c: Checking device state for peer 201
[Sep 2 08:35:58] DEBUG[1845] chan_sip.c: Destroying SIP dialog 588d468f79c44888287eb89e30c79f52@127.0.1.1:5060
[Sep 2 08:35:58] DEBUG[1040] devicestate.c: Changing state for SIP/201 - state 5 (Unavailable)
[Sep 2 08:35:58] DEBUG[1040] stasis.c: Creating topic. name: devicestate:all/SIP/201, detail:
[Sep 2 08:35:58] DEBUG[1040] stasis.c: Topic ‘devicestate:all/SIP/201’: 0x7fb2ec023580 created
[Sep 2 08:35:58] DEBUG[1845] chan_sip.c: = Looking for Call ID: 51b69ee70602925a6a325eca46a08d3d@192.168.1.10:5060 (Checking To) --From tag as639f9c1c --To-tag 60794257
[Sep 2 08:35:58] DEBUG[1845] chan_sip.c: Stopping retransmission on ‘51b69ee70602925a6a325eca46a08d3d@192.168.1.10:5060’ of Request 102: Match Found
[Sep 2 08:35:58] DEBUG[1878] app_queue.c: Device ‘SIP/201’ changed to state ‘5’ (Unavailable) but we don’t care because they’re not a member of any queue.
[Sep 2 08:35:58] DEBUG[1040] devicestate.c: No provider found, checking channel drivers for SIP - 201
[Sep 2 08:35:58] DEBUG[1040] chan_sip.c: Checking device state for peer 201
[Sep 2 08:35:58] DEBUG[1040] devicestate.c: Changing state for SIP/201 - state 1 (Not in use)
[Sep 2 08:35:58] DEBUG[1878] app_queue.c: Device ‘SIP/201’ changed to state ‘1’ (Not in use) but we don’t care because they’re not a member of any queue.
[Sep 2 08:35:58] DEBUG[1043] chan_sip.c: Trying to put ‘NOTIFY sip:’ onto UDP socket destined for 192.168.3.8:5060
[Sep 2 08:35:58] DEBUG[1043] chan_sip.c: Trying to put ‘NOTIFY sip:’ onto UDP socket destined for 192.168.3.10:5060
[Sep 2 08:36:09] DEBUG[1837] res_pjsip_registrar.c: Woke up at 1630542969 Interval: 30
[Sep 2 08:36:09] DEBUG[1837] res_pjsip_registrar.c: Expiring 0 contacts
[Sep 2 08:36:17] DEBUG[2588] threadpool.c: Worker thread idle timeout reached. Dying.
[Sep 2 08:36:17] DEBUG[961] threadpool.c: Destroying worker thread 104
[Sep 2 08:36:39] DEBUG[1837] res_pjsip_registrar.c: Woke up at 1630542999 Interval: 30
[Sep 2 08:36:39] DEBUG[1837] res_pjsip_registrar.c: Expiring 0 contacts
[Sep 2 08:37:09] DEBUG[1837] res_pjsip_registrar.c: Woke up at 1630543029 Interval: 30
[Sep 2 08:37:09] DEBUG[1837] res_pjsip_registrar.c: Expiring 0 contacts
[Sep 2 08:37:39] DEBUG[1837] res_pjsip_registrar.c: Woke up at 1630543059 Interval: 30
[Sep 2 08:37:39] DEBUG[1837] res_pjsip_registrar.c: Expiring 0 contacts
[Sep 2 08:37:44] DEBUG[844] cdr.c: CDR Engine termination request received; waiting on messages…
[Sep 2 08:37:44] DEBUG[844] res_musiconhold.c: Destroying MOH class ‘default’
[Sep 2 08:37:44] DEBUG[844] res_musiconhold.c: Destroying MOH class ‘conference’
[Sep 2 08:37:44] DEBUG[844] asterisk.c: Asterisk ending (0).
[Sep 2 08:38:12] Asterisk 16.20.0 built by root @ asteriskubuntu20 on a x86_64 running Linux on 2021-08-27 12:01:11 UTC

Note: I'm not using PJSIP.
I hope someone else has encountered this problem and was able to solve it, or that someone can point me in the right direction on how to find the issue.

Thanks in advance

Please obtain a backtrace of the looping thread(s).

Getting a Backtrace - Asterisk Project - Asterisk Project Wiki.
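
If you just want a quick capture without the full procedure on that page, something along these lines works with plain gdb (install gdb and, ideally, the Asterisk debug symbols first; the output path is arbitrary):

# attach to the running Asterisk and dump every thread's stack
gdb -batch -ex "thread apply all bt" -p $(pidof asterisk) > /tmp/asterisk-backtrace.txt 2>&1

Run it while the CPU is pegged so the looping thread is actually captured on-stack.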

When you are at 100% (one CPU) what does top or htop show for usage?
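
To see which thread is the hot one rather than the whole process, something like this should show per-thread usage (assuming a single asterisk process so pidof returns one PID):

# per-thread (LWP) CPU usage for the running Asterisk process
top -H -p $(pidof asterisk)

The LWP shown there can then be matched against the thread IDs in a gdb backtrace.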

Hi @david551, will do this.

Hi @BlazeStudios, I can now confirm that this issue happens when a SIP account tries to register with the Asterisk server. I was able to re-create it over and over. The weird thing is that we've been using this Asterisk server for about 2 years now, and this is the first time I have encountered this. Here's a screenshot of htop:

I actually don't see an issue here, outside of the fact that your system doesn't seem to spread the load over all four CPUs. So what you have is a single CPU doing all the work and getting up to 100% usage. You still have three full CPUs that haven't been touched and can still be used.

Look at your load averages: 0.20, 0.11, 0.11. I can't see how this system is "overloaded", unless for some reason you have 4 CPU cores and the system never touches the other three. In that case it's not an Asterisk problem.

Hi @BlazeStudios, that's another issue that I want to look into. But for now I can almost certainly confirm that this is an issue I encounter on versions 16.19 and 16.20. I installed another Asterisk server with v16.14, with all the same configuration as the other two versions, and never encountered the issue.

Update:
I badly wanted to find out what is causing this problem on my Asterisk v16.20 server, so I tried using it again and recreated the problem by registering a SIP account. Here's what I got by typing 'core show locks':

=== <pending> <lock#> (<file>): <lock type> <line num> <function> <lock name> <lock addr> (times locked)

=== Thread ID: 0x7f5adbe3c700 LWP:2017 (do_monitor started at [30109] chan_sip.c restart_monitor())
=== ---> Lock #0 (chan_sip.c): MUTEX 29468 handle_request_do &netlock 0x7f5b12ac4400 (1)
/usr/sbin/asterisk(__ast_pthread_mutex_lock+0xd0) [0x55c7659c90eb]
/usr/lib/asterisk/modules/chan_sip.so(+0xa09bb) [0x7f5b1286e9bb]
/usr/lib/asterisk/modules/chan_sip.so(+0xa0791) [0x7f5b1286e791]
/usr/sbin/asterisk(ast_io_wait+0x193) [0x55c7659b9dc4]
/usr/lib/asterisk/modules/chan_sip.so(+0xa2e99) [0x7f5b12870e99]
/usr/sbin/asterisk(+0x203665) [0x55c765a95665]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f5b5581b6db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f5b54d5571f]
=== ---> Lock #1 (chan_sip.c): MUTEX 9397 sip_pvt_lock_full pvt 0x7f5b30338eb0 (1)
/usr/sbin/asterisk(__ast_pthread_mutex_lock+0xd0) [0x55c7659c90eb]
/usr/sbin/asterisk(__ao2_lock+0x10b) [0x55c765902f66]
/usr/lib/asterisk/modules/chan_sip.so(+0x35664) [0x7f5b12803664]
/usr/lib/asterisk/modules/chan_sip.so(+0xa0adb) [0x7f5b1286eadb]
/usr/lib/asterisk/modules/chan_sip.so(+0xa0791) [0x7f5b1286e791]
/usr/sbin/asterisk(ast_io_wait+0x193) [0x55c7659b9dc4]
/usr/lib/asterisk/modules/chan_sip.so(+0xa2e99) [0x7f5b12870e99]
/usr/sbin/asterisk(+0x203665) [0x55c765a95665]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f5b5581b6db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f5b54d5571f]
=== ---> Lock #2 (res_odbc.c): RDLOCK 258 ast_odbc_find_table &(&tableptr->columns)->lock 0x55c767d26ca0 (1)
/usr/sbin/asterisk(__ast_rwlock_rdlock+0xd0) [0x55c7659cb13e]
/usr/lib/asterisk/modules/res_odbc.so(ast_odbc_find_table+0x115) [0x7f5b039c4f34]
/usr/lib/asterisk/modules/res_config_odbc.so(+0x4a43) [0x7f5ae89f0a43]
/usr/sbin/asterisk(ast_update_realtime_fields+0xcc) [0x55c765acceaa]
/usr/sbin/asterisk(ast_update_realtime+0x10c) [0x55c765acd013]
/usr/lib/asterisk/modules/chan_sip.so(+0x8bf3b) [0x7f5b12859f3b]
/usr/lib/asterisk/modules/chan_sip.so(+0x8d1ee) [0x7f5b1285b1ee]
/usr/lib/asterisk/modules/chan_sip.so(+0x9f402) [0x7f5b1286d402]
/usr/lib/asterisk/modules/chan_sip.so(+0xa0bd8) [0x7f5b1286ebd8]
/usr/lib/asterisk/modules/chan_sip.so(+0xa0791) [0x7f5b1286e791]
/usr/sbin/asterisk(ast_io_wait+0x193) [0x55c7659b9dc4]
/usr/lib/asterisk/modules/chan_sip.so(+0xa2e99) [0x7f5b12870e99]
/usr/sbin/asterisk(+0x203665) [0x55c765a95665]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f5b5581b6db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f5b54d5571f]
=== ---> Lock #3 (res_odbc.c): MUTEX 927 _ast_odbc_request_obj2 &class->lock 0x55c766680798 (1)
/usr/sbin/asterisk(__ast_pthread_mutex_lock+0xd0) [0x55c7659c90eb]
/usr/lib/asterisk/modules/res_odbc.so(_ast_odbc_request_obj2+0x13a) [0x7f5b039c7afd]
/usr/lib/asterisk/modules/res_config_odbc.so(+0x4a73) [0x7f5ae89f0a73]
/usr/sbin/asterisk(ast_update_realtime_fields+0xcc) [0x55c765acceaa]
/usr/sbin/asterisk(ast_update_realtime+0x10c) [0x55c765acd013]
/usr/lib/asterisk/modules/chan_sip.so(+0x8bf3b) [0x7f5b12859f3b]
/usr/lib/asterisk/modules/chan_sip.so(+0x8d1ee) [0x7f5b1285b1ee]
/usr/lib/asterisk/modules/chan_sip.so(+0x9f402) [0x7f5b1286d402]
/usr/lib/asterisk/modules/chan_sip.so(+0xa0bd8) [0x7f5b1286ebd8]
/usr/lib/asterisk/modules/chan_sip.so(+0xa0791) [0x7f5b1286e791]
/usr/sbin/asterisk(ast_io_wait+0x193) [0x55c7659b9dc4]
/usr/lib/asterisk/modules/chan_sip.so(+0xa2e99) [0x7f5b12870e99]
/usr/sbin/asterisk(+0x203665) [0x55c765a95665]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f5b5581b6db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f5b54d5571f]
=== -------------------------------------------------------------------

=== Thread ID: 0x7f5adb37e700 LWP:2039 (mb_poll_thread started at [13430] app_voicemail.c start_poll_thread())
=== ---> Lock #0 (app_voicemail.c): RDLOCK 13213 poll_subscribed_mailboxes &(&mwi_subs)->lock 0x7f5af6b5cbf0 (1)
/usr/sbin/asterisk(__ast_rwlock_rdlock+0xd0) [0x55c7659cb13e]
/usr/lib/asterisk/modules/app_voicemail.so(+0x2cf6e) [0x7f5af6943f6e]
/usr/lib/asterisk/modules/app_voicemail.so(+0x2d0fe) [0x7f5af69440fe]
/usr/sbin/asterisk(+0x203665) [0x55c765a95665]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f5b5581b6db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f5b54d5571f]
=== ---> Waiting for Lock #1 (res_odbc.c): MUTEX 927 _ast_odbc_request_obj2 &class->lock 0x55c766680798 (1)
/usr/sbin/asterisk(__ast_pthread_mutex_lock+0xd0) [0x55c7659c90eb]
/usr/lib/asterisk/modules/res_odbc.so(_ast_odbc_request_obj2+0x13a) [0x7f5b039c7afd]
/usr/lib/asterisk/modules/res_odbc.so(_ast_odbc_request_obj+0x45) [0x7f5b039c804f]
/usr/lib/asterisk/modules/app_voicemail.so(+0x139b7) [0x7f5af692a9b7]
/usr/lib/asterisk/modules/app_voicemail.so(+0x2cea6) [0x7f5af6943ea6]
/usr/lib/asterisk/modules/app_voicemail.so(+0x2cf9b) [0x7f5af6943f9b]
/usr/lib/asterisk/modules/app_voicemail.so(+0x2d0fe) [0x7f5af69440fe]
/usr/sbin/asterisk(+0x203665) [0x55c765a95665]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f5b5581b6db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f5b54d5571f]
=== --- ---> Locked Here: res_odbc.c line 927 (_ast_odbc_request_obj2)
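
(For anyone trying to reproduce this: 'core show locks' is only available when Asterisk is compiled with DEBUG_THREADS. Roughly, from the source tree, something like the steps below; the exact steps and service name are a sketch, adjust for your install.)

# enable the debugging compiler flags, then rebuild and reinstall
make menuselect.makeopts
menuselect/menuselect --enable DEBUG_THREADS --enable BETTER_BACKTRACES menuselect.makeopts
make && sudo make install
sudo systemctl restart asterisk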

You need the backtraces of the running processes, which you can only obtain using gdb, or one of the scripts that uses gdb to do this.
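
If the source tree is still around, contrib/scripts/ast_coredumper is one such script; roughly (the --running option and output file names are taken from the wiki page linked above, double-check against your copy of the script):

# dump and analyse the running Asterisk without stopping it
./contrib/scripts/ast_coredumper --running

That should leave *.brief.txt, *.thread1.txt, *.full.txt and *.locks.txt files you can attach here.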
