Stasis/m:devicestate reaches high water mark very easily

Hi,

We are having some issues on a heavily loaded Asterisk machine. (I know Asterisk 13 is EOL, and chan_sip too, but the behavior should be the same across versions, regardless of whether we use 13 or chan_sip vs. chan_pjsip.)

The issue is always the same: task processors go above the high water mark, for example:

```
Processor                          Processed  In Queue  Max Depth  Low water  High water
stasis/m:devicestate:all-00000002    2503009         0         86        450         500
stasis/m:devicestate:all-00000004    2503009         0        159        450         500
stasis/m:devicestate:all-00000441    2503009         0       3300        450         500
```
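
For context, that output is a snapshot from `core show taskprocessors`; we pull just the device-state queues like this:

```
# 450/500 are the default low/high water marks for a taskprocessor.
asterisk -rx 'core show taskprocessors' | grep 'stasis/m:devicestate'
```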

Can I assume each of these is a thread? Is it possible to add more, maybe directly in the code? Would that improve the speed of handling the work items? If so, can you point me to the relevant place in the code?

```
Processor            Processed  In Queue  Max Depth  Low water  High water
stasis/pool          486456361         0        561        450         500
stasis/pool-control  822588711       150       3832        450         500
```

What about pool and pool-control? The stasis thread pool size is 80; we tested 100 too and the behavior is the same. Currently we have this configured in stasis.conf:

```
initial_size = 10
idle_timeout_sec = 120
max_size = 80
```
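
For completeness, those options live under the `[threadpool]` section of stasis.conf; the full section as we have it (comments are my reading of the sample config):

```
[threadpool]
initial_size = 10       ; threads created at startup
idle_timeout_sec = 120  ; idle threads are destroyed after this many seconds
max_size = 80           ; hard cap on how far the pool can grow
```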

We are also using WebRTC, and at times we've seen SYN flooding on the WebRTC TCP port, after which Asterisk stops responding. Is that related to this? We have already changed all the kernel parameters for high network throughput:

```
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_syncookies=0
net.core.rmem_max=33554432
net.core.wmem_max=33554432
net.ipv4.tcp_rmem="10240 87380 16777216"
net.ipv4.tcp_wmem="10240 87380 16777216"
net.core.somaxconn=81918
net.core.netdev_max_backlog=82920
net.ipv4.tcp_max_syn_backlog=300000
net.ipv4.tcp_keepalive_time=1800
fs.file-max=2097152
fs.inotify.max_user_watches=20000
ulimit -n 1000000
```

We tested with syncookies at both 0 and 1, on kernel > 5, Ubuntu 20.04 (5.13.0-1024-gcp #29~20.04.1-Ubuntu SMP Thu Apr 14 23:15:00 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux).
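
In case it helps, this is how we check whether the listen queue is actually overflowing during those episodes (plain Linux tooling; 8089 below is just a placeholder for our WebRTC websocket port, adjust to your http.conf):

```
# Cumulative SYN/accept-queue drop counters since boot:
netstat -s | grep -i -E 'listen|syn'

# Live view of the listening socket's accept queue (Recv-Q vs backlog):
ss -lnt '( sport = :8089 )'
```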

Any hints on this? Thanks.

They are not threads; they are queues of work, for subscriptions. You can't just throw more threads at it. If Asterisk is built in developer mode, there are CLI commands for stasis statistics (tab-complete to explore them) which can further narrow down what exactly the subscriptions are and which ones, if any, are taking a long time to process.
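
For example, on a recent version built with developer mode, something along these lines is available (the exact command set varies by version, so let tab completion guide you):

```
# Rebuild with developer mode (adds overhead; not for production use):
./configure --enable-dev-mode && make && make install

# Then inspect stasis statistics from the CLI:
asterisk -rx 'stasis statistics show topics'
asterisk -rx 'stasis statistics show subscriptions'
```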

Saying that things should be the same in recent versions is also incorrect. There have been fundamental performance improvements that can have a ripple effect elsewhere.
