New to the forum, so apologies if I don't log this correctly. I'm looking for some advice. We run a multi-tenant Asterisk system on Asterisk 18.16.0, and I noticed recently that calls stop processing and extensions start deregistering when the stasis/m:devicestate:all in-queue count gets too large. We have about 32 servers all configured the same way, but this one in particular has the issue with the stasis/m:devicestate:all queues.
The only real difference is that this Asterisk server has a few hundred more queues than the other servers; could that be related? I'm trying to wind down a few of these queues to see if it helps, but I'm wondering if anyone else has had this experience?
I'm trying to log the Processed | In Queue | Max Depth | Low water | High water values so that I can provide more data once they are polled and graphed over a period of time. I also see the:
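For polling those values, something like the sketch below could be dropped into cron. This is a rough assumption-laden example, not a tested tool: it assumes the `core show taskprocessors` CLI output lists one taskprocessor per line with the columns in the order Processed | In Queue | Max Depth | Low water | High water, which may differ between Asterisk versions, so check it against your own CLI output first.

```shell
#!/bin/sh
# Parse "core show taskprocessors" output from stdin and emit CSV lines:
# timestamp,name,processed,in_queue,max_depth,low_water,high_water
# Only lines whose taskprocessor name matches the given pattern are kept.
parse_taskprocessors() {
    awk -v ts="$(date +%s)" -v pat="$1" \
        '$1 ~ pat { print ts "," $1 "," $2 "," $3 "," $4 "," $5 "," $6 }'
}

# Usage on a live box (run once a minute from cron, then graph the CSV):
# asterisk -rx 'core show taskprocessors' \
#     | parse_taskprocessors 'devicestate' >> /var/tmp/stasis_queues.csv
```

Graphing the In Queue column per devicestate taskprocessor over time should show whether the backlog builds gradually or spikes when call volume jumps.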
Unfortunately not. The thing that really seems to set this off is when there are 100+ calls and Asterisk tries to calculate endpoint status for the queues. Another thing I've noticed is that if you watch the stasis In Queue, Max Depth, Low water and High water values, the In Queue value increases even though Max Depth hasn't yet reached the low or high water marks; I'm not sure why that's happening.
So this happened on a different server earlier. The symptoms were that calls started closing off with no new calls starting, and endpoints started to deregister slowly. You can see in the screenshot that one of the m:devicestate lines (there are normally 3) started to queue like crazy. I think the above symptoms were a result of the queue being so full, but I still can't figure out how or why that queue becomes so backlogged. Could it be WebRTC? Someone spamming my Asterisk server with WebRTC traffic?
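One quick way to test the WebRTC-flood theory would be to tally established TCP peers on the WebSocket port and see whether one source IP dominates. The snippet below is a hypothetical check, assuming the default http.conf WebSocket port of 8088 (adjust to yours) and IPv4-style `addr:port` peers in the `ss` output; it only counts connections and proves nothing by itself.

```shell
#!/bin/sh
# Read "ss -tn" style output from stdin (one connection per line after a
# header row, peer address:port in column 4), strip the port, and print
# the top peer IPs by connection count.
top_ws_peers() {
    awk 'NR > 1 { sub(/:[0-9]+$/, "", $4); print $4 }' \
        | sort | uniq -c | sort -rn | head
}

# Usage on a live box (8088 is an assumed default, check http.conf):
# ss -tn state established '( sport = :8088 )' | top_ws_peers
```

If one address shows hundreds of connections while legitimate phones show a handful each, that would point at a WebRTC/WebSocket flood rather than an internal stasis problem.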