We are seeing many taskprocessors going above the high watermark level. I would like to understand what is consuming the following taskprocessor queues so I can work out where the bottleneck is.
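For reference, this is roughly how we are spotting the affected taskprocessors (nothing version-specific intended; the exact columns in the output vary between Asterisk releases):

```
# List every taskprocessor with its queue depth; newer versions also
# show the low/high water marks, so the overloaded ones stand out.
asterisk -rx "core show taskprocessors"
```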
If you build in developer mode, then “stasis statistics” CLI commands become available to give insight into what exactly is subscribed, how many messages they’re receiving, and the time taken to process them. Taskprocessors are an implementation detail of that and do not give that level of information, aside from what can be deduced from the name: devicestate:all means all device state updates, and stasis/pool-control is the taskprocessor used to manage the threadpool.
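As a rough sketch of what that looks like on a source build (the exact configure options and CLI subcommand names can differ between versions, so treat this as a starting point and use tab completion on the CLI to confirm):

```
# Rebuild Asterisk with developer mode enabled.
./configure --enable-dev-mode
make && make install

# The statistics commands only exist in a dev-mode build;
# subcommand names are from memory, confirm with tab completion.
asterisk -rx "stasis statistics show topics"
asterisk -rx "stasis statistics show subscriptions"
```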
Yes, but I'm sharing the information because, as I said, it looks like there is an Asterisk bottleneck there and I don't know exactly what this log means. Yes, there are several queues with several peers; is this an app_queue limitation?
So it seems there is nothing to do? Could this be caused by a SYN flood on the TCP port used by the WebRTC peers? I'm seeing this a lot. Do you have any recommendation?
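For context, this is roughly how I've been checking for the SYN flood on our side. These are generic Linux checks, nothing Asterisk-specific, and 8089 is just our WSS port from http.conf, so adjust as needed:

```
# Listener state on the WebRTC/WSS port (Recv-Q vs Send-Q backlog).
ss -ltn 'sport = :8089'

# Kernel counters for dropped SYNs and listen queue overflows.
netstat -s | grep -i -E 'syn|listen'

# SYN cookies are a common mitigation; whether it helps here is a guess.
sysctl net.ipv4.tcp_syncookies
```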
I have some experience with app_queue (you can see I have some contributions there). Can you point me to where to look inside app_queue, so I at least know which parts of the code to look at?
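Meanwhile, as a starting point I was going to grep for the stasis/device state subscription code in app_queue; the identifiers below are just guesses based on the devicestate:all hint above, not confirmed entry points:

```
# Look for where app_queue subscribes to stasis / device state updates.
grep -n "stasis_subscribe" apps/app_queue.c
grep -n "device_state" apps/app_queue.c
```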
Hi, I'm having the same issue with a SYN flood on 8089 with no apparent cause, but sometimes getting the taskprocessors message.
Did you get any clue on how to solve it, or what was causing it, if you were able to solve it? Thanks!!