Task processor limits

Hi,

we are seeing many taskprocessors going above the high watermark level. I would like to understand what is consuming the following taskprocessor queues, in order to understand where the bottleneck is:

taskprocessor.c: The ‘stasis/m:devicestate:all-000002a6’ task processor queue reached 500 scheduled tasks again.

stasis/m:devicestate:all-00000002 49280735 0 317 450 500
stasis/m:devicestate:all-00000004 49280736 0 294 450 500
stasis/m:devicestate:all-000002a6 49280736 0 7034 450 500
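(These lines are from the output of "core show taskprocessors"; the numeric columns there are Processed, In Queue, Max Depth, Low water and High water.)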

Why are there always 3 queues? And why is it always the last one that I see going over the high watermark?

The other one that always goes above the high watermark is:

stasis/pool-control 2543594815 120 1347 450 500

Again, what is consuming this queue?

If you build in developer mode then “stasis statistics” CLI commands become available to give insight into what exactly is subscribed, how many messages they’re receiving, and the time taken to process them. Taskprocessors are an implementation detail of that and do not give that level of information, aside from what can be deduced from the name: devicestate:all means all device state updates, and stasis/pool-control is the taskprocessor which is used to manage threadpool stuff.
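From memory, the relevant commands look something like:

stasis statistics show topics
stasis statistics show subscriptions
stasis statistics show messages

with further variants for drilling into a single topic or subscription.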

Is developer mode enabled in menuselect (DONT_OPTIMIZE, DEBUG_THREADS and the others?), or is it another flag?

Does that flag have a lot of impact on performance? Thanks.

Developer mode is enabled using configure:

./configure --enable-dev-mode
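followed by the usual rebuild and reinstall, roughly:

make
make install

and then a restart of Asterisk so the new build is running.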

Stasis statistics have an impact, which is why they’re only available in developer mode, though that impact hasn’t been measured.

I attach the data you asked for:

topics.txt (551.9 KB)
subs.txt (265.3 KB)
messa.txt (3.1 KB)
locks.txt (45.7 KB)

Did you look at the information at all? Your post made it seem as though you wanted to investigate it yourself, not just provide information.

Regardless, if you look at the subscribers file then app_queue is what is taking a long time to process device state stuff:

Subscription                                                        Dropped     Passed    Lowest Invoke   Highest Invoke
app_queue.c:devicestate:all-3                                             0     251231                0            55987

Why that is, I don’t know. The longer it takes and the busier the system, the more the taskprocessor queue will accumulate.
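As a rough illustration with made-up numbers: if device state changes arrive at 50 per second and the app_queue subscriber averages 30 ms per message, that is 1.5 seconds of work arriving every second, so the queue can only grow.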

Yes, but I’m sharing the information because, as I said, it seems like there is an Asterisk bottleneck there and I don’t know exactly what this log means. Yes, there are several queues with several peers; is this an app_queue limitation?

It’s quite possible that for that usage there is an issue in app_queue, or quite possibly its design, that causes this to happen.

So it seems there is nothing to do? Could this be the fault of a SYN flood on the TCP port for the WebRTC peers? I’m seeing this a lot. Do you have any recommendation?

It is unlikely that is the cause. Without digging into app_queue and understanding why, I don’t have anything to add.

I have some experience with app_queue; you can see I have some contributions there. Can you point me to where to look inside app_queue, so I at least know which parts of the code to look at?

The subscription is for device state. So, the handling of device state.
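As a very rough sketch, and not the actual app_queue code, the pattern to look for is a subscription to the “all device states” topic and the callback invoked for each message. The names below come from the public Stasis/device state API; the callback body is only illustrative:

/* Illustrative only, not the app_queue implementation. */
#include "asterisk.h"
#include "asterisk/stasis.h"
#include "asterisk/devicestate.h"
#include "asterisk/logger.h"

static struct stasis_subscription *device_state_sub;

/* Invoked once per device state message; any long-running work in here
 * is what backs up the stasis/m:devicestate:all taskprocessor queues. */
static void device_state_cb(void *data, struct stasis_subscription *sub,
        struct stasis_message *message)
{
        if (stasis_message_type(message) != ast_device_state_message_type()) {
                return;
        }
        /* In app_queue this is roughly where member/device status gets
         * updated; that is the work that shows up as the invoke time above. */
        ast_debug(3, "Device state change received\n");
}

static int subscribe_device_state(void)
{
        device_state_sub = stasis_subscribe(ast_device_state_topic_all(),
                device_state_cb, NULL);
        return device_state_sub ? 0 : -1;
}

Grep app_queue.c for stasis_subscribe and ast_device_state_topic_all to find the real handler; whatever happens inside that callback is where the time is being spent.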

Hi, I’m having the same issue with a SYN flood on 8089 with no apparent cause, but sometimes also seeing the taskprocessors message.
Did you get any clue on how to solve it, or find what was causing it, if you did solve it? Thanks!!
