Need help fixing 'subm' type Task Processor Queue Size Warnings

Hi Group

I have recently upgraded a couple of my sites to Asterisk 13 and I am getting these warnings on a number of them:
WARNING[26676][C-0000004d]: taskprocessor.c:888 in taskprocessor_push: The 'subm:ast_channel_topic_all-cached-00000080' task processor queue reached 500 scheduled tasks
WARNING[28353][C-0000006a]: taskprocessor.c:888 in taskprocessor_push: The 'subm:ast_channel_topic_all-cached-00000082' task processor queue reached 500 scheduled tasks

I have read the doco:
http://blogs.asterisk.org/2016/07/13/asterisk-task-processor-queue-size-warnings/

As such, I will be removing pjsip, hep and cel from modules.conf, but I'm not sure if this is going to fix the problem.
Does anyone have any idea what could be causing the issue? I don't seem to be having any processor load or memory issues affecting performance. Generally the box sits there doing very little.

So if removing the above modules does not fix it, what should I do? Up the high water mark?
I'm hoping someone can help, as it's delaying a major upgrade of my sites.
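For keeping an eye on these counters, here is a minimal sketch that screens the output of Asterisk's `core show taskprocessors` CLI command for queues approaching their high-water mark. The helper name and the 80% threshold are illustrative, and the six-column field layout is assumed from the snapshots pasted later in this thread:

```python
def flag_deep_queues(cli_output, threshold=0.8):
    """Return (name, max_depth, high_water) for every task processor
    whose observed max depth exceeds threshold * high_water."""
    flagged = []
    for line in cli_output.splitlines():
        parts = line.split()
        # Expected layout: name processed in_queue max_depth low_water high_water
        if len(parts) != 6 or not parts[1].isdigit():
            continue  # skip headers and blank lines
        name = parts[0]
        processed, in_queue, max_depth, low, high = map(int, parts[1:])
        if high > 0 and max_depth >= threshold * high:
            flagged.append((name, max_depth, high))
    return flagged

sample = """\
Processor Processed In Queue Max Depth Low water High water
subm:ast_channel_topic_all-cached-00000080 12345 0 510 450 500
subm:ast_channel_topic_all-0000001c 163 0 17 450 500
"""
print(flag_deep_queues(sample))
# -> [('subm:ast_channel_topic_all-cached-00000080', 510, 500)]
```

Running it periodically (e.g. via `asterisk -rx`) would show which queues are trending toward the warning threshold rather than waiting for the WARNING to fire.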

Thanks
Mike

OK, I have now removed pjsip, hep, cel and ari from modules.conf and that didn't help.
Any ideas?

Thanks
Mike

The res_pjsip module is currently the only module that does anything with the task processor high water alerts. The res_pjsip module responds by not accepting new work (e.g., calls) until the high water alert clears. Since you stated that you are not loading the pjsip modules, these warnings are not that critical.
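The raise/clear behaviour described here can be sketched as simple hysteresis. This is purely illustrative, not Asterisk's actual implementation; the 450/500 defaults mirror the low/high water columns visible in the CLI snapshots in this thread:

```python
class TaskQueue:
    """Illustrative model of a task processor's high/low water alert."""

    def __init__(self, low_water=450, high_water=500):
        self.low_water = low_water
        self.high_water = high_water
        self.depth = 0
        self.alert = False

    def push(self):
        self.depth += 1
        if not self.alert and self.depth >= self.high_water:
            self.alert = True   # e.g. res_pjsip stops accepting new work

    def pop(self):
        if self.depth:
            self.depth -= 1
        if self.alert and self.depth <= self.low_water:
            self.alert = False  # alert clears; new work is accepted again

q = TaskQueue()
for _ in range(500):
    q.push()
print(q.alert)   # True: high-water mark reached
for _ in range(49):
    q.pop()
print(q.alert)   # still True: depth 451 is above the low-water mark
q.pop()
print(q.alert)   # False: drained back to the low-water mark
```

The gap between the two marks is what stops the alert from flapping on and off when the queue depth hovers near the boundary.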

Richard

Thanks Richard very much for the reply. Really good to know.
I am still concerned that these warnings are appearing at all, though. I could just push up the high water mark, but are the maximum depths I am seeing normal, e.g. over 1000 for subm:ast_channel_topic_all-cached-x?

Although I can't confirm it, the customer did mention that they had noticed a few odd no-progress calls on the system. It could be unrelated but is worth mentioning. Using chan_sip, not pjsip, though.

Thanks
Mike

So just another question: what sort of messages appear in the subm:ast_channel_topic_all category?
This might help me work out what is causing the increased numbers.

Thanks
Mike

Anyone? Is this a bug?
If I can't fix this I will need to revert to Asterisk 11. Not a good look!

Thanks
Mike

It's not a bug per se. The topic holds information about active channels, and if it's backing up then whatever is processing the messages can't keep up. Why that would be I don't know; it depends on usage and system characteristics. It's not something I've seen.

Thanks Josh
What I can't find is, for this particular message type, what is processing the messages.
Just by making a single unanswered extension call, I generated a large number of messages:

Before call:
Processor                                    Processed  In Queue  Max Depth  Low water  High water
subm:ast_channel_topic_all-0000001c                  1         0          1        450         500
subm:ast_channel_topic_all-cached-00000017          41         0         38        450         500
subm:ast_channel_topic_all-cached-00000019          40         0         38        450         500

After call:
Processor                                    Processed  In Queue  Max Depth  Low water  High water
subm:ast_channel_topic_all-0000001c                163         0         17        450         500
subm:ast_channel_topic_all-cached-00000017         300         0         38        450         500
subm:ast_channel_topic_all-cached-00000019         299         0         38        450         500
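One way to see which processors a test call exercises is to diff the Processed column of two snapshots. A minimal Python sketch, assuming the field layout shown above (helper names are made up for illustration):

```python
def parse(snapshot):
    """Map task processor name -> Processed count from a snapshot."""
    stats = {}
    for line in snapshot.splitlines():
        parts = line.split()
        # Expected layout: name processed in_queue max_depth low_water high_water
        if len(parts) == 6 and parts[1].isdigit():
            stats[parts[0]] = int(parts[1])
    return stats

def diff(before, after):
    """Messages handled by each processor between the two snapshots."""
    b, a = parse(before), parse(after)
    return {name: a[name] - b.get(name, 0) for name in a}

before = """\
subm:ast_channel_topic_all-0000001c 1 0 1 450 500
subm:ast_channel_topic_all-cached-00000017 41 0 38 450 500
"""
after = """\
subm:ast_channel_topic_all-0000001c 163 0 17 450 500
subm:ast_channel_topic_all-cached-00000017 300 0 38 450 500
"""
print(diff(before, after))
# -> {'subm:ast_channel_topic_all-0000001c': 162,
#     'subm:ast_channel_topic_all-cached-00000017': 259}
```

For the single unanswered call above, this would show roughly 160 messages through the plain topic and about 260 through each cached subscriber.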

Any ideas?
Thanks
Mike

Sorry, I forgot to mention this was done in my lab environment, so nothing else was happening at the time.

Mike

It is a cache, so it would be the cache implementation in main/stasis_cache.c.

Really sorry, Josh, but I am not particularly familiar with this architecture.
I'm also seeing issues on subm:ast_channel_topic_all (not cached). Is this still the same?
Is there anything I can do or test to try to pinpoint where the issue is?

Thanks so much.
Mike

They are not the same: one publishes messages related to channels; the other listens to that topic and caches the messages. I haven't investigated such things, so I don't know how best to look into it.
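A toy model of that relationship might look like the following. The class and variable names are hypothetical and bear no resemblance to the real Stasis C API; the point is only that a topic fans every message out to its subscribers, while a caching subscriber additionally keeps the latest snapshot per channel:

```python
class Topic:
    """Fans every published message out to all subscribers."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for cb in self.subscribers:
            cb(message)  # in Asterisk, each subscriber has its own task queue

class CachingSubscriber:
    """Listens to a topic and retains the last snapshot per channel."""

    def __init__(self, topic):
        self.cache = {}
        topic.subscribe(self.on_message)

    def on_message(self, message):
        self.cache[message["channel"]] = message

channel_topic_all = Topic()
cache = CachingSubscriber(channel_topic_all)
seen = []
channel_topic_all.subscribe(seen.append)  # a plain, non-caching subscriber

channel_topic_all.publish({"channel": "SIP/100-1", "state": "Ringing"})
channel_topic_all.publish({"channel": "SIP/100-1", "state": "Up"})

print(len(seen))                          # 2: the plain subscriber saw both
print(cache.cache["SIP/100-1"]["state"])  # Up: the cache keeps only the latest
```

This is why both a `subm:ast_channel_topic_all` and a `subm:ast_channel_topic_all-cached` queue appear in the CLI output: they are separate subscribers to the same stream of channel messages.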

I have confirmed that no one has experienced any issues on any of the systems where this problem is occurring.

Thanks
Mike