Understanding Asterisk and load averages

For a while now I have been trying to understand how Asterisk works and what causes the load average to “randomly” spike. A big part of what I do is host media streams on the phone: I use MusicOnHold with ffmpeg to play live events. It doesn’t matter whether I am on a 4-core VPS or a 16-core box; after a certain number of calls things get wonky.

I did some testing with Asterisk 20 where I had another box send a bunch of concurrent calls. If I send 500 calls within a few seconds, CPU usage goes up and the load starts to spike; if I then leave it alone, the load comes down over time and stays mostly steady. If instead I keep starting and stopping a lot of calls, the load average climbs with the call churn even though the number of calls is the same. That points to calls per second (CPS) being the driver rather than the concurrent call count. Most of the time the active call count is under 500 and the CPU usage for the Asterisk PID stays roughly the same, yet the load average still gets wonky. What happens during per-call setup that causes the load to behave like that? Is there any tuning that can be done on Asterisk or the host OS?
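For anyone wanting to reproduce this, a test along these lines can be generated with something like SIPp’s built-in UAC scenario. This is only a sketch; the target address, dialed extension, rate, and call duration below are placeholder values, not the exact setup used here.

    # Sketch only: generate test calls with SIPp's built-in UAC scenario.
    # Target IP, extension, rate and limits below are placeholders.
    #   -r 100   start 100 new calls per second
    #   -l 500   cap concurrent calls at 500
    #   -d 60000 keep each call up for 60 seconds
    sipp -sn uac 203.0.113.10:5060 -s 7000 -r 100 -l 500 -d 60000

Varying -r while holding -l fixed is one way to separate the effect of call setup rate (CPS) from the effect of concurrent call count.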

A LOT of stuff happens whenever a new call is created: your dialplan is executed, state is updated, and so on.

Once the call is established, all Asterisk has to do is forward the audio (assuming no transcoding is needed).
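If transcoding is a concern, the simplest way to rule it out is to pin both legs to the same codec. A minimal pjsip.conf sketch; the endpoint name and codec here are just examples:

    ; pjsip.conf sketch: pin the endpoint to one codec so Asterisk just
    ; forwards RTP instead of transcoding (endpoint name is a placeholder)
    [caller-endpoint]
    type=endpoint
    disallow=all
    allow=ulaw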

I generally observe load spikes when a lot of calls end up in queues, but not so much when they are handled by other means, e.g. sent directly to extensions, sent out to external destinations, or played a message and then disconnected.

Also, I’m not sure exactly how you’re playing your streams. If the stream is started from the beginning for each caller, you add some extra load every time ffmpeg spins up. If it’s a live stream played back through ffmpeg as music on hold, you would most likely start the stream only once, and every caller should get the same audio. That assumes Asterisk is actually smart about it; if not, you still get one ffmpeg process for each and every caller, which adds a lot of extra work to process X instances of the same stream.
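For the shared-stream case, a custom music on hold class is the usual way to do it. A rough musiconhold.conf sketch, assuming ffmpeg pulls the live stream and writes 8 kHz signed linear audio to stdout; the class name and stream URL are placeholders:

    ; musiconhold.conf sketch: one ffmpeg process per class, shared by
    ; every channel held on that class. Custom MOH applications must write
    ; 8 kHz, 16-bit, mono signed linear audio to stdout.
    ; Class name and stream URL are placeholders.
    [live-event]
    mode=custom
    application=/usr/bin/ffmpeg -loglevel quiet -i https://example.com/live.m3u8 -f s16le -ar 8000 -ac 1 pipe:1

With this shape there is one decode of the stream regardless of how many callers are held on the class, which keeps the per-caller cost to plain audio forwarding.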

You could also dump people into a conference where ffmpeg is somehow a participant, e.g. by using the output of ffmpeg as the microphone input for a softphone. Asterisk would then only have to mix the audio streams and forward them to each participant.
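If you did go the conference route, the dialplan side is small. A sketch assuming listeners join a ConfBridge and the ffmpeg-fed softphone simply dials in as another participant; the extension number and bridge name are placeholders:

    ; extensions.conf sketch: listeners all join one conference bridge.
    ; Extension number and bridge name are placeholders.
    exten => 7000,1,Answer()
     same => n,ConfBridge(live_event)
     same => n,Hangup()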

In my current setup I am using MusicOnHold with realtime, so Asterisk only pulls one ffmpeg stream for all the listeners. On exit from MOH I have a bash script (roughly sketched below) that checks whether anyone else is still listening to that class; if not, it does a moh unregister class.
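The script is nothing fancy; roughly this shape, assuming the listeners sit in the MusicOnHold() dialplan application so the class name shows up in the concise channel listing (the class name is passed as the first argument):

    #!/bin/bash
    # Sketch of the MOH cleanup script. Assumes listeners are sitting in the
    # MusicOnHold() dialplan application, so "MusicOnHold!<class>" shows up
    # in the concise channel listing. Class name is passed as $1.
    CLASS="$1"
    LISTENERS=$(asterisk -rx "core show channels concise" | grep -c "MusicOnHold!${CLASS}")
    if [ "$LISTENERS" -eq 0 ]; then
        asterisk -rx "moh unregister class ${CLASS}"
    fi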

I have in the past thought about using a conference bridge instead of putting everyone on hold. I wonder which would put a lower load on Asterisk: everyone being put on hold, or everyone being put into a conference bridge.
