Sending Realtime Audio to External WebRTC Service

Hi everyone,

I want to send the audio stream of a channel to an external service over WebSocket, using Asterisk 20.

My current implementation creates a Snoop channel in ARI (spying on a channel on Dial), then creates an external media channel. I’m creating two Snoop channels and two external media channels to separate the inbound and outbound audio streams (so I end up with stereo audio data). This method requires setting up an ARI application and uses six channels in total for a single call (two call channels, two Snoop channels, and two external media channels).
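
Roughly, this is what the setup looks like per direction (a minimal Python sketch against the ARI REST endpoints; the host, credentials, app name, external host, and channel ID are all placeholders):

```python
import requests

# Placeholder ARI connection details and Stasis app name
ARI = "http://localhost:8088/ari"
AUTH = ("asterisk", "secret")
APP = "my_media_app"

def snoop_and_stream(channel_id: str, direction: str) -> None:
    """Create one Snoop channel spying on a single direction of the call,
    plus one external media channel streaming RTP to my service.
    The Stasis app then bridges each Snoop/external-media pair."""
    r = requests.post(
        f"{ARI}/channels/{channel_id}/snoop",
        params={"spy": direction, "app": APP},  # spy: "in" or "out"
        auth=AUTH,
    )
    r.raise_for_status()

    r = requests.post(
        f"{ARI}/channels/externalMedia",
        params={
            "app": APP,
            "external_host": "media.example.com:4000",  # placeholder host
            "format": "slin16",
        },
        auth=AUTH,
    )
    r.raise_for_status()

# One Snoop + external media pair per direction: four extra channels
for direction in ("in", "out"):
    snoop_and_stream("1719501234.42", direction)  # placeholder channel ID
```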

Is there a simpler, better, or industry-standard way to achieve this before I move forward?

From an Asterisk perspective, no. As for an industry standard, also no. Everyone is doing their own kind of thing from what I’ve seen.

Thank you for your response.

I was wondering if AudioSocket is viable (it seems it cannot be used together with Dial, but maybe there’s a way).
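
What I mean is that the AudioSocket() dialplan application holds the channel itself, so I don’t see how to combine it with Dial on the same channel. A sketch (the UUID and address are placeholders):

```
[from-internal]
exten => 100,1,Answer()
    ; AudioSocket() streams this channel's audio to a TCP server, but it
    ; blocks here until the connection ends -- so this channel can't
    ; also be running Dial()
    same => n,AudioSocket(40325ec2-5efd-4bd3-805f-53576e581d13,127.0.0.1:9092)
    same => n,Hangup()
```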

Also, with my current method the external media channels are created before the call begins (if the Dial fails, the channels are created unnecessarily). Is there a way to start snooping only after the call begins? If possible, I’d rather not manage the Dial in ARI (keeping Dial in the dialplan).

Thank you

You can use the ‘U’ option to Dial[1] to invoke some dialplan logic via GoSub before bridging. At that point you could execute your ARI application if you wish to snoop and such.

[1] Dial - Asterisk Documentation
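
Roughly like this (an untested sketch; the subroutine and Stasis app names are placeholders):

```
[outbound]
; U() runs a GoSub on the *called* channel once it answers,
; just before the two channels are bridged
exten => 100,1,Dial(PJSIP/bob,30,U(pre-bridge-snoop))

[pre-bridge-snoop]
; hand the called channel to your ARI app to set up snooping, then return
exten => s,1,Stasis(my_media_app)
    same => n,Return()
```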


I tried running Stasis() inside the subroutine via the ‘U’ option. It worked, but when I use continueInDialplan in ARI, execution returns to the same priority it already executed instead of continuing with the next priority, effectively repeating Stasis().

This did not happen when Stasis() was used before Dial.
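
Maybe I can work around it by passing an explicit location to the continue operation rather than a bare continue. A sketch of what I mean (placeholder channel ID and credentials; the context and priority match the subroutine sketch above):

```python
import requests

ARI = "http://localhost:8088/ari"
AUTH = ("asterisk", "secret")  # placeholder credentials

def continue_past_stasis(channel_id: str) -> None:
    # A bare POST /channels/{id}/continue re-ran the Stasis() priority
    # for me, so point execution explicitly at the step after it instead
    requests.post(
        f"{ARI}/channels/{channel_id}/continue",
        params={"context": "pre-bridge-snoop", "extension": "s", "priority": 2},
        auth=AUTH,
    ).raise_for_status()
```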