Correctly using AudioStreams with ExternalMedia + ARI

TL;DR - When using AudioSocket with external media, is it recommended to create a separate AudioSocket server for each call I am handling, or a single one with the streams segregated using the UUID?

My setup is:
Recv Audio
SnoopChannel(spying on PJSIP Channel, Whisper: ‘none’) ↔ Bridge ↔ ExternalMedia a

Send Audio
PJSIP Channel ↔ Bridge ↔ ExternalMedia a (same one)

I am building an AI-enabled voice application on top of Asterisk. I have already played around with ExternalMedia and SnoopChannels and was able to get audio via RTP streams. I was trying to send audio back as an RTP stream too, but let's face it, streaming RTP is hard for developers like me who are not telephony gurus. After countless hours of staring at Wireshark captures, I discovered AudioSocket.
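Roughly, the receive leg above can be set up with ARI calls like these (the app name, credentials, spy direction, and media host are illustrative placeholders, not exact values from my setup):

```python
# Sketch of the receive leg: snoop on the PJSIP channel, create an
# ExternalMedia channel, and join both in a mixing bridge.
import requests

ARI = "http://localhost:8088/ari"    # assumed ARI base URL
AUTH = ("asterisk", "secret")        # assumed ARI credentials
APP = "my_ai_app"                    # hypothetical Stasis application name

def build_recv_leg(pjsip_channel_id: str, media_host: str) -> str:
    """Bridge a snoop of the PJSIP channel with an ExternalMedia channel."""
    # Spy on the PJSIP channel without whispering anything back to it.
    snoop = requests.post(
        f"{ARI}/channels/{pjsip_channel_id}/snoop",
        params={"spy": "in", "whisper": "none", "app": APP},  # spy direction is an assumption
        auth=AUTH,
    ).json()

    # ExternalMedia channel: Asterisk sends RTP to media_host, e.g. "10.0.0.5:4000".
    ext = requests.post(
        f"{ARI}/channels/externalMedia",
        params={"app": APP, "external_host": media_host, "format": "slin16"},
        auth=AUTH,
    ).json()

    # Mixing bridge carrying the snooped audio to the external media channel.
    bridge = requests.post(f"{ARI}/bridges", params={"type": "mixing"}, auth=AUTH).json()
    requests.post(
        f"{ARI}/bridges/{bridge['id']}/addChannel",
        params={"channel": f"{snoop['id']},{ext['id']}"},
        auth=AUTH,
    )
    return bridge["id"]
```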

Now the question is: is AudioSocket designed to handle streams from multiple calls on the same server socket, or is it expected that you create a new AudioSocket server for each call?
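For context on the "single server, segregated by UUID" option: Asterisk opens one TCP connection per channel and sends a UUID frame first (1-byte kind 0x01, 2-byte big-endian length, 16-byte UUID payload), so one listening socket can serve many calls and use that UUID to tell the streams apart. A minimal sketch of such a server, where the port and the process_audio handler are hypothetical placeholders:

```python
# Single AudioSocket server: one TCP connection per call, identified by the
# UUID frame each connection sends first. Frame layout: 1-byte kind,
# 2-byte big-endian payload length, payload. Kind 0x01 = call UUID,
# 0x10 = 16-bit signed linear audio, 0x00 = hangup, 0xff = error.
import asyncio
import uuid

KIND_HANGUP, KIND_UUID, KIND_AUDIO, KIND_ERROR = 0x00, 0x01, 0x10, 0xFF

async def handle_connection(reader: asyncio.StreamReader,
                            writer: asyncio.StreamWriter) -> None:
    call_id = None
    try:
        while True:
            header = await reader.readexactly(3)
            kind = header[0]
            length = int.from_bytes(header[1:3], "big")
            payload = await reader.readexactly(length) if length else b""
            if kind == KIND_UUID:
                call_id = uuid.UUID(bytes=payload)      # which call this stream belongs to
            elif kind == KIND_AUDIO and call_id is not None:
                await process_audio(call_id, payload)   # hypothetical per-call handler
            elif kind in (KIND_HANGUP, KIND_ERROR):
                break
    except asyncio.IncompleteReadError:
        pass  # peer closed the connection
    finally:
        writer.close()

async def process_audio(call_id: uuid.UUID, pcm: bytes) -> None:
    # Hypothetical stand-in: route this call's signed linear PCM chunk
    # into the per-call processing pipeline.
    pass

async def main() -> None:
    server = await asyncio.start_server(handle_connection, "0.0.0.0", 9092)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```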

Hey, so both AudioSocket and ExternalMedia support both inbound and outbound flow of media; there is no need to use both to achieve a simple use case.

When you are using ExternalMedia, use connection_type server; it gives an IP and port where you can send back the RTP you want to play.
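If it helps, with RTP encapsulation the ExternalMedia channel also exposes the local address Asterisk listens on via the UNICASTRTP_LOCAL_ADDRESS / UNICASTRTP_LOCAL_PORT channel variables, so you can read where the return RTP should go. A small sketch (base URL, credentials, and function name are illustrative):

```python
# Read the address/port Asterisk listens on for RTP sent back to an
# ExternalMedia channel, via the channel variables set by chan_rtp.
import requests

ARI = "http://localhost:8088/ari"   # assumed ARI base URL
AUTH = ("asterisk", "secret")       # assumed ARI credentials

def get_return_rtp_target(ext_channel_id: str) -> tuple[str, int]:
    """Return (address, port) to which the application should send RTP."""
    def var(name: str) -> str:
        return requests.get(
            f"{ARI}/channels/{ext_channel_id}/variable",
            params={"variable": name},
            auth=AUTH,
        ).json()["value"]

    return var("UNICASTRTP_LOCAL_ADDRESS"), int(var("UNICASTRTP_LOCAL_PORT"))
```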

On a side note, jcolp, if you see this: I wanted to understand what the constraint on Asterisk is for scaling when using ExternalMedia for in/out media streams.

We already use an average of 3 channels per call, since a recorder is also present, plus 1 more temporary channel if a playback is started. Adding ExternalMedia on top of that, how much performance impact will it create, any idea? (No transcoding is involved anywhere yet.)

Btw, awesome presentation at AstriCon 24.

I don’t have any details or information on scaling. As always, particular use cases greatly influence results and so individual testing should be done.