TL;DR - When using AudioSocket for external media, is it recommended to create a separate AudioSocket server for each call I am handling, or a single server with the streams segregated by UUID?
My setup is:
Recv Audio:
SnoopChannel (spying on PJSIP channel, Whisper: ‘none’) ↔ Bridge ↔ ExternalMedia a
Send Audio:
PJSIP Channel ↔ Bridge ↔ ExternalMedia a (same one)
I am building an AI-enabled voice application on top of Asterisk. I have already played around with ExternalMedia and SnoopChannels and was able to receive audio via RTP streams. I tried sending audio back over RTP too, but let’s face it, streaming RTP is hard for developers like me who are not telephony gurus. After countless hours of staring at Wireshark captures, I discovered AudioSocket.
Now the question is: is AudioSocket designed to handle streams from multiple calls on the same server socket, or am I expected to create a new AudioSocket server for each call?
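For context, here is a sketch of how I imagine the single-server option would work: one TCP listener, one handler per connection, with calls demultiplexed by the UUID carried in the first packet of each connection. This is only my reading of the AudioSocket wire format (1-byte type, 2-byte big-endian length, then payload; 0x01 = UUID, 0x10 = audio, 0x00 = hangup), so please correct me if I have it wrong; the port number and function names are just placeholders of mine.

```python
import socket
import struct
import threading
import uuid

# AudioSocket packet types, per my understanding of the protocol docs
KIND_HANGUP = 0x00  # remote end hung up
KIND_UUID = 0x01    # 16-byte call UUID, sent first on every connection
KIND_AUDIO = 0x10   # 8 kHz 16-bit signed linear PCM, mono

def recv_exact(conn, n):
    """Read exactly n bytes, or return b'' on EOF."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            return b""
        buf += chunk
    return buf

def read_packet(conn):
    """One packet = 1-byte type + 2-byte big-endian length + payload."""
    header = recv_exact(conn, 3)
    if not header:
        return None, b""
    kind = header[0]
    (length,) = struct.unpack(">H", header[1:3])
    return kind, (recv_exact(conn, length) if length else b"")

def handle_connection(conn, calls):
    """One thread per TCP connection; the first packet identifies the call."""
    kind, payload = read_packet(conn)
    if kind != KIND_UUID or len(payload) != 16:
        conn.close()
        return
    call_id = uuid.UUID(bytes=payload)
    calls[call_id] = conn  # demux point: later frames all belong to call_id
    while True:
        kind, payload = read_packet(conn)
        if kind in (None, KIND_HANGUP):
            break
        if kind == KIND_AUDIO:
            pass  # feed the slin frame to the AI pipeline for call_id
    calls.pop(call_id, None)
    conn.close()

def serve(host="0.0.0.0", port=9092):
    """Single listener shared by all calls; port 9092 is arbitrary."""
    calls = {}
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle_connection,
                         args=(conn, calls), daemon=True).start()
```

If that demux-by-UUID pattern is how AudioSocket is meant to be used, one server would be much simpler to operate than spawning a listener per call.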