I created a Snoop channel (spy: both, whisper: both) and an External Media channel, and connected them using a bridge. I can see data arriving on my socket from the External Media channel. However, when I send this data back as-is to create an echo on the phone, it doesn’t work. How can I send the audio back?
The simple socket code I created for the external media is as follows:
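The poster’s code was not preserved in the thread. A minimal sketch of what such an echo socket might look like, assuming UDP transport and ulaw RTP (the host, port, and packet size here are assumptions, not values from the thread):

```python
import socket

def run_echo(host="0.0.0.0", port=9999, max_packets=None):
    """Echo each incoming RTP packet straight back to its sender.

    Asterisk's External Media channel sends ulaw RTP (12-byte RTP header
    plus payload) to this address; we return each packet unchanged.
    `max_packets=None` means run forever.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    count = 0
    while max_packets is None or count < max_packets:
        data, addr = sock.recvfrom(2048)  # one RTP packet
        sock.sendto(data, addr)           # send it back unchanged
        count += 1

# run_echo("0.0.0.0", 9999)  # blocks, echoing packets until killed
```

The port must match the `external_host` value passed when creating the External Media channel via ARI.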
You waited a grand total of 5 hours before bumping this thread. This forum has no response guarantees or SLA.
You should start by verifying assumptions. Do a packet capture to see if the media is actually being sent back to Asterisk, and then use “rtp set debug on” to see if it is showing up.
You’re right, I’m very sorry. My parameters for creating the snoop channel are spy: in, whisper: out. My parameters for the external media channel are format: ulaw. When RTP debug is on, it prints the following to the screen. But on the phone, I don’t hear any echo of my own voice.
The External Media channel can also inject media into a bridge it’s a member of so you can play progress messages, music, IVR menus, etc.
The documentation states this. Logically, shouldn’t this injection process be done as I shared in the code? That is, from within the socket listening to the external media?
I should also add that media will only be injected if the channel in question that is being snooped on is being sent media by something else (such as if it is in a bridge talking to something else, or a playback is done to it). If it’s idle and not receiving media from something else, it won’t be injected. This is a limitation of the underlying API that is used for things like ChanSpy and Snoop channels.
The information you shared is very valuable, thank you again. I printed and reviewed the logs[1] but couldn’t find any errors. Could this be related to the last thing you wrote? I didn’t fully understand that point: I’m forwarding all the data that arrives on the External Media socket back unchanged, but I hear no sound. Do I need to play an audio file on the channel before transmitting the data?
If you snoop on “PJSIP/alice” then something has to ALREADY be sending audio to it in order to whisper into it. If nothing is already sending audio, nothing will happen and you won’t hear anything.
Understood. So if two people were talking, the other party would hear the audio I send through External Media. But in my example, since no one else is providing audio on the call except me, the audio I send from External Media isn’t heard. This changes things. Alright: we received the caller’s voice via External Media, processed it elsewhere, and now we want to respond on the call without anyone else being present. What kind of structure can we use? Do we need to somehow obtain the channel ID and play the audio on that channel? I’m not sure whether this approach is too complicated, or even correct, but I couldn’t think of another method.
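One structure that fits this case is to skip Snoop entirely: put the caller’s channel and an External Media channel into the same mixing bridge, so audio written to the socket is mixed straight to the caller with no second party needed. A sketch of the ARI calls involved, using only the documented `/bridges`, `/bridges/{id}/addChannel`, and `/channels/externalMedia` endpoints (the ARI base URL, credentials, app name, and socket address below are assumptions):

```python
import base64
import json
import urllib.parse
import urllib.request

ARI_BASE = "http://localhost:8088/ari"  # assumed ARI location
ARI_USER, ARI_PASS = "asterisk", "secret"  # assumed credentials

def ari_url(path, **params):
    """Build an ARI request URL with query parameters."""
    query = urllib.parse.urlencode(params)
    return f"{ARI_BASE}{path}?{query}" if query else f"{ARI_BASE}{path}"

def ari_post(path, **params):
    """POST to ARI with HTTP basic auth and return the parsed JSON body."""
    req = urllib.request.Request(ari_url(path, **params), method="POST")
    token = base64.b64encode(f"{ARI_USER}:{ARI_PASS}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read() or b"{}")

def answer_via_external_media(caller_channel_id):
    """Bridge the caller directly with an External Media channel."""
    # 1. Create a mixing bridge.
    bridge = ari_post("/bridges", type="mixing")
    # 2. Add the caller's channel (it must already be in your Stasis app).
    ari_post(f"/bridges/{bridge['id']}/addChannel", channel=caller_channel_id)
    # 3. Create an External Media channel into the same app; audio your
    #    application sends to this socket is heard directly by the caller.
    em = ari_post("/channels/externalMedia",
                  app="myapp",                    # assumed Stasis app name
                  external_host="127.0.0.1:9999",
                  format="ulaw")
    ari_post(f"/bridges/{bridge['id']}/addChannel", channel=em["id"])
    return bridge["id"], em["id"]
```

With this layout the whisper limitation no longer applies, because the External Media channel is an ordinary bridge participant rather than media injected into a snooped channel.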