Streaming from ARI snoop channel for Speech recognition


Hi all,

I’d like to stream audio in real time to an external speech engine (Google, Watson). I am using the ARI APIs to get the recording path of the audio file, and I am reading continuously from that file while Asterisk is still writing to it in parallel.

Is this a good approach to streaming audio for real-time transcription? Is anybody doing real-time transcription from Asterisk? I know there are multiple approaches to speech transcription; if possible, can you share yours? I am talking about streaming to cloud-based speech engines like Google, Watson, Nuance, etc.



I’m trying to do the exact same thing you had a problem with.
I’m currently struggling to get the audio stream from the Node.js ARI-Client and can’t figure out how to do it.
I imagine you may have found a way to solve your problem; whatever happened, if you could share your work, it would be a great help to me!

Thank you in advance.


ARI itself does not currently provide a mechanism for getting the audio stream.


Hi, thank you for this quick answer.
Does ARI provide one for getting the audio file instead?


ARI itself, no, but I’ve heard that it may be possible to configure the HTTP server itself to allow downloading such files. I don’t have any experience with it, though.


Maybe found a solution :wink:
This example allows sending a POST request with a file stored on the disk:
On my environment, recordings are stored at /var/spool/asterisk/recording.
So I just had to replace the “filename” variable with ‘/var/spool/asterisk/recording/’’.wav’