I’d like to stream audio in real time to an external speech engine (Google, Watson). I am using the ARI APIs to get the recording path of the audio file, and I am reading continuously from that file while Asterisk is still writing to it in parallel.
Is this a good approach to streaming audio for realtime transcription? Is anybody doing realtime transcription from Asterisk? I know there are multiple approaches to speech transcription; if possible, can you share yours? I am talking about streaming to cloud-based speech engines like Google, Watson, Nuance, etc.
I’m trying to do exactly the same thing you had problems with.
I’m currently struggling to get the audio stream from the Node.js ARI client, and I can’t figure out how to do it.
I imagine you may have found a way to solve your problem; in any case, if you could share your work it would be a great help to me!
ARI itself, no, but I’ve heard that it may be possible to configure Asterisk’s HTTP server to allow downloading such files. I don’t have any experience with that, though.
I may have found a solution.
This example shows how to send a POST request with a file stored on disk: https://gist.github.com/alepez/9205394
On my environment, recordings are stored at /var/spool/asterisk/recording.
So I just had to replace the "filename" variable with '/var/spool/asterisk/recording/' + recording.name + '.wav'.
@yeya Yeah, and here’s an example project on how to use it to connect to Dialogflow; there’s another in the Nimble Ape GitHub org that takes audio and sends it to Google.
This one and its associated ARI bridge project actually use snoop.
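For anyone wondering what the snoop part looks like, here is a rough sketch of starting a snoop channel directly against ARI’s REST interface with plain fetch (Node 18+). The base URL, the credentials and the Stasis app name are placeholders. The snoop channel receives a copy of the original channel’s audio, which your Stasis app can then record or forward to a speech engine.

```javascript
// Sketch: POST /channels/{channelId}/snoop on the ARI HTTP interface to fork a
// channel's audio into a separate Stasis app. Credentials, base URL and app
// name are placeholder values; requires Node 18+ for global fetch.
async function startSnoop(baseUrl, credentials, channelId, appName) {
  const url = new URL(`${baseUrl}/channels/${encodeURIComponent(channelId)}/snoop`);
  url.searchParams.set('spy', 'in');    // copy audio coming *from* the caller
  url.searchParams.set('app', appName); // Stasis app that owns the snoop channel
  const res = await fetch(url, {
    method: 'POST',
    headers: {
      Authorization: 'Basic ' + Buffer.from(credentials).toString('base64'),
    },
  });
  if (!res.ok) throw new Error(`snoop request failed: ${res.status}`);
  return res.json(); // the new snoop channel resource
}

// e.g. startSnoop('http://localhost:8088/ari', 'ariuser:arisecret',
//                 channel.id, 'stt-app');
```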
@danjenkins Could you please let us (or anyone else) know whether we can do the following?
We have an Asterisk server that receives calls from several users at the same time. For each phone call we create a sound file and record only the Tx audio from the caller. After hang-up, we transfer the audio file to Google Cloud and use the STT API to receive the transcription text.
Could we do this with realtime streaming for each call channel separately?
Also, if we need this for 20 simultaneous calls, what processor/memory resources would we need?
Hi Dan, thanks for sharing your code with the community. It is so inspiring.
I am just trying to install your STT audioserver solution, but I get this error:
Error: Cannot find module 'config'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
    at Function.Module._load (internal/modules/cjs/loader.js:562:25)
    at Module.require (internal/modules/cjs/loader.js:692:17)
    at require (internal/modules/cjs/helpers.js:25:18)
    at Object.<anonymous> (/home/audioserver/index.js:2:16)
    at Module._compile (internal/modules/cjs/loader.js:778:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
    at Function.Module._load (internal/modules/cjs/loader.js:585:3)
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
After a whole afternoon digging through Stack Overflow and this forum, I still can’t figure out what is wrong.
Do you know what could be happening?
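For later readers hitting the same error: "Cannot find module 'config'" usually just means the `config` npm package isn’t installed in the project directory. Assuming the project declares it in its package.json (I haven’t checked that particular repo), installing the dependencies should fix it:

```shell
# Run inside the audioserver project directory (where package.json lives)
yarn install      # installs all declared dependencies, including "config"
# or, if "config" is not declared as a dependency in package.json:
yarn add config
```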