I’m just starting to look into building with Asterisk, and one of the things I’m trying to sort out is how to integrate with ASR/TTS services (either local or cloud). TTS is pretty straightforward, but speech recognition would require access to the media stream. It doesn’t look like it’s currently possible to get a raw media stream natively; however, I did find a blog post and wiki page that proposed some interesting changes: Speech to Text / Text to Speech / Emotion - Asterisk Project - Asterisk Project Wiki. I didn’t see anything in the v19 changes that mentioned this, so can I assume it’s still a WIP?
It is, and it’s not really meant for this. We provide the external media[1] functionality for accessing media.
[1] External Media and ARI - Asterisk Project - Asterisk Project Wiki
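For context on what the receiving side of external media might look like: by default it sends RTP over UDP to the host/port you specify, so your app just needs a socket listening there. Below is a minimal, untested sketch; the port (12000) and ulaw format are assumptions, and it ignores RTP CSRC lists and header extensions for simplicity.

using System;
using System.Net;
using System.Net.Sockets;

class RtpListener
{
    static void Main()
    {
        // Listen on the port you pass as external_host to /channels/externalMedia.
        using var udp = new UdpClient(new IPEndPoint(IPAddress.Any, 12000));
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] packet = udp.Receive(ref remote);
            if (packet.Length <= 12)
                continue; // too short to carry an RTP payload

            // Skip the fixed 12-byte RTP header; the payload follows.
            var payload = new byte[packet.Length - 12];
            Array.Copy(packet, 12, payload, 0, payload.Length);
            // payload now holds raw ulaw samples to hand to an ASR engine.
        }
    }
}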
This could be interesting as well: AudioSocket.
Thanks! That AudioSocket implementation looks like it’ll be exactly what I need!
Coming back to this, and running into a couple of issues. The wiki seems pretty clear on how to kick off external media, but if I try with encapsulation “audiosocket” or transport “tcp” I get a 501 (Not Implemented) back from Asterisk. I’m going to keep looking at this, but is there a config flag that needs to be set to enable AudioSocket support in v18?
Figured out my problem. I’m using a .NET SDK that wraps the API, and it wasn’t surfacing the error properly. It turns out that the data parameter on /channels/externalMedia is mandatory (not indicated in the docs). Once I passed a GUID to it, the socket worked:
string streamID = Guid.NewGuid().ToString();
await _ari.Channels.ExternalMediaAsync(
    m_appID,
    m_mediaListenSocket.LocalEndPoint.ToString(),
    "ulaw",
    encapsulation: "audiosocket",
    transport: "tcp",
    data: streamID,
    channelId: streamID);
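For anyone following along, here is a minimal, untested sketch of the listening side that the call above connects Asterisk to. Per the AudioSocket protocol described on the wiki, each frame is a one-byte kind, a two-byte big-endian payload length, and the payload; the first frame carries the 16 UUID bytes of the GUID passed as data. The port below is an assumption (use whatever m_mediaListenSocket is bound to).

using System;
using System.Net;
using System.Net.Sockets;

class AudioSocketServer
{
    static void Main()
    {
        // Port is an assumption; bind wherever m_mediaListenSocket lives.
        var listener = new TcpListener(IPAddress.Any, 9092);
        listener.Start();
        using var client = listener.AcceptTcpClient();
        var stream = client.GetStream();

        var header = new byte[3];
        while (ReadExactly(stream, header, 3))
        {
            byte kind = header[0];
            int length = (header[1] << 8) | header[2]; // big-endian length
            var payload = new byte[length];
            if (length > 0 && !ReadExactly(stream, payload, length))
                break;

            switch (kind)
            {
                case 0x01: // UUID frame: the raw bytes of the "data" GUID
                    Console.WriteLine("Stream ID: " + BitConverter.ToString(payload));
                    break;
                case 0x10: // audio frame: raw samples
                    ProcessAudio(payload);
                    break;
                case 0x00: // Asterisk signalled hangup/termination
                    return;
            }
        }
    }

    static bool ReadExactly(NetworkStream stream, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int n = stream.Read(buffer, offset, count - offset);
            if (n <= 0) return false;
            offset += n;
        }
        return true;
    }

    static void ProcessAudio(byte[] samples)
    {
        // Hand off to the ASR engine of your choice.
    }
}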