Audio pipe to external program


I'm new to Asterisk and VoIP technology.
I need to set up a server that will allow me to:

pipe the audio stream from a caller to a server.
respond to the caller with another stream - that is, treat the caller as a sink and the server as a source. Simply two-way communication.

I used a text-to-speech protocol for the first part; however, I'm not sure how to reverse the process.

When it comes to programming my own module in C - I'd rather avoid it.

I've tried understanding the documentation, but I'm quite overwhelmed by how large it is.

I'm currently trying my best to set up the WebRTC protocol for that.

Please point out some keywords and approaches for achieving my goal, and whether WebRTC is indeed the right way.

I am in a very similar situation regarding both

  1. my telephony expertise and
  2. that I want to use Asterisk solely for sending/receiving the caller’s audio to/from an external server as a stream (real-time audio passing, i.e. no recordings) - meaning the external server could basically be replaced by a human on another phone.

From what I’ve read so far, I also get the impression that a lot of approaches out there aim to solve either speech-to-text (STT) or text-to-speech (TTS) but not both (STS). For me, it’s also unclear which interface should be used for what purpose…

Which of the following approaches is the easiest/best for Asterisk for STS?

I have a project with STT (not TTS); my choice was ARI (External Media).
I wonder whether it was a good choice or not?
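For reference, the External Media approach boils down to one ARI REST call: `POST /channels/externalMedia`, which makes Asterisk stream the call audio as RTP to a host:port you control (and accept RTP back on the same socket). Below is a minimal stdlib-only sketch; the base URL, credentials, Stasis app name, and RTP host are all assumptions for illustration, not values from this thread.

```python
import base64
import json
import urllib.parse
import urllib.request

# Assumed local setup - adjust to your http.conf / ari.conf:
ARI_URL = "http://localhost:8088/ari"  # ARI base URL
ARI_USER = "asterisk"                  # ari.conf username (assumption)
ARI_PASS = "secret"                    # ari.conf password (assumption)

def external_media_params(app, rtp_host, fmt="slin16"):
    """Build the query parameters for POST /channels/externalMedia."""
    return {
        "app": app,                 # Stasis application that will own the channel
        "external_host": rtp_host,  # host:port where Asterisk sends the caller's RTP
        "format": fmt,              # slin16 = 16 kHz signed linear PCM
    }

def create_external_media(app, rtp_host):
    """Ask Asterisk to create an External Media channel toward rtp_host."""
    query = urllib.parse.urlencode(external_media_params(app, rtp_host))
    req = urllib.request.Request(
        f"{ARI_URL}/channels/externalMedia?{query}", method="POST")
    token = base64.b64encode(f"{ARI_USER}:{ARI_PASS}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]  # channel id of the new media leg
```

Once the External Media channel exists, you typically bridge it with the caller's channel so audio flows both ways, which gives exactly the "human on another phone could replace the server" behavior described above.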

What programming language do you use and is your implementation public? If not, is it similar to GitHub - asterisk/asterisk-external-media?

Since I have a non-deterministic dialplan (a dynamic conversation with an unknown number of turns depending on the input), I think I will try ARI first.
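With ARI, the dialplan itself can stay trivial: it only needs to hand the call to a Stasis application, and all turn-taking logic lives in the external program. A minimal extensions.conf fragment might look like this (the context name, extension, and app name are placeholders):

```ini
[from-internal]                    ; assumed context name
exten => 1000,1,Answer()
 same => n,Stasis(my-stasis-app)   ; hand control of the call to the ARI app
 same => n,Hangup()
```

The ARI application then receives a StasisStart event for the channel and can create bridges, External Media channels, etc., without any further dialplan changes.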

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.