AudioFork - questions and answers

Discussion related to the AudioFork module. See the GitHub repo.

Is Asterisk's External Media module the “standard” way of forking audio from Asterisk, and is it preferred over AudioFork?

It is the core supported mechanism, and used by Sangoma.

In a two-way phone conversation, can the External Media module distinguish the different channels? Scenario: my app prefers to ingest interleaved inbound/outbound audio chunks, which is fairly standard for telephony platforms that fork the real-time audio stream. With AudioFork, however, the best you can do is get a mono stream that contains both inbound and outbound audio but does not segregate the two. Can we differentiate the real-time inbound/outbound audio chunks with External Media?

On a single External Media channel, no; you'd need two. External Media is built on the same fundamental base as AudioFork.
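One way the "you'd need two" pattern is commonly realized is via ARI: snoop each direction of the original channel separately (the Snoop resource's spy parameter accepts "in" or "out") and pair each snoop with its own externalMedia channel. The sketch below only builds the two request URLs; the ARI base URL, app name, and channel id are placeholder assumptions, and in practice you would POST these (e.g. with libcurl) and then bridge each snoop channel with its externalMedia channel.

```c
/* Sketch: per-direction forking with External Media via ARI.
 * One Snoop channel per direction ("in"/"out"), each paired with an
 * externalMedia channel. Endpoint paths follow Asterisk's ARI; the
 * host, app name, and channel id supplied by callers are placeholders.
 * In practice you would POST the resulting URLs (e.g. with libcurl). */
#include <stdio.h>
#include <string.h>

/* Build the URL that snoops one direction ("in" or "out") of an
 * existing channel. */
void build_snoop_url(char *buf, size_t len, const char *ari_base,
                     const char *chan_id, const char *app, const char *dir)
{
    snprintf(buf, len, "%s/channels/%s/snoop?spy=%s&app=%s",
             ari_base, chan_id, dir, app);
}

/* Build the URL that creates the externalMedia channel which will
 * carry the snooped direction to an external host as RTP. */
void build_external_media_url(char *buf, size_t len, const char *ari_base,
                              const char *app, const char *media_host)
{
    snprintf(buf, len,
             "%s/channels/externalMedia?app=%s&external_host=%s&format=slin16",
             ari_base, app, media_host);
}
```

Calling `build_snoop_url()` twice, once with "in" and once with "out", plus two `build_external_media_url()` targets (e.g. different ports on your media server) yields one fork per direction, which your app can then ingest as two cleanly separated streams.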

Is there some way within the C-lang APIs that we can segregate inbound/outbound chunks? This is pretty standard in many CCaaS solutions I've seen. For example, the real-time audio stream out of Amazon Connect marks individual chunks with metadata, including whether each is inbound or outbound.

The audiohook API can provide each direction.

@jcolp: do you mean that within the same C-lang loop, we can read and mark each audio chunk in the conversation so as to indicate its channel (i.e., inbound or outbound)?

Audiohooks don't work on the concept of inbound or outbound channels, or groups of channels. They work on audio going TO a channel, or audio going FROM a channel; just a single channel. Anything more channel-aware sits above that, in the consumer of the API. The API does provide the ability to retrieve audio from each direction independently.
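The consumer-side work described above, pulling each direction independently and tagging chunks yourself, might look like the following sketch. In a real Asterisk module the samples would come from the audiohook API (e.g. `ast_audiohook_read_frame()` with the read/write direction values in `asterisk/audiohook.h`); here the audio is supplied by the caller, and the `tagged_chunk` struct is an invented illustration of Amazon Connect-style metadata, not an Asterisk type.

```c
/* Sketch of the consumer-side tagging the audiohook API enables:
 * read each direction independently, then emit chunks carrying
 * direction metadata. In a real module the sample buffers would be
 * filled from ast_audiohook_read_frame() per direction; here they
 * are passed in, and tagged_chunk is an illustrative struct only. */
#include <stdint.h>
#include <string.h>

enum direction { DIR_INBOUND, DIR_OUTBOUND };

struct tagged_chunk {
    enum direction dir;   /* which leg this audio came from */
    uint32_t seq;         /* per-pair sequence number       */
    int16_t samples[160]; /* 20 ms of 8 kHz signed linear   */
};

/* Tag one 20 ms frame of audio with its direction and sequence. */
struct tagged_chunk make_chunk(enum direction dir, uint32_t seq,
                               const int16_t *samples)
{
    struct tagged_chunk c;
    c.dir = dir;
    c.seq = seq;
    memcpy(c.samples, samples, sizeof c.samples);
    return c;
}

/* Interleave one frame from each direction into an output array,
 * the way a forking consumer would before shipping chunks downstream. */
size_t interleave_pair(struct tagged_chunk *out, uint32_t seq,
                       const int16_t *inbound, const int16_t *outbound)
{
    out[0] = make_chunk(DIR_INBOUND, seq, inbound);
    out[1] = make_chunk(DIR_OUTBOUND, seq, outbound);
    return 2;
}
```

The key point from the answer above is that the direction separation itself is the audiohook API's job; the chunk format, metadata, and interleaving policy are entirely up to the consuming module.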