How do I implement a bridge where participants must take turns speaking (mutual exclusion)?

Hi. I’m struggling to figure out how to build the following functionality for a server application I am putting together. I have built a couple of C++ Asterisk applications with ARI+stasis and made some changes to channel drivers, so I can dive into the code when needed.

The server has one ALSA channel for playback and capture of an audio stream. I want to bridge this ALSA channel to multiple SIP channels, one for each connected client. However, unlike a mixing bridge, a holding bridge, an announcement bridge, etc., I want this bridge to grant exclusive speaking privileges to one channel at a time. Connected clients’ mics are nominally muted, and a client can claim the bridge, when it is available, by unmuting their microphone and starting to speak. Specifically, I want this sort of bridge to have the following characteristics:

  • All channels can receive frames from the bridge for playback.
  • Only one channel can contribute frames to the bridge at a time; frames from all other channels are ignored.
  • The bridge becomes available when no audio frame has been received from any connected channel for a certain timeout, say 100 msec. While available, it becomes unavailable again as soon as an audio frame is received from any connected channel.
  • I’d like to give some feedback to connected SIP clients when the bridge is in use, so I can turn on a light on the client device telling those users to wait their turn (I am developing the client device software as well).

I am using Asterisk 16.8.0. Does such a feature already exist via apps and configuration? If I have to add it myself, I’d prefer to do it using ARI+stasis+configuration if at all possible. Is that possible? Or should I instead implement the bridge app in C and control/manage it via ARI using CreateBridge, Add, and friends?

I think I could implement this entirely in ARI if ARI had event types for the beginning and end of a stream of received frames. I could use a normal mixing bridge with all channels muted. When a frame arrives on a channel and starts a stream, unmute just that channel in the bridge and keep all the others muted. When the stream-end event arrives after a 100 msec timeout, re-mute the channel and wait for a new stream-start event. But I don’t think such an event type exists in ARI. TALK_DETECT does exist, but, as I understand it, it actually inspects the frame audio for non-silence, which I don’t care about; I only care about whether any frame arrives at all. I might be able to open up the TALK_DETECT thresholds so that it activates on any received frame and deactivates only while the channel is muted. That might work out.
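
In case it helps to see it concretely, here is roughly the ARI-side logic I have in mind for that workaround, sketched as a small standalone C program. The two stub functions just print the documented ARI mute/unmute requests (POST /ari/channels/{id}/mute?direction=in and DELETE /ari/channels/{id}/mute?direction=in) instead of issuing real HTTP calls, and the “events” in main() stand in for ChannelTalkingStarted/ChannelTalkingFinished arriving over the ARI WebSocket, assuming I can open up the thresholds with something like Set(TALK_DETECT(set)=100,1) — a 100 msec silence threshold and a very low talking threshold — if I’m reading the docs right. The channel IDs are made up.

```c
/* Sketch of the ARI-side floor control.  The two stubs below stand in for
 * real HTTP calls to the documented ARI endpoints
 * (POST /ari/channels/{id}/mute?direction=in and
 *  DELETE /ari/channels/{id}/mute?direction=in); the event names are the
 * ones TALK_DETECT raises over the ARI WebSocket. */
#include <stdio.h>
#include <string.h>

#define MAX_CHANNELS 32

static char floor_holder[64] = "";     /* channel id currently holding the floor */
static char members[MAX_CHANNELS][64]; /* channel ids joined to the bridge */
static int num_members = 0;

/* Stub: would issue POST /ari/channels/{id}/mute?direction=in */
static void ari_mute_in(const char *channel_id)
{
    printf("POST /ari/channels/%s/mute?direction=in\n", channel_id);
}

/* Stub: would issue DELETE /ari/channels/{id}/mute?direction=in */
static void ari_unmute_in(const char *channel_id)
{
    printf("DELETE /ari/channels/%s/mute?direction=in\n", channel_id);
}

/* Called when a ChannelTalkingStarted event arrives for channel_id. */
static void on_talking_started(const char *channel_id)
{
    if (floor_holder[0] != '\0') {
        return; /* someone else has the floor; leave this channel muted */
    }
    strncpy(floor_holder, channel_id, sizeof(floor_holder) - 1);
    ari_unmute_in(channel_id);
    /* here I would also push "bridge busy" feedback to the other clients */
}

/* Called when a ChannelTalkingFinished event arrives for channel_id
 * (TALK_DETECT raises it after its silence threshold expires). */
static void on_talking_finished(const char *channel_id)
{
    if (strcmp(floor_holder, channel_id) != 0) {
        return; /* not the current holder; nothing to do */
    }
    ari_mute_in(channel_id);
    floor_holder[0] = '\0';
    /* ... and push "bridge free" feedback to the clients */
}

int main(void)
{
    /* Pretend two channels joined the bridge, both muted on entry. */
    strcpy(members[num_members++], "client-a-00000001");
    strcpy(members[num_members++], "client-b-00000002");
    ari_mute_in(members[0]);
    ari_mute_in(members[1]);

    /* Pretend a few events arrive over the ARI WebSocket. */
    on_talking_started("client-a-00000001");  /* claims the floor */
    on_talking_started("client-b-00000002");  /* ignored, floor busy */
    on_talking_finished("client-a-00000001"); /* floor released */
    return 0;
}
```

Whether this actually works hinges on whether TALK_DETECT can really be coaxed into treating any received frame as talking, which is exactly the part I’m unsure about.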

Thanks for any help and guidance, much appreciated!

I don’t think anything really exists to do what you want. Using TALK_DETECT as a base, you could certainly write something to see when frames are passing and such.

@jcolp, thanks for the lead and guidance. Given that I’ll probably have to write some custom Asterisk infrastructure for this, I think I’d like to put as much logic into the custom code as I can, including the mutual exclusion part. Here are a couple of possibilities I have in mind. Can you let me know whether either of these is a viable design, which is cleaner to implement, and which Asterisk hooks or data structures I should keep in mind to build them? My gut feeling is that option 1 is the most useful and general-purpose.

  • Option 1: Create a new custom function, TALK_EXCL. Like TALK_DETECT, its operations are set and remove. Set takes two configuration parameters: an exclusion group number and a timeout threshold. When an AST_FRAME_VOICE is first received on a channel, all other channels in the same exclusion group are muted with mute_channel “in”. When that channel’s timeout_threshold expires after its last AST_FRAME_VOICE, all channels in the group are unmuted again. For this I’d need some way to associate channels by group number so I can look them up for the muting and unmuting, plus some kind of resettable timer (ast_(channel)_datastore and ast_settimeout?). I’d also need to signal to ARI when a talk group is in use and which channel has claimed talking privileges for it (this part I haven’t figured out yet). The bridge itself would be a standard mixing bridge. In my case all channels would be in the same talk group, so only one person in the whole bridge would be talking at a time and no mixing is actually required; in the general case, though, you could have two exclusion groups in one mixing bridge representing two different teams, with audio from both teams mixed together but only one person from each team allowed to speak at a time. A rough sketch of what I’m picturing for this follows after these two options.
  • Option 2: Implement the same sort of functionality at the bridge level instead of at the “exclusion group” level. Looking at the bridge code, this looks like a lot more code than option 1 and is less flexible, because every participant in the bridge is automatically in the same exclusion group. I’d also have to hook it into the ARI bridge system, which I expect is harder than adding a new ARI event from a channel function. I mention it mainly for completeness, in case option 1 has a show-stopping flaw.
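
To make option 1 a bit more concrete, below is a very rough sketch of the module shape I’m picturing, based on my (possibly wrong) reading of funcs/func_talkdetect.c and the framehook header in the 16 tree. It only compiles in my head: the group is hard-coded instead of coming from a per-group registry, argument parsing, the remove operation, the datastore holding the framehook id, the actual muting of the other members, and the new ARI event are all left as comments, and any of the API details could be off.

```c
#include "asterisk.h"

#include "asterisk/module.h"
#include "asterisk/channel.h"
#include "asterisk/pbx.h"
#include "asterisk/framehook.h"
#include "asterisk/time.h"

struct excl_group {
	int group_num;              /* exclusion group number */
	struct ast_channel *holder; /* channel currently holding the floor, or NULL */
	struct timeval last_voice;  /* time of the holder's most recent voice frame */
	int timeout_ms;             /* floor release timeout, e.g. 100 */
};

/* One hard-coded group for the sketch.  Real code would keep a list of
 * groups keyed by group number (protected by its own lock) plus a channel
 * datastore remembering the framehook id so TALK_EXCL(remove) can detach. */
static struct excl_group group_one = { .group_num = 1, .timeout_ms = 100 };

static struct ast_frame *excl_hook_cb(struct ast_channel *chan,
	struct ast_frame *frame, enum ast_framehook_event event, void *data)
{
	struct excl_group *grp = data;

	if (event != AST_FRAMEHOOK_EVENT_READ || !frame || frame->frametype != AST_FRAME_VOICE) {
		return frame;
	}

	/* grp is shared by every channel in the group, so this whole block
	 * needs its own lock, taken in a fixed order relative to channel locks. */
	if (grp->holder && grp->holder != chan
		&& ast_tvdiff_ms(ast_tvnow(), grp->last_voice) > grp->timeout_ms) {
		grp->holder = NULL;  /* previous speaker timed out, floor is free */
		/* unmute the other members + raise a "floor released" ARI event */
	}
	if (!grp->holder) {
		grp->holder = chan;  /* this channel claims the floor */
		/* mute the other members' read side + raise "floor claimed" */
	}
	if (grp->holder == chan) {
		grp->last_voice = ast_tvnow();
	}
	return frame;
}

/* Dialplan usage would be Set(TALK_EXCL(set)=<group_num>,<timeout_ms>);
 * parsing of 'data'/'value' and the remove operation are omitted here. */
static int talk_excl_write(struct ast_channel *chan, const char *cmd,
	char *data, const char *value)
{
	struct ast_framehook_interface interface = {
		.version = AST_FRAMEHOOK_INTERFACE_VERSION,
		.event_cb = excl_hook_cb,
		.data = &group_one,
	};
	int id;

	ast_channel_lock(chan);
	id = ast_framehook_attach(chan, &interface);
	ast_channel_unlock(chan);

	return id < 0 ? -1 : 0;
}

static struct ast_custom_function talk_excl_function = {
	.name = "TALK_EXCL",
	.write = talk_excl_write,
};

static int load_module(void)
{
	return ast_custom_function_register(&talk_excl_function)
		? AST_MODULE_LOAD_DECLINE : AST_MODULE_LOAD_SUCCESS;
}

static int unload_module(void)
{
	return ast_custom_function_unregister(&talk_excl_function);
}

AST_MODULE_INFO_STANDARD(ASTERISK_GPL_KEY, "TALK_EXCL exclusive talk group sketch");
```

The lazy timeout check in the frame hook (comparing the holder’s last voice timestamp when the next frame arrives) avoids needing a scheduler at all, although a real implementation would probably want a timer anyway so the “floor released” event fires even when nobody else is sending frames.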

Do you think the option 1 design would work properly? Thanks again.

Without digging deeply into things I can’t really answer, but my main concern would be making sure locking doesn’t become a problem.

Can you clarify what locking you mean and what problem it might pose?

If you share information across multiple channels, then you have to ensure that the information is protected. Doing so safely is important or else you could end up with deadlocks.
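
Roughly, the classic shape of the problem is two code paths taking a channel lock and the shared group lock in opposite orders. A sketch in plain pthreads (not actual Asterisk calls), just to show the ordering:

```c
/* Two threads, two locks, opposite acquisition order.  With unlucky timing
 * each thread ends up waiting forever on the lock the other one holds.
 * In your module the equivalents would be the channel lock and whatever
 * lock protects the shared group table. */
#include <pthread.h>

static pthread_mutex_t channel_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t group_lock = PTHREAD_MUTEX_INITIALIZER;

static void *frame_path(void *arg)          /* e.g. the frame hook */
{
	(void) arg;
	pthread_mutex_lock(&channel_lock);  /* channel is already locked ... */
	pthread_mutex_lock(&group_lock);    /* ... then the group table is taken */
	/* update holder / timestamps */
	pthread_mutex_unlock(&group_lock);
	pthread_mutex_unlock(&channel_lock);
	return NULL;
}

static void *control_path(void *arg)        /* e.g. muting the other members */
{
	(void) arg;
	pthread_mutex_lock(&group_lock);    /* walk the group ... */
	pthread_mutex_lock(&channel_lock);  /* ... then lock a member: DEADLOCK RISK */
	pthread_mutex_unlock(&channel_lock);
	pthread_mutex_unlock(&group_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, frame_path, NULL);
	pthread_create(&b, NULL, control_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}
```

The usual way out is to pick one lock order and stick to it everywhere, or to copy what you need out of the shared structure and release its lock before touching any channel.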
