Webrtc ARI video bridge

Hi,
I have created the following WebRTC app, using the ARI interface on Asterisk 15.4.0.

  1. WebRTC client1 calls Asterisk (using the jsSIP library)
  2. The call is received, a new bridge is created, and the incoming channel is added to the bridge.
  3. A new channel is originated to WebRTC client2; after success this channel is added to the bridge.
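
The call flow above can be sketched as a sequence of ARI REST requests. The paths and query parameters follow the Asterisk ARI HTTP API, but the app name and endpoint below are illustrative, and the {bridgeId} / {client1ChannelId} placeholders are filled in from ARI events at runtime:

```javascript
// Sketch of the ARI REST calls behind the three steps above.
// Illustrative names only; nothing here is executed against a server,
// it just describes the request sequence.
function ariRequestsForTwoPartyVideo(bridgeType, client2Endpoint, appName) {
  return [
    // Step 2: create the bridge, then add the incoming channel to it.
    { method: 'POST', path: '/ari/bridges', query: { type: bridgeType } },
    { method: 'POST', path: '/ari/bridges/{bridgeId}/addChannel',
      query: { channel: '{client1ChannelId}' } },
    // Step 3: originate a new channel to client2, back into the Stasis app;
    // on StasisStart for that channel it is added to the bridge the same way.
    { method: 'POST', path: '/ari/channels',
      query: { endpoint: client2Endpoint, app: appName } },
  ];
}

const calls = ariRequestsForTwoPartyVideo(
  'mixing,proxy_media,dtmf_events', 'PJSIP/10001', 'videoBridgeApp');
```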

The problem is that on the second client I’m not able to get the correct remote video.

  1. If I create the bridge with the options “mixing,proxy_media,dtmf_events”, then both local and remote video on client1 are correct, but on client2 the local and remote video are the same.
    It looks like the Asterisk bridge is sending the same video stream (from client2) to both clients.

  2. But if I add “video_sfu” to the previous bridge options, no video stream is transferred to either client1 or client2. If I check webrtc-internals I can see that both clients send video and audio, but both clients receive only audio. Received bytes for video is 0 on both!

It looks like the problem is with the bridge type, but I’m not able to figure it out, so any hint or search direction is welcome.

best regards
edvin

If using video_sfu you need to increase the number of permitted streams on the endpoint configured in chan_pjsip, and there is also extra work to be done client side to allow the streams to be viewed using video elements.

Hi,
thanks for the reply. I have already increased max_audio_streams and max_video_streams (below you can see my pjsip config).

What do you mean by extra work on the client side? To me it looks like the problem is on the server side: without the sfu bridge type I can see remote video on client1 and client2, only it is the same. I read that for a non-SFU bridge a single video stream is sent to both parties, and this is exactly what I’m experiencing. But if I add video_sfu, then no video is received on either client. I can confirm this by inspecting webrtc-internals. Everything is running on a single computer and both clients are using Chrome.

br
edvin

; -------------------------------------------
; webrtc
; -------------------------------------------

[transport-wss]
type=transport
protocol=wss
bind=0.0.0.0

[10000]
type=aor
max_contacts=1
remove_existing=yes

[10000]
type=auth
auth_type=userpass
username=10000
password=10000 ; This is a completely insecure password.  Do NOT expose this 

[10000]
type=endpoint
context=crossty-agents
aors=10000
auth=10000
webrtc=yes
direct_media=no
allow=!all,alaw,vp8,h264
dtls_ca_file=/etc/asterisk/keys/ca.crt
dtls_cert_file=/etc/asterisk/keys/asterisk.pem
max_audio_streams=4
max_video_streams=4

[10001]
type=aor
max_contacts=1
remove_existing=yes

[10001]
type=auth
auth_type=userpass
username=10001
password=10001 ; This is a completely insecure password.  Do NOT expose this 

[10001]
type=endpoint
context=crossty-agents
aors=10001
auth=10001
webrtc=yes
direct_media=no
allow=!all,alaw,vp8,h264
dtls_ca_file=/etc/asterisk/keys/ca.crt
dtls_cert_file=/etc/asterisk/keys/asterisk.pem
max_audio_streams=4
max_video_streams=4

In the SFU method Asterisk re-INVITEs each participant and adds additional video streams. Each video stream is another participant in the bridge. The stream which carries the video FROM a participant is not used to carry video back. Work has to be done client side to actually deal with the streams that get added.
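
Since each added stream arrives as its own remote track rather than replacing the single remote stream, a client that only wires up one remoteVideo element will never display the extra participants. A minimal sketch of the stream-to-element assignment, with the logic kept pure; the browser wiring is shown in comments, and all names and element ids here are illustrative:

```javascript
// Give each distinct remote stream its own stable <video> slot.
// A stream that has been seen before keeps its slot; a new one
// takes the next free index.
function assignStreamSlot(assignments, streamId) {
  if (streamId in assignments) return assignments[streamId];
  const slot = Object.keys(assignments).length;
  assignments[streamId] = slot;
  return slot;
}

// Browser wiring (sketch, illustrative element ids):
// const assignments = {};
// pc.ontrack = (ev) => {
//   const stream = ev.streams[0];
//   const slot = assignStreamSlot(assignments, stream.id);
//   document.getElementById('remoteVideo' + slot).srcObject = stream;
// };
```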

I’m not familiar with the new bridging structure, but “mixing” sounds to me like conferencing. I thought Asterisk only allowed one video source in conferences.

Seems like Asterisk 15 adds at least the framework for more sophisticated video conferencing.

There is a blog post[1] which covers this including example client code.

[1] https://blogs.asterisk.org/2017/09/20/asterisk-15-multi-stream-media-sfu/

Hi,
I have already read this blog and checked the source code, but was not able to find anything specific that would make a difference. The biggest difference is that I’m not using the sdp-interop library, because I have only tested Chrome-to-Chrome and Firefox-to-Firefox communication and the results are the same. The other big difference is that I use ARI, while the example uses app_confbridge. Are there some specifics in the ARI configuration for using a video bridge?

edvin

There is nothing specific except for specifying it to be an SFU bridge. ARI has not yet been extended with controls and other things for it, and it’s not actively tested. I’d suggest starting with the blog and the code there, including ConfBridge, verifying all of that works in your environment, and then moving on to ARI. That eliminates some things and starts with a known working setup.
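
For reference, the blog post enables SFU in ConfBridge through the bridge profile’s video_mode option; a minimal confbridge.conf sketch (the profile name here is illustrative):

```
[default_bridge]
type=bridge
video_mode=sfu
```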

I will say you will need to use the sdp-interop library, as Chrome does not yet support the SDP format used (they’re working on it).

Hi,
yes, I will try ConfBridge without ARI to see if it makes any difference.

btw: I think that Chrome 68 has already implemented something like this:

new RTCPeerConnection({sdpSemantics: "unified-plan"})

I’d give it a few more versions to stabilize, just in case there are further issues; as well, things haven’t been tested with it yet. When it comes to WebRTC, stick to what is known to work at the start, then change variables. There are a lot of moving parts, so going off the path immediately makes it extremely hard to determine what is going on.