Hello! I’m running Asterisk 20.7.0 and setting up a system to handle WebRTC clients, and I’m having an issue with static and possibly choppy audio over WSS. Here are some data points and what I’ve tested so far:
I’m using sipml5 to test WSS as our own client is still in development
I am testing this by dialing the following extension and listening to the audio:
exten => 200,1,NoOp()
same => n,Answer()
same => n,Record(/var/spool/asterisk/recording/${UNIQUEID}.wav,3,0,k)
same => n,HangUp()
Calls placed using Linphone and Zoiper via TCP/TLS with SRTP enabled sound amazing, using the same computer and headset.
The certificate used for TLS/SRTP is the same as the one used for WSS/DTLS
Opus codec was used for all tests
Signaling seems to be fine; however, I am new to WebRTC so I may be missing something in the SDP.
I’m absolutely stumped and am probably missing something obvious but I would appreciate any help.
When using WebRTC, your audio packets are not transmitted over the WebSocket connection (WSS). Audio is sent and received via UDP packets after a direct peer-to-peer connection is established using ICE. (One of the peers in this case may be your Asterisk server.)
To see this in action, enable the RTP debugger on Asterisk and use Wireshark on the PC to see the packets flowing from client to server. You should very easily spot the DTLS packets; it doesn’t matter that they’re encrypted, since what matters is the route they’re taking and whether packets are being lost, either of which would degrade audio quality. If the route is optimal (peer-to-peer) and there are no missing packets, then your server may be struggling to transcode the audio - check the CPU usage.
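Once you have a capture, one quick way to check for missing packets is to look for gaps in the RTP sequence numbers. A minimal sketch in Python (the sample sequence numbers below are made up for illustration; in practice you’d extract them from a Wireshark/tshark export):

```python
def count_rtp_gaps(seq_numbers):
    """Count missing RTP packets from a list of observed sequence
    numbers. RTP sequence numbers are 16-bit and wrap at 65536."""
    missing = 0
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        # Delta modulo 2**16 handles wrap-around; a delta greater
        # than 1 means packets in between never arrived.
        delta = (cur - prev) % 65536
        if delta > 1:
            missing += delta - 1
    return missing

# Sequence jumps from 102 to 105, so packets 103 and 104 were lost.
print(count_rtp_gaps([100, 101, 102, 105, 106]))  # → 2
# Wrap-around from 65535 back to 0 is not counted as loss.
print(count_rtp_gaps([65534, 65535, 0, 1]))       # → 0
```

If this reports a non-trivial loss percentage, the problem is on the network path; if the stream is clean but audio is still bad, look at transcoding load on the server instead.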
Thank you for the reply - I didn’t see the notification that one had been posted. We thankfully found the culprit, which was my co-worker’s home network, and moving him to another network fixed the problem straight away.